documentation improvements

Bas Nijholt authored on 19/10/2018 14:19:42
Showing 22 changed files
... ...
@@ -1,4 +1,4 @@
-# Adaptive Authors
+## Authors
 Below is a list of the contributors to Adaptive:
 
 + [Anton Akhmerov](<https://antonakhmerov.org>)
... ...
@@ -1,12 +1,10 @@
 .. summary-start
 
-.. _logo-adaptive:
+|logo| adaptive
+===============
 
-|image0| adaptive
-=================
-
-|PyPI| |Conda| |Downloads| |pipeline status| |DOI| |Binder| |Join the
-chat at https://gitter.im/python-adaptive/adaptive| |Documentation Status|
+|PyPI| |Conda| |Downloads| |Pipeline status| |DOI| |Binder| |Gitter|
+|Documentation| |GitHub|
 
 **Tools for adaptive parallel sampling of mathematical functions.**
 
... ...
@@ -126,7 +124,9 @@ We would like to give credits to the following people:
   Mathematical Software, 37 (3), art. no. 26, 2010.
 - Pauli Virtanen for his ``AdaptiveTriSampling`` script (no longer
   available online since SciPy Central went down) which served as
-  inspiration for the ``~adaptive.Learner2D``.
+  inspiration for the `~adaptive.Learner2D`.
+
+.. credits-end
 
 For general discussion, we have a `Gitter chat
 channel <https://gitter.im/python-adaptive/adaptive>`_. If you find any
... ...
@@ -136,21 +136,23 @@ or submit a `merge
 request <https://gitlab.kwant-project.org/qt/adaptive/merge_requests>`_.
 
 .. references-start
-.. |image0| image:: https://gitlab.kwant-project.org/qt/adaptive/uploads/d20444093920a4a0499e165b5061d952/logo.png
+.. |logo| image:: https://adaptive.readthedocs.io/en/latest/_static/logo.png
 .. |PyPI| image:: https://img.shields.io/pypi/v/adaptive.svg
    :target: https://pypi.python.org/pypi/adaptive
-.. |Conda| image:: https://anaconda.org/conda-forge/adaptive/badges/installer/conda.svg
+.. |Conda| image:: https://img.shields.io/badge/install%20with-conda-green.svg
    :target: https://anaconda.org/conda-forge/adaptive
-.. |Downloads| image:: https://anaconda.org/conda-forge/adaptive/badges/downloads.svg
+.. |Downloads| image:: https://img.shields.io/conda/dn/conda-forge/adaptive.svg
   :target: https://anaconda.org/conda-forge/adaptive
-.. |pipeline status| image:: https://gitlab.kwant-project.org/qt/adaptive/badges/master/pipeline.svg
+.. |Pipeline status| image:: https://gitlab.kwant-project.org/qt/adaptive/badges/master/pipeline.svg
   :target: https://gitlab.kwant-project.org/qt/adaptive/pipelines
 .. |DOI| image:: https://zenodo.org/badge/113714660.svg
-   :target: https://zenodo.org/badge/latestdoi/113714660
+   :target: https://doi.org/10.5281/zenodo.1446400
 .. |Binder| image:: https://mybinder.org/badge.svg
   :target: https://mybinder.org/v2/gh/python-adaptive/adaptive/master?filepath=learner.ipynb
-.. |Join the chat at https://gitter.im/python-adaptive/adaptive| image:: https://img.shields.io/gitter/room/nwjs/nw.js.svg
+.. |Gitter| image:: https://img.shields.io/gitter/room/nwjs/nw.js.svg
   :target: https://gitter.im/python-adaptive/adaptive
-.. |Documentation Status| image:: https://readthedocs.org/projects/adaptive/badge/?version=latest
+.. |Documentation| image:: https://readthedocs.org/projects/adaptive/badge/?version=latest
   :target: https://adaptive.readthedocs.io/en/latest/?badge=latest
+.. |GitHub| image:: https://img.shields.io/github/stars/python-adaptive/adaptive.svg?style=social
+   :target: https://github.com/python-adaptive/adaptive/stargazers
 .. references-end
... ...
@@ -27,6 +27,8 @@ class AverageLearner(BaseLearner):
         Sampled points and values.
     pending_points : set
         Points that still have to be evaluated.
+    npoints : int
+        Number of evaluated points.
     """
 
     def __init__(self, function, atol=None, rtol=None):
... ...
@@ -48,7 +50,7 @@ class AverageLearner(BaseLearner):
 
     @property
     def n_requested(self):
-        return len(self.data) + len(self.pending_points)
+        return self.npoints + len(self.pending_points)
 
     def ask(self, n, tell_pending=True):
         points = list(range(self.n_requested, self.n_requested + n))
... ...
@@ -59,7 +61,7 @@ class AverageLearner(BaseLearner):
                           - set(self.data)
                           - set(self.pending_points))[:n]
 
-        loss_improvements = [self.loss_improvement(n) / n] * n
+        loss_improvements = [self._loss_improvement(n) / n] * n
         if tell_pending:
             for p in points:
                 self.tell_pending(p)
... ...
@@ -81,10 +83,13 @@ class AverageLearner(BaseLearner):
 
     @property
     def mean(self):
+        """The average of all values in `data`."""
         return self.sum_f / self.npoints
 
     @property
     def std(self):
+        """The corrected sample standard deviation of the values
+        in `data`."""
         n = self.npoints
         if n < 2:
             return np.inf
... ...
@@ -106,7 +111,7 @@ class AverageLearner(BaseLearner):
         return max(standard_error / self.atol,
                    standard_error / abs(self.mean) / self.rtol)
 
-    def loss_improvement(self, n):
+    def _loss_improvement(self, n):
         loss = self.loss()
         if np.isfinite(loss):
             return loss - self.loss(n=self.npoints + n)
... ...
@@ -118,6 +123,12 @@ class AverageLearner(BaseLearner):
         self.pending_points = set()
 
     def plot(self):
+        """Returns a histogram of the evaluated data.
+
+        Returns
+        -------
+        holoviews.element.Histogram
+            A histogram of the evaluated data."""
         hv = ensure_holoviews()
         vals = [v for v in self.data.values() if v is not None]
         if not vals:
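
For reference, a minimal sketch of driving the ``AverageLearner`` shown above; the noisy function is hypothetical, and a loss below 1 means the ``atol``/``rtol`` tolerance is met::

    import random

    import adaptive

    def g(seed):
        random.seed(seed)  # the requested points are integer seeds
        return random.gauss(mu=0.5, sigma=1.0)

    learner = adaptive.AverageLearner(g, rtol=0.01)
    adaptive.runner.simple(learner, goal=lambda l: l.loss() < 1)
    print(learner.npoints, learner.mean, learner.std)
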
... ...
@@ -22,7 +22,7 @@ class BalancingLearner(BaseLearner):
 
     Parameters
     ----------
-    learners : sequence of `BaseLearner`
+    learners : sequence of `~adaptive.BaseLearner`\s
         The learners from which to choose. These must all have the same type.
     cdims : sequence of dicts, or (keys, iterable of values), optional
         Constant dimensions; the parameters that label the learners. Used
... ...
@@ -42,6 +42,13 @@ class BalancingLearner(BaseLearner):
             >>> cdims = (['A', 'B'], [(True, 0), (True, 1),
             ...                       (False, 0), (False, 1)])
 
+    Attributes
+    ----------
+    learners : list
+        The sequence of `~adaptive.BaseLearner`\s.
+    function : callable
+        A function that calls the functions of the underlying learners.
+        Its signature is ``function(learner_index, point)``.
     strategy : 'loss_improvements' (default), 'loss', or 'npoints'
         The points that the `BalancingLearner` chooses can be based on:
         the best 'loss_improvements', the smallest total 'loss' of the
... ...
@@ -51,13 +58,13 @@ class BalancingLearner(BaseLearner):
 
     Notes
     -----
-    This learner compares the 'loss' calculated from the "child" learners.
+    This learner compares the `loss` calculated from the "child" learners.
     This requires that the 'loss' from different learners *can be meaningfully
     compared*. For the moment we enforce this restriction by requiring that
     all learners are the same type but (depending on the internals of the
     learner) it may be that the loss cannot be compared *even between learners
-    of the same type*. In this case the `BalancingLearner` will behave in an
-    undefined way.
+    of the same type*. In this case the `~adaptive.BalancingLearner` will
+    behave in an undefined way. If that happens, change the `strategy`.
     """
 
     def __init__(self, learners, *, cdims=None, strategy='loss_improvements'):
... ...
@@ -81,6 +88,12 @@ class BalancingLearner(BaseLearner):
 
     @property
     def strategy(self):
+        """Can be either 'loss_improvements' (default), 'loss', or 'npoints'.
+        The points that the `BalancingLearner` chooses can be based on
+        the best 'loss_improvements', the smallest total 'loss' of the
+        child learners, or the number of points per learner, using 'npoints'.
+        One can dynamically change the strategy while the simulation is
+        running by changing the ``learner.strategy`` attribute."""
        return self._strategy
 
     @strategy.setter
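
A sketch of the dynamic switching that this docstring describes; the child functions here are hypothetical::

    import adaptive
    from functools import partial

    def f(x, offset):
        return x + offset  # hypothetical child function

    learners = [adaptive.Learner1D(partial(f, offset=o), bounds=(-1, 1))
                for o in range(4)]
    learner = adaptive.BalancingLearner(learners, strategy='loss_improvements')
    # later, e.g. while a runner is working on it, rebalance by point count
    learner.strategy = 'npoints'
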
... ...
@@ -122,7 +135,7 @@ class BalancingLearner(BaseLearner):
         points = []
         loss_improvements = []
         for _ in range(n):
-            losses = self.losses(real=False)
+            losses = self._losses(real=False)
             max_ind = np.argmax(losses)
             xs, ls = self.learners[max_ind].ask(1)
             points.append((max_ind, xs[0]))
... ...
@@ -165,7 +178,7 @@ class BalancingLearner(BaseLearner):
         self._loss.pop(index, None)
         self.learners[index].tell_pending(x)
 
-    def losses(self, real=True):
+    def _losses(self, real=True):
         losses = []
         loss_dict = self._loss if real else self._pending_loss
 
... ...
@@ -178,7 +191,7 @@ class BalancingLearner(BaseLearner):
 
     @cache_latest
     def loss(self, real=True):
-        losses = self.losses(real)
+        losses = self._losses(real)
         return max(losses)
 
     def plot(self, cdims=None, plotter=None, dynamic=True):
... ...
@@ -215,8 +228,8 @@ class BalancingLearner(BaseLearner):
         Returns
         -------
         dm : `holoviews.core.DynamicMap` (default) or `holoviews.core.HoloMap`
-            A `DynamicMap` (dynamic=True) or `HoloMap` (dynamic=False) with
-            sliders that are defined by `cdims`.
+            A `DynamicMap` (``dynamic=True``) or `HoloMap` (``dynamic=False``)
+            with sliders that are defined by `cdims`.
         """
         hv = ensure_holoviews()
         cdims = cdims or self._cdims_default
... ...
@@ -295,7 +308,7 @@ class BalancingLearner(BaseLearner):
         Notes
         -----
         The order of the child learners inside `learner.learners` is the same
-        as `adaptive.utils.named_product(**combos)`.
+        as ``adaptive.utils.named_product(**combos)``.
         """
         learners = []
         arguments = named_product(**combos)
... ...
@@ -313,7 +326,7 @@ class BalancingLearner(BaseLearner):
         folder : str
             Directory in which the learners' data will be saved.
         compress : bool, default True
-            Compress the data upon saving using 'gzip'. When saving
+            Compress the data upon saving using `gzip`. When saving
             using compression, one must load it with compression too.
 
         Notes
... ...
@@ -364,7 +377,7 @@ class BalancingLearner(BaseLearner):
 
         Example
         -------
-        See the example in the 'BalancingLearner.save' doc-string.
+        See the example in the `BalancingLearner.save` doc-string.
         """
         for l in self.learners:
             l.load(os.path.join(folder, l.fname), compress=compress)
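
A sketch of the save/load cycle documented above; setting each child's ``fname`` by hand is an assumption here, since ``save`` and ``load`` join ``folder`` with every child's ``fname``::

    import adaptive
    from functools import partial

    def f(x, a):
        return a * x**2  # hypothetical

    learners = [adaptive.Learner1D(partial(f, a=a), bounds=(-1, 1))
                for a in (1, 2)]
    learner = adaptive.BalancingLearner(learners)
    for i, l in enumerate(learner.learners):
        l.fname = f'learner_{i}.pickle'  # every child needs an fname
    learner.save(folder='data', compress=True)  # gzip-compressed pickles
    learner.load(folder='data', compress=True)  # the flags must match
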
... ...
@@ -20,6 +20,9 @@ class BaseLearner(metaclass=abc.ABCMeta):
     npoints : int, optional
         The number of evaluated points that have been added to the learner.
         Subclasses do not *have* to implement this attribute.
+    pending_points : set, optional
+        Points that have been requested but have not been evaluated yet.
+        Subclasses do not *have* to implement this attribute.
 
     Notes
     -----
... ...
@@ -118,10 +121,11 @@ class BaseLearner(metaclass=abc.ABCMeta):
 
         Notes
         -----
-        There are __two ways__ of naming the files:
-        1. Using the 'fname' argument in 'learner.save(fname='example.p')
-        2. Setting the 'fname' attribute, like
-           'learner.fname = "data/example.p"' and then 'learner.save()'.
+        There are **two ways** of naming the files:
+
+        1. Using the ``fname`` argument in ``learner.save(fname='example.p')``
+        2. Setting the ``fname`` attribute, like
+           ``learner.fname = "data/example.p"`` and then ``learner.save()``.
         """
         fname = fname or self.fname
         data = self._get_data()
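
In other words, both of the naming styles from the notes above end up at the same place::

    # 1. pass the filename at call time
    learner.save(fname='example.p')

    # 2. set the attribute once, then save (and later load) without arguments
    learner.fname = 'data/example.p'
    learner.save()
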
... ...
@@ -142,7 +146,7 @@ class BaseLearner(metaclass=abc.ABCMeta):
 
         Notes
         -----
-        See the notes in the 'BaseLearner.save' doc-string.
+        See the notes in the `save` doc-string.
         """
         fname = fname or self.fname
         with suppress(FileNotFoundError, EOFError):
... ...
@@ -157,6 +161,9 @@ class BaseLearner(metaclass=abc.ABCMeta):
 
     @property
     def fname(self):
+        """Filename for the learner when it is saved (or loaded) using
+        `~adaptive.BaseLearner.save` (or `~adaptive.BaseLearner.load`).
+        """
         # This is a property because then it will be available in the DataSaver
         try:
             return self._fname
... ...
@@ -35,11 +35,13 @@ class DataSaver:
     def __getattr__(self, attr):
         return getattr(self.learner, attr)
 
+    @copy_docstring_from(BaseLearner.tell)
     def tell(self, x, result):
         y = self.arg_picker(result)
         self.extra_data[x] = result
         self.learner.tell(x, y)
 
+    @copy_docstring_from(BaseLearner.tell_pending)
     def tell_pending(self, x):
         self.learner.tell_pending(x)
 
... ...
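
A sketch of the decorated ``DataSaver`` methods above in use; the dict-returning function is hypothetical, and ``arg_picker`` extracts the value that the wrapped learner actually learns::

    from operator import itemgetter

    import adaptive

    def f(x):
        return {'y': x**2, 'meta': 'anything else worth keeping'}

    learner = adaptive.Learner1D(f, bounds=(-1, 1))
    data_saver = adaptive.DataSaver(learner, arg_picker=itemgetter('y'))
    data_saver.tell(0.5, f(0.5))  # learner stores 0.25, extra_data the dict
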
@@ -487,6 +487,7 @@ class IntegratorLearner(BaseLearner):
 
     @property
     def npoints(self):
+        """Number of evaluated points."""
         return len(self.done_points)
 
     @property
... ...
@@ -34,7 +34,7 @@ def uniform_loss(interval, scale, function_values):
 
 
 def default_loss(interval, scale, function_values):
-    """Calculate loss on a single interval
+    """Calculate loss on a single interval.
 
     Currently returns the rescaled length of the interval. If one of the
     y-values is missing, returns 0 (so the intervals with missing data are
... ...
@@ -148,6 +148,12 @@ class Learner1D(BaseLearner):
 
     @property
     def vdim(self):
+        """Length of the output of ``learner.function``.
+        If the output is unsized (when it's a scalar)
+        then `vdim = 1`.
+
+        As long as no data is known `vdim = 1`.
+        """
         if self._vdim is None:
             if self.data:
                 y = next(iter(self.data.values()))
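
A sketch of what this property reports, feeding one point by hand to keep it self-contained::

    import numpy as np

    import adaptive

    learner = adaptive.Learner1D(lambda x: np.array([x, x**2]),
                                 bounds=(-1, 1))
    print(learner.vdim)  # 1, since no data is known yet
    learner.tell(0.5, np.array([0.5, 0.25]))
    print(learner.vdim)  # 2, the length of the output
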
... ...
@@ -162,6 +168,7 @@ class Learner1D(BaseLearner):
 
     @property
     def npoints(self):
+        """Number of evaluated points."""
         return len(self.data)
 
     @cache_latest
... ...
@@ -169,7 +176,7 @@ class Learner1D(BaseLearner):
         losses = self.losses if real else self.losses_combined
         return max(losses.values()) if len(losses) > 0 else float('inf')
 
-    def update_interpolated_loss_in_interval(self, x_left, x_right):
+    def _update_interpolated_loss_in_interval(self, x_left, x_right):
         if x_left is not None and x_right is not None:
             dx = x_right - x_left
             if dx < self._dx_eps:
... ...
@@ -187,13 +194,13 @@ class Learner1D(BaseLearner):
                 self.losses_combined[a, b] = (b - a) * loss / dx
                 a = b
 
-    def update_losses(self, x, real=True):
+    def _update_losses(self, x, real=True):
         # When we add a new point x, we should update the losses
         # (x_left, x_right) are the "real" neighbors of 'x'.
-        x_left, x_right = self.find_neighbors(x, self.neighbors)
+        x_left, x_right = self._find_neighbors(x, self.neighbors)
         # (a, b) are the neighbors of the combined interpolated
         # and "real" intervals.
-        a, b = self.find_neighbors(x, self.neighbors_combined)
+        a, b = self._find_neighbors(x, self.neighbors_combined)
 
         # (a, b) is split into (a, x) and (x, b) so if (a, b) exists
         self.losses_combined.pop((a, b), None)  # we get rid of (a, b).
... ...
@@ -202,8 +209,8 @@ class Learner1D(BaseLearner):
             # We need to update all interpolated losses in the interval
             # (x_left, x) and (x, x_right), since the addition of the point
             # 'x' could change their loss.
-            self.update_interpolated_loss_in_interval(x_left, x)
-            self.update_interpolated_loss_in_interval(x, x_right)
+            self._update_interpolated_loss_in_interval(x_left, x)
+            self._update_interpolated_loss_in_interval(x, x_right)
 
             # Since 'x' is in between (x_left, x_right),
             # we get rid of the interval.
... ...
@@ -230,7 +237,7 @@ class Learner1D(BaseLearner):
             self.losses_combined[x, b] = float('inf')
 
     @staticmethod
-    def find_neighbors(x, neighbors):
+    def _find_neighbors(x, neighbors):
         if x in neighbors:
             return neighbors[x]
         pos = neighbors.bisect_left(x)
... ...
@@ -239,14 +246,14 @@ class Learner1D(BaseLearner):
         x_right = keys[pos] if pos != len(neighbors) else None
         return x_left, x_right
 
-    def update_neighbors(self, x, neighbors):
+    def _update_neighbors(self, x, neighbors):
         if x not in neighbors:  # The point is new
-            x_left, x_right = self.find_neighbors(x, neighbors)
+            x_left, x_right = self._find_neighbors(x, neighbors)
             neighbors[x] = [x_left, x_right]
             neighbors.get(x_left, [None, None])[1] = x
             neighbors.get(x_right, [None, None])[0] = x
 
-    def update_scale(self, x, y):
+    def _update_scale(self, x, y):
         """Update the scale with which the x and y-values are scaled.
 
         For a learner where the function returns a single scalar the scale
... ...
@@ -291,16 +298,16 @@ class Learner1D(BaseLearner):
         if not self.bounds[0] <= x <= self.bounds[1]:
             return
 
-        self.update_neighbors(x, self.neighbors_combined)
-        self.update_neighbors(x, self.neighbors)
-        self.update_scale(x, y)
-        self.update_losses(x, real=True)
+        self._update_neighbors(x, self.neighbors_combined)
+        self._update_neighbors(x, self.neighbors)
+        self._update_scale(x, y)
+        self._update_losses(x, real=True)
 
         # If the scale has increased enough, recompute all losses.
         if self._scale[1] > 2 * self._oldscale[1]:
 
             for interval in self.losses:
-                self.update_interpolated_loss_in_interval(*interval)
+                self._update_interpolated_loss_in_interval(*interval)
 
             self._oldscale = deepcopy(self._scale)
 
... ...
@@ -309,8 +316,8 @@ class Learner1D(BaseLearner):
             # The point was already evaluated before
             return
         self.pending_points.add(x)
-        self.update_neighbors(x, self.neighbors_combined)
-        self.update_losses(x, real=False)
+        self._update_neighbors(x, self.neighbors_combined)
+        self._update_losses(x, real=False)
 
     def tell_many(self, xs, ys, *, force=False):
         if not force and not (len(xs) > 0.5 * len(self.data) and len(xs) > 2):
... ...
@@ -379,10 +386,10 @@ class Learner1D(BaseLearner):
             if ival in self.losses:
                 # If this interval does not exist it should already
                 # have an inf loss.
-                self.update_interpolated_loss_in_interval(*ival)
+                self._update_interpolated_loss_in_interval(*ival)
 
     def ask(self, n, tell_pending=True):
-        """Return n points that are expected to maximally reduce the loss."""
+        """Return ``n`` points that are expected to maximally reduce the loss."""
         points, loss_improvements = self._ask_points_without_adding(n)
 
         if tell_pending:
... ...
@@ -392,7 +399,7 @@ class Learner1D(BaseLearner):
         return points, loss_improvements
 
     def _ask_points_without_adding(self, n):
-        """Return n points that are expected to maximally reduce the loss.
+        """Return ``n`` points that are expected to maximally reduce the loss,
         without altering the state of the learner."""
         # Find out how to divide the n points over the intervals
         # by finding positive integers n_i that minimize max(L_i / n_i) subject
... ...
@@ -466,6 +473,14 @@ class Learner1D(BaseLearner):
         return points, loss_improvements
 
     def plot(self):
+        """Returns a plot of the evaluated data.
+
+        Returns
+        -------
+        plot : `holoviews.element.Scatter` (if vdim=1)\
+               else `holoviews.element.Path`
+            Plot of the evaluated data.
+        """
         hv = ensure_holoviews()
         if not self.data:
             p = hv.Scatter([]) * hv.Path([])
... ...
@@ -15,6 +15,19 @@ from ..utils import cache_latest
 # Learner2D and helper functions.
 
 def deviations(ip):
+    """Returns the deviation of the linear estimate.
+
+    Useful when defining custom loss functions.
+
+    Parameters
+    ----------
+    ip : `scipy.interpolate.LinearNDInterpolator` instance
+
+    Returns
+    -------
+    numpy array
+        The deviation per triangle.
+    """
     values = ip.values / (ip.values.ptp(axis=0).max() or 1)
     gradients = interpolate.interpnd.estimate_gradients_2d_global(
         ip.tri, values, tol=1e-6)
... ...
@@ -37,6 +50,20 @@ def deviations(ip):
 
 
 def areas(ip):
+    """Returns the area per triangle of the triangulation inside
+    a `LinearNDInterpolator` instance.
+
+    Useful when defining custom loss functions.
+
+    Parameters
+    ----------
+    ip : `scipy.interpolate.LinearNDInterpolator` instance
+
+    Returns
+    -------
+    numpy array
+        The area per triangle in ``ip.tri``.
+    """
     p = ip.tri.points[ip.tri.vertices]
     q = p[:, :-1, :] - p[:, -1, None, :]
     areas = abs(q[:, 0, 0] * q[:, 1, 1] - q[:, 0, 1] * q[:, 1, 0]) / 2
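
Together these two helpers make custom losses possible, along the lines of this sketch (the loss name and weighting are hypothetical, not part of ``adaptive``)::

    import numpy as np

    import adaptive
    from adaptive.learner.learner2D import areas, deviations

    def resolution_loss(ip):
        # large triangles that deviate strongly from the linear estimate
        # are sampled first; sqrt(area) keeps some uniform exploration
        A = areas(ip)
        dev = np.sum(deviations(ip), axis=0)
        return A * dev + np.sqrt(A)

    learner = adaptive.Learner2D(lambda xy: np.sin(xy[0] * xy[1]),
                                 bounds=[(-1, 1), (-1, 1)],
                                 loss_per_triangle=resolution_loss)
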
... ...
@@ -289,10 +316,17 @@ class Learner2D(BaseLearner):
 
     @property
     def npoints(self):
+        """Number of evaluated points."""
         return len(self.data)
 
     @property
     def vdim(self):
+        """Length of the output of ``learner.function``.
+        If the output is unsized (when it's a scalar)
+        then `vdim = 1`.
+
+        As long as no data is known `vdim = 1`.
+        """
         if self._vdim is None and self.data:
             try:
                 value = next(iter(self.data.values()))
... ...
@@ -337,11 +371,15 @@ class Learner2D(BaseLearner):
         return points_combined, values_combined
 
     def data_combined(self):
+        """Like `data`, but this includes the points in
+        `pending_points` for which the values are interpolated."""
         # Interpolate the unfinished points
         points, values = self._data_combined()
         return {tuple(k): v for k, v in zip(points, values)}
 
     def ip(self):
+        """A `scipy.interpolate.LinearNDInterpolator` instance
+        containing the learner's data."""
         if self._ip is None:
             points, values = self._data_in_bounds()
             points = self._scale(points)
... ...
@@ -349,6 +387,9 @@ class Learner2D(BaseLearner):
         return self._ip
 
     def ip_combined(self):
+        """A `scipy.interpolate.LinearNDInterpolator` instance
+        containing the learner's data *and* interpolated data of
+        the `pending_points`."""
         if self._ip_combined is None:
             points, values = self._data_combined()
             points = self._scale(points)
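
A sketch of evaluating the interpolator; note that, as the ``_scale`` calls above suggest, it lives in the learner's internal coordinates (assumed here to be the unit square ``[-0.5, 0.5]^2``), not in data units::

    import numpy as np

    ip = learner.ip()  # a scipy.interpolate.LinearNDInterpolator
    x = y = np.linspace(-0.5, 0.5, 201)  # scaled coordinates
    z = ip(x[:, None], y[None, :])  # interpolated values on a grid
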
... ...
@@ -188,10 +188,17 @@ class LearnerND(BaseLearner):
 
     @property
     def npoints(self):
+        """Number of evaluated points."""
         return len(self.data)
 
     @property
     def vdim(self):
+        """Length of the output of ``learner.function``.
+        If the output is unsized (when it's a scalar)
+        then `vdim = 1`.
+
+        As long as no data is known `vdim = 1`.
+        """
         if self._vdim is None and self.data:
             try:
                 value = next(iter(self.data.values()))
... ...
@@ -205,6 +212,8 @@ class LearnerND(BaseLearner):
         return all(p in self.data for p in self._bounds_points)
 
     def ip(self):
+        """A `scipy.interpolate.LinearNDInterpolator` instance
+        containing the learner's data."""
         # XXX: take our own triangulation into account when generating the ip
         return interpolate.LinearNDInterpolator(self.points, self.values)
 
... ...
@@ -227,10 +236,12 @@ class LearnerND(BaseLearner):
 
     @property
     def values(self):
+        """Get the values from `data` as a numpy array."""
         return np.array(list(self.data.values()), dtype=float)
 
     @property
     def points(self):
+        """Get the points from `data` as a numpy array."""
         return np.array(list(self.data.keys()), dtype=float)
 
     def tell(self, point, value):
... ...
@@ -262,6 +273,7 @@ class LearnerND(BaseLearner):
         return simplex in self.tri.simplices
 
     def inside_bounds(self, point):
+        """Check whether a point is inside the bounds."""
         return all(mn <= p <= mx for p, (mn, mx) in zip(point, self.bounds))
 
     def tell_pending(self, point, *, simplex=None):
... ...
@@ -8,18 +8,18 @@ from ..utils import cache_latest
 
 
 class SKOptLearner(Optimizer, BaseLearner):
-    """Learn a function minimum using 'skopt.Optimizer'.
+    """Learn a function minimum using ``skopt.Optimizer``.
 
-    This is an 'Optimizer' from 'scikit-optimize',
+    This is an ``Optimizer`` from ``scikit-optimize``,
     with the necessary methods added to make it conform
-    to the 'adaptive' learner interface.
+    to the ``adaptive`` learner interface.
 
     Parameters
     ----------
     function : callable
         The function to learn.
     **kwargs :
-        Arguments to pass to 'skopt.Optimizer'.
+        Arguments to pass to ``skopt.Optimizer``.
     """
 
     def __init__(self, function, **kwargs):
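
A usage sketch: everything in ``**kwargs`` is forwarded to ``skopt.Optimizer``, so the arguments below are plain ``scikit-optimize`` options; the noisy objective is hypothetical::

    import numpy as np

    import adaptive

    def g(x, noise_level=0.1):
        return (np.sin(5 * x) * (1 - np.tanh(x**2))
                + np.random.randn() * noise_level)

    learner = adaptive.SKOptLearner(g,
                                    dimensions=[(-2.0, 2.0)],
                                    base_estimator='GP',
                                    acq_func='gp_hedge',
                                    acq_optimizer='lbfgs')
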
... ...
@@ -63,6 +63,7 @@ class SKOptLearner(Optimizer, BaseLearner):
 
     @property
     def npoints(self):
+        """Number of evaluated points."""
         return len(self.Xi)
 
     def plot(self, nsamples=200):
... ...
@@ -54,7 +54,7 @@ else:
 
 
 class BaseRunner:
-    """Base class for runners that use concurrent.futures.Executors.
+    """Base class for runners that use `concurrent.futures.Executor`\s.
 
     Parameters
     ----------
... ...
@@ -346,7 +346,7 @@ class BlockingRunner(BaseRunner):
 
 
 class AsyncRunner(BaseRunner):
-    """Run a learner asynchronously in an executor using asyncio.
+    """Run a learner asynchronously in an executor using `asyncio`.
 
     Parameters
     ----------
... ...
@@ -548,7 +548,7 @@ class AsyncRunner(BaseRunner):
         Parameters
         ----------
         save_kwargs : dict
-            Key-word arguments for 'learner.save(**save_kwargs)'.
+            Keyword arguments for ``learner.save(**save_kwargs)``.
         interval : int
             Number of seconds between saving the learner.
 
... ...
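
``start_periodic_saving`` above would be used along these lines (assuming a running ``adaptive.Runner`` and a learner whose data pickles cleanly)::

    runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01)
    runner.start_periodic_saving(
        save_kwargs=dict(fname='data/example.p'),
        interval=600)  # calls learner.save(**save_kwargs) every 600 s
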
@@ -586,7 +586,7 @@ def simple(learner, goal):
 
     Parameters
     ----------
-    learner : adaptive.BaseLearner
+    learner : `~adaptive.BaseLearner` instance
     goal : callable
         The end condition for the calculation. This function must take the
         learner as its sole argument, and return True if we should stop.
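
For example, a sketch of the blocking, executor-free loop that `simple` provides::

    import adaptive

    def f(x):
        return x**2  # hypothetical

    learner = adaptive.Learner1D(f, bounds=(-1, 1))
    # evaluates f sequentially in this process until the goal holds
    adaptive.runner.simple(learner, goal=lambda l: l.npoints >= 100)
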
... ...
@@ -605,9 +605,10 @@ def replay_log(learner, log):
 
     Parameters
     ----------
-    learner : learner.BaseLearner
+    learner : `~adaptive.BaseLearner` instance
+        New learner where the log will be applied.
     log : list
-        contains tuples: '(method_name, *args)'.
+        Contains tuples: ``(method_name, *args)``.
     """
     for method, *args in log:
         getattr(learner, method)(*args)
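
A sketch of producing and replaying such a log; passing ``log=True`` to the runner so that it records ``(method_name, *args)`` tuples on ``runner.log`` is an assumption about the runner API::

    runner = adaptive.Runner(learner, goal=lambda l: l.npoints >= 100,
                             log=True)
    # once it is done, apply the same calls to a fresh learner
    new_learner = adaptive.Learner1D(f, bounds=(-1, 1))
    adaptive.runner.replay_log(new_learner, runner.log)
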
new file mode 100644
Binary files /dev/null and b/docs/source/_static/logo.png differ
... ...
@@ -43,6 +43,7 @@ release = adaptive.__version__
 extensions = [
     'sphinx.ext.autodoc',
     'sphinx.ext.autosummary',
+    'sphinx.ext.autosectionlabel',
     'sphinx.ext.intersphinx',
     'sphinx.ext.mathjax',
     'sphinx.ext.viewcode',
similarity index 85%
rename from docs/source/rest_of_readme.rst
rename to docs/source/docs.rst
... ...
@@ -19,9 +19,13 @@ The following learners are implemented:
 - `~adaptive.AverageLearner`, for stochastic functions where you want to
   average the result over many evaluations,
 - `~adaptive.IntegratorLearner`, for
-  when you want to intergrate a 1D function ``f: ℝ → ℝ``,
+  when you want to integrate a 1D function ``f: ℝ → ℝ``.
+
+Meta-learners (to be used with other learners):
+
 - `~adaptive.BalancingLearner`, for when you want to run several learners at once,
-  selecting the “best” one each time you get more points.
+  selecting the “best” one each time you get more points,
+- `~adaptive.DataSaver`, for when your function doesn't just return a scalar or a vector.
 
 In addition to the learners, ``adaptive`` also provides primitives for
 running the sampling across several cores and even several machines,
... ...
@@ -47,8 +51,6 @@ on the *Play* :fa:`play` button or move the sliders.
     adaptive.notebook_extension()
     %output holomap='scrubber'
 
-
-
 `adaptive.Learner1D`
 ~~~~~~~~~~~~~~~~~~~~
 
... ...
@@ -82,8 +84,6 @@ on the *Play* :fa:`play` button or move the sliders.
     (get_hm(uniform_loss).relabel('homogeneous sampling')
      + get_hm(default_loss).relabel('with adaptive'))
 
-
-
 `adaptive.Learner2D`
 ~~~~~~~~~~~~~~~~~~~~
 
... ...
@@ -111,8 +111,6 @@ on the *Play* :fa:`play` button or move the sliders.
     plots = {n: plot(learner, n) for n in range(4, 1010, 20)}
     hv.HoloMap(plots, kdims=['npoints']).collate()
 
-
-
 `adaptive.AverageLearner`
 ~~~~~~~~~~~~~~~~~~~~~~~~~
 
... ...
@@ -134,15 +132,31 @@ on the *Play* :fa:`play` button or move the sliders.
     plots = {n: plot(learner, n) for n in range(10, 10000, 200)}
     hv.HoloMap(plots, kdims=['npoints'])
 
+`adaptive.LearnerND`
+~~~~~~~~~~~~~~~~~~~~
 
-see more in the :ref:`Tutorial Adaptive`.
+.. jupyter-execute::
+    :hide-code:
 
+    def sphere(xyz):
+        import numpy as np
+        x, y, z = xyz
+        a = 0.4
+        return np.exp(-(x**2 + y**2 + z**2 - 0.75**2)**2/a**4)
 
-.. include:: ../../README.rst
-    :start-after: not-in-documentation-end
+    learner = adaptive.LearnerND(sphere, bounds=[(-1, 1), (-1, 1), (-1, 1)])
+    adaptive.runner.simple(learner, lambda l: l.npoints == 3000)
+
+    learner.plot_3D()
 
+See more in the :ref:`Tutorial Adaptive`.
 
-Authors
+.. include:: ../../README.rst
+    :start-after: not-in-documentation-end
+    :end-before: credits-end
 
 .. mdinclude:: ../../AUTHORS.md
+
+.. include:: ../../README.rst
+    :start-after: credits-end
+    :end-before: references-start
... ...
@@ -16,6 +16,6 @@
    :maxdepth: 2
    :hidden:
 
-   rest_of_readme
+   docs
    tutorial/tutorial
    reference/adaptive
... ...
@@ -1,4 +1,4 @@
-adaptive.learner.BaseLearner
+adaptive.BaseLearner
 ============================
 
 .. autoclass:: adaptive.learner.BaseLearner
new file mode 100644
... ...
@@ -0,0 +1,7 @@
+adaptive.learner.triangulation module
+=====================================
+
+.. automodule:: adaptive.learner.triangulation
+    :members:
+    :undoc-members:
+    :show-inheritance:
... ...
@@ -7,6 +7,7 @@ Learners
 .. toctree::
 
     adaptive.learner.average_learner
+    adaptive.learner.base_learner
     adaptive.learner.balancing_learner
     adaptive.learner.data_saver
     adaptive.learner.integrator_learner
... ...
@@ -22,6 +23,7 @@ Runners
     adaptive.runner.Runner
     adaptive.runner.AsyncRunner
     adaptive.runner.BlockingRunner
+    adaptive.runner.BaseRunner
     adaptive.runner.extras
 
 Other
... ...
@@ -1,5 +1,5 @@
-adaptive.runner.simple
-======================
+Runner extras
+=============
 
 Simple executor
 ---------------
... ...
@@ -9,7 +9,7 @@ Simple executor
 Sequential executor
 --------------------
 
-.. autofunction:: adaptive.runner.SequentialExecutor
+.. autoclass:: adaptive.runner.SequentialExecutor
 
 
 Replay log
... ...
@@ -8,7 +8,7 @@ Custom adaptive logic for 1D and 2D
 
 .. seealso::
     The complete source code of this tutorial can be found in
-    :jupyter-download:notebook:`tutorial.custom-loss-function`
+    :jupyter-download:notebook:`tutorial.custom-loss`
 
 .. jupyter-execute::
     :hide-code:
... ...
@@ -22,6 +22,7 @@ on the following packages
 - ``bokeh``
 - ``ipywidgets``
 
+We recommend starting with the :ref:`Tutorial `~adaptive.Learner1D``.
 
 .. note::
     Because this documentation consists of static html, the ``live_plot``