
Merge branch 'doc' into 'master'

add documentation

Closes #91

See merge request qt/adaptive!120

Bas Nijholt authored on 17/10/2018 11:20:31
Showing 38 changed files
deleted file mode 100644
... ...
@@ -1,100 +0,0 @@
-# ![][logo] adaptive
-
-[![PyPI](https://img.shields.io/pypi/v/adaptive.svg)](https://pypi.python.org/pypi/adaptive)
-[![Conda](https://anaconda.org/conda-forge/adaptive/badges/installer/conda.svg)](https://anaconda.org/conda-forge/adaptive)
-[![Downloads](https://anaconda.org/conda-forge/adaptive/badges/downloads.svg)](https://anaconda.org/conda-forge/adaptive)
-[![pipeline status](https://gitlab.kwant-project.org/qt/adaptive/badges/master/pipeline.svg)](https://gitlab.kwant-project.org/qt/adaptive/pipelines)
-[![DOI](https://zenodo.org/badge/113714660.svg)](https://zenodo.org/badge/latestdoi/113714660)
-[![Binder](https://mybinder.org/badge.svg)](https://mybinder.org/v2/gh/python-adaptive/adaptive/master?filepath=learner.ipynb)
-[![Join the chat at https://gitter.im/python-adaptive/adaptive](https://img.shields.io/gitter/room/nwjs/nw.js.svg)](https://gitter.im/python-adaptive/adaptive)
-
-**Tools for adaptive parallel sampling of mathematical functions.**
-
-`adaptive` is an [open-source](LICENSE) Python library designed to make adaptive parallel function evaluation simple.
-With `adaptive` you just supply a function with its bounds, and it will be evaluated at the "best" points in parameter space.
-With just a few lines of code you can evaluate functions on a computing cluster, live-plot the data as it returns, and fine-tune the adaptive sampling algorithm.
-
-Check out the `adaptive` [example notebook `learner.ipynb`](learner.ipynb) (or run it [live on Binder](https://mybinder.org/v2/gh/python-adaptive/adaptive/master?filepath=learner.ipynb)) to see examples of how to use `adaptive`.
-
-
-**WARNING: `adaptive` is still in a beta development stage**
-
-
-## Implemented algorithms
-The core concept in `adaptive` is that of a *learner*. A *learner* samples
-a function at the best places in its parameter space to get maximum
-"information" about the function. As it evaluates the function
-at more and more points in the parameter space, it gets a better idea of where
-the best places are to sample next.
-
-Of course, what qualifies as the "best places" will depend on your application domain!
-`adaptive` makes some reasonable default choices, but the details of the adaptive
-sampling are completely customizable.
-
-
-The following learners are implemented:
-* `Learner1D`, for 1D functions `f: ℝ → ℝ^N`,
-* `Learner2D`, for 2D functions `f: ℝ^2 → ℝ^N`,
-* `LearnerND`, for ND functions `f: ℝ^N → ℝ^M`,
-* `AverageLearner`, For stochastic functions where you want to average the result over many evaluations,
-* `IntegratorLearner`, for when you want to intergrate a 1D function `f: ℝ → ℝ`,
-* `BalancingLearner`, for when you want to run several learners at once, selecting the "best" one each time you get more points.
-
-In addition to the learners, `adaptive` also provides primitives for running
-the sampling across several cores and even several machines, with built-in support
-for [`concurrent.futures`](https://docs.python.org/3/library/concurrent.futures.html),
-[`ipyparallel`](https://ipyparallel.readthedocs.io/en/latest/)
-and [`distributed`](https://distributed.readthedocs.io/en/latest/).
-
-
-## Examples
-<img src="https://user-images.githubusercontent.com/6897215/38739170-6ac7c014-3f34-11e8-9e8f-93b3a3a3d61b.gif" width='20%'> </img>
-<img src="https://user-images.githubusercontent.com/6897215/35219611-ac8b2122-ff73-11e7-9332-adffab64a8ce.gif" width='40%'> </img>
-
-
-## Installation
-`adaptive` works with Python 3.6 and higher on Linux, Windows, or Mac, and provides optional extensions for working with the Jupyter/IPython Notebook.
-
-The recommended way to install adaptive is using `conda`:
-```bash
-conda install -c conda-forge adaptive
-```
-
-`adaptive` is also available on PyPI:
-```bash
-pip install adaptive[notebook]
-```
-
-The `[notebook]` above will also install the optional dependencies for running `adaptive` inside
-a Jupyter notebook.
-
-
-## Development
-Clone the repository and run `setup.py develop` to add a link to the cloned repo into your
-Python path:
-```
-git clone git@github.com:python-adaptive/adaptive.git
-cd adaptive
-python3 setup.py develop
-```
-
-We highly recommend using a Conda environment or a virtualenv to manage the versions of your installed
-packages while working on `adaptive`.
-
-In order to not pollute the history with the output of the notebooks, please setup the git filter by executing
-
-```bash
-python ipynb_filter.py
-```
-
-in the repository.
-
-
-## Credits
-We would like to give credits to the following people:
-- Pedro Gonnet for his implementation of [`CQUAD`](https://www.gnu.org/software/gsl/manual/html_node/CQUAD-doubly_002dadaptive-integration.html), "Algorithm 4" as described in "Increasing the Reliability of Adaptive Quadrature Using Explicit Interpolants", P. Gonnet, ACM Transactions on Mathematical Software, 37 (3), art. no. 26, 2010.
-- Pauli Virtanen for his `AdaptiveTriSampling` script (no longer available online since SciPy Central went down) which served as inspiration for the [`Learner2D`](adaptive/learner/learner2D.py).
-
-For general discussion, we have a [Gitter chat channel](https://gitter.im/python-adaptive/adaptive). If you find any bugs or have any feature suggestions please file a GitLab [issue](https://gitlab.kwant-project.org/qt/adaptive/issues/new?issue) or submit a [merge request](https://gitlab.kwant-project.org/qt/adaptive/merge_requests).
-
-[logo]: https://gitlab.kwant-project.org/qt/adaptive/uploads/d20444093920a4a0499e165b5061d952/logo.png "adaptive logo"
new file mode 100644
... ...
@@ -0,0 +1,155 @@
+.. summary-start
+
+.. _logo-adaptive:
+
+|image0| adaptive
+=================
+
+|PyPI| |Conda| |Downloads| |pipeline status| |DOI| |Binder| |Join the
+chat at https://gitter.im/python-adaptive/adaptive|
+
+**Tools for adaptive parallel sampling of mathematical functions.**
+
+``adaptive`` is an open-source Python library designed to
+make adaptive parallel function evaluation simple. With ``adaptive`` you
+just supply a function with its bounds, and it will be evaluated at the
+“best” points in parameter space. With just a few lines of code you can
+evaluate functions on a computing cluster, live-plot the data as it
+returns, and fine-tune the adaptive sampling algorithm.
+
+Check out the ``adaptive`` example notebook
+`learner.ipynb <https://github.com/python-adaptive/adaptive/blob/master/learner.ipynb>`_ (or run it `live on
+Binder <https://mybinder.org/v2/gh/python-adaptive/adaptive/master?filepath=learner.ipynb>`_)
+to see examples of how to use ``adaptive``.
+
+.. summary-end
+
+**WARNING: adaptive is still in a beta development stage**
+
+.. implemented-algorithms-start
+
+Implemented algorithms
+----------------------
+
+The core concept in ``adaptive`` is that of a *learner*. A *learner*
+samples a function at the best places in its parameter space to get
+maximum “information” about the function. As it evaluates the function
+at more and more points in the parameter space, it gets a better idea of
+where the best places are to sample next.
+
+Of course, what qualifies as the “best places” will depend on your
+application domain! ``adaptive`` makes some reasonable default choices,
+but the details of the adaptive sampling are completely customizable.
+
+The following learners are implemented:
+
+- ``Learner1D``, for 1D functions ``f: ℝ → ℝ^N``,
+- ``Learner2D``, for 2D functions ``f: ℝ^2 → ℝ^N``,
+- ``LearnerND``, for ND functions ``f: ℝ^N → ℝ^M``,
+- ``AverageLearner``, for stochastic functions where you want to
+  average the result over many evaluations,
+- ``IntegratorLearner``, for
+  when you want to integrate a 1D function ``f: ℝ → ℝ``,
+- ``BalancingLearner``, for when you want to run several learners at once,
+  selecting the “best” one each time you get more points.
+
+In addition to the learners, ``adaptive`` also provides primitives for
+running the sampling across several cores and even several machines,
+with built-in support for
+`concurrent.futures <https://docs.python.org/3/library/concurrent.futures.html>`_,
+`ipyparallel <https://ipyparallel.readthedocs.io/en/latest/>`_ and
+`distributed <https://distributed.readthedocs.io/en/latest/>`_.
+
+.. implemented-algorithms-end
+
+Examples
+--------
+
+.. raw:: html
+
+  <img src="https://user-images.githubusercontent.com/6897215/38739170-6ac7c014-3f34-11e8-9e8f-93b3a3a3d61b.gif" width='20%'> </img> <img src="https://user-images.githubusercontent.com/6897215/35219611-ac8b2122-ff73-11e7-9332-adffab64a8ce.gif" width='40%'> </img>
+
+
+Installation
+------------
+
+``adaptive`` works with Python 3.6 and higher on Linux, Windows, or Mac,
+and provides optional extensions for working with the Jupyter/IPython
+Notebook.
+
+The recommended way to install adaptive is using ``conda``:
+
+.. code:: bash
+
+    conda install -c conda-forge adaptive
+
+``adaptive`` is also available on PyPI:
+
+.. code:: bash
+
+    pip install adaptive[notebook]
+
+The ``[notebook]`` above will also install the optional dependencies for
+running ``adaptive`` inside a Jupyter notebook.
+
+Development
+-----------
+
+Clone the repository and run ``setup.py develop`` to add a link to the
+cloned repo into your Python path:
+
+.. code:: bash
+
+    git clone git@github.com:python-adaptive/adaptive.git
+    cd adaptive
+    python3 setup.py develop
+
+We highly recommend using a Conda environment or a virtualenv to manage
+the versions of your installed packages while working on ``adaptive``.
+
+In order not to pollute the history with the output of the notebooks,
+please set up the git filter by executing
+
+.. code:: bash
+
+    python ipynb_filter.py
+
+in the repository.
+
+Credits
+-------
+
+We would like to give credits to the following people:
+
+- Pedro Gonnet for his implementation of `CQUAD <https://www.gnu.org/software/gsl/manual/html_node/CQUAD-doubly_002dadaptive-integration.html>`_,
+  “Algorithm 4” as described in “Increasing the Reliability of Adaptive
+  Quadrature Using Explicit Interpolants”, P. Gonnet, ACM Transactions on
+  Mathematical Software, 37 (3), art. no. 26, 2010.
+- Pauli Virtanen for his ``AdaptiveTriSampling`` script (no longer
+  available online since SciPy Central went down) which served as
+  inspiration for the ``~adaptive.Learner2D``.
+
+For general discussion, we have a `Gitter chat
+channel <https://gitter.im/python-adaptive/adaptive>`_. If you find any
+bugs or have any feature suggestions please file a GitLab
+`issue <https://gitlab.kwant-project.org/qt/adaptive/issues/new?issue>`_
+or submit a `merge
+request <https://gitlab.kwant-project.org/qt/adaptive/merge_requests>`_.
+
+.. references-start
+.. |image0| image:: https://gitlab.kwant-project.org/qt/adaptive/uploads/d20444093920a4a0499e165b5061d952/logo.png
+.. |PyPI| image:: https://img.shields.io/pypi/v/adaptive.svg
+   :target: https://pypi.python.org/pypi/adaptive
+.. |Conda| image:: https://anaconda.org/conda-forge/adaptive/badges/installer/conda.svg
+   :target: https://anaconda.org/conda-forge/adaptive
+.. |Downloads| image:: https://anaconda.org/conda-forge/adaptive/badges/downloads.svg
+   :target: https://anaconda.org/conda-forge/adaptive
+.. |pipeline status| image:: https://gitlab.kwant-project.org/qt/adaptive/badges/master/pipeline.svg
+   :target: https://gitlab.kwant-project.org/qt/adaptive/pipelines
+.. |DOI| image:: https://zenodo.org/badge/113714660.svg
+   :target: https://zenodo.org/badge/latestdoi/113714660
+.. |Binder| image:: https://mybinder.org/badge.svg
+   :target: https://mybinder.org/v2/gh/python-adaptive/adaptive/master?filepath=learner.ipynb
+.. |Join the chat at https://gitter.im/python-adaptive/adaptive| image:: https://img.shields.io/gitter/room/nwjs/nw.js.svg
+   :target: https://gitter.im/python-adaptive/adaptive
+.. references-end
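The adaptive-sampling loop the README above describes can be sketched in a few lines of plain Python. This is a hypothetical, simplified stand-in for `Learner1D` (the real learner uses a scaled, customizable loss and smarter point selection): it repeatedly bisects the interval whose endpoints are farthest apart in the x-y plane.

```python
# Simplified illustration of adaptive's core idea (not the real Learner1D):
# keep splitting the interval whose endpoints span the largest x-y distance.
from math import hypot

def sample_adaptively(f, bounds, n):
    """Greedily bisect the interval with the largest Euclidean x-y span."""
    xs = sorted(bounds)
    ys = {x: f(x) for x in xs}
    while len(xs) < n:
        # "loss" of an interval = distance between its endpoints in x-y
        losses = [(hypot(b - a, ys[b] - ys[a]), a, b)
                  for a, b in zip(xs, xs[1:])]
        _, a, b = max(losses)
        mid = (a + b) / 2
        ys[mid] = f(mid)       # evaluate at the "best" new point
        xs = sorted(ys)        # keep the sample points ordered
    return ys

data = sample_adaptively(lambda x: x * x, (-1.0, 1.0), n=9)
```

Points cluster where the function changes fastest, which is the behavior the learners generalize to higher dimensions and parallel execution.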
... ...
@@ -8,15 +8,15 @@ from . import learner
 from . import runner
 from . import utils
 
-from .learner import (Learner1D, Learner2D, LearnerND, AverageLearner,
-                      BalancingLearner, make_datasaver, DataSaver,
-                      IntegratorLearner)
+from .learner import (BaseLearner, Learner1D, Learner2D, LearnerND,
+                      AverageLearner, BalancingLearner, make_datasaver,
+                      DataSaver, IntegratorLearner)
 
 with suppress(ImportError):
     # Only available if 'scikit-optimize' is installed
     from .learner import SKOptLearner
 
-from .runner import Runner, BlockingRunner
+from .runner import Runner, AsyncRunner, BlockingRunner
 
 from ._version import __version__
 del _version
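The optional-import pattern used above for `SKOptLearner` relies on `contextlib.suppress`: the import is attempted, and silently skipped when the dependency is absent. A self-contained sketch (the module name below is deliberately nonexistent):

```python
# Optional-dependency pattern: try to import, continue gracefully if missing.
from contextlib import suppress

HAS_OPTIONAL = False
with suppress(ImportError):
    import _no_such_module_xyz  # hypothetical missing optional package
    HAS_OPTIONAL = True         # only reached if the import succeeded

# Execution continues either way; HAS_OPTIONAL records the outcome.
```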
... ...
@@ -17,9 +17,9 @@ class AverageLearner(BaseLearner):
     Parameters
     ----------
     atol : float
-        Desired absolute tolerance
+        Desired absolute tolerance.
     rtol : float
-        Desired relative tolerance
+        Desired relative tolerance.
 
     Attributes
     ----------
... ...
@@ -21,27 +21,32 @@ class BalancingLearner(BaseLearner):
 
     Parameters
     ----------
-    learners : sequence of BaseLearner
+    learners : sequence of `BaseLearner`
         The learners from which to choose. These must all have the same type.
     cdims : sequence of dicts, or (keys, iterable of values), optional
         Constant dimensions; the parameters that label the learners. Used
         in `plot`.
         Example inputs that all give identical results:
+
         - sequence of dicts:
+
             >>> cdims = [{'A': True, 'B': 0},
             ...          {'A': True, 'B': 1},
             ...          {'A': False, 'B': 0},
             ...          {'A': False, 'B': 1}]
+
        - tuple with (keys, iterable of values):
+
             >>> cdims = (['A', 'B'], itertools.product([True, False], [0, 1]))
             >>> cdims = (['A', 'B'], [(True, 0), (True, 1),
             ...                       (False, 0), (False, 1)])
+
     strategy : 'loss_improvements' (default), 'loss', or 'npoints'
-        The points that the 'BalancingLearner' choses can be either based on:
+        The points that the `BalancingLearner` chooses can be either based on:
         the best 'loss_improvements', the smallest total 'loss' of the
         child learners, or the number of points per learner, using 'npoints'.
         One can dynamically change the strategy while the simulation is
-        running by changing the 'learner.strategy' attribute.
+        running by changing the ``learner.strategy`` attribute.
 
     Notes
     -----
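The two `cdims` forms in the docstring above are interchangeable. A small check that expanding a (keys, iterable of values) tuple reproduces the sequence-of-dicts form:

```python
# Expanding (keys, iterable of values) into the equivalent sequence of dicts.
import itertools

cdims_dicts = [{'A': True, 'B': 0},
               {'A': True, 'B': 1},
               {'A': False, 'B': 0},
               {'A': False, 'B': 1}]

keys = ['A', 'B']
values = itertools.product([True, False], [0, 1])
expanded = [dict(zip(keys, vals)) for vals in values]

assert expanded == cdims_dicts  # both forms label the learners identically
```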
... ...
@@ -50,7 +55,7 @@ class BalancingLearner(BaseLearner):
     compared*. For the moment we enforce this restriction by requiring that
     all learners are the same type but (depending on the internals of the
     learner) it may be that the loss cannot be compared *even between learners
-    of the same type*. In this case the BalancingLearner will behave in an
+    of the same type*. In this case the `BalancingLearner` will behave in an
     undefined way.
     """
 
... ...
@@ -183,28 +188,34 @@ class BalancingLearner(BaseLearner):
         cdims : sequence of dicts, or (keys, iterable of values), optional
             Constant dimensions; the parameters that label the learners.
             Example inputs that all give identical results:
+
             - sequence of dicts:
+
                 >>> cdims = [{'A': True, 'B': 0},
                 ...          {'A': True, 'B': 1},
                 ...          {'A': False, 'B': 0},
                 ...          {'A': False, 'B': 1}]
+
             - tuple with (keys, iterable of values):
+
                 >>> cdims = (['A', 'B'], itertools.product([True, False], [0, 1]))
                 >>> cdims = (['A', 'B'], [(True, 0), (True, 1),
                 ...                       (False, 0), (False, 1)])
+
         plotter : callable, optional
             A function that takes the learner as an argument and returns a
-            holoviews object. By default learner.plot() will be called.
+            holoviews object. By default ``learner.plot()`` will be called.
         dynamic : bool, default True
-            Return a holoviews.DynamicMap if True, else a holoviews.HoloMap.
-            The DynamicMap is rendered as the sliders change and can therefore
-            not be exported to html. The HoloMap does not have this problem.
+            Return a `holoviews.core.DynamicMap` if True, else a
+            `holoviews.core.HoloMap`. The `~holoviews.core.DynamicMap` is
+            rendered as the sliders change and can therefore not be exported
+            to html. The `~holoviews.core.HoloMap` does not have this problem.
 
         Returns
         -------
-        dm : holoviews.DynamicMap object (default) or holoviews.HoloMap object
-            A DynamicMap (dynamic=True) or HoloMap (dynamic=False) with
-            sliders that are defined by 'cdims'.
+        dm : `holoviews.core.DynamicMap` (default) or `holoviews.core.HoloMap`
+            A `DynamicMap` (dynamic=True) or `HoloMap` (dynamic=False) with
+            sliders that are defined by `cdims`.
         """
         hv = ensure_holoviews()
         cdims = cdims or self._cdims_default
... ...
@@ -248,13 +259,13 @@ class BalancingLearner(BaseLearner):
     def from_product(cls, f, learner_type, learner_kwargs, combos):
         """Create a `BalancingLearner` with learners of all combinations of
         named variables’ values. The `cdims` will be set correctly, so calling
-        `learner.plot` will be a `holoviews.HoloMap` with the correct labels.
+        `learner.plot` will be a `holoviews.core.HoloMap` with the correct labels.
 
         Parameters
         ----------
         f : callable
             Function to learn, must take arguments provided in `combos`.
-        learner_type : BaseLearner
+        learner_type : `BaseLearner`
             The learner that should wrap the function. For example `Learner1D`.
         learner_kwargs : dict
             Keyword arguments for the `learner_type`. For example `dict(bounds=[0, 1])`.
... ...
@@ -11,14 +11,16 @@ class BaseLearner(metaclass=abc.ABCMeta):
     function : callable: X → Y
         The function to learn.
     data : dict: X → Y
-        'function' evaluated at certain points.
+        `function` evaluated at certain points.
         The values can be 'None', which indicates that the point
         will be evaluated, but that we do not have the result yet.
     npoints : int, optional
         The number of evaluated points that have been added to the learner.
         Subclasses do not *have* to implement this attribute.
 
-    Subclasses may define a 'plot' method that takes no parameters
+    Notes
+    -----
+    Subclasses may define a ``plot`` method that takes no parameters
     and returns a holoviews plot.
     """
 
... ...
@@ -75,9 +77,8 @@ class BaseLearner(metaclass=abc.ABCMeta):
         n : int
             The number of points to choose.
         tell_pending : bool, default: True
-            If True, add the chosen points to this
-            learner's 'data' with 'None' for the 'y'
-            values. Set this to False if you do not
+            If True, add the chosen points to this learner's
+            `pending_points`. Set this to False if you do not
             want to modify the state of the learner.
         """
         pass
... ...
@@ -8,7 +8,7 @@ class DataSaver:
 
     Parameters
     ----------
-    learner : Learner object
+    learner : `~adaptive.BaseLearner` instance
         The learner that needs to be wrapped.
     arg_picker : function
         Function that returns the argument that needs to be learned.
... ...
@@ -16,10 +16,11 @@ class DataSaver:
     Example
     -------
     Imagine we have a function that returns a dictionary
-    of the form: `{'y': y, 'err_est': err_est}`.
-
+    of the form: ``{'y': y, 'err_est': err_est}``.
+
+    >>> from operator import itemgetter
     >>> _learner = Learner1D(f, bounds=(-1.0, 1.0))
-    >>> learner = DataSaver(_learner, arg_picker=operator.itemgetter('y'))
+    >>> learner = DataSaver(_learner, arg_picker=itemgetter('y'))
     """
 
     def __init__(self, learner, arg_picker):
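The `arg_picker` in the docstring above is just a callable applied to the wrapped function's full return value; the wrapped learner sees only the extracted piece, and the rest is stored as extra data. With `operator.itemgetter`, as in the example:

```python
# arg_picker extracts the value to learn from the function's return value.
from operator import itemgetter

result = {'y': 0.5, 'err_est': 0.01}  # example return value of the function
arg_picker = itemgetter('y')
value_to_learn = arg_picker(result)   # the learner only sees this value
```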
... ...
@@ -46,12 +47,12 @@ def _ds(learner_type, arg_picker, *args, **kwargs):
 
 
 def make_datasaver(learner_type, arg_picker):
-    """Create a DataSaver of a `learner_type` that can be instantiated
+    """Create a `DataSaver` of a `learner_type` that can be instantiated
     with the `learner_type`'s keyword arguments.
 
     Parameters
     ----------
-    learner_type : BaseLearner
+    learner_type : `~adaptive.BaseLearner` type
         The learner type that needs to be wrapped.
     arg_picker : function
         Function that returns the argument that needs to be learned.
... ...
@@ -59,15 +60,16 @@ def make_datasaver(learner_type, arg_picker):
     Example
     -------
     Imagine we have a function that returns a dictionary
-    of the form: `{'y': y, 'err_est': err_est}`.
+    of the form: ``{'y': y, 'err_est': err_est}``.
 
-    >>> DataSaver = make_datasaver(Learner1D,
-    ...     arg_picker=operator.itemgetter('y'))
+    >>> from operator import itemgetter
+    >>> DataSaver = make_datasaver(Learner1D, arg_picker=itemgetter('y'))
     >>> learner = DataSaver(function=f, bounds=(-1.0, 1.0))
 
-    Or when using `BalacingLearner.from_product`:
+    Or when using `adaptive.BalancingLearner.from_product`:
+
     >>> learner_type = make_datasaver(adaptive.Learner1D,
-    ...     arg_picker=operator.itemgetter('y'))
+    ...     arg_picker=itemgetter('y'))
     >>> learner = adaptive.BalancingLearner.from_product(
     ...     jacobi, learner_type, dict(bounds=(0, 1)), combos)
     """
... ...
@@ -330,7 +330,7 @@ class IntegratorLearner(BaseLearner):
             The integral value in `self.bounds`.
         err : float
             The absolute error associated with `self.igral`.
-        max_ivals : int, default 1000
+        max_ivals : int, default: 1000
             Maximum number of intervals that can be present in the calculation
             of the integral. If this amount exceeds max_ivals, the interval
             with the smallest error will be discarded.
... ...
@@ -94,17 +94,24 @@ class Learner1D(BaseLearner):
         If not provided, then a default is used, which uses the scaled distance
         in the x-y plane as the loss. See the notes for more details.
 
+    Attributes
+    ----------
+    data : dict
+        Sampled points and values.
+    pending_points : set
+        Points that still have to be evaluated.
+
     Notes
     -----
-    'loss_per_interval' takes 3 parameters: interval, scale, and function_values,
-    and returns a scalar; the loss over the interval.
+    `loss_per_interval` takes 3 parameters: ``interval``, ``scale``, and
+    ``function_values``, and returns a scalar; the loss over the interval.
 
     interval : (float, float)
         The bounds of the interval.
     scale : (float, float)
         The x and y scale over all the intervals, useful for rescaling the
         interval loss.
-    function_values : dict(float -> float)
+    function_values : dict(float → float)
         A map containing evaluated function values. It is guaranteed
         to have values for both of the points in 'interval'.
     """
... ...
@@ -363,7 +370,7 @@ class Learner1D(BaseLearner):
                 x_left, x_right = ival
                 a, b = to_interpolate[-1] if to_interpolate else (None, None)
                 if b == x_left and (a, b) not in self.losses:
-                    # join (a, b) and (x_left, x_right) --> (a, x_right)
+                    # join (a, b) and (x_left, x_right) → (a, x_right)
                     to_interpolate[-1] = (a, x_right)
                 else:
                     to_interpolate.append((x_left, x_right))
... ...
@@ -62,8 +62,8 @@ def uniform_loss(ip):
 
 
 def resolution_loss(ip, min_distance=0, max_distance=1):
-    """Loss function that is similar to the default loss function, but you can
-    set the maximimum and minimum size of a triangle.
+    """Loss function that is similar to the `default_loss` function, but you
+    can set the maximum and minimum size of a triangle.
 
     Works with `~adaptive.Learner2D` only.
 
... ...
@@ -101,8 +101,8 @@ def resolution_loss(ip, min_distance=0, max_distance=1):
 
 def minimize_triangle_surface_loss(ip):
     """Loss function that is similar to the default loss function in the
-    `Learner1D`. The loss is the area spanned by the 3D vectors of the
-    vertices.
+    `~adaptive.Learner1D`. The loss is the area spanned by the 3D
+    vectors of the vertices.
 
     Works with `~adaptive.Learner2D` only.
 
... ...
@@ -206,15 +206,15 @@ class Learner2D(BaseLearner):
     pending_points : set
         Points that still have to be evaluated and are currently
         interpolated, see `data_combined`.
-    stack_size : int, default 10
+    stack_size : int, default: 10
         The size of the new candidate points stack. Set it to 1
         to recalculate the best points at each call to `ask`.
-    aspect_ratio : float, int, default 1
-        Average ratio of `x` span over `y` span of a triangle. If
-        there is more detail in either `x` or `y` the `aspect_ratio`
-        needs to be adjusted. When `aspect_ratio > 1` the
-        triangles will be stretched along `x`, otherwise
-        along `y`.
+    aspect_ratio : float, int, default: 1
+        Average ratio of ``x`` span over ``y`` span of a triangle. If
+        there is more detail in either ``x`` or ``y`` the ``aspect_ratio``
+        needs to be adjusted. When ``aspect_ratio > 1`` the
+        triangles will be stretched along ``x``, otherwise
+        along ``y``.
 
     Methods
     -------
... ...
@@ -239,13 +239,13 @@ class Learner2D(BaseLearner):
     This sampling procedure is not extremely fast, so to benefit from
     it, your function needs to be slow enough to compute.
 
-    'loss_per_triangle' takes a single parameter, 'ip', which is a
+    `loss_per_triangle` takes a single parameter, `ip`, which is a
     `scipy.interpolate.LinearNDInterpolator`. You can use the
-    *undocumented* attributes 'tri' and 'values' of 'ip' to get a
+    *undocumented* attributes ``tri`` and ``values`` of `ip` to get a
     `scipy.spatial.Delaunay` and a vector of function values.
     These can be used to compute the loss. The functions
-    `adaptive.learner.learner2D.areas` and
-    `adaptive.learner.learner2D.deviations` to calculate the
+    `~adaptive.learner.learner2D.areas` and
+    `~adaptive.learner.learner2D.deviations` can be used to calculate the
     areas and deviations from a linear interpolation
     over each triangle.
     """
... ...
@@ -464,19 +464,21 @@ class Learner2D(BaseLearner):
             Number of points in x and y. If None (default) this number is
             evaluated by looking at the size of the smallest triangle.
         tri_alpha : float
-            The opacity (0 <= tri_alpha <= 1) of the triangles overlayed on
-            top of the image. By default the triangulation is not visible.
+            The opacity ``(0 <= tri_alpha <= 1)`` of the triangles overlaid
+            on top of the image. By default the triangulation is not visible.
 
         Returns
         -------
-        plot : holoviews.Overlay or holoviews.HoloMap
-            A `holoviews.Overlay` of `holoviews.Image * holoviews.EdgePaths`.
-            If the `learner.function` returns a vector output, a
-            `holoviews.HoloMap` of the `holoviews.Overlay`s wil be returned.
+        plot : `holoviews.core.Overlay` or `holoviews.core.HoloMap`
+            A `holoviews.core.Overlay` of
+            ``holoviews.Image * holoviews.EdgePaths``. If the
+            `learner.function` returns a vector output, a
+            `holoviews.core.HoloMap` of the
+            `holoviews.core.Overlay`\s will be returned.
 
         Notes
         -----
-        The plot object that is returned if `learner.function` returns a
+        The plot object that is returned if ``learner.function`` returns a
         vector *cannot* be used with the live_plotting functionality.
         """
         hv = ensure_holoviews()
... ...
@@ -124,15 +124,8 @@ class LearnerND(BaseLearner):
         Coordinates of the currently known points
     values : numpy array
         The values of each of the known points
-
-    Methods
-    -------
-    plot(n)
-        If dim == 2, this method will plot the function being learned.
-    plot_slice(cut_mapping, n)
-        plot a slice of the function using interpolation of the current data.
-        the cut_mapping contains the fixed parameters, the other parameters are
-        used as axes for plotting.
+    pending_points : set
+        Points that still have to be evaluated.
 
     Notes
     -----
... ...
@@ -169,10 +162,10 @@ class LearnerND(BaseLearner):
         self._tri = None
         self._losses = dict()
 
-        self._pending_to_simplex = dict()  # vertex -> simplex
+        self._pending_to_simplex = dict()  # vertex → simplex
 
         # triangulation of the pending points inside a specific simplex
-        self._subtriangulations = dict()  # simplex -> triangulation
+        self._subtriangulations = dict()  # simplex → triangulation
 
         # scale to unit
         self._transform = np.linalg.inv(np.diag(np.diff(bounds).flat))
... ...
@@ -217,6 +210,8 @@ class LearnerND(BaseLearner):
217 210
 
218 211
     @property
219 212
     def tri(self):
213
+        """An `adaptive.learner.Triangulation` instance with all the points
214
+        of the learner."""
220 215
         if self._tri is not None:
221 216
             return self._tri
222 217
 
... ...
@@ -517,13 +512,14 @@ class LearnerND(BaseLearner):
517 512
         return im.opts(style=im_opts) * tris.opts(style=tri_opts, **no_hover)
518 513
 
519 514
     def plot_slice(self, cut_mapping, n=None):
520
-        """Plot a 1d or 2d interpolated slice of a N-dimensional function.
515
+        """Plot a 1D or 2D interpolated slice of an N-dimensional function.
521 516
 
522 517
         Parameters
523 518
         ----------
524
-        cut_mapping : dict (int -> float)
519
+        cut_mapping : dict (int → float)
525 520
             for each fixed dimension the value, the other dimensions
526
-            are interpolated
521
+            are interpolated, e.g. ``cut_mapping = {0: 1}``, so from
522
+            dimension 0 ('x') to value 1.
527 523
         n : int
528 524
             the number of boxes in the interpolation grid along each axis
529 525
         """
... ...
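A small sketch of how such a ``cut_mapping`` splits the dimensions into fixed and free ones (illustration only, not `LearnerND` internals):

```python
ndim = 3                  # suppose a 3D function f(x, y, z)
cut_mapping = {0: 1.0}    # fix dimension 0 ('x') at the value 1.0

# The dimensions not mentioned in cut_mapping remain free and
# become the axes of the interpolated slice plot.
free_dims = [d for d in range(ndim) if d not in cut_mapping]
print(free_dims)  # [1, 2] -> a 2D slice over dimensions 'y' and 'z'
```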
@@ -612,7 +612,7 @@ class Triangulation:
612 612
 
613 613
         Parameters
614 614
         ----------
615
-        check : bool, default True
615
+        check : bool, default: True
616 616
             Whether to raise an error if the computed hull is different from
617 617
             stored.
618 618
 
... ...
@@ -56,21 +56,21 @@ def live_plot(runner, *, plotter=None, update_interval=2, name=None):
56 56
 
57 57
     Parameters
58 58
     ----------
59
-    runner : Runner
59
+    runner : `Runner`
60 60
     plotter : function
61 61
        A function that takes the learner as an argument and returns a
62
-        holoviews object. By default learner.plot() will be called.
62
+        holoviews object. By default ``learner.plot()`` will be called.
63 63
     update_interval : int
64 64
         Number of second between the updates of the plot.
65 65
     name : hasable
66 66
         Name for the `live_plot` task in `adaptive.active_plotting_tasks`.
67
-        By default the name is `None` and if another task with the same name
68
-        already exists that other live_plot is canceled.
67
+        By default the name is None; if another task with the same name
68
+        already exists, that other `live_plot` is canceled.
69 69
 
70 70
     Returns
71 71
     -------
72
-    dm : holoviews.DynamicMap
73
-        The plot that automatically updates every update_interval.
72
+    dm : `holoviews.core.DynamicMap`
73
+        The plot that automatically updates every `update_interval`.
74 74
     """
75 75
     if not _plotting_enabled:
76 76
         raise RuntimeError("Live plotting is not enabled; did you run "
... ...
@@ -58,60 +58,56 @@ class BaseRunner:
58 58
 
59 59
     Parameters
60 60
     ----------
61
-    learner : adaptive.learner.BaseLearner
61
+    learner : `~adaptive.BaseLearner` instance
62 62
     goal : callable
63 63
         The end condition for the calculation. This function must take
64 64
         the learner as its sole argument, and return True when we should
65 65
         stop requesting more points.
66
-    executor : concurrent.futures.Executor, distributed.Client,
67
-               or ipyparallel.Client, optional
66
+    executor : `concurrent.futures.Executor`, `distributed.Client`,\
67
+               or `ipyparallel.Client`, optional
68 68
         The executor in which to evaluate the function to be learned.
69
-        If not provided, a new `ProcessPoolExecutor` is used on Unix systems
70
-        while on Windows a `distributed.Client` is used if `distributed` is
71
-        installed.
69
+        If not provided, a new `~concurrent.futures.ProcessPoolExecutor`
70
+        is used on Unix systems while on Windows a `distributed.Client`
71
+        is used if `distributed` is installed.
72 72
     ntasks : int, optional
73 73
         The number of concurrent function evaluations. Defaults to the number
74
-        of cores available in 'executor'.
74
+        of cores available in `executor`.
75 75
     log : bool, default: False
76 76
         If True, record the method calls made to the learner by this runner.
77
-    shutdown_executor : Bool, default: False
77
+    shutdown_executor : bool, default: False
78 78
         If True, shutdown the executor when the runner has completed. If
79
-        'executor' is not provided then the executor created internally
79
+        `executor` is not provided then the executor created internally
80 80
         by the runner is shut down, regardless of this parameter.
81 81
     retries : int, default: 0
82
-        Maximum amount of retries of a certain point 'x' in
83
-        'learner.function(x)'. After 'retries' is reached for 'x' the
84
-        point is present in 'runner.failed'.
82
+        Maximum number of retries of a certain point ``x`` in
83
+        ``learner.function(x)``. After `retries` is reached for ``x``
84
+        the point is present in ``runner.failed``.
85 85
     raise_if_retries_exceeded : bool, default: True
86
-        Raise the error after a point 'x' failed 'retries'.
86
+        Raise the error after a point ``x`` failed `retries`.
87 87
 
88 88
     Attributes
89 89
     ----------
90
-    learner : Learner
90
+    learner : `~adaptive.BaseLearner` instance
91 91
         The underlying learner. May be queried for its state.
92 92
     log : list or None
93 93
         Record of the method calls made to the learner, in the format
94
-        '(method_name, *args)'.
94
+        ``(method_name, *args)``.
95 95
     to_retry : dict
96
-        Mapping of {point: n_fails, ...}. When a point has failed
97
-        'runner.retries' times it is removed but will be present
98
-        in 'runner.tracebacks'.
96
+        Mapping of ``{point: n_fails, ...}``. When a point has failed
97
+        ``runner.retries`` times it is removed but will be present
98
+        in ``runner.tracebacks``.
99 99
     tracebacks : dict
100 100
         A mapping of point to the traceback if that point failed.
101 101
     pending_points : dict
102
-        A mapping of 'concurrent.Future's to points, {Future: point, ...}.
102
+        A mapping of `~concurrent.futures.Future`\s to points.
103 103
 
104 104
     Methods
105 105
     -------
106 106
     overhead : callable
107 107
         The overhead in percent of using Adaptive. This includes the
108 108
         overhead of the executor. Essentially, this is
109
-        100 * (1 - total_elapsed_function_time / self.elapsed_time()).
109
+        ``100 * (1 - total_elapsed_function_time / self.elapsed_time())``.
110 110
 
111
-    Properties
112
-    ----------
113
-    failed : set
114
-        Set of points that failed 'retries' times.
115 111
     """
116 112
 
117 113
     def __init__(self, learner, goal, *,
... ...
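The retry bookkeeping described above (``to_retry``, ``tracebacks``, and ``failed = set(tracebacks) - set(to_retry)``) can be sketched as a small stand-alone loop. This is a hypothetical helper for illustration, not adaptive's actual runner code:

```python
import traceback

def run_with_retries(function, xs, retries=0):
    """Evaluate each point, retrying failures up to `retries` times.

    Mirrors the documented semantics: a point that keeps failing is
    removed from `to_retry` but stays in `tracebacks`, so
    ``failed = set(tracebacks) - set(to_retry)``.
    """
    results, to_retry, tracebacks = {}, {}, {}
    queue = list(xs)
    while queue:
        x = queue.pop(0)
        try:
            results[x] = function(x)
            to_retry.pop(x, None)
        except Exception:
            tracebacks[x] = traceback.format_exc()
            n_fails = to_retry.get(x, 0) + 1
            if n_fails > retries:
                to_retry.pop(x, None)  # give up; x ends up in `failed`
            else:
                to_retry[x] = n_fails
                queue.append(x)        # try this point again later
    failed = set(tracebacks) - set(to_retry)
    return results, failed
```

For example, with ``retries=1`` a point that raises twice is given up on and reported in ``failed``, while successful points land in ``results`` as usual.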
@@ -145,7 +141,7 @@ class BaseRunner:
145 141
         self.to_retry = {}
146 142
         self.tracebacks = {}
147 143
 
148
-    def max_tasks(self):
144
+    def _get_max_tasks(self):
149 145
         return self._max_tasks or _get_ncores(self.executor)
150 146
 
151 147
     def _do_raise(self, e, x):
... ...
@@ -173,7 +169,7 @@ class BaseRunner:
173 169
     def overhead(self):
174 170
         """Overhead of using Adaptive and the executor in percent.
175 171
 
176
-        This is measured as 100 * (1 - t_function / t_elapsed).
172
+        This is measured as ``100 * (1 - t_function / t_elapsed)``.
177 173
 
178 174
         Notes
179 175
         -----
... ...
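For concreteness, a quick check of this formula with hypothetical timings (10 s spent inside the learned function out of 16 s total wall time):

```python
t_function = 10.0  # hypothetical seconds spent in learner.function
t_elapsed = 16.0   # hypothetical total wall time of the runner

overhead = 100 * (1 - t_function / t_elapsed)
print(overhead)  # 37.5 -> 37.5% of the time went to Adaptive + executor
```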
@@ -213,7 +209,7 @@ class BaseRunner:
213 209
         # Launch tasks to replace the ones that completed
214 210
         # on the last iteration, making sure to fill workers
215 211
         # that have started since the last iteration.
216
-        n_new_tasks = max(0, self.max_tasks() - len(self.pending_points))
212
+        n_new_tasks = max(0, self._get_max_tasks() - len(self.pending_points))
217 213
 
218 214
         if self.do_log:
219 215
             self.log.append(('ask', n_new_tasks))
... ...
@@ -243,7 +239,7 @@ class BaseRunner:
243 239
 
244 240
     @property
245 241
     def failed(self):
246
-        """Set of points that failed 'self.retries' times."""
242
+        """Set of points that failed ``runner.retries`` times."""
247 243
         return set(self.tracebacks) - set(self.to_retry)
248 244
 
249 245
 
... ...
@@ -252,48 +248,48 @@ class BlockingRunner(BaseRunner):
252 248
 
253 249
     Parameters
254 250
     ----------
255
-    learner : adaptive.learner.BaseLearner
251
+    learner : `~adaptive.BaseLearner` instance
256 252
     goal : callable
257 253
         The end condition for the calculation. This function must take
258 254
         the learner as its sole argument, and return True when we should
259 255
         stop requesting more points.
260
-    executor : concurrent.futures.Executor, distributed.Client,
261
-               or ipyparallel.Client, optional
256
+    executor : `concurrent.futures.Executor`, `distributed.Client`,\
257
+               or `ipyparallel.Client`, optional
262 258
         The executor in which to evaluate the function to be learned.
263
-        If not provided, a new `ProcessPoolExecutor` is used on Unix systems
264
-        while on Windows a `distributed.Client` is used if `distributed` is
265
-        installed.
259
+        If not provided, a new `~concurrent.futures.ProcessPoolExecutor`
260
+        is used on Unix systems while on Windows a `distributed.Client`
261
+        is used if `distributed` is installed.
266 262
     ntasks : int, optional
267 263
         The number of concurrent function evaluations. Defaults to the number
268
-        of cores available in 'executor'.
264
+        of cores available in `executor`.
269 265
     log : bool, default: False
270 266
         If True, record the method calls made to the learner by this runner.
271 267
    shutdown_executor : bool, default: False
272 268
         If True, shutdown the executor when the runner has completed. If
273
-        'executor' is not provided then the executor created internally
269
+        `executor` is not provided then the executor created internally
274 270
         by the runner is shut down, regardless of this parameter.
275 271
     retries : int, default: 0
276
-        Maximum amount of retries of a certain point 'x' in
277
-        'learner.function(x)'. After 'retries' is reached for 'x' the
278
-        point is present in 'runner.failed'.
272
+        Maximum number of retries of a certain point ``x`` in
273
+        ``learner.function(x)``. After `retries` is reached for ``x``
274
+        the point is present in ``runner.failed``.
279 275
     raise_if_retries_exceeded : bool, default: True
280
-        Raise the error after a point 'x' failed 'retries'.
276
+        Raise the error after a point ``x`` failed `retries`.
281 277
 
282 278
     Attributes
283 279
     ----------
284
-    learner : Learner
280
+    learner : `~adaptive.BaseLearner` instance
285 281
         The underlying learner. May be queried for its state.
286 282
     log : list or None
287 283
         Record of the method calls made to the learner, in the format
288
-        '(method_name, *args)'.
284
+        ``(method_name, *args)``.
289 285
     to_retry : dict
290
-        Mapping of {point: n_fails, ...}. When a point has failed
291
-        'runner.retries' times it is removed but will be present
292
-        in 'runner.tracebacks'.
286
+        Mapping of ``{point: n_fails, ...}``. When a point has failed
287
+        ``runner.retries`` times it is removed but will be present
288
+        in ``runner.tracebacks``.
293 289
     tracebacks : dict
294 290
         A mapping of point to the traceback if that point failed.
295 291
     pending_points : dict
296
-        A mapping of 'concurrent.Future's to points, {Future: point, ...}.
292
+        A mapping of `~concurrent.futures.Future`\s to points.
297 293
 
298 294
     Methods
299 295
     -------
... ...
@@ -303,12 +299,8 @@ class BlockingRunner(BaseRunner):
303 299
     overhead : callable
304 300
         The overhead in percent of using Adaptive. This includes the
305 301
         overhead of the executor. Essentially, this is
306
-        100 * (1 - total_elapsed_function_time / self.elapsed_time()).
302
+        ``100 * (1 - total_elapsed_function_time / self.elapsed_time())``.
307 303
 
308
-    Properties
309
-    ----------
310
-    failed : set
311
-        Set of points that failed 'retries' times.
312 304
     """
313 305
 
314 306
     def __init__(self, learner, goal, *,
... ...
@@ -330,7 +322,7 @@ class BlockingRunner(BaseRunner):
330 322
     def _run(self):
331 323
         first_completed = concurrent.FIRST_COMPLETED
332 324
 
333
-        if self.max_tasks() < 1:
325
+        if self._get_max_tasks() < 1:
334 326
             raise RuntimeError('Executor has no workers')
335 327
 
336 328
         try:
... ...
@@ -358,54 +350,54 @@ class AsyncRunner(BaseRunner):
358 350
 
359 351
     Parameters
360 352
     ----------
361
-    learner : adaptive.learner.BaseLearner
353
+    learner : `~adaptive.BaseLearner` instance
362 354
     goal : callable, optional
363 355
         The end condition for the calculation. This function must take
364 356
         the learner as its sole argument, and return True when we should
365 357
         stop requesting more points. If not provided, the runner will run
366
-        forever, or until 'self.task.cancel()' is called.
367
-    executor : concurrent.futures.Executor, distributed.Client,
368
-               or ipyparallel.Client, optional
358
+        forever, or until ``self.task.cancel()`` is called.
359
+    executor : `concurrent.futures.Executor`, `distributed.Client`,\
360
+               or `ipyparallel.Client`, optional
369 361
         The executor in which to evaluate the function to be learned.
370
-        If not provided, a new `ProcessPoolExecutor` is used on Unix systems
371
-        while on Windows a `distributed.Client` is used if `distributed` is
372
-        installed.
362
+        If not provided, a new `~concurrent.futures.ProcessPoolExecutor`
363
+        is used on Unix systems while on Windows a `distributed.Client`
364
+        is used if `distributed` is installed.
373 365
     ntasks : int, optional
374 366
         The number of concurrent function evaluations. Defaults to the number
375
-        of cores available in 'executor'.
367
+        of cores available in `executor`.
376 368
     log : bool, default: False
377 369
         If True, record the method calls made to the learner by this runner.
378 370
    shutdown_executor : Bool, default: False
379 371
         If True, shutdown the executor when the runner has completed. If
380
-        'executor' is not provided then the executor created internally
372
+        `executor` is not provided then the executor created internally
381 373
         by the runner is shut down, regardless of this parameter.
382
-    ioloop : asyncio.AbstractEventLoop, optional
374
+    ioloop : ``asyncio.AbstractEventLoop``, optional
383 375
         The ioloop in which to run the learning algorithm. If not provided,
384 376
         the default event loop is used.
385 377
     retries : int, default: 0
386
-        Maximum amount of retries of a certain point 'x' in
387
-        'learner.function(x)'. After 'retries' is reached for 'x' the
388
-        point is present in 'runner.failed'.
378
+        Maximum number of retries of a certain point ``x`` in
379
+        ``learner.function(x)``. After `retries` is reached for ``x``
380
+        the point is present in ``runner.failed``.
389 381
     raise_if_retries_exceeded : bool, default: True
390
-        Raise the error after a point 'x' failed 'retries'.
382
+        Raise the error after a point ``x`` failed `retries`.
391 383
 
392 384
     Attributes
393 385
     ----------
394
-    task : asyncio.Task
386
+    task : `asyncio.Task`
395 387
         The underlying task. May be cancelled in order to stop the runner.
396
-    learner : Learner
388
+    learner : `~adaptive.BaseLearner` instance
397 389
         The underlying learner. May be queried for its state.
398 390
     log : list or None
399 391
         Record of the method calls made to the learner, in the format
400
-        '(method_name, *args)'.
392
+        ``(method_name, *args)``.
401 393
     to_retry : dict
402
-        Mapping of {point: n_fails, ...}. When a point has failed
403
-        'runner.retries' times it is removed but will be present
404
-        in 'runner.tracebacks'.
394
+        Mapping of ``{point: n_fails, ...}``. When a point has failed
395
+        ``runner.retries`` times it is removed but will be present
396
+        in ``runner.tracebacks``.
405 397
     tracebacks : dict
406 398
         A mapping of point to the traceback if that point failed.
407 399
     pending_points : dict
408
-        A mapping of 'concurrent.Future's to points, {Future: point, ...}.
400
+        A mapping of `~concurrent.futures.Future`\s to points.
409 401
 
410 402
     Methods
411 403
     -------
... ...
@@ -415,17 +407,13 @@ class AsyncRunner(BaseRunner):
415 407
     overhead : callable
416 408
         The overhead in percent of using Adaptive. This includes the
417 409
         overhead of the executor. Essentially, this is
418
-        100 * (1 - total_elapsed_function_time / self.elapsed_time()).
410
+        ``100 * (1 - total_elapsed_function_time / self.elapsed_time())``.
419 411
 
420
-    Properties
421
-    ----------
422
-    failed : set
423
-        Set of points that failed 'retries' times.
424 412
 
425 413
     Notes
426 414
     -----
427 415
     This runner can be used when an async function (defined with
428
-    'async def') has to be learned. In this case the function will be
416
+    ``async def``) has to be learned. In this case the function will be
429 417
     run directly on the event loop (and not in the executor).
430 418
     """
431 419
 
... ...
@@ -486,7 +474,7 @@ class AsyncRunner(BaseRunner):
486 474
     def cancel(self):
487 475
         """Cancel the runner.
488 476
 
489
-        This is equivalent to calling `runner.task.cancel()`.
477
+        This is equivalent to calling ``runner.task.cancel()``.
490 478
         """
491 479
         self.task.cancel()
492 480
 
... ...
@@ -495,21 +483,21 @@ class AsyncRunner(BaseRunner):
495 483
 
496 484
         Parameters
497 485
         ----------
498
-        runner : Runner
486
+        runner : `Runner`
499 487
         plotter : function
500 488
            A function that takes the learner as an argument and returns a
501
-            holoviews object. By default learner.plot() will be called.
489
+            holoviews object. By default ``learner.plot()`` will be called.
502 490
         update_interval : int
503 491
             Number of second between the updates of the plot.
504 492
         name : hasable
505 493
             Name for the `live_plot` task in `adaptive.active_plotting_tasks`.
506
-            By default the name is `None` and if another task with the same name
507
-            already exists that other live_plot is canceled.
494
+            By default the name is None; if another task with the same name
495
+            already exists, that other `live_plot` is canceled.
508 496
 
509 497
         Returns
510 498
         -------
511
-        dm : holoviews.DynamicMap
512
-            The plot that automatically updates every update_interval.
499
+        dm : `holoviews.core.DynamicMap`
500
+            The plot that automatically updates every `update_interval`.
513 501
         """
514 502
         return live_plot(self, plotter=plotter,
515 503
                          update_interval=update_interval,
... ...
@@ -526,7 +514,7 @@ class AsyncRunner(BaseRunner):
526 514
     async def _run(self):
527 515
         first_completed = asyncio.FIRST_COMPLETED
528 516
 
529
-        if self.max_tasks() < 1:
517
+        if self._get_max_tasks() < 1:
530 518
             raise RuntimeError('Executor has no workers')
531 519
 
532 520
         try:
533 521
new file mode 100644
... ...
@@ -0,0 +1 @@
1
+build/*
0 2
new file mode 100644
... ...
@@ -0,0 +1,20 @@
1
+# Minimal makefile for Sphinx documentation
2
+#
3
+
4
+# You can set these variables from the command line.
5
+SPHINXOPTS    =
6
+SPHINXBUILD   = sphinx-build
7
+SPHINXPROJ    = adaptive
8
+SOURCEDIR     = source
9
+BUILDDIR      = build
10
+
11
+# Put it first so that "make" without argument is like "make help".
12
+help:
13
+	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
14
+
15
+.PHONY: help Makefile
16
+
17
+# Catch-all target: route all unknown targets to Sphinx using the new
18
+# "make mode" option.  $(O) is meant as a shortcut for $(SPHINXOPTS).
19
+%: Makefile
20
+	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
0 21
new file mode 100644
... ...
@@ -0,0 +1,18 @@
1
+name: adaptive
2
+
3
+channels:
4
+- conda-forge
5
+
6
+dependencies:
7
+  - python=3.6
8
+  - sortedcontainers
9
+  - scipy
10
+  - holoviews
11
+  - ipyparallel
12
+  - distributed
13
+  - ipykernel>=4.8*
14
+  - jupyter_client>=5.2.2
15
+  - ipywidgets
16
+  - scikit-optimize
17
+  - pip:
18
+      - sphinx_rtd_theme
0 19
new file mode 100644
... ...
@@ -0,0 +1,125 @@
1
+# -*- coding: utf-8 -*-
2
+#
3
+# Configuration file for the Sphinx documentation builder.
4
+#
5
+# This file does only contain a selection of the most common options. For a
6
+# full list see the documentation:
7
+# http://www.sphinx-doc.org/en/master/config
8
+
9
+# -- Path setup --------------------------------------------------------------
10
+
11
+# If extensions (or modules to document with autodoc) are in another directory,
12
+# add these directories to sys.path here. If the directory is relative to the
13
+# documentation root, use os.path.abspath to make it absolute, like shown here.
14
+#
15
+import os
16
+import sys
17
+sys.path.insert(0, os.path.abspath('../..'))
18
+
19
+import adaptive
20
+
21
+
22
+# -- Project information -----------------------------------------------------
23
+
24
+project = 'adaptive'
25
+copyright = '2018, Adaptive Authors'
26
+author = 'Adaptive Authors'
27
+
28
+# The short X.Y version
29
+version = adaptive.__version__
30
+# The full version, including alpha/beta/rc tags
31
+release = adaptive.__version__
32
+
33
+
34
+# -- General configuration ---------------------------------------------------
35
+
36
+# If your documentation needs a minimal Sphinx version, state it here.
37
+#
38
+# needs_sphinx = '1.0'
39
+
40
+# Add any Sphinx extension module names here, as strings. They can be
41
+# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
42
+# ones.
43
+extensions = [
44
+    'sphinx.ext.autodoc',
45
+    'sphinx.ext.autosummary',
46
+    'sphinx.ext.intersphinx',
47
+    'sphinx.ext.mathjax',
48
+    'sphinx.ext.viewcode',
49
+    'sphinx.ext.napoleon',
50
+]
51
+
52
+source_parsers = {}
53
+
54
+# Add any paths that contain templates here, relative to this directory.
55
+templates_path = ['_templates']
56
+
57
+# The suffix(es) of source filenames.
58
+# You can specify multiple suffix as a list of string:
59
+#
60
+source_suffix = ['.rst', '.md']
61
+#source_suffix = '.rst'
62
+
63
+# The master toctree document.
64
+master_doc = 'index'
65
+
66
+# The language for content autogenerated by Sphinx. Refer to documentation
67
+# for a list of supported languages.
68
+#
69
+# This is also used if you do content translation via gettext catalogs.
70
+# Usually you set "language" from the command line for these cases.
71
+language = None
72
+
73
+# List of patterns, relative to source directory, that match files and
74
+# directories to ignore when looking for source files.
75
+# This pattern also affects html_static_path and html_extra_path .
76
+exclude_patterns = []
77
+
78
+# The name of the Pygments (syntax highlighting) style to use.
79
+pygments_style = 'sphinx'
80
+
81
+
82
+# -- Options for HTML output -------------------------------------------------
83
+
84
+# The theme to use for HTML and HTML Help pages.  See the documentation for
85
+# a list of builtin themes.
86
+#
87
+html_theme = 'sphinx_rtd_theme'
88
+
89
+# Theme options are theme-specific and customize the look and feel of a theme
90
+# further.  For a list of options available for each theme, see the
91
+# documentation.
92
+#
93
+# html_theme_options = {}
94
+
95
+# Add any paths that contain custom static files (such as style sheets) here,
96
+# relative to this directory. They are copied after the builtin static files,
97
+# so a file named "default.css" will overwrite the builtin "default.css".
98
+html_static_path = ['_static']
99
+
100
+# Custom sidebar templates, must be a dictionary that maps document names
101
+# to template names.
102
+#
103
+# The default sidebars (for documents that don't match any pattern) are
104
+# defined by theme itself.  Builtin themes are using these templates by
105
+# default: ``['localtoc.html', 'relations.html', 'sourcelink.html',
106
+# 'searchbox.html']``.
107
+#
108
+# html_sidebars = {}
109
+
110
+
111
+# -- Options for HTMLHelp output ---------------------------------------------
112
+
113
+# Output file base name for HTML help builder.
114
+htmlhelp_basename = 'adaptivedoc'
115
+
116
+# -- Extension configuration -------------------------------------------------
117
+
118
+default_role = 'autolink'
119
+
120
+intersphinx_mapping = {'python': ('https://docs.python.org/3', None),
121
+                       'distributed': ('https://distributed.readthedocs.io/en/stable/', None),
122
+                       'holoviews': ('https://holoviews.org/', None),
123
+                       'ipyparallel': ('https://ipyparallel.readthedocs.io/en/stable/', None),
124
+                       'scipy': ('https://docs.scipy.org/doc/scipy/reference', None),
125
+}
0 126
new file mode 100644
... ...
@@ -0,0 +1,31 @@
1
+Implemented algorithms
2
+----------------------
3
+
4
+The core concept in ``adaptive`` is that of a *learner*. A *learner*
5
+samples a function at the best places in its parameter space to get
6
+maximum “information” about the function. As it evaluates the function
7
+at more and more points in the parameter space, it gets a better idea of
8
+where the best places are to sample next.
9
+
10
+Of course, what qualifies as the “best places” will depend on your
11
+application domain! ``adaptive`` makes some reasonable default choices,
12
+but the details of the adaptive sampling are completely customizable.
13
+
14
+The following learners are implemented:
15
+
16
+- `~adaptive.Learner1D`, for 1D functions ``f: ℝ → ℝ^N``,
17
+- `~adaptive.Learner2D`, for 2D functions ``f: ℝ^2 → ℝ^N``,
18
+- `~adaptive.LearnerND`, for ND functions ``f: ℝ^N → ℝ^M``,
19
+- `~adaptive.AverageLearner`, for stochastic functions where you want to
20
+  average the result over many evaluations,
21
+- `~adaptive.IntegratorLearner`, for
22
+  when you want to integrate a 1D function ``f: ℝ → ℝ``,
23
+- `~adaptive.BalancingLearner`, for when you want to run several learners at once,
24
+  selecting the “best” one each time you get more points.
25
+
26
+In addition to the learners, ``adaptive`` also provides primitives for
27
+running the sampling across several cores and even several machines,
28
+with built-in support for
29
+`concurrent.futures <https://docs.python.org/3/library/concurrent.futures.html>`_,
30
+`ipyparallel <https://ipyparallel.readthedocs.io/en/latest/>`_ and
31
+`distributed <https://distributed.readthedocs.io/en/latest/>`_.
0 32
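As a rough sketch of what this style of adaptive sampling looks like, the loop below repeatedly refines the interval whose endpoints differ most. It is a toy stand-alone illustration, not adaptive's actual implementation; with adaptive itself you would construct a `~adaptive.Learner1D` and hand it to a `~adaptive.Runner` instead:

```python
def f(x):
    return x ** 3 - x  # example function to sample

# Start from the bounds of the domain.
points = {-1.0: f(-1.0), 1.0: f(1.0)}

for _ in range(48):
    xs = sorted(points)
    # "Loss" of each interval: Euclidean length of the segment
    # connecting its endpoint values, so steep regions score higher.
    losses = [
        (((b - a) ** 2 + (points[b] - points[a]) ** 2) ** 0.5, a, b)
        for a, b in zip(xs, xs[1:])
    ]
    _, a, b = max(losses)       # "ask": pick the worst interval
    x_new = (a + b) / 2
    points[x_new] = f(x_new)    # "tell": evaluate and record
```

After the loop, the 50 sample points cluster where ``f`` varies fastest rather than being spread uniformly, which is the core idea behind all the learners listed above.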
new file mode 100644
... ...
@@ -0,0 +1,20 @@
1
+.. include:: ../../README.rst
2
+    :start-after: summary-start
3
+    :end-before: summary-end
4
+
5
+.. include:: ../../README.rst
6
+    :start-after: references-start
7
+    :end-before: references-end
8
+
9
+
10
+.. toctree::
11
+   :hidden:
12
+
13
+   self
14
+
15
+.. toctree::
16
+   :maxdepth: 2
17
+   :hidden:
18
+
19
+   rest_of_readme
20
+   reference/adaptive
0 21
new file mode 100644
... ...
@@ -0,0 +1,7 @@
1
+adaptive.AverageLearner
2
+=======================
3
+
4
+.. autoclass:: adaptive.AverageLearner
5
+    :members:
6
+    :undoc-members:
7
+    :show-inheritance:
0 8
new file mode 100644
... ...
@@ -0,0 +1,7 @@
1
+adaptive.BalancingLearner
2
+=========================
3
+
4
+.. autoclass:: adaptive.BalancingLearner
5
+    :members:
6
+    :undoc-members:
7
+    :show-inheritance:
0 8
new file mode 100644
... ...
@@ -0,0 +1,7 @@
1
+adaptive.learner.BaseLearner
2
+============================
3
+
4
+.. autoclass:: adaptive.learner.BaseLearner
5
+    :members:
6
+    :undoc-members:
7
+    :show-inheritance:
0 8
new file mode 100644
... ...
@@ -0,0 +1,16 @@
1
+adaptive.DataSaver
2
+==================
3
+
4
+The ``DataSaver`` class
5
+-----------------------
6
+
7
+.. autoclass:: adaptive.DataSaver
8
+    :members:
9
+    :undoc-members:
10
+    :show-inheritance:
11
+
12
+
13
+The ``make_datasaver`` function
14
+-------------------------------
15
+
16
+.. autofunction:: adaptive.make_datasaver
0 17
new file mode 100644
... ...
@@ -0,0 +1,7 @@
1
+adaptive.IntegratorLearner
2
+==========================
3
+
4
+.. autoclass:: adaptive.IntegratorLearner
5
+    :members:
6
+    :undoc-members:
7
+    :show-inheritance:
0 8
new file mode 100644
... ...
@@ -0,0 +1,14 @@
1
+adaptive.Learner1D
2
+==================
3
+
4
+.. autoclass:: adaptive.Learner1D
5
+    :members:
6
+    :undoc-members:
7
+    :show-inheritance:
8
+
9
+
10
+Custom loss functions
11
+---------------------
12
+.. autofunction:: adaptive.learner.learner1D.default_loss
13
+
14
+.. autofunction:: adaptive.learner.learner1D.uniform_loss
0 15
new file mode 100644
... ...
@@ -0,0 +1,25 @@
1
+adaptive.Learner2D
2
+==================
3
+
4
+.. autoclass:: adaptive.Learner2D
5
+    :members:
6
+    :undoc-members:
7
+    :show-inheritance:
8
+
9
+
10
+Custom loss functions
11
+---------------------
12
+.. autofunction:: adaptive.learner.learner2D.default_loss
13
+
14
+.. autofunction:: adaptive.learner.learner2D.minimize_triangle_surface_loss
15
+
16
+.. autofunction:: adaptive.learner.learner2D.uniform_loss
17
+
18
+.. autofunction:: adaptive.learner.learner2D.resolution_loss
19
+
20
+
21
+Helper functions
22
+----------------
23
+.. autofunction:: adaptive.learner.learner2D.areas
24
+
25
+.. autofunction:: adaptive.learner.learner2D.deviations
0 26
new file mode 100644
... ...
@@ -0,0 +1,15 @@
1
+adaptive.LearnerND
2
+==================
3
+
4
+.. autoclass:: adaptive.LearnerND
5
+    :members:
6
+    :undoc-members:
7
+    :show-inheritance:
8
+
9
+Custom loss functions
10
+---------------------
11
+.. autofunction:: adaptive.learner.learnerND.default_loss
12
+
13
+.. autofunction:: adaptive.learner.learnerND.uniform_loss
14
+
15
+.. autofunction:: adaptive.learner.learnerND.std_loss
0 16
new file mode 100644
... ...
@@ -0,0 +1,7 @@
1
+adaptive.SKOptLearner
2
+=====================
3
+
4
+.. autoclass:: adaptive.SKOptLearner
5
+    :members:
6
+    :undoc-members:
7
+    :show-inheritance:
0 8
new file mode 100644
... ...
@@ -0,0 +1,7 @@
1
+adaptive.notebook\_integration module
2
+=====================================
3
+
4
+.. automodule:: adaptive.notebook_integration
5
+    :members:
6
+    :undoc-members:
7
+    :show-inheritance:
0 8
new file mode 100644
... ...
@@ -0,0 +1,30 @@
1
+API documentation
2
+=================
3
+
4
+Learners
5
+--------
6
+
7
+.. toctree::
8
+
9
+    adaptive.learner.average_learner
10
+    adaptive.learner.balancing_learner
11
+    adaptive.learner.data_saver
12
+    adaptive.learner.integrator_learner
13
+    adaptive.learner.learner1D
14
+    adaptive.learner.learner2D
15
+    adaptive.learner.learnerND
16
+    adaptive.learner.skopt_learner
17
+
18
+Runners
19
+-------
20
+
21
+.. toctree::
22
+    adaptive.runner.Runner
23
+    adaptive.runner.AsyncRunner
24
+    adaptive.runner.BlockingRunner
25
+
26
+Other
27
+-----
28
+.. toctree::
29
+    adaptive.utils
30
+    adaptive.notebook_integration
0 31
new file mode 100644
... ...
@@ -0,0 +1,7 @@
1
+adaptive.AsyncRunner
2
+====================
3
+
4
+.. autoclass:: adaptive.runner.AsyncRunner
5
+    :members:
6
+    :undoc-members:
7
+    :show-inheritance:
0 8
new file mode 100644
... ...
@@ -0,0 +1,7 @@
1
+adaptive.runner.BaseRunner
2
+==========================
3
+
4
+.. autoclass:: adaptive.runner.BaseRunner
5
+    :members:
6
+    :undoc-members:
7
+    :show-inheritance:
0 8
new file mode 100644
... ...
@@ -0,0 +1,7 @@
1
+adaptive.BlockingRunner
2
+=======================
3
+
4
+.. autoclass:: adaptive.BlockingRunner
5
+    :members:
6
+    :undoc-members:
7
+    :show-inheritance:
0 8
new file mode 100644
... ...
@@ -0,0 +1,7 @@
1
+adaptive.Runner
2
+===============
3
+
4
+.. autoclass:: adaptive.Runner
5
+    :members:
6
+    :undoc-members:
7
+    :show-inheritance:
0 8
new file mode 100644
... ...
@@ -0,0 +1,7 @@
1
+adaptive.utils module
2
+=====================
3
+
4
+.. automodule:: adaptive.utils
5
+    :members:
6
+    :undoc-members:
7
+    :show-inheritance:
0 8
new file mode 100644
... ...
@@ -0,0 +1,4 @@
1
+.. include:: implemented-algorithms.rst
2
+
3
+.. include:: ../../README.rst
4
+    :start-after: implemented-algorithms-end
0 5
new file mode 100644
... ...
@@ -0,0 +1,2 @@
1
+conda:
2
+    file: docs/environment.yml