
update README.md

Joseph Weston authored on 11/01/2018 14:28:59 • Bas Nijholt committed on 11/01/2018 15:49:37
Showing 1 changed file
... ...
@@ -6,33 +6,44 @@
 
 **Tools for adaptive parallel evaluation of functions.**
 
-Adaptive is an [open-source](LICENSE) Python library designed to make adaptive parallel function evaluation simple. With adaptive, you can adaptively sample functions by only supplying (in general) a function and its bounds, and run it on a cluster in a few lines of code. Since `adaptive` knows the problem it is solving, it can plot the data for you (even live as the data returns) without any boilerplate.
+`adaptive` is an [open-source](LICENSE) Python library designed to make adaptive parallel function evaluation simple.
+With `adaptive` you just supply a function with its bounds, and it will be evaluated at the "best" points in parameter space.
+With just a few lines of code you can evaluate functions on a computing cluster, live-plot the data as it returns, and fine-tune the adaptive sampling algorithm.
 
-Check out the Adaptive [example notebook `learner.ipynb`](learner.ipynb) (or run it [live on Binder](https://mybinder.org/v2/gh/python-adaptive/adaptive/master?filepath=learner.ipynb)) to see examples of how to use `adaptive`.
+Check out the `adaptive` [example notebook `learner.ipynb`](learner.ipynb) (or run it [live on Binder](https://mybinder.org/v2/gh/python-adaptive/adaptive/master?filepath=learner.ipynb)) to see examples of how to use `adaptive`.
 
 
 **WARNING: `adaptive` is still in an early alpha development stage**
 
 
-## Implemented algorithms / learners
-We introduce the concept of a "learner", which is an object that knows:
-* the function it is trying to "learn"
-* the bounds of the function domain that you are interested in
-* the data that has been calculated (which could be empty at the start)
+## Implemented algorithms
+The core concept in `adaptive` is that of a "learner". A "learner" samples
+a function at the best places in its parameter space to get maximum
+"information" about the function. As it evaluates the function
+at more and more points in the parameter space, it gets a better idea of where
+the best places are to sample next.
 
-Using this information, a learner can return as many suggested points as are requested.
-Adding more data to the learner means that the newly suggested points will become better and better.
+Of course, what qualifies as the "best places" will depend on your application domain!
+`adaptive` makes some reasonable default choices, but the details of the adaptive
+sampling are completely customizable.
 
-The following learners are implemented
-* `Learner1D` which learns a 1D function `f: ℝ → ℝ^N`
-* `Learner2D` which learns a 2D function `f: ℝ^2 → ℝ^N`
-* `AverageLearner` which learns a 0D function that has a source of randomness
-* `IntegratorLearner` which learns the integral value of a 1D function `f: ℝ → ℝ` up to a specified tolerance
-* `BalancingLearner` which takes a list of learners and suggests points of the learners that improve the loss (quality) the most
+
+The following learners are implemented:
+* `Learner1D`, for 1D functions `f: ℝ → ℝ^N`
+* `Learner2D`, for 2D functions `f: ℝ^2 → ℝ^N`
+* `AverageLearner`, for stochastic functions where you want to average the result over many evaluations
+* `IntegratorLearner`, for when you want to integrate a 1D function `f: ℝ → ℝ`
+* `BalancingLearner`, for when you want to run several learners at once, selecting the "best" one each time you get more points
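+
+For example, adaptively sampling a sharply peaked 1D function takes only a few lines. This is a rough sketch based on the `Learner1D` and `Runner` objects used in [`learner.ipynb`](learner.ipynb); the exact signatures may still change while `adaptive` is in alpha:
+```python
+import adaptive
+
+adaptive.notebook_extension()  # enable plotting support in a Jupyter notebook
+
+def peak(x):
+    """A sharply peaked function that a uniform grid samples badly."""
+    return x + 0.1 / (0.1**2 + x**2)
+
+# The learner decides where inside the bounds to sample next.
+learner = adaptive.Learner1D(peak, bounds=(-1, 1))
+
+# The runner keeps requesting and evaluating points until the goal is reached.
+runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01)
+
+# In a notebook the runner works in the background; plot the data once it is done.
+learner.plot()
+```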
+
+In addition to the learners, `adaptive` also provides primitives for running
+the sampling across several cores and even several machines, with built-in support
+for [`concurrent.futures`](https://docs.python.org/3/library/concurrent.futures.html),
+[`ipyparallel`](https://ipyparallel.readthedocs.io/en/latest/)
+and [`distributed`](https://distributed.readthedocs.io/en/latest/).
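+
+For instance, handing the evaluations to a pool of worker processes is a small change to the example above; this sketch assumes the `executor` keyword of `Runner` (see [`learner.ipynb`](learner.ipynb) for the supported backends and up-to-date usage):
+```python
+from concurrent.futures import ProcessPoolExecutor
+
+import adaptive
+
+def f(x):
+    return x ** 2
+
+learner = adaptive.Learner1D(f, bounds=(-1, 1))
+
+# Evaluate the requested points in separate worker processes; an ipyparallel
+# or distributed backend can be hooked up similarly (see the notebook).
+runner = adaptive.Runner(
+    learner,
+    executor=ProcessPoolExecutor(max_workers=4),
+    goal=lambda l: l.loss() < 0.01,
+)
+```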
 
 
 ## Installation
-Adaptive works with Python 3.5 and higher on Linux, Windows, or Mac, and provides optional extensions for working with the Jupyter/IPython Notebook.
+`adaptive` works with Python 3.5 and higher on Linux, Windows, or Mac, and provides optional extensions for working with the Jupyter/IPython Notebook.
 
 The recommended way to install adaptive is using `pip`:
 ```bash
... ...
@@ -53,7 +64,7 @@ in the repository.
 ## Credits
 We would like to give credits to the following people:
 - Pedro Gonnet for his implementation of [`CQUAD`](https://www.gnu.org/software/gsl/manual/html_node/CQUAD-doubly_002dadaptive-integration.html), "Algorithm 4" as described in "Increasing the Reliability of Adaptive Quadrature Using Explicit Interpolants", P. Gonnet, ACM Transactions on Mathematical Software, 37 (3), art. no. 26, 2010.
-- Christoph Groth for his Python implementation of [`CQUAD`](https://gitlab.kwant-project.org/cwg/python-cquad) which served as inspiration for the [`IntegratorLearner`](adaptive/learner/integrator_learner.py)
+- Christoph Groth for his Python implementation of [`CQUAD`](https://gitlab.kwant-project.org/cwg/python-cquad) which served as a basis for the [`IntegratorLearner`](adaptive/learner/integrator_learner.py)
 - Pauli Virtanen for his `AdaptiveTriSampling` script (no longer available online since SciPy Central went down) which served as inspiration for the [`Learner2D`](adaptive/learner/learner2D.py)
 
 For general discussion, we have a [chat channel](https://chat.quantumtinkerer.tudelft.nl/external/channels/adaptive). If you find any bugs or have any feature suggestions please file a GitLab [issue](https://gitlab.kwant-project.org/qt/adaptive/issues/new?issue) or submit a [merge request](https://gitlab.kwant-project.org/qt/adaptive/merge_requests).