.. summary-start

|logo| adaptive
===============

|PyPI| |Conda| |Downloads| |Pipeline status| |DOI| |Binder| |Gitter|
|Documentation| |Coverage| |GitHub|

*Adaptive*: parallel active learning of mathematical functions.

``adaptive`` is an open-source Python library designed to make adaptive
parallel function evaluation simple. With ``adaptive`` you just supply a
function with its bounds, and it will be evaluated at the “best” points in
parameter space. With just a few lines of code you can evaluate functions on
a computing cluster, live-plot the data as it returns, and fine-tune the
adaptive sampling algorithm.

Run the ``adaptive`` example notebook `live on Binder <>`_ to see examples of
how to use ``adaptive`` or visit the `tutorial on Read the Docs <>`__.

.. summary-end

**WARNING: adaptive is still in a beta development stage**

.. not-in-documentation-start

Implemented algorithms
----------------------

The core concept in ``adaptive`` is that of a *learner*. A *learner* samples
a function at the best places in its parameter space to get maximum
“information” about the function. As it evaluates the function at more and
more points in the parameter space, it gets a better idea of where the best
places are to sample next.

Of course, what qualifies as the “best places” will depend on your
application domain! ``adaptive`` makes some reasonable default choices, but
the details of the adaptive sampling are completely customizable.

The following learners are implemented:

- ``Learner1D``, for 1D functions ``f: ℝ → ℝ^N``,
- ``Learner2D``, for 2D functions ``f: ℝ^2 → ℝ^N``,
- ``LearnerND``, for ND functions ``f: ℝ^N → ℝ^M``,
- ``AverageLearner``, for stochastic functions where you want to average the
  result over many evaluations,
- ``IntegratorLearner``, for when you want to integrate a 1D function
  ``f: ℝ → ℝ``,
- ``BalancingLearner``, for when you want to run several learners at once,
  selecting the “best” one each time you get more points.
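The idea of sampling where the most “information” is gained can be sketched in
a few lines of plain Python. The following toy example is **not**
``adaptive``'s actual implementation: the function ``sample_adaptively`` and
its loss (the straight-line length of each interval's segment) are
illustrative assumptions, a crude stand-in for a learner's customizable loss
function:

.. code:: python

   # Toy sketch of adaptive 1D sampling (NOT adaptive's real implementation):
   # repeatedly bisect the interval with the largest "loss", here taken to be
   # the Euclidean length of the segment connecting its endpoints.
   from math import hypot


   def peak(x, a=0.01):
       return x + a**2 / (a**2 + x**2)


   def sample_adaptively(f, bounds, n_points):
       xs = list(bounds)
       ys = [f(x) for x in xs]
       while len(xs) < n_points:
           # Find the interval whose segment (dx, dy) is longest.
           losses = [hypot(xs[i + 1] - xs[i], ys[i + 1] - ys[i])
                     for i in range(len(xs) - 1)]
           i = losses.index(max(losses))
           x_new = (xs[i] + xs[i + 1]) / 2  # bisect that interval
           xs.insert(i + 1, x_new)
           ys.insert(i + 1, f(x_new))
       return xs, ys


   xs, ys = sample_adaptively(peak, (-1, 1), 100)
   # The points cluster around x = 0, where the function varies fastest,
   # instead of being spread uniformly over [-1, 1].
   print(sum(1 for x in xs if abs(x) < 0.1), "of", len(xs), "points have |x| < 0.1")

The learners above follow the same ask/evaluate/tell cycle, but with smarter
loss functions, and the runners (below) take care of evaluating the function
in parallel.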
In addition to the learners, ``adaptive`` also provides primitives for
running the sampling across several cores and even several machines, with
built-in support for `concurrent.futures <>`_, `mpi4py <>`_, `loky <>`_,
`ipyparallel <>`_ and `distributed <>`_.

Examples
--------

Adaptively learning a 1D function (the ``gif`` below) and live-plotting the
process in a Jupyter notebook is as easy as

.. code:: python

   from adaptive import notebook_extension, Runner, Learner1D

   notebook_extension()

   def peak(x, a=0.01):
       return x + a**2 / (a**2 + x**2)

   learner = Learner1D(peak, bounds=(-1, 1))
   runner = Runner(learner, goal=lambda l: l.loss() < 0.01)
   runner.live_info()
   runner.live_plot()

.. raw:: html

   <img src="" width="20%" />
   <img src="" width="40%" />
   <img src="" width="20%" />

.. not-in-documentation-end

Installation
------------

``adaptive`` works with Python 3.6 and higher on Linux, Windows, or Mac, and
provides optional extensions for working with the Jupyter/IPython Notebook.

The recommended way to install adaptive is using ``conda``:

.. code:: bash

   conda install -c conda-forge adaptive

``adaptive`` is also available on PyPI:

.. code:: bash

   pip install adaptive[notebook]

The ``[notebook]`` above will also install the optional dependencies for
running ``adaptive`` inside a Jupyter notebook.

To use ``adaptive`` in JupyterLab, you need to install the following
labextensions:

.. code:: bash

   jupyter labextension install @jupyter-widgets/jupyterlab-manager
   jupyter labextension install @pyviz/jupyterlab_pyviz

Development
-----------

Clone the repository and run `` develop`` to add a link to the cloned repo
into your Python path:

.. code:: bash

   git clone
   cd adaptive
   python3 develop

We highly recommend using a Conda environment or a virtualenv to manage the
versions of your installed packages while working on ``adaptive``.

In order to not pollute the history with the output of the notebooks, please
set up the git filter by executing

.. code:: bash

   python

in the repository.

We implement several other checks in order to maintain a consistent code
style. We do this using `pre-commit <>`_; execute

.. code:: bash

   pre-commit install

in the repository.

Citing
------

If you used Adaptive in a scientific work, please cite it as follows.

.. code:: bib

   @misc{Nijholt2019,
     doi = {10.5281/zenodo.1182437},
     author = {Bas Nijholt and Joseph Weston and Jorn Hoofwijk and Anton Akhmerov},
     title = {\textit{Adaptive}: parallel active learning of mathematical functions},
     publisher = {Zenodo},
     year = {2019}
   }

Credits
-------

We would like to give credit to the following people:

- Pedro Gonnet for his implementation of `CQUAD <>`_, “Algorithm 4” as
  described in “Increasing the Reliability of Adaptive Quadrature Using
  Explicit Interpolants”, P. Gonnet, ACM Transactions on Mathematical
  Software, 37 (3), art. no. 26, 2010.
- Pauli Virtanen for his ``AdaptiveTriSampling`` script (no longer available
  online since SciPy Central went down), which served as inspiration for the
  `~adaptive.Learner2D`.

.. credits-end

For general discussion, we have a `Gitter chat channel <>`_. If you find any
bugs or have any feature suggestions please file a GitHub `issue <>`_ or
submit a `pull request <>`_.

.. references-start

.. |logo| image::
.. |PyPI| image::
   :target:
.. |Conda| image::
   :target:
.. |Downloads| image::
   :target:
.. |Pipeline status| image::
   :target:
.. |DOI| image::
   :target:
.. |Binder| image::
   :target:
.. |Gitter| image::
   :target:
.. |Documentation| image::
   :target:
.. |GitHub| image::
   :target:
.. |Coverage| image::
   :target:

.. references-end