remove _inline_js=False in adaptive.notebook_extension call

Bas Nijholt authored on 17/09/2020 22:19:56
Showing 1 changed file
@@ -14,7 +14,7 @@ Tutorial `~adaptive.Learner1D`
     :hide-code:

     import adaptive
-    adaptive.notebook_extension(_inline_js=False)
+    adaptive.notebook_extension()

     import numpy as np
     from functools import partial
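
For context, a minimal sketch of the resulting notebook preamble. Per the earlier "do not inline the HoloViews JS" commit, the private ``_inline_js`` flag removed here only controlled whether the HoloViews JavaScript was inlined into the built pages:

    import adaptive

    # After this commit the docs call the extension without arguments.
    # The previously used private flag looked like:
    # adaptive.notebook_extension(_inline_js=False)
    adaptive.notebook_extension()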

remove thebelab buttons

Bas Nijholt authored on 01/08/2019 16:10:21
Showing 1 changed file
@@ -10,8 +10,6 @@ Tutorial `~adaptive.Learner1D`
     The complete source code of this tutorial can be found in
     :jupyter-download:notebook:`tutorial.Learner1D`

-.. thebe-button:: Run the code live inside the documentation!
-
 .. jupyter-execute::
     :hide-code:


add thebelab activation buttons

Bas Nijholt authored on 10/07/2019 19:30:06
Showing 1 changed file
@@ -10,6 +10,8 @@ Tutorial `~adaptive.Learner1D`
     The complete source code of this tutorial can be found in
     :jupyter-download:notebook:`tutorial.Learner1D`

+.. thebe-button:: Run the code live inside the documentation!
+
 .. jupyter-execute::
     :hide-code:


do not inline the HoloViews JS

Bas Nijholt authored on 26/03/2019 12:44:31
Showing 1 changed file
@@ -14,7 +14,7 @@ Tutorial `~adaptive.Learner1D`
     :hide-code:

     import adaptive
-    adaptive.notebook_extension()
+    adaptive.notebook_extension(_inline_js=False)

     import numpy as np
     from functools import partial

change loss function signature

Jorn Hoofwijk authored on 19/11/2018 16:04:24 • Bas Nijholt committed on 22/11/2018 11:40:31
Showing 1 changed file
@@ -150,10 +150,10 @@ by specifying ``loss_per_interval``.

 .. jupyter-execute::

-    from adaptive.learner.learner1D import (get_curvature_loss,
+    from adaptive.learner.learner1D import (curvature_loss_function,
                                             uniform_loss,
                                             default_loss)
-    curvature_loss = get_curvature_loss()
+    curvature_loss = curvature_loss_function()
     learner = adaptive.Learner1D(f, bounds=(-1, 1), loss_per_interval=curvature_loss)
     runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01)

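For downstream code affected by this rename, a hedged migration sketch. It assumes only the factory name changed (``get_curvature_loss`` became ``curvature_loss_function``) and that the returned loss is still passed via ``loss_per_interval``; the test function here is illustrative:

    import adaptive
    # New import after this commit; before it, the factory was:
    # from adaptive.learner.learner1D import get_curvature_loss
    from adaptive.learner.learner1D import curvature_loss_function

    def f(x):
        a = 0.01
        return x + a**2 / (a**2 + x**2)  # smooth background, sharp peak at 0

    curvature_loss = curvature_loss_function()
    learner = adaptive.Learner1D(f, bounds=(-1, 1), loss_per_interval=curvature_loss)
    # adaptive.runner.simple is the blocking, non-parallel runner these docs
    # use elsewhere; it needs no executor or event loop.
    adaptive.runner.simple(learner, goal=lambda l: l.loss() < 0.05)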

added curvature docs

Jorn Hoofwijk authored on 31/10/2018 13:31:03 • Bas Nijholt committed on 22/11/2018 11:34:02
Showing 1 changed file

@@ -137,3 +137,61 @@ functions:
 .. jupyter-execute::

     runner.live_plot(update_interval=0.1)
+
+
+Looking at curvature
+....................
+
+By default ``adaptive`` samples more points where the (normalized)
+Euclidean distance between neighboring points is large.
+You may achieve better results by sampling more points in regions with
+high curvature. To do this, tell the learner to look at the curvature
+by specifying ``loss_per_interval``.
+
+.. jupyter-execute::
+
+    from adaptive.learner.learner1D import (get_curvature_loss,
+                                            uniform_loss,
+                                            default_loss)
+    curvature_loss = get_curvature_loss()
+    learner = adaptive.Learner1D(f, bounds=(-1, 1), loss_per_interval=curvature_loss)
+    runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01)
+
+.. jupyter-execute::
+    :hide-code:
+
+    await runner.task  # This is not needed in a notebook environment!
+
+.. jupyter-execute::
+
+    runner.live_info()
+
+.. jupyter-execute::
+
+    runner.live_plot(update_interval=0.1)
+
+The plot below compares homogeneous sampling, a loss using only one interval,
+and a loss that includes the nearest neighboring intervals, with 100 points each:
+
+.. jupyter-execute::
+
+    def sin_exp(x):
+        from math import exp, sin
+        return sin(15 * x) * exp(-x**2 * 2)
+
+    learner_h = adaptive.Learner1D(sin_exp, (-1, 1), loss_per_interval=uniform_loss)
+    learner_1 = adaptive.Learner1D(sin_exp, (-1, 1), loss_per_interval=default_loss)
+    learner_2 = adaptive.Learner1D(sin_exp, (-1, 1), loss_per_interval=curvature_loss)
+
+    npoints_goal = lambda l: l.npoints >= 100
+    # adaptive.runner.simple is a non-parallel, blocking runner.
+    adaptive.runner.simple(learner_h, goal=npoints_goal)
+    adaptive.runner.simple(learner_1, goal=npoints_goal)
+    adaptive.runner.simple(learner_2, goal=npoints_goal)
+
+    (learner_h.plot().relabel('homogeneous')
+     + learner_1.plot().relabel('euclidean loss')
+     + learner_2.plot().relabel('curvature loss')).cols(2)
+
+More info about using custom loss functions can be found
+in :ref:`Custom adaptive logic for 1D and 2D`.
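
The new section closes by pointing to custom loss functions; below is a hedged sketch of one. It assumes the interval-loss signature ``loss(xs, ys)`` used by ``uniform_loss`` and ``default_loss``, where ``xs`` and ``ys`` hold the endpoints and values of a single interval; the center-weighting scheme itself is purely hypothetical:

    import numpy as np
    import adaptive

    def center_weighted_loss(xs, ys):
        # Distance-based loss, up-weighted near x = 0 so intervals around
        # the center of the domain are refined first (hypothetical example).
        dx = xs[1] - xs[0]
        dy = ys[1] - ys[0]
        midpoint = (xs[0] + xs[1]) / 2
        weight = 1 + np.exp(-10 * midpoint**2)
        return weight * np.hypot(dx, dy)

    def sin_exp(x):
        from math import exp, sin
        return sin(15 * x) * exp(-x**2 * 2)

    learner = adaptive.Learner1D(sin_exp, bounds=(-1, 1),
                                 loss_per_interval=center_weighted_loss)
    adaptive.runner.simple(learner, goal=lambda l: l.npoints >= 100)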

fix live_info css

Bas Nijholt authored on 19/10/2018 13:24:42
Showing 1 changed file
@@ -39,7 +39,7 @@ background with a sharp peak at a random location:

         a = 0.01
         if wait:
-            sleep(random())
+            sleep(random() / 10)
         return x + a**2 / (a**2 + (x - offset)**2)

 We start by initializing a 1D “learner”, which will suggest points to

change "execute" into "jupyter-execute"

Bas Nijholt authored on 18/10/2018 18:21:31
Showing 1 changed file

@@ -8,11 +8,10 @@ Tutorial `~adaptive.Learner1D`

 .. seealso::
     The complete source code of this tutorial can be found in
-    :jupyter-download:notebook:`Learner1D`
+    :jupyter-download:notebook:`tutorial.Learner1D`

-.. execute::
+.. jupyter-execute::
     :hide-code:
-    :new-notebook: Learner1D

     import adaptive
     adaptive.notebook_extension()

@@ -30,7 +29,7 @@ We start with the most common use-case: sampling a 1D function
 We will use the following function, which is a smooth (linear)
 background with a sharp peak at a random location:

-.. execute::
+.. jupyter-execute::

     offset = random.uniform(-0.5, 0.5)


@@ -47,7 +46,7 @@ We start by initializing a 1D “learner”, which will suggest points to
 evaluate, and adapt its suggestions as more and more points are
 evaluated.

-.. execute::
+.. jupyter-execute::

     learner = adaptive.Learner1D(f, bounds=(-1, 1))


@@ -61,13 +60,13 @@ On Windows systems the runner will try to use a `distributed.Client`
 if `distributed` is installed. A `~concurrent.futures.ProcessPoolExecutor`
 cannot be used on Windows for technical reasons.

-.. execute::
+.. jupyter-execute::

     # The end condition is when the "loss" is less than 0.01. In the context of the
     # 1D learner this means that we will resolve features in 'func' with width 0.01 or wider.
     runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01)

-.. execute::
+.. jupyter-execute::
     :hide-code:

     await runner.task  # This is not needed in a notebook environment!

@@ -76,23 +75,23 @@ When instantiated in a Jupyter notebook the runner does its job in the
 background and does not block the IPython kernel. We can use this to
 create a plot that updates as new data arrives:

-.. execute::
+.. jupyter-execute::

     runner.live_info()

-.. execute::
+.. jupyter-execute::

     runner.live_plot(update_interval=0.1)

 We can now compare the adaptive sampling to a homogeneous sampling with
 the same number of points:

-.. execute::
+.. jupyter-execute::

     if not runner.task.done():
         raise RuntimeError('Wait for the runner to finish before executing the cells below!')

-.. execute::
+.. jupyter-execute::

     learner2 = adaptive.Learner1D(f, bounds=learner.bounds)


@@ -107,7 +106,7 @@ vector output: ``f:ℝ → ℝ^N``

 Sometimes you may want to learn a function with vector output:

-.. execute::
+.. jupyter-execute::

     random.seed(0)
     offsets = [random.uniform(-0.8, 0.8) for _ in range(3)]

@@ -121,20 +120,20 @@ Sometimes you may want to learn a function with vector output:
 ``adaptive`` has you covered! The ``Learner1D`` can be used for such
 functions:

-.. execute::
+.. jupyter-execute::

     learner = adaptive.Learner1D(f_levels, bounds=(-1, 1))
     runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01)

-.. execute::
+.. jupyter-execute::
     :hide-code:

     await runner.task  # This is not needed in a notebook environment!

-.. execute::
+.. jupyter-execute::

     runner.live_info()

-.. execute::
+.. jupyter-execute::

     runner.live_plot(update_interval=0.1)

add tutorials

Bas Nijholt authored on 17/10/2018 13:30:10
Showing 1 changed file

new file mode 100644
@@ -0,0 +1,140 @@
+Tutorial `~adaptive.Learner1D`
+------------------------------
+
+.. note::
+   Because this documentation consists of static html, the ``live_plot``
+   and ``live_info`` widgets are not live. Download the notebook
+   in order to see the real behaviour.
+
+.. seealso::
+    The complete source code of this tutorial can be found in
+    :jupyter-download:notebook:`Learner1D`
+
+.. execute::
+    :hide-code:
+    :new-notebook: Learner1D
+
+    import adaptive
+    adaptive.notebook_extension()
+
+    import numpy as np
+    from functools import partial
+    import random
+
+scalar output: ``f:ℝ → ℝ``
+..........................
+
+We start with the most common use-case: sampling a 1D function
+:math:`\ f: ℝ → ℝ`.
+
+We will use the following function, which is a smooth (linear)
+background with a sharp peak at a random location:
+
+.. execute::
+
+    offset = random.uniform(-0.5, 0.5)
+
+    def f(x, offset=offset, wait=True):
+        from time import sleep
+        from random import random
+
+        a = 0.01
+        if wait:
+            sleep(random())
+        return x + a**2 / (a**2 + (x - offset)**2)
+
+We start by initializing a 1D “learner”, which will suggest points to
+evaluate, and adapt its suggestions as more and more points are
+evaluated.
+
+.. execute::
+
+    learner = adaptive.Learner1D(f, bounds=(-1, 1))
+
+Next we create a “runner” that will request points from the learner and
+evaluate ‘f’ on them.
+
+By default, on Unix-like systems, the runner evaluates the points in
+parallel using local processes (`concurrent.futures.ProcessPoolExecutor`).
+
+On Windows systems the runner will try to use a `distributed.Client`
+if `distributed` is installed. A `~concurrent.futures.ProcessPoolExecutor`
+cannot be used on Windows for technical reasons.
+
+.. execute::
+
+    # The end condition is when the "loss" is less than 0.01. In the context of the
+    # 1D learner this means that we will resolve features in 'func' with width 0.01 or wider.
+    runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01)
+
+.. execute::
+    :hide-code:
+
+    await runner.task  # This is not needed in a notebook environment!
+
+When instantiated in a Jupyter notebook the runner does its job in the
+background and does not block the IPython kernel. We can use this to
+create a plot that updates as new data arrives:
+
+.. execute::
+
+    runner.live_info()
+
+.. execute::
+
+    runner.live_plot(update_interval=0.1)
+
+We can now compare the adaptive sampling to a homogeneous sampling with
+the same number of points:
+
+.. execute::
+
+    if not runner.task.done():
+        raise RuntimeError('Wait for the runner to finish before executing the cells below!')
+
+.. execute::
+
+    learner2 = adaptive.Learner1D(f, bounds=learner.bounds)
+
+    xs = np.linspace(*learner.bounds, len(learner.data))
+    learner2.tell_many(xs, map(partial(f, wait=False), xs))
+
+    learner.plot() + learner2.plot()
+
+
+vector output: ``f:ℝ → ℝ^N``
+............................
+
+Sometimes you may want to learn a function with vector output:
+
+.. execute::
+
+    random.seed(0)
+    offsets = [random.uniform(-0.8, 0.8) for _ in range(3)]
+
+    # sharp peaks at random locations in the domain
+    def f_levels(x, offsets=offsets):
+        a = 0.01
+        return np.array([offset + x + a**2 / (a**2 + (x - offset)**2)
+                         for offset in offsets])
+
+``adaptive`` has you covered! The ``Learner1D`` can be used for such
+functions:
+
+.. execute::
+
+    learner = adaptive.Learner1D(f_levels, bounds=(-1, 1))
+    runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01)
+
+.. execute::
+    :hide-code:
+
+    await runner.task  # This is not needed in a notebook environment!
+
+.. execute::
+
+    runner.live_info()
+
+.. execute::
+
+    runner.live_plot(update_interval=0.1)
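
For readers running this first version of the tutorial outside Jupyter, a minimal sketch of the same workflow with the blocking ``adaptive.runner.simple`` instead of ``adaptive.Runner`` (no executor and no ``await runner.task`` cell needed; the artificial ``sleep`` is dropped):

    import random

    import numpy as np
    import adaptive

    offset = random.uniform(-0.5, 0.5)

    def f(x, offset=offset):
        a = 0.01
        return x + a**2 / (a**2 + (x - offset)**2)

    learner = adaptive.Learner1D(f, bounds=(-1, 1))
    # Blocks until the goal is reached; works in a plain Python script.
    adaptive.runner.simple(learner, goal=lambda l: l.loss() < 0.01)

    # Homogeneous comparison with the same number of points.
    learner2 = adaptive.Learner1D(f, bounds=learner.bounds)
    xs = np.linspace(*learner.bounds, len(learner.data))
    learner2.tell_many(xs, map(f, xs))
    print(learner.npoints, "adaptively chosen points")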