remove _inline_js=False in adaptive.notebook_extension call

Bas Nijholt authored on 17/09/2020 22:19:56

@@ -14,7 +14,7 @@ Tutorial `~adaptive.LearnerND`
     :hide-code:
 
     import adaptive
-    adaptive.notebook_extension(_inline_js=False)
+    adaptive.notebook_extension()
 
     import holoviews as hv
     import numpy as np
remove thebelab buttons

Bas Nijholt authored on 01/08/2019 16:10:21

@@ -10,8 +10,6 @@ Tutorial `~adaptive.LearnerND`
     The complete source code of this tutorial can be found in
     :jupyter-download:notebook:`tutorial.LearnerND`
 
-.. thebe-button:: Run the code live inside the documentation!
-
 .. jupyter-execute::
     :hide-code:
 
add thebelab activation buttons

Bas Nijholt authored on 10/07/2019 19:30:06

@@ -10,6 +10,8 @@ Tutorial `~adaptive.LearnerND`
     The complete source code of this tutorial can be found in
     :jupyter-download:notebook:`tutorial.LearnerND`
 
+.. thebe-button:: Run the code live inside the documentation!
+
 .. jupyter-execute::
     :hide-code:
 
change the required loss to 1e-3 because the loss definition changed

For the LearnerND

Bas Nijholt authored on 21/03/2019 11:28:05

@@ -34,14 +34,13 @@ of the learner drops quickly with increasing number of dimensions.
 
 .. jupyter-execute::
 
-    # this step takes a lot of time, it will finish at about 3300 points, which can take up to 6 minutes
     def sphere(xyz):
         x, y, z = xyz
         a = 0.4
         return x + z**2 + np.exp(-(x**2 + y**2 + z**2 - 0.75**2)**2/a**4)
 
     learner = adaptive.LearnerND(sphere, bounds=[(-1, 1), (-1, 1), (-1, 1)])
-    runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01)
+    runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 1e-3)
 
 .. jupyter-execute::
     :hide-code:
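The tightened loss goal only changes when the runner stops; the `sphere` function in the diff is untouched. It can be checked on its own with the standard library (a sketch using `math.exp` in place of `np.exp`, so neither adaptive nor numpy is needed):

```python
import math

def sphere(xyz):
    # same expression as in the tutorial, with math.exp instead of np.exp
    x, y, z = xyz
    a = 0.4
    return x + z**2 + math.exp(-(x**2 + y**2 + z**2 - 0.75**2)**2 / a**4)

# on the shell x**2 + y**2 + z**2 == 0.75**2 the Gaussian term peaks at 1,
# so sphere((0.75, 0, 0)) == 0.75 + 0 + 1 == 1.75
print(sphere((0.75, 0.0, 0.0)))  # 1.75
```

The narrow Gaussian ridge on that shell is what makes uniform sampling wasteful and an adaptive loss goal attractive here.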
do not inline the HoloViews JS

Bas Nijholt authored on 26/03/2019 12:44:31

@@ -14,7 +14,7 @@ Tutorial `~adaptive.LearnerND`
     :hide-code:
 
     import adaptive
-    adaptive.notebook_extension()
+    adaptive.notebook_extension(_inline_js=False)
 
     import holoviews as hv
     import numpy as np
fix several documentation mistakes

* add 'curvature_loss_function' to the 'tutorial.custom_loss.rst'
* fix header styling
* fix doc-string

Bas Nijholt authored on 22/11/2018 12:36:22

@@ -92,7 +92,7 @@ lines. However, as always, when you sample more points the graph will
 become gradually smoother.
 
 Using any convex shape as domain
---------------------------------
+................................
 
 Suppose you do not simply want to sample your function on a square (in 2D) or in
 a cube (in 3D). The LearnerND supports using a `scipy.spatial.ConvexHull` as
add an example of using a ConvexHull to the tutorial

Jorn Hoofwijk authored on 07/11/2018 23:40:20 • Bas Nijholt committed on 07/11/2018 23:43:05

@@ -90,3 +90,37 @@ is a result of the fact that the learner chooses points in 3 dimensions
 and the simplices are not in the same face as we try to interpolate our
 lines. However, as always, when you sample more points the graph will
 become gradually smoother.
+
+Using any convex shape as domain
+--------------------------------
+
+Suppose you do not simply want to sample your function on a square (in 2D) or in
+a cube (in 3D). The LearnerND supports using a `scipy.spatial.ConvexHull` as
+your domain. This is best illustrated in the following example.
+
+Suppose you would like to sample you function in a cube split in half diagonally.
+You could use the following code as an example:
+
+.. jupyter-execute::
+
+    import scipy
+
+    def f(xyz):
+        x, y, z = xyz
+        return x**4 + y**4 + z**4 - (x**2+y**2+z**2)**2
+
+    # set the bound points, you can change this to be any shape
+    b = [(-1, -1, -1),
+         (-1,  1, -1),
+         (-1, -1,  1),
+         (-1,  1,  1),
+         ( 1,  1, -1),
+         ( 1, -1, -1)]
+
+    # you have to convert the points into a scipy.spatial.ConvexHull
+    hull = scipy.spatial.ConvexHull(b)
+
+    learner = adaptive.LearnerND(f, hull)
+    adaptive.BlockingRunner(learner, goal=lambda l: l.npoints > 2000)
+
+    learner.plot_isosurface(-0.5)
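The six corner points in the added example describe a triangular prism: the 2×2×2 cube cut in half along the plane x + z = 0. This can be verified with scipy alone, without running the learner (a sketch assuming only `scipy.spatial.ConvexHull`; `hull.volume` is the qhull-computed volume of the hull):

```python
import scipy.spatial

# the bound points from the tutorial: a cube split in half diagonally
b = [(-1, -1, -1),
     (-1,  1, -1),
     (-1, -1,  1),
     (-1,  1,  1),
     ( 1,  1, -1),
     ( 1, -1, -1)]

hull = scipy.spatial.ConvexHull(b)

# half of the 2x2x2 cube, so the volume should be 4
print(hull.volume)
```

The same `hull` object is what `adaptive.LearnerND(f, hull)` accepts in place of rectangular `bounds`.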
change "execute" into "jupyter-execute"

Bas Nijholt authored on 18/10/2018 18:21:31

@@ -8,11 +8,10 @@ Tutorial `~adaptive.LearnerND`
 
 .. seealso::
     The complete source code of this tutorial can be found in
-    :jupyter-download:notebook:`LearnerND`
+    :jupyter-download:notebook:`tutorial.LearnerND`
 
-.. execute::
+.. jupyter-execute::
     :hide-code:
-    :new-notebook: LearnerND
 
     import adaptive
     adaptive.notebook_extension()
@@ -33,7 +32,7 @@ Do keep in mind the speed and
 `effectiveness <https://en.wikipedia.org/wiki/Curse_of_dimensionality>`__
 of the learner drops quickly with increasing number of dimensions.
 
-.. execute::
+.. jupyter-execute::
 
     # this step takes a lot of time, it will finish at about 3300 points, which can take up to 6 minutes
     def sphere(xyz):
@@ -44,18 +43,18 @@ of the learner drops quickly with increasing number of dimensions.
     learner = adaptive.LearnerND(sphere, bounds=[(-1, 1), (-1, 1), (-1, 1)])
     runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01)
 
-.. execute::
+.. jupyter-execute::
     :hide-code:
 
     await runner.task  # This is not needed in a notebook environment!
 
-.. execute::
+.. jupyter-execute::
 
     runner.live_info()
 
 Let’s plot 2D slices of the 3D function
 
-.. execute::
+.. jupyter-execute::
 
     def plot_cut(x, direction, learner=learner):
         cut_mapping = {'XYZ'.index(direction): x}
@@ -70,7 +69,7 @@ Let’s plot 2D slices of the 3D function
 
 Or we can plot 1D slices
 
-.. execute::
+.. jupyter-execute::
 
     %%opts Path {+framewise}
     def plot_cut(x1, x2, directions, learner=learner):
add tutorials

Bas Nijholt authored on 17/10/2018 13:30:10

new file mode 100644
@@ -0,0 +1,93 @@
+Tutorial `~adaptive.LearnerND`
+------------------------------
+
+.. note::
+   Because this documentation consists of static html, the ``live_plot``
+   and ``live_info`` widget is not live. Download the notebook
+   in order to see the real behaviour.
+
+.. seealso::
+    The complete source code of this tutorial can be found in
+    :jupyter-download:notebook:`LearnerND`
+
+.. execute::
+    :hide-code:
+    :new-notebook: LearnerND
+
+    import adaptive
+    adaptive.notebook_extension()
+
+    import holoviews as hv
+    import numpy as np
+
+    def dynamicmap_to_holomap(dm):
+        # XXX: change when https://github.com/ioam/holoviews/issues/3085
+        # is fixed.
+        vals = {d.name: d.values for d in dm.dimensions() if d.values}
+        return hv.HoloMap(dm.select(**vals))
+
+Besides 1 and 2 dimensional functions, we can also learn N-D functions:
+:math:`\ f: ℝ^N → ℝ^M, N \ge 2, M \ge 1`.
+
+Do keep in mind the speed and
+`effectiveness <https://en.wikipedia.org/wiki/Curse_of_dimensionality>`__
+of the learner drops quickly with increasing number of dimensions.
+
+.. execute::
+
+    # this step takes a lot of time, it will finish at about 3300 points, which can take up to 6 minutes
+    def sphere(xyz):
+        x, y, z = xyz
+        a = 0.4
+        return x + z**2 + np.exp(-(x**2 + y**2 + z**2 - 0.75**2)**2/a**4)
+
+    learner = adaptive.LearnerND(sphere, bounds=[(-1, 1), (-1, 1), (-1, 1)])
+    runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01)
+
+.. execute::
+    :hide-code:
+
+    await runner.task  # This is not needed in a notebook environment!
+
+.. execute::
+
+    runner.live_info()
+
+Let’s plot 2D slices of the 3D function
+
+.. execute::
+
+    def plot_cut(x, direction, learner=learner):
+        cut_mapping = {'XYZ'.index(direction): x}
+        return learner.plot_slice(cut_mapping, n=100)
+
+    dm = hv.DynamicMap(plot_cut, kdims=['val', 'direction'])
+    dm = dm.redim.values(val=np.linspace(-1, 1, 11), direction=list('XYZ'))
+
+    # In a notebook one would run `dm` however we want a statically generated
+    # html, so we use a HoloMap to display it here
+    dynamicmap_to_holomap(dm)
+
+Or we can plot 1D slices
+
+.. execute::
+
+    %%opts Path {+framewise}
+    def plot_cut(x1, x2, directions, learner=learner):
+        cut_mapping = {'xyz'.index(d): x for d, x in zip(directions, [x1, x2])}
+        return learner.plot_slice(cut_mapping)
+
+    dm = hv.DynamicMap(plot_cut, kdims=['v1', 'v2', 'directions'])
+    dm = dm.redim.values(v1=np.linspace(-1, 1, 6),
+                    v2=np.linspace(-1, 1, 6),
+                    directions=['xy', 'xz', 'yz'])
+
+    # In a notebook one would run `dm` however we want a statically generated
+    # html, so we use a HoloMap to display it here
+    dynamicmap_to_holomap(dm)
+
+The plots show some wobbles while the original function was smooth, this
+is a result of the fact that the learner chooses points in 3 dimensions
+and the simplices are not in the same face as we try to interpolate our
+lines. However, as always, when you sample more points the graph will
+become gradually smoother.
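The `cut_mapping` idiom used by both `plot_cut` helpers in the file above maps a dimension index to a fixed coordinate, which is what `learner.plot_slice` consumes. The dict construction itself is plain Python and can be sketched without holoviews or adaptive (the helper names here are hypothetical, for illustration only):

```python
# build the {dimension_index: value} mapping that plot_slice expects
def make_2d_cut(direction, x):
    # 'XYZ'.index turns an axis letter into a dimension index: X->0, Y->1, Z->2
    return {'XYZ'.index(direction): x}

# the 1D-slice variant fixes two of the three coordinates at once
def make_1d_cut(directions, x1, x2):
    return {'xyz'.index(d): x for d, x in zip(directions, (x1, x2))}

print(make_2d_cut('Y', 0.5))           # {1: 0.5}
print(make_1d_cut('xz', -0.2, 0.8))    # {0: -0.2, 2: 0.8}
```

Fixing one coordinate leaves a 2D slice to plot; fixing two leaves a 1D path, which is why the tutorial sweeps `directions` over `['xy', 'xz', 'yz']`.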