
remove _inline_js=False in adaptive.notebook_extension call

Bas Nijholt authored on 17/09/2020 22:19:56
Showing 1 changed file
@@ -14,7 +14,7 @@ Custom adaptive logic for 1D and 2D
     :hide-code:

     import adaptive
-    adaptive.notebook_extension(_inline_js=False)
+    adaptive.notebook_extension()

     # Import modules that are used in multiple cells
     import numpy as np

avoid ZeroDivisionError

Bas Nijholt authored on 08/04/2020 22:52:37
Showing 1 changed file
@@ -68,6 +68,8 @@ simple (but naive) strategy is to *uniformly* sample the domain:
         return dx

     def f_divergent_1d(x):
+        if x == 0:
+            return np.inf
         return 1 / x**2

     learner = adaptive.Learner1D(f_divergent_1d, (-1, 1), loss_per_interval=uniform_sampling_1d)
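The guard is needed because Learner1D with bounds (-1, 1) requests the midpoint x = 0 early on, where plain-Python 1 / x**2 raises a ZeroDivisionError; returning np.inf instead lets the run continue. A minimal standalone sketch of the guarded function (only NumPy assumed):

    import numpy as np

    def f_divergent_1d(x):
        # guard the singularity so the learner gets a value instead of an exception
        if x == 0:
            return np.inf
        return 1 / x**2

    print(f_divergent_1d(0))    # inf, no ZeroDivisionError
    print(f_divergent_1d(0.1))  # roughly 100.0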

remove thebelab buttons

Bas Nijholt authored on 01/08/2019 16:10:21
Showing 1 changed file
@@ -10,8 +10,6 @@ Custom adaptive logic for 1D and 2D
     The complete source code of this tutorial can be found in
     :jupyter-download:notebook:`tutorial.custom-loss`

-.. thebe-button:: Run the code live inside the documentation!
-
 .. jupyter-execute::
     :hide-code:


add thebelab activation buttons

Bas Nijholt authored on 10/07/2019 19:30:06
Showing 1 changed file
@@ -10,6 +10,8 @@ Custom adaptive logic for 1D and 2D
     The complete source code of this tutorial can be found in
     :jupyter-download:notebook:`tutorial.custom-loss`

+.. thebe-button:: Run the code live inside the documentation!
+
 .. jupyter-execute::
     :hide-code:


do not inline the HoloViews JS

Bas Nijholt authored on 26/03/2019 12:44:31
Showing 1 changed file
@@ -14,7 +14,7 @@ Custom adaptive logic for 1D and 2D
     :hide-code:

     import adaptive
-    adaptive.notebook_extension()
+    adaptive.notebook_extension(_inline_js=False)

     # Import modules that are used in multiple cells
     import numpy as np

fix several documentation mistakes

* add 'curvature_loss_function' to the 'tutorial.custom_loss.rst'
* fix header styling
* fix doc-string

Bas Nijholt authored on 22/11/2018 12:36:22
Showing 1 changed file
@@ -46,11 +46,14 @@ tl;dr, one can use the following *loss functions* that

 + `adaptive.learner.learner1D.default_loss`
 + `adaptive.learner.learner1D.uniform_loss`
++ `adaptive.learner.learner1D.curvature_loss_function`
 + `adaptive.learner.learner2D.default_loss`
 + `adaptive.learner.learner2D.uniform_loss`
 + `adaptive.learner.learner2D.minimize_triangle_surface_loss`
 + `adaptive.learner.learner2D.resolution_loss_function`

+Whenever a loss function has `_function` appended to its name, it is a factory function
+that returns the loss function with certain settings.

 Uniform sampling
 ~~~~~~~~~~~~~~~~
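To illustrate the naming convention this commit documents: a loss whose name ends in _function is called once, and the function it returns is what gets passed to the learner. A hedged sketch using the curvature_loss_function listed above (default factory settings; the test function and goal threshold are only illustrative):

    import adaptive
    import numpy as np
    from adaptive.learner.learner1D import curvature_loss_function

    loss = curvature_loss_function()  # factory call: settings go in, a loss function comes out

    def f(x):
        return np.tanh(20 * x)  # a sharp step that a curvature-based loss resolves well

    learner = adaptive.Learner1D(f, bounds=(-1, 1), loss_per_interval=loss)
    adaptive.runner.simple(learner, goal=lambda l: l.loss() < 0.01)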

change resolution_loss to a factory function

Bas Nijholt authored on 22/11/2018 11:48:25
Showing 1 changed file
@@ -49,7 +49,7 @@ tl;dr, one can use the following *loss functions* that
 + `adaptive.learner.learner2D.default_loss`
 + `adaptive.learner.learner2D.uniform_loss`
 + `adaptive.learner.learner2D.minimize_triangle_surface_loss`
-+ `adaptive.learner.learner2D.resolution_loss`
++ `adaptive.learner.learner2D.resolution_loss_function`


 Uniform sampling
@@ -132,34 +132,23 @@ small (0 loss).

     %%opts EdgePaths (color='w') Image [logz=True colorbar=True]

-    def resolution_loss(ip, min_distance=0, max_distance=1):
+    def resolution_loss_function(min_distance=0, max_distance=1):
         """min_distance and max_distance should be in between 0 and 1
         because the total area is normalized to 1."""
+        def resolution_loss(ip):
+            from adaptive.learner.learner2D import default_loss, areas
+            loss = default_loss(ip)

-        from adaptive.learner.learner2D import areas, deviations
+            A = areas(ip)
+            # Setting areas with a small area to zero such that they won't be chosen again
+            loss[A < min_distance**2] = 0

-        A = areas(ip)
-
-        # 'deviations' returns an array of shape '(n, len(ip))', where
-        # 'n' is the  is the dimension of the output of the learned function
-        # In this case we know that the learned function returns a scalar,
-        # so 'deviations' returns an array of shape '(1, len(ip))'.
-        # It represents the deviation of the function value from a linear estimate
-        # over each triangular subdomain.
-        dev = deviations(ip)[0]
-
-        # we add terms of the same dimension: dev == [distance], A == [distance**2]
-        loss = np.sqrt(A) * dev + A
-
-        # Setting areas with a small area to zero such that they won't be chosen again
-        loss[A < min_distance**2] = 0
-
-        # Setting triangles that have a size larger than max_distance to infinite loss
-        loss[A > max_distance**2] = np.inf
-
-        return loss
+            # Setting triangles that have a size larger than max_distance to infinite loss
+            loss[A > max_distance**2] = np.inf

-    loss = partial(resolution_loss, min_distance=0.01)
+            return loss
+        return resolution_loss
+    loss = resolution_loss_function(min_distance=0.01)

     learner = adaptive.Learner2D(f_divergent_2d, [(-1, 1), (-1, 1)], loss_per_triangle=loss)
     runner = adaptive.BlockingRunner(learner, goal=lambda l: l.loss() < 0.02)
@@ -169,4 +158,4 @@ Awesome! We zoom in on the singularity, but not at the expense of
 sampling the rest of the domain a reasonable amount.

 The above strategy is available as
-`adaptive.learner.learner2D.resolution_loss`.
+`adaptive.learner.learner2D.resolution_loss_function`.
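The reason for the factory is that Learner2D calls loss_per_triangle with a single argument (the interpolator), so min_distance and max_distance are baked in by a closure rather than bound with functools.partial as before. A condensed sketch of the pattern, mirroring the code in the diff (it assumes adaptive.learner.learner2D provides areas and default_loss, as used above):

    import numpy as np
    from adaptive.learner.learner2D import areas, default_loss

    def resolution_loss_function(min_distance=0, max_distance=1):
        """Return a loss function with the resolution limits baked in."""
        def resolution_loss(ip):
            loss = default_loss(ip)
            A = areas(ip)
            loss[A < min_distance**2] = 0       # triangles that are already tiny are never refined
            loss[A > max_distance**2] = np.inf  # triangles that are too large are refined first
            return loss
        return resolution_loss

    # configure once, then hand the plain one-argument function to the learner
    loss = resolution_loss_function(min_distance=0.01)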

update 'uniform_sampling_1d' example in the docs

Bas Nijholt authored on 22/11/2018 11:57:43
Showing 1 changed file
@@ -60,11 +60,8 @@ simple (but naive) strategy is to *uniformly* sample the domain:

 .. jupyter-execute::

-    def uniform_sampling_1d(interval, scale, data):
-        # Note that we never use 'data'; the loss is just the size of the subdomain
-        x_left, x_right = interval
-        x_scale, _ = scale
-        dx = (x_right - x_left) / x_scale
+    def uniform_sampling_1d(xs, ys):
+        dx = xs[1] - xs[0]
         return dx

     def f_divergent_1d(x):
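With the new signature the custom loss receives the interval's x- and y-values directly, so the explicit rescaling from the old signature is no longer part of the loss itself. A hedged usage sketch (the test function and goal are illustrative; assumes an adaptive version with this (xs, ys) signature, as in the updated docs):

    import adaptive

    def uniform_sampling_1d(xs, ys):
        # ys is ignored; the loss is just the width of the interval
        return xs[1] - xs[0]

    def f(x):
        return x**2

    learner = adaptive.Learner1D(f, bounds=(-1, 1), loss_per_interval=uniform_sampling_1d)
    adaptive.runner.simple(learner, goal=lambda l: l.loss() < 0.05)
    print(len(learner.data), "points sampled")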

rename 'function_values' to 'data' because it's more obvious

Because data is now in the 'BaseLearner'

Bas Nijholt authored on 28/10/2018 14:22:27
Showing 1 changed file
@@ -60,8 +60,8 @@ simple (but naive) strategy is to *uniformly* sample the domain:

 .. jupyter-execute::

-    def uniform_sampling_1d(interval, scale, function_values):
-        # Note that we never use 'function_values'; the loss is just the size of the subdomain
+    def uniform_sampling_1d(interval, scale, data):
+        # Note that we never use 'data'; the loss is just the size of the subdomain
         x_left, x_right = interval
         x_scale, _ = scale
         dx = (x_right - x_left) / x_scale

documentation improvements

Bas Nijholt authored on 19/10/2018 14:19:42
Showing 1 changed file
@@ -8,7 +8,7 @@ Custom adaptive logic for 1D and 2D

 .. seealso::
     The complete source code of this tutorial can be found in
-    :jupyter-download:notebook:`tutorial.custom-loss-function`
+    :jupyter-download:notebook:`tutorial.custom-loss`

 .. jupyter-execute::
     :hide-code:

change "execute" into "jupyter-execute"

Bas Nijholt authored on 18/10/2018 18:21:31
Showing 1 changed file
@@ -8,11 +8,10 @@ Custom adaptive logic for 1D and 2D

 .. seealso::
     The complete source code of this tutorial can be found in
-    :jupyter-download:notebook:`custom-loss-function`
+    :jupyter-download:notebook:`tutorial.custom-loss-function`

-.. execute::
+.. jupyter-execute::
     :hide-code:
-    :new-notebook: custom-loss-function

     import adaptive
     adaptive.notebook_extension()
@@ -59,7 +58,7 @@ Uniform sampling
 Say we want to properly sample a function that contains divergences. A
 simple (but naive) strategy is to *uniformly* sample the domain:

-.. execute::
+.. jupyter-execute::

     def uniform_sampling_1d(interval, scale, function_values):
         # Note that we never use 'function_values'; the loss is just the size of the subdomain
@@ -75,7 +74,7 @@ simple (but naive) strategy is to *uniformly* sample the domain:
     runner = adaptive.BlockingRunner(learner, goal=lambda l: l.loss() < 0.01)
     learner.plot().select(y=(0, 10000))

-.. execute::
+.. jupyter-execute::

     %%opts EdgePaths (color='w') Image [logz=True colorbar=True]

@@ -95,16 +94,16 @@ simple (but naive) strategy is to *uniformly* sample the domain:
     # this takes a while, so use the async Runner so we know *something* is happening
     runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.02)

-.. execute::
+.. jupyter-execute::
     :hide-code:

     await runner.task  # This is not needed in a notebook environment!

-.. execute::
+.. jupyter-execute::

     runner.live_info()

-.. execute::
+.. jupyter-execute::

     plotter = lambda l: l.plot(tri_alpha=0.3).relabel(
             '1 / (x^2 + y^2) in log scale')
@@ -132,7 +131,7 @@ subdomains are appropriately small it will prioritise places where the
 function is very nonlinear, but will ignore subdomains that are too
 small (0 loss).

-.. execute::
+.. jupyter-execute::

     %%opts EdgePaths (color='w') Image [logz=True colorbar=True]


add tutorials

Bas Nijholt authored on 17/10/2018 13:30:10
Showing 1 changed file
new file mode 100644
@@ -0,0 +1,176 @@
+Custom adaptive logic for 1D and 2D
+-----------------------------------
+
+.. note::
+   Because this documentation consists of static html, the ``live_plot``
+   and ``live_info`` widget is not live. Download the notebook
+   in order to see the real behaviour.
+
+.. seealso::
+    The complete source code of this tutorial can be found in
+    :jupyter-download:notebook:`custom-loss-function`
+
+.. execute::
+    :hide-code:
+    :new-notebook: custom-loss-function
+
+    import adaptive
+    adaptive.notebook_extension()
+
+    # Import modules that are used in multiple cells
+    import numpy as np
+    from functools import partial
+
+
+`~adaptive.Learner1D` and `~adaptive.Learner2D` both work on the principle of
+subdividing their domain into subdomains, and assigning a property to
+each subdomain, which we call the *loss*. The algorithm for choosing the
+best place to evaluate our function is then simply *take the subdomain
+with the largest loss and add a point in the center, creating new
+subdomains around this point*.
+
+The *loss function* that defines the loss per subdomain is the canonical
+place to define what regions of the domain are “interesting”. The
+default loss function for `~adaptive.Learner1D` and `~adaptive.Learner2D` is sufficient
+for a wide range of common cases, but it is by no means a panacea. For
+example, the default loss function will tend to get stuck on
+divergences.
+
+Both the `~adaptive.Learner1D` and `~adaptive.Learner2D` allow you to specify a *custom
+loss function*. Below we illustrate how you would go about writing your
+own loss function. The documentation for `~adaptive.Learner1D` and `~adaptive.Learner2D`
+specifies the signature that your loss function needs to have in order
+for it to work with ``adaptive``.
+
+tl;dr, one can use the following *loss functions* that
+**we** already implemented:
+
++ `adaptive.learner.learner1D.default_loss`
++ `adaptive.learner.learner1D.uniform_loss`
++ `adaptive.learner.learner2D.default_loss`
++ `adaptive.learner.learner2D.uniform_loss`
++ `adaptive.learner.learner2D.minimize_triangle_surface_loss`
++ `adaptive.learner.learner2D.resolution_loss`
+
+
+Uniform sampling
+~~~~~~~~~~~~~~~~
+
+Say we want to properly sample a function that contains divergences. A
+simple (but naive) strategy is to *uniformly* sample the domain:
+
+.. execute::
+
+    def uniform_sampling_1d(interval, scale, function_values):
+        # Note that we never use 'function_values'; the loss is just the size of the subdomain
+        x_left, x_right = interval
+        x_scale, _ = scale
+        dx = (x_right - x_left) / x_scale
+        return dx
+
+    def f_divergent_1d(x):
+        return 1 / x**2
+
+    learner = adaptive.Learner1D(f_divergent_1d, (-1, 1), loss_per_interval=uniform_sampling_1d)
+    runner = adaptive.BlockingRunner(learner, goal=lambda l: l.loss() < 0.01)
+    learner.plot().select(y=(0, 10000))
+
+.. execute::
+
+    %%opts EdgePaths (color='w') Image [logz=True colorbar=True]
+
+    from adaptive.runner import SequentialExecutor
+
+    def uniform_sampling_2d(ip):
+        from adaptive.learner.learner2D import areas
+        A = areas(ip)
+        return np.sqrt(A)
+
+    def f_divergent_2d(xy):
+        x, y = xy
+        return 1 / (x**2 + y**2)
+
+    learner = adaptive.Learner2D(f_divergent_2d, [(-1, 1), (-1, 1)], loss_per_triangle=uniform_sampling_2d)
+
+    # this takes a while, so use the async Runner so we know *something* is happening
+    runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.02)
+
+.. execute::
+    :hide-code:
+
+    await runner.task  # This is not needed in a notebook environment!
+
+.. execute::
+
+    runner.live_info()
+
+.. execute::
+
+    plotter = lambda l: l.plot(tri_alpha=0.3).relabel(
+            '1 / (x^2 + y^2) in log scale')
+    runner.live_plot(update_interval=0.2, plotter=plotter)
+
+The uniform sampling strategy is a common case to benchmark against, so
+the 1D and 2D versions are included in ``adaptive`` as
+`adaptive.learner.learner1D.uniform_loss` and
+`adaptive.learner.learner2D.uniform_loss`.
+
+Doing better
+~~~~~~~~~~~~
+
+Of course, using ``adaptive`` for uniform sampling is a bit of a waste!
+
+Let’s see if we can do a bit better. Below we define a loss per
+subdomain that scales with the degree of nonlinearity of the function
+(this is very similar to the default loss function for `~adaptive.Learner2D`),
+but which is 0 for subdomains smaller than a certain area, and infinite
+for subdomains larger than a certain area.
+
+A loss defined in this way means that the adaptive algorithm will first
+prioritise subdomains that are too large (infinite loss). After all
+subdomains are appropriately small it will prioritise places where the
+function is very nonlinear, but will ignore subdomains that are too
+small (0 loss).
+
+.. execute::
+
+    %%opts EdgePaths (color='w') Image [logz=True colorbar=True]
+
+    def resolution_loss(ip, min_distance=0, max_distance=1):
+        """min_distance and max_distance should be in between 0 and 1
+        because the total area is normalized to 1."""
+
+        from adaptive.learner.learner2D import areas, deviations
+
+        A = areas(ip)
+
+        # 'deviations' returns an array of shape '(n, len(ip))', where
+        # 'n' is the  is the dimension of the output of the learned function
+        # In this case we know that the learned function returns a scalar,
+        # so 'deviations' returns an array of shape '(1, len(ip))'.
+        # It represents the deviation of the function value from a linear estimate
+        # over each triangular subdomain.
+        dev = deviations(ip)[0]
+
+        # we add terms of the same dimension: dev == [distance], A == [distance**2]
+        loss = np.sqrt(A) * dev + A
+
+        # Setting areas with a small area to zero such that they won't be chosen again
+        loss[A < min_distance**2] = 0
+
+        # Setting triangles that have a size larger than max_distance to infinite loss
+        loss[A > max_distance**2] = np.inf
+
+        return loss
+
+    loss = partial(resolution_loss, min_distance=0.01)
+
+    learner = adaptive.Learner2D(f_divergent_2d, [(-1, 1), (-1, 1)], loss_per_triangle=loss)
+    runner = adaptive.BlockingRunner(learner, goal=lambda l: l.loss() < 0.02)
+    learner.plot(tri_alpha=0.3).relabel('1 / (x^2 + y^2) in log scale')
+
+Awesome! We zoom in on the singularity, but not at the expense of
+sampling the rest of the domain a reasonable amount.
+
+The above strategy is available as
+`adaptive.learner.learner2D.resolution_loss`.
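As the newly added tutorial notes, the uniform strategies also ship with the library, so the hand-written versions above are mainly didactic. A brief hedged sketch using the built-in 2D uniform_loss with a smooth test function (the ring function, the sequential runner, and the goal threshold are illustrative choices):

    import numpy as np
    import adaptive
    from adaptive.learner.learner2D import uniform_loss

    def ring(xy):
        # any scalar function of (x, y) works here
        x, y = xy
        return np.exp(-(x**2 + y**2 - 0.5)**2 / 0.01)

    learner = adaptive.Learner2D(ring, [(-1, 1), (-1, 1)], loss_per_triangle=uniform_loss)
    adaptive.runner.simple(learner, goal=lambda l: l.loss() < 0.05)
    print(learner.npoints, "points evaluated")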