
move 2D learner up in notebook

Bas Nijholt authored on 08/09/2017 12:53:46
Showing 1 changed file
... ...
@@ -159,17 +159,29 @@
159 159
    "cell_type": "markdown",
160 160
    "metadata": {},
161 161
    "source": [
162
-    "# Averaging learner"
162
+    "# 2D function learner"
163 163
    ]
164 164
   },
165 165
   {
166 166
    "cell_type": "markdown",
167 167
    "metadata": {},
168 168
    "source": [
169
-    "The next type of learner averages a function until the uncertainty in the average meets some condition.\n",
169
+    "Besides 1D functions, we can also learn 2D functions: $\\ f: ℝ^2 → ℝ$"
170
+   ]
171
+  },
172
+  {
173
+   "cell_type": "code",
174
+   "execution_count": null,
175
+   "metadata": {},
176
+   "outputs": [],
177
+   "source": [
178
+    "def func(xy):\n",
179
+    "    import numpy as np\n",
180
+    "    x, y = xy\n",
181
+    "    a = 0.2\n",
182
+    "    return x + np.exp(-(x**2 + y**2 - 0.75**2)**2/a**4)\n",
170 183
     "\n",
171
-    "This is useful for sampling a random variable. The function passed to the learner must formally take a single parameter,\n",
172
-    "which should be used like a \"seed\" for the (pseudo-) random variable (although in the current implementation the seed parameter can be ignored by the function)."
184
+    "learner = adaptive.learner.Learner2D(func, bounds=[(-1, 1), (-1, 1)])"
173 185
    ]
174 186
   },
175 187
   {
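Assembled for reference, the cells added in this hunk and the next form a self-contained snippet (a sketch using only code that appears in the diff, with `Learner2D` spelled as `adaptive.learner.Learner2D` exactly as the commit does):

    import adaptive

    def func(xy):
        import numpy as np
        x, y = xy
        a = 0.2
        # a ring-shaped bump of radius 0.75 on top of a linear slope
        return x + np.exp(-(x**2 + y**2 - 0.75**2)**2 / a**4)

    # learn func on the square [-1, 1] x [-1, 1]
    learner = adaptive.learner.Learner2D(func, bounds=[(-1, 1), (-1, 1)])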
... ...
@@ -178,16 +190,7 @@
178 190
    "metadata": {},
179 191
    "outputs": [],
180 192
    "source": [
181
-    "def g(n):\n",
182
-    "    import random\n",
183
-    "    from time import sleep\n",
184
-    "    sleep(random.random() / 5)\n",
185
-    "    # Properly save and restore the RNG state\n",
186
-    "    state = random.getstate()\n",
187
-    "    random.seed(n)\n",
188
-    "    val = random.gauss(0.5, 1)\n",
189
-    "    random.setstate(state)\n",
190
-    "    return val"
193
+    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() > 1500)"
191 194
    ]
192 195
   },
193 196
   {
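The goal passed to adaptive.Runner is an arbitrary callable taking the learner and returning True when the run should stop. A hypothetical variant, not part of this commit, would stop on a point budget instead of a loss threshold:

    # stop once the learner has sampled 2000 points rather than waiting
    # for the loss to cross a threshold (threshold value illustrative)
    runner = adaptive.Runner(learner, goal=lambda l: len(l.points) > 2000)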
... ...
@@ -196,25 +199,38 @@
196 199
    "metadata": {},
197 200
    "outputs": [],
198 201
    "source": [
199
-    "learner = adaptive.AverageLearner(g, None, 0.01)\n",
200
-    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 1)\n",
201 202
     "adaptive.live_plot(runner)"
202 203
    ]
203 204
   },
205
+  {
206
+   "cell_type": "code",
207
+   "execution_count": null,
208
+   "metadata": {},
209
+   "outputs": [],
210
+   "source": [
211
+    "import numpy as np\n",
212
+    "learner2 = adaptive.learner.Learner2D(func, bounds=[(-1, 1), (-1, 1)])\n",
213
+    "lin = np.linspace(-1, 1, len(learner.points)**0.5)\n",
214
+    "xy = [(x, y) for x in lin for y in lin]\n",
215
+    "learner2.add_data(xy, map(func, xy))\n",
216
+    "learner2.plot().relabel('Homogeneous grid') + learner.plot().relabel('With adaptive')"
217
+   ]
218
+  },
204 219
   {
205 220
    "cell_type": "markdown",
206 221
    "metadata": {},
207 222
    "source": [
208
-    "# Balancing learner"
223
+    "# Averaging learner"
209 224
    ]
210 225
   },
211 226
   {
212 227
    "cell_type": "markdown",
213 228
    "metadata": {},
214 229
    "source": [
215
-    "The balancing learner is a \"meta-learner\" that takes a list of multiple leaners. The runner wil find find out which points of which child learner will improve the loss the most and send those to the executor.\n",
230
+    "The next type of learner averages a function until the uncertainty in the average meets some condition.\n",
216 231
     "\n",
217
-    "The balancing learner can for example be used to implement a poor-man's 2D learner by using the `Learner1D`."
232
+    "This is useful for sampling a random variable. The function passed to the learner must formally take a single parameter,\n",
233
+    "which should be used like a \"seed\" for the (pseudo-) random variable (although in the current implementation the seed parameter can be ignored by the function)."
218 234
    ]
219 235
   },
220 236
   {
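The comparison cell above gives the homogeneous grid the same point budget as the adaptive run by taking the square root of the number of sampled points as the grid side (hence the int() cast). As a standalone sketch of that idea (the helper name is illustrative):

    import numpy as np

    def homogeneous_grid(n_points):
        # square grid over [-1, 1]^2 with about n_points samples,
        # matching the point budget spent by the adaptive learner
        n_side = int(n_points**0.5)
        lin = np.linspace(-1, 1, n_side)
        return [(x, y) for x in lin for y in lin]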
... ...
@@ -223,11 +239,16 @@
223 239
    "metadata": {},
224 240
    "outputs": [],
225 241
    "source": [
226
-    "from adaptive.learner import Learner1D, BalancingLearner\n",
227
-    "\n",
228
-    "learners = [Learner1D(partial(f, offset=2*random()-1, wait=False), bounds=(-1.0, 1.0)) for i in range(10)]\n",
229
-    "learner = BalancingLearner(learners)\n",
230
-    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.02)"
242
+    "def g(n):\n",
243
+    "    import random\n",
244
+    "    from time import sleep\n",
245
+    "    sleep(random.random() / 5)\n",
246
+    "    # Properly save and restore the RNG state\n",
247
+    "    state = random.getstate()\n",
248
+    "    random.seed(n)\n",
249
+    "    val = random.gauss(0.5, 1)\n",
250
+    "    random.setstate(state)\n",
251
+    "    return val"
231 252
    ]
232 253
   },
233 254
   {
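The save/seed/restore dance in g exists only so that seeding does not disturb the global RNG that sleep's random.random() call also uses. A hypothetical alternative that sidesteps global state entirely is a dedicated generator per call:

    def g(n):
        import numpy as np
        # a private generator seeded with n leaves the global RNG untouched
        rng = np.random.RandomState(n)
        return rng.normal(0.5, 1)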
... ...
@@ -236,41 +257,25 @@
236 257
    "metadata": {},
237 258
    "outputs": [],
238 259
    "source": [
239
-    "import holoviews as hv\n",
240
-    "adaptive.live_plot(runner, plotter=lambda learner: hv.Overlay([L.plot() for L in learner.learners]))"
260
+    "learner = adaptive.AverageLearner(g, None, 0.01)\n",
261
+    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 1)\n",
262
+    "adaptive.live_plot(runner)"
241 263
    ]
242 264
   },
243 265
   {
244 266
    "cell_type": "markdown",
245 267
    "metadata": {},
246 268
    "source": [
247
-    "# 2D learner"
269
+    "# Balancing learner"
248 270
    ]
249 271
   },
250 272
   {
251
-   "cell_type": "code",
252
-   "execution_count": null,
273
+   "cell_type": "markdown",
253 274
    "metadata": {},
254
-   "outputs": [],
255 275
    "source": [
256
-    "def func(xy):\n",
257
-    "    import numpy as np\n",
258
-    "    x, y = xy\n",
259
-    "    a = 0.2\n",
260
-    "    return x + np.exp(-(x**2 + y**2 - 0.75**2)**2/a**4)\n",
276
+    "The balancing learner is a \"meta-learner\" that takes a list of multiple leaners. The runner wil find find out which points of which child learner will improve the loss the most and send those to the executor.\n",
261 277
     "\n",
262
-    "learner = adaptive.learner.Learner2D(func, bounds=[(-1, 1), (-1, 1)])"
263
-   ]
264
-  },
265
-  {
266
-   "cell_type": "code",
267
-   "execution_count": null,
268
-   "metadata": {},
269
-   "outputs": [],
270
-   "source": [
271
-    "from concurrent.futures import ProcessPoolExecutor\n",
272
-    "executor = ProcessPoolExecutor(max_workers=48)\n",
273
-    "runner = adaptive.Runner(learner, executor, goal=lambda l: l.loss() > 1500, log=True)"
278
+    "The balancing learner can for example be used to implement a poor-man's 2D learner by using the `Learner1D`."
274 279
    ]
275 280
   },
276 281
   {
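A note on the AverageLearner call moved in above: g is the function whose mean is being estimated, and 0.01 is a tolerance on that estimate. The parameter names are not shown in this diff; treating them as atol/None and rtol/0.01 is an assumption:

    # average g until the estimate meets the requested tolerance
    # (second and third arguments assumed to be atol=None, rtol=0.01)
    learner = adaptive.AverageLearner(g, None, 0.01)
    runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 1)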
... ...
@@ -279,7 +284,11 @@
279 284
    "metadata": {},
280 285
    "outputs": [],
281 286
    "source": [
282
-    "adaptive.live_plot(runner)"
287
+    "from adaptive.learner import Learner1D, BalancingLearner\n",
288
+    "\n",
289
+    "learners = [Learner1D(partial(f, offset=2*random()-1, wait=False), bounds=(-1.0, 1.0)) for i in range(10)]\n",
290
+    "learner = BalancingLearner(learners)\n",
291
+    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.02)"
283 292
    ]
284 293
   },
285 294
   {
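The moved cell leans on names defined earlier in the notebook (partial, random, and the 1D test function f), none of which appear in this diff. A self-contained approximation, with a stand-in f that is hypothetical rather than the notebook's actual function:

    from functools import partial
    from random import random

    from adaptive.learner import Learner1D, BalancingLearner

    def f(x, offset=0, wait=False):
        # hypothetical stand-in for the 1D test function defined earlier
        # in the notebook: a narrow peak whose position depends on offset
        a = 0.01
        return x + a**2 / (a**2 + (x - offset)**2)

    # ten 1D learners with randomly placed peaks; the balancing learner
    # sends points to whichever child most improves the loss
    learners = [Learner1D(partial(f, offset=2*random()-1, wait=False),
                          bounds=(-1.0, 1.0))
                for i in range(10)]
    learner = BalancingLearner(learners)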
... ...
@@ -288,12 +297,8 @@
288 297
    "metadata": {},
289 298
    "outputs": [],
290 299
    "source": [
291
-    "import numpy as np\n",
292
-    "learner2 = adaptive.learner.Learner2D(func, bounds=[(-1, 1), (-1, 1)])\n",
293
-    "lin = np.linspace(-1, 1, len(learner.points)**0.5)\n",
294
-    "xy = [(x, y) for x in lin for y in lin]\n",
295
-    "learner2.add_data(xy, map(func, xy))\n",
296
-    "learner2.plot().relabel('Homogeneous grid') + learner.plot().relabel('With adaptive')"
300
+    "import holoviews as hv\n",
301
+    "adaptive.live_plot(runner, plotter=lambda learner: hv.Overlay([L.plot() for L in learner.learners]))"
297 302
    ]
298 303
   },
299 304
   {
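The plotter argument overrides live_plot's default rendering. The same overlay can be built directly once the run has finished (a sketch, assuming holoviews is installed):

    import holoviews as hv

    # overlay every child learner's plot in a single figure, exactly as
    # the plotter lambda above does during the live run
    overlay = hv.Overlay([L.plot() for L in learner.learners])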