
rename learner.ipynb -> example-notebook.ipynb

Bas Nijholt authored on 13/12/2019 14:51:54
Showing 1 changed file
1 1
deleted file mode 100644
... ...
@@ -1,1503 +0,0 @@
1
-{
2
- "cells": [
3
-  {
4
-   "cell_type": "markdown",
5
-   "metadata": {},
6
-   "source": [
7
-    "![](docs/source/_static/logo_docs.png)\n",
8
-    "# Adaptive"
9
-   ]
10
-  },
11
-  {
12
-   "cell_type": "markdown",
13
-   "metadata": {},
14
-   "source": [
15
-    "[`adaptive`](https://github.com/python-adaptive/adaptive) is a package for adaptively sampling functions with support for parallel evaluation.\n",
16
-    "\n",
17
-    "This is an introductory notebook that shows some basic use cases."
18
-   ]
19
-  },
20
-  {
21
-   "cell_type": "code",
22
-   "execution_count": null,
23
-   "metadata": {},
24
-   "outputs": [],
25
-   "source": [
26
-    "import adaptive\n",
27
-    "adaptive.notebook_extension()\n",
28
-    "\n",
29
-    "# Import modules that are used in multiple cells\n",
30
-    "import holoviews as hv\n",
31
-    "import numpy as np\n",
32
-    "from functools import partial\n",
33
-    "import random"
34
-   ]
35
-  },
36
-  {
37
-   "cell_type": "markdown",
38
-   "metadata": {},
39
-   "source": [
40
-    "# 1D function learner"
41
-   ]
42
-  },
43
-  {
44
-   "cell_type": "markdown",
45
-   "metadata": {},
46
-   "source": [
47
-    "We start with the most common use-case: sampling a 1D function $\\ f: ℝ → ℝ$.\n",
48
-    "\n",
49
-    "We will use the following function, which is a smooth (linear) background with a sharp peak at a random location:"
50
-   ]
51
-  },
52
-  {
53
-   "cell_type": "code",
54
-   "execution_count": null,
55
-   "metadata": {},
56
-   "outputs": [],
57
-   "source": [
58
-    "offset = random.uniform(-0.5, 0.5)\n",
59
-    "\n",
60
-    "def peak(x, offset=offset, wait=True):\n",
61
-    "    from time import sleep\n",
62
-    "    from random import random\n",
63
-    "\n",
64
-    "    a = 0.01\n",
65
-    "    if wait:  \n",
66
-    "        # we pretend that this is a slow function\n",
67
-    "        sleep(random())\n",
68
-    "\n",
69
-    "    return x + a**2 / (a**2 + (x - offset)**2)"
70
-   ]
71
-  },
72
-  {
73
-   "cell_type": "markdown",
74
-   "metadata": {},
75
-   "source": [
76
-    "We start by initializing a 1D \"learner\", which will suggest points to evaluate, and adapt its suggestions as more and more points are evaluated."
77
-   ]
78
-  },
79
-  {
80
-   "cell_type": "code",
81
-   "execution_count": null,
82
-   "metadata": {},
83
-   "outputs": [],
84
-   "source": [
85
-    "learner = adaptive.Learner1D(peak, bounds=(-1, 1))"
86
-   ]
87
-  },
88
-  {
89
-   "cell_type": "markdown",
90
-   "metadata": {},
91
-   "source": [
92
-    "Next we create a \"runner\" that will request points from the learner and evaluate 'f' on them.\n",
93
-    "\n",
94
-    "By default on Unix-like systems the runner will evaluate the points in parallel using local processes ([`concurrent.futures.ProcessPoolExecutor`](https://docs.python.org/3/library/concurrent.futures.html#processpoolexecutor)).\n",
95
-    "\n",
96
-    "On Windows systems the runner will try to use a [`distributed.Client`](https://distributed.readthedocs.io/en/latest/client.html) if [`distributed`](https://distributed.readthedocs.io/en/latest/index.html) is installed. A `ProcessPoolExecutor` cannot be used on Windows for reasons."
97
-   ]
98
-  },
99
-  {
100
-   "cell_type": "code",
101
-   "execution_count": null,
102
-   "metadata": {},
103
-   "outputs": [],
104
-   "source": [
105
-    "# The end condition is when the \"loss\" is less than 0.01. In the context of the\n",
106
-    "# 1D learner this means that we will resolve features in 'func' with width 0.01 or wider.\n",
107
-    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01, ntasks=4)\n",
108
-    "runner.live_info()"
109
-   ]
110
-  },
111
-  {
112
-   "cell_type": "markdown",
113
-   "metadata": {},
114
-   "source": [
115
-    "When instantiated in a Jupyter notebook the runner does its job in the background and does not block the IPython kernel.\n",
116
-    "We can use this to create a plot that updates as new data arrives:"
117
-   ]
118
-  },
119
-  {
120
-   "cell_type": "code",
121
-   "execution_count": null,
122
-   "metadata": {},
123
-   "outputs": [],
124
-   "source": [
125
-    "runner.live_plot(update_interval=0.1)"
126
-   ]
127
-  },
128
-  {
129
-   "cell_type": "markdown",
130
-   "metadata": {},
131
-   "source": [
132
-    "We can now compare the adaptive sampling to a homogeneous sampling with the same number of points:"
133
-   ]
134
-  },
135
-  {
136
-   "cell_type": "code",
137
-   "execution_count": null,
138
-   "metadata": {},
139
-   "outputs": [],
140
-   "source": [
141
-    "if runner.status() != \"finished\":\n",
142
-    "    print(\"WARINING: The runner hasn't reached it goal yet!\")\n",
143
-    "\n",
144
-    "learner2 = adaptive.Learner1D(peak, bounds=learner.bounds)\n",
145
-    "\n",
146
-    "xs = np.linspace(*learner.bounds, len(learner.data))\n",
147
-    "ys = [peak(x, wait=False) for x in xs]\n",
148
-    "learner2.tell_many(xs, ys)\n",
149
-    "\n",
150
-    "learner.plot() + learner2.plot()"
151
-   ]
152
-  },
153
-  {
154
-   "cell_type": "markdown",
155
-   "metadata": {},
156
-   "source": [
157
-    "# 2D function learner"
158
-   ]
159
-  },
160
-  {
161
-   "cell_type": "markdown",
162
-   "metadata": {},
163
-   "source": [
164
-    "Besides 1D functions, we can also learn 2D functions: $\\ f: ℝ^2 → ℝ$"
165
-   ]
166
-  },
167
-  {
168
-   "cell_type": "code",
169
-   "execution_count": null,
170
-   "metadata": {},
171
-   "outputs": [],
172
-   "source": [
173
-    "def ring(xy, wait=True):\n",
174
-    "    import numpy as np\n",
175
-    "    from time import sleep\n",
176
-    "    from random import random\n",
177
-    "\n",
178
-    "    if wait:\n",
179
-    "        # we pretend that this is a slow function\n",
180
-    "        sleep(random() / 10)\n",
181
-    "    x, y = xy\n",
182
-    "    a = 0.2\n",
183
-    "    return x + np.exp(-(x ** 2 + y ** 2 - 0.75 ** 2) ** 2 / a ** 4)\n",
184
-    "\n",
185
-    "\n",
186
-    "learner = adaptive.Learner2D(ring, bounds=[(-1, 1), (-1, 1)])"
187
-   ]
188
-  },
189
-  {
190
-   "cell_type": "code",
191
-   "execution_count": null,
192
-   "metadata": {},
193
-   "outputs": [],
194
-   "source": [
195
-    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01)\n",
196
-    "runner.live_info()"
197
-   ]
198
-  },
199
-  {
200
-   "cell_type": "code",
201
-   "execution_count": null,
202
-   "metadata": {},
203
-   "outputs": [],
204
-   "source": [
205
-    "def plot(learner):\n",
206
-    "    plot = learner.plot(tri_alpha=0.2)\n",
207
-    "    return plot.Image + plot.EdgePaths.I + plot\n",
208
-    "\n",
209
-    "runner.live_plot(plotter=plot, update_interval=0.1)"
210
-   ]
211
-  },
212
-  {
213
-   "cell_type": "code",
214
-   "execution_count": null,
215
-   "metadata": {},
216
-   "outputs": [],
217
-   "source": [
218
-    "%%opts EdgePaths (color='w')\n",
219
-    "\n",
220
-    "import itertools\n",
221
-    "\n",
222
-    "# Create a learner and add data on homogeneous grid, so that we can plot it\n",
223
-    "learner2 = adaptive.Learner2D(ring, bounds=learner.bounds)\n",
224
-    "n = int(learner.npoints ** 0.5)\n",
225
-    "xs, ys = [np.linspace(*bounds, n) for bounds in learner.bounds]\n",
226
-    "xys = list(itertools.product(xs, ys))\n",
227
-    "zs = [ring(xy, wait=False) for xy in xys]\n",
228
-    "learner2.tell_many(xys, zs)\n",
229
-    "\n",
230
-    "(\n",
231
-    "    learner2.plot(n).relabel(\"Homogeneous grid\")\n",
232
-    "    + learner.plot().relabel(\"With adaptive\")\n",
233
-    "    + learner2.plot(n, tri_alpha=0.4)\n",
234
-    "    + learner.plot(tri_alpha=0.4)\n",
235
-    ").cols(2)"
236
-   ]
237
-  },
238
-  {
239
-   "cell_type": "markdown",
240
-   "metadata": {},
241
-   "source": [
242
-    "# Averaging learner"
243
-   ]
244
-  },
245
-  {
246
-   "cell_type": "markdown",
247
-   "metadata": {},
248
-   "source": [
249
-    "The next type of learner averages a function until the uncertainty in the average meets some condition.\n",
250
-    "\n",
251
-    "This is useful for sampling a random variable. The function passed to the learner must formally take a single parameter,\n",
252
-    "which should be used like a \"seed\" for the (pseudo-) random variable (although in the current implementation the seed parameter can be ignored by the function)."
253
-   ]
254
-  },
255
-  {
256
-   "cell_type": "code",
257
-   "execution_count": null,
258
-   "metadata": {},
259
-   "outputs": [],
260
-   "source": [
261
-    "def g(n):\n",
262
-    "    import random\n",
263
-    "    from time import sleep\n",
264
-    "    sleep(random.random() / 1000)\n",
265
-    "    # Properly save and restore the RNG state\n",
266
-    "    state = random.getstate()\n",
267
-    "    random.seed(n)\n",
268
-    "    val = random.gauss(0.5, 1)\n",
269
-    "    random.setstate(state)\n",
270
-    "    return val"
271
-   ]
272
-  },
273
-  {
274
-   "cell_type": "code",
275
-   "execution_count": null,
276
-   "metadata": {},
277
-   "outputs": [],
278
-   "source": [
279
-    "learner = adaptive.AverageLearner(g, atol=None, rtol=0.01)\n",
280
-    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 2)\n",
281
-    "runner.live_info()"
282
-   ]
283
-  },
284
-  {
285
-   "cell_type": "code",
286
-   "execution_count": null,
287
-   "metadata": {},
288
-   "outputs": [],
289
-   "source": [
290
-    "runner.live_plot(update_interval=0.1)"
291
-   ]
292
-  },
293
-  {
294
-   "cell_type": "markdown",
295
-   "metadata": {},
296
-   "source": [
297
-    "# 1D integration learner with `cquad`"
298
-   ]
299
-  },
300
-  {
301
-   "cell_type": "markdown",
302
-   "metadata": {},
303
-   "source": [
304
-    "This learner learns a 1D function and calculates the integral and error of the integral with it. It is based on Pedro Gonnet's [implementation](https://www.academia.edu/1976055/Adaptive_quadrature_re-revisited).\n",
305
-    "\n",
306
-    "Let's try the following function with cusps (that is difficult to integrate):"
307
-   ]
308
-  },
309
-  {
310
-   "cell_type": "code",
311
-   "execution_count": null,
312
-   "metadata": {},
313
-   "outputs": [],
314
-   "source": [
315
-    "def f24(x):\n",
316
-    "    return np.floor(np.exp(x))\n",
317
-    "\n",
318
-    "xs = np.linspace(0, 3, 200)\n",
319
-    "hv.Scatter((xs, [f24(x) for x in xs]))"
320
-   ]
321
-  },
322
-  {
323
-   "cell_type": "markdown",
324
-   "metadata": {},
325
-   "source": [
326
-    "Just to prove that this really is a difficult to integrate function, let's try a familiar function integrator `scipy.integrate.quad`, which will give us warnings that it encounters difficulties."
327
-   ]
328
-  },
329
-  {
330
-   "cell_type": "code",
331
-   "execution_count": null,
332
-   "metadata": {},
333
-   "outputs": [],
334
-   "source": [
335
-    "import scipy.integrate\n",
336
-    "scipy.integrate.quad(f24, 0, 3)"
337
-   ]
338
-  },
339
-  {
340
-   "cell_type": "markdown",
341
-   "metadata": {},
342
-   "source": [
343
-    "We initialize a learner again and pass the bounds and relative tolerance we want to reach. Then in the `Runner` we pass `goal=lambda l: l.done()` where `learner.done()` is `True` when the relative tolerance has been reached."
344
-   ]
345
-  },
346
-  {
347
-   "cell_type": "code",
348
-   "execution_count": null,
349
-   "metadata": {},
350
-   "outputs": [],
351
-   "source": [
352
-    "from adaptive.runner import SequentialExecutor\n",
353
-    "\n",
354
-    "learner = adaptive.IntegratorLearner(f24, bounds=(0, 3), tol=1e-8)\n",
355
-    "\n",
356
-    "# We use a SequentialExecutor, which runs the function to be learned in *this* process only.\n",
357
-    "# This means we don't pay the overhead of evaluating the function in another process.\n",
358
-    "executor = SequentialExecutor()\n",
359
-    "runner = adaptive.Runner(learner, executor=executor, goal=lambda l: l.done())\n",
360
-    "runner.live_info()"
361
-   ]
362
-  },
363
-  {
364
-   "cell_type": "markdown",
365
-   "metadata": {},
366
-   "source": [
367
-    "Now we could do the live plotting again, but lets just wait untill the runner is done."
368
-   ]
369
-  },
370
-  {
371
-   "cell_type": "code",
372
-   "execution_count": null,
373
-   "metadata": {},
374
-   "outputs": [],
375
-   "source": [
376
-    "if runner.status() != \"finished\":\n",
377
-    "    print(\"WARINING: The runner hasn't reached it goal yet!\")\n",
378
-    "\n",
379
-    "print('The integral value is {} with the corresponding error of {}'.format(learner.igral, learner.err))\n",
380
-    "learner.plot()"
381
-   ]
382
-  },
383
-  {
384
-   "cell_type": "markdown",
385
-   "metadata": {},
386
-   "source": [
387
-    "# 1D learner with vector output: `f:ℝ → ℝ^N`"
388
-   ]
389
-  },
390
-  {
391
-   "cell_type": "markdown",
392
-   "metadata": {},
393
-   "source": [
394
-    "Sometimes you may want to learn a function with vector output:"
395
-   ]
396
-  },
397
-  {
398
-   "cell_type": "code",
399
-   "execution_count": null,
400
-   "metadata": {},
401
-   "outputs": [],
402
-   "source": [
403
-    "random.seed(0)\n",
404
-    "offsets = [random.uniform(-0.8, 0.8) for _ in range(3)]\n",
405
-    "\n",
406
-    "# sharp peaks at random locations in the domain\n",
407
-    "def f_levels(x, offsets=offsets):\n",
408
-    "    a = 0.01\n",
409
-    "    return np.array([offset + peak(x, offset, wait=False) for offset in offsets])"
410
-   ]
411
-  },
412
-  {
413
-   "cell_type": "markdown",
414
-   "metadata": {},
415
-   "source": [
416
-    "`adaptive` has you covered! The `Learner1D` can be used for such functions:"
417
-   ]
418
-  },
419
-  {
420
-   "cell_type": "code",
421
-   "execution_count": null,
422
-   "metadata": {},
423
-   "outputs": [],
424
-   "source": [
425
-    "learner = adaptive.Learner1D(f_levels, bounds=(-1, 1))\n",
426
-    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01)\n",
427
-    "runner.live_info()"
428
-   ]
429
-  },
430
-  {
431
-   "cell_type": "code",
432
-   "execution_count": null,
433
-   "metadata": {},
434
-   "outputs": [],
435
-   "source": [
436
-    "runner.live_plot(update_interval=0.1)"
437
-   ]
438
-  },
439
-  {
440
-   "cell_type": "markdown",
441
-   "metadata": {},
442
-   "source": [
443
-    "# N-dimensional function learner (beta)\n",
444
-    "Besides 1 and 2 dimensional functions, we can also learn N-D functions: $\\ f: ℝ^N → ℝ, N \\ge 2$\n",
445
-    "\n",
446
-    "Do keep in mind the speed and [effectiveness](https://en.wikipedia.org/wiki/Curse_of_dimensionality) of the learner drops quickly with increasing number of dimensions."
447
-   ]
448
-  },
449
-  {
450
-   "cell_type": "code",
451
-   "execution_count": null,
452
-   "metadata": {},
453
-   "outputs": [],
454
-   "source": [
455
-    "# this step takes a lot of time, it will finish at about 3300 points, which can take up to 6 minutes\n",
456
-    "def sphere(xyz):\n",
457
-    "    x, y, z = xyz\n",
458
-    "    a = 0.4\n",
459
-    "    return x + z**2 + np.exp(-(x**2 + y**2 + z**2 - 0.75**2)**2/a**4)\n",
460
-    "\n",
461
-    "learner = adaptive.LearnerND(sphere, bounds=[(-1, 1), (-1, 1), (-1, 1)])\n",
462
-    "runner = adaptive.Runner(learner, goal=lambda l: l.npoints > 2000)\n",
463
-    "runner.live_info()"
464
-   ]
465
-  },
466
-  {
467
-   "cell_type": "markdown",
468
-   "metadata": {},
469
-   "source": [
470
-    "Let's plot 2D slices of the 3D function"
471
-   ]
472
-  },
473
-  {
474
-   "cell_type": "code",
475
-   "execution_count": null,
476
-   "metadata": {},
477
-   "outputs": [],
478
-   "source": [
479
-    "def plot_cut(x, direction, learner=learner):\n",
480
-    "    cut_mapping = {'xyz'.index(direction): x}\n",
481
-    "    return learner.plot_slice(cut_mapping, n=100)\n",
482
-    "\n",
483
-    "dm = hv.DynamicMap(plot_cut, kdims=['value', 'direction'])\n",
484
-    "dm.redim.values(value=np.linspace(-1, 1), direction=list('xyz'))"
485
-   ]
486
-  },
487
-  {
488
-   "cell_type": "markdown",
489
-   "metadata": {},
490
-   "source": [
491
-    "Or we can plot 1D slices"
492
-   ]
493
-  },
494
-  {
495
-   "cell_type": "code",
496
-   "execution_count": null,
497
-   "metadata": {},
498
-   "outputs": [],
499
-   "source": [
500
-    "%%opts Path {+framewise}\n",
501
-    "def plot_cut(x1, x2, directions, learner=learner):\n",
502
-    "    cut_mapping = {'xyz'.index(d): x for d, x in zip(directions, [x1, x2])}\n",
503
-    "    return learner.plot_slice(cut_mapping)\n",
504
-    "\n",
505
-    "dm = hv.DynamicMap(plot_cut, kdims=['v1', 'v2', 'directions'])\n",
506
-    "dm.redim.values(v1=np.linspace(-1, 1),\n",
507
-    "                v2=np.linspace(-1, 1),\n",
508
-    "                directions=['xy', 'xz', 'yz'])"
509
-   ]
510
-  },
511
-  {
512
-   "cell_type": "markdown",
513
-   "metadata": {},
514
-   "source": [
515
-    "The plots show some wobbles while the original function was smooth, this is a result of the fact that the learner chooses points in 3 dimensions and the simplices are not in the same face as we try to interpolate our lines. However, as always, when you sample more points the graph will become gradually smoother."
516
-   ]
517
-  },
518
-  {
519
-   "cell_type": "code",
520
-   "execution_count": null,
521
-   "metadata": {},
522
-   "outputs": [],
523
-   "source": [
524
-    "learner.plot_3D()"
525
-   ]
526
-  },
527
-  {
528
-   "cell_type": "markdown",
529
-   "metadata": {},
530
-   "source": [
531
-    "# Custom adaptive logic for 1D and 2D"
532
-   ]
533
-  },
534
-  {
535
-   "cell_type": "markdown",
536
-   "metadata": {},
537
-   "source": [
538
-    "`Learner1D` and `Learner2D` both work on the principle of subdividing their domain into subdomains, and assigning a property to each subdomain, which we call the *loss*. The algorithm for choosing the best place to evaluate our function is then simply *take the subdomain with the largest loss and add a point in the center, creating new subdomains around this point*. \n",
539
-    "\n",
540
-    "The *loss function* that defines the loss per subdomain is the canonical place to define what regions of the domain are \"interesting\".\n",
541
-    "The default loss function for `Learner1D` and `Learner2D` is sufficient for a wide range of common cases, but it is by no means a panacea. For example, the default loss function will tend to get stuck on divergences.\n",
542
-    "\n",
543
-    "Both the `Learner1D` and `Learner2D` allow you to specify a *custom loss function*. Below we illustrate how you would go about writing your own loss function. The documentation for `Learner1D` and `Learner2D` specifies the signature that your loss function needs to have in order for it to work with `adaptive`.\n",
544
-    "\n",
545
-    "\n",
546
-    "Say we want to properly sample a function that contains divergences. A simple (but naive) strategy is to *uniformly* sample the domain:\n"
547
-   ]
548
-  },
549
-  {
550
-   "cell_type": "code",
551
-   "execution_count": null,
552
-   "metadata": {},
553
-   "outputs": [],
554
-   "source": [
555
-    "def uniform_sampling_1d(xs, ys):\n",
556
-    "    # Note that we never use 'data'; the loss is just the size of the subdomain\n",
557
-    "    dx = xs[1] - xs[0]\n",
558
-    "    return dx\n",
559
-    "\n",
560
-    "def f_divergent_1d(x):\n",
561
-    "    return 1 / x**2\n",
562
-    "\n",
563
-    "learner = adaptive.Learner1D(f_divergent_1d, (-1, 1), loss_per_interval=uniform_sampling_1d)\n",
564
-    "runner = adaptive.BlockingRunner(learner, goal=lambda l: l.loss() < 0.01)\n",
565
-    "learner.plot().select(y=(0, 10000))"
566
-   ]
567
-  },
568
-  {
569
-   "cell_type": "code",
570
-   "execution_count": null,
571
-   "metadata": {},
572
-   "outputs": [],
573
-   "source": [
574
-    "%%opts EdgePaths (color='w') Image [logz=True]\n",
575
-    "\n",
576
-    "from adaptive.runner import SequentialExecutor\n",
577
-    "\n",
578
-    "def uniform_sampling_2d(ip):\n",
579
-    "    from adaptive.learner.learner2D import areas\n",
580
-    "    A = areas(ip)\n",
581
-    "    return np.sqrt(A)\n",
582
-    "\n",
583
-    "def f_divergent_2d(xy):\n",
584
-    "    x, y = xy\n",
585
-    "    return 1 / (x**2 + y**2)\n",
586
-    "\n",
587
-    "learner = adaptive.Learner2D(f_divergent_2d, [(-1, 1), (-1, 1)], loss_per_triangle=uniform_sampling_2d)\n",
588
-    "\n",
589
-    "# this takes a while, so use the async Runner so we know *something* is happening\n",
590
-    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.02)\n",
591
-    "runner.live_info()\n",
592
-    "runner.live_plot(update_interval=0.2,\n",
593
-    "                 plotter=lambda l: l.plot(tri_alpha=0.3).relabel('1 / (x^2 + y^2) in log scale'))"
594
-   ]
595
-  },
596
-  {
597
-   "cell_type": "markdown",
598
-   "metadata": {},
599
-   "source": [
600
-    "The uniform sampling strategy is a common case to benchmark against, so the 1D and 2D versions are included in `adaptive` as `adaptive.learner.learner1D.uniform_sampling` and `adaptive.learner.learner2D.uniform_sampling`."
601
-   ]
602
-  },
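-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "For instance, the built-in 1D version could be passed directly. This is a minimal sketch based on the import path named above; depending on your `adaptive` version the function may be called `uniform_loss` instead:\n",
-    "\n",
-    "```python\n",
-    "# Hypothetical usage of the built-in uniform-sampling loss; the import\n",
-    "# path follows the text above and may differ between adaptive versions.\n",
-    "from adaptive.learner.learner1D import uniform_sampling\n",
-    "\n",
-    "learner = adaptive.Learner1D(f_divergent_1d, (-1, 1), loss_per_interval=uniform_sampling)\n",
-    "```"
-   ]
-  },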
603
-  {
604
-   "cell_type": "markdown",
605
-   "metadata": {},
606
-   "source": [
607
-    "### Doing better\n",
608
-    "\n",
609
-    "Of course, using `adaptive` for uniform sampling is a bit of a waste!\n",
610
-    "\n",
611
-    "Let's see if we can do a bit better. Below we define a loss per subdomain that scales with the degree of nonlinearity of the function (this is very similar to the default loss function for `Learner2D`), but which is 0 for subdomains smaller than a certain area, and infinite for subdomains larger than a certain area.\n",
612
-    "\n",
613
-    "A loss defined in this way means that the adaptive algorithm will first prioritise subdomains that are too large (infinite loss). After all subdomains are appropriately small it will prioritise places where the function is very nonlinear, but will ignore subdomains that are too small (0 loss)."
614
-   ]
615
-  },
616
-  {
617
-   "cell_type": "code",
618
-   "execution_count": null,
619
-   "metadata": {},
620
-   "outputs": [],
621
-   "source": [
622
-    "%%opts EdgePaths (color='w') Image [logz=True]\n",
623
-    "\n",
624
-    "def resolution_loss(ip, min_distance=0, max_distance=1):\n",
625
-    "    \"\"\"min_distance and max_distance should be in between 0 and 1\n",
626
-    "    because the total area is normalized to 1.\"\"\"\n",
627
-    "\n",
628
-    "    from adaptive.learner.learner2D import areas, deviations\n",
629
-    "\n",
630
-    "    A = areas(ip)\n",
631
-    "\n",
632
-    "    # 'deviations' returns an array of shape '(n, len(ip))', where\n",
633
-    "    # 'n' is the  is the dimension of the output of the learned function\n",
634
-    "    # In this case we know that the learned function returns a scalar,\n",
635
-    "    # so 'deviations' returns an array of shape '(1, len(ip))'.\n",
636
-    "    # It represents the deviation of the function value from a linear estimate\n",
637
-    "    # over each triangular subdomain.\n",
638
-    "    dev = deviations(ip)[0]\n",
639
-    "    \n",
640
-    "    # we add terms of the same dimension: dev == [distance], A == [distance**2]\n",
641
-    "    loss = np.sqrt(A) * dev + A\n",
642
-    "    \n",
643
-    "    # Setting areas with a small area to zero such that they won't be chosen again\n",
644
-    "    loss[A < min_distance**2] = 0 \n",
645
-    "    \n",
646
-    "    # Setting triangles that have a size larger than max_distance to infinite loss\n",
647
-    "    loss[A > max_distance**2] = np.inf\n",
648
-    "\n",
649
-    "    return loss\n",
650
-    "\n",
651
-    "loss = partial(resolution_loss, min_distance=0.01)\n",
652
-    "\n",
653
-    "learner = adaptive.Learner2D(f_divergent_2d, [(-1, 1), (-1, 1)], loss_per_triangle=loss)\n",
654
-    "runner = adaptive.BlockingRunner(learner, goal=lambda l: l.loss() < 0.02)\n",
655
-    "learner.plot(tri_alpha=0.3).relabel('1 / (x^2 + y^2) in log scale')"
656
-   ]
657
-  },
658
-  {
659
-   "cell_type": "markdown",
660
-   "metadata": {},
661
-   "source": [
662
-    "Awesome! We zoom in on the singularity, but not at the expense of sampling the rest of the domain a reasonable amount.\n",
663
-    "\n",
664
-    "The above strategy is available as `adaptive.learner.learner2D.resolution_loss`."
665
-   ]
666
-  },
667
-  {
668
-   "cell_type": "markdown",
669
-   "metadata": {},
670
-   "source": [
671
-    "# Balancing learner"
672
-   ]
673
-  },
674
-  {
675
-   "cell_type": "markdown",
676
-   "metadata": {},
677
-   "source": [
678
-    "The balancing learner is a \"meta-learner\" that takes a list of learners. When you request a point from the balancing learner, it will query all of its \"children\" to figure out which one will give the most improvement.\n",
679
-    "\n",
680
-    "The balancing learner can for example be used to implement a poor-man's 2D learner by using the `Learner1D`."
681
-   ]
682
-  },
683
-  {
684
-   "cell_type": "code",
685
-   "execution_count": null,
686
-   "metadata": {},
687
-   "outputs": [],
688
-   "source": [
689
-    "def h(x, offset=0):\n",
690
-    "    a = 0.01\n",
691
-    "    return x + a**2 / (a**2 + (x - offset)**2)\n",
692
-    "\n",
693
-    "learners = [adaptive.Learner1D(partial(h, offset=random.uniform(-1, 1)),\n",
694
-    "            bounds=(-1, 1)) for i in range(10)]\n",
695
-    "\n",
696
-    "bal_learner = adaptive.BalancingLearner(learners)\n",
697
-    "runner = adaptive.Runner(bal_learner, goal=lambda l: l.loss() < 0.01)\n",
698
-    "runner.live_info()"
699
-   ]
700
-  },
701
-  {
702
-   "cell_type": "code",
703
-   "execution_count": null,
704
-   "metadata": {},
705
-   "outputs": [],
706
-   "source": [
707
-    "plotter = lambda learner: hv.Overlay([L.plot() for L in learner.learners])\n",
708
-    "runner.live_plot(plotter=plotter, update_interval=0.1)"
709
-   ]
710
-  },
711
-  {
712
-   "cell_type": "markdown",
713
-   "metadata": {},
714
-   "source": [
715
-    "Often one wants to create a set of `learner`s for a cartesian product of parameters. For that particular case we've added a `classmethod` called `from_product`. See how it works below"
716
-   ]
717
-  },
718
-  {
719
-   "cell_type": "code",
720
-   "execution_count": null,
721
-   "metadata": {},
722
-   "outputs": [],
723
-   "source": [
724
-    "from scipy.special import eval_jacobi\n",
725
-    "\n",
726
-    "def jacobi(x, n, alpha, beta):\n",
727
-    "    return eval_jacobi(n, alpha, beta, x)\n",
728
-    "\n",
729
-    "combos = {\n",
730
-    "    'n': [1, 2, 4, 8],\n",
731
-    "    'alpha': np.linspace(0, 2, 3),\n",
732
-    "    'beta': np.linspace(0, 1, 5),\n",
733
-    "}\n",
734
-    "\n",
735
-    "learner = adaptive.BalancingLearner.from_product(\n",
736
-    "    jacobi, adaptive.Learner1D, dict(bounds=(0, 1)), combos)\n",
737
-    "\n",
738
-    "runner = adaptive.BlockingRunner(learner, goal=lambda l: l.loss() < 0.01)\n",
739
-    "\n",
740
-    "# The `cdims` will automatically be set when using `from_product`, so\n",
741
-    "# `plot()` will return a HoloMap with correctly labeled sliders.\n",
742
-    "learner.plot().overlay('beta').grid().select(y=(-1, 3))"
743
-   ]
744
-  },
745
-  {
746
-   "cell_type": "markdown",
747
-   "metadata": {},
748
-   "source": [
749
-    "# DataSaver"
750
-   ]
751
-  },
752
-  {
753
-   "cell_type": "markdown",
754
-   "metadata": {},
755
-   "source": [
756
-    "If the function that you want to learn returns a value along with some metadata, you can wrap your learner in an `adaptive.DataSaver`.\n",
757
-    "\n",
758
-    "In the following example the function to be learned returns its result and the execution time in a dictionary:"
759
-   ]
760
-  },
761
-  {
762
-   "cell_type": "code",
763
-   "execution_count": null,
764
-   "metadata": {},
765
-   "outputs": [],
766
-   "source": [
767
-    "from operator import itemgetter\n",
768
-    "\n",
769
-    "def f_dict(x):\n",
770
-    "    \"\"\"The function evaluation takes roughly the time we `sleep`.\"\"\"\n",
771
-    "    import random\n",
772
-    "    from time import sleep\n",
773
-    "\n",
774
-    "    waiting_time = random.random()\n",
775
-    "    sleep(waiting_time)\n",
776
-    "    a = 0.01\n",
777
-    "    y = x + a**2 / (a**2 + x**2)\n",
778
-    "    return {'y': y, 'waiting_time': waiting_time}\n",
779
-    "\n",
780
-    "# Create the learner with the function that returns a 'dict'\n",
781
-    "# This learner cannot be run directly, as Learner1D does not know what to do with the 'dict'\n",
782
-    "_learner = adaptive.Learner1D(f_dict, bounds=(-1, 1))\n",
783
-    "\n",
784
-    "# Wrapping the learner with 'adaptive.DataSaver' and tell it which key it needs to learn\n",
785
-    "learner = adaptive.DataSaver(_learner, arg_picker=itemgetter('y'))"
786
-   ]
787
-  },
788
-  {
789
-   "cell_type": "markdown",
790
-   "metadata": {},
791
-   "source": [
792
-    "`learner.learner` is the original learner, so `learner.learner.loss()` will call the correct loss method."
793
-   ]
794
-  },
795
-  {
796
-   "cell_type": "code",
797
-   "execution_count": null,
798
-   "metadata": {},
799
-   "outputs": [],
800
-   "source": [
801
-    "runner = adaptive.Runner(learner, goal=lambda l: l.learner.loss() < 0.05)\n",
802
-    "runner.live_info()"
803
-   ]
804
-  },
805
-  {
806
-   "cell_type": "code",
807
-   "execution_count": null,
808
-   "metadata": {},
809
-   "outputs": [],
810
-   "source": [
811
-    "runner.live_plot(plotter=lambda l: l.learner.plot(), update_interval=0.1)"
812
-   ]
813
-  },
814
-  {
815
-   "cell_type": "markdown",
816
-   "metadata": {},
817
-   "source": [
818
-    "Now the `DataSavingLearner` will have an dictionary attribute `extra_data` that has `x` as key and the data that was returned by `learner.function` as values."
819
-   ]
820
-  },
821
-  {
822
-   "cell_type": "code",
823
-   "execution_count": null,
824
-   "metadata": {},
825
-   "outputs": [],
826
-   "source": [
827
-    "learner.extra_data"
828
-   ]
829
-  },
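-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "For example, to pull the recorded waiting times back out (a sketch; it assumes the runner above has already collected some points):\n",
-    "\n",
-    "```python\n",
-    "# 'extra_data' maps each evaluated x to the full dict returned by 'f_dict',\n",
-    "# so the metadata can be inspected after the fact.\n",
-    "waiting_times = [d['waiting_time'] for d in learner.extra_data.values()]\n",
-    "print(sum(waiting_times) / len(waiting_times))  # mean waiting time\n",
-    "```"
-   ]
-  },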
830
-  {
831
-   "cell_type": "markdown",
832
-   "metadata": {},
833
-   "source": [
834
-    "# `Scikit-Optimize`"
835
-   ]
836
-  },
837
-  {
838
-   "cell_type": "markdown",
839
-   "metadata": {},
840
-   "source": [
841
-    "We have wrapped the `Optimizer` class from [`scikit-optimize`](https://github.com/scikit-optimize/scikit-optimize), to show how existing libraries can be integrated with `adaptive`.\n",
842
-    "\n",
843
-    "The `SKOptLearner` attempts to \"optimize\" the given function `g` (i.e. find the global minimum of `g` in the window of interest).\n",
844
-    "\n",
845
-    "Here we use the same example as in the `scikit-optimize` [tutorial](https://github.com/scikit-optimize/scikit-optimize/blob/master/examples/ask-and-tell.ipynb). Although `SKOptLearner` can optimize functions of arbitrary dimensionality, we can only plot the learner if a 1D function is being learned."
846
-   ]
847
-  },
848
-  {
849
-   "cell_type": "code",
850
-   "execution_count": null,
851
-   "metadata": {},
852
-   "outputs": [],
853
-   "source": [
854
-    "def F(x, noise_level=0.1):\n",
855
-    "    return (np.sin(5 * x) * (1 - np.tanh(x ** 2))\n",
856
-    "            + np.random.randn() * noise_level)"
857
-   ]
858
-  },
859
-  {
860
-   "cell_type": "code",
861
-   "execution_count": null,
862
-   "metadata": {},
863
-   "outputs": [],
864
-   "source": [
865
-    "learner = adaptive.SKOptLearner(F, dimensions=[(-2., 2.)],\n",
866
-    "                                base_estimator=\"GP\",\n",
867
-    "                                acq_func=\"gp_hedge\",\n",
868
-    "                                acq_optimizer=\"lbfgs\",\n",
869
-    "                               )\n",
870
-    "runner = adaptive.Runner(learner, ntasks=1, goal=lambda l: l.npoints > 40)\n",
871
-    "runner.live_info()"
872
-   ]
873
-  },
874
-  {
875
-   "cell_type": "code",
876
-   "execution_count": null,
877
-   "metadata": {},
878
-   "outputs": [],
879
-   "source": [
880
-    "%%opts Overlay [legend_position='top']\n",
881
-    "xs = np.linspace(*learner.space.bounds[0])\n",
882
-    "to_learn = hv.Curve((xs, [F(x, 0) for x in xs]), label='to learn')\n",
883
-    "\n",
884
-    "runner.live_plot().relabel('prediction', depth=2) * to_learn"
885
-   ]
886
-  },
887
-  {
888
-   "cell_type": "markdown",
889
-   "metadata": {
890
-    "collapsed": true
891
-   },
892
-   "source": [
893
-    "# Using multiple cores"
894
-   ]
895
-  },
896
-  {
897
-   "cell_type": "markdown",
898
-   "metadata": {},
899
-   "source": [
900
-    "Often you will want to evaluate the function on some remote computing resources. `adaptive` works out of the box with any framework that implements a [PEP 3148](https://www.python.org/dev/peps/pep-3148/) compliant executor that returns `concurrent.futures.Future` objects."
901
-   ]
902
-  },
903
-  {
904
-   "cell_type": "markdown",
905
-   "metadata": {},
906
-   "source": [
907
-    "### [`concurrent.futures`](https://docs.python.org/3/library/concurrent.futures.html)"
908
-   ]
909
-  },
910
-  {
911
-   "cell_type": "markdown",
912
-   "metadata": {},
913
-   "source": [
914
-    "On Unix-like systems by default `adaptive.Runner` creates a `ProcessPoolExecutor`, but you can also pass one explicitly e.g. to limit the number of workers:"
915
-   ]
916
-  },
917
-  {
918
-   "cell_type": "code",
919
-   "execution_count": null,
920
-   "metadata": {},
921
-   "outputs": [],
922
-   "source": [
923
-    "from concurrent.futures import ProcessPoolExecutor\n",
924
-    "\n",
925
-    "executor = ProcessPoolExecutor(max_workers=4)\n",
926
-    "\n",
927
-    "learner = adaptive.Learner1D(peak, bounds=(-1, 1))\n",
928
-    "runner = adaptive.Runner(learner, executor=executor, goal=lambda l: l.loss() < 0.05)\n",
929
-    "runner.live_info()\n",
930
-    "runner.live_plot(update_interval=0.1)"
931
-   ]
932
-  },
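-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "Any other PEP 3148 executor can be passed the same way, for example a `ThreadPoolExecutor`. This is a minimal sketch; threads are only a good fit when the learned function releases the GIL (e.g. it is I/O-bound or calls into compiled code):\n",
-    "\n",
-    "```python\n",
-    "from concurrent.futures import ThreadPoolExecutor\n",
-    "\n",
-    "executor = ThreadPoolExecutor(max_workers=2)\n",
-    "\n",
-    "learner = adaptive.Learner1D(peak, bounds=(-1, 1))\n",
-    "runner = adaptive.Runner(learner, executor=executor, goal=lambda l: l.loss() < 0.05)\n",
-    "```"
-   ]
-  },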
933
-  {
934
-   "cell_type": "markdown",
935
-   "metadata": {},
936
-   "source": [
937
-    "### [`ipyparallel`](https://ipyparallel.readthedocs.io/en/latest/intro.html)"
938
-   ]
939
-  },
940
-  {
941
-   "cell_type": "code",
942
-   "execution_count": null,
943
-   "metadata": {},
944
-   "outputs": [],
945
-   "source": [
946
-    "import ipyparallel\n",
947
-    "\n",
948
-    "client = ipyparallel.Client()  # You will need to start an `ipcluster` to make this work\n",
949
-    "\n",
950
-    "learner = adaptive.Learner1D(peak, bounds=(-1, 1))\n",
951
-    "runner = adaptive.Runner(learner, executor=client, goal=lambda l: l.loss() < 0.01)\n",
952
-    "runner.live_info()\n",
953
-    "runner.live_plot()"
954
-   ]
955
-  },
956
-  {
957
-   "cell_type": "markdown",
958
-   "metadata": {},
959
-   "source": [
960
-    "### [`distributed`](https://distributed.readthedocs.io/en/latest/)\n",
961
-    "\n",
962
-    "On Windows by default `adaptive.Runner` uses a `distributed.Client`."
963
-   ]
964
-  },
965
-  {
966
-   "cell_type": "code",
967
-   "execution_count": null,
968
-   "metadata": {},
969
-   "outputs": [],
970
-   "source": [
971
-    "import distributed\n",
972
-    "\n",
973
-    "client = distributed.Client()\n",
974
-    "\n",
975
-    "learner = adaptive.Learner1D(peak, bounds=(-1, 1))\n",
976
-    "runner = adaptive.Runner(learner, executor=client, goal=lambda l: l.loss() < 0.01)\n",
977
-    "runner.live_info()\n",
978
-    "runner.live_plot(update_interval=0.1)"
979
-   ]
980
-  },
981
-  {
982
-   "cell_type": "markdown",
983
-   "metadata": {},
984
-   "source": [
985
-    "---"
986
-   ]
987
-  },
988
-  {
989
-   "cell_type": "markdown",
990
-   "metadata": {},
991
-   "source": [
992
-    "# Advanced Topics"
993
-   ]
994
-  },
995
-  {
996
-   "cell_type": "markdown",
997
-   "metadata": {},
998
-   "source": [
999
-    "## Saving and loading learners"
1000
-   ]
1001
-  },
1002
-  {
1003
-   "cell_type": "markdown",
1004
-   "metadata": {},
1005
-   "source": [
1006
-    "Every learner has a `save` and `load` method that can be used to save and load **only** the data of a learner.\n",
1007
-    "\n",
1008
-    "There are __two ways__ of naming the files:\n",
1009
-    "1. Using the `fname` argument in `learner.save(fname=...)`\n",
1010
-    "2. Setting the `fname` attribute, like `learner.fname = 'data/example.p` and then `learner.save()`\n",
1011
-    "\n",
1012
-    "The second way _must be used_ when saving the `learner`s of a `BalancingLearner`.\n",
1013
-    "\n",
1014
-    "By default the resulting pickle files are compressed, to turn this off use `learner.save(fname=..., compress=False)`"
1015
-   ]
1016
-  },
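-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "A minimal sketch of the second way for a `BalancingLearner`, reusing the `bal_learner` from the Balancing learner section above (the file names here are made up for illustration):\n",
-    "\n",
-    "```python\n",
-    "# Hypothetical example: give every child learner its own file name, then save.\n",
-    "for i, child in enumerate(bal_learner.learners):\n",
-    "    child.fname = f'data/learner_{i}.p'\n",
-    "bal_learner.save()\n",
-    "```"
-   ]
-  },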
1017
-  {
1018
-   "cell_type": "code",
1019
-   "execution_count": null,
1020
-   "metadata": {},
1021
-   "outputs": [],
1022
-   "source": [
1023
-    "# Let's create two learners and run only one.\n",
1024
-    "learner = adaptive.Learner1D(partial(peak, wait=False), bounds=(-1, 1))\n",
1025
-    "control = adaptive.Learner1D(partial(peak, wait=False), bounds=(-1, 1))\n",
1026
-    "\n",
1027
-    "# Let's only run the learner\n",
1028
-    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01)\n",
1029
-    "runner.live_info()"
1030
-   ]
1031
-  },
1032
-  {
1033
-   "cell_type": "markdown",
1034
-   "metadata": {},
1035
-   "source": [
1036
-    "We can save the data with"
1037
-   ]
1038
-  },
1039
-  {
1040
-   "cell_type": "code",
1041
-   "execution_count": null,
1042
-   "metadata": {},
1043
-   "outputs": [],
1044
-   "source": [
1045
-    "fname = 'data/example_file.p'\n",
1046
-    "learner.save(fname)"
1047
-   ]
1048
-  },
1049
-  {
1050
-   "cell_type": "markdown",
1051
-   "metadata": {},
1052
-   "source": [
1053
-    "and load (and then plot) the other empty learner with"
1054
-   ]
1055
-  },
1056
-  {
1057
-   "cell_type": "code",
1058
-   "execution_count": null,
1059
-   "metadata": {},
1060
-   "outputs": [],
1061
-   "source": [
1062
-    "control.load(fname)\n",
1063
-    "learner.plot().relabel('saved learner') + control.plot().relabel('loaded learner')"
1064
-   ]
1065
-  },
1066
-  {
1067
-   "cell_type": "markdown",
1068
-   "metadata": {},
1069
-   "source": [
1070
-    "Or just (without saving):"
1071
-   ]
1072
-  },
1073
-  {
1074
-   "cell_type": "code",
1075
-   "execution_count": null,
1076
-   "metadata": {},
1077
-   "outputs": [],
1078
-   "source": [
1079
-    "control = adaptive.Learner1D(partial(peak, wait=False), bounds=(-1, 1))\n",
1080
-    "control.copy_from(learner)"
1081
-   ]
1082
-  },
1083
-  {
1084
-   "cell_type": "markdown",
1085
-   "metadata": {},
1086
-   "source": [
1087
-    "One can also periodically save the learner while it's run in a `Runner`. You can use it like:"
1088
-   ]
1089
-  },
1090
-  {
1091
-   "cell_type": "code",
1092
-   "execution_count": null,
1093
-   "metadata": {},
1094
-   "outputs": [],
1095
-   "source": [
1096
-    "def slow_f(x):\n",
1097
-    "    from time import sleep\n",
1098
-    "    sleep(5)\n",
1099
-    "    return x\n",
1100
-    "\n",
1101
-    "learner = adaptive.Learner1D(slow_f, bounds=[0, 1])\n",
1102
-    "runner = adaptive.Runner(learner, goal=lambda l: l.npoints > 100)\n",
1103
-    "\n",
1104
-    "runner.start_periodic_saving(save_kwargs=dict(fname='data/periodic_example.p'), interval=6)\n",
1105
-    "\n",
1106
-    "runner.live_info()"
1107
-   ]
1108
-  },
1109
-  {
1110
-   "cell_type": "code",
1111
-   "execution_count": null,
1112
-   "metadata": {},
1113
-   "outputs": [],
1114
-   "source": [
1115
-    "# See the data after 6 seconds with\n",
1116
-    "!ls -lah data  # only works on macOS and Linux systems"
1117
-   ]
1118
-  },
1119
-  {
1120
-   "cell_type": "markdown",
1121
-   "metadata": {},
1122
-   "source": [
1123
-    "## A watched pot never boils!"
1124
-   ]
1125
-  },
1126
-  {
1127
-   "cell_type": "markdown",
1128
-   "metadata": {},
1129
-   "source": [
1130
-    "`adaptive.Runner` does its work in an `asyncio` task that runs concurrently with the IPython kernel, when using `adaptive` from a Jupyter notebook. This is advantageous because it allows us to do things like live-updating plots, however it can trip you up if you're not careful.\n",
1131
-    "\n",
1132
-    "Notably: **if you block the IPython kernel, the runner will not do any work**.\n",
1133
-    "\n",
1134
-    "For example if you wanted to wait for a runner to complete, **do not wait in a busy loop**:\n",
1135
-    "```python\n",
1136
-    "while not runner.task.done():\n",
1137
-    "    pass\n",
1138
-    "```\n",
1139
-    "\n",
1140
-    "If you do this then **the runner will never finish**."
1141
-   ]
1142
-  },
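-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "If you are inside a coroutine you can instead wait without blocking the kernel (a sketch; see \"Adding coroutines\" below for a complete example):\n",
-    "\n",
-    "```python\n",
-    "# Inside an 'async def' function this yields control back to the event\n",
-    "# loop, so the runner keeps working while we wait.\n",
-    "await runner.task\n",
-    "```"
-   ]
-  },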
1143
-  {
1144
-   "cell_type": "markdown",
1145
-   "metadata": {},
1146
-   "source": [
1147
-    "What to do if you don't care about live plotting, and just want to run something until its done?\n",
1148
-    "\n",
1149
-    "The simplest way to accomplish this is to use `adaptive.BlockingRunner`:"
1150
-   ]
1151
-  },
1152
-  {
1153
-   "cell_type": "code",
1154
-   "execution_count": null,
1155
-   "metadata": {},
1156
-   "outputs": [],
1157
-   "source": [
1158
-    "learner = adaptive.Learner1D(partial(peak, wait=False), bounds=(-1, 1))\n",
1159
-    "adaptive.BlockingRunner(learner, goal=lambda l: l.loss() < 0.005)\n",
1160
-    "# This will only get run after the runner has finished\n",
1161
-    "learner.plot()"
1162
-   ]
1163
-  },
1164
-  {
1165
-   "cell_type": "markdown",
1166
-   "metadata": {},
1167
-   "source": [
1168
-    "## Reproducibility"
1169
-   ]
1170
-  },
1171
-  {
1172
-   "cell_type": "markdown",
1173
-   "metadata": {},
1174
-   "source": [
1175
-    "By default `adaptive` runners evaluate the learned function in parallel across several cores. The runners are also opportunistic, in that as soon as a result is available they will feed it to the learner and request another point to replace the one that just finished.\n",
1176
-    "\n",
1177
-    "Because the order in which computations complete is non-deterministic, this means that the runner behaves in a non-deterministic way. Adaptive makes this choice because in many cases the speedup from parallel execution is worth sacrificing the \"purity\" of exactly reproducible computations.\n",
1178
-    "\n",
1179
-    "Nevertheless it is still possible to run a learner in a deterministic way with adaptive.\n",
1180
-    "\n",
1181
-    "The simplest way is to use `adaptive.runner.simple` to run your learner:"
1182
-   ]
1183
-  },
1184
-  {
1185
-   "cell_type": "code",
1186
-   "execution_count": null,
1187
-   "metadata": {},
1188
-   "outputs": [],
1189
-   "source": [
1190
-    "learner = adaptive.Learner1D(partial(peak, wait=False), bounds=(-1, 1))\n",
1191
-    "\n",
1192
-    "# blocks until completion\n",
1193
-    "adaptive.runner.simple(learner, goal=lambda l: l.loss() < 0.002)\n",
1194
-    "\n",
1195
-    "learner.plot()"
1196
-   ]
1197
-  },
1198
-  {
1199
-   "cell_type": "markdown",
1200
-   "metadata": {},
1201
-   "source": [
1202
-    "Note that unlike `adaptive.Runner`, `adaptive.runner.simple` *blocks* until it is finished.\n",
1203
-    "\n",
1204
-    "If you want to enable determinism, want to continue using the non-blocking `adaptive.Runner`, you can use the `adaptive.runner.SequentialExecutor`:"
1205
-   ]
1206
-  },
1207
-  {
1208
-   "cell_type": "code",
1209
-   "execution_count": null,
1210
-   "metadata": {},
1211
-   "outputs": [],
1212
-   "source": [
1213
-    "from adaptive.runner import SequentialExecutor\n",
1214
-    "\n",
1215
-    "learner = adaptive.Learner1D(peak, bounds=(-1, 1))\n",
1216
-    "\n",
1217
-    "# blocks until completion\n",
1218
-    "runner = adaptive.Runner(learner, executor=SequentialExecutor(), goal=lambda l: l.loss() < 0.002)\n",
1219
-    "runner.live_info()\n",
1220
-    "runner.live_plot(update_interval=0.1)"
1221
-   ]
1222
-  },
1223
-  {
1224
-   "cell_type": "markdown",
1225
-   "metadata": {},
1226
-   "source": [
1227
-    "## Cancelling a runner"
1228
-   ]
1229
-  },
1230
-  {
1231
-   "cell_type": "markdown",
1232
-   "metadata": {},
1233
-   "source": [
1234
-    "Sometimes you want to interactively explore a parameter space, and want the function to be evaluated at finer and finer resolution and manually control when the calculation stops.\n",
1235
-    "\n",
1236
-    "If no `goal` is provided to a runner then the runner will run until cancelled.\n",
1237
-    "\n",
1238
-    "`runner.live_info()` will provide a button that can be clicked to stop the runner. You can also stop the runner programatically using `runner.cancel()`."
1239
-   ]
1240
-  },
1241
-  {
1242
-   "cell_type": "code",
1243
-   "execution_count": null,
1244
-   "metadata": {},
1245
-   "outputs": [],
1246
-   "source": [
1247
-    "learner = adaptive.Learner1D(peak, bounds=(-1, 1))\n",
1248
-    "runner = adaptive.Runner(learner)\n",
1249
-    "runner.live_info()\n",
1250
-    "runner.live_plot(update_interval=0.1)"
1251
-   ]
1252
-  },
1253
-  {
1254
-   "cell_type": "code",
1255
-   "execution_count": null,
1256
-   "metadata": {},
1257
-   "outputs": [],
1258
-   "source": [
1259
-    "runner.cancel()"
1260
-   ]
1261
-  },
1262
-  {
1263
-   "cell_type": "code",
1264
-   "execution_count": null,
1265
-   "metadata": {},
1266
-   "outputs": [],
1267
-   "source": [
1268
-    "print(runner.status())"
1269
-   ]
1270
-  },
1271
-  {
1272
-   "cell_type": "markdown",
1273
-   "metadata": {},
1274
-   "source": [
1275
-    "## Debugging Problems "
1276
-   ]
1277
-  },
1278
-  {
1279
-   "cell_type": "markdown",
1280
-   "metadata": {},
1281
-   "source": [
1282
-    "Runners work in the background with respect to the IPython kernel, which makes it convenient, but also means that inspecting errors is more difficult because exceptions will not be raised directly in the notebook. Often the only indication you will have that something has gone wrong is that nothing will be happening.\n",
1283
-    "\n",
1284
-    "Let's look at the following example, where the function to be learned will raise an exception 10% of the time."
1285
-   ]
1286
-  },
1287
-  {
1288
-   "cell_type": "code",
1289
-   "execution_count": null,
1290
-   "metadata": {},
1291
-   "outputs": [],
1292
-   "source": [
1293
-    "def will_raise(x):\n",
1294
-    "    from random import random\n",
1295
-    "    from time import sleep\n",
1296
-    "    \n",
1297
-    "    sleep(random())\n",
1298
-    "    if random() < 0.1:\n",
1299
-    "        raise RuntimeError('something went wrong!')\n",
1300
-    "    return x**2\n",
1301
-    "    \n",
1302
-    "learner = adaptive.Learner1D(will_raise, (-1, 1))\n",
1303
-    "runner = adaptive.Runner(learner)  # without 'goal' the runner will run forever unless cancelled\n",
1304
-    "runner.live_info()\n",
1305
-    "runner.live_plot()"
1306
-   ]
1307
-  },
1308
-  {
1309
-   "cell_type": "markdown",
1310
-   "metadata": {},
1311
-   "source": [
1312
-    "The above runner should continue forever, but we notice that it stops after a few points are evaluated.\n",
1313
-    "\n",
1314
-    "First we should check that the runner has really finished:"
1315
-   ]
1316
-  },
1317
-  {
1318
-   "cell_type": "code",
1319
-   "execution_count": null,
1320
-   "metadata": {},
1321
-   "outputs": [],
1322
-   "source": [
1323
-    "runner.task.done()"
1324
-   ]
1325
-  },
1326
-  {
1327
-   "cell_type": "markdown",
1328
-   "metadata": {},
1329
-   "source": [
1330
-    "If it has indeed finished then we should check the `result` of the runner. This should be `None` if the runner stopped successfully. If the runner stopped due to an exception then asking for the result will raise the exception with the stack trace:"
1331
-   ]
1332
-  },
1333
-  {
1334
-   "cell_type": "code",
1335
-   "execution_count": null,
1336
-   "metadata": {},
1337
-   "outputs": [],
1338
-   "source": [
1339
-    "runner.task.result()"
1340
-   ]
1341
-  },
1342
-  {
1343
-   "cell_type": "markdown",
1344
-   "metadata": {},
1345
-   "source": [
1346
-    "### Logging runners"
1347
-   ]
1348
-  },
1349
-  {
1350
-   "cell_type": "markdown",
1351
-   "metadata": {},
1352
-   "source": [
1353
-    "Runners do their job in the background, which makes introspection quite cumbersome. One way to inspect runners is to instantiate one with `log=True`:"
1354
-   ]
1355
-  },
1356
-  {
1357
-   "cell_type": "code",
1358
-   "execution_count": null,
1359
-   "metadata": {},
1360
-   "outputs": [],
1361
-   "source": [
1362
-    "learner = adaptive.Learner1D(peak, bounds=(-1, 1))\n",
1363
-    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.1, log=True)\n",
1364
-    "runner.live_info()"
1365
-   ]
1366
-  },
1367
-  {
1368
-   "cell_type": "markdown",
1369
-   "metadata": {},
1370
-   "source": [
1371
-    "This gives a the runner a `log` attribute, which is a list of the `learner` methods that were called, as well as their arguments. This is useful because executors typically execute their tasks in a non-deterministic order.\n",
1372
-    "\n",
1373
-    "This can be used with `adaptive.runner.replay_log` to perfom the same set of operations on another runner:\n"
1374
-   ]
1375
-  },
1376
-  {
1377
-   "cell_type": "code",
1378
-   "execution_count": null,
1379
-   "metadata": {},
1380
-   "outputs": [],
1381
-   "source": [
1382
-    "reconstructed_learner = adaptive.Learner1D(peak, bounds=learner.bounds)\n",
1383
-    "adaptive.runner.replay_log(reconstructed_learner, runner.log)"
1384
-   ]
1385
-  },
1386
-  {
1387
-   "cell_type": "code",
1388
-   "execution_count": null,
1389
-   "metadata": {},
1390
-   "outputs": [],
1391
-   "source": [
1392
-    "learner.plot().Scatter.I.opts(style=dict(size=6)) * reconstructed_learner.plot()"
1393
-   ]
1394
-  },
1395
-  {
1396
-   "cell_type": "markdown",
1397
-   "metadata": {},
1398
-   "source": [
1399
-    "### Adding coroutines"
1400
-   ]
1401
-  },
1402
-  {
1403
-   "cell_type": "markdown",
1404
-   "metadata": {},
1405
-   "source": [
1406
-    "In the following example we'll add a `asyncio.Task` that times the runner.\n",
1407
-    "This is *only* for demonstration purposes because one can simply\n",
1408
-    "check ``runner.elapsed_time()`` or use the ``runner.live_info()``\n",
1409
-    "widget to see the time since the runner has started.\n",
1410
-    "\n",
1411
-    "So let's get on with the example. To time the runner\n",
1412
-    "you **cannot** simply use\n",
1413
-    "\n",
1414
-    "```python\n",
1415
-    "now = datetime.now()\n",
1416
-    "runner = adaptive.Runner(...)\n",
1417
-    "print(datetime.now() - now)\n",
1418
-    "```\n",
1419
-    "\n",
1420
-    "because this will be done immediately. Also blocking the kernel with\n",
1421
-    "``while not runner.task.done()`` will not work because the runner will\n",
1422
-    "not do anything when the kernel is blocked.\n",
1423
-    "\n",
1424
-    "Therefore you need to create an ``async`` function and hook it into the\n",
1425
-    "``ioloop`` like so:\n"
1426
-   ]
1427
-  },
1428
-  {
1429
-   "cell_type": "code",
1430
-   "execution_count": null,
1431
-   "metadata": {},
1432
-   "outputs": [],
1433
-   "source": [
1434
-    "import asyncio\n",
1435
-    "\n",
1436
-    "async def time(runner):\n",
1437
-    "    from datetime import datetime\n",
1438
-    "    now = datetime.now()\n",
1439
-    "    await runner.task\n",
1440
-    "    return datetime.now() - now\n",
1441
-    "\n",
1442
-    "ioloop = asyncio.get_event_loop()\n",
1443
-    "\n",
1444
-    "learner = adaptive.Learner1D(peak, bounds=(-1, 1))\n",
1445
-    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.1)\n",
1446
-    "\n",
1447
-    "timer = ioloop.create_task(time(runner))\n",
1448
-    "runner.live_info()"
1449
-   ]
1450
-  },
1451
-  {
1452
-   "cell_type": "code",
1453
-   "execution_count": null,
1454
-   "metadata": {},
1455
-   "outputs": [],
1456
-   "source": [
1457
-    "# The result will only be set when the runner is done.\n",
1458
-    "timer.result()"
1459
-   ]
1460
-  },
1461
-  {
1462
-   "cell_type": "markdown",
1463
-   "metadata": {},
1464
-   "source": [
1465
-    "## Using Runners from a script "
1466
-   ]
1467
-  },
1468
-  {
1469
-   "cell_type": "markdown",
1470
-   "metadata": {},
1471
-   "source": [
1472
-    "Runners can also be used from a Python script independently of the notebook.\n",
1473
-    "\n",
1474
-    "The simplest way to accomplish this is simply to use the `BlockingRunner`:\n",
1475
-    "\n",
1476
-    "```python\n",
1477
-    "import adaptive\n",
1478
-    "\n",
1479
-    "def f(x):\n",
1480
-    "    return x\n",
1481
-    "\n",
1482
-    "learner = adaptive.Learner1D(peak, (-1, 1))\n",
1483
-    "\n",
1484
-    "adaptive.BlockingRunner(learner, goal=lambda: l: l.loss() < 0.1)\n",
1485
-    "```\n",
1486
-    "\n",
1487
-    "If you use `asyncio` already in your script and want to integrate `adaptive` into it, then you can use the default `Runner` as you would from a notebook. If you want to wait for the runner to finish, then you can simply\n",
1488
-    "```python\n",
1489
-    "    await runner.task\n",
1490
-    "```\n",
1491
-    "from within a coroutine."
1492
-   ]
1493
-  }
1494
- ],
1495
- "metadata": {
1496
-  "language_info": {
1497
-   "name": "python",
1498
-   "pygments_lexer": "ipython3"
1499
-  }
1500
- },
1501
- "nbformat": 4,
1502
- "nbformat_minor": 1
1503
-}

remove accidentally added cell

Bas Nijholt authored on 14/09/2019 15:24:25
Showing 1 changed file
... ...
@@ -394,15 +394,6 @@
394 394
     "Sometimes you may want to learn a function with vector output:"
395 395
    ]
396 396
   },
397
-  {
398
-   "cell_type": "code",
399
-   "execution_count": null,
400
-   "metadata": {},
401
-   "outputs": [],
402
-   "source": [
403
-    "peak"
404
-   ]
405
-  },
406 397
   {
407 398
    "cell_type": "code",
408 399
    "execution_count": null,

update example notebook

Bas Nijholt authored on 14/09/2019 15:21:44
Showing 1 changed file
... ...
@@ -4,6 +4,7 @@
4 4
    "cell_type": "markdown",
5 5
    "metadata": {},
6 6
    "source": [
7
+    "![](docs/source/_static/logo_docs.png)\n",
7 8
     "# Adaptive"
8 9
    ]
9 10
   },
... ...
@@ -13,21 +14,7 @@
13 14
    "source": [
14 15
     "[`adaptive`](https://github.com/python-adaptive/adaptive) is a package for adaptively sampling functions with support for parallel evaluation.\n",
15 16
     "\n",
16
-    "This is an introductory notebook that shows some basic use cases.\n",
17
-    "\n",
18
-    "`adaptive` needs at least Python 3.6, and the following packages:\n",
19
-    "\n",
20
-    "+ `scipy`\n",
21
-    "+ `sortedcontainers`\n",
22
-    "\n",
23
-    "Additionally `adaptive` has lots of extra functionality that makes it simple to use from Jupyter notebooks.\n",
24
-    "This extra functionality depends on the following packages\n",
25
-    "\n",
26
-    "+ `ipykernel>=4.8.0`\n",
27
-    "+ `jupyter_client>=5.2.2`\n",
28
-    "+ `holoviews`\n",
29
-    "+ `bokeh`\n",
30
-    "+ `ipywidgets`"
17
+    "This is an introductory notebook that shows some basic use cases."
31 18
    ]
32 19
   },
33 20
   {
... ...
@@ -70,13 +57,15 @@
70 57
    "source": [
71 58
     "offset = random.uniform(-0.5, 0.5)\n",
72 59
     "\n",
73
-    "def f(x, offset=offset, wait=True):\n",
60
+    "def peak(x, offset=offset, wait=True):\n",
74 61
     "    from time import sleep\n",
75 62
     "    from random import random\n",
76 63
     "\n",
77 64
     "    a = 0.01\n",
78
-    "    if wait:\n",
65
+    "    if wait:  \n",
66
+    "        # we pretend that this is a slow function\n",
79 67
     "        sleep(random())\n",
68
+    "\n",
80 69
     "    return x + a**2 / (a**2 + (x - offset)**2)"
81 70
    ]
82 71
   },
... ...
@@ -93,7 +82,7 @@
93 82
    "metadata": {},
94 83
    "outputs": [],
95 84
    "source": [
96
-    "learner = adaptive.Learner1D(f, bounds=(-1, 1))"
85
+    "learner = adaptive.Learner1D(peak, bounds=(-1, 1))"
97 86
    ]
98 87
   },
99 88
   {
... ...
@@ -115,7 +104,7 @@
115 104
    "source": [
116 105
     "# The end condition is when the \"loss\" is less than 0.01. In the context of the\n",
117 106
     "# 1D learner this means that we will resolve features in 'func' with width 0.01 or wider.\n",
118
-    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01)\n",
107
+    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01, ntasks=4)\n",
119 108
     "runner.live_info()"
120 109
    ]
121 110
   },
... ...
@@ -149,20 +138,14 @@
149 138
    "metadata": {},
150 139
    "outputs": [],
151 140
    "source": [
152
-    "if not runner.task.done():\n",
153
-    "    raise RuntimeError('Wait for the runner to finish before executing the cells below!')"
154
-   ]
155
-  },
156
-  {
157
-   "cell_type": "code",
158
-   "execution_count": null,
159
-   "metadata": {},
160
-   "outputs": [],
161
-   "source": [
162
-    "learner2 = adaptive.Learner1D(f, bounds=learner.bounds)\n",
141
+    "if runner.status() != \"finished\":\n",
142
+    "    print(\"WARINING: The runner hasn't reached it goal yet!\")\n",
143
+    "\n",
144
+    "learner2 = adaptive.Learner1D(peak, bounds=learner.bounds)\n",
163 145
     "\n",
164 146
     "xs = np.linspace(*learner.bounds, len(learner.data))\n",
165
-    "learner2.tell_many(xs, map(partial(f, wait=False), xs))\n",
147
+    "ys = [peak(x, wait=False) for x in xs]\n",
148
+    "learner2.tell_many(xs, ys)\n",
166 149
     "\n",
167 150
     "learner.plot() + learner2.plot()"
168 151
    ]
... ...
@@ -191,11 +174,14 @@
191 174
     "    import numpy as np\n",
192 175
     "    from time import sleep\n",
193 176
     "    from random import random\n",
177
+    "\n",
194 178
     "    if wait:\n",
195
-    "        sleep(random()/10)\n",
179
+    "        # we pretend that this is a slow function\n",
180
+    "        sleep(random() / 10)\n",
196 181
     "    x, y = xy\n",
197 182
     "    a = 0.2\n",
198
-    "    return x + np.exp(-(x**2 + y**2 - 0.75**2)**2/a**4)\n",
183
+    "    return x + np.exp(-(x ** 2 + y ** 2 - 0.75 ** 2) ** 2 / a ** 4)\n",
184
+    "\n",
199 185
     "\n",
200 186
     "learner = adaptive.Learner2D(ring, bounds=[(-1, 1), (-1, 1)])"
201 187
    ]
... ...
@@ -235,13 +221,18 @@
235 221
     "\n",
236 222
     "# Create a learner and add data on homogeneous grid, so that we can plot it\n",
237 223
     "learner2 = adaptive.Learner2D(ring, bounds=learner.bounds)\n",
238
-    "n = int(learner.npoints**0.5)\n",
224
+    "n = int(learner.npoints ** 0.5)\n",
239 225
     "xs, ys = [np.linspace(*bounds, n) for bounds in learner.bounds]\n",
240 226
     "xys = list(itertools.product(xs, ys))\n",
241
-    "learner2.tell_many(xys, map(partial(ring, wait=False), xys))\n",
227
+    "zs = [ring(xy, wait=False) for xy in xys]\n",
228
+    "learner2.tell_many(xys, zs)\n",
242 229
     "\n",
243
-    "(learner2.plot(n).relabel('Homogeneous grid') + learner.plot().relabel('With adaptive') + \n",
244
-    " learner2.plot(n, tri_alpha=0.4) + learner.plot(tri_alpha=0.4)).cols(2)"
230
+    "(\n",
231
+    "    learner2.plot(n).relabel(\"Homogeneous grid\")\n",
232
+    "    + learner.plot().relabel(\"With adaptive\")\n",
233
+    "    + learner2.plot(n, tri_alpha=0.4)\n",
234
+    "    + learner.plot(tri_alpha=0.4)\n",
235
+    ").cols(2)"
245 236
    ]
246 237
   },
247 238
   {
... ...
@@ -362,9 +353,10 @@
362 353
     "\n",
363 354
     "learner = adaptive.IntegratorLearner(f24, bounds=(0, 3), tol=1e-8)\n",
364 355
     "\n",
365
-    "# We use a SequentialExecutor, which runs the function to be learned in *this* process only. This means we don't pay\n",
366
-    "# the overhead of evaluating the function in another process.\n",
367
-    "runner = adaptive.Runner(learner, executor=SequentialExecutor(), goal=lambda l: l.done())\n",
356
+    "# We use a SequentialExecutor, which runs the function to be learned in *this* process only.\n",
357
+    "# This means we don't pay the overhead of evaluating the function in another process.\n",
358
+    "executor = SequentialExecutor()\n",
359
+    "runner = adaptive.Runner(learner, executor=executor, goal=lambda l: l.done())\n",
368 360
     "runner.live_info()"
369 361
    ]
370 362
   },
... ...
@@ -381,16 +373,9 @@
381 373
    "metadata": {},
382 374
    "outputs": [],
383 375
    "source": [
384
-    "if not runner.task.done():\n",
385
-    "    raise RuntimeError('Wait for the runner to finish before executing the cells below!')"
386
-   ]
387
-  },
388
-  {
389
-   "cell_type": "code",
390
-   "execution_count": null,
391
-   "metadata": {},
392
-   "outputs": [],
393
-   "source": [
376
+    "if runner.status() != \"finished\":\n",
377
+    "    print(\"WARINING: The runner hasn't reached it goal yet!\")\n",
378
+    "\n",
394 379
     "print('The integral value is {} with the corresponding error of {}'.format(learner.igral, learner.err))\n",
395 380
     "learner.plot()"
396 381
    ]
... ...
@@ -409,6 +394,15 @@
409 394
     "Sometimes you may want to learn a function with vector output:"
410 395
    ]
411 396
   },
397
+  {
398
+   "cell_type": "code",
399
+   "execution_count": null,
400
+   "metadata": {},
401
+   "outputs": [],
402
+   "source": [
403
+    "peak"
404
+   ]
405
+  },
412 406
   {
413 407
    "cell_type": "code",
414 408
    "execution_count": null,
... ...
@@ -421,8 +415,7 @@
421 415
     "# sharp peaks at random locations in the domain\n",
422 416
     "def f_levels(x, offsets=offsets):\n",
423 417
     "    a = 0.01\n",
424
-    "    return np.array([offset + x + a**2 / (a**2 + (x - offset)**2)\n",
425
-    "                     for offset in offsets])"
418
+    "    return np.array([offset + peak(x, offset, wait=False) for offset in offsets])"
426 419
    ]
427 420
   },
428 421
   {
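For context, a sketch of how this vector-valued function might be driven; the `offsets` array is defined in a cell outside this hunk, so a hypothetical one is used here, and `Learner1D` is assumed to handle vector output transparently.

```python
import random

import numpy as np

import adaptive

offsets = [random.uniform(-0.8, 0.8) for _ in range(3)]  # hypothetical values


def peak(x, offset, wait=False):
    # the same peaked function used earlier in the notebook
    a = 0.01
    return x + a**2 / (a**2 + (x - offset)**2)


def f_levels(x, offsets=offsets):
    return np.array([offset + peak(x, offset, wait=False) for offset in offsets])


learner = adaptive.Learner1D(f_levels, bounds=(-1, 1))
```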
... ...
@@ -475,7 +468,7 @@
475 468
     "    return x + z**2 + np.exp(-(x**2 + y**2 + z**2 - 0.75**2)**2/a**4)\n",
476 469
     "\n",
477 470
     "learner = adaptive.LearnerND(sphere, bounds=[(-1, 1), (-1, 1), (-1, 1)])\n",
478
-    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.001)\n",
471
+    "runner = adaptive.Runner(learner, goal=lambda l: l.npoints > 2000)\n",
479 472
     "runner.live_info()"
480 473
    ]
481 474
   },
... ...
@@ -531,6 +524,15 @@
531 524
     "The plots show some wobbles while the original function was smooth, this is a result of the fact that the learner chooses points in 3 dimensions and the simplices are not in the same face as we try to interpolate our lines. However, as always, when you sample more points the graph will become gradually smoother."
532 525
    ]
533 526
   },
527
+  {
528
+   "cell_type": "code",
529
+   "execution_count": null,
530
+   "metadata": {},
531
+   "outputs": [],
532
+   "source": [
533
+    "learner.plot_3D()"
534
+   ]
535
+  },
534 536
   {
535 537
    "cell_type": "markdown",
536 538
    "metadata": {},
... ...
@@ -559,11 +561,9 @@
559 561
    "metadata": {},
560 562
    "outputs": [],
561 563
    "source": [
562
-    "def uniform_sampling_1d(interval, scale, data):\n",
564
+    "def uniform_sampling_1d(xs, ys):\n",
563 565
     "    # Note that we never use 'data'; the loss is just the size of the subdomain\n",
564
-    "    x_left, x_right = interval\n",
565
-    "    x_scale, _ = scale\n",
566
-    "    dx = (x_right - x_left) / x_scale\n",
566
+    "    dx = xs[1] - xs[0]\n",
567 567
     "    return dx\n",
568 568
     "\n",
569 569
     "def f_divergent_1d(x):\n",
... ...
@@ -732,7 +732,8 @@
732 732
    "source": [
733 733
     "from scipy.special import eval_jacobi\n",
734 734
     "\n",
735
-    "def jacobi(x, n, alpha, beta): return eval_jacobi(n, alpha, beta, x)\n",
735
+    "def jacobi(x, n, alpha, beta):\n",
736
+    "    return eval_jacobi(n, alpha, beta, x)\n",
736 737
     "\n",
737 738
     "combos = {\n",
738 739
     "    'n': [1, 2, 4, 8],\n",
... ...
@@ -932,7 +933,7 @@
932 933
     "\n",
933 934
     "executor = ProcessPoolExecutor(max_workers=4)\n",
934 935
     "\n",
935
-    "learner = adaptive.Learner1D(f, bounds=(-1, 1))\n",
936
+    "learner = adaptive.Learner1D(peak, bounds=(-1, 1))\n",
936 937
     "runner = adaptive.Runner(learner, executor=executor, goal=lambda l: l.loss() < 0.05)\n",
937 938
     "runner.live_info()\n",
938 939
     "runner.live_plot(update_interval=0.1)"
... ...
@@ -955,7 +956,7 @@
955 956
     "\n",
956 957
     "client = ipyparallel.Client()  # You will need to start an `ipcluster` to make this work\n",
957 958
     "\n",
958
-    "learner = adaptive.Learner1D(f, bounds=(-1, 1))\n",
959
+    "learner = adaptive.Learner1D(peak, bounds=(-1, 1))\n",
959 960
     "runner = adaptive.Runner(learner, executor=client, goal=lambda l: l.loss() < 0.01)\n",
960 961
     "runner.live_info()\n",
961 962
     "runner.live_plot()"
... ...
@@ -980,7 +981,7 @@
980 981
     "\n",
981 982
     "client = distributed.Client()\n",
982 983
     "\n",
983
-    "learner = adaptive.Learner1D(f, bounds=(-1, 1))\n",
984
+    "learner = adaptive.Learner1D(peak, bounds=(-1, 1))\n",
984 985
     "runner = adaptive.Runner(learner, executor=client, goal=lambda l: l.loss() < 0.01)\n",
985 986
     "runner.live_info()\n",
986 987
     "runner.live_plot(update_interval=0.1)"
... ...
@@ -1029,14 +1030,21 @@
1029 1030
    "outputs": [],
1030 1031
    "source": [
1031 1032
     "# Let's create two learners and run only one.\n",
1032
-    "learner = adaptive.Learner1D(partial(f, wait=False), bounds=(-1, 1))\n",
1033
-    "control = adaptive.Learner1D(partial(f, wait=False), bounds=(-1, 1))\n",
1033
+    "learner = adaptive.Learner1D(partial(peak, wait=False), bounds=(-1, 1))\n",
1034
+    "control = adaptive.Learner1D(partial(peak, wait=False), bounds=(-1, 1))\n",
1034 1035
     "\n",
1035 1036
     "# Let's only run the learner\n",
1036 1037
     "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01)\n",
1037 1038
     "runner.live_info()"
1038 1039
    ]
1039 1040
   },
1041
+  {
1042
+   "cell_type": "markdown",
1043
+   "metadata": {},
1044
+   "source": [
1045
+    "We can save the data with"
1046
+   ]
1047
+  },
1040 1048
   {
1041 1049
    "cell_type": "code",
1042 1050
    "execution_count": null,
... ...
@@ -1044,9 +1052,23 @@
1044 1052
    "outputs": [],
1045 1053
    "source": [
1046 1054
     "fname = 'data/example_file.p'\n",
1047
-    "learner.save(fname)\n",
1055
+    "learner.save(fname)"
1056
+   ]
1057
+  },
1058
+  {
1059
+   "cell_type": "markdown",
1060
+   "metadata": {},
1061
+   "source": [
1062
+    "and load (and then plot) the other empty learner with"
1063
+   ]
1064
+  },
1065
+  {
1066
+   "cell_type": "code",
1067
+   "execution_count": null,
1068
+   "metadata": {},
1069
+   "outputs": [],
1070
+   "source": [
1048 1071
     "control.load(fname)\n",
1049
-    "\n",
1050 1072
     "learner.plot().relabel('saved learner') + control.plot().relabel('loaded learner')"
1051 1073
    ]
1052 1074
   },
... ...
@@ -1063,7 +1085,7 @@
1063 1085
    "metadata": {},
1064 1086
    "outputs": [],
1065 1087
    "source": [
1066
-    "control = adaptive.Learner1D(partial(f, wait=False), bounds=(-1, 1))\n",
1088
+    "control = adaptive.Learner1D(partial(peak, wait=False), bounds=(-1, 1))\n",
1067 1089
     "control.copy_from(learner)"
1068 1090
    ]
1069 1091
   },
... ...
@@ -1084,9 +1106,12 @@
1084 1106
     "    from time import sleep\n",
1085 1107
     "    sleep(5)\n",
1086 1108
     "    return x\n",
1109
+    "\n",
1087 1110
     "learner = adaptive.Learner1D(slow_f, bounds=[0, 1])\n",
1088 1111
     "runner = adaptive.Runner(learner, goal=lambda l: l.npoints > 100)\n",
1112
+    "\n",
1089 1113
     "runner.start_periodic_saving(save_kwargs=dict(fname='data/periodic_example.p'), interval=6)\n",
1114
+    "\n",
1090 1115
     "runner.live_info()"
1091 1116
    ]
1092 1117
   },
... ...
@@ -1139,7 +1164,7 @@
1139 1164
    "metadata": {},
1140 1165
    "outputs": [],
1141 1166
    "source": [
1142
-    "learner = adaptive.Learner1D(partial(f, wait=False), bounds=(-1, 1))\n",
1167
+    "learner = adaptive.Learner1D(partial(peak, wait=False), bounds=(-1, 1))\n",
1143 1168
     "adaptive.BlockingRunner(learner, goal=lambda l: l.loss() < 0.005)\n",
1144 1169
     "# This will only get run after the runner has finished\n",
1145 1170
     "learner.plot()"
... ...
@@ -1171,7 +1196,7 @@
1171 1196
    "metadata": {},
1172 1197
    "outputs": [],
1173 1198
    "source": [
1174
-    "learner = adaptive.Learner1D(partial(f, wait=False), bounds=(-1, 1))\n",
1199
+    "learner = adaptive.Learner1D(partial(peak, wait=False), bounds=(-1, 1))\n",
1175 1200
     "\n",
1176 1201
     "# blocks until completion\n",
1177 1202
     "adaptive.runner.simple(learner, goal=lambda l: l.loss() < 0.002)\n",
... ...
@@ -1196,7 +1221,7 @@
1196 1221
    "source": [
1197 1222
     "from adaptive.runner import SequentialExecutor\n",
1198 1223
     "\n",
1199
-    "learner = adaptive.Learner1D(f, bounds=(-1, 1))\n",
1224
+    "learner = adaptive.Learner1D(peak, bounds=(-1, 1))\n",
1200 1225
     "\n",
1201 1226
     "# blocks until completion\n",
1202 1227
     "runner = adaptive.Runner(learner, executor=SequentialExecutor(), goal=lambda l: l.loss() < 0.002)\n",
... ...
@@ -1228,7 +1253,7 @@
1228 1253
    "metadata": {},
1229 1254
    "outputs": [],
1230 1255
    "source": [
1231
-    "learner = adaptive.Learner1D(f, bounds=(-1, 1))\n",
1256
+    "learner = adaptive.Learner1D(peak, bounds=(-1, 1))\n",
1232 1257
     "runner = adaptive.Runner(learner)\n",
1233 1258
     "runner.live_info()\n",
1234 1259
     "runner.live_plot(update_interval=0.1)"
... ...
@@ -1343,9 +1368,8 @@
1343 1368
    "metadata": {},
1344 1369
    "outputs": [],
1345 1370
    "source": [
1346
-    "learner = adaptive.Learner1D(f, bounds=(-1, 1))\n",
1347
-    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.1,\n",
1348
-    "                         log=True)\n",
1371
+    "learner = adaptive.Learner1D(peak, bounds=(-1, 1))\n",
1372
+    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.1, log=True)\n",
1349 1373
     "runner.live_info()"
1350 1374
    ]
1351 1375
   },
... ...
@@ -1364,7 +1388,7 @@
1364 1388
    "metadata": {},
1365 1389
    "outputs": [],
1366 1390
    "source": [
1367
-    "reconstructed_learner = adaptive.Learner1D(f, bounds=learner.bounds)\n",
1391
+    "reconstructed_learner = adaptive.Learner1D(peak, bounds=learner.bounds)\n",
1368 1392
     "adaptive.runner.replay_log(reconstructed_learner, runner.log)"
1369 1393
    ]
1370 1394
   },
... ...
@@ -1381,22 +1405,33 @@
1381 1405
    "cell_type": "markdown",
1382 1406
    "metadata": {},
1383 1407
    "source": [
1384
-    "### Timing functions"
1408
+    "### Adding coroutines"
1385 1409
    ]
1386 1410
   },
1387 1411
   {
1388 1412
    "cell_type": "markdown",
1389 1413
    "metadata": {},
1390 1414
    "source": [
1391
-    "To time the runner you **cannot** simply use \n",
1415
+    "In the following example we'll add a `asyncio.Task` that times the runner.\n",
1416
+    "This is *only* for demonstration purposes because one can simply\n",
1417
+    "check ``runner.elapsed_time()`` or use the ``runner.live_info()``\n",
1418
+    "widget to see the time since the runner has started.\n",
1419
+    "\n",
1420
+    "So let's get on with the example. To time the runner\n",
1421
+    "you **cannot** simply use\n",
1422
+    "\n",
1392 1423
     "```python\n",
1393 1424
     "now = datetime.now()\n",
1394 1425
     "runner = adaptive.Runner(...)\n",
1395 1426
     "print(datetime.now() - now)\n",
1396 1427
     "```\n",
1397
-    "because this will be done immediately. Also blocking the kernel with `while not runner.task.done()` will not work because the runner will not do anything when the kernel is blocked.\n",
1398 1428
     "\n",
1399
-    "Therefore you need to create an `async` function and hook it into the `ioloop` like so:"
1429
+    "because this will be done immediately. Also blocking the kernel with\n",
1430
+    "``while not runner.task.done()`` will not work because the runner will\n",
1431
+    "not do anything when the kernel is blocked.\n",
1432
+    "\n",
1433
+    "Therefore you need to create an ``async`` function and hook it into the\n",
1434
+    "``ioloop`` like so:\n"
1400 1435
    ]
1401 1436
   },
1402 1437
   {
... ...
@@ -1415,10 +1450,11 @@
1415 1450
     "\n",
1416 1451
     "ioloop = asyncio.get_event_loop()\n",
1417 1452
     "\n",
1418
-    "learner = adaptive.Learner1D(f, bounds=(-1, 1))\n",
1419
-    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01)\n",
1453
+    "learner = adaptive.Learner1D(peak, bounds=(-1, 1))\n",
1454
+    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.1)\n",
1420 1455
     "\n",
1421
-    "timer = ioloop.create_task(time(runner))"
1456
+    "timer = ioloop.create_task(time(runner))\n",
1457
+    "runner.live_info()"
1422 1458
    ]
1423 1459
   },
1424 1460
   {
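The `time` coroutine that `ioloop.create_task(time(runner))` refers to is defined in a cell this hunk does not show; a minimal version consistent with how it is used here would be:

```python
from datetime import datetime


async def time(runner):
    # measure how long the runner takes to reach its goal
    # (this deliberately shadows the 'time' module, matching the notebook's naming)
    start = datetime.now()
    await runner.task
    return datetime.now() - start
```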
... ...
@@ -1452,7 +1488,7 @@
1452 1488
     "def f(x):\n",
1453 1489
     "    return x\n",
1454 1490
     "\n",
1455
-    "learner = adaptive.Learner1D(f, (-1, 1))\n",
1491
+    "learner = adaptive.Learner1D(peak, (-1, 1))\n",
1456 1492
     "\n",
1457 1493
     "adaptive.BlockingRunner(learner, goal=lambda: l: l.loss() < 0.1)\n",
1458 1494
     "```\n",
Browse code

fix typo

Bas Nijholt authored on 14/09/2019 14:53:43
Showing 1 changed files
... ...
@@ -113,8 +113,8 @@
113 113
    "metadata": {},
114 114
    "outputs": [],
115 115
    "source": [
116
-    "# The end condition is when the \"loss\" is less than 0.1. In the context of the\n",
117
-    "# 1D learner this means that we will resolve features in 'func' with width 0.1 or wider.\n",
116
+    "# The end condition is when the \"loss\" is less than 0.01. In the context of the\n",
117
+    "# 1D learner this means that we will resolve features in 'func' with width 0.01 or wider.\n",
118 118
     "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01)\n",
119 119
     "runner.live_info()"
120 120
    ]
Browse code

change default values in the notebook

Bas Nijholt authored on 06/09/2019 13:29:08
Showing 1 changed files
... ...
@@ -115,7 +115,7 @@
115 115
    "source": [
116 116
     "# The end condition is when the \"loss\" is less than 0.1. In the context of the\n",
117 117
     "# 1D learner this means that we will resolve features in 'func' with width 0.1 or wider.\n",
118
-    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.05)\n",
118
+    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01)\n",
119 119
     "runner.live_info()"
120 120
    ]
121 121
   },
... ...
@@ -475,7 +475,7 @@
475 475
     "    return x + z**2 + np.exp(-(x**2 + y**2 + z**2 - 0.75**2)**2/a**4)\n",
476 476
     "\n",
477 477
     "learner = adaptive.LearnerND(sphere, bounds=[(-1, 1), (-1, 1), (-1, 1)])\n",
478
-    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01)\n",
478
+    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.001)\n",
479 479
     "runner.live_info()"
480 480
    ]
481 481
   },
Browse code

change urls from GitLab to GitHub

Bas Nijholt authored on 21/01/2019 14:56:03
Showing 1 changed files
... ...
@@ -11,7 +11,7 @@
11 11
    "cell_type": "markdown",
12 12
    "metadata": {},
13 13
    "source": [
14
-    "[`adaptive`](https://gitlab.kwant-project.org/qt/adaptive-evaluation) is a package for adaptively sampling functions with support for parallel evaluation.\n",
14
+    "[`adaptive`](https://github.com/python-adaptive/adaptive) is a package for adaptively sampling functions with support for parallel evaluation.\n",
15 15
     "\n",
16 16
     "This is an introductory notebook that shows some basic use cases.\n",
17 17
     "\n",
Browse code

rename 'function_values' to 'data' because it's more obvious

Because data is now in the 'BaseLearner'

Bas Nijholt authored on 28/10/2018 14:22:27
Showing 1 changed files
... ...
@@ -559,8 +559,8 @@
559 559
    "metadata": {},
560 560
    "outputs": [],
561 561
    "source": [
562
-    "def uniform_sampling_1d(interval, scale, function_values):\n",
563
-    "    # Note that we never use 'function_values'; the loss is just the size of the subdomain\n",
562
+    "def uniform_sampling_1d(interval, scale, data):\n",
563
+    "    # Note that we never use 'data'; the loss is just the size of the subdomain\n",
564 564
     "    x_left, x_right = interval\n",
565 565
     "    x_scale, _ = scale\n",
566 566
     "    dx = (x_right - x_left) / x_scale\n",
Browse code

add 'save' and 'load' to the learners and periodic saving to the Runner

Bas Nijholt authored on 08/10/2018 17:08:34
Showing 1 changed files
... ...
@@ -1000,6 +1000,106 @@
1000 1000
     "# Advanced Topics"
1001 1001
    ]
1002 1002
   },
1003
+  {
1004
+   "cell_type": "markdown",
1005
+   "metadata": {},
1006
+   "source": [
1007
+    "## Saving and loading learners"
1008
+   ]
1009
+  },
1010
+  {
1011
+   "cell_type": "markdown",
1012
+   "metadata": {},
1013
+   "source": [
1014
+    "Every learner has a `save` and `load` method that can be used to save and load **only** the data of a learner.\n",
1015
+    "\n",
1016
+    "There are __two ways__ of naming the files:\n",
1017
+    "1. Using the `fname` argument in `learner.save(fname=...)`\n",
1018
+    "2. Setting the `fname` attribute, like `learner.fname = 'data/example.p` and then `learner.save()`\n",
1019
+    "\n",
1020
+    "The second way _must be used_ when saving the `learner`s of a `BalancingLearner`.\n",
1021
+    "\n",
1022
+    "By default the resulting pickle files are compressed, to turn this off use `learner.save(fname=..., compress=False)`"
1023
+   ]
1024
+  },
1025
+  {
1026
+   "cell_type": "code",
1027
+   "execution_count": null,
1028
+   "metadata": {},
1029
+   "outputs": [],
1030
+   "source": [
1031
+    "# Let's create two learners and run only one.\n",
1032
+    "learner = adaptive.Learner1D(partial(f, wait=False), bounds=(-1, 1))\n",
1033
+    "control = adaptive.Learner1D(partial(f, wait=False), bounds=(-1, 1))\n",
1034
+    "\n",
1035
+    "# Let's only run the learner\n",
1036
+    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01)\n",
1037
+    "runner.live_info()"
1038
+   ]
1039
+  },
1040
+  {
1041
+   "cell_type": "code",
1042
+   "execution_count": null,
1043
+   "metadata": {},
1044
+   "outputs": [],
1045
+   "source": [
1046
+    "fname = 'data/example_file.p'\n",
1047
+    "learner.save(fname)\n",
1048
+    "control.load(fname)\n",
1049
+    "\n",
1050
+    "learner.plot().relabel('saved learner') + control.plot().relabel('loaded learner')"
1051
+   ]
1052
+  },
1053
+  {
1054
+   "cell_type": "markdown",
1055
+   "metadata": {},
1056
+   "source": [
1057
+    "Or just (without saving):"
1058
+   ]
1059
+  },
1060
+  {
1061
+   "cell_type": "code",
1062
+   "execution_count": null,
1063
+   "metadata": {},
1064
+   "outputs": [],
1065
+   "source": [
1066
+    "control = adaptive.Learner1D(partial(f, wait=False), bounds=(-1, 1))\n",
1067
+    "control.copy_from(learner)"
1068
+   ]
1069
+  },
1070
+  {
1071
+   "cell_type": "markdown",
1072
+   "metadata": {},
1073
+   "source": [
1074
+    "One can also periodically save the learner while it's run in a `Runner`. You can use it like:"
1075
+   ]
1076
+  },
1077
+  {
1078
+   "cell_type": "code",
1079
+   "execution_count": null,
1080
+   "metadata": {},
1081
+   "outputs": [],
1082
+   "source": [
1083
+    "def slow_f(x):\n",
1084
+    "    from time import sleep\n",
1085
+    "    sleep(5)\n",
1086
+    "    return x\n",
1087
+    "learner = adaptive.Learner1D(slow_f, bounds=[0, 1])\n",
1088
+    "runner = adaptive.Runner(learner, goal=lambda l: l.npoints > 100)\n",
1089
+    "runner.start_periodic_saving(save_kwargs=dict(fname='data/periodic_example.p'), interval=6)\n",
1090
+    "runner.live_info()"
1091
+   ]
1092
+  },
1093
+  {
1094
+   "cell_type": "code",
1095
+   "execution_count": null,
1096
+   "metadata": {},
1097
+   "outputs": [],
1098
+   "source": [
1099
+    "# See the data after 6 seconds with\n",
1100
+    "!ls -lah data  # only works on macOS and Linux systems"
1101
+   ]
1102
+  },
1003 1103
   {
1004 1104
    "cell_type": "markdown",
1005 1105
    "metadata": {},
Browse code

very small changes to the example notebook

Bas Nijholt authored on 13/10/2018 01:59:14
Showing 1 changed files
... ...
@@ -747,7 +747,7 @@
747 747
     "\n",
748 748
     "# The `cdims` will automatically be set when using `from_product`, so\n",
749 749
     "# `plot()` will return a HoloMap with correctly labeled sliders.\n",
750
-    "learner.plot().overlay('beta').grid()"
750
+    "learner.plot().overlay('beta').grid().select(y=(-1, 3))"
751 751
    ]
752 752
   },
753 753
   {
... ...
@@ -859,7 +859,7 @@
859 859
    "metadata": {},
860 860
    "outputs": [],
861 861
    "source": [
862
-    "def g(x, noise_level=0.1):\n",
862
+    "def F(x, noise_level=0.1):\n",
863 863
     "    return (np.sin(5 * x) * (1 - np.tanh(x ** 2))\n",
864 864
     "            + np.random.randn() * noise_level)"
865 865
    ]
... ...
@@ -870,7 +870,7 @@
870 870
    "metadata": {},
871 871
    "outputs": [],
872 872
    "source": [
873
-    "learner = adaptive.SKOptLearner(g, dimensions=[(-2., 2.)],\n",
873
+    "learner = adaptive.SKOptLearner(F, dimensions=[(-2., 2.)],\n",
874 874
     "                                base_estimator=\"GP\",\n",
875 875
     "                                acq_func=\"gp_hedge\",\n",
876 876
     "                                acq_optimizer=\"lbfgs\",\n",
... ...
@@ -887,7 +887,7 @@
887 887
    "source": [
888 888
     "%%opts Overlay [legend_position='top']\n",
889 889
     "xs = np.linspace(*learner.space.bounds[0])\n",
890
-    "to_learn = hv.Curve((xs, [g(x, 0) for x in xs]), label='to learn')\n",
890
+    "to_learn = hv.Curve((xs, [F(x, 0) for x in xs]), label='to learn')\n",
891 891
     "\n",
892 892
     "runner.live_plot().relabel('prediction', depth=2) * to_learn"
893 893
    ]
Browse code

simplify Learner2D plotting in the notebook

Bas Nijholt authored on 08/10/2018 16:47:03
Showing 1 changed files
... ...
@@ -218,10 +218,7 @@
218 218
    "source": [
219 219
     "def plot(learner):\n",
220 220
     "    plot = learner.plot(tri_alpha=0.2)\n",
221
-    "    title = f'loss={learner._loss:.3f}, n_points={learner.npoints}'\n",
222
-    "    return (plot.Image\n",
223
-    "            + plot.EdgePaths.I.opts(plot=dict(title_format=title))\n",
224
-    "            + plot)\n",
221
+    "    return plot.Image + plot.EdgePaths.I + plot\n",
225 222
     "\n",
226 223
     "runner.live_plot(plotter=plot, update_interval=0.1)"
227 224
    ]
Browse code

move the 'LearnerND' down in the notebook for now

Bas Nijholt authored on 24/09/2018 12:37:26
Showing 1 changed files
... ...
@@ -247,85 +247,6 @@
247 247
     " learner2.plot(n, tri_alpha=0.4) + learner.plot(tri_alpha=0.4)).cols(2)"
248 248
    ]
249 249
   },
250
-  {
251
-   "cell_type": "markdown",
252
-   "metadata": {},
253
-   "source": [
254
-    "# N-dimensional function learner\n",
255
-    "Besides 1 and 2 dimensional functions, we can also learn N-D functions: $\\ f: ℝ^N → ℝ, N \\ge 2$\n",
256
-    "\n",
257
-    "Do keep in mind the speed and [effectiveness](https://en.wikipedia.org/wiki/Curse_of_dimensionality) of the learner drops quickly with increasing number of dimensions."
258
-   ]
259
-  },
260
-  {
261
-   "cell_type": "code",
262
-   "execution_count": null,
263
-   "metadata": {},
264
-   "outputs": [],
265
-   "source": [
266
-    "# this step takes a lot of time, it will finish at about 3300 points, which can take up to 6 minutes\n",
267
-    "def sphere(xyz):\n",
268
-    "    x, y, z = xyz\n",
269
-    "    a = 0.4\n",
270
-    "    return x + z**2 + np.exp(-(x**2 + y**2 + z**2 - 0.75**2)**2/a**4)\n",
271
-    "\n",
272
-    "learner = adaptive.LearnerND(sphere, bounds=[(-1, 1), (-1, 1), (-1, 1)])\n",
273
-    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01)\n",
274
-    "runner.live_info()"
275
-   ]
276
-  },
277
-  {
278
-   "cell_type": "markdown",
279
-   "metadata": {},
280
-   "source": [
281
-    "Let's plot 2D slices of the 3D function"
282
-   ]
283
-  },
284
-  {
285
-   "cell_type": "code",
286
-   "execution_count": null,
287
-   "metadata": {},
288
-   "outputs": [],
289
-   "source": [
290
-    "def plot_cut(x, direction, learner=learner):\n",
291
-    "    cut_mapping = {'xyz'.index(direction): x}\n",
292
-    "    return learner.plot_slice(cut_mapping, n=100)\n",
293
-    "\n",
294
-    "dm = hv.DynamicMap(plot_cut, kdims=['value', 'direction'])\n",
295
-    "dm.redim.values(value=np.linspace(-1, 1), direction=list('xyz'))"
296
-   ]
297
-  },
298
-  {
299
-   "cell_type": "markdown",
300
-   "metadata": {},
301
-   "source": [
302
-    "Or we can plot 1D slices"
303
-   ]
304
-  },
305
-  {
306
-   "cell_type": "code",
307
-   "execution_count": null,
308
-   "metadata": {},
309
-   "outputs": [],
310
-   "source": [
311
-    "%%opts Path {+framewise}\n",
312
-    "def plot_cut(x1, x2, directions, learner=learner):\n",
313
-    "    cut_mapping = {'xyz'.index(d): x for d, x in zip(directions, [x1, x2])}\n",
314
-    "    return learner.plot_slice(cut_mapping)\n",
315
-    "\n",
316
-    "dm = hv.DynamicMap(plot_cut, kdims=['v1', 'v2', 'directions'])\n",
317
-    "dm.redim.values(v1=np.linspace(-1, 1),\n",
318
-    "                v2=np.linspace(-1, 1),\n",
319
-    "                directions=['xy', 'xz', 'yz'])"
320
-   ]
321
-  },
322
-  {
323
-   "cell_type": "markdown",
324
-   "metadata": {},
325
-   "source": [
326
-    "The plots show some wobbles while the original function was smooth, this is a result of the fact that the learner chooses points in 3 dimensions and the simplices are not in the same face as we try to interpolate our lines. However, as always, when you sample more points the graph will become gradually smoother."
327
-   ]
328
-  },
329 250
   {
330 251
    "cell_type": "markdown",
331 252
    "metadata": {},
... ...
@@ -534,6 +455,85 @@
534 455
     "runner.live_plot(update_interval=0.1)"
535 456
    ]
536 457
   },
458
+  {
459
+   "cell_type": "markdown",
460
+   "metadata": {},
461
+   "source": [
462
+    "# N-dimensional function learner (beta)\n",
463
+    "Besides 1 and 2 dimensional functions, we can also learn N-D functions: $\\ f: ℝ^N → ℝ, N \\ge 2$\n",
464
+    "\n",
465
+    "Do keep in mind the speed and [effectiveness](https://en.wikipedia.org/wiki/Curse_of_dimensionality) of the learner drops quickly with increasing number of dimensions."
466
+   ]
467
+  },
468
+  {
469
+   "cell_type": "code",
470
+   "execution_count": null,
471
+   "metadata": {},
472
+   "outputs": [],
473
+   "source": [
474
+    "# this step takes a lot of time, it will finish at about 3300 points, which can take up to 6 minutes\n",
475
+    "def sphere(xyz):\n",
476
+    "    x, y, z = xyz\n",
477
+    "    a = 0.4\n",
478
+    "    return x + z**2 + np.exp(-(x**2 + y**2 + z**2 - 0.75**2)**2/a**4)\n",
479
+    "\n",
480
+    "learner = adaptive.LearnerND(sphere, bounds=[(-1, 1), (-1, 1), (-1, 1)])\n",
481
+    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01)\n",
482
+    "runner.live_info()"
483
+   ]
484
+  },
485
+  {
486
+   "cell_type": "markdown",
487
+   "metadata": {},
488
+   "source": [
489
+    "Let's plot 2D slices of the 3D function"
490
+   ]
491
+  },
492
+  {
493
+   "cell_type": "code",
494
+   "execution_count": null,
495
+   "metadata": {},
496
+   "outputs": [],
497
+   "source": [
498
+    "def plot_cut(x, direction, learner=learner):\n",
499
+    "    cut_mapping = {'xyz'.index(direction): x}\n",
500
+    "    return learner.plot_slice(cut_mapping, n=100)\n",
501
+    "\n",
502
+    "dm = hv.DynamicMap(plot_cut, kdims=['value', 'direction'])\n",
503
+    "dm.redim.values(value=np.linspace(-1, 1), direction=list('xyz'))"
504
+   ]
505
+  },
506
+  {
507
+   "cell_type": "markdown",
508
+   "metadata": {},
509
+   "source": [
510
+    "Or we can plot 1D slices"
511
+   ]
512
+  },
513
+  {
514
+   "cell_type": "code",
515
+   "execution_count": null,
516
+   "metadata": {},
517
+   "outputs": [],
518
+   "source": [
519
+    "%%opts Path {+framewise}\n",
520
+    "def plot_cut(x1, x2, directions, learner=learner):\n",
521
+    "    cut_mapping = {'xyz'.index(d): x for d, x in zip(directions, [x1, x2])}\n",
522
+    "    return learner.plot_slice(cut_mapping)\n",
523
+    "\n",
524
+    "dm = hv.DynamicMap(plot_cut, kdims=['v1', 'v2', 'directions'])\n",
525
+    "dm.redim.values(v1=np.linspace(-1, 1),\n",
526
+    "                v2=np.linspace(-1, 1),\n",
527
+    "                directions=['xy', 'xz', 'yz'])"
528
+   ]
529
+  },
530
+  {
531
+   "cell_type": "markdown",
532
+   "metadata": {},
533
+   "source": [
534
+    "The plots show some wobbles while the original function was smooth, this is a result of the fact that the learner chooses points in 3 dimensions and the simplices are not in the same face as we try to interpolate our lines. However, as always, when you sample more points the graph will become gradually smoother."
535
+   ]
536
+  },
537 537
   {
538 538
    "cell_type": "markdown",
539 539
    "metadata": {},
Browse code

disable logging for the LearnerND in the example notebook

Bas Nijholt authored on 31/08/2018 14:44:14
Showing 1 changed files
... ...
@@ -270,7 +270,7 @@
270 270
     "    return x + z**2 + np.exp(-(x**2 + y**2 + z**2 - 0.75**2)**2/a**4)\n",
271 271
     "\n",
272 272
     "learner = adaptive.LearnerND(sphere, bounds=[(-1, 1), (-1, 1), (-1, 1)])\n",
273
-    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.02, log=True)\n",
273
+    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01)\n",
274 274
     "runner.live_info()"
275 275
    ]
276 276
   },
Browse code

reduce the tolerance of the IntegratorLearner in the example notebook

Bas Nijholt authored on 31/08/2018 14:43:30
Showing 1 changed files
... ...
@@ -442,7 +442,7 @@
442 442
    "source": [
443 443
     "from adaptive.runner import SequentialExecutor\n",
444 444
     "\n",
445
-    "learner = adaptive.IntegratorLearner(f24, bounds=(0, 3), tol=1e-10)\n",
445
+    "learner = adaptive.IntegratorLearner(f24, bounds=(0, 3), tol=1e-8)\n",
446 446
     "\n",
447 447
     "# We use a SequentialExecutor, which runs the function to be learned in *this* process only. This means we don't pay\n",
448 448
     "# the overhead of evaluating the function in another process.\n",
Browse code

rename 'learner.tell'

We now define 'tell' and 'tell_many'. Subclasses may implement
either (or both).

Closes #59.

Joseph Weston authored on 02/07/2018 20:15:07 • Bas Nijholt committed on 18/07/2018 20:35:34
Showing 1 changed files
... ...
@@ -162,7 +162,7 @@
162 162
     "learner2 = adaptive.Learner1D(f, bounds=learner.bounds)\n",
163 163
     "\n",
164 164
     "xs = np.linspace(*learner.bounds, len(learner.data))\n",
165
-    "learner2.tell(xs, map(partial(f, wait=False), xs))\n",
165
+    "learner2.tell_many(xs, map(partial(f, wait=False), xs))\n",
166 166
     "\n",
167 167
     "learner.plot() + learner2.plot()"
168 168
    ]
... ...
@@ -241,7 +241,7 @@
241 241
     "n = int(learner.npoints**0.5)\n",
242 242
     "xs, ys = [np.linspace(*bounds, n) for bounds in learner.bounds]\n",
243 243
     "xys = list(itertools.product(xs, ys))\n",
244
-    "learner2.tell(xys, map(partial(ring, wait=False), xys))\n",
244
+    "learner2.tell_many(xys, map(partial(ring, wait=False), xys))\n",
245 245
     "\n",
246 246
     "(learner2.plot(n).relabel('Homogeneous grid') + learner.plot().relabel('With adaptive') + \n",
247 247
     " learner2.plot(n, tri_alpha=0.4) + learner.plot(tri_alpha=0.4)).cols(2)"
Browse code

change the 'plot_slice' call signature

Bas Nijholt authored on 10/07/2018 22:42:13
Showing 1 changed files
... ...
@@ -252,9 +252,9 @@
252 252
    "metadata": {},
253 253
    "source": [
254 254
     "# N-dimensional function learner\n",
255
-    "Appart from the 1d and 2d learner, we can also learn N-d functions:  $\\ f: ℝ^N → ℝ, N \\ge 2$\n",
255
+    "Besides 1 and 2 dimensional functions, we can also learn N-D functions: $\\ f: ℝ^N → ℝ, N \\ge 2$\n",
256 256
     "\n",
257
-    "Do keep in mind the speed and effectiveness of the learner drops quickly with increasing number of dimensions."
257
+    "Do keep in mind the speed and [effectiveness](https://en.wikipedia.org/wiki/Curse_of_dimensionality) of the learner drops quickly with increasing number of dimensions."
258 258
    ]
259 259
   },
260 260
   {
... ...
@@ -269,19 +269,16 @@
269 269
     "    a = 0.4\n",
270 270
     "    return x + z**2 + np.exp(-(x**2 + y**2 + z**2 - 0.75**2)**2/a**4)\n",
271 271
     "\n",
272
-    "\n",
273 272
     "learner = adaptive.LearnerND(sphere, bounds=[(-1, 1), (-1, 1), (-1, 1)])\n",
274
-    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.02)\n",
273
+    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.02, log=True)\n",
275 274
     "runner.live_info()"
276 275
    ]
277 276
   },
278 277
   {
279
-   "cell_type": "code",
280
-   "execution_count": null,
278
+   "cell_type": "markdown",
281 279
    "metadata": {},
282
-   "outputs": [],
283 280
    "source": [
284
-    "learner.loss()"
281
+    "Let's plot 2D slices of the 3D function"
285 282
    ]
286 283
   },
287 284
   {
... ...
@@ -290,28 +287,19 @@
290 287
    "metadata": {},
291 288
    "outputs": [],
292 289
    "source": [
293
-    "# We plot 2d slices of the 3d method at different values for x\n",
294
-    "(learner.plot_slice((0.0, None, None), n=100).relabel(\"x=0\") + \n",
295
-    " learner.plot_slice((0.2, None, None), n=100).relabel(\"x=0.2\") +\n",
296
-    " learner.plot_slice((0.4, None, None), n=100).relabel(\"x=0.4\") + \n",
297
-    " learner.plot_slice((0.6, None, None), n=100).relabel(\"x=0.6\") + \n",
298
-    " learner.plot_slice((0.8, None, None), n=100).relabel(\"x=0.8\")\n",
299
-    ").cols(2)"
290
+    "def plot_cut(x, direction, learner=learner):\n",
291
+    "    cut_mapping = {'xyz'.index(direction): x}\n",
292
+    "    return learner.plot_slice(cut_mapping, n=100)\n",
293
+    "\n",
294
+    "dm = hv.DynamicMap(plot_cut, kdims=['value', 'direction'])\n",
295
+    "dm.redim.values(value=np.linspace(-1, 1), direction=list('xyz'))"
300 296
    ]
301 297
   },
302 298
   {
303
-   "cell_type": "code",
304
-   "execution_count": null,
299
+   "cell_type": "markdown",
305 300
    "metadata": {},
306
-   "outputs": [],
307 301
    "source": [
308
-    "# We can also plot slices in a different direction\n",
309
-    "(learner.plot_slice((None, None, 0.0), n=100).relabel(\"z=0\") + \n",
310
-    " learner.plot_slice((None, None, 0.2), n=100).relabel(\"z=0.2\") +\n",
311
-    " learner.plot_slice((None, None, 0.4), n=100).relabel(\"z=0.4\") + \n",
312
-    " learner.plot_slice((None, None, 0.6), n=100).relabel(\"z=0.6\") + \n",
313
-    " learner.plot_slice((None, None, 0.8), n=100).relabel(\"z=0.8\")\n",
314
-    ").cols(2)"
302
+    "Or we can plot 1D slices"
315 303
    ]
316 304
   },
317 305
   {
... ...
@@ -320,12 +308,15 @@
320 308
    "metadata": {},
321 309
    "outputs": [],
322 310
    "source": [
323
-    "# Or we can plot 1d slices\n",
324
-    "(learner.plot_slice((None, 0.0, 0.0), n=100).relabel(\"y=0 z=0\") + \n",
325
-    " learner.plot_slice((None, 0.0, 0.5), n=100).relabel(\"y=0 z=0.5\") +\n",
326
-    " learner.plot_slice((0.0, None, 0.0), n=100).relabel(\"x=0 z=0\") + \n",
327
-    " learner.plot_slice((0.0, 0.5, None), n=100).relabel(\"x=0.5 y=0\")\n",
328
-    ").cols(2)"
311
+    "%%opts Path {+framewise}\n",
312
+    "def plot_cut(x1, x2, directions, learner=learner):\n",
313
+    "    cut_mapping = {'xyz'.index(d): x for d, x in zip(directions, [x1, x2])}\n",
314
+    "    return learner.plot_slice(cut_mapping)\n",
315
+    "\n",
316
+    "dm = hv.DynamicMap(plot_cut, kdims=['v1', 'v2', 'directions'])\n",
317
+    "dm.redim.values(v1=np.linspace(-1, 1),\n",
318
+    "                v2=np.linspace(-1, 1),\n",
319
+    "                directions=['xy', 'xz', 'yz'])"
329 320
    ]
330 321
   },
331 322
   {
Browse code

add comment about effectiveness

Jorn Hoofwijk authored on 08/07/2018 21:48:53 • Bas Nijholt committed on 11/07/2018 05:27:21
Showing 1 changed files
... ...
@@ -254,7 +254,7 @@
254 254
     "# N-dimensional function learner\n",
255 255
     "Appart from the 1d and 2d learner, we can also learn N-d functions:  $\\ f: ℝ^N → ℝ, N \\ge 2$\n",
256 256
     "\n",
257
-    "Do keep in mind the speed of the learner drops quickly with increasing number of dimensions."
257
+    "Do keep in mind the speed and effectiveness of the learner drops quickly with increasing number of dimensions."
258 258
    ]
259 259
   },
260 260
   {
... ...
@@ -264,12 +264,12 @@
264 264
    "outputs": [],
265 265
    "source": [
266 266
     "# this step takes a lot of time, it will finish at about 3300 points, which can take up to 6 minutes\n",
267
-    "\n",
268 267
     "def sphere(xyz):\n",
269 268
     "    x, y, z = xyz\n",
270 269
     "    a = 0.4\n",
271 270
     "    return x + z**2 + np.exp(-(x**2 + y**2 + z**2 - 0.75**2)**2/a**4)\n",
272 271
     "\n",
272
+    "\n",
273 273
     "learner = adaptive.LearnerND(sphere, bounds=[(-1, 1), (-1, 1), (-1, 1)])\n",
274 274
     "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.02)\n",
275 275
     "runner.live_info()"
Browse code

add learnerND to the jupyter notebook

Jorn Hoofwijk authored on 08/07/2018 17:05:25 • Bas Nijholt committed on 11/07/2018 05:27:21
Showing 1 changed files
... ...
@@ -247,6 +247,94 @@
247 247
     " learner2.plot(n, tri_alpha=0.4) + learner.plot(tri_alpha=0.4)).cols(2)"
248 248
    ]
249 249
   },
250
+  {
251
+   "cell_type": "markdown",
252
+   "metadata": {},
253
+   "source": [
254
+    "# N-dimensional function learner\n",
255
+    "Appart from the 1d and 2d learner, we can also learn N-d functions:  $\\ f: ℝ^N → ℝ, N \\ge 2$\n",
256
+    "\n",
257
+    "Do keep in mind the speed of the learner drops quickly with increasing number of dimensions."
258
+   ]
259
+  },
260
+  {
261
+   "cell_type": "code",
262
+   "execution_count": null,
263
+   "metadata": {},
264
+   "outputs": [],
265
+   "source": [
266
+    "# this step takes a lot of time, it will finish at about 3300 points, which can take up to 6 minutes\n",
267
+    "\n",
268
+    "def sphere(xyz):\n",
269
+    "    x, y, z = xyz\n",
270
+    "    a = 0.4\n",
271
+    "    return x + z**2 + np.exp(-(x**2 + y**2 + z**2 - 0.75**2)**2/a**4)\n",
272
+    "\n",
273
+    "learner = adaptive.LearnerND(sphere, bounds=[(-1, 1), (-1, 1), (-1, 1)])\n",
274
+    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.02)\n",
275
+    "runner.live_info()"
276
+   ]
277
+  },
278
+  {
279
+   "cell_type": "code",
280
+   "execution_count": null,
281
+   "metadata": {},
282
+   "outputs": [],
283
+   "source": [
284
+    "learner.loss()"
285
+   ]
286
+  },
287
+  {
288
+   "cell_type": "code",
289
+   "execution_count": null,
290
+   "metadata": {},
291
+   "outputs": [],
292
+   "source": [
293
+    "# We plot 2d slices of the 3d method at different values for x\n",
294
+    "(learner.plot_slice((0.0, None, None), n=100).relabel(\"x=0\") + \n",
295
+    " learner.plot_slice((0.2, None, None), n=100).relabel(\"x=0.2\") +\n",
296
+    " learner.plot_slice((0.4, None, None), n=100).relabel(\"x=0.4\") + \n",
297
+    " learner.plot_slice((0.6, None, None), n=100).relabel(\"x=0.6\") + \n",
298
+    " learner.plot_slice((0.8, None, None), n=100).relabel(\"x=0.8\")\n",
299
+    ").cols(2)"
300
+   ]
301
+  },
302
+  {
303
+   "cell_type": "code",
304
+   "execution_count": null,
305
+   "metadata": {},
306
+   "outputs": [],
307
+   "source": [
308
+    "# We can also plot slices in a different direction\n",
309
+    "(learner.plot_slice((None, None, 0.0), n=100).relabel(\"z=0\") + \n",
310
+    " learner.plot_slice((None, None, 0.2), n=100).relabel(\"z=0.2\") +\n",
311
+    " learner.plot_slice((None, None, 0.4), n=100).relabel(\"z=0.4\") + \n",
312
+    " learner.plot_slice((None, None, 0.6), n=100).relabel(\"z=0.6\") + \n",
313
+    " learner.plot_slice((None, None, 0.8), n=100).relabel(\"z=0.8\")\n",
314
+    ").cols(2)"
315
+   ]
316
+  },
317
+  {
318
+   "cell_type": "code",
319
+   "execution_count": null,
320
+   "metadata": {},
321
+   "outputs": [],
322
+   "source": [
323
+    "# Or we can plot 1d slices\n",
324
+    "(learner.plot_slice((None, 0.0, 0.0), n=100).relabel(\"y=0 z=0\") + \n",
325
+    " learner.plot_slice((None, 0.0, 0.5), n=100).relabel(\"y=0 z=0.5\") +\n",
326
+    " learner.plot_slice((0.0, None, 0.0), n=100).relabel(\"x=0 z=0\") + \n",
327
+    " learner.plot_slice((0.0, 0.5, None), n=100).relabel(\"x=0.5 y=0\")\n",
328
+    ").cols(2)"
329
+   ]
330
+  },
331
+  {
332
+   "cell_type": "markdown",
333
+   "metadata": {},
334
+   "source": [
335
+    "The plots show some wobbles while the original function was smooth, this is a result of the fact that the learner chooses points in 3 dimensions and the simplices are not in the same face as we try to interpolate our lines. However, as always, when you sample more points the graph will become gradually smoother."
336
+   ]
337
+  },
250 338
   {
251 339
    "cell_type": "markdown",
252 340
    "metadata": {},
Browse code

add LearnerND to adaptive in standard import
add plot_slice function to plot a slice of the evaluated function
adapt the ask function to return a point

Jorn Hoofwijk authored on 27/05/2018 17:54:04 • Bas Nijholt committed on 11/07/2018 05:27:20
Showing 1 changed files
... ...
@@ -780,9 +780,7 @@
780 780
   {
781 781
    "cell_type": "code",
782 782
    "execution_count": null,
783
-   "metadata": {
784
-    "collapsed": true
785
-   },
783
+   "metadata": {},
786 784
    "outputs": [],
787 785
    "source": [
788 786
     "def g(x, noise_level=0.1):\n",
Browse code

Merge branch 'feature/skopt' into 'master'

Feature/skopt

See merge request qt/adaptive!64

Joseph Weston authored on 20/06/2018 14:50:23
Showing 0 changed files
Browse code

add a comment about 'cdims' in the notebook

Bas Nijholt authored on 14/06/2018 00:17:20
Showing 1 changed files
... ...
@@ -669,6 +669,8 @@
669 669
     "\n",
670 670
     "runner = adaptive.BlockingRunner(learner, goal=lambda l: l.loss() < 0.01)\n",
671 671
     "\n",
672
+    "# The `cdims` will automatically be set when using `from_product`, so\n",
673
+    "# `plot()` will return a HoloMap with correctly labeled sliders.\n",
672 674
     "learner.plot().overlay('beta').grid()"
673 675
    ]
674 676
   },
Browse code

store '_cdim_default' inside the 'BalancingLearner'

Bas Nijholt authored on 13/06/2018 19:37:17
Showing 1 changed files
... ...
@@ -669,7 +669,7 @@
669 669
     "\n",
670 670
     "runner = adaptive.BlockingRunner(learner, goal=lambda l: l.loss() < 0.01)\n",
671 671
     "\n",
672
-    "learner.plot(cdims=adaptive.utils.named_product(**combos)).overlay('beta').grid()"
672
+    "learner.plot().overlay('beta').grid()"
673 673
    ]
674 674
   },
675 675
   {
Browse code

rename 'from_combos' -> 'from_product'

Bas Nijholt authored on 13/06/2018 10:17:37
Showing 1 changed files
... ...
@@ -645,7 +645,7 @@
645 645
    "cell_type": "markdown",
646 646
    "metadata": {},
647 647
    "source": [
648
-    "Often one wants to create a set of `learner`s for a cartesian product of parameters. For that particular case we've added a `classmethod` called `from_combos`. See how it works below"
648
+    "Often one wants to create a set of `learner`s for a cartesian product of parameters. For that particular case we've added a `classmethod` called `from_product`. See how it works below"
649 649
    ]
650 650
   },
651 651
   {
... ...
@@ -664,7 +664,7 @@
664 664
     "    'beta': np.linspace(0, 1, 5),\n",
665 665
     "}\n",
666 666
     "\n",
667
-    "learner = adaptive.BalancingLearner.from_combos(\n",
667
+    "learner = adaptive.BalancingLearner.from_product(\n",
668 668
     "    jacobi, adaptive.Learner1D, dict(bounds=(0, 1)), combos)\n",
669 669
     "\n",
670 670
     "runner = adaptive.BlockingRunner(learner, goal=lambda l: l.loss() < 0.01)\n",
Browse code

add example usage of 'BalancingLearner.from_combos'

Bas Nijholt authored on 13/06/2018 01:51:06
Showing 1 changed files
... ...
@@ -641,6 +641,37 @@
641 641
     "runner.live_plot(plotter=plotter, update_interval=0.1)"
642 642
    ]
643 643
   },
644
+  {
645
+   "cell_type": "markdown",
646
+   "metadata": {},
647
+   "source": [
648
+    "Often one wants to create a set of `learner`s for a cartesian product of parameters. For that particular case we've added a `classmethod` called `from_combos`. See how it works below"
649
+   ]
650
+  },
651
+  {
652
+   "cell_type": "code",
653
+   "execution_count": null,
654
+   "metadata": {},
655
+   "outputs": [],
656
+   "source": [
657
+    "from scipy.special import eval_jacobi\n",
658
+    "\n",
659
+    "def jacobi(x, n, alpha, beta): return eval_jacobi(n, alpha, beta, x)\n",
660
+    "\n",
661
+    "combos = {\n",
662
+    "    'n': [1, 2, 4, 8],\n",
663
+    "    'alpha': np.linspace(0, 2, 3),\n",
664
+    "    'beta': np.linspace(0, 1, 5),\n",
665
+    "}\n",
666
+    "\n",
667
+    "learner = adaptive.BalancingLearner.from_combos(\n",
668
+    "    jacobi, adaptive.Learner1D, dict(bounds=(0, 1)), combos)\n",
669
+    "\n",
670
+    "runner = adaptive.BlockingRunner(learner, goal=lambda l: l.loss() < 0.01)\n",
671
+    "\n",
672
+    "learner.plot(cdims=adaptive.utils.named_product(**combos)).overlay('beta').grid()"
673
+   ]
674
+  },
644 675
   {
645 676
    "cell_type": "markdown",
646 677
    "metadata": {},
Browse code

overlay skopt plot with noiseless function

Bas Nijholt authored on 12/06/2018 18:49:11
Showing 1 changed files
... ...
@@ -778,7 +778,11 @@
778 778
    "metadata": {},
779 779
    "outputs": [],
780 780
    "source": [
781
-    "runner.live_plot()"
781
+    "%%opts Overlay [legend_position='top']\n",
782
+    "xs = np.linspace(*learner.space.bounds[0])\n",
783
+    "to_learn = hv.Curve((xs, [g(x, 0) for x in xs]), label='to learn')\n",
784
+    "\n",
785
+    "runner.live_plot().relabel('prediction', depth=2) * to_learn"
782 786
    ]
783 787
   },
784 788
   {
Browse code

correctly calculate predicted model and error

Add a curve to the SKOptLearner plotter that shows the
predicted model. Update invocation of SKOptLearner in
example notebook to use better defaults.

Joseph Weston authored on 24/05/2018 14:59:20
Showing 1 changed files
... ...
@@ -764,9 +764,11 @@
764 764
    "outputs": [],
765 765
    "source": [
766 766
     "learner = adaptive.SKOptLearner(g, dimensions=[(-2., 2.)],\n",
767
-    "                                base_estimator=\"ET\",\n",
768
-    "                                acq_optimizer=\"sampling\")\n",
769
-    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 1)\n",
767
+    "                                base_estimator=\"GP\",\n",
768
+    "                                acq_func=\"gp_hedge\",\n",
769
+    "                                acq_optimizer=\"lbfgs\",\n",
770
+    "                               )\n",
771
+    "runner = adaptive.Runner(learner, ntasks=1, goal=lambda l: l.npoints > 40)\n",
770 772
     "runner.live_info()"
771 773
    ]
772 774
   },
Browse code

add section on scikit-optimize learner to example notebook

Joseph Weston authored on 23/05/2018 17:31:59
Showing 1 changed files
... ...
@@ -726,6 +726,59 @@
726 726
     "learner.extra_data"
727 727
    ]
728 728
   },
729
+  {
730
+   "cell_type": "markdown",
731
+   "metadata": {},
732
+   "source": [
733
+    "# `Scikit-Optimize`"
734
+   ]
735
+  },
736
+  {
737
+   "cell_type": "markdown",
738
+   "metadata": {},
739
+   "source": [
740
+    "We have wrapped the `Optimizer` class from [`scikit-optimize`](https://github.com/scikit-optimize/scikit-optimize), to show how existing libraries can be integrated with `adaptive`.\n",
741
+    "\n",
742
+    "The `SKOptLearner` attempts to \"optimize\" the given function `g` (i.e. find the global minimum of `g` in the window of interest).\n",
743
+    "\n",
744
+    "Here we use the same example as in the `scikit-optimize` [tutorial](https://github.com/scikit-optimize/scikit-optimize/blob/master/examples/ask-and-tell.ipynb). Although `SKOptLearner` can optimize functions of arbitrary dimensionality, we can only plot the learner if a 1D function is being learned."
745
+   ]
746
+  },
747
+  {
748
+   "cell_type": "code",
749
+   "execution_count": null,
750
+   "metadata": {
751
+    "collapsed": true
752
+   },
753
+   "outputs": [],
754
+   "source": [
755
+    "def g(x, noise_level=0.1):\n",
756
+    "    return (np.sin(5 * x) * (1 - np.tanh(x ** 2))\n",
757
+    "            + np.random.randn() * noise_level)"
758
+   ]
759
+  },
760
+  {
761
+   "cell_type": "code",
762
+   "execution_count": null,
763
+   "metadata": {},
764
+   "outputs": [],
765
+   "source": [
766
+    "learner = adaptive.SKOptLearner(g, dimensions=[(-2., 2.)],\n",
767
+    "                                base_estimator=\"ET\",\n",
768
+    "                                acq_optimizer=\"sampling\")\n",
769
+    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 1)\n",
770
+    "runner.live_info()"
771
+   ]
772
+  },
773
+  {
774
+   "cell_type": "code",
775
+   "execution_count": null,
776
+   "metadata": {},
777
+   "outputs": [],
778
+   "source": [
779
+    "runner.live_plot()"
780
+   ]
781
+  },
729 782
   {
730 783
    "cell_type": "markdown",
731 784
    "metadata": {
Browse code

give 'tell' the functionality to handle both single values and iterables

Bas Nijholt authored on 24/05/2018 06:09:43
Showing 1 changed files
... ...
@@ -162,7 +162,7 @@
162 162
     "learner2 = adaptive.Learner1D(f, bounds=learner.bounds)\n",
163 163
     "\n",
164 164
     "xs = np.linspace(*learner.bounds, len(learner.data))\n",
165
-    "learner2.add_data(xs, map(partial(f, wait=False), xs))\n",
165
+    "learner2.tell(xs, map(partial(f, wait=False), xs))\n",
166 166
     "\n",
167 167
     "learner.plot() + learner2.plot()"
168 168
    ]
... ...
@@ -241,7 +241,7 @@
241 241
     "n = int(learner.npoints**0.5)\n",
242 242
     "xs, ys = [np.linspace(*bounds, n) for bounds in learner.bounds]\n",
243 243
     "xys = list(itertools.product(xs, ys))\n",
244
-    "learner2.add_data(xys, map(partial(ring, wait=False), xys))\n",
244
+    "learner2.tell(xys, map(partial(ring, wait=False), xys))\n",
245 245
     "\n",
246 246
     "(learner2.plot(n).relabel('Homogeneous grid') + learner.plot().relabel('With adaptive') + \n",
247 247
     " learner2.plot(n, tri_alpha=0.4) + learner.plot(tri_alpha=0.4)).cols(2)"
Browse code

remove metadata from learner.ipynb

Bas Nijholt authored on 12/04/2018 14:27:43
Showing 1 changed files
... ...
@@ -1200,23 +1200,9 @@
1200 1200
   }
1201 1201
  ],
1202 1202
  "metadata": {
1203
-  "anaconda-cloud": {},
1204
-  "kernelspec": {
1205
-   "display_name": "Python 3",
1206
-   "language": "python",
1207
-   "name": "python3"
1208
-  },
1209 1203
   "language_info": {
1210
-   "codemirror_mode": {
1211
-    "name": "ipython",
1212
-    "version": 3
1213
-   },
1214
-   "file_extension": ".py",
1215
-   "mimetype": "text/x-python",
1216 1204
    "name": "python",
1217
-   "nbconvert_exporter": "python",
1218
-   "pygments_lexer": "ipython3",
1219
-   "version": "3.6.3"
1205
+   "pygments_lexer": "ipython3"
1220 1206
   }
1221 1207
  },
1222 1208
  "nbformat": 4,
Browse code

mention the default executors on Windows and Unix-like systems in the notebook

Bas Nijholt authored on 22/02/2018 22:00:52
Showing 1 changed files
... ...
@@ -102,7 +102,9 @@
102 102
    "source": [
103 103
     "Next we create a \"runner\" that will request points from the learner and evaluate 'f' on them.\n",
104 104
     "\n",
105
-    "By default the runner will evaluate the points in parallel using local processes ([`concurrent.futures.ProcessPoolExecutor`](https://docs.python.org/3/library/concurrent.futures.html#processpoolexecutor))."
105
+    "By default on Unix-like systems the runner will evaluate the points in parallel using local processes ([`concurrent.futures.ProcessPoolExecutor`](https://docs.python.org/3/library/concurrent.futures.html#processpoolexecutor)).\n",
106
+    "\n",
107
+    "On Windows systems the runner will try to use a [`distributed.Client`](https://distributed.readthedocs.io/en/latest/client.html) if [`distributed`](https://distributed.readthedocs.io/en/latest/index.html) is installed. A `ProcessPoolExecutor` cannot be used on Windows for reasons."
106 108
    ]
107 109
   },
108 110
   {
... ...
@@ -751,7 +753,7 @@
751 753
    "cell_type": "markdown",
752 754
    "metadata": {},
753 755
    "source": [
754
-    "By default `adaptive.Runner` creates a `ProcessPoolExecutor`, but you can also pass one explicitly e.g. to limit the number of workers:"
756
+    "On Unix-like systems by default `adaptive.Runner` creates a `ProcessPoolExecutor`, but you can also pass one explicitly e.g. to limit the number of workers:"
755 757
    ]
756 758
   },
757 759
   {
... ...
@@ -797,7 +799,9 @@
797 799
    "cell_type": "markdown",
798 800
    "metadata": {},
799 801
    "source": [
800
-    "### [`distributed`](https://distributed.readthedocs.io/en/latest/)"
802
+    "### [`distributed`](https://distributed.readthedocs.io/en/latest/)\n",
803
+    "\n",
804
+    "On Windows by default `adaptive.Runner` uses a `distributed.Client`."
801 805
    ]
802 806
   },
803 807
   {
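
Making the platform-dependent default explicit can avoid surprises. A sketch under the assumptions stated in the cell above (the worker count, stand-in function, and loss goal are illustrative):

```python
import sys
from concurrent.futures import ProcessPoolExecutor
import adaptive

def f(x):
    return x  # stand-in for the notebook's 'peak'

learner = adaptive.Learner1D(f, bounds=(-1, 1))

if sys.platform == "win32":
    import distributed
    executor = distributed.Client()  # needs 'distributed' installed
else:
    executor = ProcessPoolExecutor(max_workers=4)

runner = adaptive.Runner(learner, executor=executor, goal=lambda l: l.loss() < 0.05)
```
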
Browse code

change plots settings in the example notebook

Bas Nijholt authored on 20/02/2018 17:03:49
Showing 1 changed files
... ...
@@ -440,8 +440,17 @@
440 440
    "outputs": [],
441 441
    "source": [
442 442
     "learner = adaptive.Learner1D(f_levels, bounds=(-1, 1))\n",
443
-    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.05)\n",
444
-    "runner.live_plot()"
443
+    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01)\n",
444
+    "runner.live_info()"
445
+   ]
446
+  },
447
+  {
448
+   "cell_type": "code",
449
+   "execution_count": null,
450
+   "metadata": {},
451
+   "outputs": [],
452
+   "source": [
453
+    "runner.live_plot(update_interval=0.1)"
445 454
    ]
446 455
   },
447 456
   {
... ...
@@ -860,7 +869,7 @@
860 869
    "metadata": {},
861 870
    "outputs": [],
862 871
    "source": [
863
-    "learner = adaptive.Learner1D(f, bounds=(-1, 1))\n",
872
+    "learner = adaptive.Learner1D(partial(f, wait=False), bounds=(-1, 1))\n",
864 873
     "adaptive.BlockingRunner(learner, goal=lambda l: l.loss() < 0.005)\n",
865 874
     "# This will only get run after the runner has finished\n",
866 875
     "learner.plot()"
Browse code

several small fixes in the example notebook

Bas Nijholt authored on 19/02/2018 19:07:20
Showing 1 changed files
... ...
@@ -608,11 +608,11 @@
608 608
    "metadata": {},
609 609
    "outputs": [],
610 610
    "source": [
611
-    "def f(x, offset=0):\n",
611
+    "def h(x, offset=0):\n",
612 612
     "    a = 0.01\n",
613 613
     "    return x + a**2 / (a**2 + (x - offset)**2)\n",
614 614
     "\n",
615
-    "learners = [adaptive.Learner1D(partial(f, offset=random.uniform(-1, 1)),\n",
615
+    "learners = [adaptive.Learner1D(partial(h, offset=random.uniform(-1, 1)),\n",
616 616
     "            bounds=(-1, 1)) for i in range(10)]\n",
617 617
     "\n",
618 618
     "bal_learner = adaptive.BalancingLearner(learners)\n",
... ...
@@ -892,7 +892,7 @@
892 892
    "metadata": {},
893 893
    "outputs": [],
894 894
    "source": [
895
-    "learner = adaptive.Learner1D(f, bounds=(-1, 1))\n",
895
+    "learner = adaptive.Learner1D(partial(f, wait=False), bounds=(-1, 1))\n",
896 896
     "\n",
897 897
     "# blocks until completion\n",
898 898
     "adaptive.runner.simple(learner, goal=lambda l: l.loss() < 0.002)\n",
... ...
@@ -1095,7 +1095,7 @@
1095 1095
    "metadata": {},
1096 1096
    "outputs": [],
1097 1097
    "source": [
1098
-    "learner.plot().opts(style=dict(size=6)) * reconstructed_learner.plot()"
1098
+    "learner.plot().Scatter.I.opts(style=dict(size=6)) * reconstructed_learner.plot()"
1099 1099
    ]
1100 1100
   },
1101 1101
   {
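
The `.Scatter.I` in the fix above uses holoviews' attribute indexing: elements of an overlay are grouped by type and unnamed ones are numbered with Roman numerals, so the size option lands on the sampled points only. A hedged sketch with a hypothetical overlay (the era-appropriate `opts(style=...)` API is assumed):

```python
import holoviews as hv

# A stand-in for what learner.plot() returns: a Scatter overlaid with a Curve.
overlay = hv.Scatter([(0, 0), (1, 1)]) * hv.Curve([(0, 0), (1, 1)])
styled = overlay.Scatter.I.opts(style=dict(size=6))  # restyles only the Scatter
```
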
Browse code

reword some sections of the custom loss section

Also update references to the loss functions defined in adaptive.

Joseph Weston authored on 19/02/2018 18:39:18
Showing 1 changed files
... ...
@@ -448,22 +448,22 @@
448 448
    "cell_type": "markdown",
449 449
    "metadata": {},
450 450
    "source": [
451
-    "# Custom point choosing logic for 1D and 2D"
451
+    "# Custom adaptive logic for 1D and 2D"
452 452
    ]
453 453
   },
454 454
   {
455 455
    "cell_type": "markdown",
456 456
    "metadata": {},
457 457
    "source": [
458
-    "The `Learner1D` and `Learner2D` implement a certain logic for chosing points based on the existing data.\n",
458
+    "`Learner1D` and `Learner2D` both work on the principle of subdividing their domain into subdomains, and assigning a property to each subdomain, which we call the *loss*. The algorithm for choosing the best place to evaluate our function is then simply *take the subdomain with the largest loss and add a point in the center, creating new subdomains around this point*. \n",
459 459
     "\n",
460
-    "For some functions this default stratagy might not work, for example you'll run into trouble when you learn functions that contain divergencies.\n",
460
+    "The *loss function* that defines the loss per subdomain is the canonical place to define what regions of the domain are \"interesting\".\n",
461
+    "The default loss function for `Learner1D` and `Learner2D` is sufficient for a wide range of common cases, but it is by no means a panacea. For example, the default loss function will tend to get stuck on divergences.\n",
461 462
     "\n",
462
-    "Both the `Learner1D` and `Learner2D` allow you to use a custom loss function, which you specify as an argument in the learner. See the doc-string of `Learner1D` and `Learner2D` to see what `loss_per_interval` and `loss_per_triangle` need to return and take as input.\n",
463
+    "Both the `Learner1D` and `Learner2D` allow you to specify a *custom loss function*. Below we illustrate how you would go about writing your own loss function. The documentation for `Learner1D` and `Learner2D` specifies the signature that your loss function needs to have in order for it to work with `adaptive`.\n",
463 464
     "\n",
464
-    "As an example we implement a homogeneous sampling strategy (which of course is not the best way of handling divergencies).\n",
465 465
     "\n",
466
-    "Note that both these loss functions are also available from `adaptive.learner.learner1d.uniform_sampling` and `adaptive.learner.learner2d.uniform_sampling`."
466
+    "Say we want to properly sample a function that contains divergences. A simple (but naive) strategy is to *uniformly* sample the domain:\n"
467 467
    ]
468 468
   },
469 469
   {
... ...
@@ -473,6 +473,7 @@
473 473
    "outputs": [],
474 474
    "source": [
475 475
     "def uniform_sampling_1d(interval, scale, function_values):\n",
476
+    "    # Note that we never use 'function_values'; the loss is just the size of the subdomain\n",
476 477
     "    x_left, x_right = interval\n",
477 478
     "    x_scale, _ = scale\n",
478 479
     "    dx = (x_right - x_left) / x_scale\n",
... ...
@@ -492,7 +493,9 @@
492 493
    "metadata": {},
493 494
    "outputs": [],
494 495
    "source": [
495
-    "%%opts EdgePaths (color='w')\n",
496
+    "%%opts EdgePaths (color='w') Image [logz=True]\n",
497
+    "\n",
498
+    "from adaptive.runner import SequentialExecutor\n",
496 499
     "\n",
497 500
     "def uniform_sampling_2d(ip):\n",
498 501
     "    from adaptive.learner.learner2D import areas\n",
... ...
@@ -504,18 +507,32 @@
504 507
     "    return 1 / (x**2 + y**2)\n",
505 508
     "\n",
506 509
     "learner = adaptive.Learner2D(f_divergent_2d, [(-1, 1), (-1, 1)], loss_per_triangle=uniform_sampling_2d)\n",
507
-    "runner = adaptive.BlockingRunner(learner, goal=lambda l: l.loss() < 0.02)\n",
508
-    "learner.plot(tri_alpha=0.3)"
510
+    "\n",
511
+    "# this takes a while, so use the async Runner so we know *something* is happening\n",
512
+    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.02)\n",
513
+    "runner.live_info()\n",
514
+    "runner.live_plot(update_interval=0.2,\n",
515
+    "                 plotter=lambda l: l.plot(tri_alpha=0.3).relabel('1 / (x^2 + y^2) in log scale'))"
516
+   ]
517
+  },
518
+  {
519
+   "cell_type": "markdown",
520
+   "metadata": {},
521
+   "source": [
522
+    "The uniform sampling strategy is a common case to benchmark against, so the 1D and 2D versions are included in `adaptive` as `adaptive.learner.learner1D.uniform_sampling` and `adaptive.learner.learner2D.uniform_sampling`."
509 523
    ]
510 524
   },
511 525
   {
512 526
    "cell_type": "markdown",
513 527
    "metadata": {},
514 528
    "source": [
515
-    "#### Doing better\n",
516
-    "Of course we can improve on the the above result, since just homogeneous sampling is usually the dumbest way to sample.\n",
529
+    "### Doing better\n",
530
+    "\n",
531
+    "Of course, using `adaptive` for uniform sampling is a bit of a waste!\n",
532
+    "\n",
533
+    "Let's see if we can do a bit better. Below we define a loss per subdomain that scales with the degree of nonlinearity of the function (this is very similar to the default loss function for `Learner2D`), but which is 0 for subdomains smaller than a certain area, and infinite for subdomains larger than a certain area.\n",
517 534
     "\n",
518
-    "The loss function (slightly more general version) below is available as `adaptive.learner.learner2D.resolution_loss`."
535
+    "A loss defined in this way means that the adaptive algorithm will first prioritise subdomains that are too large (infinite loss). After all subdomains are appropriately small it will prioritise places where the function is very nonlinear, but will ignore subdomains that are too small (0 loss)."
519 536
    ]
520 537
   },
521 538
   {
... ...
@@ -529,13 +546,17 @@
529 546
     "def resolution_loss(ip, min_distance=0, max_distance=1):\n",
530 547
     "    \"\"\"min_distance and max_distance should be in between 0 and 1\n",
531 548
     "    because the total area is normalized to 1.\"\"\"\n",
549
+    "\n",
532 550
     "    from adaptive.learner.learner2D import areas, deviations\n",
551
+    "\n",
533 552
     "    A = areas(ip)\n",
534 553
     "\n",
535
-    "    # `deviations` returns an array of the same length as the\n",
536
-    "    # vector your function to be learned returns, so 1 in this case.\n",
537
-    "    # Its value represents the deviation from the linear estimate based\n",
538
-    "    # on the gradients inside each triangle.\n",
554
+    "    # 'deviations' returns an array of shape '(n, len(ip))', where\n",
555
+    "    # 'n' is the  is the dimension of the output of the learned function\n",
556
+    "    # In this case we know that the learned function returns a scalar,\n",
557
+    "    # so 'deviations' returns an array of shape '(1, len(ip))'.\n",
558
+    "    # It represents the deviation of the function value from a linear estimate\n",
559
+    "    # over each triangular subdomain.\n",
539 560
     "    dev = deviations(ip)[0]\n",
540 561
     "    \n",
541 562
     "    # we add terms of the same dimension: dev == [distance], A == [distance**2]\n",
... ...
@@ -556,6 +577,15 @@
556 577
     "learner.plot(tri_alpha=0.3).relabel('1 / (x^2 + y^2) in log scale')"
557 578
    ]
558 579
   },
580
+  {
581
+   "cell_type": "markdown",
582
+   "metadata": {},
583
+   "source": [
584
+    "Awesome! We zoom in on the singularity, but not at the expense of sampling the rest of the domain a reasonable amount.\n",
585
+    "\n",
586
+    "The above strategy is available as `adaptive.learner.learner2D.resolution_loss`."
587
+   ]
588
+  },
559 589
   {
560 590
    "cell_type": "markdown",
561 591
    "metadata": {},
Browse code

promote max_resolution_loss to resolution_loss to work with min and max resolution

Bas Nijholt authored on 19/02/2018 17:18:37 • Joseph Weston committed on 19/02/2018 18:40:10
Showing 1 changed files
... ...
@@ -513,7 +513,9 @@
513 513
    "metadata": {},
514 514
    "source": [
515 515
     "#### Doing better\n",
516
-    "Of course we can improve on the the above result, since just homogeneous sampling is usually the dumbest way to sample."
516
+    "Of course we can improve on the the above result, since just homogeneous sampling is usually the dumbest way to sample.\n",
517
+    "\n",
518
+    "The loss function (slightly more general version) below is available as `adaptive.learner.learner2D.resolution_loss`."
517 519
    ]
518 520
   },
519 521
   {
... ...
@@ -524,7 +526,9 @@
524 526
    "source": [
525 527
     "%%opts EdgePaths (color='w') Image [logz=True]\n",
526 528
     "\n",
527
-    "def max_resolution_loss(ip, smallest_distance=0.01):\n",
529
+    "def resolution_loss(ip, min_distance=0, max_distance=1):\n",
530
+    "    \"\"\"min_distance and max_distance should be in between 0 and 1\n",
531
+    "    because the total area is normalized to 1.\"\"\"\n",
528 532
     "    from adaptive.learner.learner2D import areas, deviations\n",
529 533
     "    A = areas(ip)\n",
530 534
     "\n",
... ...
@@ -538,12 +542,18 @@
538 542
     "    loss = np.sqrt(A) * dev + A\n",
539 543
     "    \n",
540 544
     "    # Setting areas with a small area to zero such that they won't be chosen again\n",
541
-    "    loss[A < smallest_distance**2] = 0 \n",
545
+    "    loss[A < min_distance**2] = 0 \n",
546
+    "    \n",
547
+    "    # Setting triangles that have a size larger than max_distance to infinite loss\n",
548
+    "    loss[A > max_distance**2] = np.inf\n",
549
+    "\n",
542 550
     "    return loss\n",
543 551
     "\n",
544
-    "learner = adaptive.Learner2D(f_divergent_2d, [(-1, 1), (-1, 1)], loss_per_triangle=max_resolution_loss)\n",
552
+    "loss = partial(resolution_loss, min_distance=0.01)\n",
553
+    "\n",
554
+    "learner = adaptive.Learner2D(f_divergent_2d, [(-1, 1), (-1, 1)], loss_per_triangle=loss)\n",
545 555
     "runner = adaptive.BlockingRunner(learner, goal=lambda l: l.loss() < 0.02)\n",
546
-    "learner.plot(tri_alpha=0.3).relabel('Plotted in log scale')"
556
+    "learner.plot(tri_alpha=0.3).relabel('1 / (x^2 + y^2) in log scale')"
547 557
    ]
548 558
   },
549 559
   {
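
Assembled from the diff above, the promoted loss reads as one self-contained sketch (thresholds as in the notebook; the runner line is left as a comment because sampling the divergent function takes a while):

```python
from functools import partial
import numpy as np
import adaptive

def resolution_loss(ip, min_distance=0, max_distance=1):
    """min_distance and max_distance should be between 0 and 1,
    because the total area is normalized to 1."""
    from adaptive.learner.learner2D import areas, deviations

    A = areas(ip)
    dev = deviations(ip)[0]
    # add terms of the same dimension: dev == [distance], A == [distance**2]
    loss = np.sqrt(A) * dev + A
    loss[A < min_distance**2] = 0        # don't refine triangles that are small enough
    loss[A > max_distance**2] = np.inf   # always split triangles that are too large
    return loss

def f_divergent_2d(xy):
    x, y = xy
    return 1 / (x**2 + y**2)

loss = partial(resolution_loss, min_distance=0.01)
learner = adaptive.Learner2D(f_divergent_2d, [(-1, 1), (-1, 1)], loss_per_triangle=loss)
# adaptive.BlockingRunner(learner, goal=lambda l: l.loss() < 0.02)
```
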
Browse code

add an example of custom loss functions to the notebook

Bas Nijholt authored on 19/02/2018 16:26:51 • Joseph Weston committed on 19/02/2018 18:40:10
Showing 1 changed files
... ...
@@ -444,6 +444,108 @@
444 444
     "runner.live_plot()"
445 445
    ]
446 446
   },
447
+  {
448
+   "cell_type": "markdown",
449
+   "metadata": {},
450
+   "source": [
451
+    "# Custom point choosing logic for 1D and 2D"
452
+   ]
453
+  },
454
+  {
455
+   "cell_type": "markdown",
456
+   "metadata": {},
457
+   "source": [
458
+    "The `Learner1D` and `Learner2D` implement a certain logic for chosing points based on the existing data.\n",
459
+    "\n",
460
+    "For some functions this default stratagy might not work, for example you'll run into trouble when you learn functions that contain divergencies.\n",
461
+    "\n",
462
+    "Both the `Learner1D` and `Learner2D` allow you to use a custom loss function, which you specify as an argument in the learner. See the doc-string of `Learner1D` and `Learner2D` to see what `loss_per_interval` and `loss_per_triangle` need to return and take as input.\n",
463
+    "\n",
464
+    "As an example we implement a homogeneous sampling strategy (which of course is not the best way of handling divergencies).\n",
465
+    "\n",
466
+    "Note that both these loss functions are also available from `adaptive.learner.learner1d.uniform_sampling` and `adaptive.learner.learner2d.uniform_sampling`."
467
+   ]
468
+  },
469
+  {
470
+   "cell_type": "code",
471
+   "execution_count": null,
472
+   "metadata": {},
473
+   "outputs": [],
474
+   "source": [
475
+    "def uniform_sampling_1d(interval, scale, function_values):\n",
476
+    "    x_left, x_right = interval\n",
477
+    "    x_scale, _ = scale\n",
478
+    "    dx = (x_right - x_left) / x_scale\n",
479
+    "    return dx\n",
480
+    "\n",
481
+    "def f_divergent_1d(x):\n",
482
+    "    return 1 / x**2\n",
483
+    "\n",
484
+    "learner = adaptive.Learner1D(f_divergent_1d, (-1, 1), loss_per_interval=uniform_sampling_1d)\n",
485
+    "runner = adaptive.BlockingRunner(learner, goal=lambda l: l.loss() < 0.01)\n",
486
+    "learner.plot().select(y=(0, 10000))"
487
+   ]
488
+  },
489
+  {
490
+   "cell_type": "code",
491
+   "execution_count": null,
492
+   "metadata": {},
493
+   "outputs": [],
494
+   "source": [
495
+    "%%opts EdgePaths (color='w')\n",
496
+    "\n",
497
+    "def uniform_sampling_2d(ip):\n",
498
+    "    from adaptive.learner.learner2D import areas\n",
499
+    "    A = areas(ip)\n",
500
+    "    return np.sqrt(A)\n",
501
+    "\n",
502
+    "def f_divergent_2d(xy):\n",
503
+    "    x, y = xy\n",
504
+    "    return 1 / (x**2 + y**2)\n",
505
+    "\n",
506
+    "learner = adaptive.Learner2D(f_divergent_2d, [(-1, 1), (-1, 1)], loss_per_triangle=uniform_sampling_2d)\n",
507
+    "runner = adaptive.BlockingRunner(learner, goal=lambda l: l.loss() < 0.02)\n",
508
+    "learner.plot(tri_alpha=0.3)"
509
+   ]
510
+  },
511
+  {
512
+   "cell_type": "markdown",
513
+   "metadata": {},
514
+   "source": [
515
+    "#### Doing better\n",
516
+    "Of course we can improve on the the above result, since just homogeneous sampling is usually the dumbest way to sample."
517
+   ]
518
+  },
519
+  {
520
+   "cell_type": "code",
521
+   "execution_count": null,
522
+   "metadata": {},
523
+   "outputs": [],
524
+   "source": [
525
+    "%%opts EdgePaths (color='w') Image [logz=True]\n",
526
+    "\n",
527
+    "def max_resolution_loss(ip, smallest_distance=0.01):\n",
528
+    "    from adaptive.learner.learner2D import areas, deviations\n",
529
+    "    A = areas(ip)\n",
530
+    "\n",
531
+    "    # `deviations` returns an array of the same length as the\n",
532
+    "    # vector your function to be learned returns, so 1 in this case.\n",
533
+    "    # Its value represents the deviation from the linear estimate based\n",
534
+    "    # on the gradients inside each triangle.\n",
535
+    "    dev = deviations(ip)[0]\n",
536
+    "    \n",
537
+    "    # we add terms of the same dimension: dev == [distance], A == [distance**2]\n",
538
+    "    loss = np.sqrt(A) * dev + A\n",
539
+    "    \n",
540
+    "    # Setting areas with a small area to zero such that they won't be chosen again\n",
541
+    "    loss[A < smallest_distance**2] = 0 \n",
542
+    "    return loss\n",
543
+    "\n",
544
+    "learner = adaptive.Learner2D(f_divergent_2d, [(-1, 1), (-1, 1)], loss_per_triangle=max_resolution_loss)\n",
545
+    "runner = adaptive.BlockingRunner(learner, goal=lambda l: l.loss() < 0.02)\n",
546
+    "learner.plot(tri_alpha=0.3).relabel('Plotted in log scale')"
547
+   ]
548
+  },
447 549
   {
448 550
    "cell_type": "markdown",
449 551
    "metadata": {},
Browse code

simplify live_plot plotting function for the Learner2D example

Bas Nijholt authored on 19/02/2018 15:33:58
Showing 1 changed files
... ...
@@ -217,8 +217,9 @@
217 217
     "def plot(learner):\n",
218 218
     "    plot = learner.plot(tri_alpha=0.2)\n",
219 219
     "    title = f'loss={learner._loss:.3f}, n_points={learner.npoints}'\n",
220
-    "    opts = dict(plot=dict(title_format=title))\n",
221
-    "    return plot.Image + plot.EdgePaths.I.clone().opts(**opts) + plot\n",
220
+    "    return (plot.Image\n",
221
+    "            + plot.EdgePaths.I.opts(plot=dict(title_format=title))\n",
222
+    "            + plot)\n",
222 223
     "\n",
223 224
     "runner.live_plot(plotter=plot, update_interval=0.1)"
224 225
    ]
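
The `plotter` argument simplified above is just a callable from the learner to any holoviews object; `live_plot` re-invokes it every `update_interval` seconds. A minimal sketch (label text is illustrative):

```python
def plotter(learner):
    # any learner -> holoviews-object mapping works here
    return learner.plot(tri_alpha=0.2).relabel(f'{learner.npoints} points')

# inside the notebook:
# runner.live_plot(plotter=plotter, update_interval=0.1)
```
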
Browse code

update example notebook

Joseph Weston authored on 19/02/2018 15:32:05
Showing 1 changed files
... ...
@@ -15,11 +15,19 @@
15 15
     "\n",
16 16
     "This is an introductory notebook that shows some basic use cases.\n",
17 17
     "\n",
18
-    "`adaptive` needs the following packages:\n",
18
+    "`adaptive` needs at least Python 3.6, and the following packages:\n",
19 19
     "\n",
20
-    "+ Python 3.6\n",
20
+    "+ `scipy`\n",
21
+    "+ `sortedcontainers`\n",
22
+    "\n",
23
+    "Additionally `adaptive` has lots of extra functionality that makes it simple to use from Jupyter notebooks.\n",
24
+    "This extra functionality depends on the following packages\n",
25
+    "\n",
26
+    "+ `ipykernel>=4.8.0`\n",
27
+    "+ `jupyter_client>=5.2.2`\n",
21 28
     "+ `holoviews`\n",
22
-    "+ `bokeh`"
29
+    "+ `bokeh`\n",
30
+    "+ `ipywidgets`"
23 31
    ]
24 32
   },
25 33
   {
... ...
@@ -105,7 +113,7 @@
105 113
    "source": [
106 114
     "# The end condition is when the \"loss\" is less than 0.1. In the context of the\n",
107 115
     "# 1D learner this means that we will resolve features in 'func' with width 0.1 or wider.\n",
108
-    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01)\n",
116
+    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.05)\n",
109 117
     "runner.live_info()"
110 118
    ]
111 119
   },
... ...
@@ -222,12 +230,16 @@
222 230
    "outputs": [],
223 231
    "source": [
224 232
     "%%opts EdgePaths (color='w')\n",
233
+    "\n",
225 234
     "import itertools\n",
235
+    "\n",
236
+    "# Create a learner and add data on homogeneous grid, so that we can plot it\n",
226 237
     "learner2 = adaptive.Learner2D(ring, bounds=learner.bounds)\n",
227 238
     "n = int(learner.npoints**0.5)\n",
228 239
     "xs, ys = [np.linspace(*bounds, n) for bounds in learner.bounds]\n",
229 240
     "xys = list(itertools.product(xs, ys))\n",
230 241
     "learner2.add_data(xys, map(partial(ring, wait=False), xys))\n",
242
+    "\n",
231 243
     "(learner2.plot(n).relabel('Homogeneous grid') + learner.plot().relabel('With adaptive') + \n",
232 244
     " learner2.plot(n, tri_alpha=0.4) + learner.plot(tri_alpha=0.4)).cols(2)"
233 245
    ]
... ...
@@ -258,7 +270,7 @@
258 270
     "def g(n):\n",
259 271
     "    import random\n",
260 272
     "    from time import sleep\n",
261
-    "    sleep(random.random() / 5)\n",
273
+    "    sleep(random.random() / 1000)\n",
262 274
     "    # Properly save and restore the RNG state\n",
263 275
     "    state = random.getstate()\n",
264 276
     "    random.seed(n)\n",
... ...
@@ -274,7 +286,7 @@
274 286
    "outputs": [],
275 287
    "source": [
276 288
     "learner = adaptive.AverageLearner(g, atol=None, rtol=0.01)\n",
277
-    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 1)\n",
289
+    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 2)\n",
278 290
     "runner.live_info()"
279 291
    ]
280 292
   },
... ...
@@ -284,7 +296,7 @@
284 296
    "metadata": {},
285 297
    "outputs": [],
286 298
    "source": [
287
-    "runner.live_plot()"
299
+    "runner.live_plot(update_interval=0.1)"
288 300
    ]
289 301
   },
290 302
   {
... ...
@@ -347,7 +359,11 @@
347 359
    "outputs": [],
348 360
    "source": [
349 361
     "from adaptive.runner import SequentialExecutor\n",
362
+    "\n",
350 363
     "learner = adaptive.IntegratorLearner(f24, bounds=(0, 3), tol=1e-10)\n",
364
+    "\n",
365
+    "# We use a SequentialExecutor, which runs the function to be learned in *this* process only. This means we don't pay\n",
366
+    "# the overhead of evaluating the function in another process.\n",
351 367
     "runner = adaptive.Runner(learner, executor=SequentialExecutor(), goal=lambda l: l.done())\n",
352 368
     "runner.live_info()"
353 369
    ]
... ...
@@ -390,7 +406,7 @@
390 406
    "cell_type": "markdown",
391 407
    "metadata": {},
392 408
    "source": [
393
-    "Some 1D functions return multiple numbers, such as the following function:"
409
+    "Sometimes you may want to learn a function with vector output:"
394 410
    ]
395 411
   },
396 412
   {
... ...
@@ -401,6 +417,8 @@
401 417
    "source": [
402 418
     "random.seed(0)\n",
403 419
     "offsets = [random.uniform(-0.8, 0.8) for _ in range(3)]\n",
420
+    "\n",
421
+    "# sharp peaks at random locations in the domain\n",
404 422
     "def f_levels(x, offsets=offsets):\n",
405 423
     "    a = 0.01\n",
406 424
     "    return np.array([offset + x + a**2 / (a**2 + (x - offset)**2)\n",
... ...
@@ -411,7 +429,7 @@
411 429
    "cell_type": "markdown",
412 430
    "metadata": {},
413 431
    "source": [
414
-    "This is again a function with sharp peaks at different x-values and with different constant backgrounds. To learn this function we can use a `Learner1D` as well."
432
+    "`adaptive` has you covered! The `Learner1D` can be used for such functions:"
415 433
    ]
416 434
   },
417 435
   {
... ...
@@ -420,25 +438,8 @@
420 438
    "metadata": {},
421 439
    "outputs": [],
422 440
    "source": [
423
-    "from adaptive.runner import SequentialExecutor\n",
424
-    "\n",
425 441
     "learner = adaptive.Learner1D(f_levels, bounds=(-1, 1))\n",
426
-    "runner = adaptive.Runner(learner, executor=SequentialExecutor(), goal=lambda l: l.loss() < 0.05)"
427
-   ]
428
-  },
429
-  {
430
-   "cell_type": "markdown",
431
-   "metadata": {},
432
-   "source": [
433
-    "In the plot below we see that the function gets more densely sampled around the peaks, which is the behaviour we want."
434
-   ]
435
-  },
436
-  {
437
-   "cell_type": "code",
438
-   "execution_count": null,
439
-   "metadata": {},
440
-   "outputs": [],
441
-   "source": [
442
+    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.05)\n",
442 443
     "runner.live_plot()"
443 444
    ]
444 445
   },
... ...
@@ -453,7 +454,7 @@
453 454
    "cell_type": "markdown",
454 455
    "metadata": {},
455 456
    "source": [
456
-    "The balancing learner is a \"meta-learner\" that takes a list of multiple leaners. The runner wil find find out which points of which child learner will improve the loss the most and send those to the executor.\n",
457
+    "The balancing learner is a \"meta-learner\" that takes a list of learners. When you request a point from the balancing learner, it will query all of its \"children\" to figure out which one will give the most improvement.\n",
457 458
     "\n",
458 459
     "The balancing learner can for example be used to implement a poor-man's 2D learner by using the `Learner1D`."
459 460
    ]
... ...
@@ -483,7 +484,7 @@
483 484
    "outputs": [],
484 485
    "source": [
485 486
     "plotter = lambda learner: hv.Overlay([L.plot() for L in learner.learners])\n",
486
-    "runner.live_plot(plotter=plotter)"
487
+    "runner.live_plot(plotter=plotter, update_interval=0.1)"
487 488
    ]
488 489
   },
489 490
   {
... ...
@@ -497,11 +498,9 @@
497 498
    "cell_type": "markdown",
498 499
    "metadata": {},
499 500
    "source": [
500
-    "Sometimes we want to learn functions that do not only return a single `float`, but instead return multiple values (even though we only want to learn one value.)\n",
501
+    "If the function that you want to learn returns a value along with some metadata, you can wrap your learner in an `adaptive.DataSaver`.\n",
501 502
     "\n",
502
-    "We can wrap our learners with a `adaptive.DataSaver` such that the learner will be able to handle functions that return results.\n",
503
-    "\n",
504
-    "Take for example this function where we want to remember the time that every point took:"
503
+    "In the following example the function to be learned returns its result and the execution time in a dictionary:"
505 504
    ]
506 505
   },
507 506
   {
... ...
@@ -523,11 +522,11 @@
523 522
     "    y = x + a**2 / (a**2 + x**2)\n",
524 523
     "    return {'y': y, 'waiting_time': waiting_time}\n",
525 524
     "\n",
526
-    "# Create the learner with the function that returns a `dict`\n",
527
-    "# Note that this learner cannot be passed to a runner.\n",
525
+    "# Create the learner with the function that returns a 'dict'\n",
526
+    "# This learner cannot be run directly, as Learner1D does not know what to do with the 'dict'\n",
528 527
     "_learner = adaptive.Learner1D(f_dict, bounds=(-1, 1))\n",
529 528
     "\n",
530
-    "# Wrap the learner in the `DataSavingLearner` and tell it which key it needs to learn\n",
529
+    "# Wrapping the learner with 'adaptive.DataSaver' and tell it which key it needs to learn\n",
531 530
     "learner = adaptive.DataSaver(_learner, arg_picker=itemgetter('y'))"
532 531
    ]
533 532
   },
... ...
@@ -544,14 +543,8 @@
544 543
    "metadata": {},
545 544
    "outputs": [],
546 545
    "source": [
547
-    "runner = adaptive.Runner(learner, goal=lambda l: l.learner.loss() < 0.01)"
548
-   ]
549
-  },
550
-  {
551
-   "cell_type": "markdown",
552
-   "metadata": {},
553
-   "source": [
554
-    "Because `learner` doesn't have a `plot` function we need to copy `_learner.plot` (or `learner.learner.plot`)"
546
+    "runner = adaptive.Runner(learner, goal=lambda l: l.learner.loss() < 0.05)\n",
547
+    "runner.live_info()"
555 548
    ]
556 549
   },
557 550
   {
... ...
@@ -560,8 +553,7 @@
560 553
    "metadata": {},
561 554
    "outputs": [],
562 555
    "source": [
563
-    "learner.plot = _learner.plot\n",
564
-    "runner.live_plot()"
556
+    "runner.live_plot(plotter=lambda l: l.learner.plot(), update_interval=0.1)"
565 557
    ]
566 558
   },
567 559
   {
... ...
@@ -586,7 +578,7 @@
586 578
     "collapsed": true
587 579
    },
588 580
    "source": [
589
-    "## Alternative executors"
581
+    "# Using multiple cores"
590 582
    ]
591 583
   },
592 584
   {
... ...
@@ -600,14 +592,14 @@
600 592
    "cell_type": "markdown",
601 593
    "metadata": {},
602 594
    "source": [
603
-    "### `concurrent.futures`"
595
+    "### [`concurrent.futures`](https://docs.python.org/3/library/concurrent.futures.html)"
604 596
    ]
605 597
   },
606 598
   {
607 599
    "cell_type": "markdown",
608 600
    "metadata": {},
609 601
    "source": [
610
-    "By default a runner creates a `ProcessPoolExecutor`, but you can also pass one explicitly e.g. to limit the number of workers:"
602
+    "By default `adaptive.Runner` creates a `ProcessPoolExecutor`, but you can also pass one explicitly e.g. to limit the number of workers:"
611 603
    ]
612 604
   },
613 605
   {
... ...
@@ -623,14 +615,14 @@
623 615
     "learner = adaptive.Learner1D(f, bounds=(-1, 1))\n",
624 616
     "runner = adaptive.Runner(learner, executor=executor, goal=lambda l: l.loss() < 0.05)\n",
625 617
     "runner.live_info()\n",
626
-    "runner.live_plot()"
618
+    "runner.live_plot(update_interval=0.1)"
627 619
    ]
628 620
   },
629 621
   {
630 622
    "cell_type": "markdown",
631 623
    "metadata": {},
632 624
    "source": [
633
-    "### IPyparallel"
625
+    "### [`ipyparallel`](https://ipyparallel.readthedocs.io/en/latest/intro.html)"
634 626
    ]
635 627
   },
636 628
   {
... ...
@@ -653,7 +645,7 @@
653 645
    "cell_type": "markdown",
654 646
    "metadata": {},
655 647
    "source": [
656
-    "### distributed"
648
+    "### [`distributed`](https://distributed.readthedocs.io/en/latest/)"
657 649
    ]
658 650
   },
659 651
   {
... ...
@@ -669,7 +661,7 @@
669 661
     "learner = adaptive.Learner1D(f, bounds=(-1, 1))\n",
670 662
     "runner = adaptive.Runner(learner, executor=client, goal=lambda l: l.loss() < 0.01)\n",
671 663
     "runner.live_info()\n",
672
-    "runner.live_plot()"
664
+    "runner.live_plot(update_interval=0.1)"
673 665
    ]
674 666
   },
675 667
   {
... ...
@@ -686,6 +678,110 @@
686 678
     "# Advanced Topics"
687 679
    ]
688 680
   },
681
+  {
682
+   "cell_type": "markdown",
683
+   "metadata": {},
684
+   "source": [
685
+    "## A watched pot never boils!"
686
+   ]
687
+  },
688
+  {
689
+   "cell_type": "markdown",
690
+   "metadata": {},
691
+   "source": [
692
+    "`adaptive.Runner` does its work in an `asyncio` task that runs concurrently with the IPython kernel, when using `adaptive` from a Jupyter notebook. This is advantageous because it allows us to do things like live-updating plots, however it can trip you up if you're not careful.\n",
693
+    "\n",
694
+    "Notably: **if you block the IPython kernel, the runner will not do any work**.\n",
695
+    "\n",
696
+    "For example if you wanted to wait for a runner to complete, **do not wait in a busy loop**:\n",
697
+    "```python\n",
698
+    "while not runner.task.done():\n",
699
+    "    pass\n",
700
+    "```\n",
701
+    "\n",
702
+    "If you do this then **the runner will never finish**."
703
+   ]
704
+  },
705
+  {
706
+   "cell_type": "markdown",
707
+   "metadata": {},
708
+   "source": [
709
+    "What to do if you don't care about live plotting, and just want to run something until its done?\n",
710
+    "\n",
711
+    "The simplest way to accomplish this is to use `adaptive.BlockingRunner`:"
712
+   ]
713
+  },
714
+  {
715
+   "cell_type": "code",
716
+   "execution_count": null,
717
+   "metadata": {},
718
+   "outputs": [],
719
+   "source": [
720
+    "learner = adaptive.Learner1D(f, bounds=(-1, 1))\n",
721
+    "adaptive.BlockingRunner(learner, goal=lambda l: l.loss() < 0.005)\n",
722
+    "# This will only get run after the runner has finished\n",
723
+    "learner.plot()"
724
+   ]
725
+  },
726
+  {
727
+   "cell_type": "markdown",
728
+   "metadata": {},
729
+   "source": [
730
+    "## Reproducibility"
731
+   ]
732
+  },
733
+  {
734
+   "cell_type": "markdown",
735
+   "metadata": {},
736
+   "source": [
737
+    "By default `adaptive` runners evaluate the learned function in parallel across several cores. The runners are also opportunistic, in that as soon as a result is available they will feed it to the learner and request another point to replace the one that just finished.\n",
738
+    "\n",
739
+    "Because the order in which computations complete is non-deterministic, this means that the runner behaves in a non-deterministic way. Adaptive makes this choice because in many cases the speedup from parallel execution is worth sacrificing the \"purity\" of exactly reproducible computations.\n",
740
+    "\n",
741
+    "Nevertheless it is still possible to run a learner in a deterministic way with adaptive.\n",
742
+    "\n",
743
+    "The simplest way is to use `adaptive.runner.simple` to run your learner:"
744
+   ]
745
+  },
746
+  {
747
+   "cell_type": "code",
748
+   "execution_count": null,
749
+   "metadata": {},
750
+   "outputs": [],
751
+   "source": [
752
+    "learner = adaptive.Learner1D(f, bounds=(-1, 1))\n",
753
+    "\n",
754
+    "# blocks until completion\n",
755
+    "adaptive.runner.simple(learner, goal=lambda l: l.loss() < 0.002)\n",
756
+    "\n",
757
+    "learner.plot()"
758
+   ]
759
+  },
760
+  {
761
+   "cell_type": "markdown",
762
+   "metadata": {},
763
+   "source": [
764
+    "Note that unlike `adaptive.Runner`, `adaptive.runner.simple` *blocks* until it is finished.\n",
765
+    "\n",
766
+    "If you want to enable determinism, want to continue using the non-blocking `adaptive.Runner`, you can use the `adaptive.runner.SequentialExecutor`:"
767
+   ]
768
+  },
769
+  {
770
+   "cell_type": "code",
771
+   "execution_count": null,
772
+   "metadata": {},
773
+   "outputs": [],
774
+   "source": [
775
+    "from adaptive.runner import SequentialExecutor\n",
776
+    "\n",
777
+    "learner = adaptive.Learner1D(f, bounds=(-1, 1))\n",
778
+    "\n",
779
+    "# blocks until completion\n",
780
+    "runner = adaptive.Runner(learner, executor=SequentialExecutor(), goal=lambda l: l.loss() < 0.002)\n",
781
+    "runner.live_info()\n",
782
+    "runner.live_plot(update_interval=0.1)"
783
+   ]
784
+  },
689 785
   {
690 786
    "cell_type": "markdown",
691 787
    "metadata": {},
... ...
@@ -699,7 +795,9 @@
699 795
    "source": [
700 796
     "Sometimes you want to interactively explore a parameter space, and want the function to be evaluated at finer and finer resolution and manually control when the calculation stops.\n",
701 797
     "\n",
702
-    "If no `goal` is provided to a runner then the runner will run until cancelled:"
798
+    "If no `goal` is provided to a runner then the runner will run until cancelled.\n",
799
+    "\n",
800
+    "`runner.live_info()` will provide a button that can be clicked to stop the runner. You can also stop the runner programatically using `runner.cancel()`."
703 801
    ]
704 802
   },
705 803
   {
... ...
@@ -710,7 +808,8 @@
710 808
    "source": [
711 809
     "learner = adaptive.Learner1D(f, bounds=(-1, 1))\n",
712 810
     "runner = adaptive.Runner(learner)\n",
713
-    "runner.live_plot()"
811
+    "runner.live_info()\n",
812
+    "runner.live_plot(update_interval=0.1)"
714 813
    ]
715 814
   },
716 815
   {
... ...
@@ -764,6 +863,7 @@
764 863
     "    \n",
765 864
     "learner = adaptive.Learner1D(will_raise, (-1, 1))\n",
766 865
     "runner = adaptive.Runner(learner)  # without 'goal' the runner will run forever unless cancelled\n",
866
+    "runner.live_info()\n",
767 867
     "runner.live_plot()"
768 868
    ]
769 869
   },
... ...
@@ -920,7 +1020,9 @@
920 1020
    "cell_type": "markdown",
921 1021
    "metadata": {},
922 1022
    "source": [
923
-    "Runners can also be used from a Python script independently of the notebook:\n",
1023
+    "Runners can also be used from a Python script independently of the notebook.\n",
1024
+    "\n",
1025
+    "The simplest way to accomplish this is simply to use the `BlockingRunner`:\n",
924 1026
     "\n",
925 1027
     "```python\n",
926 1028
     "import adaptive\n",
... ...
@@ -930,17 +1032,14 @@
930 1032
     "\n",
931 1033
     "learner = adaptive.Learner1D(f, (-1, 1))\n",
932 1034
     "\n",
933
-    "runner = adaptive.Runner(learner, goal=lambda: l: l.loss() < 0.1)\n",
934
-    "runner.run_sync()  # Block until completion.\n",
1035
+    "adaptive.BlockingRunner(learner, goal=lambda: l: l.loss() < 0.1)\n",
935 1036
     "```\n",
936 1037
     "\n",
937
-    "Under the hood the runner uses [`asyncio`](https://docs.python.org/3/library/asyncio.html). You don't need to worry about this most of the time, unless your script uses asyncio itself. If this is the case you should be aware that instantiating a `Runner` schedules a new task on the current event loop, and you can simply\n",
938
-    "\n",
1038
+    "If you use `asyncio` already in your script and want to integrate `adaptive` into it, then you can use the default `Runner` as you would from a notebook. If you want to wait for the runner to finish, then you can simply\n",
939 1039
     "```python\n",
940 1040
     "    await runner.task\n",
941 1041
     "```\n",
942
-    "\n",
943
-    "inside a coroutine to await completion of the runner."
1042
+    "from within a coroutine."
944 1043
    ]
945 1044
   }
946 1045
  ],
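
A hedged sketch of the `await runner.task` pattern described at the end of this commit, for a script that already runs its own event loop (the function and goal are illustrative):

```python
import asyncio
import adaptive

def f(x):
    return x**2

async def main():
    learner = adaptive.Learner1D(f, bounds=(-1, 1))
    runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.1)
    await runner.task  # resumes once the goal is reached
    print(learner.npoints, "points sampled")

asyncio.get_event_loop().run_until_complete(main())
```
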
Browse code

remove incorrect statement about 'f' being a closure in the example notebook

Bas Nijholt authored on 19/02/2018 11:59:26
Showing 1 changed files
... ...
@@ -641,9 +641,7 @@
641 641
    "source": [
642 642
     "import ipyparallel\n",
643 643
     "\n",
644
-    "client = ipyparallel.Client()\n",
645
-    "# f is a closure, so we have to use cloudpickle -- this is independent of 'adaptive'\n",
646
-    "client[:].use_cloudpickle()\n",
644
+    "client = ipyparallel.Client()  # You will need to start an `ipcluster` to make this work\n",
647 645
     "\n",
648 646
     "learner = adaptive.Learner1D(f, bounds=(-1, 1))\n",
649 647
     "runner = adaptive.Runner(learner, executor=client, goal=lambda l: l.loss() < 0.01)\n",
Browse code

add an example using 'distributed' to the notebook

Bas Nijholt authored on 19/02/2018 11:57:38
Showing 1 changed files
... ...
@@ -651,6 +651,29 @@
651 651
     "runner.live_plot()"
652 652
    ]
653 653
   },
654
+  {
655
+   "cell_type": "markdown",
656
+   "metadata": {},
657
+   "source": [
658
+    "### distributed"
659
+   ]
660
+  },
661
+  {
662
+   "cell_type": "code",
663
+   "execution_count": null,
664
+   "metadata": {},
665
+   "outputs": [],
666
+   "source": [
667
+    "import distributed\n",
668
+    "\n",
669
+    "client = distributed.Client()\n",
670
+    "\n",
671
+    "learner = adaptive.Learner1D(f, bounds=(-1, 1))\n",
672
+    "runner = adaptive.Runner(learner, executor=client, goal=lambda l: l.loss() < 0.01)\n",
673
+    "runner.live_info()\n",
674
+    "runner.live_plot()"
675
+   ]
676
+  },
654 677
   {
655 678
    "cell_type": "markdown",
656 679
    "metadata": {},
Browse code

plot the grid for the Learner2D example for the homogeneous grid

Bas Nijholt authored on 17/02/2018 15:11:48
Showing 1 changed files
... ...
@@ -221,13 +221,15 @@
221 221
    "metadata": {},
222 222
    "outputs": [],
223 223
    "source": [
224
+    "%%opts EdgePaths (color='w')\n",
224 225
     "import itertools\n",
225 226
     "learner2 = adaptive.Learner2D(ring, bounds=learner.bounds)\n",
226 227
     "n = int(learner.npoints**0.5)\n",
227 228
     "xs, ys = [np.linspace(*bounds, n) for bounds in learner.bounds]\n",
228 229
     "xys = list(itertools.product(xs, ys))\n",
229 230
     "learner2.add_data(xys, map(partial(ring, wait=False), xys))\n",
230
-    "learner2.plot(n).relabel('Homogeneous grid') + learner.plot().relabel('With adaptive')"
231
+    "(learner2.plot(n).relabel('Homogeneous grid') + learner.plot().relabel('With adaptive') + \n",
232
+    " learner2.plot(n, tri_alpha=0.4) + learner.plot(tri_alpha=0.4)).cols(2)"
231 233
    ]
232 234
   },
233 235
   {
Browse code

decrease live_plot update_interval now that plotting is much faster

Bas Nijholt authored on 17/02/2018 14:18:56
Showing 1 changed files
... ...
@@ -123,7 +123,7 @@
123 123
    "metadata": {},
124 124
    "outputs": [],
125 125
    "source": [
126
-    "runner.live_plot()"
126
+    "runner.live_plot(update_interval=0.1)"
127 127
    ]
128 128
   },
129 129
   {
... ...
@@ -212,7 +212,7 @@
212 212
     "    opts = dict(plot=dict(title_format=title))\n",
213 213
     "    return plot.Image + plot.EdgePaths.I.clone().opts(**opts) + plot\n",
214 214
     "\n",
215
-    "runner.live_plot(plotter=plot, update_interval=0.2)"
215
+    "runner.live_plot(plotter=plot, update_interval=0.1)"
216 216
    ]
217 217
   },
218 218
   {
Browse code

change sleep duration in functions in the example notebook

This is a much better default for people with a more typical number of cores.
The current defaults were set using a machine with 48 cores, which is not standard.

Bas Nijholt authored on 17/02/2018 14:17:11
Showing 1 changed files
... ...
@@ -68,7 +68,7 @@
68 68
     "\n",
69 69
     "    a = 0.01\n",
70 70
     "    if wait:\n",
71
-    "        sleep(random()*10)\n",
71
+    "        sleep(random())\n",
72 72
     "    return x + a**2 / (a**2 + (x - offset)**2)"
73 73
    ]
74 74
   },
... ...
@@ -182,7 +182,7 @@
182 182
     "    from time import sleep\n",
183 183
     "    from random import random\n",
184 184
     "    if wait:\n",
185
-    "        sleep(random())\n",
185
+    "        sleep(random()/10)\n",
186 186
     "    x, y = xy\n",
187 187
     "    a = 0.2\n",
188 188
     "    return x + np.exp(-(x**2 + y**2 - 0.75**2)**2/a**4)\n",
Browse code

change the 'n' attribute of learners to 'npoints'

Previously this attribute was meant to be an implementation
detail, but it is useful information and should be made public.
Given that it is public information, it should have an informative
name.

Joseph Weston authored on 16/02/2018 17:50:48
Showing 1 changed files
... ...
@@ -208,7 +208,7 @@
208 208
    "source": [
209 209
     "def plot(learner):\n",
210 210
     "    plot = learner.plot(tri_alpha=0.2)\n",
211
-    "    title = f'loss={learner._loss:.3f}, n_points={learner.n}'\n",
211
+    "    title = f'loss={learner._loss:.3f}, n_points={learner.npoints}'\n",
212 212
     "    opts = dict(plot=dict(title_format=title))\n",
213 213
     "    return plot.Image + plot.EdgePaths.I.clone().opts(**opts) + plot\n",
214 214
     "\n",
... ...
@@ -223,7 +223,7 @@
223 223
    "source": [
224 224
     "import itertools\n",
225 225
     "learner2 = adaptive.Learner2D(ring, bounds=learner.bounds)\n",
226
-    "n = int(learner.n**0.5)\n",
226
+    "n = int(learner.npoints**0.5)\n",
227 227
     "xs, ys = [np.linspace(*bounds, n) for bounds in learner.bounds]\n",
228 228
     "xys = list(itertools.product(xs, ys))\n",
229 229
     "learner2.add_data(xys, map(partial(ring, wait=False), xys))\n",
Browse code

small fixes in the notebook

Bas Nijholt authored on 16/02/2018 16:02:33 • Joseph Weston committed on 16/02/2018 16:48:29
Showing 1 changed files
... ...
@@ -462,7 +462,7 @@
462 462
    "metadata": {},
463 463
    "outputs": [],
464 464
    "source": [
465
-    "def f(x, offset):\n",
465
+    "def f(x, offset=0):\n",
466 466
     "    a = 0.01\n",
467 467
     "    return x + a**2 / (a**2 + (x - offset)**2)\n",
468 468
     "\n",
... ...
@@ -470,7 +470,8 @@
470 470
     "            bounds=(-1, 1)) for i in range(10)]\n",
471 471
     "\n",
472 472
     "bal_learner = adaptive.BalancingLearner(learners)\n",
473
-    "runner = adaptive.Runner(bal_learner, goal=lambda l: l.loss() < 0.01)"
473
+    "runner = adaptive.Runner(bal_learner, goal=lambda l: l.loss() < 0.01)\n",
474
+    "runner.live_info()"
474 475
    ]
475 476
   },
476 477
   {
... ...
@@ -618,7 +619,8 @@
618 619
     "executor = ProcessPoolExecutor(max_workers=4)\n",
619 620
     "\n",
620 621
     "learner = adaptive.Learner1D(f, bounds=(-1, 1))\n",
621
-    "runner = adaptive.Runner(learner, executor=executor, goal=lambda l: l.loss() < 0.1)\n",
622
+    "runner = adaptive.Runner(learner, executor=executor, goal=lambda l: l.loss() < 0.05)\n",
623
+    "runner.live_info()\n",
622 624
     "runner.live_plot()"
623 625
    ]
624 626
   },
... ...
@@ -642,7 +644,8 @@
642 644
     "client[:].use_cloudpickle()\n",
643 645
     "\n",
644 646
     "learner = adaptive.Learner1D(f, bounds=(-1, 1))\n",
645
-    "runner = adaptive.Runner(learner, executor=client, goal=lambda l: l.loss() < 0.1)\n",
647
+    "runner = adaptive.Runner(learner, executor=client, goal=lambda l: l.loss() < 0.01)\n",
648
+    "runner.live_info()\n",
646 649
     "runner.live_plot()"
647 650
    ]
648 651
   },
... ...
@@ -693,7 +696,16 @@
693 696
    "metadata": {},
694 697
    "outputs": [],
695 698
    "source": [
696
-    "runner.task.cancel()"
699
+    "runner.cancel()"
700
+   ]
701
+  },
702
+  {
703
+   "cell_type": "code",
704
+   "execution_count": null,
705
+   "metadata": {},
706
+   "outputs": [],
707
+   "source": [
708
+    "print(runner.status())"
697 709
    ]
698 710
   },
699 711
   {
... ...
@@ -789,7 +801,7 @@
789 801
     "learner = adaptive.Learner1D(f, bounds=(-1, 1))\n",
790 802
     "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.1,\n",
791 803
     "                         log=True)\n",
792
-    "runner.live_plot()"
804
+    "runner.live_info()"
793 805
    ]
794 806
   },
795 807
   {
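
The new `runner.cancel()` and `runner.status()` shown above combine into a simple stop-and-inspect pattern. A sketch (the status string is what we'd expect after cancelling):

```python
import adaptive

def f(x):
    return x

learner = adaptive.Learner1D(f, bounds=(-1, 1))
runner = adaptive.Runner(learner)  # no goal: runs until cancelled

# later, e.g. from another cell:
# runner.cancel()
# print(runner.status())  # expected to report a cancelled state
```
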
Browse code

use runner.live_plot() instead of adaptive.live_plot(runner) in the notebook

Bas Nijholt authored on 16/02/2018 15:47:52 • Joseph Weston committed on 16/02/2018 16:48:29
Showing 1 changed files
... ...
@@ -123,7 +123,7 @@
123 123
    "metadata": {},
124 124
    "outputs": [],
125 125
    "source": [
126
-    "adaptive.live_plot(runner)"
126
+    "runner.live_plot()"
127 127
    ]
128 128
   },
129 129
   {
... ...
@@ -196,7 +196,8 @@
196 196
    "metadata": {},
197 197
    "outputs": [],
198 198
    "source": [
199
-    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01)"
199
+    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01)\n",
200
+    "runner.live_info()"
200 201
    ]
201 202
   },
202 203
   {
... ...
@@ -211,7 +212,7 @@
211 212
     "    opts = dict(plot=dict(title_format=title))\n",
212 213
     "    return plot.Image + plot.EdgePaths.I.clone().opts(**opts) + plot\n",
213 214
     "\n",
214
-    "adaptive.live_plot(runner, plotter=plot, update_interval=1)"
215
+    "runner.live_plot(plotter=plot, update_interval=0.2)"
215 216
    ]
216 217
   },
217 218
   {
... ...
@@ -272,7 +273,16 @@
272 273
    "source": [
273 274
     "learner = adaptive.AverageLearner(g, atol=None, rtol=0.01)\n",
274 275
     "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 1)\n",
275
-    "adaptive.live_plot(runner)"
276
+    "runner.live_info()"
277
+   ]
278
+  },
279
+  {
280
+   "cell_type": "code",
281
+   "execution_count": null,
282
+   "metadata": {},
283
+   "outputs": [],
284
+   "source": [
285
+    "runner.live_plot()"
276 286
    ]
277 287
   },
278 288
   {
... ...
@@ -336,7 +346,8 @@
336 346
    "source": [
337 347
     "from adaptive.runner import SequentialExecutor\n",
338 348
     "learner = adaptive.IntegratorLearner(f24, bounds=(0, 3), tol=1e-10)\n",
339
-    "runner = adaptive.Runner(learner, executor=SequentialExecutor(), goal=lambda l: l.done())"
349
+    "runner = adaptive.Runner(learner, executor=SequentialExecutor(), goal=lambda l: l.done())\n",
350
+    "runner.live_info()"
340 351
    ]
341 352
   },
342 353
   {
... ...
@@ -410,7 +421,7 @@
410 421
     "from adaptive.runner import SequentialExecutor\n",
411 422
     "\n",
412 423
     "learner = adaptive.Learner1D(f_levels, bounds=(-1, 1))\n",
413
-    "runner = adaptive.Runner(learner, SequentialExecutor(), goal=lambda l: l.loss() < 0.05)"
424
+    "runner = adaptive.Runner(learner, executor=SequentialExecutor(), goal=lambda l: l.loss() < 0.05)"
414 425
    ]
415 426
   },
416 427
   {
... ...
@@ -426,7 +437,7 @@
426 437
    "metadata": {},
427 438
    "outputs": [],
428 439
    "source": [
429
-    "adaptive.live_plot(runner)"
440
+    "runner.live_plot()"
430 441
    ]
431 442
   },
432 443
   {
... ...
@@ -469,7 +480,7 @@
469 480
    "outputs": [],
470 481
    "source": [
471 482
     "plotter = lambda learner: hv.Overlay([L.plot() for L in learner.learners])\n",
472
-    "adaptive.live_plot(runner, plotter=plotter)"
483
+    "runner.live_plot(plotter=plotter)"
473 484
    ]
474 485
   },
475 486
   {
... ...
@@ -547,7 +558,7 @@
547 558
    "outputs": [],
548 559
    "source": [
549 560
     "learner.plot = _learner.plot\n",
550
-    "adaptive.live_plot(runner)"
561
+    "runner.live_plot()"
551 562
    ]
552 563
   },
553 564
   {
... ...
@@ -608,7 +619,7 @@
608 619
     "\n",
609 620
     "learner = adaptive.Learner1D(f, bounds=(-1, 1))\n",
610 621
     "runner = adaptive.Runner(learner, executor=executor, goal=lambda l: l.loss() < 0.1)\n",
611
-    "adaptive.live_plot(runner)"
622
+    "runner.live_plot()"
612 623
    ]
613 624
   },
614 625
   {
... ...
@@ -632,7 +643,7 @@
632 643
     "\n",
633 644
     "learner = adaptive.Learner1D(f, bounds=(-1, 1))\n",
634 645
     "runner = adaptive.Runner(learner, executor=client, goal=lambda l: l.loss() < 0.1)\n",
635
-    "adaptive.live_plot(runner)"
646
+    "runner.live_plot()"
636 647
    ]
637 648
   },
638 649
   {
... ...
@@ -673,7 +684,7 @@
673 684
    "source": [
674 685
     "learner = adaptive.Learner1D(f, bounds=(-1, 1))\n",
675 686
     "runner = adaptive.Runner(learner)\n",
676
-    "adaptive.live_plot(runner)"
687
+    "runner.live_plot()"
677 688
    ]
678 689
   },
679 690
   {
... ...
@@ -718,7 +729,7 @@
718 729
     "    \n",
719 730
     "learner = adaptive.Learner1D(will_raise, (-1, 1))\n",
720 731
     "runner = adaptive.Runner(learner)  # without 'goal' the runner will run forever unless cancelled\n",
721
-    "adaptive.live_plot(runner)"
732
+    "runner.live_plot()"
722 733
    ]
723 734
   },
724 735
   {
... ...
@@ -778,7 +789,7 @@
778 789
     "learner = adaptive.Learner1D(f, bounds=(-1, 1))\n",
779 790
     "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.1,\n",
780 791
     "                         log=True)\n",
781
-    "adaptive.live_plot(runner)"
792
+    "runner.live_plot()"
782 793
    ]
783 794
   },
784 795
   {
Browse code

add example of runner.live_info to example notebook

Joseph Weston authored on 15/02/2018 17:44:27
Showing 1 changed files
... ...
@@ -105,7 +105,8 @@
105 105
    "source": [
106 106
     "# The end condition is when the \"loss\" is less than 0.1. In the context of the\n",
107 107
     "# 1D learner this means that we will resolve features in 'func' with width 0.1 or wider.\n",
108
-    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01)"
108
+    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01)\n",
109
+    "runner.live_info()"
109 110
    ]
110 111
   },
111 112
   {
Browse code

2D: holoviews hack for plotting loss, https://github.com/ioam/holoviews/issues/2215

Bas Nijholt authored on 19/12/2017 12:07:21
Showing 1 changed files
... ...
@@ -207,8 +207,8 @@
207 207
     "def plot(learner):\n",
208 208
     "    plot = learner.plot(tri_alpha=0.2)\n",
209 209
     "    title = f'loss={learner._loss:.3f}, n_points={learner.n}'\n",
210
-    "    opts = dict(style=(dict(alpha=1)), plot=dict(title_format=title))\n",
211
-    "    return (plot.Image + plot.EdgePaths.opts(**opts) + plot)\n",
210
+    "    opts = dict(plot=dict(title_format=title))\n",
211
+    "    return plot.Image + plot.EdgePaths.I.clone().opts(**opts) + plot\n",
212 212
     "\n",
213 213
     "adaptive.live_plot(runner, plotter=plot, update_interval=1)"
214 214
    ]
... ...
@@ -225,7 +225,7 @@
225 225
     "xs, ys = [np.linspace(*bounds, n) for bounds in learner.bounds]\n",
226 226
     "xys = list(itertools.product(xs, ys))\n",
227 227
     "learner2.add_data(xys, map(partial(ring, wait=False), xys))\n",
228
-    "learner2.plot().relabel('Homogeneous grid') + learner.plot().relabel('With adaptive')"
228
+    "learner2.plot(n).relabel('Homogeneous grid') + learner.plot().relabel('With adaptive')"
229 229
    ]
230 230
   },
231 231
   {
Browse code

2D: plot loss and nr_points with the live_plot

Bas Nijholt authored on 18/12/2017 18:52:20
Showing 1 changed files
... ...
@@ -206,10 +206,11 @@
206 206
    "source": [
207 207
     "def plot(learner):\n",
208 208
     "    plot = learner.plot(tri_alpha=0.2)\n",
209
-    "    opts = dict(alpha=1)\n",
210
-    "    return plot.Image + plot.EdgePaths.opts(style=opts) + plot\n",
209
+    "    title = f'loss={learner._loss:.3f}, n_points={learner.n}'\n",
210
+    "    opts = dict(style=(dict(alpha=1)), plot=dict(title_format=title))\n",
211
+    "    return (plot.Image + plot.EdgePaths.opts(**opts) + plot)\n",
211 212
     "\n",
212
-    "adaptive.live_plot(runner, plotter=plot)"
213
+    "adaptive.live_plot(runner, plotter=plot, update_interval=1)"
213 214
    ]
214 215
   },
215 216
   {
Browse code

small simplification in the notebook

Bas Nijholt authored on 18/12/2017 18:51:00
Showing 1 changed files
... ...
@@ -220,8 +220,8 @@
220 220
    "source": [
221 221
     "import itertools\n",
222 222
     "learner2 = adaptive.Learner2D(ring, bounds=learner.bounds)\n",
223
-    "xs = np.linspace(*learner.bounds[0], learner.n**0.5)\n",
224
-    "ys = np.linspace(*learner.bounds[1], learner.n**0.5)\n",
223
+    "n = int(learner.n**0.5)\n",
224
+    "xs, ys = [np.linspace(*bounds, n) for bounds in learner.bounds]\n",
225 225
     "xys = list(itertools.product(xs, ys))\n",
226 226
     "learner2.add_data(xys, map(partial(ring, wait=False), xys))\n",
227 227
     "learner2.plot().relabel('Homogeneous grid') + learner.plot().relabel('With adaptive')"
Browse code

2D: update the notebook to use hv.TriMesh

Bas Nijholt authored on 18/12/2017 16:07:07
Showing 1 changed files
... ...
@@ -207,7 +207,7 @@
207 207
     "def plot(learner):\n",
208 208
     "    plot = learner.plot(tri_alpha=0.2)\n",
209 209
     "    opts = dict(alpha=1)\n",
210
-    "    return plot.Image + plot.Contours.opts(style=opts) + plot\n",
210
+    "    return plot.Image + plot.EdgePaths.opts(style=opts) + plot\n",
211 211
     "\n",
212 212
     "adaptive.live_plot(runner, plotter=plot)"
213 213
    ]
Browse code

rename triangles_alpha -> tri_alpha

Bas Nijholt authored on 18/12/2017 11:42:46
Showing 1 changed files
... ...
@@ -205,7 +205,7 @@
205 205
    "outputs": [],
206 206
    "source": [
207 207
     "def plot(learner):\n",
208
-    "    plot = learner.plot(triangles_alpha=0.2)\n",
208
+    "    plot = learner.plot(tri_alpha=0.2)\n",
209 209
     "    opts = dict(alpha=1)\n",
210 210
     "    return plot.Image + plot.Contours.opts(style=opts) + plot\n",
211 211
     "\n",
Browse code

combine live_plots in the notebook

Bas Nijholt authored on 09/12/2017 22:29:28
Showing 1 changed files
... ...
@@ -204,9 +204,12 @@
204 204
    "metadata": {},
205 205
    "outputs": [],
206 206
    "source": [
207
-    "(adaptive.live_plot(runner) +\n",
208
-    " adaptive.live_plot(runner, plotter=lambda l: l.plot(triangles_alpha=1).Contours) +\n",
209
-    " adaptive.live_plot(runner, plotter=lambda l: l.plot(triangles_alpha=0.2)))"
207
+    "def plot(learner):\n",
208
+    "    plot = learner.plot(triangles_alpha=0.2)\n",
209
+    "    opts = dict(alpha=1)\n",
210
+    "    return plot.Image + plot.Contours.opts(style=opts) + plot\n",
211
+    "\n",
212
+    "adaptive.live_plot(runner, plotter=plot)"
210 213
    ]
211 214
   },
212 215
   {
Browse code

fix that plotting always works

Bas Nijholt authored on 14/11/2017 15:23:22
Showing 1 changed files
... ...
@@ -393,7 +393,7 @@
393 393
    "cell_type": "markdown",
394 394
    "metadata": {},
395 395
    "source": [
396
-    "This is again a function with sharp peaks at different x-values and with different constant backgrounds. To learn this function we can use a `Learner1D` with the argument `vector_output=True`."
396
+    "This is again a function with sharp peaks at different x-values and with different constant backgrounds. To learn this function we can use a `Learner1D` as well."
397 397
    ]
398 398
   },
399 399
   {
... ...
@@ -404,7 +404,7 @@
404 404
    "source": [
405 405
     "from adaptive.runner import SequentialExecutor\n",
406 406
     "\n",
407
-    "learner = adaptive.Learner1D(f_levels, bounds=(-1, 1), vector_output=True)\n",
407
+    "learner = adaptive.Learner1D(f_levels, bounds=(-1, 1))\n",
408 408
     "runner = adaptive.Runner(learner, SequentialExecutor(), goal=lambda l: l.loss() < 0.05)"
409 409
    ]
410 410
   },
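Pulling the two changed cells together: judging by this diff, `Learner1D` now detects vector-valued output on its own, so `vector_output=True` is gone. A sketch of the whole example as it reads after the change (run in a notebook; names as in the cells above):

```python
import random
import numpy as np
import adaptive
from adaptive.runner import SequentialExecutor

random.seed(0)
offsets = [random.uniform(-0.8, 0.8) for _ in range(3)]

def f_levels(x, offsets=offsets):
    # Three sharp peaks on linear backgrounds; returns a length-3 vector.
    a = 0.01
    return np.array([offset + x + a**2 / (a**2 + (x - offset)**2)
                     for offset in offsets])

learner = adaptive.Learner1D(f_levels, bounds=(-1, 1))  # no vector_output flag
runner = adaptive.Runner(learner, SequentialExecutor(),
                         goal=lambda l: l.loss() < 0.05)
```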
Browse code

add Learner1D example with vector_output=True

Bas Nijholt authored on 08/11/2017 14:10:13
Showing 1 changed files
... ...
@@ -361,6 +361,69 @@
361 361
     "learner.plot()"
362 362
    ]
363 363
   },
364
+  {
365
+   "cell_type": "markdown",
366
+   "metadata": {},
367
+   "source": [
368
+    "# 1D learner with vector output: `f:ℝ → ℝ^N`"
369
+   ]
370
+  },
371
+  {
372
+   "cell_type": "markdown",
373
+   "metadata": {},
374
+   "source": [
375
+    "Some 1D functions return multiple numbers, such as the following function:"
376
+   ]
377
+  },
378
+  {
379
+   "cell_type": "code",
380
+   "execution_count": null,
381
+   "metadata": {},
382
+   "outputs": [],
383
+   "source": [
384
+    "random.seed(0)\n",
385
+    "offsets = [random.uniform(-0.8, 0.8) for _ in range(3)]\n",
386
+    "def f_levels(x, offsets=offsets):\n",
387
+    "    a = 0.01\n",
388
+    "    return np.array([offset + x + a**2 / (a**2 + (x - offset)**2)\n",
389
+    "                     for offset in offsets])"
390
+   ]
391
+  },
392
+  {
393
+   "cell_type": "markdown",
394
+   "metadata": {},
395
+   "source": [
396
+    "This is again a function with sharp peaks at different x-values and with different constant backgrounds. To learn this function we can use a `Learner1D` with the argument `vector_output=True`."
397
+   ]
398
+  },
399
+  {
400
+   "cell_type": "code",
401
+   "execution_count": null,
402
+   "metadata": {},
403
+   "outputs": [],
404
+   "source": [
405
+    "from adaptive.runner import SequentialExecutor\n",
406
+    "\n",
407
+    "learner = adaptive.Learner1D(f_levels, bounds=(-1, 1), vector_output=True)\n",
408
+    "runner = adaptive.Runner(learner, SequentialExecutor(), goal=lambda l: l.loss() < 0.05)"
409
+   ]
410
+  },
411
+  {
412
+   "cell_type": "markdown",
413
+   "metadata": {},
414
+   "source": [
415
+    "In the plot below we see that the function gets more densely sampled around the peaks, which is the behaviour we want."
416
+   ]
417
+  },
418
+  {
419
+   "cell_type": "code",
420
+   "execution_count": null,
421
+   "metadata": {},
422
+   "outputs": [],
423
+   "source": [
424
+    "adaptive.live_plot(runner)"
425
+   ]
426
+  },
364 427
   {
365 428
    "cell_type": "markdown",
366 429
    "metadata": {},
Browse code

2D: use the triangle_alpha plotting option in the notebook

Bas Nijholt authored on 03/11/2017 12:47:01
Showing 1 changed files
... ...
@@ -204,21 +204,9 @@
204 204
    "metadata": {},
205 205
    "outputs": [],
206 206
    "source": [
207
-    "%%output size=100\n",
208
-    "%%opts Contours (alpha=0.3)\n",
209
-    "\n",
210
-    "def plot(learner):\n",
211
-    "    tri = learner.ip().tri\n",
212
-    "    return hv.Contours([p for p in learner.unscale(tri.points[tri.vertices])])\n",
213
-    "\n",
214
-    "def plot_poly(learner):\n",
215
-    "    tri = learner.ip().tri\n",
216
-    "    return hv.Polygons([p for p in learner.unscale(tri.points[tri.vertices])])\n",
217
-    "\n",
218
-    "\n",
219 207
     "(adaptive.live_plot(runner) +\n",
220
-    " adaptive.live_plot(runner, plotter=plot_poly) +\n",
221
-    " adaptive.live_plot(runner) * adaptive.live_plot(runner, plotter=plot))"
208
+    " adaptive.live_plot(runner, plotter=lambda l: l.plot(triangles_alpha=1).Contours) +\n",
209
+    " adaptive.live_plot(runner, plotter=lambda l: l.plot(triangles_alpha=0.2)))"
222 210
    ]
223 211
   },
224 212
   {
Browse code

copy the bounds in reconstructed learner

Bas Nijholt authored on 07/11/2017 17:24:34
Showing 1 changed files
... ...
@@ -740,7 +740,7 @@
740 740
    "metadata": {},
741 741
    "outputs": [],
742 742
    "source": [
743
-    "reconstructed_learner = adaptive.Learner1D(f, bounds=(-1, 1))\n",
743
+    "reconstructed_learner = adaptive.Learner1D(f, bounds=learner.bounds)\n",
744 744
     "adaptive.runner.replay_log(reconstructed_learner, runner.log)"
745 745
    ]
746 746
   },
Browse code

fix small problems in notebook that arose in meeting

Bas Nijholt authored on 01/11/2017 14:38:51
Showing 1 changed files
... ...
@@ -18,8 +18,8 @@
18 18
     "`adaptive` needs the following packages:\n",
19 19
     "\n",
20 20
     "+ Python 3.6\n",
21
-    "+ holowiews\n",
22
-    "+ bokeh"
21
+    "+ `holoviews`\n",
22
+    "+ `bokeh`"
23 23
    ]
24 24
   },
25 25
   {
... ...
@@ -29,7 +29,13 @@
29 29
    "outputs": [],
30 30
    "source": [
31 31
     "import adaptive\n",
32
-    "adaptive.notebook_extension()"
32
+    "adaptive.notebook_extension()\n",
33
+    "\n",
34
+    "# Import modules that are used in multiple cells\n",
35
+    "import holoviews as hv\n",
36
+    "import numpy as np\n",
37
+    "from functools import partial\n",
38
+    "import random"
33 39
    ]
34 40
   },
35 41
   {
... ...
@@ -54,12 +60,9 @@
54 60
    "metadata": {},
55 61
    "outputs": [],
56 62
    "source": [
57
-    "from functools import partial\n",
58
-    "from random import random\n",
63
+    "offset = random.uniform(-0.5, 0.5)\n",
59 64
     "\n",
60
-    "offset = random() - 0.5\n",
61
-    "\n",
62
-    "def f(x, offset=0, wait=True):\n",
65
+    "def f(x, offset=offset, wait=True):\n",
63 66
     "    from time import sleep\n",
64 67
     "    from random import random\n",
65 68
     "\n",
... ...
@@ -82,7 +85,7 @@
82 85
    "metadata": {},
83 86
    "outputs": [],
84 87
    "source": [
85
-    "learner = adaptive.learner.Learner1D(f, bounds=(-1.0, 1.0))"
88
+    "learner = adaptive.Learner1D(f, bounds=(-1, 1))"
86 89
    ]
87 90
   },
88 91
   {
... ...
@@ -145,11 +148,9 @@
145 148
    "metadata": {},
146 149
    "outputs": [],
147 150
    "source": [
148
-    "import numpy as np\n",
151
+    "learner2 = adaptive.Learner1D(f, bounds=learner.bounds)\n",
149 152
     "\n",
150
-    "learner2 = adaptive.learner.Learner1D(f, bounds=(-1.01, 1.0))\n",
151
-    "\n",
152
-    "xs = np.linspace(-1.0, 1.0, len(learner.data))\n",
153
+    "xs = np.linspace(*learner.bounds, len(learner.data))\n",
153 154
     "learner2.add_data(xs, map(partial(f, wait=False), xs))\n",
154 155
     "\n",
155 156
     "learner.plot() + learner2.plot()"
... ...
@@ -175,7 +176,7 @@
175 176
    "metadata": {},
176 177
    "outputs": [],
177 178
    "source": [
178
-    "def func(xy, wait=True):\n",
179
+    "def ring(xy, wait=True):\n",
179 180
     "    import numpy as np\n",
180 181
     "    from time import sleep\n",
181 182
     "    from random import random\n",
... ...
@@ -185,7 +186,7 @@
185 186
     "    a = 0.2\n",
186 187
     "    return x + np.exp(-(x**2 + y**2 - 0.75**2)**2/a**4)\n",
187 188
     "\n",
188
-    "learner = adaptive.learner.Learner2D(func, bounds=[(-1, 1), (-1, 1)])"
189
+    "learner = adaptive.Learner2D(ring, bounds=[(-1, 1), (-1, 1)])"
189 190
    ]
190 191
   },
191 192
   {
... ...
@@ -205,7 +206,6 @@
205 206
    "source": [
206 207
     "%%output size=100\n",
207 208
     "%%opts Contours (alpha=0.3)\n",
208
-    "import holoviews as hv\n",
209 209
     "\n",
210 210
     "def plot(learner):\n",
211 211
     "    tri = learner.ip().tri\n",
... ...
@@ -227,11 +227,12 @@
227 227
    "metadata": {},
228 228
    "outputs": [],
229 229
    "source": [
230
-    "import numpy as np\n",
231
-    "learner2 = adaptive.learner.Learner2D(func, bounds=[(-1, 1), (-1, 1)])\n",
232
-    "lin = np.linspace(-1, 1, len(learner.points)**0.5)\n",
233
-    "xy = [(x, y) for x in lin for y in lin]\n",
234
-    "learner2.add_data(xy, map(partial(func, wait=False), xy))\n",
230
+    "import itertools\n",
231
+    "learner2 = adaptive.Learner2D(ring, bounds=learner.bounds)\n",
232
+    "xs = np.linspace(*learner.bounds[0], learner.n**0.5)\n",
233
+    "ys = np.linspace(*learner.bounds[1], learner.n**0.5)\n",
234
+    "xys = list(itertools.product(xs, ys))\n",
235
+    "learner2.add_data(xys, map(partial(ring, wait=False), xys))\n",
235 236
     "learner2.plot().relabel('Homogeneous grid') + learner.plot().relabel('With adaptive')"
236 237
    ]
237 238
   },
... ...
@@ -276,7 +277,7 @@
276 277
    "metadata": {},
277 278
    "outputs": [],
278 279
    "source": [
279
-    "learner = adaptive.learner.AverageLearner(g, None, 0.01)\n",
280
+    "learner = adaptive.AverageLearner(g, atol=None, rtol=0.01)\n",
280 281
     "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 1)\n",
281 282
     "adaptive.live_plot(runner)"
282 283
    ]
... ...
@@ -303,9 +304,6 @@
303 304
    "metadata": {},
304 305
    "outputs": [],
305 306
    "source": [
306
-    "import numpy as np\n",
307
-    "import holoviews as hv\n",
308
-    "\n",
309 307
     "def f24(x):\n",
310 308
     "    return np.floor(np.exp(x))\n",
311 309
     "\n",
... ...
@@ -343,8 +341,9 @@
343 341
    "metadata": {},
344 342
    "outputs": [],
345 343
    "source": [
346
-    "learner = adaptive.learner.IntegratorLearner(f24, bounds=(0, 3), tol=1e-10)\n",
347
-    "runner = adaptive.Runner(learner, executor=adaptive.runner.SequentialExecutor(), goal=lambda l: l.done())"
344
+    "from adaptive.runner import SequentialExecutor\n",
345
+    "learner = adaptive.IntegratorLearner(f24, bounds=(0, 3), tol=1e-10)\n",
346
+    "runner = adaptive.Runner(learner, executor=SequentialExecutor(), goal=lambda l: l.done())"
348 347
    ]
349 348
   },
350 349
   {
... ...
@@ -396,11 +395,15 @@
396 395
    "metadata": {},
397 396
    "outputs": [],
398 397
    "source": [
399
-    "from adaptive.learner import Learner1D, BalancingLearner\n",
398
+    "def f(x, offset):\n",
399
+    "    a = 0.01\n",
400
+    "    return x + a**2 / (a**2 + (x - offset)**2)\n",
401
+    "\n",
402
+    "learners = [adaptive.Learner1D(partial(f, offset=random.uniform(-1, 1)),\n",
403
+    "            bounds=(-1, 1)) for i in range(10)]\n",
400 404
     "\n",
401
-    "learners = [Learner1D(partial(f, offset=2*random()-1, wait=False), bounds=(-1.0, 1.0)) for i in range(10)]\n",
402
-    "learner = BalancingLearner(learners)\n",
403
-    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.02)"
405
+    "bal_learner = adaptive.BalancingLearner(learners)\n",
406
+    "runner = adaptive.Runner(bal_learner, goal=lambda l: l.loss() < 0.01)"
404 407
    ]
405 408
   },
406 409
   {
... ...
@@ -409,8 +412,8 @@
409 412
    "metadata": {},
410 413
    "outputs": [],
411 414
    "source": [
412
-    "import holoviews as hv\n",
413
-    "adaptive.live_plot(runner, plotter=lambda learner: hv.Overlay([L.plot() for L in learner.learners]))"
415
+    "plotter = lambda learner: hv.Overlay([L.plot() for L in learner.learners])\n",
416
+    "adaptive.live_plot(runner, plotter=plotter)"
414 417
    ]
415 418
   },
416 419
   {
... ...
@@ -437,10 +440,9 @@
437 440
    "metadata": {},
438 441
    "outputs": [],
439 442
    "source": [
440
-    "from adaptive.learner import DataSaver, Learner1D\n",
441 443
     "from operator import itemgetter\n",
442 444
     "\n",
443
-    "def f(x):\n",
445
+    "def f_dict(x):\n",
444 446
     "    \"\"\"The function evaluation takes roughly the time we `sleep`.\"\"\"\n",
445 447
     "    import random\n",
446 448
     "    from time import sleep\n",
... ...
@@ -453,10 +455,10 @@
453 455
     "\n",
454 456
     "# Create the learner with the function that returns a `dict`\n",
455 457
     "# Note that this learner cannot be passed to a runner.\n",
456
-    "_learner = Learner1D(f, bounds=(-1.0, 1.0))\n",
458
+    "_learner = adaptive.Learner1D(f_dict, bounds=(-1, 1))\n",
457 459
     "\n",
458 460
     "# Wrap the learner in the `DataSavingLearner` and tell it which key it needs to learn\n",
459
-    "learner = DataSaver(_learner, arg_picker=itemgetter('y'))"
461
+    "learner = adaptive.DataSaver(_learner, arg_picker=itemgetter('y'))"
460 462
    ]
461 463
   },
462 464
   {
... ...
@@ -548,7 +550,7 @@
548 550
     "\n",
549 551
     "executor = ProcessPoolExecutor(max_workers=4)\n",
550 552
     "\n",
551
-    "learner = adaptive.learner.Learner1D(f, bounds=(-1, 1))\n",
553
+    "learner = adaptive.Learner1D(f, bounds=(-1, 1))\n",
552 554
     "runner = adaptive.Runner(learner, executor=executor, goal=lambda l: l.loss() < 0.1)\n",
553 555
     "adaptive.live_plot(runner)"
554 556
    ]
... ...
@@ -572,7 +574,7 @@
572 574
     "# f is a closure, so we have to use cloudpickle -- this is independent of 'adaptive'\n",
573 575
     "client[:].use_cloudpickle()\n",
574 576
     "\n",
575
-    "learner = adaptive.learner.Learner1D(f, bounds=(-1, 1))\n",
577
+    "learner = adaptive.Learner1D(f, bounds=(-1, 1))\n",
576 578
     "runner = adaptive.Runner(learner, executor=client, goal=lambda l: l.loss() < 0.1)\n",
577 579
     "adaptive.live_plot(runner)"
578 580
    ]
... ...
@@ -613,7 +615,7 @@
613 615
    "metadata": {},
614 616
    "outputs": [],
615 617
    "source": [
616
-    "learner = adaptive.learner.Learner1D(f, bounds=(-1.0, 1.0))\n",
618
+    "learner = adaptive.Learner1D(f, bounds=(-1, 1))\n",
617 619
     "runner = adaptive.Runner(learner)\n",
618 620
     "adaptive.live_plot(runner)"
619 621
    ]
... ...
@@ -717,7 +719,7 @@
717 719
    "metadata": {},
718 720
    "outputs": [],
719 721
    "source": [
720
-    "learner = adaptive.learner.Learner1D(f, bounds=(-1, 1))\n",
722
+    "learner = adaptive.Learner1D(f, bounds=(-1, 1))\n",
721 723
     "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.1,\n",
722 724
     "                         log=True)\n",
723 725
     "adaptive.live_plot(runner)"
... ...
@@ -738,7 +740,7 @@
738 740
    "metadata": {},
739 741
    "outputs": [],
740 742
    "source": [
741
-    "reconstructed_learner = adaptive.learner.Learner1D(f, bounds=(-1, 1))\n",
743
+    "reconstructed_learner = adaptive.Learner1D(f, bounds=(-1, 1))\n",
742 744
     "adaptive.runner.replay_log(reconstructed_learner, runner.log)"
743 745
    ]
744 746
   },
... ...
@@ -789,9 +791,8 @@
789 791
     "\n",
790 792
     "ioloop = asyncio.get_event_loop()\n",
791 793
     "\n",
792
-    "learner = adaptive.learner.IntegratorLearner(f24, bounds=(0, 3), tol=1e-3)\n",
793
-    "runner = adaptive.Runner(learner, executor=adaptive.runner.SequentialExecutor(),\n",
794
-    "                         goal=lambda l: l.done())\n",
794
+    "learner = adaptive.Learner1D(f, bounds=(-1, 1))\n",
795
+    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01)\n",
795 796
     "\n",
796 797
     "timer = ioloop.create_task(time(runner))"
797 798
    ]
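Among the many renames in this commit is the keyword-argument form `adaptive.AverageLearner(g, atol=None, rtol=0.01)`. A self-contained sketch of that averaging example, assembled from the cells this commit touches:

```python
import adaptive

def g(n):
    import random
    # `n` acts as a seed, so each sample is reproducible across workers.
    state = random.getstate()
    random.seed(n)
    val = random.gauss(0.5, 1)
    random.setstate(state)
    return val

# Stop once the relative uncertainty of the mean drops below 1%.
learner = adaptive.AverageLearner(g, atol=None, rtol=0.01)
runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 1)
```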
Browse code

rename meta_data to extra_data

Bas Nijholt authored on 01/11/2017 12:00:05
Showing 1 changed files
... ...
@@ -496,7 +496,7 @@
496 496
    "cell_type": "markdown",
497 497
    "metadata": {},
498 498
    "source": [
499
-    "Now the `DataSavingLearner` will have an dictionary attribute `meta_data` that has `x` as key and the extra data as values."
499
+    "Now the `DataSavingLearner` will have an dictionary attribute `extra_data` that has `x` as key and the data that was returned by `learner.function` as values."
500 500
    ]
501 501
   },
502 502
   {
... ...
@@ -505,7 +505,7 @@
505 505
    "metadata": {},
506 506
    "outputs": [],
507 507
    "source": [
508
-    "learner.meta_data"
508
+    "learner.extra_data"
509 509
    ]
510 510
   },
511 511
   {
Browse code

add example usage of the DataSaver to the notebook

Bas Nijholt authored on 31/10/2017 17:39:54
Showing 1 changed files
... ...
@@ -413,6 +413,101 @@
413 413
     "adaptive.live_plot(runner, plotter=lambda learner: hv.Overlay([L.plot() for L in learner.learners]))"
414 414
    ]
415 415
   },
416
+  {
417
+   "cell_type": "markdown",
418
+   "metadata": {},
419
+   "source": [
420
+    "# DataSaver"
421
+   ]
422
+  },
423
+  {
424
+   "cell_type": "markdown",
425
+   "metadata": {},
426
+   "source": [
427
+    "Sometimes we want to learn functions that do not only return a single `float`, but instead return multiple values (even though we only want to learn one value.)\n",
428
+    "\n",
429
+    "We can wrap our learners with a `adaptive.DataSaver` such that the learner will be able to handle functions that return results.\n",
430
+    "\n",
431
+    "Take for example this function where we want to remember the time that every point took:"
432
+   ]
433
+  },
434
+  {
435
+   "cell_type": "code",
436
+   "execution_count": null,
437
+   "metadata": {},
438
+   "outputs": [],
439
+   "source": [
440
+    "from adaptive.learner import DataSaver, Learner1D\n",
441
+    "from operator import itemgetter\n",
442
+    "\n",
443
+    "def f(x):\n",
444
+    "    \"\"\"The function evaluation takes roughly the time we `sleep`.\"\"\"\n",
445
+    "    import random\n",
446
+    "    from time import sleep\n",
447
+    "\n",
448
+    "    waiting_time = random.random()\n",
449
+    "    sleep(waiting_time)\n",
450
+    "    a = 0.01\n",
451
+    "    y = x + a**2 / (a**2 + x**2)\n",
452
+    "    return {'y': y, 'waiting_time': waiting_time}\n",
453
+    "\n",
454
+    "# Create the learner with the function that returns a `dict`\n",
455
+    "# Note that this learner cannot be passed to a runner.\n",
456
+    "_learner = Learner1D(f, bounds=(-1.0, 1.0))\n",
457
+    "\n",
458
+    "# Wrap the learner in the `DataSavingLearner` and tell it which key it needs to learn\n",
459
+    "learner = DataSaver(_learner, arg_picker=itemgetter('y'))"
460
+   ]
461
+  },
462
+  {
463
+   "cell_type": "markdown",
464
+   "metadata": {},
465
+   "source": [
466
+    "`learner.learner` is the original learner, so `learner.learner.loss()` will call the correct loss method."
467
+   ]
468
+  },
469
+  {
470
+   "cell_type": "code",
471
+   "execution_count": null,
472
+   "metadata": {},
473
+   "outputs": [],
474
+   "source": [
475
+    "runner = adaptive.Runner(learner, goal=lambda l: l.learner.loss() < 0.01)"
476
+   ]
477
+  },
478
+  {
479
+   "cell_type": "markdown",
480
+   "metadata": {},
481
+   "source": [
482
+    "Because `learner` doesn't have a `plot` function we need to copy `_learner.plot` (or `learner.learner.plot`)"
483
+   ]
484
+  },
485
+  {
486
+   "cell_type": "code",
487
+   "execution_count": null,
488
+   "metadata": {},
489
+   "outputs": [],
490
+   "source": [
491
+    "learner.plot = _learner.plot\n",
492
+    "adaptive.live_plot(runner)"
493
+   ]
494
+  },
495
+  {
496
+   "cell_type": "markdown",
497
+   "metadata": {},
498
+   "source": [
499
+    "Now the `DataSavingLearner` will have an dictionary attribute `meta_data` that has `x` as key and the extra data as values."
500
+   ]
501
+  },
502
+  {
503
+   "cell_type": "code",
504
+   "execution_count": null,
505
+   "metadata": {},
506
+   "outputs": [],
507
+   "source": [
508
+    "learner.meta_data"
509
+   ]
510
+  },
416 511
   {
417 512
    "cell_type": "markdown",
418 513
    "metadata": {
Browse code

add example to time the runner in the notebook

Bas Nijholt authored on 30/10/2017 17:42:48
Showing 1 changed files
... ...
@@ -656,6 +656,61 @@
656 656
     "learner.plot().opts(style=dict(size=6)) * reconstructed_learner.plot()"
657 657
    ]
658 658
   },
659
+  {
660
+   "cell_type": "markdown",
661
+   "metadata": {},
662
+   "source": [
663
+    "### Timing functions"
664
+   ]
665
+  },
666
+  {
667
+   "cell_type": "markdown",
668
+   "metadata": {},
669
+   "source": [
670
+    "To time the runner you **cannot** simply use \n",
671
+    "```python\n",
672
+    "now = datetime.now()\n",
673
+    "runner = adaptive.Runner(...)\n",
674
+    "print(datetime.now() - now)\n",
675
+    "```\n",
676
+    "because this will be done immediately. Also blocking the kernel with `while not runner.task.done()` will not work because the runner will not do anything when the kernel is blocked.\n",
677
+    "\n",
678
+    "Therefore you need to create an `async` function and hook it into the `ioloop` like so:"
679
+   ]
680
+  },
681
+  {
682
+   "cell_type": "code",
683
+   "execution_count": null,
684
+   "metadata": {},
685
+   "outputs": [],
686
+   "source": [
687
+    "import asyncio\n",
688
+    "\n",
689
+    "async def time(runner):\n",
690
+    "    from datetime import datetime\n",
691
+    "    now = datetime.now()\n",
692
+    "    await runner.task\n",
693
+    "    return datetime.now() - now\n",
694
+    "\n",
695
+    "ioloop = asyncio.get_event_loop()\n",
696
+    "\n",
697
+    "learner = adaptive.learner.IntegratorLearner(f24, bounds=(0, 3), tol=1e-3)\n",
698
+    "runner = adaptive.Runner(learner, executor=adaptive.runner.SequentialExecutor(),\n",
699
+    "                         goal=lambda l: l.done())\n",
700
+    "\n",
701
+    "timer = ioloop.create_task(time(runner))"
702
+   ]
703
+  },
704
+  {
705
+   "cell_type": "code",
706
+   "execution_count": null,
707
+   "metadata": {},
708
+   "outputs": [],
709
+   "source": [
710
+    "# The result will only be set when the runner is done.\n",
711
+    "timer.result()"
712
+   ]
713
+  },
659 714
   {
660 715
    "cell_type": "markdown",
661 716
    "metadata": {},
Browse code

small changes to the notebook and use SequentialExecutor

Bas Nijholt authored on 30/10/2017 17:40:47
Showing 1 changed files
... ...
@@ -175,11 +175,12 @@
175 175
    "metadata": {},
176 176
    "outputs": [],
177 177
    "source": [
178
-    "def func(xy):\n",
178
+    "def func(xy, wait=True):\n",
179 179
     "    import numpy as np\n",
180 180
     "    from time import sleep\n",
181 181
     "    from random import random\n",
182
-    "    sleep(random())\n",
182
+    "    if wait:\n",
183
+    "        sleep(random())\n",
183 184
     "    x, y = xy\n",
184 185
     "    a = 0.2\n",
185 186
     "    return x + np.exp(-(x**2 + y**2 - 0.75**2)**2/a**4)\n",
... ...
@@ -204,7 +205,7 @@
204 205
    "source": [
205 206
     "%%output size=100\n",
206 207
     "%%opts Contours (alpha=0.3)\n",
207
-    "from adaptive.learner import *\n",
208
+    "import holoviews as hv\n",
208 209
     "\n",
209 210
     "def plot(learner):\n",
210 211
     "    tri = learner.ip().tri\n",
... ...
@@ -230,7 +231,7 @@
230 231
     "learner2 = adaptive.learner.Learner2D(func, bounds=[(-1, 1), (-1, 1)])\n",
231 232
     "lin = np.linspace(-1, 1, len(learner.points)**0.5)\n",
232 233
     "xy = [(x, y) for x in lin for y in lin]\n",
233
-    "learner2.add_data(xy, map(func, xy))\n",
234
+    "learner2.add_data(xy, map(partial(func, wait=False), xy))\n",
234 235
     "learner2.plot().relabel('Homogeneous grid') + learner.plot().relabel('With adaptive')"
235 236
    ]
236 237
   },
... ...
@@ -275,7 +276,7 @@
275 276
    "metadata": {},
276 277
    "outputs": [],
277 278
    "source": [
278
-    "learner = adaptive.AverageLearner(g, None, 0.01)\n",
279
+    "learner = adaptive.learner.AverageLearner(g, None, 0.01)\n",
279 280
     "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 1)\n",
280 281
     "adaptive.live_plot(runner)"
281 282
    ]
... ...
@@ -342,9 +343,8 @@
342 343
    "metadata": {},
343 344
    "outputs": [],
344 345
    "source": [
345
-    "from cquad import Learner\n",
346
-    "learner = Learner(f24, bounds=(0, 3), tol=1e-3)\n",
347
-    "runner = adaptive.Runner(learner, goal=lambda l: l.done())"
346
+    "learner = adaptive.learner.IntegratorLearner(f24, bounds=(0, 3), tol=1e-10)\n",
347
+    "runner = adaptive.Runner(learner, executor=adaptive.runner.SequentialExecutor(), goal=lambda l: l.done())"
348 348
    ]
349 349
   },
350 350
   {
Browse code

add a cquad example to the notebook

Bas Nijholt authored on 30/10/2017 15:42:30
Showing 1 changed files
... ...
@@ -280,6 +280,100 @@
280 280
     "adaptive.live_plot(runner)"
281 281
    ]
282 282
   },
283
+  {
284
+   "cell_type": "markdown",
285
+   "metadata": {},
286
+   "source": [
287
+    "# 1D integration learner with `cquad`"
288
+   ]
289
+  },
290
+  {
291
+   "cell_type": "markdown",
292
+   "metadata": {},
293
+   "source": [
294
+    "This learner learns a 1D function and calculates the integral and error of the integral with it. It is based on Pedro Gonnet's [implementation](https://www.academia.edu/1976055/Adaptive_quadrature_re-revisited).\n",
295
+    "\n",
296
+    "Let's try the following function with cusps (that is difficult to integrate):"
297
+   ]
298
+  },
299
+  {
300
+   "cell_type": "code",
301
+   "execution_count": null,
302
+   "metadata": {},
303
+   "outputs": [],
304
+   "source": [
305
+    "import numpy as np\n",
306
+    "import holoviews as hv\n",
307
+    "\n",
308
+    "def f24(x):\n",
309
+    "    return np.floor(np.exp(x))\n",
310
+    "\n",
311
+    "xs = np.linspace(0, 3, 200)\n",
312
+    "hv.Scatter((xs, [f24(x) for x in xs]))"
313
+   ]
314
+  },
315
+  {
316
+   "cell_type": "markdown",
317
+   "metadata": {},
318
+   "source": [
319
+    "Just to prove that this really is a difficult to integrate function, let's try a familiar function integrator `scipy.integrate.quad`, which will give us warnings that it encounters difficulties."
320
+   ]
321
+  },
322
+  {
323
+   "cell_type": "code",
324
+   "execution_count": null,
325
+   "metadata": {},
326
+   "outputs": [],
327
+   "source": [
328
+    "import scipy.integrate\n",
329
+    "scipy.integrate.quad(f24, 0, 3)"
330
+   ]
331
+  },
332
+  {
333
+   "cell_type": "markdown",
334
+   "metadata": {},
335
+   "source": [
336
+    "We initialize a learner again and pass the bounds and relative tolerance we want to reach. Then in the `Runner` we pass `goal=lambda l: l.done()` where `learner.done()` is `True` when the relative tolerance has been reached."
337
+   ]
338
+  },
339
+  {
340
+   "cell_type": "code",
341
+   "execution_count": null,
342
+   "metadata": {},
343
+   "outputs": [],
344
+   "source": [
345
+    "from cquad import Learner\n",
346
+    "learner = Learner(f24, bounds=(0, 3), tol=1e-3)\n",
347
+    "runner = adaptive.Runner(learner, goal=lambda l: l.done())"
348
+   ]
349
+  },
350
+  {
351
+   "cell_type": "markdown",
352
+   "metadata": {},
353
+   "source": [
354
+    "Now we could do the live plotting again, but lets just wait untill the runner is done."
355
+   ]
356
+  },
357
+  {
358
+   "cell_type": "code",
359
+   "execution_count": null,
360
+   "metadata": {},
361
+   "outputs": [],
362
+   "source": [
363
+    "if not runner.task.done():\n",
364
+    "    raise RuntimeError('Wait for the runner to finish before executing the cells below!')"
365
+   ]
366
+  },
367
+  {
368
+   "cell_type": "code",
369
+   "execution_count": null,
370
+   "metadata": {},
371
+   "outputs": [],
372
+   "source": [
373
+    "print('The integral value is {} with the corresponding error of {}'.format(learner.igral, learner.err))\n",
374
+    "learner.plot()"
375
+   ]
376
+  },
283 377
   {
284 378
    "cell_type": "markdown",
285 379
    "metadata": {},
Browse code

2D: fix the scaling in the plotting of the polygons

Bas Nijholt authored on 30/10/2017 10:12:24
Showing 1 changed files
... ...
@@ -206,13 +206,13 @@
206 206
     "%%opts Contours (alpha=0.3)\n",
207 207
     "from adaptive.learner import *\n",
208 208
     "\n",
209
-    "def plot(self):\n",
210
-    "    tri = self.ip().tri\n",
211
-    "    return hv.Contours([p for p in tri.points[tri.vertices]])\n",
209
+    "def plot(learner):\n",
210
+    "    tri = learner.ip().tri\n",
211
+    "    return hv.Contours([p for p in learner.unscale(tri.points[tri.vertices])])\n",
212 212
     "\n",
213
-    "def plot_poly(self):\n",
214
-    "    tri = self.ip().tri\n",
215
-    "    return hv.Polygons([p for p in tri.points[tri.vertices]])\n",
213
+    "def plot_poly(learner):\n",
214
+    "    tri = learner.ip().tri\n",
215
+    "    return hv.Polygons([p for p in learner.unscale(tri.points[tri.vertices])])\n",
216 216
     "\n",
217 217
     "\n",
218 218
     "(adaptive.live_plot(runner) +\n",
... ...
@@ -614,7 +614,7 @@
614 614
    "name": "python",
615 615
    "nbconvert_exporter": "python",
616 616
    "pygments_lexer": "ipython3",
617
-   "version": "3.6.2"
617
+   "version": "3.6.3"
618 618
   }
619 619
  },
620 620
  "nbformat": 4,
Browse code

2D: fixes in the notebook

Bas Nijholt authored on 14/09/2017 17:31:42
Showing 1 changed files
... ...
@@ -65,7 +65,7 @@
65 65
     "\n",
66 66
     "    a = 0.01\n",
67 67
     "    if wait:\n",
68
-    "        sleep(random())\n",
68
+    "        sleep(random()*10)\n",
69 69
     "    return x + a**2 / (a**2 + (x - offset)**2)"
70 70
    ]
71 71
   },
... ...
@@ -152,7 +152,7 @@
152 152
     "xs = np.linspace(-1.0, 1.0, len(learner.data))\n",
153 153
     "learner2.add_data(xs, map(partial(f, wait=False), xs))\n",
154 154
     "\n",
155
-    "learner2.plot()"
155
+    "learner.plot() + learner2.plot()"
156 156
    ]
157 157
   },
158 158
   {
... ...
@@ -193,7 +193,7 @@
193 193
    "metadata": {},
194 194
    "outputs": [],
195 195
    "source": [
196
-    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01, log=True)"
196
+    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01)"
197 197
    ]
198 198
   },
199 199
   {
... ...
@@ -207,11 +207,11 @@
207 207
     "from adaptive.learner import *\n",
208 208
     "\n",
209 209
     "def plot(self):\n",
210
-    "    tri = self.ip.tri\n",
210
+    "    tri = self.ip().tri\n",
211 211
     "    return hv.Contours([p for p in tri.points[tri.vertices]])\n",
212 212
     "\n",
213 213
     "def plot_poly(self):\n",
214
-    "    tri = self.ip.tri\n",
214
+    "    tri = self.ip().tri\n",
215 215
     "    return hv.Polygons([p for p in tri.points[tri.vertices]])\n",
216 216
     "\n",
217 217
     "\n",
Browse code

add cool 2D polygon plots for minute meeting

Bas Nijholt authored on 14/09/2017 12:42:00
Showing 1 changed files
... ...
@@ -102,7 +102,7 @@
102 102
    "source": [
103 103
     "# The end condition is when the \"loss\" is less than 0.1. In the context of the\n",
104 104
     "# 1D learner this means that we will resolve features in 'func' with width 0.1 or wider.\n",
105
-    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.1)"
105
+    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01)"
106 106
    ]
107 107
   },
108 108
   {
... ...
@@ -177,6 +177,9 @@
177 177
    "source": [
178 178
     "def func(xy):\n",
179 179
     "    import numpy as np\n",
180
+    "    from time import sleep\n",
181
+    "    from random import random\n",
182
+    "    sleep(random())\n",
180 183
     "    x, y = xy\n",
181 184
     "    a = 0.2\n",
182 185
     "    return x + np.exp(-(x**2 + y**2 - 0.75**2)**2/a**4)\n",
... ...
@@ -190,7 +193,7 @@
190 193
    "metadata": {},
191 194
    "outputs": [],
192 195
    "source": [
193
-    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() > 1500)"
196
+    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01, log=True)"
194 197
    ]
195 198
   },
196 199
   {
... ...
@@ -199,7 +202,22 @@
199 202
    "metadata": {},
200 203
    "outputs": [],
201 204
    "source": [
202
-    "adaptive.live_plot(runner)"
205
+    "%%output size=100\n",
206
+    "%%opts Contours (alpha=0.3)\n",
207
+    "from adaptive.learner import *\n",
208
+    "\n",
209
+    "def plot(self):\n",
210
+    "    tri = self.ip.tri\n",
211
+    "    return hv.Contours([p for p in tri.points[tri.vertices]])\n",
212
+    "\n",
213
+    "def plot_poly(self):\n",
214
+    "    tri = self.ip.tri\n",
215
+    "    return hv.Polygons([p for p in tri.points[tri.vertices]])\n",
216
+    "\n",
217
+    "\n",
218
+    "(adaptive.live_plot(runner) +\n",
219
+    " adaptive.live_plot(runner, plotter=plot_poly) +\n",
220
+    " adaptive.live_plot(runner) * adaptive.live_plot(runner, plotter=plot))"
203 221
    ]
204 222
   },
205 223
   {
Browse code

move 2D learner up in notebook

Bas Nijholt authored on 08/09/2017 12:53:46
Showing 1 changed files
... ...
@@ -159,17 +159,29 @@
159 159
    "cell_type": "markdown",
160 160
    "metadata": {},
161 161
    "source": [
162
-    "# Averaging learner"
162
+    "# 2D function learner"
163 163
    ]
164 164
   },
165 165
   {
166 166
    "cell_type": "markdown",
167 167
    "metadata": {},
168 168
    "source": [
169
-    "The next type of learner averages a function until the uncertainty in the average meets some condition.\n",
169
+    "Besides 1D functions, we can also learn 2D functions: $\\ f: ℝ^2 → ℝ$"
170
+   ]
171
+  },
172
+  {
173
+   "cell_type": "code",
174
+   "execution_count": null,
175
+   "metadata": {},
176
+   "outputs": [],
177
+   "source": [
178
+    "def func(xy):\n",
179
+    "    import numpy as np\n",
180
+    "    x, y = xy\n",
181
+    "    a = 0.2\n",
182
+    "    return x + np.exp(-(x**2 + y**2 - 0.75**2)**2/a**4)\n",
170 183
     "\n",
171
-    "This is useful for sampling a random variable. The function passed to the learner must formally take a single parameter,\n",
172
-    "which should be used like a \"seed\" for the (pseudo-) random variable (although in the current implementation the seed parameter can be ignored by the function)."
184
+    "learner = adaptive.learner.Learner2D(func, bounds=[(-1, 1), (-1, 1)])"
173 185
    ]
174 186
   },
175 187
   {
... ...
@@ -178,16 +190,7 @@
178 190
    "metadata": {},
179 191
    "outputs": [],
180 192
    "source": [
181
-    "def g(n):\n",
182
-    "    import random\n",
183
-    "    from time import sleep\n",
184
-    "    sleep(random.random() / 5)\n",
185
-    "    # Properly save and restore the RNG state\n",
186
-    "    state = random.getstate()\n",
187
-    "    random.seed(n)\n",
188
-    "    val = random.gauss(0.5, 1)\n",
189
-    "    random.setstate(state)\n",
190
-    "    return val"
193
+    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() > 1500)"
191 194
    ]
192 195
   },
193 196
   {
... ...
@@ -196,25 +199,38 @@
196 199
    "metadata": {},
197 200
    "outputs": [],
198 201
    "source": [
199
-    "learner = adaptive.AverageLearner(g, None, 0.01)\n",
200
-    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 1)\n",
201 202
     "adaptive.live_plot(runner)"
202 203
    ]
203 204
   },
205
+  {
206
+   "cell_type": "code",
207
+   "execution_count": null,
208
+   "metadata": {},
209
+   "outputs": [],
210
+   "source": [
211
+    "import numpy as np\n",
212
+    "learner2 = adaptive.learner.Learner2D(func, bounds=[(-1, 1), (-1, 1)])\n",
213
+    "lin = np.linspace(-1, 1, len(learner.points)**0.5)\n",
214
+    "xy = [(x, y) for x in lin for y in lin]\n",
215
+    "learner2.add_data(xy, map(func, xy))\n",
216
+    "learner2.plot().relabel('Homogeneous grid') + learner.plot().relabel('With adaptive')"
217
+   ]
218
+  },
204 219
   {
205 220
    "cell_type": "markdown",
206 221
    "metadata": {},
207 222
    "source": [
208
-    "# Balancing learner"
223
+    "# Averaging learner"
209 224
    ]
210 225
   },
211 226
   {
212 227
    "cell_type": "markdown",
213 228
    "metadata": {},
214 229
    "source": [
215
-    "The balancing learner is a \"meta-learner\" that takes a list of multiple leaners. The runner wil find find out which points of which child learner will improve the loss the most and send those to the executor.\n",
230
+    "The next type of learner averages a function until the uncertainty in the average meets some condition.\n",
216 231
     "\n",
217
-    "The balancing learner can for example be used to implement a poor-man's 2D learner by using the `Learner1D`."
232
+    "This is useful for sampling a random variable. The function passed to the learner must formally take a single parameter,\n",
233
+    "which should be used like a \"seed\" for the (pseudo-) random variable (although in the current implementation the seed parameter can be ignored by the function)."
218 234
    ]
219 235
   },
220 236
   {
... ...
@@ -223,11 +239,16 @@
223 239
    "metadata": {},
224 240
    "outputs": [],
225 241
    "source": [
226
-    "from adaptive.learner import Learner1D, BalancingLearner\n",
227
-    "\n",
228
-    "learners = [Learner1D(partial(f, offset=2*random()-1, wait=False), bounds=(-1.0, 1.0)) for i in range(10)]\n",
229
-    "learner = BalancingLearner(learners)\n",
230
-    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.02)"
242
+    "def g(n):\n",
243
+    "    import random\n",
244
+    "    from time import sleep\n",
245
+    "    sleep(random.random() / 5)\n",
246
+    "    # Properly save and restore the RNG state\n",
247
+    "    state = random.getstate()\n",
248
+    "    random.seed(n)\n",
249
+    "    val = random.gauss(0.5, 1)\n",
250
+    "    random.setstate(state)\n",
251
+    "    return val"
231 252
    ]
232 253
   },
233 254
   {
... ...
@@ -236,41 +257,25 @@
236 257
    "metadata": {},
237 258
    "outputs": [],
238 259
    "source": [
239
-    "import holoviews as hv\n",
240
-    "adaptive.live_plot(runner, plotter=lambda learner: hv.Overlay([L.plot() for L in learner.learners]))"
260
+    "learner = adaptive.AverageLearner(g, None, 0.01)\n",
261
+    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 1)\n",
262
+    "adaptive.live_plot(runner)"
241 263
    ]
242 264
   },
243 265
   {
244 266
    "cell_type": "markdown",
245 267
    "metadata": {},
246 268
    "source": [
247
-    "# 2D learner"
269
+    "# Balancing learner"
248 270
    ]
249 271
   },
250 272
   {
251
-   "cell_type": "code",
252
-   "execution_count": null,
273
+   "cell_type": "markdown",
253 274
    "metadata": {},
254
-   "outputs": [],
255 275
    "source": [
256
-    "def func(xy):\n",
257
-    "    import numpy as np\n",
258
-    "    x, y = xy\n",
259
-    "    a = 0.2\n",
260
-    "    return x + np.exp(-(x**2 + y**2 - 0.75**2)**2/a**4)\n",
276
+    "The balancing learner is a \"meta-learner\" that takes a list of multiple leaners. The runner wil find find out which points of which child learner will improve the loss the most and send those to the executor.\n",
261 277
     "\n",
262
-    "learner = adaptive.learner.Learner2D(func, bounds=[(-1, 1), (-1, 1)])"
263
-   ]
264
-  },
265
-  {
266
-   "cell_type": "code",
267
-   "execution_count": null,
268
-   "metadata": {},
269
-   "outputs": [],
270
-   "source": [
271
-    "from concurrent.futures import ProcessPoolExecutor\n",
272
-    "executor = ProcessPoolExecutor(max_workers=48)\n",
273
-    "runner = adaptive.Runner(learner, executor, goal=lambda l: l.loss() > 1500, log=True)"
278
+    "The balancing learner can for example be used to implement a poor-man's 2D learner by using the `Learner1D`."
274 279
    ]
275 280
   },
276 281
   {
... ...
@@ -279,7 +284,11 @@
279 284
    "metadata": {},
280 285
    "outputs": [],
281 286
    "source": [
282
-    "adaptive.live_plot(runner)"
287
+    "from adaptive.learner import Learner1D, BalancingLearner\n",
288
+    "\n",
289
+    "learners = [Learner1D(partial(f, offset=2*random()-1, wait=False), bounds=(-1.0, 1.0)) for i in range(10)]\n",
290
+    "learner = BalancingLearner(learners)\n",
291
+    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.02)"
283 292
    ]
284 293
   },
285 294
   {
... ...
@@ -288,12 +297,8 @@
288 297
    "metadata": {},
289 298
    "outputs": [],
290 299
    "source": [
291
-    "import numpy as np\n",
292
-    "learner2 = adaptive.learner.Learner2D(func, bounds=[(-1, 1), (-1, 1)])\n",
293
-    "lin = np.linspace(-1, 1, len(learner.points)**0.5)\n",
294
-    "xy = [(x, y) for x in lin for y in lin]\n",
295
-    "learner2.add_data(xy, map(func, xy))\n",
296
-    "learner2.plot().relabel('Homogeneous grid') + learner.plot().relabel('With adaptive')"
300
+    "import holoviews as hv\n",
301
+    "adaptive.live_plot(runner, plotter=lambda learner: hv.Overlay([L.plot() for L in learner.learners]))"
297 302
    ]
298 303
   },
299 304
   {
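Assembled from the cells this commit rearranges, the 2D example boils down to the following sketch (using the flat `adaptive.Learner2D` spelling and the loss goal from later commits; the `sleep`-based `wait` flag is omitted for brevity):

```python
import numpy as np
import adaptive

def func(xy):
    x, y = xy
    a = 0.2
    # A ring of radius 0.75 on a linear background.
    return x + np.exp(-(x**2 + y**2 - 0.75**2)**2 / a**4)

learner = adaptive.Learner2D(func, bounds=[(-1, 1), (-1, 1)])
runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01)
```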
Browse code

2D: update notebook with homogeneous grid example

Bas Nijholt authored on 06/09/2017 17:48:41
Showing 1 changed files
... ...
@@ -253,13 +253,13 @@
253 253
    "metadata": {},
254 254
    "outputs": [],
255 255
    "source": [
256
-    "def func(arg):\n",
256
+    "def func(xy):\n",
257 257
     "    import numpy as np\n",
258
-    "    x, y = arg\n",
258
+    "    x, y = xy\n",
259 259
     "    a = 0.2\n",
260
-    "    return np.exp(-(x**2 + y**2 - 0.75**2)**2/a**4)\n",
260
+    "    return x + np.exp(-(x**2 + y**2 - 0.75**2)**2/a**4)\n",
261 261
     "\n",
262
-    "learner = adaptive.learner.AdaptiveTriSampling(func, bounds=[(-1, 1), (-1, 1)])"
262
+    "learner = adaptive.learner.Learner2D(func, bounds=[(-1, 1), (-1, 1)])"
263 263
    ]
264 264
   },
265 265
   {
... ...
@@ -269,8 +269,8 @@
269 269
    "outputs": [],
270 270
    "source": [
271 271
     "from concurrent.futures import ProcessPoolExecutor\n",
272
-    "executor = ProcessPoolExecutor(max_workers=2)\n",
273
-    "runner = adaptive.Runner(learner, executor, goal=lambda l: l.loss() > 2000)"
272
+    "executor = ProcessPoolExecutor(max_workers=48)\n",
273
+    "runner = adaptive.Runner(learner, executor, goal=lambda l: l.loss() > 1500, log=True)"
274 274
    ]
275 275
   },
276 276
   {
... ...
@@ -282,6 +282,20 @@
282 282
     "adaptive.live_plot(runner)"
283 283
    ]
284 284
   },
285
+  {
286
+   "cell_type": "code",
287
+   "execution_count": null,
288
+   "metadata": {},
289
+   "outputs": [],
290
+   "source": [
291
+    "import numpy as np\n",
292
+    "learner2 = adaptive.learner.Learner2D(func, bounds=[(-1, 1), (-1, 1)])\n",
293
+    "lin = np.linspace(-1, 1, len(learner.points)**0.5)\n",
294
+    "xy = [(x, y) for x in lin for y in lin]\n",
295
+    "learner2.add_data(xy, map(func, xy))\n",
296
+    "learner2.plot().relabel('Homogeneous grid') + learner.plot().relabel('With adaptive')"
297
+   ]
298
+  },
285 299
   {
286 300
    "cell_type": "markdown",
287 301
    "metadata": {
Browse code

add 2D learner to the notebook

Bas Nijholt authored on 05/09/2017 21:41:42
Showing 1 changed files
... ...
@@ -240,6 +240,48 @@
240 240
     "adaptive.live_plot(runner, plotter=lambda learner: hv.Overlay([L.plot() for L in learner.learners]))"
241 241
    ]
242 242
   },
243
+  {
244
+   "cell_type": "markdown",
245
+   "metadata": {},
246
+   "source": [
247
+    "# 2D learner"
248
+   ]
249
+  },
250
+  {
251
+   "cell_type": "code",
252
+   "execution_count": null,
253
+   "metadata": {},
254
+   "outputs": [],
255
+   "source": [
256
+    "def func(arg):\n",
257
+    "    import numpy as np\n",
258
+    "    x, y = arg\n",
259
+    "    a = 0.2\n",
260
+    "    return np.exp(-(x**2 + y**2 - 0.75**2)**2/a**4)\n",
261
+    "\n",
262
+    "learner = adaptive.learner.AdaptiveTriSampling(func, bounds=[(-1, 1), (-1, 1)])"
263
+   ]
264
+  },
265
+  {
266
+   "cell_type": "code",
267
+   "execution_count": null,
268
+   "metadata": {},
269
+   "outputs": [],
270
+   "source": [
271
+    "from concurrent.futures import ProcessPoolExecutor\n",
272
+    "executor = ProcessPoolExecutor(max_workers=2)\n",
273
+    "runner = adaptive.Runner(learner, executor, goal=lambda l: l.loss() > 2000)"
274
+   ]
275
+  },
276
+  {
277
+   "cell_type": "code",
278
+   "execution_count": null,
279
+   "metadata": {},
280
+   "outputs": [],
281
+   "source": [
282
+    "adaptive.live_plot(runner)"
283
+   ]
284
+  },
243 285
   {
244 286
    "cell_type": "markdown",
245 287
    "metadata": {
Browse code

Merge branch 'balancinglearner'

Implement BalancingLearner and fix several bugs in 1DLearner.
Also add an abstractmethod `loss_improvement` to learners, that
allows the BalancingLearner to select which learner will give
it the best improvement.

Closes #10 and #13

See merge request !4

Joseph Weston authored on 01/09/2017 14:44:49
Showing 0 changed files
Browse code

add BalancingLearner in the notebook

Bas Nijholt authored on 31/08/2017 16:13:57
Showing 1 changed files
... ...
@@ -54,7 +54,7 @@
54 54
    "metadata": {},
55 55
    "outputs": [],
56 56
    "source": [
57
-    "import functools\n",
57
+    "from functools import partial\n",
58 58
     "from random import random\n",
59 59
     "\n",
60 60
     "offset = random() - 0.5\n",
... ...
@@ -150,7 +150,7 @@
150 150
     "learner2 = adaptive.learner.Learner1D(f, bounds=(-1.01, 1.0))\n",
151 151
     "\n",
152 152
     "xs = np.linspace(-1.0, 1.0, len(learner.data))\n",
153
-    "learner2.add_data(xs, map(functools.partial(f, wait=False), xs))\n",
153
+    "learner2.add_data(xs, map(partial(f, wait=False), xs))\n",
154 154
     "\n",
155 155
     "learner2.plot()"
156 156
    ]
... ...
@@ -196,11 +196,50 @@
196 196
    "metadata": {},
197 197
    "outputs": [],
198 198
    "source": [
199
-    "learner = adaptive.AverageLearner(g, None, 0.03)\n",
199
+    "learner = adaptive.AverageLearner(g, None, 0.01)\n",
200 200
     "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 1)\n",
201 201
     "adaptive.live_plot(runner)"
202 202
    ]
203 203
   },
204
+  {
205
+   "cell_type": "markdown",
206
+   "metadata": {},
207
+   "source": [
208
+    "# Balancing learner"
209
+   ]
210
+  },
211
+  {
212
+   "cell_type": "markdown",
213
+   "metadata": {},
214
+   "source": [
215
+    "The balancing learner is a \"meta-learner\" that takes a list of multiple leaners. The runner wil find find out which points of which child learner will improve the loss the most and send those to the executor.\n",
216
+    "\n",
217
+    "The balancing learner can for example be used to implement a poor-man's 2D learner by using the `Learner1D`."
218
+   ]
219
+  },
220
+  {
221
+   "cell_type": "code",
222
+   "execution_count": null,
223
+   "metadata": {},
224
+   "outputs": [],
225
+   "source": [
226
+    "from adaptive.learner import Learner1D, BalancingLearner\n",
227
+    "\n",
228
+    "learners = [Learner1D(partial(f, offset=2*random()-1, wait=False), bounds=(-1.0, 1.0)) for i in range(10)]\n",
229
+    "learner = BalancingLearner(learners)\n",
230
+    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.02)"
231
+   ]
232
+  },
233
+  {
234
+   "cell_type": "code",
235
+   "execution_count": null,
236
+   "metadata": {},
237
+   "outputs": [],
238
+   "source": [
239
+    "import holoviews as hv\n",
240
+    "adaptive.live_plot(runner, plotter=lambda learner: hv.Overlay([L.plot() for L in learner.learners]))"
241
+   ]
242
+  },
204 243
   {
205 244
    "cell_type": "markdown",
206 245
    "metadata": {
Browse code

remove meta data cruft

Bas Nijholt authored on 31/08/2017 16:13:37
Showing 1 changed files
... ...
@@ -51,9 +51,7 @@
51 51
   {
52 52
    "cell_type": "code",
53 53
    "execution_count": null,
54
-   "metadata": {
55
-    "collapsed": true
56
-   },
54
+   "metadata": {},
57 55
    "outputs": [],
58 56
    "source": [
59 57
     "import functools\n",
... ...
@@ -81,9 +79,7 @@
81 79
   {
82 80
    "cell_type": "code",
83 81
    "execution_count": null,
84
-   "metadata": {
85
-    "collapsed": true
86
-   },
82
+   "metadata": {},
87 83
    "outputs": [],
88 84
    "source": [
89 85
     "learner = adaptive.learner.Learner1D(f, bounds=(-1.0, 1.0))"
... ...
@@ -101,9 +97,7 @@
101 97
   {
102 98
    "cell_type": "code",
103 99
    "execution_count": null,
104
-   "metadata": {
105
-    "collapsed": true
106
-   },
100
+   "metadata": {},
107 101
    "outputs": [],
108 102
    "source": [
109 103
     "# The end condition is when the \"loss\" is less than 0.1. In the context of the\n",
... ...
@@ -122,9 +116,7 @@
122 116
   {
123 117
    "cell_type": "code",
124 118
    "execution_count": null,
125
-   "metadata": {
126
-    "collapsed": true
127
-   },
119
+   "metadata": {},
128 120
    "outputs": [],
129 121
    "source": [
130 122
     "adaptive.live_plot(runner)"
... ...
@@ -140,9 +132,7 @@
140 132
   {
141 133
    "cell_type": "code",
142 134
    "execution_count": null,
143
-   "metadata": {
144
-    "collapsed": true
145
-   },
135
+   "metadata": {},
146 136
    "outputs": [],
147 137
    "source": [
148 138
     "if not runner.task.done():\n",
... ...
@@ -152,9 +142,7 @@
152 142
   {
153 143
    "cell_type": "code",
154 144
    "execution_count": null,
155
-   "metadata": {
156
-    "collapsed": true
157
-   },
145
+   "metadata": {},
158 146
    "outputs": [],
159 147
    "source": [
160 148
     "import numpy as np\n",
... ...
@@ -187,9 +175,7 @@
187 175
   {
188 176
    "cell_type": "code",
189 177
    "execution_count": null,
190
-   "metadata": {
191
-    "collapsed": true
192
-   },
178
+   "metadata": {},
193 179
    "outputs": [],
194 180
    "source": [
195 181
     "def g(n):\n",
... ...
@@ -207,9 +193,7 @@
207 193
   {
208 194
    "cell_type": "code",
209 195
    "execution_count": null,
210
-   "metadata": {
211
-    "scrolled": false
212
-   },
196
+   "metadata": {},
213 197
    "outputs": [],
214 198
    "source": [
215 199
     "learner = adaptive.AverageLearner(g, None, 0.03)\n",
... ...
@@ -250,9 +234,7 @@
250 234
   {
251 235
    "cell_type": "code",
252 236
    "execution_count": null,
253
-   "metadata": {
254
-    "collapsed": true
255
-   },
237
+   "metadata": {},
256 238
    "outputs": [],
257 239
    "source": [
258 240
     "from concurrent.futures import ProcessPoolExecutor\n",
... ...
@@ -274,9 +256,7 @@
274 256
   {
275 257
    "cell_type": "code",
276 258
    "execution_count": null,
277
-   "metadata": {
278
-    "collapsed": true
279
-   },
259
+   "metadata": {},
280 260
    "outputs": [],
281 261
    "source": [
282 262
     "import ipyparallel\n",
... ...
@@ -323,9 +303,7 @@
323 303
   {
324 304
    "cell_type": "code",
325 305
    "execution_count": null,
326
-   "metadata": {
327
-    "collapsed": true
328
-   },
306
+   "metadata": {},
329 307
    "outputs": [],
330 308
    "source": [
331 309
     "learner = adaptive.learner.Learner1D(f, bounds=(-1.0, 1.0))\n",
... ...
@@ -336,9 +314,7 @@
336 314
   {
337 315
    "cell_type": "code",
338 316
    "execution_count": null,
339
-   "metadata": {
340
-    "collapsed": true
341
-   },
317
+   "metadata": {},
342 318
    "outputs": [],
343 319
    "source": [
344 320
     "runner.task.cancel()"
... ...
@@ -363,9 +339,7 @@
363 339
   {
364 340
    "cell_type": "code",
365 341
    "execution_count": null,
366
-   "metadata": {
367
-    "collapsed": true
368
-   },
342
+   "metadata": {},
369 343
    "outputs": [],
370 344
    "source": [
371 345
     "def will_raise(x):\n",
... ...
@@ -394,9 +368,7 @@
394 368
   {
395 369
    "cell_type": "code",
396 370
    "execution_count": null,
397
-   "metadata": {
398
-    "collapsed": true
399
-   },
371
+   "metadata": {},
400 372
    "outputs": [],
401 373
    "source": [
402 374
     "runner.task.done()"
... ...
@@ -412,10 +384,7 @@
412 384
   {
413 385
    "cell_type": "code",
414 386
    "execution_count": null,
415
-   "metadata": {
416
-    "collapsed": true,
417
-    "scrolled": false
418
-   },
387
+   "metadata": {},
419 388
    "outputs": [],
420 389
    "source": [
421 390
     "runner.task.result()"
Browse code

add logging to runners and add an example to the notebook

Joseph Weston authored on 31/08/2017 11:17:25
Showing 1 changed files
... ...
@@ -421,6 +421,60 @@
421 421
     "runner.task.result()"
422 422
    ]
423 423
   },
424
+  {
425
+   "cell_type": "markdown",
426
+   "metadata": {},
427
+   "source": [
428
+    "### Logging runners"
429
+   ]
430
+  },
431
+  {
432
+   "cell_type": "markdown",
433
+   "metadata": {},
434
+   "source": [
435
+    "Runners do their job in the background, which makes introspection quite cumbersome. One way to inspect runners is to instantiate one with `log=True`:"
436
+   ]
437
+  },
438
+  {
439
+   "cell_type": "code",
440
+   "execution_count": null,
441
+   "metadata": {},
442
+   "outputs": [],
443
+   "source": [
444
+    "learner = adaptive.learner.Learner1D(f, bounds=(-1, 1))\n",
445
+    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.1,\n",
446
+    "                         log=True)\n",
447
+    "adaptive.live_plot(runner)"
448
+   ]
449
+  },
450
+  {
451
+   "cell_type": "markdown",
452
+   "metadata": {},
453
+   "source": [
454
+    "This gives a the runner a `log` attribute, which is a list of the `learner` methods that were called, as well as their arguments. This is useful because executors typically execute their tasks in a non-deterministic order.\n",
455
+    "\n",
456
+    "This can be used with `adaptive.runner.replay_log` to perfom the same set of operations on another runner:\n"
457
+   ]
458
+  },
459
+  {
460
+   "cell_type": "code",
461
+   "execution_count": null,
462
+   "metadata": {},
463
+   "outputs": [],
464
+   "source": [
465
+    "reconstructed_learner = adaptive.learner.Learner1D(f, bounds=(-1, 1))\n",
466
+    "adaptive.runner.replay_log(reconstructed_learner, runner.log)"
467
+   ]
468
+  },
469
+  {
470
+   "cell_type": "code",
471
+   "execution_count": null,
472
+   "metadata": {},
473
+   "outputs": [],
474
+   "source": [
475
+    "learner.plot().opts(style=dict(size=6)) * reconstructed_learner.plot()"
476
+   ]
477
+  },
424 478
   {
425 479
    "cell_type": "markdown",
426 480
    "metadata": {},
Browse code

update function name

Joseph Weston authored on 26/08/2017 09:50:57
Showing 1 changed files
... ...
@@ -93,7 +93,7 @@
93 93
    "cell_type": "markdown",
94 94
    "metadata": {},
95 95
    "source": [
96
-    "Next we create a \"runner\" that will request points from the learner and evaluate 'func' on them.\n",
96
+    "Next we create a \"runner\" that will request points from the learner and evaluate 'f' on them.\n",
97 97
     "\n",
98 98
     "By default the runner will evaluate the points in parallel using local processes ([`concurrent.futures.ProcessPoolExecutor`](https://docs.python.org/3/library/concurrent.futures.html#processpoolexecutor))."
99 99
    ]
... ...
@@ -162,7 +162,7 @@
162 162
     "learner2 = adaptive.learner.Learner1D(f, bounds=(-1.01, 1.0))\n",
163 163
     "\n",
164 164
     "xs = np.linspace(-1.0, 1.0, len(learner.data))\n",
165
-    "learner2.add_data(xs, map(functools.partial(func, wait=False), xs))\n",
165
+    "learner2.add_data(xs, map(functools.partial(f, wait=False), xs))\n",
166 166
     "\n",
167 167
     "learner2.plot()"
168 168
    ]
Browse code

update repository link

Joseph Weston authored on 23/08/2017 16:24:36
Showing 1 changed files
... ...
@@ -11,7 +11,7 @@
11 11
    "cell_type": "markdown",
12 12
    "metadata": {},
13 13
    "source": [
14
-    "[`adaptive`](https://gitlab.kwant-project.org/basnijholt/adaptive-evaluation) is a package for adaptively sampling functions with support for parallel evaluation.\n",
14
+    "[`adaptive`](https://gitlab.kwant-project.org/qt/adaptive-evaluation) is a package for adaptively sampling functions with support for parallel evaluation.\n",
15 15
     "\n",
16 16
     "This is an introductory notebook that shows some basic use cases.\n",
17 17
     "\n",
Browse code

add introduction notebook

Joseph Weston authored on 23/08/2017 15:16:39
Showing 1 changed files
... ...
@@ -7,6 +7,47 @@
7 7
     "# Adaptive"
8 8
    ]
9 9
   },
10
+  {
11
+   "cell_type": "markdown",
12
+   "metadata": {},
13
+   "source": [
14
+    "[`adaptive`](https://gitlab.kwant-project.org/basnijholt/adaptive-evaluation) is a package for adaptively sampling functions with support for parallel evaluation.\n",
15
+    "\n",
16
+    "This is an introductory notebook that shows some basic use cases.\n",
17
+    "\n",
18
+    "`adaptive` needs the following packages:\n",
19
+    "\n",
20
+    "+ Python 3.6\n",
21
+    "+ holowiews\n",
22
+    "+ bokeh"
23
+   ]
24
+  },
25
+  {
26
+   "cell_type": "code",
27
+   "execution_count": null,
28
+   "metadata": {},
29
+   "outputs": [],
30
+   "source": [
31
+    "import adaptive\n",
32
+    "adaptive.notebook_extension()"
33
+   ]
34
+  },
35
+  {
36
+   "cell_type": "markdown",
37
+   "metadata": {},
38
+   "source": [
39
+    "# 1D function learner"
40
+   ]
41
+  },
42
+  {
43
+   "cell_type": "markdown",
44
+   "metadata": {},
45
+   "source": [
46
+    "We start with the most common use-case: sampling a 1D function $\\ f: ℝ → ℝ$.\n",
47
+    "\n",
48
+    "We will use the following function, which is a smooth (linear) background with a sharp peak at a random location:"
49
+   ]
50
+  },
10 51
   {
11 52
    "cell_type": "code",
12 53
    "execution_count": null,
... ...
@@ -15,45 +56,99 @@
15 56
    },
16 57
    "outputs": [],
17 58
    "source": [
18
-    "import adaptive\n",
19
-    "adaptive.notebook_extension()\n",
59
+    "import functools\n",
60
+    "from random import random\n",
20 61
     "\n",
21
-    "def func(x, wait=True):\n",
22
-    "    \"\"\"Function with a sharp peak on a smooth background\"\"\"\n",
23
-    "    import numpy as np\n",
62
+    "offset = random() - 0.5\n",
63
+    "\n",
64
+    "def f(x, offset=0, wait=True):\n",
24 65
     "    from time import sleep\n",
25
-    "    from random import randint\n",
66
+    "    from random import random\n",
26 67
     "\n",
27
-    "    x = np.asarray(x)\n",
28
-    "    a = 0.001\n",
68
+    "    a = 0.01\n",
29 69
     "    if wait:\n",
30
-    "        sleep(np.random.randint(0, 2) / 10)\n",
31
-    "    return x + a**2/(a**2 + (x)**2)"
70
+    "        sleep(random())\n",
71
+    "    return x + a**2 / (a**2 + (x - offset)**2)"
32 72
    ]
33 73
   },
34 74
   {
35 75
    "cell_type": "markdown",
76
+   "metadata": {},
77
+   "source": [
78
+    "We start by initializing a 1D \"learner\", which will suggest points to evaluate, and adapt its suggestions as more and more points are evaluated."
79
+   ]
80
+  },
81
+  {
82
+   "cell_type": "code",
83
+   "execution_count": null,
36 84
    "metadata": {
37 85
     "collapsed": true
38 86
    },
87
+   "outputs": [],
39 88
    "source": [
40
-    "## Local Process Pool (default)"
89
+    "learner = adaptive.learner.Learner1D(f, bounds=(-1.0, 1.0))"
90
+   ]
91
+  },
92
+  {
93
+   "cell_type": "markdown",
94
+   "metadata": {},
95
+   "source": [
96
+    "Next we create a \"runner\" that will request points from the learner and evaluate 'func' on them.\n",
97
+    "\n",
98
+    "By default the runner will evaluate the points in parallel using local processes ([`concurrent.futures.ProcessPoolExecutor`](https://docs.python.org/3/library/concurrent.futures.html#processpoolexecutor))."
41 99
    ]
42 100
   },
43 101
   {
44 102
    "cell_type": "code",
45 103
    "execution_count": null,
46 104
    "metadata": {
47
-    "collapsed": true,
48
-    "scrolled": false
105
+    "collapsed": true
106
+   },
107
+   "outputs": [],
108
+   "source": [
109
+    "# The end condition is when the \"loss\" is less than 0.1. In the context of the\n",
110
+    "# 1D learner this means that we will resolve features in 'func' with width 0.1 or wider.\n",
111
+    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.1)"
112
+   ]
113
+  },
114
+  {
115
+   "cell_type": "markdown",
116
+   "metadata": {},
117
+   "source": [
118
+    "When instantiated in a Jupyter notebook the runner does its job in the background and does not block the IPython kernel.\n",
119
+    "We can use this to create a plot that updates as new data arrives:"
120
+   ]
121
+  },
122
+  {
123
+   "cell_type": "code",
124
+   "execution_count": null,
125
+   "metadata": {
126
+    "collapsed": true
49 127
    },
50 128
    "outputs": [],
51 129
    "source": [
52
-    "learner = adaptive.learner.Learner1D(func, bounds=(-1.01, 1.0))\n",
53
-    "runner = adaptive.Runner(learner, goal=lambda l: l.loss(real=True) < 0.01)\n",
54 130
     "adaptive.live_plot(runner)"
55 131
    ]
56 132
   },
133
+  {
134
+   "cell_type": "markdown",
135
+   "metadata": {},
136
+   "source": [
137
+    "We can now compare the adaptive sampling to a homogeneous sampling with the same number of points:"
138
+   ]
139
+  },
140
+  {
141
+   "cell_type": "code",
142
+   "execution_count": null,
143
+   "metadata": {
144
+    "collapsed": true
145
+   },
146
+   "outputs": [],
147
+   "source": [
148
+    "if not runner.task.done():\n",
149
+    "    raise RuntimeError('Wait for the runner to finish before executing the cells below!')"
150
+   ]
151
+  },
57 152
   {
58 153
    "cell_type": "code",
59 154
    "execution_count": null,
... ...
@@ -62,23 +157,118 @@
62 157
    },
63 158
    "outputs": [],
64 159
    "source": [
65
-    "# Same function evaluated on homogeneous grid with same amount of points\n",
66
-    "from functools import partial\n",
67 160
     "import numpy as np\n",
68 161
     "\n",
69
-    "learner2 = adaptive.learner.Learner1D(func, bounds=(-1.01, 1.0))\n",
70
-    "xs = np.linspace(-1.01, 1.0, len(learner.data))\n",
71
-    "learner2.add_data(xs, map(partial(func, wait=False), xs))\n",
162
+    "learner2 = adaptive.learner.Learner1D(f, bounds=(-1.01, 1.0))\n",
163
+    "\n",
164
+    "xs = np.linspace(-1.0, 1.0, len(learner.data))\n",
165
+    "learner2.add_data(xs, map(functools.partial(func, wait=False), xs))\n",
166
+    "\n",
72 167
     "learner2.plot()"
73 168
    ]
74 169
   },
75 170
   {
76 171
    "cell_type": "markdown",
172
+   "metadata": {},
173
+   "source": [
174
+    "# Averaging learner"
175
+   ]
176
+  },
177
+  {
178
+   "cell_type": "markdown",
179
+   "metadata": {},
180
+   "source": [
181
+    "The next type of learner averages a function until the uncertainty in the average meets some condition.\n",
182
+    "\n",
183
+    "This is useful for sampling a random variable. The function passed to the learner must formally take a single parameter,\n",
184
+    "which should be used like a \"seed\" for the (pseudo-) random variable (although in the current implementation the seed parameter can be ignored by the function)."
185
+   ]
186
+  },
187
+  {
188
+   "cell_type": "code",
189
+   "execution_count": null,
77 190
    "metadata": {
78 191
     "collapsed": true
79 192
    },
193
+   "outputs": [],
80 194
    "source": [
81
-    "## ipyparallel"
195
+    "def g(n):\n",
196
+    "    import random\n",
197
+    "    from time import sleep\n",
198
+    "    sleep(random.random() / 5)\n",
199
+    "    # Properly save and restore the RNG state\n",
200
+    "    state = random.getstate()\n",
201
+    "    random.seed(n)\n",
202
+    "    val = random.gauss(0.5, 1)\n",
203
+    "    random.setstate(state)\n",
204
+    "    return val"
205
+   ]
206
+  },
207
+  {
208
+   "cell_type": "code",
209
+   "execution_count": null,
210
+   "metadata": {
211
+    "scrolled": false
212
+   },
213
+   "outputs": [],
214
+   "source": [
215
+    "learner = adaptive.AverageLearner(g, None, 0.03)\n",
216
+    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 1)\n",
217
+    "adaptive.live_plot(runner)"
218
+   ]
219
+  },
220
+  {
221
+   "cell_type": "markdown",
222
+   "metadata": {
223
+    "collapsed": true
224
+   },
225
+   "source": [
226
+    "## Alternative executors"
227
+   ]
228
+  },
229
+  {
230
+   "cell_type": "markdown",
231
+   "metadata": {},
232
+   "source": [
233
+    "Often you will want to evaluate the function on some remote computing resources. `adaptive` works out of the box with any framework that implements a [PEP 3148](https://www.python.org/dev/peps/pep-3148/) compliant executor that returns `concurrent.futures.Future` objects."
234
+   ]
235
+  },
236
+  {
237
+   "cell_type": "markdown",
238
+   "metadata": {},
239
+   "source": [
240
+    "### `concurrent.futures`"
241
+   ]
242
+  },
243
+  {
244
+   "cell_type": "markdown",
245
+   "metadata": {},
246
+   "source": [
247
+    "By default a runner creates a `ProcessPoolExecutor`, but you can also pass one explicitly e.g. to limit the number of workers:"
248
+   ]
249
+  },
250
+  {
251
+   "cell_type": "code",
252
+   "execution_count": null,
253
+   "metadata": {
254
+    "collapsed": true
255
+   },
256
+   "outputs": [],
257
+   "source": [
258
+    "from concurrent.futures import ProcessPoolExecutor\n",
259
+    "\n",
260
+    "executor = ProcessPoolExecutor(max_workers=4)\n",
261
+    "\n",
262
+    "learner = adaptive.learner.Learner1D(f, bounds=(-1, 1))\n",
263
+    "runner = adaptive.Runner(learner, executor=executor, goal=lambda l: l.loss() < 0.1)\n",
264
+    "adaptive.live_plot(runner)"
265
+   ]
266
+  },
267
+  {
268
+   "cell_type": "markdown",
269
+   "metadata": {},
270
+   "source": [
271
+    "### IPyparallel"
82 272
    ]
83 273
   },
84 274
   {
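
As an aside on the seeding pattern in `g`: a local `random.Random(n)` instance gives the same reproducibility without touching (and having to restore) the global RNG state. A sketch of that variant, using the same `AverageLearner` arguments as above:

```python
import random

import adaptive

def g(n):
    # A dedicated RNG keyed on the seed; the global random state is
    # untouched, so no getstate()/setstate() bookkeeping is needed.
    rng = random.Random(n)
    return rng.gauss(0.5, 1)

learner = adaptive.AverageLearner(g, None, 0.03)
runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 1)
```
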
... ...
@@ -92,10 +282,11 @@
92 282
     "import ipyparallel\n",
93 283
     "\n",
94 284
     "client = ipyparallel.Client()\n",
285
+    "# f is a closure, so we have to use cloudpickle -- this is independent of 'adaptive'\n",
286
+    "client[:].use_cloudpickle()\n",
95 287
     "\n",
96
-    "# Initialize the learner\n",
97
-    "learner = adaptive.learner.Learner1D(func)\n",
98
-    "runner = adaptive.Runner(learner, client, goal=lambda l: l.loss() < 0.1)\n",
288
+    "learner = adaptive.learner.Learner1D(f, bounds=(-1, 1))\n",
289
+    "runner = adaptive.Runner(learner, executor=client, goal=lambda l: l.loss() < 0.1)\n",
99 290
     "adaptive.live_plot(runner)"
100 291
    ]
101 292
   },
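
The `use_cloudpickle()` call is needed because the standard `pickle` module cannot serialize functions that are not importable by the workers (closures, or anything defined in the notebook's `__main__`). A minimal illustration of the failure mode, independent of `adaptive` and `ipyparallel` (the `make_f` helper is hypothetical):

```python
import pickle

def make_f(offset):
    def f(x):  # a closure over `offset`
        return x + offset
    return f

f = make_f(0.5)
try:
    pickle.dumps(f)
except Exception as err:  # e.g. "Can't pickle local object 'make_f.<locals>.f'"
    print("standard pickle fails on closures:", err)
# cloudpickle serializes such functions by value, which is why the
# ipyparallel engines are switched to it above.
```
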
... ...
@@ -103,7 +294,119 @@
103 294
    "cell_type": "markdown",
104 295
    "metadata": {},
105 296
    "source": [
106
-    "## 0D Learner"
297
+    "---"
298
+   ]
299
+  },
300
+  {
301
+   "cell_type": "markdown",
302
+   "metadata": {},
303
+   "source": [
304
+    "# Advanced Topics"
305
+   ]
306
+  },
307
+  {
308
+   "cell_type": "markdown",
309
+   "metadata": {},
310
+   "source": [
311
+    "## Cancelling a runner"
312
+   ]
313
+  },
314
+  {
315
+   "cell_type": "markdown",
316
+   "metadata": {},
317
+   "source": [
318
+    "Sometimes you want to interactively explore a parameter space, and want the function to be evaluated at finer and finer resolution and manually control when the calculation stops.\n",
319
+    "\n",
320
+    "If no `goal` is provided to a runner then the runner will run until cancelled:"
321
+   ]
322
+  },
323
+  {
324
+   "cell_type": "code",
325
+   "execution_count": null,
326
+   "metadata": {
327
+    "collapsed": true
328
+   },
329
+   "outputs": [],
330
+   "source": [
331
+    "learner = adaptive.learner.Learner1D(f, bounds=(-1.0, 1.0))\n",
332
+    "runner = adaptive.Runner(learner)\n",
333
+    "adaptive.live_plot(runner)"
334
+   ]
335
+  },
336
+  {
337
+   "cell_type": "code",
338
+   "execution_count": null,
339
+   "metadata": {
340
+    "collapsed": true
341
+   },
342
+   "outputs": [],
343
+   "source": [
344
+    "runner.task.cancel()"
345
+   ]
346
+  },
347
+  {
348
+   "cell_type": "markdown",
349
+   "metadata": {},
350
+   "source": [
351
+    "## Debugging Problems "
352
+   ]
353
+  },
354
+  {
355
+   "cell_type": "markdown",
356
+   "metadata": {},
357
+   "source": [
358
+    "Runners work in the background with respect to the IPython kernel, which makes it convenient, but also means that inspecting errors is more difficult because exceptions will not be raised directly in the notebook. Often the only indication you will have that something has gone wrong is that nothing will be happening.\n",
359
+    "\n",
360
+    "Let's look at the following example, where the function to be learned will raise an exception 10% of the time."
361
+   ]
362
+  },
363
+  {
364
+   "cell_type": "code",
365
+   "execution_count": null,
366
+   "metadata": {
367
+    "collapsed": true
368
+   },
369
+   "outputs": [],
370
+   "source": [
371
+    "def will_raise(x):\n",
372
+    "    from random import random\n",
373
+    "    from time import sleep\n",
374
+    "    \n",
375
+    "    sleep(random())\n",
376
+    "    if random() < 0.1:\n",
377
+    "        raise RuntimeError('something went wrong!')\n",
378
+    "    return x**2\n",
379
+    "    \n",
380
+    "learner = adaptive.Learner1D(will_raise, (-1, 1))\n",
381
+    "runner = adaptive.Runner(learner)  # without 'goal' the runner will run forever unless cancelled\n",
382
+    "adaptive.live_plot(runner)"
383
+   ]
384
+  },
385
+  {
386
+   "cell_type": "markdown",
387
+   "metadata": {},
388
+   "source": [
389
+    "The above runner should continue forever, but we notice that it stops after a few points are evaluated.\n",
390
+    "\n",
391
+    "First we should check that the runner has really finished:"
392
+   ]
393
+  },
394
+  {
395
+   "cell_type": "code",
396
+   "execution_count": null,
397
+   "metadata": {
398
+    "collapsed": true
399
+   },
400
+   "outputs": [],
401
+   "source": [
402
+    "runner.task.done()"
403
+   ]
404
+  },
405
+  {
406
+   "cell_type": "markdown",
407
+   "metadata": {},
408
+   "source": [
409
+    "If it has indeed finished then we should check the `result` of the runner. This should be `None` if the runner stopped successfully. If the runner stopped due to an exception then asking for the result will raise the exception with the stack trace:"
107 410
    ]
108 411
   },
109 412
   {
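
Putting the two checks together, a small sketch of the debugging pattern described here, using only standard-library calls plus the `runner.task` attribute shown above:

```python
import traceback

if runner.task.done():
    try:
        # result() is None on a clean stop; if the runner died on an
        # exception, calling it re-raises that exception here.
        runner.task.result()
        print("runner stopped cleanly")
    except Exception:
        traceback.print_exc()
```
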
... ...
@@ -115,15 +418,41 @@
115 418
    },
116 419
    "outputs": [],
117 420
    "source": [
118
-    "def func(x):\n",
119
-    "    import random\n",
120
-    "    import numpy as np\n",
121
-    "    from time import sleep\n",
122
-    "    sleep(np.random.randint(0, 2))\n",
123
-    "    return random.gauss(0.05, 1)\n",
421
+    "runner.task.result()"
422
+   ]
423
+  },
424
+  {
425
+   "cell_type": "markdown",
426
+   "metadata": {},
427
+   "source": [
428
+    "## Using Runners from a script "
429
+   ]
430
+  },
431
+  {
432
+   "cell_type": "markdown",
433
+   "metadata": {},
434
+   "source": [
435
+    "Runners can also be used from a Python script independently of the notebook:\n",
436
+    "\n",
437
+    "```python\n",
438
+    "import adaptive\n",
439
+    "\n",
440
+    "def f(x):\n",
441
+    "    return x\n",
442
+    "\n",
443
+    "learner = adaptive.Learner1D(f, (-1, 1))\n",
444
+    "\n",
445
+    "runner = adaptive.Runner(learner, goal=lambda: l: l.loss() < 0.1)\n",
446
+    "runner.run_sync()  # Block until completion.\n",
447
+    "```\n",
448
+    "\n",
449
+    "Under the hood the runner uses [`asyncio`](https://docs.python.org/3/library/asyncio.html). You don't need to worry about this most of the time, unless your script uses asyncio itself. If this is the case you should be aware that instantiating a `Runner` schedules a new task on the current event loop, and you can simply\n",
450
+    "\n",
451
+    "```python\n",
452
+    "    await runner.task\n",
453
+    "```\n",
124 454
     "\n",
125
-    "learner = adaptive.learner.AverageLearner(func, None, 0.1)\n",
126
-    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 1)"
455
+    "inside a coroutine to await completion of the runner."
127 456
    ]
128 457
   }
129 458
  ],
Browse code

remove superfluous notebooks

Joseph Weston authored on 23/08/2017 09:55:52
Showing 1 changed files
1 1
new file mode 100644
... ...
@@ -0,0 +1,152 @@
1
+{
2
+ "cells": [
3
+  {
4
+   "cell_type": "markdown",
5
+   "metadata": {},
6
+   "source": [
7
+    "# Adaptive"
8
+   ]
9
+  },
10
+  {
11
+   "cell_type": "code",
12
+   "execution_count": null,
13
+   "metadata": {
14
+    "collapsed": true
15
+   },
16
+   "outputs": [],
17
+   "source": [
18
+    "import adaptive\n",
19
+    "adaptive.notebook_extension()\n",
20
+    "\n",
21
+    "def func(x, wait=True):\n",
22
+    "    \"\"\"Function with a sharp peak on a smooth background\"\"\"\n",
23
+    "    import numpy as np\n",
24
+    "    from time import sleep\n",
25
+    "    from random import randint\n",
26
+    "\n",
27
+    "    x = np.asarray(x)\n",
28
+    "    a = 0.001\n",
29
+    "    if wait:\n",
30
+    "        sleep(np.random.randint(0, 2) / 10)\n",
31
+    "    return x + a**2/(a**2 + (x)**2)"
32
+   ]
33
+  },
34
+  {
35
+   "cell_type": "markdown",
36
+   "metadata": {
37
+    "collapsed": true
38
+   },
39
+   "source": [
40
+    "## Local Process Pool (default)"
41
+   ]
42
+  },
43
+  {
44
+   "cell_type": "code",
45
+   "execution_count": null,
46
+   "metadata": {
47
+    "collapsed": true,
48
+    "scrolled": false
49
+   },
50
+   "outputs": [],
51
+   "source": [
52
+    "learner = adaptive.learner.Learner1D(func, bounds=(-1.01, 1.0))\n",
53
+    "runner = adaptive.Runner(learner, goal=lambda l: l.loss(real=True) < 0.01)\n",
54
+    "adaptive.live_plot(runner)"
55
+   ]
56
+  },
57
+  {
58
+   "cell_type": "code",
59
+   "execution_count": null,
60
+   "metadata": {
61
+    "collapsed": true
62
+   },
63
+   "outputs": [],
64
+   "source": [
65
+    "# Same function evaluated on homogeneous grid with same amount of points\n",
66
+    "from functools import partial\n",
67
+    "import numpy as np\n",
68
+    "\n",
69
+    "learner2 = adaptive.learner.Learner1D(func, bounds=(-1.01, 1.0))\n",
70
+    "xs = np.linspace(-1.01, 1.0, len(learner.data))\n",
71
+    "learner2.add_data(xs, map(partial(func, wait=False), xs))\n",
72
+    "learner2.plot()"
73
+   ]
74
+  },
75
+  {
76
+   "cell_type": "markdown",
77
+   "metadata": {
78
+    "collapsed": true
79
+   },
80
+   "source": [
81
+    "## ipyparallel"
82
+   ]
83
+  },
84
+  {
85
+   "cell_type": "code",
86
+   "execution_count": null,
87
+   "metadata": {
88
+    "collapsed": true
89
+   },
90
+   "outputs": [],
91
+   "source": [
92
+    "import ipyparallel\n",
93
+    "\n",
94
+    "client = ipyparallel.Client()\n",
95
+    "\n",
96
+    "# Initialize the learner\n",
97
+    "learner = adaptive.learner.Learner1D(func)\n",
98
+    "runner = adaptive.Runner(learner, client, goal=lambda l: l.loss() < 0.1)\n",
99
+    "adaptive.live_plot(runner)"
100
+   ]
101
+  },
102
+  {
103
+   "cell_type": "markdown",
104
+   "metadata": {},
105
+   "source": [
106
+    "## 0D Learner"
107
+   ]
108
+  },
109
+  {
110
+   "cell_type": "code",
111
+   "execution_count": null,
112
+   "metadata": {
113
+    "collapsed": true,
114
+    "scrolled": false
115
+   },
116
+   "outputs": [],
117
+   "source": [
118
+    "def func(x):\n",
119
+    "    import random\n",
120
+    "    import numpy as np\n",
121
+    "    from time import sleep\n",
122
+    "    sleep(np.random.randint(0, 2))\n",
123
+    "    return random.gauss(0.05, 1)\n",
124
+    "\n",
125
+    "learner = adaptive.learner.AverageLearner(func, None, 0.1)\n",
126
+    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 1)"
127
+   ]
128
+  }
129
+ ],
130
+ "metadata": {
131
+  "anaconda-cloud": {},
132
+  "kernelspec": {
133
+   "display_name": "Python 3",
134
+   "language": "python",
135
+   "name": "python3"
136
+  },
137
+  "language_info": {
138
+   "codemirror_mode": {
139
+    "name": "ipython",
140
+    "version": 3
141
+   },
142
+   "file_extension": ".py",
143
+   "mimetype": "text/x-python",
144
+   "name": "python",
145
+   "nbconvert_exporter": "python",
146
+   "pygments_lexer": "ipython3",
147
+   "version": "3.6.2"
148
+  }
149
+ },
150
+ "nbformat": 4,
151
+ "nbformat_minor": 1
152
+}