update example notebook

Joseph Weston authored on 19/02/2018 15:32:05
Showing 1 changed file
@@ -15,11 +15,19 @@
     "\n",
     "This is an introductory notebook that shows some basic use cases.\n",
     "\n",
-    "`adaptive` needs the following packages:\n",
+    "`adaptive` needs at least Python 3.6, and the following packages:\n",
     "\n",
-    "+ Python 3.6\n",
+    "+ `scipy`\n",
+    "+ `sortedcontainers`\n",
+    "\n",
+    "Additionally `adaptive` has lots of extra functionality that makes it simple to use from Jupyter notebooks.\n",
+    "This extra functionality depends on the following packages:\n",
+    "\n",
+    "+ `ipykernel>=4.8.0`\n",
+    "+ `jupyter_client>=5.2.2`\n",
     "+ `holoviews`\n",
-    "+ `bokeh`"
+    "+ `bokeh`\n",
+    "+ `ipywidgets`"
   ]
  },
  {
@@ -105,7 +113,7 @@
   "source": [
-    "# The end condition is when the \"loss\" is less than 0.1. In the context of the\n",
-    "# 1D learner this means that we will resolve features in 'func' with width 0.1 or wider.\n",
-    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01)\n",
+    "# The end condition is when the \"loss\" is less than 0.05. In the context of the\n",
+    "# 1D learner this means that we will resolve features in 'func' with width 0.05 or wider.\n",
+    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.05)\n",
    "runner.live_info()"
   ]
  },
@@ -222,12 +230,16 @@
   "outputs": [],
   "source": [
    "%%opts EdgePaths (color='w')\n",
+    "\n",
    "import itertools\n",
+    "\n",
+    "# Create a learner and add data on a homogeneous grid, so that we can plot it\n",
    "learner2 = adaptive.Learner2D(ring, bounds=learner.bounds)\n",
    "n = int(learner.npoints**0.5)\n",
    "xs, ys = [np.linspace(*bounds, n) for bounds in learner.bounds]\n",
    "xys = list(itertools.product(xs, ys))\n",
    "learner2.add_data(xys, map(partial(ring, wait=False), xys))\n",
+    "\n",
    "(learner2.plot(n).relabel('Homogeneous grid') + learner.plot().relabel('With adaptive') + \n",
    " learner2.plot(n, tri_alpha=0.4) + learner.plot(tri_alpha=0.4)).cols(2)"
   ]
@@ -258,7 +270,7 @@
    "def g(n):\n",
    "    import random\n",
    "    from time import sleep\n",
-    "    sleep(random.random() / 5)\n",
+    "    sleep(random.random() / 1000)\n",
    "    # Properly save and restore the RNG state\n",
    "    state = random.getstate()\n",
    "    random.seed(n)\n",
@@ -274,7 +286,7 @@
   "outputs": [],
   "source": [
    "learner = adaptive.AverageLearner(g, atol=None, rtol=0.01)\n",
-    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 1)\n",
+    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 2)\n",
    "runner.live_info()"
   ]
  },
@@ -284,7 +296,7 @@
   "metadata": {},
   "outputs": [],
   "source": [
-    "runner.live_plot()"
+    "runner.live_plot(update_interval=0.1)"
   ]
  },
  {
@@ -347,7 +359,11 @@
   "outputs": [],
   "source": [
    "from adaptive.runner import SequentialExecutor\n",
+    "\n",
    "learner = adaptive.IntegratorLearner(f24, bounds=(0, 3), tol=1e-10)\n",
+    "\n",
+    "# We use a SequentialExecutor, which runs the function to be learned in *this* process only. This means we don't pay\n",
+    "# the overhead of evaluating the function in another process.\n",
    "runner = adaptive.Runner(learner, executor=SequentialExecutor(), goal=lambda l: l.done())\n",
    "runner.live_info()"
   ]
@@ -390,7 +406,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-    "Some 1D functions return multiple numbers, such as the following function:"
+    "Sometimes you may want to learn a function with vector output:"
   ]
  },
  {
@@ -401,6 +417,8 @@
   "source": [
    "random.seed(0)\n",
    "offsets = [random.uniform(-0.8, 0.8) for _ in range(3)]\n",
+    "\n",
+    "# sharp peaks at random locations in the domain\n",
    "def f_levels(x, offsets=offsets):\n",
    "    a = 0.01\n",
    "    return np.array([offset + x + a**2 / (a**2 + (x - offset)**2)\n",
@@ -411,7 +429,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-    "This is again a function with sharp peaks at different x-values and with different constant backgrounds. To learn this function we can use a `Learner1D` as well."
+    "`adaptive` has you covered! The `Learner1D` can be used for such functions:"
   ]
  },
  {
@@ -420,25 +438,8 @@
   "metadata": {},
   "outputs": [],
   "source": [
-    "from adaptive.runner import SequentialExecutor\n",
-    "\n",
    "learner = adaptive.Learner1D(f_levels, bounds=(-1, 1))\n",
-    "runner = adaptive.Runner(learner, executor=SequentialExecutor(), goal=lambda l: l.loss() < 0.05)"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "In the plot below we see that the function gets more densely sampled around the peaks, which is the behaviour we want."
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": [
+    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.05)\n",
    "runner.live_plot()"
   ]
  },
@@ -453,7 +454,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-    "The balancing learner is a \"meta-learner\" that takes a list of multiple leaners. The runner wil find find out which points of which child learner will improve the loss the most and send those to the executor.\n",
+    "The balancing learner is a \"meta-learner\" that takes a list of learners. When you request a point from the balancing learner, it will query all of its \"children\" to figure out which one will give the most improvement.\n",
    "\n",
    "The balancing learner can for example be used to implement a poor-man's 2D learner by using the `Learner1D`."
   ]
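
For context on the cell in the next hunk (which plots `learner.learners`), here is a minimal sketch of how such a balancing learner could be assembled; the peak function `h` and its offsets are illustrative assumptions, not part of this commit:

```python
from functools import partial

import adaptive

def h(x, offset=0):
    # a sharp peak on a linear background, as elsewhere in this notebook
    a = 0.01
    return x + a**2 / (a**2 + (x - offset)**2)

# one Learner1D per offset; the BalancingLearner asks each child
# which point would improve the loss the most, and picks the best
learners = [adaptive.Learner1D(partial(h, offset=o), bounds=(-1, 1))
            for o in (-0.5, 0, 0.5)]
learner = adaptive.BalancingLearner(learners)
runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01)
```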
@@ -483,7 +484,7 @@
   "outputs": [],
   "source": [
    "plotter = lambda learner: hv.Overlay([L.plot() for L in learner.learners])\n",
-    "runner.live_plot(plotter=plotter)"
+    "runner.live_plot(plotter=plotter, update_interval=0.1)"
   ]
  },
  {
@@ -497,11 +498,9 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-    "Sometimes we want to learn functions that do not only return a single `float`, but instead return multiple values (even though we only want to learn one value.)\n",
+    "If the function that you want to learn returns a value along with some metadata, you can wrap your learner in an `adaptive.DataSaver`.\n",
    "\n",
-    "We can wrap our learners with a `adaptive.DataSaver` such that the learner will be able to handle functions that return results.\n",
-    "\n",
-    "Take for example this function where we want to remember the time that every point took:"
+    "In the following example the function to be learned returns its result and the execution time in a dictionary:"
   ]
  },
  {
@@ -523,11 +522,11 @@
    "    y = x + a**2 / (a**2 + x**2)\n",
    "    return {'y': y, 'waiting_time': waiting_time}\n",
    "\n",
-    "# Create the learner with the function that returns a `dict`\n",
-    "# Note that this learner cannot be passed to a runner.\n",
+    "# Create the learner with the function that returns a 'dict'\n",
+    "# This learner cannot be run directly, as Learner1D does not know what to do with the 'dict'\n",
    "_learner = adaptive.Learner1D(f_dict, bounds=(-1, 1))\n",
    "\n",
-    "# Wrap the learner in the `DataSavingLearner` and tell it which key it needs to learn\n",
+    "# Wrap the learner in 'adaptive.DataSaver' and tell it which key it needs to learn\n",
    "learner = adaptive.DataSaver(_learner, arg_picker=itemgetter('y'))"
   ]
  },
@@ -544,14 +543,8 @@
   "metadata": {},
   "outputs": [],
   "source": [
-    "runner = adaptive.Runner(learner, goal=lambda l: l.learner.loss() < 0.01)"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "Because `learner` doesn't have a `plot` function we need to copy `_learner.plot` (or `learner.learner.plot`)"
+    "runner = adaptive.Runner(learner, goal=lambda l: l.learner.loss() < 0.05)\n",
+    "runner.live_info()"
   ]
  },
  {
@@ -560,8 +553,7 @@
   "metadata": {},
   "outputs": [],
   "source": [
-    "learner.plot = _learner.plot\n",
-    "runner.live_plot()"
+    "runner.live_plot(plotter=lambda l: l.learner.plot(), update_interval=0.1)"
   ]
  },
  {
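
Once the runner above has finished, the metadata saved by the `DataSaver` can be inspected separately from the learned values; a sketch, assuming the wrapper exposes the saved entries as `extra_data` (worth verifying against your `adaptive` version):

```python
# the learned x -> y data lives on the wrapped Learner1D
print(len(learner.learner.data))

# assumption: the DataSaver keeps the full returned dicts, keyed by x, in 'extra_data'
waiting_times = [d['waiting_time'] for d in learner.extra_data.values()]
print(max(waiting_times))
```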
@@ -586,7 +578,7 @@
    "collapsed": true
   },
   "source": [
-    "## Alternative executors"
+    "# Using multiple cores"
   ]
  },
  {
@@ -600,14 +592,14 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-    "### `concurrent.futures`"
+    "### [`concurrent.futures`](https://docs.python.org/3/library/concurrent.futures.html)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-    "By default a runner creates a `ProcessPoolExecutor`, but you can also pass one explicitly e.g. to limit the number of workers:"
+    "By default `adaptive.Runner` creates a `ProcessPoolExecutor`, but you can also pass one explicitly, e.g. to limit the number of workers:"
   ]
  },
  {
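
The next hunk uses an `executor` name whose construction falls in lines elided from this diff; it is presumably created along these lines:

```python
from concurrent.futures import ProcessPoolExecutor

# limit the runner to at most four worker processes
executor = ProcessPoolExecutor(max_workers=4)
```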
@@ -623,14 +615,14 @@
    "learner = adaptive.Learner1D(f, bounds=(-1, 1))\n",
    "runner = adaptive.Runner(learner, executor=executor, goal=lambda l: l.loss() < 0.05)\n",
    "runner.live_info()\n",
-    "runner.live_plot()"
+    "runner.live_plot(update_interval=0.1)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-    "### IPyparallel"
+    "### [`ipyparallel`](https://ipyparallel.readthedocs.io/en/latest/intro.html)"
   ]
  },
  {
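
The `ipyparallel` cells themselves are elided from this diff. Hooking a runner up to an `ipyparallel` cluster would look roughly like the sketch below; it assumes an `ipcluster` is already running, and that `adaptive` accepts the client as an executor in the same way it accepts the `distributed` client later on:

```python
import ipyparallel

client = ipyparallel.Client()  # connect to the running ipcluster

learner = adaptive.Learner1D(f, bounds=(-1, 1))
runner = adaptive.Runner(learner, executor=client, goal=lambda l: l.loss() < 0.05)
```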
@@ -653,7 +645,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-    "### distributed"
+    "### [`distributed`](https://distributed.readthedocs.io/en/latest/)"
   ]
  },
  {
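
Again the client construction is elided from the diff; with `distributed` it would presumably be:

```python
import distributed

# with no arguments this starts a local scheduler and worker processes
client = distributed.Client()
```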
@@ -669,7 +661,7 @@
    "learner = adaptive.Learner1D(f, bounds=(-1, 1))\n",
    "runner = adaptive.Runner(learner, executor=client, goal=lambda l: l.loss() < 0.01)\n",
    "runner.live_info()\n",
-    "runner.live_plot()"
+    "runner.live_plot(update_interval=0.1)"
   ]
  },
  {
@@ -686,6 +678,110 @@
    "# Advanced Topics"
   ]
  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## A watched pot never boils!"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "When you use `adaptive` from a Jupyter notebook, `adaptive.Runner` does its work in an `asyncio` task that runs concurrently with the IPython kernel. This is advantageous because it allows us to do things like live-updating plots; however, it can trip you up if you're not careful.\n",
+    "\n",
+    "Notably: **if you block the IPython kernel, the runner will not do any work**.\n",
+    "\n",
+    "For example, if you wanted to wait for a runner to complete, **do not wait in a busy loop**:\n",
+    "```python\n",
+    "while not runner.task.done():\n",
+    "    pass\n",
+    "```\n",
+    "\n",
+    "If you do this then **the runner will never finish**."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "What to do if you don't care about live plotting and just want to run something until it's done?\n",
+    "\n",
+    "The simplest way to accomplish this is to use `adaptive.BlockingRunner`:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "learner = adaptive.Learner1D(f, bounds=(-1, 1))\n",
+    "adaptive.BlockingRunner(learner, goal=lambda l: l.loss() < 0.005)\n",
+    "# This will only get run after the runner has finished\n",
+    "learner.plot()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Reproducibility"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "By default `adaptive` runners evaluate the learned function in parallel across several cores. The runners are also opportunistic, in that as soon as a result is available they will feed it to the learner and request another point to replace the one that just finished.\n",
+    "\n",
+    "Because the order in which computations complete is non-deterministic, this means that the runner behaves in a non-deterministic way. Adaptive makes this choice because in many cases the speedup from parallel execution is worth sacrificing the \"purity\" of exactly reproducible computations.\n",
+    "\n",
+    "Nevertheless it is still possible to run a learner in a deterministic way with adaptive.\n",
+    "\n",
+    "The simplest way is to use `adaptive.runner.simple` to run your learner:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "learner = adaptive.Learner1D(f, bounds=(-1, 1))\n",
+    "\n",
+    "# blocks until completion\n",
+    "adaptive.runner.simple(learner, goal=lambda l: l.loss() < 0.002)\n",
+    "\n",
+    "learner.plot()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Note that unlike `adaptive.Runner`, `adaptive.runner.simple` *blocks* until it is finished.\n",
+    "\n",
+    "If you want determinism but want to keep using the non-blocking `adaptive.Runner`, you can use the `adaptive.runner.SequentialExecutor`:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "from adaptive.runner import SequentialExecutor\n",
+    "\n",
+    "learner = adaptive.Learner1D(f, bounds=(-1, 1))\n",
+    "\n",
+    "# does not block; the points are nevertheless evaluated in a deterministic order\n",
+    "runner = adaptive.Runner(learner, executor=SequentialExecutor(), goal=lambda l: l.loss() < 0.002)\n",
+    "runner.live_info()\n",
+    "runner.live_plot(update_interval=0.1)"
+   ]
+  },
  {
   "cell_type": "markdown",
   "metadata": {},
@@ -699,7 +795,9 @@
   "source": [
    "Sometimes you want to interactively explore a parameter space, and want the function to be evaluated at finer and finer resolution and manually control when the calculation stops.\n",
    "\n",
-    "If no `goal` is provided to a runner then the runner will run until cancelled:"
+    "If no `goal` is provided to a runner then the runner will run until cancelled.\n",
+    "\n",
+    "`runner.live_info()` will provide a button that can be clicked to stop the runner. You can also stop the runner programmatically using `runner.cancel()`."
   ]
  },
  {
@@ -710,7 +808,8 @@
   "source": [
    "learner = adaptive.Learner1D(f, bounds=(-1, 1))\n",
    "runner = adaptive.Runner(learner)\n",
-    "runner.live_plot()"
+    "runner.live_info()\n",
+    "runner.live_plot(update_interval=0.1)"
   ]
  },
  {
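
Stopping the runner defined in the cell above could then look like this, run from another cell; `runner.cancel()` and `runner.task` are both named in this commit:

```python
runner.cancel()    # stop requesting new points
# runner.task.done() will report True once the task has actually wound down
learner.plot()     # inspect whatever has been computed so far
```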
@@ -764,6 +863,7 @@
    "    \n",
    "learner = adaptive.Learner1D(will_raise, (-1, 1))\n",
    "runner = adaptive.Runner(learner)  # without 'goal' the runner will run forever unless cancelled\n",
+    "runner.live_info()\n",
    "runner.live_plot()"
   ]
  },
@@ -920,7 +1020,9 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-    "Runners can also be used from a Python script independently of the notebook:\n",
+    "Runners can also be used from a Python script independently of the notebook.\n",
+    "\n",
+    "The simplest way to accomplish this is to use the `BlockingRunner`:\n",
    "\n",
    "```python\n",
    "import adaptive\n",
@@ -930,17 +1032,14 @@
    "\n",
    "learner = adaptive.Learner1D(f, (-1, 1))\n",
    "\n",
-    "runner = adaptive.Runner(learner, goal=lambda: l: l.loss() < 0.1)\n",
-    "runner.run_sync()  # Block until completion.\n",
+    "adaptive.BlockingRunner(learner, goal=lambda l: l.loss() < 0.1)\n",
    "```\n",
    "\n",
-    "Under the hood the runner uses [`asyncio`](https://docs.python.org/3/library/asyncio.html). You don't need to worry about this most of the time, unless your script uses asyncio itself. If this is the case you should be aware that instantiating a `Runner` schedules a new task on the current event loop, and you can simply\n",
-    "\n",
+    "If you already use `asyncio` in your script and want to integrate `adaptive` into it, then you can use the default `Runner` as you would from a notebook. If you want to wait for the runner to finish, you can simply\n",
    "```python\n",
    "    await runner.task\n",
    "```\n",
-    "\n",
-    "inside a coroutine to await completion of the runner."
+    "from within a coroutine."
   ]
  }
 ],
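
For completeness, a self-contained sketch of the `asyncio` integration described in the last hunk; the function `f` is a placeholder, and constructing the `Runner` inside a coroutine (so that it finds a running event loop) is an assumption about the API of this era:

```python
import asyncio

import adaptive

def f(x):
    return x**2  # placeholder for the function you actually want to learn

async def main():
    learner = adaptive.Learner1D(f, bounds=(-1, 1))
    # instantiating a Runner schedules its task on the current event loop
    runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01)
    await runner.task  # wait for the goal to be reached

asyncio.get_event_loop().run_until_complete(main())
```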