
reword loky docs part

Bas Nijholt authored on 09/04/2020 19:01:44
@@ -120,9 +120,9 @@ How you call MPI might depend on your specific queuing system, with SLURM for ex
 `loky.get_reusable_executor`
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-This executor is basically a powered up version of `~concurrent.futures.ProcessPoolExecutor`, check its `documentation <https://loky.readthedocs.io/>`_.
-Among other things, it allows one to reuse the executor and uses ``cloudpickle`` on the background.
-This means you can even run closures, lambdas, or other functions that are not picklable with `pickle`.
+This executor is basically a powered-up version of `~concurrent.futures.ProcessPoolExecutor`, check its `documentation <https://loky.readthedocs.io/>`_.
+Among other things, it allows to *reuse* the executor and uses ``cloudpickle`` for serialization.
+This means you can even learn closures, lambdas, or other functions that are not picklable with `pickle`.
 
 .. code:: python
 

add loky docs

Bas Nijholt authored on 09/04/2020 18:30:36
@@ -116,3 +116,21 @@ How you call MPI might depend on your specific queuing system, with SLURM for ex
     #SBATCH --ntasks 100
 
     srun -n $SLURM_NTASKS --mpi=pmi2 ~/miniconda3/envs/py37_min/bin/python -m mpi4py.futures run_learner.py
+
+`loky.get_reusable_executor`
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+This executor is basically a powered up version of `~concurrent.futures.ProcessPoolExecutor`, check its `documentation <https://loky.readthedocs.io/>`_.
+Among other things, it allows one to reuse the executor and uses ``cloudpickle`` on the background.
+This means you can even run closures, lambdas, or other functions that are not picklable with `pickle`.
+
+.. code:: python
+
+    from loky import get_reusable_executor
+    ex = get_reusable_executor()
+
+    f = lambda x: x
+    learner = adaptive.Learner1D(f, bounds=(-1, 1))
+
+    runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01, executor=ex)
+    runner.live_info()
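
The ``cloudpickle`` point these docs make is easy to demonstrate. Below is a minimal sketch, not part of the committed docs: it assumes ``cloudpickle`` is importable (it is installed alongside ``loky``), and the names ``f``, ``g``, and ``payload`` are illustrative. Standard `pickle` serializes functions by reference to an importable name, so a lambda fails; ``cloudpickle`` serializes the code object itself, by value.

.. code:: python

    import pickle

    import cloudpickle  # installed alongside loky

    f = lambda x: 2 * x  # a lambda has no importable name

    try:
        pickle.dumps(f)  # pickle stores functions by reference, so this fails
    except (pickle.PicklingError, AttributeError) as err:
        print("pickle failed:", err)

    payload = cloudpickle.dumps(f)  # cloudpickle serializes the code by value
    g = pickle.loads(payload)       # a plain unpickler can restore the result
    assert g(21) == 42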

split comment over two lines

Bas Nijholt authored on 07/04/2020 09:54:08
@@ -65,7 +65,9 @@ For example, you create the following file called ``run_learner.py``:
 
     from mpi4py.futures import MPIPoolExecutor
 
-    if __name__ == "__main__":  # ← use this, see warning @ https://mpi4py.readthedocs.io/en/stable/mpi4py.futures.html#mpipoolexecutor
+    # use the idiom below, see the warning at
+    # https://mpi4py.readthedocs.io/en/stable/mpi4py.futures.html#mpipoolexecutor
+    if __name__ == "__main__":
 
         learner = adaptive.Learner1D(f, bounds=(-1, 1))
 

use full url to mpi4py docs

Bas Nijholt authored on 07/04/2020 09:39:32
@@ -65,7 +65,7 @@ For example, you create the following file called ``run_learner.py``:
 
     from mpi4py.futures import MPIPoolExecutor
 
-    if __name__ == "__main__":  # ← use this, see warning @ https://bit.ly/2HAk0GG
+    if __name__ == "__main__":  # ← use this, see warning @ https://mpi4py.readthedocs.io/en/stable/mpi4py.futures.html#mpipoolexecutor
 
         learner = adaptive.Learner1D(f, bounds=(-1, 1))
 

use __name__ == "__main__" for the MPIPoolExecutor

Bas Nijholt authored on 07/04/2020 09:34:07
@@ -65,27 +65,29 @@ For example, you create the following file called ``run_learner.py``:
 
     from mpi4py.futures import MPIPoolExecutor
 
-    learner = adaptive.Learner1D(f, bounds=(-1, 1))
+    if __name__ == "__main__":  # ← use this, see warning @ https://bit.ly/2HAk0GG
+
+        learner = adaptive.Learner1D(f, bounds=(-1, 1))
 
-    # load the data
-    learner.load(fname)
+        # load the data
+        learner.load(fname)
 
-    # run until `goal` is reached with an `MPIPoolExecutor`
-    runner = adaptive.Runner(
-        learner,
-        executor=MPIPoolExecutor(),
-        shutdown_executor=True,
-        goal=lambda l: l.loss() < 0.01,
-    )
+        # run until `goal` is reached with an `MPIPoolExecutor`
+        runner = adaptive.Runner(
+            learner,
+            executor=MPIPoolExecutor(),
+            shutdown_executor=True,
+            goal=lambda l: l.loss() < 0.01,
+        )
 
-    # periodically save the data (in case the job dies)
-    runner.start_periodic_saving(dict(fname=fname), interval=600)
+        # periodically save the data (in case the job dies)
+        runner.start_periodic_saving(dict(fname=fname), interval=600)
 
-    # block until runner goal reached
-    runner.ioloop.run_until_complete(runner.task)
+        # block until runner goal reached
+        runner.ioloop.run_until_complete(runner.task)
 
-    # save one final time before exiting
-    learner.save(fname)
+        # save one final time before exiting
+        learner.save(fname)
 
 
 On your laptop/desktop you can run this script like:
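
The reason for this idiom, per the mpi4py warning linked in the diff: worker processes spawned by `MPIPoolExecutor` import the main script again, so any unguarded top-level code would re-run on every worker. A minimal self-contained sketch of the idiom; the function ``f`` and ``max_workers=2`` here are illustrative, not from the tutorial itself:

.. code:: python

    from mpi4py.futures import MPIPoolExecutor

    def f(x):
        return x ** 2  # top-level definitions are fine: workers re-import them

    if __name__ == "__main__":
        # only the master process enters this block; without the guard every
        # spawned worker would execute it again on import
        with MPIPoolExecutor(max_workers=2) as executor:
            print(list(executor.map(f, range(4))))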

fix syntax highlighting and syntax error

Bas Nijholt authored on 13/12/2019 14:41:11
@@ -90,7 +90,7 @@ For example, you create the following file called ``run_learner.py``:
 
 On your laptop/desktop you can run this script like:
 
-.. code:: python
+.. code:: bash
 
     export MPI4PY_MAX_WORKERS=15
     mpiexec -n 1 python run_learner.py
@@ -99,13 +99,13 @@ Or you can pass ``max_workers=15`` programmatically when creating the `MPIPoolEx
 
 Inside the job script using a job queuing system use:
 
-.. code:: python
+.. code:: bash
 
     mpiexec -n 16 python -m mpi4py.futures run_learner.py
 
 How you call MPI might depend on your specific queuing system, with SLURM for example it's:
 
-.. code:: python
+.. code:: bash
 
     #!/bin/bash
     #SBATCH --job-name adaptive-example

be more explicit

Bas Nijholt authored on 26/08/2019 12:01:47 • GitHub committed on 26/08/2019 12:01:47
@@ -95,7 +95,7 @@ On your laptop/desktop you can run this script like:
     export MPI4PY_MAX_WORKERS=15
     mpiexec -n 1 python run_learner.py
 
-Or you can pass ``max_workers=15`` programmatically when creating the executor instance.
+Or you can pass ``max_workers=15`` programmatically when creating the `MPIPoolExecutor` instance.
 
 Inside the job script using a job queuing system use:
 

do one final save before exiting

The periodic save will probably not correspond with the exit time.

Bas Nijholt authored on 26/08/2019 10:22:26 • GitHub committed on 26/08/2019 10:22:26
@@ -84,6 +84,9 @@ For example, you create the following file called ``run_learner.py``:
     # block until runner goal reached
     runner.ioloop.run_until_complete(runner.task)
 
+    # save one final time before exiting
+    learner.save(fname)
+
 
 On your laptop/desktop you can run this script like:
 

remove MPI4PY_MAX_WORKERS where it's not used

Bas Nijholt authored on 26/08/2019 10:06:13
@@ -98,7 +98,6 @@ Inside the job script using a job queuing system use:
 
 .. code:: python
 
-    export MPI4PY_MAX_WORKERS=15
     mpiexec -n 16 python -m mpi4py.futures run_learner.py
 
 How you call MPI might depend on your specific queuing system, with SLURM for example it's:
@@ -109,5 +108,4 @@ How you call MPI might depend on your specific queuing system, with SLURM for ex
     #SBATCH --job-name adaptive-example
     #SBATCH --ntasks 100
 
-    export MPI4PY_MAX_WORKERS=$SLURM_NTASKS
     srun -n $SLURM_NTASKS --mpi=pmi2 ~/miniconda3/envs/py37_min/bin/python -m mpi4py.futures run_learner.py
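
The rationale behind this removal: ``MPI4PY_MAX_WORKERS`` only caps workers that are spawned dynamically, as in the ``mpiexec -n 1 python run_learner.py`` invocation. When all ranks are launched up front with ``mpiexec -n 16 python -m mpi4py.futures`` or ``srun -n $SLURM_NTASKS``, the pool size is already fixed by the number of launched ranks, so the variable has no effect there.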

Update tutorial.parallelism.rst

Andrey E. Antipov authored on 24/06/2019 22:42:33 • Bas Nijholt committed on 24/06/2019 23:02:59
@@ -63,7 +63,7 @@ For example, you create the following file called ``run_learner.py``:
 
 .. code:: python
 
-    import mpi4py.futures
+    from mpi4py.futures import MPIPoolExecutor
 
     learner = adaptive.Learner1D(f, bounds=(-1, 1))
 

fix whitespace

Bas Nijholt authored on 07/05/2019 19:22:48
@@ -111,4 +111,3 @@ How you call MPI might depend on your specific queuing system, with SLURM for ex
 
     export MPI4PY_MAX_WORKERS=$SLURM_NTASKS
     srun -n $SLURM_NTASKS --mpi=pmi2 ~/miniconda3/envs/py37_min/bin/python -m mpi4py.futures run_learner.py
-

add mpi4py to the docs

Bas Nijholt authored on 01/05/2019 00:34:20
@@ -53,3 +53,62 @@ On Windows by default `adaptive.Runner` uses a `distributed.Client`.
     runner = adaptive.Runner(learner, executor=client, goal=lambda l: l.loss() < 0.01)
     runner.live_info()
     runner.live_plot(update_interval=0.1)
+
+`mpi4py.futures.MPIPoolExecutor`
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+This makes sense if you want to run a ``Learner`` on a cluster non-interactively using a job script.
+
+For example, you create the following file called ``run_learner.py``:
+
+.. code:: python
+
+    import mpi4py.futures
+
+    learner = adaptive.Learner1D(f, bounds=(-1, 1))
+
+    # load the data
+    learner.load(fname)
+
+    # run until `goal` is reached with an `MPIPoolExecutor`
+    runner = adaptive.Runner(
+        learner,
+        executor=MPIPoolExecutor(),
+        shutdown_executor=True,
+        goal=lambda l: l.loss() < 0.01,
+    )
+
+    # periodically save the data (in case the job dies)
+    runner.start_periodic_saving(dict(fname=fname), interval=600)
+
+    # block until runner goal reached
+    runner.ioloop.run_until_complete(runner.task)
+
+
+On your laptop/desktop you can run this script like:
+
+.. code:: python
+
+    export MPI4PY_MAX_WORKERS=15
+    mpiexec -n 1 python run_learner.py
+
+Or you can pass ``max_workers=15`` programmatically when creating the executor instance.
+
+Inside the job script using a job queuing system use:
+
+.. code:: python
+
+    export MPI4PY_MAX_WORKERS=15
+    mpiexec -n 16 python -m mpi4py.futures run_learner.py
+
+How you call MPI might depend on your specific queuing system, with SLURM for example it's:
+
+.. code:: python
+
+    #!/bin/bash
+    #SBATCH --job-name adaptive-example
+    #SBATCH --ntasks 100
+
+    export MPI4PY_MAX_WORKERS=$SLURM_NTASKS
+    srun -n $SLURM_NTASKS --mpi=pmi2 ~/miniconda3/envs/py37_min/bin/python -m mpi4py.futures run_learner.py
+

add tutorials

Bas Nijholt authored on 17/10/2018 13:30:10
new file mode 100644
@@ -0,0 +1,55 @@
+Parallelism - using multiple cores
+----------------------------------
+
+Often you will want to evaluate the function on some remote computing
+resources. ``adaptive`` works out of the box with any framework that
+implements a `PEP 3148 <https://www.python.org/dev/peps/pep-3148/>`__
+compliant executor that returns `concurrent.futures.Future` objects.
+
+`concurrent.futures`
+~~~~~~~~~~~~~~~~~~~~
+
+On Unix-like systems by default `adaptive.Runner` creates a
+`~concurrent.futures.ProcessPoolExecutor`, but you can also pass
+one explicitly e.g. to limit the number of workers:
+
+.. code:: python
+
+    from concurrent.futures import ProcessPoolExecutor
+
+    executor = ProcessPoolExecutor(max_workers=4)
+
+    learner = adaptive.Learner1D(f, bounds=(-1, 1))
+    runner = adaptive.Runner(learner, executor=executor, goal=lambda l: l.loss() < 0.05)
+    runner.live_info()
+    runner.live_plot(update_interval=0.1)
+
+`ipyparallel.Client`
+~~~~~~~~~~~~~~~~~~~~
+
+.. code:: python
+
+    import ipyparallel
+
+    client = ipyparallel.Client()  # You will need to start an `ipcluster` to make this work
+
+    learner = adaptive.Learner1D(f, bounds=(-1, 1))
+    runner = adaptive.Runner(learner, executor=client, goal=lambda l: l.loss() < 0.01)
+    runner.live_info()
+    runner.live_plot()
+
+`distributed.Client`
+~~~~~~~~~~~~~~~~~~~~
+
+On Windows by default `adaptive.Runner` uses a `distributed.Client`.
+
+.. code:: python
+
+    import distributed
+
+    client = distributed.Client()
+
+    learner = adaptive.Learner1D(f, bounds=(-1, 1))
+    runner = adaptive.Runner(learner, executor=client, goal=lambda l: l.loss() < 0.01)
+    runner.live_info()
+    runner.live_plot(update_interval=0.1)
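
Since this first version of the tutorial already promises compatibility with any PEP 3148 executor, here is a minimal sketch of what that contract looks like. ``InlineExecutor`` is a hypothetical name, not part of any library; it runs each call in-process where a real executor would dispatch to workers, so it is only useful for illustrating the interface:

.. code:: python

    from concurrent.futures import Executor, Future

    class InlineExecutor(Executor):
        """Toy PEP 3148 executor: the one method required of a subclass is
        `submit`, which must return a `concurrent.futures.Future`; `map`,
        `shutdown`, and context-manager support come from the base class."""

        def submit(self, fn, *args, **kwargs):
            future = Future()
            try:
                future.set_result(fn(*args, **kwargs))  # run the call inline
            except Exception as exc:
                future.set_exception(exc)  # report failures via the Future
            return future

    # usage sketch, assuming a learner as in the examples above:
    # runner = adaptive.Runner(learner, executor=InlineExecutor(), ...)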