Commit 7346e97

Merge pull request mala-project#638 from mala-project/update_inference_parallelism
Clarified MPI usage during inference
2 parents 88ff606 + 9341fe0 commit 7346e97

1 file changed: +20 −1

docs/source/advanced_usage/predictions.rst

Lines changed: 20 additions & 1 deletion
@@ -105,7 +105,26 @@ CPU or GPU. To do so, simply enable MPI usage in MALA
     parameters.use_mpi = True
 
 Once MPI is activated, you can start the MPI-aware Python script using
-``mpirun``, ``srun`` or whichever MPI wrapper is used on your machine.
+``mpirun``, ``srun``, or whichever MPI wrapper is used on your machine, for
+example with
+
+.. code-block:: bash
+
+    #!/bin/bash
+    #SBATCH --nodes=NUMBER_OF_NODES
+    #SBATCH --ntasks-per-node=NUMBER_OF_TASKS_PER_NODE
+    #SBATCH --gres=gpu:NUMBER_OF_TASKS_PER_NODE
+    # Add more arguments as needed
+    ...
+
+    # Load more modules as needed
+    ...
+
+    # Depending on your cluster setup, you may need to use srun here
+    # rather than mpirun.
+    # Note that
+    # NUMBER_OF_RANKS = NUMBER_OF_NODES * NUMBER_OF_TASKS_PER_NODE
+    mpirun -np NUMBER_OF_RANKS python3 -u prediction.py
 
 By default, MALA can only operate with a number of processes by which the
 z-dimension of the inference grid can be evenly divided, since the Quantum
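The divisibility constraint mentioned above can be sanity-checked before a batch job is submitted. Below is a minimal sketch; the helper function and the example grid size are illustrative assumptions, not part of the MALA API:

```python
# Check candidate MPI rank counts against the inference grid before
# submitting a job: MALA requires that the number of ranks evenly
# divides the z-dimension of the inference grid.
# NOTE: grid_z=90 and this helper are hypothetical examples,
# not taken from MALA itself.

def valid_rank_counts(grid_z, max_ranks):
    """Return all rank counts up to max_ranks that evenly divide grid_z."""
    return [n for n in range(1, max_ranks + 1) if grid_z % n == 0]

# Example: a grid with 90 points along z, launching at most 16 ranks.
print(valid_rank_counts(90, 16))  # -> [1, 2, 3, 5, 6, 9, 10, 15]
```

Any value outside this list would leave some z-slices unassigned or unevenly distributed, so picking NUMBER_OF_RANKS from such a check avoids a failed run after queueing.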
