CPU or GPU. To do so, simply enable MPI usage in MALA
    parameters.use_mpi = True

Once MPI is activated, you can start the MPI-aware Python script using
``mpirun``, ``srun`` or whichever MPI wrapper is used on your machine, for
example with

.. code-block:: bash

    #!/bin/bash
    #SBATCH --nodes=NUMBER_OF_NODES
    #SBATCH --ntasks-per-node=NUMBER_OF_TASKS_PER_NODE
    #SBATCH --gres=gpu:NUMBER_OF_TASKS_PER_NODE
    # Add more arguments as needed
    ...

    # Load more modules as needed
    ...

    # Depending on your cluster setup, you may need to use srun here
    # rather than mpirun.
    # Note that
    # NUMBER_OF_RANKS = NUMBER_OF_NODES * NUMBER_OF_TASKS_PER_NODE
    mpirun -np NUMBER_OF_RANKS python3 -u prediction.py

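The rank arithmetic from the comment in the job script above, together with the grid divisibility requirement discussed below, can be sketched in plain Python. The helper names here are purely illustrative and not part of the MALA API:

.. code-block:: python

    def number_of_ranks(nodes, tasks_per_node):
        """Total MPI ranks for a given allocation:
        NUMBER_OF_RANKS = NUMBER_OF_NODES * NUMBER_OF_TASKS_PER_NODE."""
        return nodes * tasks_per_node

    def divides_grid_z(ranks, grid_z):
        """Check that the rank count evenly divides the z-dimension of
        the inference grid, as MALA's inference parallelization requires."""
        return grid_z % ranks == 0

    # For instance, 2 nodes with 4 tasks each give 8 ranks, which evenly
    # divides a 200-point z-dimension, while 3 ranks would not.
    ranks = number_of_ranks(2, 4)
    print(ranks)                     # 8
    print(divides_grid_z(ranks, 200))  # True

A quick check like this before submitting the job can save a failed allocation when the rank count does not match the grid.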
By default, MALA can only operate with a number of processes by which the
z-dimension of the inference grid can be evenly divided, since the Quantum