Intel MPI

IntelMPI is a high-performance Message Passing Interface (MPI) library which, on Hoffman2, is installed as part of the Intel Cluster Studio.

How to load IntelMPI in your environment

IntelMPI is loaded into the user environment upon loading the module file for the Intel Cluster Studio, namely:

module load intel/13.cs
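
To verify that IntelMPI is available after loading the module, you can check that the compiler wrappers are on your path and query the library version (a quick sanity check; mpirun -V prints the Intel MPI version string):

which mpiicc mpirun
mpirun -V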

How to compile with IntelMPI

IntelMPI supports both the Intel compilers and the GNU (GCC) compilers.

    • To use Intel compilers:
      • for C code:

mpiicc [options] file1 [file2 ...]

      • for C++ code:

mpiicpc [options] file1 [file2 ...]

      • for Fortran code:

mpiifort [options] file1 [file2 ...]

    • To use GNU compilers:
      • for C code:

mpicc [options] file1 [file2 ...]

      • for C++ code:

mpicxx [options] file1 [file2 ...]

      • for Fortran 77 code:

mpif77 [options] file1 [file2 ...]

      • for Fortran 90 code:

mpif90 [options] file1 [file2 ...]

    • Note: to compile code with a GCC or Intel compiler version other than the default, specify the compiler path:
      • for C code:

mpicc -cc=<path/to/compiler> [options] file1 [file2 ...]

      • for C++ code:

mpicxx -cxx=<path/to/compiler> [options] file1 [file2 ...]

      • for Fortran 77 code:

mpif77 -fc=<path/to/compiler> [options] file1 [file2 ...]

      • for Fortran 90 code:

mpif90 -fc=<path/to/compiler> [options] file1 [file2 ...]

To see how any of the MPI compile commands listed above work, issue the command name followed by the -show flag. For example, to see how the mpiicc command works, issue:

mpiicc -show
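
As a quick end-to-end check, the sketch below creates a minimal MPI program (hello_mpi.c is a hypothetical file name) and compiles it with the Intel C wrapper; any of the other wrappers listed above can be used in the same way:

cat > hello_mpi.c <<'EOF'
#include <mpi.h>
#include <stdio.h>

/* Minimal MPI program: each process reports its rank */
int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
EOF
mpiicc -o hello_mpi hello_mpi.c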

How to run interactive jobs with IntelMPI

To run any MPI code interactively, the following steps need to be followed:

    1. request an interactive session with the needed number of cores (see: How to get an interactive session through UGE)
    2. when you get a command prompt on the master interactive node, set up the correct scheduler environment for your job (i.e., the hostfile, etc.) by issuing:
      • for sh-based shells:

. /u/local/bin/set_qrsh_env.sh

      • for csh-based shells:

source /u/local/bin/set_qrsh_env.csh

    3. compile your code as needed, for example:

mpiicc -o mycode mycode.c

    4. start your interactive parallel run with, for example:

mpirun -n $NSLOTS [options] mycode
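
Putting the four steps together, a complete interactive session might look like the following sketch (the resource values passed to qrsh are placeholders; adjust them to your job):

qrsh -l h_data=1024M,h_rt=2:00:00 -pe dc_* 8   # step 1: request 8 slots (illustrative values)
. /u/local/bin/set_qrsh_env.sh                 # step 2: import the scheduler environment (sh-based shells)
module load intel/13.cs                        # make the IntelMPI wrappers available
mpiicc -o mycode mycode.c                      # step 3: compile
mpirun -n $NSLOTS ./mycode                     # step 4: run on all granted slots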

How to run batch jobs with IntelMPI

Your parallel executable compiled as shown in How to compile with IntelMPI can be submitted to the queue for batch execution using the provided queue script:

intelmpi.q

Alternatively, users may write their own command file for batch job submission, following the directions given in Commonly-used UGE commands. The command file should load the intel module file before the call to the parallel executable (which mpirun starts via mpiexec.hydra). A simple example is given here:

#!/bin/bash
#$ -cwd 
#$ -o path 
#$ -M login_id@mail 
#$ -m bea 
#$ -l h_data=1024M,h_rt=24:00:00
#$ -pe dc_* 32

. /u/local/Modules/default/init/modules.sh
module load intel/13.cs

mpirun -n $NSLOTS [options] /path/to/mycode
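
Assuming the command file above is saved as, say, run_mycode.sh (a hypothetical name), it is submitted and monitored with the usual UGE commands:

qsub run_mycode.sh
qstat -u $USER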

How to request a specific number of processes per node with IntelMPI

To run IntelMPI-compiled code under scheduler control (either as an interactive or a batch job), it is generally not necessary to pass mpirun the file containing the list of hosts on which the parallel workers will run, as mpirun retrieves this information from the parallel environment set up by the scheduler. In some cases, however, you may need to request more slots than the number of parallel workers actually used (for example, because each parallel task needs more memory than is available to a single slot on a node). In these cases the following submission mechanism should be employed:

    • for interactive use:
. /u/local/bin/set_qrsh_env.sh    (for sh-based shells; use source /u/local/bin/set_qrsh_env.csh for csh-based shells)
cat $PE_HOSTFILE | awk '{print $1}' | uniq > $TMPDIR/hostfile.$JOB_ID
mpirun -f $TMPDIR/hostfile.$JOB_ID -ppn n executable-name
    • for batch use, include the following lines in the scheduler submission file:
cat $PE_HOSTFILE | awk '{print $1}' | uniq > $TMPDIR/hostfile.$JOB_ID
mpirun -f $TMPDIR/hostfile.$JOB_ID -ppn n executable-name
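
As a concrete sketch (illustrative only; actual host placement is up to the scheduler): requesting more slots than processes and launching one process per node with -ppn 1 gives each process access to the memory of all the slots granted on its node:

cat $PE_HOSTFILE | awk '{print $1}' | uniq > $TMPDIR/hostfile.$JOB_ID
# one process per node; -n is set to the number of distinct hosts granted
mpirun -f $TMPDIR/hostfile.$JOB_ID -ppn 1 -n $(wc -l < $TMPDIR/hostfile.$JOB_ID) /path/to/mycode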

Using Ethernet or InfiniBand networking with IntelMPI

You may use either Ethernet or InfiniBand networking to run your MPI jobs. Normally you would use InfiniBand for high performance, by passing the following option to the mpirun commands shown in the examples above:

-env I_MPI_FABRICS shm:ofa

If you want to use Ethernet to run MPI (for example, when InfiniBand is not working, or for testing purposes), use this option instead:

-env I_MPI_FABRICS tcp:tcp
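
For example, adding the fabric selection to the mpirun command lines used above (a sketch; /path/to/mycode stands for your executable):

mpirun -n $NSLOTS -env I_MPI_FABRICS shm:ofa /path/to/mycode   # InfiniBand (default high-performance choice)
mpirun -n $NSLOTS -env I_MPI_FABRICS tcp:tcp /path/to/mycode   # Ethernet fallback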