
Environmental Modules

Environmental modules is a utility that allows users to dynamically modify their shell environment (e.g., $PATH, $LD_LIBRARY_PATH, etc.) to support the many compilers and applications installed on the Hoffman2 Cluster.

Environmental modules: Basic commands

Environmental modules consists of two parts: a collection of files, the modulefiles, containing directives to load certain environment variables (and, in certain cases, unload conflicting ones); and an interpreter (the module command) that acts on the directives contained in the modulefiles.

Basic commands are:

module help		  # prints a basic list of commands and arguments accepted by the module command
module list               # prints a list of the currently loaded modulefiles
module available          # lists all the modulefiles available under the current set of already loaded modulefiles
module show modulefile    # shows how the modulefile alters the environment
module whatis modulefile  # prints basic information about the software loaded by the modulefile
module help modulefile    # prints a basic help for the modulefile
module load modulefile    # loads the modulefile
module unload modulefile  # unloads the modulefile

where modulefile is the name of a modulefile for a certain application.
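
For example, a short hypothetical session might look like the following (the gcc modulefile is used purely as an illustration; run module available to see what is actually installed):

module load gcc      # load the default gcc modulefile
module list          # verify that it is now loaded
module show gcc      # inspect how it alters the environment
module unload gcc    # remove it from the environment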

How to use the module command in interactive sessions

Start an interactive session on a compute node with qrsh. Then at the compute node shell prompt, enter:

module load modulefile

To run the application, enter at the command line:

executable [options-and/or-arguments]

including any command-line options or arguments as appropriate.
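
For example (the script name here is a hypothetical placeholder):

qrsh                            # start an interactive session on a compute node
module load python              # load the modulefile for the application
python my_script.py --verbose   # run the executable with its options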

How to use the module command in scripts for batch execution

For most of the supported software on the cluster, queue scripts are available that generate and submit batch jobs. These scripts internally use modulefiles to load the correct environment for the software at hand.

How to use the module command in scripts for batch execution of serial jobs

If you need to generate your own job scheduler command file for your jobs, follow the guidelines given in Running a Batch Job and include the following lines:

  • for bash scripts:
    . /u/local/Modules/default/init/modules.sh
    module load modulefile
    executable [options-and/or-arguments]
    
  • for csh scripts:
    source /u/local/Modules/default/init/modules.csh
    module load modulefile
    executable [options-and/or-arguments]
    

where modulefile is either the module for the specific application, a modulefile that you have created (see: Writing your own modulefiles) or the modulefile for the compiler with which your application was built.
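
Putting it all together, a minimal bash command file might look like the sketch below. The scheduler directives shown are typical SGE-style options; the resource values, the python modulefile, and the script name are only assumptions, to be adapted following Running a Batch Job:

#!/bin/bash
#$ -cwd                          # run the job from the submission directory
#$ -o joblog.$JOB_ID             # write the job log to joblog.<job id>
#$ -j y                          # merge stderr into the job log
#$ -l h_rt=1:00:00,h_data=1G     # requested runtime and memory (adjust as needed)

# set up the module command in this non-interactive bash shell
. /u/local/Modules/default/init/modules.sh
module load python               # hypothetical modulefile for the application

python my_script.py input.dat    # hypothetical executable and argument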

How to use the module command in scripts for batch execution of parallel jobs

Parallel jobs are divided into two main categories: shared memory (OpenMP and multi-threaded jobs) and distributed memory (MPI jobs). A combination of the two modes is also possible.

How to use the module command in scripts for batch execution of parallel jobs using shared memory

Jobs that use shared-memory parallelism do not need any specific module to be loaded. The queue script openmp.q can be used to submit this kind of job.
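
If you write your own command file instead, the relevant fragment might look like the following sketch (the parallel environment name, slot count, and executable are assumptions; see Running a Batch Job for the exact resources to request):

#$ -pe shared 4                    # request 4 slots on a single node (PE name may differ)

. /u/local/Modules/default/init/modules.sh
export OMP_NUM_THREADS=$NSLOTS     # match the thread count to the allocated slots
./my_openmp_app                    # hypothetical OpenMP executable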

How to use the module command in scripts for batch execution of parallel jobs using the IntelMPI library

If you need to generate your own job scheduler command file for your parallel job, follow the guidelines given in Running a Batch Job. If your application is parallel and was compiled on the cluster with the default intel or gcc compiler and the IntelMPI library, you will need to use:

  • for bash scripts:
    . /u/local/Modules/default/init/modules.sh
    module load intel
    $MPI_BIN/mpiexec.hydra -n $NSLOTS -env I_MPI_FABRICS ofa:ofa executable [options-and/or-arguments]
    
  • for csh scripts:
    source /u/local/Modules/default/init/modules.csh
    module load intel
    $MPI_BIN/mpiexec.hydra -n $NSLOTS -env I_MPI_FABRICS ofa:ofa executable [options-and/or-arguments]
    

Note that in this case you can generate a scheduler-ready submission script with the queue script intelmpi.q.
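
For reference, a complete bash command file along these lines might look like the following sketch; the parallel environment name, the slot count, the resource values, and the executable name are assumptions to adapt to your own job:

#!/bin/bash
#$ -cwd
#$ -o joblog.$JOB_ID
#$ -j y
#$ -pe dc* 8                     # request 8 slots across nodes (PE name may differ)
#$ -l h_rt=2:00:00,h_data=1G     # adjust runtime and per-slot memory as needed

. /u/local/Modules/default/init/modules.sh
module load intel
$MPI_BIN/mpiexec.hydra -n $NSLOTS -env I_MPI_FABRICS ofa:ofa ./my_mpi_app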

How to use the module command in scripts for batch execution of parallel jobs using the IntelMPI library and a compiler different from the default

If you need to generate your own job scheduler command file for your parallel job, follow the guidelines given in Running a Batch Job. If your application is parallel and was compiled on the cluster with a compiler other than the default intel or gcc, together with the IntelMPI library, you will need to use:

  • for bash scripts:
    . /u/local/Modules/default/init/modules.sh
    module load compiler-modulefile
    module load intelmpi
    $MPI_BIN/mpiexec.hydra -n $NSLOTS -env I_MPI_FABRICS ofa:ofa executable [options-and/or-arguments]
    
  • for csh scripts:
    source /u/local/Modules/default/init/modules.csh
    module load compiler-modulefile
    module load intelmpi
    $MPI_BIN/mpiexec.hydra -n $NSLOTS -env I_MPI_FABRICS ofa:ofa executable [options-and/or-arguments]
    

where compiler-modulefile is the modulefile for the needed compiler.
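
For example, if the application was built with the gcc compiler (the version shown is illustrative), the module lines would become:

module load gcc/4.3.5
module load intelmpi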

How to use the module command in scripts for batch execution of parallel jobs using the OpenMPI library

If you need to generate your own job scheduler command file for your parallel job, follow the guidelines given in Running a Batch Job. If your application is parallel and was compiled on the cluster with a given compiler and an OpenMPI library built with the same compiler, you will need to use:

  • for bash scripts:
    . /u/local/Modules/default/init/modules.sh
    module load compiler-modulefile
    module load openmpi-modulefile
    $MPI_BIN/mpiexec --prefix $MPI_DIR -n $NSLOTS executable [options-and/or-arguments]
    
  • for csh scripts:
    source /u/local/Modules/default/init/modules.csh
    module load compiler-modulefile
    module load openmpi-modulefile
    $MPI_BIN/mpiexec --prefix $MPI_DIR -n $NSLOTS executable [options-and/or-arguments]
    

where compiler-modulefile and openmpi-modulefile are, respectively, the modulefile for the needed compiler and the modulefile for the needed OpenMPI libraries. Note that you could also use the queue script openmpi.q.
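
As a concrete instance (the compiler version is illustrative; after loading the compiler, run module available to see which OpenMPI modulefiles were built with it):

module load gcc/4.3.5
module load openmpi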

Default user environment upon login into the cluster

The default environment on the Hoffman2 Cluster consists of the production version of the Intel compiler and the production version of the OpenMPI libraries built with that compiler. These are set by the modulefiles intel and openmpi, respectively.

To see what modulefiles are available under the hierarchy set up by these two modulefiles, enter at the command prompt:

module available

or for short:

module av

Changing your environment – Example 1: Loading a different compiler

To load a compiler different from the Hoffman2 default, type at the command line, for example:

module load intel/12.1

or:

module load gcc/4.3.5

Notice that to load the default version of a module, for example, gcc, it is sufficient to issue the following command:

module load gcc

When you load a modulefile for a new compiler, the previously loaded compiler modulefile is unloaded together with any of its dependent modulefiles (for example, the openmpi modulefile). This means that upon loading a new compiler (or unloading the modulefile for the current one), any reference to the previously loaded compiler and its dependencies is completely removed from your environment and, when a new compiler is loaded, replaced by the new environment.

Please notice that the command:

module av

may produce different results depending on which compiler you have loaded.
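
The following hypothetical session illustrates the swap (the modulefile names reflect the default environment described above):

module list        # shows, e.g., intel and openmpi loaded
module load gcc    # unloads intel and its dependent openmpi modulefile
module list        # now shows gcc instead
module av          # the listing now reflects the gcc hierarchy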

Changing your environment – Example 2: Loading a python modulefile

Many third-party python packages that are not part of the system python installation are available on the Hoffman2 Cluster. Loading the python modulefile adds the location of these extra packages to the default $PYTHONPATH (or loads into the environment a non-system installation of python).

Currently available python modulefiles on the cluster are:

[h2user@h2loginnode ~]$ module av python

------------------------- /u/local/Modules/modulefiles -------------------------
python/2.6(default) python/2.7          python/3.1

For example, to load the default python module, issue:

module load python

Your $PYTHONPATH will now contain a reference to the location where the extra python packages are installed:

[h2user@h2loginnode ~]$ echo $PYTHONPATH
/u/local/apps/python/2.6/lib64/python2.6/site-packages:
/u/local/apps/python/2.6/lib/python2.6/site-packages:
/u/local/python/2.6/lib64/python2.6/site-packages:
/u/local/python/2.6/lib/python2.6/site-packages
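
To verify that the extra packages are now visible, you can try importing one from the command line (numpy is used here purely as a hypothetical example of an installed package; the syntax assumes the python 2 default):

python -c "import numpy; print numpy.__version__"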

Writing your own modulefiles

In some cases you may have applications and/or libraries compiled in your own $HOME (or in some common location that your group has) for which you may want to create your own modulefiles.

In these cases you will want to use the following environmental modules command:

module use $HOME/modulefiles

where:

$HOME/modulefiles

is the directory where your own modulefiles reside. This command adds that directory to your $MODULEPATH.

The command:

module av

will now show your own modulefiles along with the modulefiles that we provide.

To permanently include your own modulefiles upon login into the cluster, add the line:

module use $HOME/modulefiles

to your own initialization files (i.e., .bashrc or .cshrc).
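
A typical setup might look like this (the application name and version are placeholders):

mkdir -p $HOME/modulefiles/myapp    # one subdirectory per application
# create a modulefile named after the version, e.g. $HOME/modulefiles/myapp/1.0
module use $HOME/modulefiles
module load myapp/1.0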

A sample modulefile is included here for the application MYAPP version X.Y (installed in /path/to/my/software/dir/MYAPP/X.Y, which could, for example, be: $HOME/software/MYAPP/X.Y):

#%Module
# MYAPP module file
set name "MYAPP"
# Version number
set ver "X.Y"

module-whatis "Name        : $name"
module-whatis "Version     : $ver"
module-whatis "Description : Add desc of MYAPP here"

# root of this version's installation tree
set base_dir  /path/to/my/software/dir/$name/$ver

prepend-path  PATH              $base_dir/bin
prepend-path  LD_LIBRARY_PATH   $base_dir/lib
prepend-path  MANPATH           $base_dir/man
prepend-path  INFOPATH          $base_dir/info

setenv        MYAPP_DIR           $base_dir
setenv        MYAPP_BIN           $base_dir/bin
setenv        MYAPP_INC           $base_dir/include
setenv        MYAPP_LIB           $base_dir/lib

N.B.: When writing your own modulefiles you should include checks so that, when a new modulefile is loaded, conflicting modulefiles are either unloaded or a warning is issued. Environmental modules does not by itself know which modulefiles are mutually conflicting, so conflicting modulefiles are not automatically unloaded; you will need to add this check to your modulefiles. For more details see man modulefile. Environmental modules understands Tcl, so your modulefiles can be "fancied up" with Tcl instructions.
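
For instance, a modulefile can declare its incompatibilities and prerequisites with the standard conflict and prereq commands (the names below are illustrative):

#%Module
conflict myapp    # refuse to load if another myapp modulefile is already loaded
prereq   gcc      # require the gcc modulefile to be loaded first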
