Glossary

Note

The use of italics on any of the following glossary entries signifies that the term is specific to the Hoffman2 Cluster (e.g., campus job versus batch job).

Array Job
array job
array jobs

An array of identical tasks, differentiated only by an index number and treated by the scheduler as a series of jobs.
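As a sketch, an array job under UGE (the scheduler described later in this glossary) can be submitted with qsub -t; the scheduler sets the SGE_TASK_ID variable to the index number of each task. The program and input names below are hypothetical:

```shell
#!/bin/bash
#$ -t 1-10                      # run tasks with index numbers 1 through 10
# SGE_TASK_ID is set by the scheduler for each task of the array;
# default to 1 so the script can also be run outside a job for testing.
TASK=${SGE_TASK_ID:-1}
echo "processing input chunk ${TASK}"
# ./my_program input.${TASK}    # hypothetical per-task command
```

Each of the ten tasks runs the same script; only the value of SGE_TASK_ID differs.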

batch job
batch jobs
unix batch job

In a batch job the scheduler dispatches a shell script or a binary file for execution on one (e.g., serial jobs or shared memory jobs) or more (distributed memory jobs) compute nodes, using the computational resources (e.g., runtime, memory, number of cores, etc.) that have been requested and that the scheduler reserves for the use of said script/binary.

batch
batch execution

A Unix job submitted to remote resources and executed as resources become available (for example, when other jobs terminate).

campus job
campus jobs

On the Hoffman2 Cluster, campus jobs are jobs submitted by users in groups that have not contributed compute nodes to the cluster. These jobs are limited to running on IDRE-owned compute nodes with a runtime of up to 24 hours.

campus user
campus users

On the Hoffman2 Cluster, campus users are users of the cluster who belong to groups that have not contributed nodes to the cluster. Their jobs are limited to running on IDRE-owned nodes with a runtime of up to 24 hours.

command line interpreter
command line
unix command line

The program that interprets user input. It is ready to accept input when the shell prompt is displayed.

complex
complexes

A complex is the definition of a requestable resource within Univa Grid Engine (UGE), the current job scheduler on the Hoffman2 Cluster. From man complex: "The definition of complex attributes provides all pertinent information concerning the resource attributes a user may request for a Univa Grid Engine job via the qsub -l option and for the interpretation of these parameters within the Univa Grid Engine system."
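For illustration, resources defined as complexes are requested at submission time with qsub -l. The attribute names below (h_rt for runtime, h_data for memory per slot) are standard UGE complexes, and the script name is hypothetical:

```shell
# Assemble a resource request from complex attributes
# (h_rt = runtime, h_data = memory per slot; both standard UGE complexes).
RESOURCES="h_rt=2:00:00,h_data=4G"
# The full list of complexes defined on a cluster can be shown with: qconf -sc
# On the cluster one would submit requesting these complexes, e.g.:
echo "qsub -l ${RESOURCES} my_job.sh"   # my_job.sh is a hypothetical script
```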

compute cluster

A collection of interconnected, networked hosts whose aggregate computing power can be used to address computational tasks that require distributed or capacity computing.

compute node
compute nodes

A node in the cluster used to perform computations. Compute nodes equipped with GPU cards are referred to as GPU nodes.

distributed memory job
distributed memory jobs
distributed memory

Distributed memory jobs are programs that execute part or all of their instructions concurrently (i.e., in parallel) employing different workers, each of which operates on a different set of data (distributed memory). Processes exchange information by message passing. Since each process operates on its own private memory space, distributed memory jobs can be distributed across many nodes on the cluster. On the Hoffman2 Cluster, distributed memory jobs should be scheduled by requesting the correct number of slots (or computing cores) and selecting a distributed parallel environment (see: Requesting multiple cores).
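A minimal submission-script sketch for a distributed memory (MPI) job, assuming a distributed parallel environment named dc*, illustrative resource values, and a hypothetical executable my_mpi_program:

```shell
#!/bin/bash
#$ -pe dc* 8                   # request 8 slots in a distributed parallel environment
#$ -l h_rt=4:00:00,h_data=2G   # runtime and memory per slot (assumed values)
# NSLOTS is set by the scheduler to the number of slots granted;
# default to 1 when running outside a job for testing.
echo "launching ${NSLOTS:-1} MPI process(es)"
# mpirun -n ${NSLOTS:-1} ./my_mpi_program   # hypothetical MPI executable
```

Because the slots may be granted on several nodes, the MPI launcher, not the script, places the processes on the reserved hosts.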

GPU node
GPU nodes

A compute node in the cluster with one or more GPU cards.

interactive job
interactive jobs
interactive session
interactive sessions

In an interactive job or session the scheduler establishes a connection to a compute node on which users can run one or more commands, using the computational resources (e.g., runtime, memory, number of cores, etc.) that they have requested (and which are reserved by the scheduler for their use).
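As an assumed example, on a UGE-based cluster an interactive session is typically requested from a login node with qrsh; the resource list mirrors that of a batch job (the values shown are illustrative):

```shell
# Request an interactive session with 2 hours of runtime and 4 GB of memory
# (qrsh is the UGE command; resource values are illustrative):
#   qrsh -l h_rt=2:00:00,h_data=4G
# Once the session starts, commands run on the reserved compute node, e.g.:
hostname    # prints the name of the node on which the session is running
```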

highp job
highp jobs

On the Hoffman2 Cluster, highp jobs are jobs submitted by users in groups that have contributed compute nodes to the cluster; such jobs can run on those nodes with higher priority and for extended runtimes (up to 14 days).

job
jobs
unix job
unix jobs

In a Unix-like operating system, a job is a group of processes initiated either by one or more commands issued at the command line or by a shell script.

job scheduler
scheduler

A job scheduler (or scheduler) is a resource manager that dispatches jobs (batch and interactive) to compute nodes on the cluster, preventing resource contention.

job submission script
submission script
command file

A job submission script is a shell script containing the instructions to run a job (such as setting up the job environment by loading the needed modules, giving the exact commands needed to run the executables, etc.) as well as directives for the scheduler itself (e.g., requested runtime, memory, number of cores, etc.).
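A minimal sketch of a submission script, assuming a hypothetical application my_app and input file data.txt; lines beginning with #$ are directives read by the scheduler, not by the shell:

```shell
#!/bin/bash
#$ -cwd                        # run the job from the current working directory
#$ -o joblog.$JOB_ID           # write stdout to a log named after the job ID
#$ -j y                        # merge stderr into stdout
#$ -l h_rt=1:00:00,h_data=4G   # request 1 h of runtime and 4 GB of memory
# Set up the job environment (module name is hypothetical):
# module load my_app
echo "job started on $(hostname) at $(date)"
# my_app data.txt              # the actual command(s) to run
```

The script is submitted with qsub; resource directives placed in the script can also be given on the qsub command line instead.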

login node
login nodes

A gateway node used to access the cluster, not intended to perform computations.

master node

When running a parallel job that requires multiple cores/slots from one or more compute nodes, the master node is the head host on which the scheduled job is started.

node
nodes
host
hosts

A node, or a host, is a physical server (i.e., an enterprise-class computer) containing multiple CPU cores and interconnected with other nodes to form the cluster.

parallel job
parallel jobs

A parallel job executes a program in which some instructions are executed concurrently by different workers. Parallel jobs are divided into: shared memory jobs, distributed memory jobs, and hybrid jobs, which use shared memory within a compute node and distributed memory across multiple compute nodes.

prompt
shell prompt
shell command prompt
command prompt

A short string of text at the start of the command line on a command line interface (i.e., a shell). Typically of the form $, in some cases preceded by a field that includes information on the user running the shell, the host on which it is running, and the location within the filesystem from which it is run.

terminal emulator
terminal

A program that displays the command line interpreter (or shell).

unix shell
unix shells
shell
shells

The Unix shell, or shell, is a program that interprets user input. It is also known as the command line interpreter, the program mediating between a user and a Unix-like OS.

serial job
serial jobs

A program that executes instructions one after the other with no parallelization of tasks.

shared job
shared jobs

On the Hoffman2 Cluster, shared jobs are jobs submitted by users in groups that have contributed compute nodes to the cluster; such jobs can run on the unused nodes of other contributors for up to 24 hours.

shared memory job
shared memory jobs
shared memory

Shared memory jobs are programs that execute part or all of their instructions concurrently (i.e., in parallel) employing different workers, all of which operate on the same set of data (shared memory). On the Hoffman2 Cluster, shared memory jobs should be scheduled by requesting the correct number of slots (or computing cores) and selecting a shared parallel environment (see: Requesting multiple cores).
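A minimal sketch for a shared memory (e.g., OpenMP) job, assuming a shared parallel environment named shared, illustrative resource values, and a hypothetical executable:

```shell
#!/bin/bash
#$ -pe shared 4                # request 4 slots (cores) on a single node
#$ -l h_rt=2:00:00,h_data=2G   # runtime and memory per slot (assumed values)
# Match the thread count to the slots granted by the scheduler;
# NSLOTS is set by the scheduler, so default to 1 outside a job.
export OMP_NUM_THREADS=${NSLOTS:-1}
echo "running with ${OMP_NUM_THREADS} OpenMP thread(s)"
# ./my_openmp_program          # hypothetical shared memory executable
```

Since all threads share one address space, a shared parallel environment keeps every granted slot on the same node.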

slave node
slave nodes

When running a parallel job that requires multiple cores/slots from one or more compute nodes, the slave nodes are the additional hosts on which the scheduled job runs.

slot
slots

Within Univa Grid Engine, slots are the units of computing on which the processes of a job can be scheduled. On the Hoffman2 Cluster a slot corresponds to one CPU core. A single-threaded serial job occupies at most one slot (i.e., one computing core).