Your home directory
Your home directory is physically located on our fast, highly responsive NetApp storage. Its path has the form /u/home/u/username, where u is the first character of your username. For example, if your username is “bruin”, your home directory is /u/home/b/bruin.
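The naming rule can be sketched in the shell. The username here is an illustrative stand-in, and the substring expansion is a bash feature:

```shell
USERNAME=bruin                                # illustrative username
HOMEDIR="/u/home/${USERNAME:0:1}/$USERNAME"   # first letter, then full name
echo "$HOMEDIR"                               # prints /u/home/b/bruin
```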
Home directories are periodically backed up. In case of catastrophic hardware failure, files in home directories will be restored from the last backup that was taken. The purpose of the backup is not to be able to restore files that you accidentally delete or overwrite. See Protecting data from accidental loss.
If your resource group has purchased additional storage, you may also have a directory on that file system. You should ask your Hoffman2 sponsor for the name of the file system.
In your home directory, there is a convenient symlink called project pointing to the additional storage. To go to the project storage space, issue the command:
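What the symlink does can be sketched locally; on the cluster itself you would simply change directory into the project link in your home directory. The directory names below are illustrative stand-ins built in a temporary directory, not the cluster's actual layout:

```shell
# On the cluster: cd ~/project
# Local sketch of the same mechanism (names are illustrative):
base=$(mktemp -d)
mkdir "$base/groupstorage"                  # stands in for the purchased storage
ln -s "$base/groupstorage" "$base/project"  # the "project"-style symlink
cd "$base/project"
pwd -P                                      # -P prints the resolved physical path
```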
Your home directory has a 20GB disk space quota. If you belong to a resource group that has purchased additional storage, you may have an individual quota on that space in addition to the quota on your home directory. To see your quota and current disk space usage, at the shell prompt, enter:
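Alongside whatever quota-reporting command the cluster provides, the standard du and df utilities (not cluster-specific) give a quick view of how much space you are using:

```shell
du -sh "$HOME"    # total space consumed under your home directory
df -h "$HOME"     # size and free space of the file system holding it
```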
There are two kinds of file systems for temporary files that you may use: $TMPDIR which is local to each compute node, and $SCRATCH which is mounted on all login and compute nodes. The purpose of these file systems is to accommodate data used by jobs while they are executing. Do not write files in any /tmp directory.
When a job is running, the environment variable $TMPDIR (defined by the job scheduler) points to a directory on the compute node’s local hard drive (approximately 100 GB). $TMPDIR is defined only at run time; it is undefined before the job starts and after the job exits. It may not be possible to retrieve files under $TMPDIR after a job exits.
For example, in your job script, you could have something like:
cd $TMPDIR            # enter the temporary directory
# ... do some computations
# ... write some files to $TMPDIR
cp $TMPDIR/* $HOME/   # copy files from $TMPDIR to home directory
# job exits; $TMPDIR and its files are deleted automatically
Files on compute node local scratch file systems which are not related to a job running on that node will be deleted without further notice.
Use $TMPDIR for life-of-the-job files and high-activity files to avoid the overhead of network traffic associated with the network file systems and to improve your job’s throughput. See How to use scratch directory for file I/O for more information and examples. $TMPDIR may not be suitable for MPI-style jobs because it is local to each node and is not visible to the other compute nodes in the same MPI run.
Files in $TMPDIR will be deleted by the job scheduler at the end of your job or interactive session. If you want to keep files written to $TMPDIR, tell your program to copy them to permanent space before the end of your job or session. Files written to $TMPDIR are not backed up.
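The copy-back step can be sketched as a job-script fragment. The program output and destination names are illustrative, and outside a real job $TMPDIR is unset, so the sketch falls back to a fresh temporary directory:

```shell
#!/bin/bash
cd "${TMPDIR:-$(mktemp -d)}"    # $TMPDIR is set by the scheduler inside a job
echo "42" > output.dat          # stands in for your computation's output
mkdir -p "$HOME/results"        # permanent destination (illustrative)
cp output.dat "$HOME/results/"  # copy back before the job ends
# at job exit the scheduler deletes $TMPDIR and everything left in it
```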
The global scratch file system is mounted on all nodes of the Hoffman2 Cluster. There is a 2TB per user limit. The system provides an environment variable $SCRATCH which is a unique directory for your login ID’s files on the global scratch file system. Example:
Files on the global scratch file system which are outside of $SCRATCH directories will be deleted without further notice.
Because the $SCRATCH file system resides on fast flash-based storage, parallel jobs, especially those with high I/O requirements, should write to $SCRATCH for best performance.
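A minimal way to stage a run under $SCRATCH; the subdirectory name run1 is illustrative, and the mktemp fallback is only so the sketch runs outside the cluster, where $SCRATCH is not predefined:

```shell
SCRATCH="${SCRATCH:-$(mktemp -d)}"  # predefined on the cluster; local fallback
mkdir -p "$SCRATCH/run1"            # one directory per run; name is illustrative
cd "$SCRATCH/run1"
pwd
```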
Under normal circumstances, files you store in $SCRATCH may remain there for 14 days. Files older than 14 days may be automatically deleted by the system to guarantee that enough space exists for the creation of new files. If deleting all files older than 14 days still leaves insufficient free space, files belonging to the users consuming the most space in $SCRATCH will be deleted even though they have not been there for 14 days. Files written to $SCRATCH are not backed up.
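To see which of your scratch files have gone more than 14 days without modification, and are therefore candidates for the automatic purge, the standard find -mtime test works; the $SCRATCH fallback is only so the sketch runs off-cluster:

```shell
SCRATCH="${SCRATCH:-$(mktemp -d)}"  # predefined on the cluster; local fallback
find "$SCRATCH" -type f -mtime +14  # files last modified more than 14 days ago
```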
Each login node has a local 200GB /work file system. /work is different on each login node. To use it, first make a directory named with your login ID. Example:
cd /work
mkdir $USER
and write your files into your /work/$USER/ directory.
Files in /work more than 24 hours old become eligible for automatic deletion.
Local storage on login nodes can be used for high activity files to avoid the overhead and network traffic associated with the network file systems. Files written to /work are not backed up.