Frequently asked questions

Frequently asked questions and their answers are organized by the following categories:

Getting help

Note

If you do not find an answer to your question or issue, please open a support ticket via our online help desk.

For faster resolution, please provide helpful details, e.g. your username, the relevant files and directories (and whether you grant technicians access to any of them), any job scripts or job IDs affected, and the steps necessary to reproduce the issue.

Accounts

What is the status of my user account application?

Most likely your application is pending sponsor approval. To check your application status, you can send an email to accounts@idre.ucla.edu and we will get back to you.

Why do I have two usernames? (Grid and Hoffman2 Cluster)

When you register for a Hoffman2 Cluster account, you will get two sets of independent credentials.

  1. UCLA/UC Grid username:

  • Your Grid username is used to manage your account profile in our account management web application, Grid Identity Manager (GIM). Should you choose to change your primary sponsor or update your associated email address, you will need this credential and the answer to your challenge question to request the changes to your profile.

  2. Hoffman2 Cluster username:

  • Your Hoffman2 Cluster username is your account login ID and is required to access the cluster. For information on how to access the cluster, please see the Connecting/Logging in page. Your Hoffman2 Cluster username may be different from your Grid username; Hoffman2 usernames are limited to 8 characters.

I no longer need my user account. What should I do?

If you no longer need your user account, you can send an email to accounts@idre.ucla.edu requesting its deletion.

It says my username doesn’t exist?

(Screenshot: GIM portal “user denied” message.)

This is because the web page you are trying to access is for the Grid Administrator only. For information on how to access the cluster, please see the Connecting/Logging in page.


My SSH client says, ‘Permission denied, please try again’

If you’re getting a permission denied warning when connecting to the Hoffman2 Cluster, it could be for a few reasons. In this example, Joe Bruin is having a problem connecting with his Hoffman2 Cluster account, joebruin:

$ ssh hoffman2.idre.ucla.edu -l joebruin
joebruin@hoffman2.idre.ucla.edu's password:
Permission denied, please try again.
  1. Verify you are using your cluster username and not your Grid username. Hoffman2 Cluster usernames are limited to 8 characters.

  2. Verify you are typing the correct password.

  3. The system may not be accepting logins due to scheduled maintenance.

  4. If you continue to have problems, please visit our online help desk and submit a ticket.


I forgot the answer to my Grid Identity Manager (GIM) challenge question?

Please open a support ticket via our online help desk, https://support.idre.ucla.edu/helpdesk/

I need to change my Hoffman2 sponsor, how do I do that?

Instructions can be found in the Request a new sponsor section of the Users - managing your account page.

Getting access to other project folders

In order to get access to another research group’s purchased project storage volume, your cluster account will need to be a member of their Unix group. Please open a support ticket via our online help desk and include your cluster username and the full path to the project folder to request access.

I would like to collaborate with another Hoffman2 user. How can I share data with them?

There are a few things to keep in mind. Users on the cluster are organized into groups. Every user belongs to a primary group and may be in several secondary groups. You can see the list of groups you belong to with the groups command, e.g.

joebruin@login2:~$ groups joebruin
joebruin : web gpu

In the previous example, you see my primary group is web (first in the list), and I only belong to one secondary group, gpu. A user can belong to many groups, but a file or directory can be owned by only one owner and one group. Group membership can give you access to files and directories belonging to that group.

So, in order to share data, you must have a common group to which both users belong. For example, if I want to share a folder with Hoffman2 user, sambruin, I would check for a common group that we both belong to with the groups command:

joebruin@login2:~$ groups joebruin sambruin
joebruin : web gpu
sambruin : acct gpu

In this example, gpu is a group we’re both members of, so I could share data as long as it’s owned by the group, gpu and the permissions on the directory and files give the group members access. Group ownership does not imply group access; you must set the file access permissions so your group can use the files. For that you would use the chown and chmod commands, or newgrp to change your working group in your current shell.
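As a concrete sketch (the directory and group names here are hypothetical; substitute a group you actually share, as reported by the groups command):

```shell
# Create a directory to share with members of a common group:
mkdir -p shared_data

# Hand the directory to the shared group (uncomment and substitute your
# own common group, e.g. gpu):
# chgrp gpu shared_data

# rwx for owner and group, nothing for others; the leading 2 is the
# setgid bit, so files created inside inherit the directory's group:
chmod 2770 shared_data

stat -c '%a' shared_data   # prints "2770"
```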

To request membership to another Hoffman2 sponsor’s Unix group, you will need to get the sponsor’s approval. Please ask the sponsor to forward the request and their approval to accounts@idre.ucla.edu.


Acknowledging the Hoffman2 Cluster

How can I acknowledge Hoffman2 Shared Cluster in my presentations or publications?

When publishing results from work carried out partially or exclusively on the Hoffman2 Cluster, we appreciate you acknowledging us as follows:

"This work used computational and storage services associated with the Hoffman2 Shared Cluster provided by UCLA Institute for Digital Research and Education’s Research Technology Group."

Applications, compilers and libraries

How to load certain applications in your path / How to set up your environment

In Unix-like systems, the process that interacts with a user (or a user command), called the shell, maintains a list of variables, called environmental variables, and their values. For example, for the shell to find an executable, users should add the directory containing it to their $PATH variable.

Users can permanently add certain values to their shell environmental variables by editing their shell initialization files (such as: .bash_profile, .profile, etc.) located in their $HOME directories.
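For example, a minimal sketch of a line one might add to ~/.bash_profile ($HOME/bin is a hypothetical directory; substitute the one holding your programs):

```shell
# Prepend a personal bin directory to the executable search path
# ($HOME/bin is an example; substitute the directory holding your programs):
export PATH="$HOME/bin:$PATH"

# Verify the directory now appears on the path:
echo "$PATH" | tr ':' '\n' | grep -c "^$HOME/bin$"   # prints the number of matching entries
```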

Alternatively, Hoffman2 users can dynamically change their shell environment using the environmental modules utility.

How to use environmental modules interactively

Users can load a certain application/compiler in their environment (e.g.: $PATH, $LD_LIBRARY_PATH, etc.) by issuing the command:

$ module load application/compiler

where application/compiler is the name of the modulefile for the given application or compiler (for example: matlab, intel, etc.).

To see a list of available modulefiles for applications/compilers, users should issue the command:

$ module avail

To learn about the application/compiler loaded by a certain module, issue:

$ module whatis application/compiler

or:

$ module help application/compiler

To see how a module for a certain application/compiler will modify the user’s environment, issue:

$ module show application/compiler

To check which modules are loaded, issue:

$ module list

To unload a certain application/compiler previously loaded in one’s environment, issue:

$ module unload application/compiler

For a full list of module commands, issue:

$ module help

Users are encouraged to write their own modulefiles to load their own applications; you can learn how to do so here.

After loading the Intel compiler module, why is mpicc/mpicxx/mpif90 still not using Intel compiler?

Intel MPI compilers have unconventional names. After loading the Intel compiler module, use:

  • mpiicc for C programs

  • mpiicpc for C++ programs

  • mpiifort for Fortran programs


Connecting, Authentication, SSH public-keys

Set-up SSH public-key authentication

Using SSH public-key authentication to connect to a remote system is a robust, more secure alternative to logging in with an account password. SSH public-key authentication relies on asymmetric cryptographic algorithms that generate a pair of separate keys (a key pair), one “private” and the other “public”. You keep the private key on the local computer you use to connect to the remote system. The public key is “public” and can be stored on each remote system in a ~/.ssh/authorized_keys file.

Note

You need to be able to transfer your public key to the Hoffman2 Cluster. Therefore, you must be able to log in with your password to add the public key to your ~/.ssh/authorized_keys file in your home directory.

To set-up public-key authentication via SSH on macOS and Linux:

  1. Use the terminal application to generate a key pair using the RSA algorithm.

To generate RSA keys, at the prompt, enter:

$ ssh-keygen -t rsa
  2. You will be prompted to supply a filename (for saving the key pair) and a passphrase (for protecting your private key):

  • filename Press enter to accept the default filename (id_rsa)

  • passphrase Enter a passphrase to protect your private key

Warning

If you don’t passphrase protect your private key, anyone with access to your computer can SSH (without being prompted for the passphrase) to your account on any remote system that has the corresponding public key.

Your private key will be generated using the default filename (id_rsa) or the filename you specified and stored on your local computer in your home directory, in a subdirectory named, .ssh.

The public key will be generated using the same filename, but with a .pub extension added (id_rsa.pub). The public key file is stored in the same location (~/.ssh/).

  3. Now the public key needs to be transferred to the remote system (Hoffman2 Cluster). You can use the program ssh-copy-id or scp to copy the public key file to the remote system. It’s preferable to use ssh-copy-id because the contents of the public key are added directly to your ~/.ssh/authorized_keys file. If you use scp, you will need to connect to the remote computer and copy the contents of id_rsa.pub to the authorized_keys file manually. You will be prompted for your account password to complete the copy to the remote system.

transfer public key via ssh-copy-id

$ ssh-copy-id -i ~/.ssh/id_rsa.pub login_id@hoffman2.idre.ucla.edu

… where login_id is replaced with your cluster username.

transfer public key via scp

$ scp ~/.ssh/id_rsa.pub login_id@hoffman2.idre.ucla.edu:~/
$ ssh login_id@hoffman2.idre.ucla.edu
$ cat ~/id_rsa.pub >> ~/.ssh/authorized_keys
$ rm ~/id_rsa.pub
  4. You should be able to SSH to your Hoffman2 Cluster user account from your local computer with the private key. Replace joebruin with your cluster username.

[joebruin@macintosh ~]$ ssh joebruin@hoffman2.idre.ucla.edu
Enter passphrase for key '/Users/joebruin/.ssh/id_rsa':
Last login: Mon Jul 10 06:01:17 2020 from vpn.ucla.edu

SSH public-key authentication not working?

Please verify the file permissions. Typically, you will want:

  • $HOME/.ssh directory to be 700 (drwx------)

  • public key ($HOME/.ssh/id_rsa.pub) to be 644 (-rw-r--r--)

  • private key ($HOME/.ssh/id_rsa) to be 600 (-rw-------)
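If the permissions have drifted, you can restore them like this (a minimal sketch assuming the default key filename id_rsa; adjust if yours differs):

```shell
# Restore the recommended permissions on your SSH directory and key pair:
mkdir -p ~/.ssh
chmod 700 ~/.ssh
if [ -f ~/.ssh/id_rsa ]; then chmod 600 ~/.ssh/id_rsa; fi
if [ -f ~/.ssh/id_rsa.pub ]; then chmod 644 ~/.ssh/id_rsa.pub; fi

stat -c '%a' ~/.ssh   # prints "700"
```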


Public hosts hostkey fingerprints

The Hoffman2 Cluster has the following classes of public, or external facing, hosts that are used to connect or to transfer data to and from the cluster:

Class                  Hostname

Login nodes            hoffman2.idre.ucla.edu
Data transfer nodes    dtn.hoffman2.idre.ucla.edu
NX nodes               nx.hoffman2.idre.ucla.edu
x2go nodes             x2go.hoffman2.idre.ucla.edu

All public facing hosts have the following hostkey fingerprint:

RSA MD5-hex: 3c:9c:67:d8:c5:a4:ae:77:07:5f:10:2f:20:4a:75:0f
RSA SHA1-hex: b7:e0:12:dc:93:e4:3f:12:6f:72:c0:fb:cf:ad:d5:7a:47:1a:9a:df
RSA SHA256-hex: 91:a8:7d:04:9c:12:ce:b9:45:9d:5a:7d:4e:0f:84:97:62:1d:70:23:7b:26:03:79:f9:4a:f6:47:22:1d:bf:03
RSA SHA256-base64: kah9BJwSzrlFnVp9Tg+El2IdcCN7JgN5+Ur2RyIdvwM

Even though all of our public, external-facing hosts use the same RSA public hostkey, that public key can be represented with four different fingerprint hashes. Depending on the software package, you may see one of the four fingerprint hashes listed above. If the fingerprint hash doesn’t match one listed above, do not continue authentication and contact us by email at support@idre.ucla.edu.


Connecting for the first time

As all connections are based on a secure protocol, the first time you connect from a local computer to the Hoffman2 Cluster you will be prompted with a message similar to:

The authenticity of host 'HOSTNAME (HOST IP)' can't be established.
RSA key fingerprint is SHA256:kah9BJwSzrlFnVp9Tg+El2IdcCN7JgN5+Ur2RyIdvwM.
RSA key fingerprint is MD5:3c:9c:67:d8:c5:a4:ae:77:07:5f:10:2f:20:4a:75:0f.
Are you sure you want to continue connecting (yes/no)?

Where HOSTNAME and HOST IP are the hostname and IP address of the various classes of public hosts.

Warning

Only proceed to connect if the RSA key fingerprint displayed in the prompted message corresponds to one of the RSA fingerprints listed above for the Hoffman2 Cluster public hosts.

If, when attempting to connect, the RSA fingerprint displayed by your SSH client does not match one of the RSA fingerprints listed above for the Hoffman2 Cluster public hosts, you will get a message similar to:

@@ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that the RSA host key has just been changed.

In this case, do not continue authentication; instead, contact support@idre.ucla.edu.


Data transfers

When I log in to the Globus web application, I get a “Missing Identity Information” error

Missing Identity Information - Unable to complete the authentication process. Your identity provider did not release the attribute(s): {{eppn}}

To resolve this issue of identity attributes not being released to third parties, you will need to contact the UCLA IT Support Center. UCLA Logon ID is not a service of the IDRE Research Technology Group.

Job Errors/ Job Scheduler

An IDRE consultant sent me an email about a lot of left over jobs running under my userid. How do I delete them?

You can get the process IDs using the ps command, filter them with the grep command to select only the jobs you want to delete, and feed the result to the kill command.

To list the processes, use the command:

$ ps -u loginid | grep myjob | awk '{print $1}'

To kill the processes, use the command:

$ ps -u loginid | grep myjob | awk '{print $1}' | xargs kill

Replace loginid with your username and myjob with the executable name (e.g. bash or python).
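If the leftover processes share a recognizable name, pkill can combine the filter and kill steps. A minimal sketch, using a background sleep as a stand-in for a leftover job (the pattern 'sleep 300' belongs to this example only):

```shell
# Start a stand-in "leftover job" in the background:
sleep 300 &
TARGET=$!
sleep 1   # give the background process a moment to start

# Kill all of your own processes whose command line matches the pattern
# (pkill never matches itself):
pkill -u "$(id -un)" -f 'sleep 300'

# Reap the child and confirm it is gone:
wait "$TARGET" 2>/dev/null || true
kill -0 "$TARGET" 2>/dev/null && echo "still running" || echo "killed"
```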


In an interactive session (via qrsh), I am getting “not enough memory” error messages and my application is terminated abruptly. Why?

When issuing the qrsh command, one must specify the memory size via -l h_data, which is also imposed as the virtual memory limit for the qrsh session. If the application (e.g. matlab) exceeds this limit, it will be automatically terminated by the scheduler. Each application has a different error message, but it usually contains key words like “not enough memory”, “increase your virtual memory”, “cannot allocate memory” or something similar. In this case, you will have to re-run the qrsh command with an increased h_data value.

Please also note that requesting an excessive amount of h_data might cause qrsh to wait for a long time, or even fail to start, because fewer and fewer compute nodes can meet your criterion as you increase the h_data value. If this is your first time launching the application in a qrsh session, we recommend gradually increasing h_data until the application runs successfully.

Why is my job still waiting in the queue?

The following factors may contribute to longer wait times, or to jobs not starting (depending on your account’s access level):

  • A large memory request: h_data, or the product (h_data)*(-pe shared #), is large

  • A long run time, requested with: h_rt

  • A specific CPU model, requested with: arch

  • Many CPUs, requested with: -pe dc* #

  • You are already running on a number of CPUs or nodes

  • For high-priority jobs, requested with: -l highp

    • Your group members are already running on your purchased nodes; there are not enough left for your job to start

    • Your request exceeds what your group’s nodes have

  • The Hoffman2 Cluster’s overall load


I have a lot of jobs in error state E. How do I find out what the problem is?

When the myjobs script or qstat -u $USER shows you have jobs in an error state (“E”, “Eqw”, etc.) you can use the error_reason script to show you why. It will print the error reason line from qstat -j jobid output for all of your jobs that are in an error state.

$ error_reason -u loginid

Replace loginid with your username.

What queues can I run my jobs in?

The qquota command will tell you what resources available to your userid are in use at the moment the command is run. The purpose of qquota is not to provide a complete list of the resources available to your userid; if no resources are in use at the moment, qquota will not return any information.

For example:

resource quota           rule limit           filter
rulset1/10         slots=123/256        users @campus hosts @idre-amd_01g

where slots=123/256 means 123 slots or cores are in use by your group out of your group’s total allocation of 256. Enter man qquota at the shell prompt for more information.

When will my job run?

The qstat command will list all the jobs which are running (r) or waiting to run (qw), in order of priority (“prior” column). If all jobs requested the same resources, this would also be the order in which they start running. In reality, some jobs request more nodes or a longer run time than is presently available, so the job scheduler will “back-fill”: it tries to start jobs which require fewer resources and will complete without delaying the start time of a job higher in the list.

If you are in a research group which has purchased nodes for the Hoffman2 Cluster, you can use the highp complex to request that your job run on your group’s purchased nodes. It is guaranteed that some job submitted by someone in your research group will start within 24 hours. To see where your highp job is with respect to the waiting jobs that everyone else in your group has submitted, you can use the groupjobs script. It will display a list of pending jobs, or pending and running jobs, similar to regular qstat output but only for everyone in your resource group. The job at the top of the list will in most cases start running before those later in the list. For help and a list of options, enter groupjobs -h.


Why did my highp job not start within 24 hours?

A highp job will start within 24 hours provided that your group does not overuse its purchased resources.

The common reasons a highp job did not start in 24 hours are:

  1. You did not specify the highp option in your job script.

    Check your job script and look for a line that starts with #$ -l; highp should be one of the parameters. For example, the line should look like:

    #$ -l h_data=1G,h_rt=48:00:00,highp
    
  2. The pending job in question does not have the highp option. (See below for how to check this.)

  3. Members of your group are already running long jobs on the purchased compute nodes. In this case, your highp job will be queued until resources become available. (You still need to add highp to the job script as described above.)

  4. Your research group is not a Hoffman2 shared cluster program participant. Consider joining the program and enjoying the benefits.

  5. The product of h_data and the number of slots is greater than the per-node memory size of your group’s nodes.

For example, you have h_data=8G and -pe shared 7. This means you are requesting a node with 56 GB (=8G*7) of memory. If each of your group’s nodes has, say, 32GB of memory, your highp job will not start.
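This check is easy to script; a minimal sketch using the example numbers above (the 32 GB node size is the assumed value from the example):

```shell
# Example values from above: 8G per slot, 7 slots, 32G nodes (assumed).
h_data_gb=8
slots=7
node_mem_gb=32

total=$((h_data_gb * slots))
echo "requesting ${total}G on one node"   # prints "requesting 56G on one node"

if [ "$total" -gt "$node_mem_gb" ]; then
  echo "request exceeds node memory; the job will never start"
fi
```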

To check whether your pending job has the highp option, use the following commands and steps:

  1. Find out job ID (of the pending job):

    $ qstat -s p -u $USER
    
  2. Check if highp is specified for the job in question:

    $ qstat -j job_id | grep ^'hard resource_list' |grep highp
    

If you see no output from the command above, the job does not have the highp option; you need to specify highp. See below for how to use the qalter command to fix this.

If you see something like:

hard resource_list: h_data=1024M,h_rt=259200,highp=TRUE

then the job does have the highp option specified.

To alter an already-pending job from non-highp to highp (without re-submitting it), use:

$ qalter -mods l_hard highp true  job_id

For more information about qalter, use the command:

$ man qalter


How much virtual memory should I request in job submission?

It is important to request the correct amount of memory when submitting a job. If the request is too small, the job may be killed at run time due to memory overuse. If the request is too large (e.g. larger than the memory of the compute nodes you intend to run the job on), the job may never start.

The following are a few common techniques that can help you determine the virtual memory size of your program.

Method 1: If your job has completed, run the command:

$ qacct -j job_ID

Look for the maxvmem value. This is the virtual memory size that your program consumed as seen by the scheduler. Specify h_data so that (h_data)*(number of slots) is no less than this value. For example, if maxvmem shows 11 GB, you can request 12 GB of memory on a compute node to run the job, using one of the following:

#$ -l h_data=12GB # for a single-core run (if your program is sequential)
#$ -l h_data=6GB -pe shared 2 # for a 2-core run (if your program is shared-memory parallel)
#$ -l h_data=2GB -pe shared 6 # for a 6-core run (if your program is shared-memory parallel)

Note

In these examples, the product of h_data * (number of slots) is always 12GB. If you specify -l h_data=12GB -pe shared 6, you are actually requesting 12GB*6=72GB of memory on a node.

Note

If you are running multiple slots on a node, h_data * (number of slots) needs to be smaller than the total memory size of your nodes.

Method 2: If you are not sure about the virtual memory size, run your program in “exclusive” mode first. Once done, use Method 1 above to determine the virtual memory size. To submit a job in exclusive mode, qsub the job with the command:

$ qsub -l exclusive your_job_script

where your_job_script is replaced by the actual file name of your job script. In this case, you should also specify h_data for node-selection purposes. If you are running a sequential or shared-memory parallel program (i.e. using only one compute node), we recommend using h_data=32GB without specifying the number of slots. You can also append the exclusive option to the line starting with #$ -l in your job script, e.g.:

#$ -l h_rt=24:00:00,h_data=32G,exclusive

Again, if your program is sequential or shared-memory parallel, DO NOT specify the number of slots (i.e. there should be no -pe option in your job script or command line); otherwise you may over-request memory, preventing the job from starting.


How do I pack multiple job-array tasks into one run?

Using a job array is a way to submit a large number of similar jobs. In some cases each job task takes only a few minutes to compute. Running a large number of extremely short jobs through the scheduler is very inefficient: the system is likely to spend more time finding a node and dispatching jobs than doing the actual computing. With a simple change to your job script, you can pack multiple job-array tasks into one run (or dispatch), so you can benefit from the convenience of job arrays while using the computing resources efficiently.

If you run too many short jobs (e.g. more than 200 less-than-3-minute jobs within an hour), your other pending jobs may be temporarily throttled. Please understand that this is a measure to ensure the scheduler’s normal operation, not to inconvenience users.

At run time, the environment variable $SGE_TASK_ID uniquely identifies a task. The main ideas for packing multiple tasks into one run with minimum change to your job script are to:

  1. change the job task step size, and

  2. create a loop inside the job script to execute multiple tasks (equal to the step size).

Of course, you may need to adjust h_rt to allocate sufficient wall-clock time to run the “packed” version of the job script.

Your original job script looks like:

#!/bin/bash
...
#$ -t 1-2000
...
./a.out $SGE_TASK_ID ...

To pack, say, 100 tasks into one run, change your job script to:

#!/bin/bash
...
#$ -t 1-2000:100
...
for i in `seq 0 99`; do
   my_task_id=$((SGE_TASK_ID + i))
   ./a.out $my_task_id ...
done
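You can check the packing arithmetic locally; a minimal sketch that simulates one dispatch (SGE_TASK_ID is set by the scheduler at run time; 101 is assumed here, i.e. the second dispatch of -t 1-2000:100):

```shell
# Simulate the dispatch whose starting task ID is 101:
SGE_TASK_ID=101
first=$SGE_TASK_ID

for i in $(seq 0 99); do
  my_task_id=$((SGE_TASK_ID + i))
  :   # ./a.out $my_task_id ...  (the real work would run here)
done

echo "this dispatch covers tasks $first-$my_task_id"   # prints "this dispatch covers tasks 101-200"
```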


How do I request large memory to run sequential (1-core) program?

If you are requesting less than 64 GB, use h_data to specify the requested memory size, e.g.:

$ qsub -l h_data=32G ...

You can also put -l h_data=32G in your job script file.

In this case, you are requesting a single core (slot), so you should not specify any -pe option.

If you are requesting more than 64GB, please contact us.


How do I request large memory to run multi-threaded (single node) program?

You will use -pe (number of cores) and -l h_data (memory per core) together to specify the total amount of memory you want. Note that the product (number of cores)*(h_data) must not exceed the total memory of a compute node; otherwise your job will never start.

For example, request 8 cores with 32G total memory (shared by all 8 cores):

$ qsub -l h_data=4G -pe shared 8 ...

If your multi-threaded program will automatically use all CPUs available on the node, add the -l exclusive option, e.g.:

$ qsub -l h_data=4G,exclusive -pe shared 8 ...

You can also put -l h_data=4G -pe shared 8 in your job script file.

If you are requesting more than 64GB total memory, please contact us.

Why can’t I submit a large number of individual jobs?

When there are too many pending jobs, the scheduler may fail to process all of them, causing scheduling problems. Therefore, to maintain stability, the system limits how many jobs a user can submit. This limit is usually in the hundreds and may vary depending on the system’s load.

Most users who submit a huge number of individual jobs should consider using job arrays, for one obvious benefit: one job-array job can hold thousands of “tasks” (or individual “runs”) while consuming only one (1) job of the user’s job limit. A user can then submit hundreds of job arrays (each containing thousands of “runs”). This usually covers some of the largest “through-put” runs on the cluster.

If each individual task is very short (e.g. finishes in a few minutes), users should pack several tasks into one run to increase throughput efficiency. See this FAQ for more details. Running a large number of short jobs is a severe waste of the cluster’s computing power.

For more information about job arrays, see this page.


What is “fork: retry: Resource temporarily unavailable”?

On the login nodes each user account is limited to a certain number of running processes. The message “… fork: retry: Resource temporarily unavailable” means you have reached this limit. To circumvent this, you can get an interactive session on a compute node (with the desired resources in terms of memory and/or processors), which is not subject to this limit, and do your work there. See also the role of the login nodes.

When submitting a job, I get “Unable to run job: got no response from JSV script…”.

This could happen when the scheduler (software) is too busy handling jobs. One way to overcome this problem is to add the following line at the bottom of your shell initialization file (e.g. ~/.bashrc) to increase the default timeout limit:

export SGE_JSV_TIMEOUT=60

Then run:

$ source ~/.bashrc

(or just log out and log in), and try to submit your job again.


Storage and File systems

What file systems are backed up?

The home and project file systems are backed up to disk-based storage, with a target of one backup per 24 hours. See Backups.

Protecting data from accidental loss

Here are several ways to protect your files from accidental loss.

Back up your files to another place, e.g. the hard drive on another computer. See File transfer.

Make backup copies of files and directories in a compressed tar file. For example, to create a compressed tar file (.tgz) of all files under the directory “myproject”:

$ tar -czf myproject.tgz myproject/

Enter man tar at the shell prompt for more information.
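To sanity-check an archive after creating it (a throwaway sketch; “myproject” and its contents are illustrative):

```shell
# Create a small sample directory and archive it:
mkdir -p myproject
echo "some data" > myproject/notes.txt
tar -czf myproject.tgz myproject/

# List the archive contents to verify the files made it in:
tar -tzf myproject.tgz   # lists myproject/ and myproject/notes.txt
```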

Modify your personal Linux environment to change the cp (copy), mv (move), and rm (remove) commands so that you are prompted for confirmation before any existing file is deleted or overwritten.

bash shell: Add the following lines to your $HOME/.bashrc file:

alias cp='cp -i'
alias mv='mv -i'
alias rm='rm -i'

tcsh shell: Add the following commands to your $HOME/.cshrc file:

alias cp 'cp -i'
alias mv 'mv -i'
alias rm 'rm -i'

Modify your personal Linux environment to prevent any existing file from being overwritten by the output redirection (>) symbol.

bash shell: Add the following command to your $HOME/.bashrc file:

set -o noclobber

tcsh shell: Add the following command to your $HOME/.cshrc file:

set noclobber

Use the chmod command to remove your own write access to files you intend to not change or delete. Example:

$ chmod -w myfile

You will be unable to accidentally modify such a file in the future. If you try to delete a file for which you have removed your own write access without specifying the -f (force) flag on the rm command, you will be prompted and must reply affirmatively before the file is removed. Enter man chmod at the shell prompt for more information.
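For example (umask 022 is assumed so the resulting mode is predictable; myfile is illustrative):

```shell
umask 022
touch myfile          # new file created with mode 644 (rw-r--r--)
chmod -w myfile       # remove your own write access
stat -c '%a' myfile   # prints "444"
```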

Data Sharing on Hoffman2

Data sharing is covered in the Accounts section above; see “I would like to collaborate with another Hoffman2 user. How can I share data with them?” In short, both users must share a common Unix group, the files must be owned by that group, and the file permissions must grant group access (see the chown, chmod and newgrp commands).

My program writes lot of scratch files in my home directory. This results in exceeding my disk space quota. What is the solution?

There are several things you can do:

- If you are a member of a research group that has contributed nodes to the Hoffman2 Cluster, your PI can purchase additional disk space for use by the members of your group.
- Each process in your parallel program can write to the local /work filesystem on the node it is running on. When the program finishes, you can copy the files off to a place where you have more space. Since /work is local to the nodes, using it is very efficient.
- You can write to /u/scratch; you have 7 days after the job completes to copy the files somewhere else.
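For the node-local /work approach, a job-script sketch might look like the following. The SGE directives and the /work/$USER/$JOB_ID directory layout are assumptions for illustration; check the cluster's job-script documentation for the exact conventions.

```shell
#!/bin/bash
#$ -cwd
#$ -l h_rt=1:00:00
# Assumed layout: a per-job scratch directory under node-local /work.
SCRATCH=/work/$USER/$JOB_ID
mkdir -p "$SCRATCH"
cd "$SCRATCH"
# ... run your program here; its scratch files land on fast, node-local disk ...
cp results.dat "$HOME/"     # copy back only the output you need to keep
cd /
rm -rf "$SCRATCH"           # clean up /work before the job ends
```

Cleaning up at the end of the script matters because /work is shared by all jobs that land on that node.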

How do I transfer my files from the Hoffman2 Cluster to my machine?

If the size of an individual file does not exceed 100 MB, you can download it to your local machine, or transfer it to another cluster that you can access at UCLA from the UCLA Grid Portal.

For files of any size, you can use the scp command to transfer a file or directory from one machine or system to another. For safety reasons, as outlined in the Security Policy for IDRE-Hosted Clusters, always scp from your machine to the IDRE-Hosted cluster. NEVER scp from the IDRE-Hosted cluster back to your local machine.
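For example, from your local machine's command line (login_id and the file names are placeholders):

```shell
# Run these on YOUR local machine, never from the Hoffman2 prompt.
# Copy a single file from your Hoffman2 home directory to the current directory:
scp login_id@hoffman2.idre.ucla.edu:results.dat .
# Copy an entire directory tree:
scp -r login_id@hoffman2.idre.ucla.edu:myproject .
```

Note that both commands are initiated from your local machine, consistent with the security policy above.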

Is there a simpler way to copy all my files to my new Hoffman2 account?

Once you have been notified that your login ID has been added to the Hoffman2 Cluster, login to your local machine and from your local machine’s home directory enter the command:

tar -clpzf - * | ssh loginid@hoffman2.idre.ucla.edu tar -xpzf -

Replace loginid with your Hoffman2 Cluster loginid.

Note that this transfer will not copy any of the hidden (dot) files from your local home directory to your new home directory on the Hoffman2 Cluster. Since many of the dot files in your home directory are operating system version specific, it would not be appropriate or useful to transfer these files.

What is my disk storage quota and usage?

From the UCLA Grid Portal, you can use its “Disk Usage on Hoffman2” application. Click:

Job Services
Applications
Disk Usage on Hoffman2
Submit Job button

You do not have to make any changes on the application form in order for it to report on your home directory usage. View your job results as usual. Click:

Job Services
Job Status

After your job has completed and its status is Done, click the Stdout link in the Output column for your job. Your request runs as a job on Hoffman2 and will send you standard Sun Grid Engine job status email.

From the Hoffman2 Cluster login nodes, at the shell prompt, enter:

myquota

The myquota command will report the usage and quota for filesystems where your userid has saved files, including /u/scratch as well as your home directory. Use the myquota command instead of the quota command.


Other

How do I print my output?

There is no printer directly associated with the Hoffman2 Cluster. If you have a printer attached to your local desktop machine, you can copy your file to your local machine and print your file locally. Recall that for security reasons you should issue the scp command from your local machine, and not from the Hoffman2 command line.

Here is a small script, which you could save on a Unix/Linux machine, that makes printing a text file easier. You might name this script h2print.

#!/bin/sh
scp login_id@hoffman2.idre.ucla.edu:"$1" .
lpr "$(basename "$1")"

where login_id is your Hoffman2 Cluster user name (i.e., login ID). You can omit login_id@ if your user_id on your local machine is the same as your Hoffman2 Cluster login ID. Note the period (.) at the end of the scp command line. Mark the script as executable with the chmod command:

$ chmod +x h2print

To print a Hoffman2 text file in your home directory, from your local machine’s command prompt, enter:

$ h2print hoffman2_filename

where hoffman2_filename is the name of your text file on the Hoffman2 Cluster that you want to print.

The scp command will prompt you for your Hoffman2 Cluster password, unless you have previously set up an RSA key pair on your local machine with the ssh-keygen -t rsa command and appended a copy of the public key (id_rsa.pub) to ~/.ssh/authorized_keys in your Hoffman2 Cluster account.

Will the UCLA Grid Portal be taken down in the near future?

The UCLA Grid Portal software is no longer under active development. Although it currently works, we do not expect the software components the portal is built on to remain compatible with the latest versions of the essential software packages and operating system needed to run it. Therefore, this service will be discontinued in the near future. IDRE is working on alternate solutions to provide similar services.