Table of contents
  1. VPN address
  2. SSH to the login node
  3. Start interactive mode/terminal/bash on worker/compute nodes
  4. Available modules (software)
  6. Info about available resources (queues, nodes, and status)
    1. Online
    2. From a terminal on the login node
  6. Find path to Matlab
  7. Run software GUI
  8. Edit files
  9. Transfer files
  10. Submit a job to the scheduler
  11. Sample SBATCH script
  12. Controlling jobs (to be run on the login node)
  13. Log in to a DEV node
  14. Set CPU affinity

VPN address

secureaccess.gsu.edu

SSH to the login node

# -X enables X11 forwarding so graphical programs can display on your local machine
$ ssh -X <CampusID>@arclogin

Start interactive mode/terminal/bash on worker/compute nodes

Can be run on the login node

# Start interactive mode on a general-purpose compute node
$ srun -p qTRD -A <slurm_account_code> -v -n1 --mem=10g --pty /bin/bash    

# Start interactive mode on a GPU worker node
$ srun -p qTRDGPUH -A <slurm_account_code> -v -n1 --pty --mem=10g --gres=gpu:V100:1 /bin/bash

# Start interactive mode on a high-memory worker node
$ srun -p qTRDHM -A <slurm_account_code> -v -n1 --mem=10g --pty /bin/bash
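
If you need more cores or a longer session, standard srun flags such as -c (cores per task) and --time can be added; a sketch assuming 4 cores for 2 hours (values are illustrative, not site requirements):

# Interactive session with 4 CPU cores and a 2-hour time limit
$ srun -p qTRD -A <slurm_account_code> -v -n1 -c4 --time=02:00:00 --mem=10g --pty /bin/bash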

Available modules (software)

Can be run on any node

$ module avail

# Search for a particular module
$ module avail matlab

# Load a particular module
$ module load matlab/R2022a

# List loaded modules
$ module list

# Unload all modules
$ module purge
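
The module command also provides show and unload subcommands; for example, with the MATLAB module used above:

# Show what loading the module would change (paths, environment variables)
$ module show matlab/R2022a

# Unload a single module without purging everything
$ module unload matlab/R2022a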

Info about available resources (queues, nodes, and status)

Online

https://arcxdmod.rs.gsu.edu/

From a terminal on the login node

# Summary of partitions and node states
$ sinfo

# Per-node view: hostname, partition, state, CPUs, memory, CPU usage (allocated/idle/other/total), and free memory
$ sinfo -o "%24n %7P %.11T %.4c %.8m %14C %10e"

# Per-node view that also shows GRES (e.g. GPUs) available and in use, plus the partition time limit
$ sinfo -O "nodehost:16,partition:.8,cpus:.8,memory:.8,cpusstate:.16,freemem:.8,gres:.16,gresused:.16,statelong:.8,time:.16"
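
To restrict the output to a single partition, sinfo accepts -p; for example, for the qTRD partition used above:

# Show only the qTRD partition
$ sinfo -p qTRD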

Find path to Matlab

Can be run on any node, but please do not actually run MATLAB on the login node.

$ module load matlab/R2022a
$ which matlab

Run software GUI

From https://hemera.rs.gsu.edu/

Edit files

Can be run on any node

$ nano <filename>
$ vi <filename>

Transfer files

# From the local machine to the cluster
$ scp -r <local path to file> <campusID>@arclogin:<server path>

# From the cluster to the local machine
$ scp -r <campusID>@arclogin:<server path> <local path to file>
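
A concrete example, assuming a local directory named mydata copied into your home directory on the cluster (both paths are illustrative):

# Copy ./mydata from the local machine into your cluster home directory
$ scp -r ./mydata <campusID>@arclogin:~/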

Submit a job to the scheduler

Can be run on the login node

# Submit a job script 
$ sbatch JobSubmit.sh

# Submit a job array of 5000 tasks, with at most 100 running at a time
$ sbatch --array=1-5000%100 JobSubmit.sh

Sample SBATCH script

Please see example SLURM scripts.
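
A minimal sketch of such a script, assuming the qTRD partition and the MATLAB module used elsewhere on this page (job name, resources, and output file names are illustrative):

#!/bin/bash
#SBATCH -p qTRD                    # Partition (queue)
#SBATCH -A <slurm_account_code>    # Slurm account code
#SBATCH -J myjob                   # Job name
#SBATCH -n 1                       # Number of tasks
#SBATCH -c 4                       # CPU cores per task
#SBATCH --mem=10g                  # Memory
#SBATCH --time=02:00:00            # Wall-clock time limit
#SBATCH -o slurm-%j.out            # Standard output (%j expands to the job ID)
#SBATCH -e slurm-%j.err            # Standard error

module load matlab/R2022a
matlab -batch "myscript"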

Controlling jobs (to be run on the login node)

# Check status of all jobs
$ squeue

# Check status of jobs by user
$ squeue -u <campusID>

# Continuously check status of jobs
$ watch -n 10 squeue -u <campusID>

# Check job status by ID
$ squeue -j <jobID>

# Cancel job by ID
$ scancel <jobID>
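
For more detail than squeue provides, the standard SLURM tools scontrol and sacct can be used (sacct reports on finished jobs and requires job accounting to be enabled):

# Detailed information about a pending or running job
$ scontrol show job <jobID>

# Status and resource usage of a completed job
$ sacct -j <jobID>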

Log in to a DEV node

Can be run from the login node or your local machine; for lightweight experiments only

$ ssh arctrdgndev101

Set CPU affinity

# --localalloc keeps memory allocation on the NUMA node where the process runs
$ numactl --localalloc matlab -batch "myscript"
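
To pin the process to specific CPU cores as well, numactl and taskset both support explicit core binding; the core IDs below are only an example:

# Bind to cores 0-3 and keep memory allocation local
$ numactl --physcpubind=0-3 --localalloc matlab -batch "myscript"

# Equivalent core pinning with taskset
$ taskset -c 0-3 matlab -batch "myscript"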

