Here are the instructions for running NAMD on Expanse. \\
\\
Your work directory is /expanse/lustre/scratch/$USER/temp_project/ \\
\\
Example files for the equilibration run (first-dyn.csh and first-dyn.scr) and the production run (dyn.csh, dyn.scr, dyn-1.inp, and dyn-2.inp) can be found as follows: \\
For **CPU** nodes @ /home/mkhsieh/script/namd_run/cpu/ \\
OR download {{ :cpu.tgz |}} \\
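For example, staging the CPU example files into your work directory could look like the sketch below. The run-directory name namd_test, and the assumption that a downloaded cpu.tgz sits in your home directory, are placeholders and not part of the original instructions.

<code bash>
# Work from your scratch project space (path given above)
cd /expanse/lustre/scratch/$USER/temp_project

# Make a run directory (name is a placeholder) and copy the CPU examples into it
mkdir -p namd_test && cd namd_test
cp /home/mkhsieh/script/namd_run/cpu/* .

# Alternative: extract the downloaded archive (assumes cpu.tgz was saved to your home directory)
# tar -xzvf ~/cpu.tgz
</code>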
Note: \\
Each standard compute node has ~256 GB of memory and 128 cores. \\
By default, each core on a standard node is allocated 1 GB of memory, so users should explicitly include the --mem directive to request additional memory; the maximum memory per compute node is --mem=248G. \\
\\
Example SBATCH lines \\
#SBATCH --job-name=your_job_name
#SBATCH --account=**ask Dr. Klauda**
#SBATCH --partition=shared
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=32
#SBATCH --cpus-per-task=1
#SBATCH --mem=64G
#SBATCH --time=48:00:00
Example execution line \\
namd2 +p32 +setcpuaffinity dyn.inp >& dyn.out
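For reference, a minimal CPU job script that combines the SBATCH header and the execution line above might look like this sketch. The job name, account string, and dyn.inp/dyn.out file names are placeholders; compare with first-dyn.csh and dyn.csh from the example directory.

<code bash>
#!/bin/bash
#SBATCH --job-name=your_job_name
#SBATCH --account=your_account   # placeholder: ask Dr. Klauda for the account name
#SBATCH --partition=shared
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=32
#SBATCH --cpus-per-task=1
#SBATCH --mem=64G
#SBATCH --time=48:00:00

# Load the NAMD environment/modules as done in the example scripts (not shown here)

# Run from the scratch work directory given above
cd /expanse/lustre/scratch/$USER/temp_project

# Launch NAMD on 32 cores with CPU affinity set; >& sends stdout and stderr to dyn.out
namd2 +p32 +setcpuaffinity dyn.inp >& dyn.out
</code>

Submit the script with sbatch and monitor it with squeue -u $USER.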
For **GPU** nodes @ /home/mkhsieh/script/namd_run/gpu \\
OR download {{ :gpu.tgz |}} \\
Note: \\
Each GPU node has 4 GPUs, ~384 GB of memory, and 40 cores. \\
The default allocation for one GPU is 1 GPU, 1 CPU core, and 1 GB of memory; users need to explicitly request additional resources in their job script. \\
For the maximum memory on a GPU node, request --mem=374G. One GPU SU is equivalent to 1 GPU, <10 CPUs, and <96 GB of memory. \\
\\
Example SBATCH lines \\
#SBATCH --job-name=your_job_name
#SBATCH --account=**ask Dr. Klauda**
#SBATCH --partition=gpu-shared
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=10
#SBATCH --mem=95G
#SBATCH --gpus=1
#SBATCH --time=48:00:00
Example execution line \\
mpirun --mca btl_openib_allow_ib true -np 1 --map-by ppr:1:node namd2 +ppn10 +setcpuaffinity +devices 0 dyn.inp >& dyn.out
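Similarly, a minimal GPU job script combining the header and the execution line above might look like this sketch. The job name, account string, and dyn.inp/dyn.out file names are placeholders; compare with the scripts in the gpu example directory.

<code bash>
#!/bin/bash
#SBATCH --job-name=your_job_name
#SBATCH --account=your_account   # placeholder: ask Dr. Klauda for the account name
#SBATCH --partition=gpu-shared
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=10
#SBATCH --mem=95G
#SBATCH --gpus=1
#SBATCH --time=48:00:00

# Load the GPU build of NAMD and the MPI environment as done in the example scripts (not shown here)

# Run from the scratch work directory given above
cd /expanse/lustre/scratch/$USER/temp_project

# One MPI rank on the node, 10 NAMD worker threads (+ppn10), bound to GPU device 0
mpirun --mca btl_openib_allow_ib true -np 1 --map-by ppr:1:node \
  namd2 +ppn10 +setcpuaffinity +devices 0 dyn.inp >& dyn.out
</code>

As with the CPU case, submit with sbatch and monitor with squeue -u $USER.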