Running the first membrane simulation

These instructions assume you are running on Zaratan. If running on another resource, see sample inputs from Dr. Klauda or others.

Once you've built the bilayer in CHARMM-GUI per Dr. Klauda's instructions, you'll need to download the resulting .tgz file to the scratch filesystem on Zaratan. Place it in the shared folder /scratch/zt1/project/energybio/shared/username, replacing 'username' with your login. You MUST download the file and then uncompress it on Zaratan. Do NOT uncompress it first and then download; that will cause problems. Uncompress the file with the tar -xvzf command, and let Dr. Klauda (or an associated graduate student) know the path of this directory so the setup can be quickly checked.
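The steps above can be sketched as follows. The archive name, login hostname, and 'username' are placeholders; substitute your own (check the Zaratan documentation for the actual login address):

```shell
# On your local machine: upload the CHARMM-GUI archive to Zaratan.
# 'charmm-gui.tgz', 'username', and the hostname are placeholders.
scp charmm-gui.tgz username@login.zaratan.umd.edu:/scratch/zt1/project/energybio/shared/username/

# Then, logged in to Zaratan, uncompress it THERE (never before uploading):
ssh username@login.zaratan.umd.edu
cd /scratch/zt1/project/energybio/shared/username
tar -xvzf charmm-gui.tgz
```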

After the setup is approved, all you need to do is copy all the files from the following directory on Zaratan into the namd subfolder of the system you built:

~jbklauda/sim-inputs/namd/
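A minimal sketch of the copy step; the path to your system's namd subfolder is a placeholder for wherever you uncompressed your CHARMM-GUI archive:

```shell
# Change into the namd subfolder of the system you built
# ('mysystem' is a placeholder), then copy the shared inputs.
cd /scratch/zt1/project/energybio/shared/username/mysystem/namd
cp ~jbklauda/sim-inputs/namd/* .
```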

The first-dyn.* files, made for systems built with CHARMM-GUI, are good examples of setting up a set of membrane simulations; run them in the namd subfolder. They will run steps 6.1 through 6.6.

Changes in first-dyn.csh:

You will need to change the job name to something relevant to you. Zaratan has 128-core nodes. For small bilayers (<30,000 atoms), using the whole node is inefficient. Beyond these small systems, you will need to do some benchmarking with a varied number of cores to determine the optimal count (talk with Dr. Klauda for details). However, for small bilayers you should use:

 charmrun namd2 +p 48 +ppn 48 +setcpuaffinity 
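For context, a hypothetical fragment of what the edited first-dyn.csh might contain for a small bilayer; the directive values and input/output file names are assumptions, so keep the rest of the file as distributed:

```shell
#!/bin/csh
# Hypothetical first-dyn.csh fragment for a small bilayer (<30,000 atoms).
#SBATCH --job-name=mybilayer-first-dyn   # change to something relevant to you
#SBATCH --nodes=1
#SBATCH --cpus-per-task=48               # use 128 to take the whole node

# Run one NAMD step (input name is a placeholder for the CHARMM-GUI step file):
charmrun namd2 +p 48 +ppn 48 +setcpuaffinity step6.1.inp > step6.1.out
```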

If you want to use all the cores on the node, replace 48 with 128 in the .csh file (both in '#SBATCH --cpus-per-task 128' and in the charmrun line). If you need to use more than one node, the setup depends on the resource. Zaratan, with its AMD EPYC 7763 chips, works best when you put 8 processes on a node, one per CCD. Each CCD has 16 threads, so the -n value you pass to mpirun should equal (# of processes per node) * (# of nodes), with ++ppn 16. Also change the -N and -n values in the #SBATCH lines to the number of nodes you are using. So, for example, if you are using 2 nodes, the line above changes to:

 mpirun --mca opal_warn_on_missing_libcuda 0 -n 16 namd2 ++ppn 16   

So charmrun is replaced by mpirun, and the only value you change is '-n #', with # equal to 8 times the number of nodes.
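As a quick sanity check on the arithmetic above, this minimal sketch computes the -n value for a given node count (the node count here is just an example):

```shell
# Compute the mpirun -n value for Zaratan's AMD EPYC 7763 nodes:
# 8 processes per node (one per CCD), each with 16 threads (++ppn 16).
nodes=2                        # example: number of nodes requested
procs_per_node=8               # one process per CCD
n=$((nodes * procs_per_node))
echo "mpirun -n $n namd2 ++ppn 16"
```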

Copy these first-dyn.* files directly to your namd subfolder. Then all you need to do is type:

./first-dyn.scr 

And the job should be queued. You can check its status with:

 squeue -u username 
membrane_simulation.txt · Last modified: 2023/08/17 16:08 by admin