membrane_simulation
These are based on running on Zaratan. If running on another resource, see sample inputs from Dr. Klauda or others.
Once you've built the bilayer in CHARMM-GUI per Dr. Klauda's instructions and the setup has been approved, all you need to do is copy the files in the following directory on Zaratan to the namd subfolder of the system you built:
//~jbklauda/
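As a concrete sketch (the full source path is abbreviated above, so both directories below are placeholders):

<code csh>
# Placeholder paths: substitute the actual source directory and your own system folder.
cp ~jbklauda/<source-directory>/* ~/my_system/namd/
</code>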
The first-dyn.* files are good examples of setting up a set of membrane simulations, which you should run in the namd subfolder that was made by CHARMM-GUI. This will run steps 6.1 to 6.6.
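In essence, those scripts step through the six equilibration inputs in order. A minimal sketch, assuming CHARMM-GUI's standard step6.X_equilibration.inp naming (the actual first-dyn.* scripts also handle the batch-queue side; core counts are illustrative, see the .csh discussion below):

<code csh>
# Run the six CHARMM-GUI equilibration stages in sequence.
foreach step (1 2 3 4 5 6)
    charmrun namd2 +p 48 +ppn 48 +setcpuaffinity step6.${step}_equilibration.inp > step6.${step}_equilibration.out
end
</code>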
Changes in the //first-dyn.*// files:
You will need to change the job-name to something relevant to you. On Zaratan, there are 128-core nodes. For small bilayers (<30,000 atoms), using the whole node is inefficient. Beyond these small systems, you will need to do some benchmarking with a varied number of cores to determine the best performance. For example:
<code csh> charmrun namd2 +p 48 +ppn 48 +setcpuaffinity </code>
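For context, a minimal sketch of how these pieces sit in the batch script (the SLURM directives are standard; the job name and values are illustrative):

<code csh>
#!/bin/csh
#SBATCH --job-name my_bilayer        # change to something relevant to you
#SBATCH --nodes 1
#SBATCH --ntasks 1
#SBATCH --cpus-per-task 48           # keep in sync with +p/+ppn below

charmrun namd2 +p 48 +ppn 48 +setcpuaffinity step6.1_equilibration.inp > step6.1_equilibration.out
</code>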
If you want to use all the cores on the node, replace 48 with 128 in the .csh file (in #SBATCH --cpus-per-task 128 and in the charmrun line). If you need to use more than one node, this depends on the resource. Zaratan, with its AMD EPYC 7763 chips, works best when you put 8 processes on a node, one per each of its 8 CCDs. Each CCD has 16 threads, so you would want to set -n in the mpirun line equal to (# of processes per node) * (# of nodes). Plus change the run line to, for example (2 nodes * 8 processes = 16):
<code csh> mpirun -n 16 namd2 +auto-provision </code>
So the charmrun is removed, and you only change '-n #', with # equal to (# of processes per node) * (# of nodes) as described above; a fuller sketch follows.
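Putting it together, a sketch of a two-node script under the layout described above (8 processes per node, one per CCD; values are illustrative, and the exact NAMD flags depend on the build installed on Zaratan):

<code csh>
#!/bin/csh
#SBATCH --job-name my_bilayer
#SBATCH --nodes 2                  # 2 nodes
#SBATCH --ntasks-per-node 8        # one process per CCD
#SBATCH --cpus-per-task 16         # 16 threads per CCD

# -n = (# of processes per node) * (# of nodes) = 8 * 2 = 16
mpirun -n 16 namd2 +auto-provision step6.1_equilibration.inp > step6.1_equilibration.out
</code>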