membrane_simulation

<code bash> charmrun namd2 +p 48 +ppn 48 +setcpuaffinity </code>
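For reference, the single-node command above would normally sit inside a .csh batch script along these lines. This is only a sketch: the time limit, input file name, and output file name are placeholders, not values from this page.

```shell
#!/bin/csh
# Hypothetical single-node NAMD job script (illustrative names and limits).
#SBATCH -N 1                 # one node
#SBATCH -n 1                 # one task; charmrun spawns the worker threads
#SBATCH --cpus-per-task 48   # matches +p 48 / +ppn 48 below
#SBATCH -t 24:00:00          # placeholder wall-time limit

charmrun namd2 +p 48 +ppn 48 +setcpuaffinity input.namd > output.log
```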
  
If you want to use all the cores on the node, replace 48 with 128 in the .csh file (both in #SBATCH --cpus-per-task 128 and in the charmrun line). If you need more than one node, the best layout depends on the resource. Zaratan, with its AMD EPYC 7763 chips, works best when you put 8 processes on each node, one per CCD. Each CCD has 16 threads, so set -n in the mpirun line to (processes per node) * (number of nodes). Also update the #SBATCH lines: set -N to the number of nodes and -n to the total number of processes. So, for example, if you are using 2 nodes, the above part of the script changes to:
  
<code bash> mpirun --mca opal_warn_on_missing_libcuda 0 -n 16 namd2 ++ppn 16 </code>
  
So charmrun is removed, and the only value you change is '-n #', where # is the number of processes per node times the number of nodes (16 in the 2-node example above).
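Putting the pieces together, a two-node job script would look something like the sketch below. It assumes the 8-processes-per-node layout described above; the time limit and file names are placeholders, not values from this page.

```shell
#!/bin/csh
# Hypothetical two-node NAMD job script (illustrative names and limits).
#SBATCH -N 2                 # 2 nodes
#SBATCH -n 16                # 16 total processes: 8 per node, one per CCD
#SBATCH --cpus-per-task 16   # one CCD's worth of threads per process
#SBATCH -t 24:00:00          # placeholder wall-time limit

# charmrun is not used in the multi-node case; mpirun launches namd2 directly.
mpirun --mca opal_warn_on_missing_libcuda 0 -n 16 namd2 ++ppn 16 input.namd > output.log
```

Note that -n here is 8 processes/node * 2 nodes = 16, and 16 processes * 16 CPUs per task = 256 cores, i.e. both 128-core nodes fully occupied.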
membrane_simulation.txt · Last modified: 2023/08/17 16:08 by admin