Commit 23bb8f6

P100 -> A100
1 parent 8a2d239 commit 23bb8f6

File tree

5 files changed: +29 −37 lines


docs/Software/Available_Applications/AlphaFold.md

Lines changed: 6 additions & 9 deletions

@@ -112,7 +112,7 @@ Input *fasta* used in following example is 3RGK
 #SBATCH --job-name af-2.3.2-monomer
 #SBATCH --mem 24G
 #SBATCH --cpus-per-task 8
-#SBATCH --gpus-per-node 1
+#SBATCH --gpus-per-node A100:1
 #SBATCH --time 02:00:00
 #SBATCH --output %j.out
 
@@ -157,7 +157,7 @@ MAAHKGAEHHHKAAEHHEQAAKHHHAAAEHHEKGEHEQAAHHADTAYAHHKHAEEHAAQAAKHDAEHHAPKPH
 #SBATCH --job-name af-2.3.2-multimer
 #SBATCH --mem 30G
 #SBATCH --cpus-per-task 4
-#SBATCH --gpus-per-node 1
+#SBATCH --gpus-per-node A100:1
 #SBATCH --time 01:45:00
 #SBATCH --output slurmout.%j.out
 
@@ -207,7 +207,7 @@ modifications. Image (.*simg*) and the corresponding definition file
 #SBATCH --job-name alphafold2_monomer_example
 #SBATCH --mem 30G
 #SBATCH --cpus-per-task 6
-#SBATCH --gpus-per-node 1
+#SBATCH --gpus-per-node A100:1
 #SBATCH --time 02:00:00
 #SBATCH --output slurmout.%j.out
 
@@ -246,7 +246,7 @@ singularity exec --nv /opt/nesi/containers/AlphaFold/alphafold_2.2.0.simg python
 #SBATCH --job-name alphafold2_monomer_example
 #SBATCH --mem 30G
 #SBATCH --cpus-per-task 6
-#SBATCH --gpus-per-node 1
+#SBATCH --gpus-per-node A100:1
 #SBATCH --time 02:00:00
 #SBATCH --output slurmout.%j.out
 
@@ -282,11 +282,8 @@ singularity exec --nv /opt/nesi/containers/AlphaFold/alphafold_2.2.0.simg python
 
 1. Values for `--mem` , `--cpus-per-task` and `--time` Slurm variables
    are for *3RGK.fasta*. Adjust them accordingly
-2. We have tested this on both P100 and A100 GPUs where the runtimes
-   were identical. Therefore, the above example was set to former
-   via `1`
-3. The `--nv` flag enables GPU support.
-4. `--pwd /app/alphafold` is to workaround this [existing
+2. The `--nv` flag enables GPU support.
+3. `--pwd /app/alphafold` is to workaround this [existing
   issue](https://github.com/deepmind/alphafold/issues/32)
 
 ### AlphaFold2 : Initial Release ( this version does not support `multimer`)
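The updated request (`--gpus-per-node A100:1`) can be sanity-checked without submitting a job, for example by writing the header to a throwaway file. A minimal sketch; the file name is illustrative and the resource values follow the 3RGK monomer example above:

```shell
#!/bin/sh
# Write the updated monomer job header to a file so the GPU request
# can be inspected without submitting. File name is illustrative.
cat > af-job.sl <<'EOF'
#!/bin/bash -e
#SBATCH --job-name af-2.3.2-monomer
#SBATCH --mem 24G
#SBATCH --cpus-per-task 8
#SBATCH --gpus-per-node A100:1
#SBATCH --time 02:00:00
#SBATCH --output %j.out
EOF
# grep exits non-zero if the GPU request line is missing
grep -- '--gpus-per-node' af-job.sl
```

Because `grep` fails when the line is absent, the same check could gate a simple lint over the doc's example scripts.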

docs/Software/Available_Applications/ont-guppy-gpu.md

Lines changed: 1 addition & 5 deletions

@@ -41,12 +41,8 @@ https://community.nanoporetech.com/
 ### Example Slurm script
 
 - Following Slurm script is a template to run Basecalling on NVIDIA
-  P100 GPUs.( We do not recommend running Guppy jobs on CPUs )
+  A100 GPUs. (We do not recommend running Guppy jobs on CPUs.)
 - `--device auto` will automatically pick up the GPU over CPU
-- Also, NeSI Mahuika cluster can provide A100 GPUs which can be 5-6
-  times faster than P100 GPUs for Guppy Basecalling with version 5
-  and above. This can be requested with
-  `#SBATCH --gpus-per-node A100:1` variable
 - Config files are stored in
   ***/opt/nesi/CS400\_centos7\_bdw/ont-guppy-gpu/(version)/data/ ***
   with read permissions to all researchers (replace ***(version)***
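A sketch combining the A100 request with the `--device auto` flag mentioned above. The job name, resource values, input/output paths, and basecalling config below are hypothetical placeholders, and `-i`/`-s`/`-c` are assumed Guppy shorthand flags; only `--device auto` and the module path come from the page itself:

```shell
#!/bin/sh
# Generate a Guppy job script requesting one A100. Paths, resource
# values, and the basecalling config are hypothetical placeholders.
cat > guppy-job.sl <<'EOF'
#!/bin/bash -e
#SBATCH --job-name guppy-basecall
#SBATCH --gpus-per-node A100:1
#SBATCH --mem 6G
#SBATCH --cpus-per-task 4
#SBATCH --time 01:00:00

module load ont-guppy-gpu
guppy_basecaller --device auto -i input_reads/ -s basecalled/ -c dna_r9.4.1_450bps_hac.cfg
EOF
# Confirm both the GPU request and the device flag are present
grep -E -- 'A100:1|--device auto' guppy-job.sl
```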

docs/Software/Containers/NVIDIA_GPU_Containers.md

Lines changed: 1 addition & 1 deletion

@@ -61,7 +61,7 @@ running the NAMD image on NeSI, based on the NVIDIA instructions
 #SBATCH --time=00:10:00
 #SBATCH --ntasks=1
 #SBATCH --cpus-per-task=8
-#SBATCH --gpus-per-node P100:1
+#SBATCH --gpus-per-node A100:1
 #SBATCH --mem=1G
 
 module purge

docs/Software/Parallel_Computing/Parallel_Computing.md

Lines changed: 1 addition & 1 deletion

@@ -176,7 +176,7 @@ GPUs excel at large-scale parallel operations on matrices, making them ideal for
 #SBATCH --account nesi99991
 #SBATCH --output %x.out
 #SBATCH --mem-per-cpu 2G
-#SBATCH --gpus-per-node P100:1
+#SBATCH --gpus-per-node A100:1
 
 module load CUDA
 nvidia-smi
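Existing job scripts that still request a P100 can be migrated mechanically, mirroring this commit. A sketch; the script name is illustrative:

```shell
#!/bin/sh
# Rewrite a P100 GPU request to an A100 request in an existing job
# script, mirroring this commit. Script name is illustrative.
printf '#SBATCH --gpus-per-node P100:1\n' > old-job.sl
sed -i 's/--gpus-per-node P100:1/--gpus-per-node A100:1/' old-job.sl
cat old-job.sl
# prints: #SBATCH --gpus-per-node A100:1
```

Running `sed` without `-i` first prints the rewritten script to stdout, which is a safe way to preview the change.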

docs/Tutorials/Introduction_To_HPC/Parallel.md

Lines changed: 20 additions & 21 deletions

@@ -78,7 +78,7 @@ Create a new script called `gpu-job.sl`
 #SBATCH --account {{config.extra.project_code}}
 #SBATCH --output %x.out
 #SBATCH --mem-per-cpu 2G
-#SBATCH --gpu-per-node P100:1
+#SBATCH --gpu-per-node A100:1
 
 module load CUDA
 nvidia-smi
@@ -97,26 +97,25 @@ then submit with
 ```
 
 ```out
-Tue Mar 12 19:40:51 2024
-+-----------------------------------------------------------------------------+
-| NVIDIA-SMI 525.85.12    Driver Version: 525.85.12    CUDA Version: 12.0     |
-|-------------------------------+----------------------+----------------------+
-| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
-| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
-|                               |                      |               MIG M. |
-|===============================+======================+======================|
-|   0  Tesla P100-PCIE...  On   | 00000000:05:00.0 Off |                    0 |
-| N/A   28C    P0    24W / 250W |      0MiB / 12288MiB |      0%      Default |
-|                               |                      |                  N/A |
-+-------------------------------+----------------------+----------------------+
-
-+-----------------------------------------------------------------------------+
-| Processes:                                                                  |
-|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
-|        ID   ID                                                   Usage      |
-|=============================================================================|
-|  No running processes found                                                 |
-+-----------------------------------------------------------------------------+
+Thu Mar 26 12:47:29 2026
++-----------------------------------------------------------------------------------------+
+| NVIDIA-SMI 580.105.08             Driver Version: 580.105.08     CUDA Version: 13.0     |
++-----------------------------------------+------------------------+----------------------+
+| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
+| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
+|                                         |                        |               MIG M. |
+|=========================================+========================+======================|
+|   0  NVIDIA A100-SXM4-80GB          Off |   00000000:C7:00.0 Off |                    0 |
+| N/A   31C    P0             61W /  400W |       0MiB /  81920MiB |      0%      Default |
+|                                         |                        |             Disabled |
++-----------------------------------------+------------------------+----------------------+
++-----------------------------------------------------------------------------------------+
+| Processes:                                                                              |
+|  GPU   GI   CI              PID   Type   Process name                        GPU Memory |
+|        ID   ID                                                               Usage      |
+|=========================================================================================|
+|  No running processes found                                                             |
++-----------------------------------------------------------------------------------------+
 ```
 
 ### Job arrays
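The refreshed `nvidia-smi` output reports the allocated GPU model on the first device row. A small sketch that pulls the model out of captured output; the sample row is embedded here for illustration, and the `$3, $4` field positions assume this table layout:

```shell
#!/bin/sh
# Extract the GPU model from saved nvidia-smi output. The sample row
# below is taken from the example output; in a real job you would
# capture it with: nvidia-smi > smi.txt
cat > smi.txt <<'EOF'
|   0  NVIDIA A100-SXM4-80GB          Off |   00000000:C7:00.0 Off |                    0 |
EOF
# The device row starts "| <index> <vendor> <model> ...", so fields 3
# and 4 are the vendor and model under whitespace splitting.
awk '/A100|P100|Tesla/ { print $3, $4 }' smi.txt
# prints: NVIDIA A100-SXM4-80GB
```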
