Commit d0e193d

remove mention of P100
1 parent 6a1c12b commit d0e193d

5 files changed: +11 −14 lines

docs/Software/Available_Applications/AlphaFold.md (6 additions, 9 deletions)

@@ -112,7 +112,7 @@ Input *fasta* used in following example is 3RGK
 #SBATCH --job-name af-2.3.2-monomer
 #SBATCH --mem 24G
 #SBATCH --cpus-per-task 8
-#SBATCH --gpus-per-node P100:1
+#SBATCH --gpus-per-node A100:1
 #SBATCH --time 02:00:00
 #SBATCH --output %j.out
 
@@ -157,7 +157,7 @@ MAAHKGAEHHHKAAEHHEQAAKHHHAAAEHHEKGEHEQAAHHADTAYAHHKHAEEHAAQAAKHDAEHHAPKPH
 #SBATCH --job-name af-2.3.2-multimer
 #SBATCH --mem 30G
 #SBATCH --cpus-per-task 4
-#SBATCH --gpus-per-node P100:1
+#SBATCH --gpus-per-node A100:1
 #SBATCH --time 01:45:00
 #SBATCH --output slurmout.%j.out
 
@@ -207,7 +207,7 @@ modifications. Image (.*simg*) and the corresponding definition file
 #SBATCH --job-name alphafold2_monomer_example
 #SBATCH --mem 30G
 #SBATCH --cpus-per-task 6
-#SBATCH --gpus-per-node P100:1
+#SBATCH --gpus-per-node A100:1
 #SBATCH --time 02:00:00
 #SBATCH --output slurmout.%j.out
 
@@ -246,7 +246,7 @@ singularity exec --nv /opt/nesi/containers/AlphaFold/alphafold_2.2.0.simg python
 #SBATCH --job-name alphafold2_monomer_example
 #SBATCH --mem 30G
 #SBATCH --cpus-per-task 6
-#SBATCH --gpus-per-node P100:1
+#SBATCH --gpus-per-node A100:1
 #SBATCH --time 02:00:00
 #SBATCH --output slurmout.%j.out
 
@@ -282,11 +282,8 @@ singularity exec --nv /opt/nesi/containers/AlphaFold/alphafold_2.2.0.simg python
 
 1. Values for `--mem`, `--cpus-per-task` and `--time` Slurm variables
    are for *3RGK.fasta*. Adjust them accordingly
-2. We have tested this on both P100 and A100 GPUs where the runtimes
-   were identical. Therefore, the above example was set to former
-   via `P100:1`
-3. The `--nv` flag enables GPU support.
-4. `--pwd /app/alphafold` is to workaround this [existing
+2. The `--nv` flag enables GPU support.
+3. `--pwd /app/alphafold` is to workaround this [existing
    issue](https://github.com/deepmind/alphafold/issues/32)
 
 ### AlphaFold2 : Initial Release ( this version does not support `multimer`)
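For reference, the directives touched by this commit assemble into a complete submission script. The following is a minimal sketch of the monomer example as it reads after the change; the container path and the `singularity exec --nv --pwd /app/alphafold` invocation come from the diff context, while the `run_alphafold.py` arguments beyond the fasta path are placeholders, not verbatim from the docs:

```shell
#!/bin/bash -e
#SBATCH --job-name af-2.3.2-monomer
#SBATCH --mem 24G
#SBATCH --cpus-per-task 8
#SBATCH --gpus-per-node A100:1
#SBATCH --time 02:00:00
#SBATCH --output %j.out

# --nv enables GPU support inside the container;
# --pwd /app/alphafold works around https://github.com/deepmind/alphafold/issues/32
singularity exec --nv --pwd /app/alphafold \
    /opt/nesi/containers/AlphaFold/alphafold_2.2.0.simg \
    python run_alphafold.py --fasta_paths=3RGK.fasta  # remaining flags as per the AlphaFold docs
```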

docs/Software/Available_Applications/ont-guppy-gpu.md (2 additions, 2 deletions)

@@ -41,10 +41,10 @@ https://community.nanoporetech.com/
 ### Example Slurm script
 
 - Following Slurm script is a template to run Basecalling on NVIDIA
-  P100 GPUs.( We do not recommend running Guppy jobs on CPUs )
+  A100 GPUs.( We do not recommend running Guppy jobs on CPUs )
 - `--device auto` will automatically pick up the GPU over CPU
 - Also, NeSI Mahuika cluster can provide A100 GPUs which can be 5-6
-  times faster than P100 GPUs for Guppy Basecalling with version. 5
+  times faster than A100 GPUs for Guppy Basecalling with version. 5
   and above. This can be requested with
   `#SBATCH --gpus-per-node A100:1` variable
 - Config files are stored in
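The A100 request described in the bullets above can be sketched as a job script. Resource values, the module name, the input/output paths, and the config file below are illustrative assumptions, not values from the repository:

```shell
#!/bin/bash -e
#SBATCH --job-name guppy-basecall          # assumed name
#SBATCH --gpus-per-node A100:1             # A100 request, as described above
#SBATCH --mem 6G
#SBATCH --cpus-per-task 4
#SBATCH --time 02:00:00

module load ont-guppy-gpu                  # module name is an assumption

# --device auto picks up the GPU over the CPU, per the bullet above
guppy_basecaller --device auto \
    --input_path fast5/ --save_path basecalled/ \
    --config dna_r9.4.1_450bps_hac.cfg     # example config, adjust to your flow cell
```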

docs/Software/Containers/NVIDIA_GPU_Containers.md (1 addition, 1 deletion)

@@ -61,7 +61,7 @@ running the NAMD image on NeSI, based on the NVIDIA instructions
 #SBATCH --time=00:10:00
 #SBATCH --ntasks=1
 #SBATCH --cpus-per-task=8
-#SBATCH --gpus-per-node P100:1
+#SBATCH --gpus-per-node A100:1
 #SBATCH --mem=1G
 
 module purge

docs/Software/Parallel_Computing/Parallel_Computing.md (1 addition, 1 deletion)

@@ -176,7 +176,7 @@ GPUs excel at large-scale parallel operations on matrices, making them ideal for
 #SBATCH --account nesi99991
 #SBATCH --output %x.out
 #SBATCH --mem-per-cpu 2G
-#SBATCH --gpus-per-node P100:1
+#SBATCH --gpus-per-node A100:1
 
 module load CUDA
 nvidia-smi
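The hunk above belongs to a small GPU test job; filled out with the unchanged lines from the diff context, it would look roughly like this (the account code, output pattern, and `module load CUDA` / `nvidia-smi` lines are from the diff; the job name is an assumption):

```shell
#!/bin/bash -e
#SBATCH --job-name gpu-test        # assumed; %x in --output expands to this name
#SBATCH --account nesi99991
#SBATCH --output %x.out
#SBATCH --mem-per-cpu 2G
#SBATCH --gpus-per-node A100:1

module load CUDA
nvidia-smi                          # reports the driver and the allocated A100
```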

docs/Tutorials/Introduction_To_HPC/Parallel.md (1 addition, 1 deletion)

@@ -78,7 +78,7 @@ Create a new script called `gpu-job.sl`
 #SBATCH --account {{config.extra.project_code}}
 #SBATCH --output %x.out
 #SBATCH --mem-per-cpu 2G
-#SBATCH --gpu-per-node P100:1
+#SBATCH --gpu-per-node A100:1
 
 module load CUDA
 nvidia-smi
