diff --git a/docs/Software/Available_Applications/AlphaFold.md b/docs/Software/Available_Applications/AlphaFold.md index 9d194cf0b..a7742af3d 100644 --- a/docs/Software/Available_Applications/AlphaFold.md +++ b/docs/Software/Available_Applications/AlphaFold.md @@ -112,7 +112,7 @@ Input *fasta* used in following example  is 3RGK #SBATCH --job-name af-2.3.2-monomer #SBATCH --mem 24G #SBATCH --cpus-per-task 8 -#SBATCH --gpus-per-node P100:1 +#SBATCH --gpus-per-node A100:1 #SBATCH --time 02:00:00 #SBATCH --output %j.out @@ -157,7 +157,7 @@ MAAHKGAEHHHKAAEHHEQAAKHHHAAAEHHEKGEHEQAAHHADTAYAHHKHAEEHAAQAAKHDAEHHAPKPH #SBATCH --job-name af-2.3.2-multimer #SBATCH --mem 30G #SBATCH --cpus-per-task 4 -#SBATCH --gpus-per-node P100:1 +#SBATCH --gpus-per-node A100:1 #SBATCH --time 01:45:00 #SBATCH --output slurmout.%j.out @@ -207,7 +207,7 @@ modifications. Image (.*simg*) and the corresponding definition file #SBATCH --job-name alphafold2_monomer_example #SBATCH --mem 30G #SBATCH --cpus-per-task 6 -#SBATCH --gpus-per-node P100:1 +#SBATCH --gpus-per-node A100:1 #SBATCH --time 02:00:00 #SBATCH --output slurmout.%j.out @@ -246,7 +246,7 @@ singularity exec --nv /opt/nesi/containers/AlphaFold/alphafold_2.2.0.simg python #SBATCH --job-name alphafold2_monomer_example #SBATCH --mem 30G #SBATCH --cpus-per-task 6 -#SBATCH --gpus-per-node P100:1 +#SBATCH --gpus-per-node A100:1 #SBATCH --time 02:00:00 #SBATCH --output slurmout.%j.out @@ -282,11 +282,8 @@ singularity exec --nv /opt/nesi/containers/AlphaFold/alphafold_2.2.0.simg python 1. Values for `--mem` , `--cpus-per-task` and `--time` Slurm variables are for *3RGK.fasta*. Adjust them accordingly -2. We have tested this on both P100 and A100 GPUs where the runtimes - were identical. Therefore, the above example was set to former - via `P100:1` -3. The `--nv` flag enables GPU support. -4. `--pwd /app/alphafold` is to workaround this [existing +2. The `--nv` flag enables GPU support. +3. 
`--pwd /app/alphafold` works around this [existing issue](https://github.com/deepmind/alphafold/issues/32) ### AlphaFold2 : Initial Release ( this version does not support `multimer`) diff --git a/docs/Software/Available_Applications/ont-guppy-gpu.md b/docs/Software/Available_Applications/ont-guppy-gpu.md index 8d5019ab3..3322b66ce 100644 --- a/docs/Software/Available_Applications/ont-guppy-gpu.md +++ b/docs/Software/Available_Applications/ont-guppy-gpu.md @@ -41,12 +41,8 @@ https://community.nanoporetech.com/ ### Example Slurm script - Following Slurm script is a template to run Basecalling on NVIDIA - P100 GPUs.( We do not recommend running Guppy jobs on CPUs ) + A100 GPUs. (We do not recommend running Guppy jobs on CPUs.) - `--device auto` will automatically pick up the GPU over CPU -- Also,  NeSI Mahuika cluster can provide A100 GPUs  which can be 5-6 - times faster than P100 GPUs for Guppy Basecalling with  version. 5 - and above. This can be requested with - `#SBATCH --gpus-per-node A100:1` variable - Config files are stored in ***/opt/nesi/CS400\_centos7\_bdw/ont-guppy-gpu/(version)/data/ *** with read permissions to all researchers (replace ***(version)*** diff --git a/docs/Software/Containers/NVIDIA_GPU_Containers.md b/docs/Software/Containers/NVIDIA_GPU_Containers.md index 5e771ba22..80746a81d 100644 --- a/docs/Software/Containers/NVIDIA_GPU_Containers.md +++ b/docs/Software/Containers/NVIDIA_GPU_Containers.md @@ -61,7 +61,7 @@ running the NAMD image on NeSI, based on the NVIDIA instructions #SBATCH --time=00:10:00 #SBATCH --ntasks=1 #SBATCH --cpus-per-task=8 - #SBATCH --gpus-per-node P100:1 + #SBATCH --gpus-per-node A100:1 #SBATCH --mem=1G module purge diff --git a/docs/Software/Parallel_Computing/Parallel_Computing.md b/docs/Software/Parallel_Computing/Parallel_Computing.md index 8aee09aab..2a5c53275 100644 --- a/docs/Software/Parallel_Computing/Parallel_Computing.md +++ b/docs/Software/Parallel_Computing/Parallel_Computing.md @@ -176,7 +176,7 @@
GPUs excel at large-scale parallel operations on matrices, making them ideal for #SBATCH --account nesi99991 #SBATCH --output %x.out #SBATCH --mem-per-cpu 2G -#SBATCH --gpus-per-node P100:1 +#SBATCH --gpus-per-node A100:1 module load CUDA nvidia-smi diff --git a/docs/Tutorials/Introduction_To_HPC/Parallel.md b/docs/Tutorials/Introduction_To_HPC/Parallel.md index eef8b3f1c..7818c4c0f 100644 --- a/docs/Tutorials/Introduction_To_HPC/Parallel.md +++ b/docs/Tutorials/Introduction_To_HPC/Parallel.md @@ -78,7 +78,7 @@ Create a new script called `gpu-job.sl` #SBATCH --account {{config.extra.project_code}} #SBATCH --output %x.out #SBATCH --mem-per-cpu 2G -#SBATCH --gpu-per-node P100:1 +#SBATCH --gpus-per-node A100:1 module load CUDA nvidia-smi @@ -97,26 +97,25 @@ then submit with ``` ```out - Tue Mar 12 19:40:51 2024 - +-----------------------------------------------------------------------------+ - | NVIDIA-SMI 525.85.12 Driver Version: 525.85.12 CUDA Version: 12.0 | - |-------------------------------+----------------------+----------------------+ - | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | - | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | - | | | MIG M. | - |===============================+======================+======================| - | 0 Tesla P100-PCIE...
On | 00000000:05:00.0 Off | 0 | - | N/A 28C P0 24W / 250W | 0MiB / 12288MiB | 0% Default | - | | | N/A | - +-------------------------------+----------------------+----------------------+ - - +-----------------------------------------------------------------------------+ - | Processes: | - | GPU GI CI PID Type Process name GPU Memory | - | ID ID Usage | - |=============================================================================| - | No running processes found | - +-----------------------------------------------------------------------------+ +Thu Mar 26 12:47:29 2026 ++-----------------------------------------------------------------------------------------+ +| NVIDIA-SMI 580.105.08 Driver Version: 580.105.08 CUDA Version: 13.0 | ++-----------------------------------------+------------------------+----------------------+ +| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC | +| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. | +| | | MIG M. | +|=========================================+========================+======================| +| 0 NVIDIA A100-SXM4-80GB Off | 00000000:C7:00.0 Off | 0 | +| N/A 31C P0 61W / 400W | 0MiB / 81920MiB | 0% Default | +| | | Disabled | ++-----------------------------------------+------------------------+----------------------+ ++-----------------------------------------------------------------------------------------+ +| Processes: | +| GPU GI CI PID Type Process name GPU Memory | +| ID ID Usage | +|=========================================================================================| +| No running processes found | ++-----------------------------------------------------------------------------------------+ ``` ### Job arrays