Merged
15 changes: 6 additions & 9 deletions docs/Software/Available_Applications/AlphaFold.md
@@ -112,7 +112,7 @@ Input *fasta* used in following example is 3RGK
 #SBATCH --job-name af-2.3.2-monomer
 #SBATCH --mem 24G
 #SBATCH --cpus-per-task 8
-#SBATCH --gpus-per-node P100:1
+#SBATCH --gpus-per-node A100:1
 #SBATCH --time 02:00:00
 #SBATCH --output %j.out
@@ -157,7 +157,7 @@ MAAHKGAEHHHKAAEHHEQAAKHHHAAAEHHEKGEHEQAAHHADTAYAHHKHAEEHAAQAAKHDAEHHAPKPH
 #SBATCH --job-name af-2.3.2-multimer
 #SBATCH --mem 30G
 #SBATCH --cpus-per-task 4
-#SBATCH --gpus-per-node P100:1
+#SBATCH --gpus-per-node A100:1
 #SBATCH --time 01:45:00
 #SBATCH --output slurmout.%j.out
@@ -207,7 +207,7 @@ modifications. Image (*.simg*) and the corresponding definition file
 #SBATCH --job-name alphafold2_monomer_example
 #SBATCH --mem 30G
 #SBATCH --cpus-per-task 6
-#SBATCH --gpus-per-node P100:1
+#SBATCH --gpus-per-node A100:1
 #SBATCH --time 02:00:00
 #SBATCH --output slurmout.%j.out
@@ -246,7 +246,7 @@ singularity exec --nv /opt/nesi/containers/AlphaFold/alphafold_2.2.0.simg python
 #SBATCH --job-name alphafold2_monomer_example
 #SBATCH --mem 30G
 #SBATCH --cpus-per-task 6
-#SBATCH --gpus-per-node P100:1
+#SBATCH --gpus-per-node A100:1
 #SBATCH --time 02:00:00
 #SBATCH --output slurmout.%j.out
@@ -282,11 +282,8 @@ singularity exec --nv /opt/nesi/containers/AlphaFold/alphafold_2.2.0.simg python

 1. Values for `--mem`, `--cpus-per-task` and `--time` Slurm variables
    are for *3RGK.fasta*. Adjust them accordingly
-2. We have tested this on both P100 and A100 GPUs where the runtimes
-   were identical. Therefore, the above example was set to former
-   via `P100:1`
-3. The `--nv` flag enables GPU support.
-4. `--pwd /app/alphafold` is to workaround this [existing
+2. The `--nv` flag enables GPU support.
+3. `--pwd /app/alphafold` is to workaround this [existing
    issue](https://github.com/deepmind/alphafold/issues/32)

 ### AlphaFold2 : Initial Release ( this version does not support `multimer`)
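Combining the updated monomer header above with the container invocation shown elsewhere on the same page, a full job script would look roughly like the sketch below. The shebang is an assumption, and the trailing `run_alphafold.py` arguments are elided here because the page truncates that command:

```shell
#!/bin/bash -e
# Sketch only: monomer job header after the P100 -> A100 change.
#SBATCH --job-name af-2.3.2-monomer
#SBATCH --mem 24G
#SBATCH --cpus-per-task 8
#SBATCH --gpus-per-node A100:1
#SBATCH --time 02:00:00
#SBATCH --output %j.out

# --nv enables GPU support inside the container; --pwd /app/alphafold
# works around https://github.com/deepmind/alphafold/issues/32
singularity exec --nv --pwd /app/alphafold \
    /opt/nesi/containers/AlphaFold/alphafold_2.2.0.simg \
    python ...  # remaining run_alphafold arguments as on the docs page
```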
6 changes: 1 addition & 5 deletions docs/Software/Available_Applications/ont-guppy-gpu.md
@@ -41,12 +41,8 @@ https://community.nanoporetech.com/
 ### Example Slurm script

 - Following Slurm script is a template to run Basecalling on NVIDIA
-  P100 GPUs.( We do not recommend running Guppy jobs on CPUs )
+  A100 GPUs. (We do not recommend running Guppy jobs on CPUs.)
 - `--device auto` will automatically pick up the GPU over CPU
-- Also, NeSI Mahuika cluster can provide A100 GPUs which can be 5-6
-  times faster than P100 GPUs for Guppy Basecalling with version 5
-  and above. This can be requested with
-  `#SBATCH --gpus-per-node A100:1` variable
 - Config files are stored in
   ***/opt/nesi/CS400\_centos7\_bdw/ont-guppy-gpu/(version)/data/***
   with read permissions to all researchers (replace ***(version)***
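A minimal Slurm script matching the updated bullet points might look like the following. The module version, resource values, and input/output paths are assumptions for illustration, not taken from this page; only the config-file location and `--device auto` come from the text above:

```shell
#!/bin/bash -e
# Sketch only: GPU basecalling job per the updated guidance (A100, not CPU).
#SBATCH --job-name guppy-basecall
#SBATCH --gpus-per-node A100:1
#SBATCH --mem 8G
#SBATCH --cpus-per-task 4
#SBATCH --time 02:00:00

module load ont-guppy-gpu/6.4.6   # version is an assumption; check what is installed

# --device auto picks up the GPU over the CPU, as noted above.
# The config path follows the documented location, with (version) filled in.
guppy_basecaller \
    --input_path fast5/ \
    --save_path basecalled/ \
    --config /opt/nesi/CS400_centos7_bdw/ont-guppy-gpu/6.4.6/data/dna_r9.4.1_450bps_hac.cfg \
    --device auto
```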
2 changes: 1 addition & 1 deletion docs/Software/Containers/NVIDIA_GPU_Containers.md
@@ -61,7 +61,7 @@ running the NAMD image on NeSI, based on the NVIDIA instructions
 #SBATCH --time=00:10:00
 #SBATCH --ntasks=1
 #SBATCH --cpus-per-task=8
-#SBATCH --gpus-per-node P100:1
+#SBATCH --gpus-per-node A100:1
 #SBATCH --mem=1G

 module purge
2 changes: 1 addition & 1 deletion docs/Software/Parallel_Computing/Parallel_Computing.md
@@ -176,7 +176,7 @@ GPUs excel at large-scale parallel operations on matrices, making them ideal for
 #SBATCH --account nesi99991
 #SBATCH --output %x.out
 #SBATCH --mem-per-cpu 2G
-#SBATCH --gpus-per-node P100:1
+#SBATCH --gpus-per-node A100:1

 module load CUDA
 nvidia-smi
41 changes: 20 additions & 21 deletions docs/Tutorials/Introduction_To_HPC/Parallel.md
@@ -78,7 +78,7 @@ Create a new script called `gpu-job.sl`
 #SBATCH --account {{config.extra.project_code}}
 #SBATCH --output %x.out
 #SBATCH --mem-per-cpu 2G
-#SBATCH --gpu-per-node P100:1
+#SBATCH --gpu-per-node A100:1
Contributor

medium

This flag is misspelled: Slurm's option is `--gpus-per-node`, not `--gpu-per-node`.

Suggested change
#SBATCH --gpu-per-node A100:1
#SBATCH --gpus-per-node A100:1

 module load CUDA
 nvidia-smi
@@ -97,26 +97,25 @@ then submit with
 ```

 ```out
-Tue Mar 12 19:40:51 2024
-+-----------------------------------------------------------------------------+
-| NVIDIA-SMI 525.85.12 Driver Version: 525.85.12 CUDA Version: 12.0 |
-|-------------------------------+----------------------+----------------------+
-| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
-| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
-| | | MIG M. |
-|===============================+======================+======================|
-| 0 Tesla P100-PCIE... On | 00000000:05:00.0 Off | 0 |
-| N/A 28C P0 24W / 250W | 0MiB / 12288MiB | 0% Default |
-| | | N/A |
-+-------------------------------+----------------------+----------------------+
-
-+-----------------------------------------------------------------------------+
-| Processes: |
-| GPU GI CI PID Type Process name GPU Memory |
-| ID ID Usage |
-|=============================================================================|
-| No running processes found |
-+-----------------------------------------------------------------------------+
+hu Mar 26 12:47:29 2026
++-----------------------------------------------------------------------------------------+
+| NVIDIA-SMI 580.105.08 Driver Version: 580.105.08 CUDA Version: 13.0 |
++-----------------------------------------+------------------------+----------------------+
+| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
+| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
+| | | MIG M. |
+|=========================================+========================+======================|
+| 0 NVIDIA A100-SXM4-80GB Off | 00000000:C7:00.0 Off | 0 |
+| N/A 31C P0 61W / 400W | 0MiB / 81920MiB | 0% Default |
+| | | Disabled |
++-----------------------------------------+------------------------+----------------------+
++-----------------------------------------------------------------------------------------+
+| Processes: |
+| GPU GI CI PID Type Process name GPU Memory |
+| ID ID Usage |
+|=========================================================================================|
+| No running processes found |
++-----------------------------------------------------------------------------------------+
Comment on lines +100 to +118
Contributor

medium

The timestamp in this sample output is garbled: "hu Mar 26 12:47:29 2026" is missing the first letter of the weekday and should read "Thu Mar 26 12:47:29 2026".

Suggested change
hu Mar 26 12:47:29 2026
Thu Mar 26 12:47:29 2026
 ```

 ### Job arrays
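Putting the hunk and the review suggestion together, a complete `gpu-job.sl` might read as follows. The shebang, job name, and time limit are assumptions; the remaining lines come from the script in the diff, with the flag spelled `--gpus-per-node` as suggested in review:

```shell
#!/bin/bash -e
# Sketch of the full tutorial script with the reviewed fixes applied.
#SBATCH --job-name gpu-job
#SBATCH --account {{config.extra.project_code}}
#SBATCH --output %x.out
#SBATCH --mem-per-cpu 2G
#SBATCH --gpus-per-node A100:1
#SBATCH --time 00:05:00

module load CUDA
nvidia-smi   # prints the GPU table shown in the expected output above
```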