4 changes: 2 additions & 2 deletions docs/software-packages/flacs.md
@@ -6,7 +6,7 @@
explosion modelling and one of the best validated tools for modeling
flammable and toxic releases in a technical safety context.

The Cirrus cluster is ideally suited to run multiple FLACS simulations
-simultaneously, via its [batch system](../../user-guide/batch/). Short
+simultaneously, via its [batch system](../user-guide/batch.md). Short
lasting simulations (of typically up to a few hours computing time each)
can be processed efficiently and you could get a few hundred done in a
day or two. In contrast, the Cirrus cluster is not particularly suited
@@ -202,7 +202,7 @@
list only your jobs use:
### Submitting many FLACS jobs as a job array

Running many related scenarios with the FLACS simulator is ideally
-suited for using [job arrays](../../user-guide/batch/#job-arrays), i.e.
+suited for using [job arrays](../user-guide/batch.md#job-arrays), i.e.
running the simulations as part of a single job.

Note you must determine ahead of time the number of scenarios involved.
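The job-array pattern this hunk links to can be sketched as a Slurm submission script. Everything below — the job name, `--array` range, time limit, and the scenario-directory naming scheme — is an assumption for illustration, not the documented Cirrus/FLACS recipe:

```shell
#!/bin/bash
# Hypothetical job-array sketch; all #SBATCH values are placeholders.
#SBATCH --job-name=flacs-array
#SBATCH --array=1-100
#SBATCH --time=04:00:00

# Slurm sets SLURM_ARRAY_TASK_ID for each array element; fall back to 1
# so the script can be dry-run outside the batch system.
TASK_ID="${SLURM_ARRAY_TASK_ID:-1}"

# Map the array index to a zero-padded scenario directory
# (naming scheme assumed for illustration).
SCENARIO_DIR=$(printf "scenario_%03d" "$TASK_ID")
echo "Array task ${TASK_ID}: would run FLACS in ${SCENARIO_DIR}"
```

Because the `--array` range is fixed at submission time, this matches the note above that the number of scenarios must be determined ahead of time.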
2 changes: 1 addition & 1 deletion docs/user-guide/data.md
@@ -472,5 +472,5 @@
Please note that “remote” is the name that you have chosen when running rclone

The Cirrus `/work` filesystem, which is hosted on the e1000 fileserver, has a Globus Collection (formerly known as an endpoint) with the name `e1000-fs1 directories`

-[Full step-by-step guide for using Globus](../globus) to transfer files to/from Cirrus `/work`
+[Full step-by-step guide for using Globus](globus.md) to transfer files to/from Cirrus `/work`

2 changes: 1 addition & 1 deletion docs/user-guide/development.md
@@ -296,7 +296,7 @@
using the NVLink intra-node GPU comm links (and inter-node GPU comms are direct
instead of passing through the host processor).

Hence, the OpenMPI GPU modules allow the user to run GPU-aware MPI code as efficiently
-as possible, see [Compiling and using GPU-aware MPI](../gpu/#compiling-and-using-gpu-aware-mpi).
+as possible, see [Compiling and using GPU-aware MPI](gpu.md#compiling-and-using-gpu-aware-mpi).

OpenMPI modules for use on the CPU nodes are also available, but these are not
expected to provide any performance advantage over HPE MPT or Intel MPI.
4 changes: 2 additions & 2 deletions docs/user-guide/introduction.md
@@ -31,10 +31,10 @@
meaning.
CPUh
Cirrus CPU time is measured in CPUh. Each job you run on the service
consumes CPUhs from your budget. You can find out more about CPUhs and
-how to track your usage in the [resource management section](../resource_management/)
+how to track your usage in the [resource management section](resource_management.md).

GPUh
Cirrus GPU time is measured in GPUh. Each job you run on the GPU nodes
consumes GPUhs from your budget, and requires positive CPUh, even though
these will not be consumed. You can find out more about GPUhs and how to
-track your usage in the [resource management section](../resource_management/)
+track your usage in the [resource management section](resource_management.md).
16 changes: 10 additions & 6 deletions docs/user-guide/python.md
@@ -555,7 +555,7 @@
you can start from a login node prompt.

If you have extended a central Python venv following the
instructions above for [Installing your own Python packages
-(with pip)](#installing-your-own-python-packages-(with-pip)),
+(with pip)](#installing-your-own-python-packages-with-pip),
Jupyter Lab will load the central ipython kernel, not the one
for your venv. To enable loading of the ipython kernel for your
venv from within Jupyter Lab, first install the ipykernel module
@@ -567,13 +567,17 @@
you can start from a login node prompt.
```
changing placeholder account and username as appropriate.
Thereafter, launch Jupyter Lab as above and select the `myvenv`
-kernel.
+kernel. You may also need to set the following environment variables:
```
export PYTHONUSERBASE=$(pwd)/.local
export PATH=$PYTHONUSERBASE/bin:$PATH
export HOME=$(pwd)
export JUPYTER_RUNTIME_DIR=$(pwd)
```
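The exports above all re-point per-user Python and Jupyter state at the working directory. Wrapped as a small launcher — with a `RUN_DIR` variable introduced here as an assumption, so the sketch does not hard-code `$(pwd)` — that could look like:

```shell
#!/bin/bash
# Hypothetical launcher sketch: redirect Python's user base and
# Jupyter's runtime state away from the real $HOME.
RUN_DIR="${RUN_DIR:-$PWD}"

export PYTHONUSERBASE="${RUN_DIR}/.local"
export PATH="${PYTHONUSERBASE}/bin:${PATH}"
export HOME="${RUN_DIR}"
export JUPYTER_RUNTIME_DIR="${RUN_DIR}"

echo "Jupyter state redirected to ${RUN_DIR}"
# jupyter lab   # then launch as described above
```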

If you are on a compute node, the JupyterLab server will be available
for the length of the interactive session you have requested.

You can also run Jupyter sessions using the centrally-installed
-Miniconda3 modules available on Cirrus. For example, the following link
-provides instructions for how to setup a Jupyter server on a GPU node.
-
-<https://github.com/hpc-uk/build-instructions/tree/main/pyenvs/ipyparallel>
+Miniconda3 modules available on Cirrus. [This page provides instructions
+for how to setup a Jupyter server on a GPU node.](https://github.com/hpc-uk/build-instructions/tree/main/pyenvs/ipyparallel)
5 changes: 2 additions & 3 deletions docs/user-guide/resource_management.md
@@ -14,7 +14,7 @@
Finally we cover some guidelines for I/O and data archiving on Cirrus.
## The Cirrus Administration Web Site (SAFE)

All users have a login and password on the Cirrus Administration Web
-Site (also know as the 'SAFE'): [SAFE](https://safe.epcc.ed.ac.uk/).
+Site (also known as the [SAFE](https://safe.epcc.ed.ac.uk/)).
Once logged into this web site, users can find out much about their
usage of the Cirrus system, including:

@@ -29,8 +29,7 @@
usage of the Cirrus system, including:

## Checking your CPU/GPU time allocations

-You can view these details by logging into the SAFE
-(<https://safe.epcc.ed.ac.uk>).
+You can view these details by logging into the [SAFE](https://safe.epcc.ed.ac.uk).

Use the *Login accounts* menu to select the user account that you wish
to query. The page for the login account will summarise the resources
2 changes: 1 addition & 1 deletion mkdocs.yml
@@ -36,7 +36,7 @@
markdown_extensions:

nav:
- "Overview": index.md
-  - "Cirrus migration to E1000 system": e1000-migration
+  - "Cirrus migration to E1000 system": e1000-migration/index.md
- "User Guide":
- "Introduction": user-guide/introduction.md
- "Connecting to Cirrus": user-guide/connecting.md
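The pattern repeated across these hunks is mechanical: directory-style URLs such as `../user-guide/batch/` become links to the Markdown source, `../user-guide/batch.md`, with any `#fragment` preserved. As a sketch of that rule (a hypothetical helper, not part of this PR):

```shell
# Hypothetical helper sketch: rewrite a directory-style MkDocs link to a
# source-file (.md) link, preserving any trailing #fragment.
md_link() {
    case "$1" in
        http://*|https://*) printf '%s\n' "$1"; return ;;  # leave external URLs alone
    esac
    printf '%s\n' "$1" | sed -E 's@^([^#]*[^#/])/?(#.*)?$@\1.md\2@'
}

md_link "../user-guide/batch/"                       # ../user-guide/batch.md
md_link "../gpu/#compiling-and-using-gpu-aware-mpi"  # ../gpu.md#compiling-and-using-gpu-aware-mpi
```

The `mkdocs.yml` hunk is the one exception: a directory entry in `nav` has to name its index file explicitly, hence `e1000-migration/index.md` rather than `e1000-migration.md`.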