diff --git a/docs/software-packages/flacs.md b/docs/software-packages/flacs.md
index cdc1fb9..4cab541 100644
--- a/docs/software-packages/flacs.md
+++ b/docs/software-packages/flacs.md
@@ -6,7 +6,7 @@
 explosion modelling and one of the best validated tools for modeling
 flammable and toxic releases in a technical safety context.
 
 The Cirrus cluster is ideally suited to run multiple FLACS simulations
-simultaneously, via its [batch system](../../user-guide/batch/). Short
+simultaneously, via its [batch system](../user-guide/batch.md). Short
 lasting simulations (of typically up to a few hours computing time
 each) can be processed efficiently and you could get a few hundred done
 in a day or two. In contrast, the Cirrus cluster is not particularly suited
@@ -202,7 +202,7 @@ list only your jobs use:
 ### Submitting many FLACS jobs as a job array
 
 Running many related scenarios with the FLACS simulator is ideally
-suited for using [job arrays](../../user-guide/batch/#job-arrays), i.e.
+suited for using [job arrays](../user-guide/batch.md#job-arrays), i.e.
 running the simulations as part of a single job. Note you must determine
 ahead of time the number of scenarios involved.
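The job-array workflow referenced in the flacs.md hunk above can be sketched as a Slurm submission script. This is a hypothetical illustration only: the scenario-number mapping, the budget code `t01`, and the commented-out `runflacs` command are placeholders, not the documented FLACS interface.

```shell
#!/bin/bash
# Hypothetical sketch of a Slurm job array for many FLACS scenarios.
#SBATCH --job-name=flacs_scenarios
#SBATCH --time=04:00:00
#SBATCH --ntasks=1
#SBATCH --array=1-100            # one array task per scenario; count fixed in advance
#SBATCH --account=t01            # placeholder budget code

# Default the array index so the mapping below can be tried outside Slurm.
SLURM_ARRAY_TASK_ID=${SLURM_ARRAY_TASK_ID:-1}

# Map array index 1..100 to a placeholder six-digit scenario number,
# e.g. 1 -> 010100, 12 -> 120100.
SCENARIO=$(printf "%02d0100" "$SLURM_ARRAY_TASK_ID")
echo "would run FLACS scenario $SCENARIO"
# runflacs "$SCENARIO"           # placeholder simulator command
```

Submitted once with `sbatch`, Slurm then launches up to 100 independent tasks, each reading `SLURM_ARRAY_TASK_ID` to select its own scenario — which is why the number of scenarios must be known ahead of time.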
diff --git a/docs/user-guide/data.md b/docs/user-guide/data.md
index e9a9bb9..ecb012d 100644
--- a/docs/user-guide/data.md
+++ b/docs/user-guide/data.md
@@ -472,5 +472,5 @@ Please note that “remote” is the name that you have chosen when running rclo
 The Cirrus `/work` filesystem, which is hosted on the e1000 fileserver,
 has a Globus Collection (formerly known as an endpoint) with the name
 `e1000-fs1 directories`
 
-[Full step-by-step guide for using Globus](../globus) to transfer files to/from Cirrus `/work`
+See the [full step-by-step guide for using Globus](globus.md) to transfer files to/from Cirrus `/work`.
diff --git a/docs/user-guide/development.md b/docs/user-guide/development.md
index 71bd2a9..613f58b 100644
--- a/docs/user-guide/development.md
+++ b/docs/user-guide/development.md
@@ -296,7 +296,7 @@
 using the NVLink intra-node GPU comm links (and inter-node GPU comms
 are direct intead of passing through the host processor). Hence, the
 OpenMPI GPU modules allow the user to run GPU-aware MPI code as efficiently
-as possible, see [Compiling and using GPU-aware MPI](../gpu/#compiling-and-using-gpu-aware-mpi).
+as possible, see [Compiling and using GPU-aware MPI](gpu.md#compiling-and-using-gpu-aware-mpi).
 OpenMPI modules for use on the CPU nodes are also available, but these
 are not expected to provide any performance advantage over HPE MPT or
 Intel MPI.
diff --git a/docs/user-guide/introduction.md b/docs/user-guide/introduction.md
index 83af265..802e795 100644
--- a/docs/user-guide/introduction.md
+++ b/docs/user-guide/introduction.md
@@ -31,10 +31,10 @@
 meaning.
 
 CPUh
 Cirrus CPU time is measured in CPUh. Each job you run on the service
 consumes CPUhs from your budget. You can find out more about CPUhs and
-how to track your usage in the [resource management section](../resource_management/)
+how to track your usage in the [resource management section](resource_management.md).
 
 GPUh
 Cirrus GPU time is measured in GPUh.
 Each job you run on the GPU nodes consumes GPUhs from your budget,
 and requires positive CPUh, even though these will not be consumed.
 You can find out more about GPUhs and how to
-track your usage in the [resource management section](../resource_management/)
+track your usage in the [resource management section](resource_management.md).
diff --git a/docs/user-guide/python.md b/docs/user-guide/python.md
index 458aca9..c869141 100644
--- a/docs/user-guide/python.md
+++ b/docs/user-guide/python.md
@@ -555,7 +555,7 @@
 you can start from a login node prompt.
 
 If you have extended a central Python venv following
 the instructions about for [Installing your own Python packages
-(with pip)](#installing-your-own-python-packages-(with-pip)),
+(with pip)](#installing-your-own-python-packages-with-pip),
 Jupyter Lab will load the central ipython kernel, not the one
 for your venv. To enable loading of the ipython kernel
 for your venv from within Jupyter Lab, first install the ipykernel module
@@ -567,13 +567,17 @@ you can start from a login node prompt.
 ```
 ```
 changing placeholder account and username as appropriate.
 Thereafter, launch Jupyter Lab as above and select the `myvenv`
-kernel.
+kernel. You may need to set the following environment variables:
+```
+export PYTHONUSERBASE=$(pwd)/.local
+export PATH=$PYTHONUSERBASE/bin:$PATH
+export HOME=$(pwd)
+export JUPYTER_RUNTIME_DIR=$(pwd)
+```
 If you are on a compute node, the JupyterLab server will be
 available for the length of the interactive session you have requested.
 
 You can also run Jupyter sessions using the centrally-installed
-Miniconda3 modules available on Cirrus. For example, the following link
-provides instructions for how to setup a Jupyter server on a GPU node.
-
-
+Miniconda3 modules available on Cirrus.
+[This page](https://github.com/hpc-uk/build-instructions/tree/main/pyenvs/ipyparallel)
+provides instructions for how to set up a Jupyter server on a GPU node.
diff --git a/docs/user-guide/resource_management.md b/docs/user-guide/resource_management.md
index 4b164e4..d4313de 100644
--- a/docs/user-guide/resource_management.md
+++ b/docs/user-guide/resource_management.md
@@ -14,7 +14,7 @@ Finally we cover some guidelines for I/O and data archiving on Cirrus.
 
 ## The Cirrus Administration Web Site (SAFE)
 
 All users have a login and password on the Cirrus Administration Web
-Site (also know as the 'SAFE'): [SAFE](https://safe.epcc.ed.ac.uk/).
+Site, also known as the [SAFE](https://safe.epcc.ed.ac.uk/).
 
 Once logged into this web site, users can find out much about their
 usage of the Cirrus system, including:
@@ -29,8 +29,7 @@ usage of the Cirrus system, including:
 
 ## Checking your CPU/GPU time allocations
 
-You can view these details by logging into the SAFE
-().
+You can view these details by logging into the [SAFE](https://safe.epcc.ed.ac.uk/).
 
 Use the *Login accounts* menu to select the user account that you wish
 to query. The page for the login account will summarise the resources
diff --git a/mkdocs.yml b/mkdocs.yml
index 864ae2f..a0a5dc2 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -36,7 +36,7 @@ markdown_extensions:
 
 nav:
   - "Overview": index.md
-  - "Cirrus migration to E1000 system": e1000-migration
+  - "Cirrus migration to E1000 system": e1000-migration/index.md
   - "User Guide":
     - "Introduction": user-guide/introduction.md
     - "Connecting to Cirrus": user-guide/connecting.md
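Since every hunk in this diff rewrites a relative link or a nav path, the whole set of changes can be checked in one go — assuming MkDocs is installed for this project, as mkdocs.yml above indicates — with strict mode, which promotes warnings such as unresolved internal links to build failures. A command fragment, run from the repository root:

```shell
# Fails the build if any internal doc link or nav entry does not resolve.
mkdocs build --strict
```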