Merged
docs/developers_guide/spack.md (42 additions, 7 deletions)

Each YAML file describes a Spack environment for a particular combination of
machine, compiler, and MPI library.
For example: `chicoma-cpu_gnu_mpich.yaml`

These files are Jinja2 templates, allowing conditional inclusion of packages
(e.g., LAPACK) based on user options.

Machine-provided HDF5/NetCDF packages should now be listed unconditionally in
the YAML templates. Downstream packages can opt out of them, or any other
machine-provided external such as `cmake`, with `exclude_packages`. Mache
filters the rendered YAML after Jinja expansion and removes matching:

- `spack.specs` entries
- `spack.packages.<name>` external-package sections
- provider specs under `spack.packages.all.providers`
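As a rough illustration of that post-render filtering (a simplified sketch, not mache's actual implementation), the same pruning can be expressed over the parsed YAML as a plain dictionary:

```python
# Simplified sketch of exclude_packages filtering over a parsed environment
# YAML; mache's real implementation may differ. The example data below is
# contrived for illustration.

def filter_spack_env(env: dict, exclude: set) -> dict:
    spack = env.get("spack", {})

    def base_name(spec: str) -> str:
        # Package name before any version, compiler, or variant marker.
        return spec.split("@")[0].split("%")[0].split("+")[0]

    # Drop spack.specs entries for excluded packages.
    spack["specs"] = [
        spec for spec in spack.get("specs", []) if base_name(spec) not in exclude
    ]
    packages = spack.get("packages", {})
    # Drop spack.packages.<name> external-package sections.
    for name in list(packages):
        if name in exclude:
            del packages[name]
    # Drop matching provider specs under spack.packages.all.providers.
    providers = packages.get("all", {}).get("providers", {})
    for virtual, specs in providers.items():
        providers[virtual] = [s for s in specs if base_name(s) not in exclude]
    return env


env = {
    "spack": {
        "specs": ["cmake@3.27", "hdf5+mpi"],
        "packages": {
            "cmake": {"externals": [{"spec": "cmake@3.20", "prefix": "/usr"}]},
            "all": {"providers": {"blas": ["openblas", "cmake"]}},
        },
    }
}
filtered = filter_spack_env(env, {"cmake"})
print(filtered["spack"]["specs"])  # ['hdf5+mpi']
```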

### Typical External Packages

Expand Down Expand Up @@ -104,21 +113,48 @@ This pipeline greatly reduces maintenance and prevents drift between Mache and
E3SM’s authoritative machine configuration. In most cases, you do not need to
author or maintain shell script templates in Mache.

### Package opt-outs and deprecated HDF5/NetCDF flag

The preferred downstream-facing interface is `exclude_packages`, available in
the public `mache.spack` APIs and in `mache.deploy` runtime config.

Examples:

- `exclude_packages=["cmake"]`: build CMake with Spack instead of using the
machine-provided external package and module setup.
- `exclude_packages=["hdf5_netcdf"]`: opt out of the machine-provided
HDF5/NetCDF bundle.
- `exclude_packages=["hdf5", "netcdf-c", "netcdf-fortran",
"parallel-netcdf"]`: the same bundle, expressed explicitly.

`e3sm_hdf5_netcdf` and `include_e3sm_hdf5_netcdf` remain supported as
deprecated compatibility flags in public `mache.spack` APIs. New code should
prefer `exclude_packages`.
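For code that still carries the old boolean, the translation to the new interface is mechanical; a hedged sketch (the helper name is hypothetical, not a mache API):

```python
# Hypothetical helper illustrating how the deprecated flag maps onto
# exclude_packages; not part of mache's public API.

def legacy_flag_to_excludes(e3sm_hdf5_netcdf: bool) -> list:
    """When the flag is False (do not use the machine-provided bundle),
    exclude the bundle so Spack builds it instead."""
    return [] if e3sm_hdf5_netcdf else ["hdf5_netcdf"]


print(legacy_flag_to_excludes(False))  # ['hdf5_netcdf']
print(legacy_flag_to_excludes(True))   # []
```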

### When to add a template override

Only provide a small override in `mache/spack/templates/` if you need to:

- Apply an adjustment that’s not appropriate for the shared E3SM CIME config
(machine-local quirk, temporary workaround, etc.).
- Add conditional behavior toggled by `include_e3sm_lapack` or by package
helper functions such as `use_system_package('cmake')` and
`use_system_packages('netcdf-c', 'netcdf-fortran')` that cannot be
expressed in the CIME config.

Note: `include_e3sm_hdf5_netcdf` remains supported as a deprecated alias for
`e3sm_hdf5_netcdf` in public `mache.spack` APIs, but new shell overrides
should prefer the package helpers over `e3sm_hdf5_netcdf`.

Templates are Jinja2 files and can use the same conditional logic as YAML
templates. For shell overrides, the most useful helpers are:

- `use_system_package('<spack-package-name>')`
- `use_system_packages('<pkg1>', '<pkg2>', ...)`
- `render_env_var(name, value, shell_type)`

These helpers let shell overrides stay package-oriented even when the machine
module names do not match Spack package names exactly.
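As a rough sketch of what a helper like `render_env_var` might emit (the exact output format is an assumption, not taken from mache's source):

```python
# Illustrative stand-in for the render_env_var template helper; the assumed
# behavior is Bourne-style `export` for sh-family shells and `setenv` for
# csh-family shells.

def render_env_var(name: str, value: str, shell_type: str) -> str:
    if shell_type == "csh":
        return f"setenv {name} {value}"
    return f"export {name}={value}"


print(render_env_var("NETCDF_PATH", "/opt/netcdf", "sh"))
print(render_env_var("NETCDF_PATH", "/opt/netcdf", "csh"))
```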

## Testing

After adding or modifying YAML templates (or an exceptional shell override):

- For the non-spack aspects of adding a new machine, see [Adding a New Machine to Mache](adding_new_machine.md).
- For more details on Spack external packages, see the [Spack documentation on external packages](https://spack.readthedocs.io/en/latest/build_settings.html#external-packages).

docs/users_guide/deploy.md (95 additions)

Important settings:
- `spack.spack_path`: required when Spack support is enabled and no hook or
CLI override provides it, unless the user disables Spack for that run with
`--no-spack`.
- `spack.exclude_packages`: optional list of machine-provided Spack packages
that the target software wants Spack to build instead.
- `jigsaw.enabled`: optional.
- `hooks`: optional and disabled unless explicitly configured.

Example:

If a downstream package such as Compass wants to use the machine-provided
libraries in general but build its own newer `cmake`, it can set
`spack.exclude_packages` in `deploy/config.yaml.j2`:

```yaml
spack:
supported: true
deploy: true
spack_path: /path/to/shared/spack
exclude_packages:
- cmake
```

With this setting, `mache deploy run` will:

- remove the machine-provided `cmake` external from the rendered Spack
environment YAML,
- remove matching machine-provided `cmake` module loads and related
environment-variable setup from generated shell snippets, and
- let Spack build the `cmake` version requested by `deploy/spack.yaml.j2`.

The package list in `deploy/spack.yaml.j2` does not need any special syntax
for this. You simply request the Spack package you want, for example:

```yaml
software:
- "cmake@{{ spack.cmake }}"
library:
- "trilinos@{{ spack.trilinos }}"
```

The same mechanism can be used for the machine-provided HDF5/NetCDF bundle:

```yaml
spack:
exclude_packages:
- hdf5_netcdf
```

This is the preferred replacement for the older `e3sm_hdf5_netcdf` flag.

Another common pattern is to keep a machine-specific boolean such as
`[deploy] use_e3sm_hdf5_netcdf` in the target repository's machine config and
translate that into `exclude_packages` in a `pre_spack` hook.

For example, if a downstream project like Compass has a machine config with:

```ini
[deploy]
use_e3sm_hdf5_netcdf = false
```

then `deploy/hooks.py` can map that to the new mechanism:

```python
from __future__ import annotations

from typing import Any

from mache.deploy.hooks import DeployContext


def pre_spack(ctx: DeployContext) -> dict[str, Any] | None:
updates: dict[str, Any] = {}

exclude_packages = list(ctx.config.get("spack", {}).get("exclude_packages", []))

use_bundle = False
if ctx.machine_config.has_section("deploy") and ctx.machine_config.has_option(
"deploy", "use_e3sm_hdf5_netcdf"
):
use_bundle = ctx.machine_config.getboolean("deploy", "use_e3sm_hdf5_netcdf")

if not use_bundle:
exclude_packages.append("hdf5_netcdf")

updates.setdefault("spack", {})["exclude_packages"] = exclude_packages
return updates
```

This lets existing machine-config policy continue to drive behavior while
moving the actual Spack selection onto `exclude_packages`.
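The hook's mapping can be exercised in isolation with stdlib stand-ins for the deploy context (a sketch; the real `DeployContext` comes from `mache.deploy.hooks`):

```python
import configparser
from types import SimpleNamespace

# Stand-ins for the config and machine config a DeployContext would carry.
machine_config = configparser.ConfigParser()
machine_config.read_string("[deploy]\nuse_e3sm_hdf5_netcdf = false\n")

ctx = SimpleNamespace(config={"spack": {}}, machine_config=machine_config)

# Same mapping logic as the pre_spack hook above:
exclude_packages = list(ctx.config.get("spack", {}).get("exclude_packages", []))
use_bundle = ctx.machine_config.getboolean(
    "deploy", "use_e3sm_hdf5_netcdf", fallback=False
)
if not use_bundle:
    exclude_packages.append("hdf5_netcdf")

print(exclude_packages)  # ['hdf5_netcdf']
```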

### `deploy/pixi.toml.j2`

Required: yes
Purpose:
specs, `software` specs, or both.
- Supplies the target-specific package list used when `mache` constructs Spack
environments.
- Receives `exclude_packages` as a deploy-time Jinja variable, reflecting any
configured opt-outs of machine-provided Spack packages.

Edit policy:

- Safe and expected to edit.
- Leave it empty if you do not support Spack yet.

In most target repositories, package opt-outs belong in `deploy/config.yaml.j2`
under `spack.exclude_packages`, not in `deploy/spack.yaml.j2`. The Spack spec
template is where you request the package versions and variants you want to
build; `exclude_packages` controls which machine-provided externals are
suppressed before concretization.

### `deploy/hooks.py`

Required: no
docs/users_guide/spack/build.md (60 additions, 7 deletions)

```python
make_spack_env(
    ...,
    config_file=machine_config,
    include_e3sm_lapack=include_e3sm_lapack,
    e3sm_hdf5_netcdf=e3sm_hdf5_netcdf,
    exclude_packages=exclude_packages,
    yaml_template=yaml_template,
    tmpdir=tmpdir,
    spack_mirror=spack_mirror,
    ...
)
```
- `compiler`, `mpi`: Compiler and MPI library names.
- `machine`: Machine name (optional, auto-detected if not provided).
- `config_file`: Path to a machine config file (optional).
- `include_e3sm_lapack`: Whether to include E3SM-specific LAPACK packages.
- `exclude_packages`: A package name or list of package names whose
machine-provided externals, modules, and related environment variables
should be removed so Spack can build them instead. For example, setting
`exclude_packages=["cmake"]` lets a downstream package build a newer CMake
than the system provides.
- `e3sm_hdf5_netcdf`: Deprecated compatibility flag for opting into the
machine-provided HDF5/NetCDF bundle. New code should prefer
`exclude_packages=["hdf5_netcdf"]` (or the individual package names
`hdf5`, `netcdf-c`, `netcdf-fortran`, and `parallel-netcdf`) instead.
- `yaml_template`: Path to a custom Jinja2 YAML template (optional).
- `tmpdir`: Temporary directory for builds (optional).
- `spack_mirror`: Path to a local Spack mirror (optional).
Expand All @@ -85,6 +94,36 @@ make_spack_env(
- Generates and runs a shell script to create the Spack environment.
- Loads any required modules and sets up environment variables as needed.

**Recommended pattern for downstream packages:**

```python
exclude_packages = []
if needs_newer_cmake:
exclude_packages.append("cmake")

make_spack_env(
...,
exclude_packages=exclude_packages,
)
```

To opt out of the machine-provided HDF5/NetCDF bundle, use either:

```python
exclude_packages=["hdf5_netcdf"]
```

or the individual package names:

```python
exclude_packages=[
"hdf5",
"netcdf-c",
"netcdf-fortran",
"parallel-netcdf",
]
```

---

## `get_spack_script`
```python
spack_script = get_spack_script(
    ...,
machine=machine,
include_e3sm_lapack=include_e3sm_lapack,
e3sm_hdf5_netcdf=e3sm_hdf5_netcdf,
exclude_packages=exclude_packages,
)
```

The returned snippet is assembled in three steps:
This design keeps Mache aligned with E3SM’s authoritative machine
configuration and minimizes maintenance.

`exclude_packages` applies here too, so `get_spack_script()` removes matching
machine-provided module loads and environment variables from both:

- shell snippets derived from `config_machines.xml`, and
- any package-local shell overrides in `mache/spack/templates/*.sh` or
`*.csh`.
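Conceptually, that filtering amounts to dropping the shell lines that set up excluded packages; a simplified sketch (not mache's implementation, and real machine module names may not match Spack package names exactly):

```python
import re

def strip_excluded_module_loads(snippet: str, exclude: set) -> str:
    """Drop `module load` lines whose module name (before any version
    suffix) matches an excluded Spack package name. Illustration only."""
    kept = []
    for line in snippet.splitlines():
        match = re.match(r"\s*module load (\S+)", line)
        if match and match.group(1).split("/")[0] in exclude:
            continue
        kept.append(line)
    return "\n".join(kept)


snippet = "module load gcc/12.2\nmodule load cmake/3.20\nexport FOO=bar"
print(strip_excluded_module_loads(snippet, {"cmake"}))
# module load gcc/12.2
# export FOO=bar
```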

**Usage in activation scripts:**

```bash
```python
mpicc, mpicxx, mpifc, mod_env_commands = get_modules_env_vars_and_mpi_compilers(
    ...,
shell='sh', # or 'csh'
include_e3sm_lapack=include_e3sm_lapack,
e3sm_hdf5_netcdf=e3sm_hdf5_netcdf,
exclude_packages=exclude_packages,
)
```

- `mpifc`: Name of the MPI Fortran compiler wrapper (e.g., `mpif90` or `ftn`).
- `mod_env_commands`: Shell commands to load modules and set environment variables.

As with `get_spack_script()`, `exclude_packages` can be used to remove
machine-provided package setup from the generated shell snippet.

**Notes and usage in build scripts:**

```bash
{{ mod_env_commands }}
# Now safe to use $mpicc, $mpicxx, $mpifc for building MPI-dependent software
```

- This helper uses the same shell-generation logic as `get_spack_script()`
but does not activate a Spack environment. It therefore includes the
machine-derived setup from `config_machines.xml` plus any matching Mache
shell overrides.

---


- These functions are intended for use in deployment scripts, not for
interactive use.
- `e3sm_hdf5_netcdf` and `include_e3sm_hdf5_netcdf` remain supported for
backward compatibility, but new downstream code should use
`exclude_packages` instead.
- The downstream package is responsible for determining the correct arguments
(machine, compiler, MPI, etc.) and for integrating the generated scripts
into their activation workflow.
- For more details, see the source code and examples in the downstream
packages listed above.
