From 522e2f9e0a1439ca090b23de2b9271ae31b0f6b5 Mon Sep 17 00:00:00 2001
From: Kenneth Hoste
Date: Tue, 17 Dec 2024 09:20:55 +0100
Subject: [PATCH] rewrite abstract to include CernVM-FS

---
 isc25/EESSI/abstract.tex | 33 ++++++++++++++++++++++-----------
 1 file changed, 22 insertions(+), 11 deletions(-)

diff --git a/isc25/EESSI/abstract.tex b/isc25/EESSI/abstract.tex
index 5beb4b3..1499cd9 100644
--- a/isc25/EESSI/abstract.tex
+++ b/isc25/EESSI/abstract.tex
@@ -1,21 +1,32 @@
-What if there was a way to avoid having to install a broad range of scientific software from scratch on every HPC
-cluster or cloud instance you use or maintain, without compromising on performance?
+What if there were a way to avoid having to install a broad range of scientific software from scratch on every
+supercomputer, cloud instance, or laptop you use or maintain, without compromising on performance?
 Installing scientific software for supercomputers is known to be a tedious and time-consuming task.
 The application software stack continues to deepen as the
-HPC user community becomes more diverse, computational science expands rapidly, and the diversity of system architectures
+High-Performance Computing (HPC) user community becomes more diverse, computational science expands rapidly, and the diversity of system architectures
 increases.
 Simultaneously, we see a surge in interest in public cloud infrastructures for scientific computing.
 Delivering optimised software installations and providing access to these installations in a reliable, user-friendly,
 and reproducible way is a highly non-trivial task that affects application developers, HPC user support teams, and the
 users themselves.

-This tutorial aims to address these challenges by providing the attendees with the tools to \emph{stream} the optimised
-scientific software they need. The tutorial introduces European Environment for Scientific Software Installations
-(\emph{EESSI}), a collaboration between various European HPC sites \& industry partners, with the common goal of
-creating a shared repository of scientific software installations (\emph{not} recipes) that can be used on a variety of
-systems, regardless
-of which flavor/version of Linux distribution or processor architecture is used, or whether it's a full size HPC
+Although scientific research on supercomputers is fundamentally software-driven,
+setting up and managing a software stack remains challenging and time-consuming.
+In addition, parallel filesystems like GPFS and Lustre are known to be ill-suited for hosting software installations
+that typically consist of a large number of small files. This can lead to surprisingly slow startup performance of
+software, and may even negatively impact the overall performance of the system.
+While workarounds for these issues, such as using container images, are prevalent, they come with caveats,
+such as the significant size of these images, the required compatibility with the system MPI for distributed computing,
+and complications with accessing specialised hardware resources like GPUs.
+
+This tutorial aims to address these challenges by introducing attendees to a way to \emph{stream}
+software installations via \emph{CernVM-FS}, a distributed read-only filesystem specifically designed
+to efficiently distribute software across large-scale computing infrastructures.
+The tutorial introduces the \emph{European Environment for Scientific Software Installations (EESSI)},
+a collaboration between various European HPC sites \& industry partners, with the common goal of
+creating a shared repository of optimised scientific software installations (\emph{not} recipes) that can be used on a variety of
+systems, regardless of which flavour/version of Linux distribution or processor architecture is used, or whether it's a full-size HPC
 cluster, a cloud environment or a personal workstation.

-We cover the usage of EESSI, different ways to accessing EESSI, how to add software to EESSI, and highlight some more
-advanced features. We will also show attendees how to engage with the community and contribute to the project.
+We cover the installation and configuration of CernVM-FS to access EESSI, the usage of EESSI, how to add software
+installations to EESSI, how to install software on top of EESSI, and advanced topics like GPU support and performance
+tuning.
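
For readers who want a concrete picture of the "streaming" workflow the new abstract describes: the sketch below
shows how a CernVM-FS client is typically installed and pointed at the EESSI repository on an RPM-based Linux
system. This is a minimal sketch based on the public EESSI documentation, not part of the tutorial material itself;
the package URLs, the EESSI version (2023.06), and the GROMACS module used here are assumptions that may change
over time.

    # Install the CernVM-FS client and the EESSI configuration package
    # (URLs and versions are illustrative; check the EESSI documentation for current ones).
    sudo yum install -y https://ecsft.cern.ch/dist/cvmfs/cvmfs-release/cvmfs-release-latest.noarch.rpm
    sudo yum install -y cvmfs
    sudo yum install -y https://github.com/EESSI/filesystem-layer/releases/download/latest/cvmfs-config-eessi-latest.noarch.rpm

    # Minimal client configuration for a standalone machine.
    echo 'CVMFS_CLIENT_PROFILE="single"' | sudo tee /etc/cvmfs/default.local
    sudo cvmfs_config setup

    # Initialise the EESSI environment; software is fetched and cached on first access.
    source /cvmfs/software.eessi.io/versions/2023.06/init/bash
    module load GROMACS

Because CernVM-FS downloads and caches files on demand, nothing beyond the client and its configuration needs to
be installed locally, which is what lets the same repository serve an HPC cluster, a cloud instance, or a laptop.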