Commit bb4dd39

Update Zenodo details
1 parent 16505f7 commit bb4dd39

File tree: 1 file changed, +8 -8 lines changed

.zenodo.json (+8 -8)
@@ -29,19 +29,19 @@
     "title": "Efficient Distributed GPU Programming for Exascale",
-    "publication_date": "2021-11-14",
+    "publication_date": "2022-05-29",
-    "description": "<p>Over the past years, GPUs became ubiquitous in HPC installations around the world. Today, they provide the majority of performance of some of the largest supercomputers (e.g. Summit, Sierra, JUWELS Booster). This trend continues in upcoming pre-exascale and exascale systems (LUMI, Leonardo; Frontier): GPUs are chosen as the core computing devices to enter this next era of HPC.</p><p>To take advantage of future GPU-accelerated systems with tens of thousands of devices, application developers need to have the proper skills and tools to understand, manage, and optimize distributed GPU applications.</p><p>In this tutorial, participants will learn techniques to efficiently program large-scale multi-GPU systems. While programming multiple GPUs with MPI is explained in detail, also advanced techniques and models (NCCL, NVSHMEM, …) are presented. Tools for analysis are used to motivate implementation of performance optimizations. The tutorial combines lectures and hands-on exercises, using Europe’s fastest supercomputer, JUWELS Booster with NVIDIA A100 GPUs.</p>",
+    "description": "<p>Over the past years, GPUs became ubiquitous in HPC installations around the world. Today, they provide the majority of performance of some of the largest supercomputers (e.g. Summit, Sierra, JUWELS Booster). This trend continues in the pre-exascale and exascale systems (LUMI, Leonardo; Perlmutter, Frontier): GPUs are chosen as the core computing devices to enter this next era of HPC.</p><p>To take advantage of future GPU-accelerated systems with tens of thousands of devices, application developers need to have the proper skills and tools to understand, manage, and optimize distributed GPU applications. In this tutorial, participants will learn techniques to efficiently program large-scale multi-GPU systems. While programming multiple GPUs with MPI is explained in detail, advanced tuning techniques and complementary programming models like NCCL and NVSHMEM are presented as well. Tools for analysis are shown and used to motivate and implement performance optimizations. The tutorial is a combination of lectures and hands-on exercises, using Europe’s fastest supercomputer, JUWELS Booster with NVIDIA GPUs, for interactive learning and discovery.</p>",
-    "notes": "Slides and exercises of tutorial presented virtually at SC21 (International Conference for High Performance Computing, Networking, Storage, and Analysis); https://sc21.supercomputing.org/presentation/?id=tut138&sess=sess188",
+    "notes": "Slides and exercises of tutorial presented virtually at ISC22 (ISC High Performance 2022); https://app.swapcard.com/widget/event/isc-high-performance-2022/planning/UGxhbm5pbmdfODYxMTQ2",
     "access_right": "open",
-    "conference_title": "Supercomputing Conference 2021",
-    "conference_acronym": "SC21",
-    "conference_dates": "14-19 November 2021",
-    "conference_place": "St. Louis, MO, USA and virtual",
-    "conference_url": "https://sc21.supercomputing.org/",
+    "conference_title": "ISC HPC 2022",
+    "conference_acronym": "ISC22",
+    "conference_dates": "29 May-02 June 2022",
+    "conference_place": "Hamburg, Germany",
+    "conference_url": "https://www.isc-hpc.com/",
     "conference_session": "Tutorials",
     "conference_session_part": "Day 1",

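For reference, a rough sketch of how the updated portion of .zenodo.json reads after this commit. Only the keys visible in the hunk above are shown; the surrounding fields, brace placement, and indentation are assumed rather than taken from the file, and the description string is abridged here:

{
    "title": "Efficient Distributed GPU Programming for Exascale",
    "publication_date": "2022-05-29",
    "description": "<p>Over the past years, GPUs became ubiquitous in HPC installations around the world. [...]</p>",
    "notes": "Slides and exercises of tutorial presented virtually at ISC22 (ISC High Performance 2022); https://app.swapcard.com/widget/event/isc-high-performance-2022/planning/UGxhbm5pbmdfODYxMTQ2",
    "access_right": "open",
    "conference_title": "ISC HPC 2022",
    "conference_acronym": "ISC22",
    "conference_dates": "29 May-02 June 2022",
    "conference_place": "Hamburg, Germany",
    "conference_url": "https://www.isc-hpc.com/",
    "conference_session": "Tutorials",
    "conference_session_part": "Day 1"
}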