
Commit d132d8d

First updates for ISC25
1 parent 30b3d8f commit d132d8d

6 files changed: +33 -33 lines changed

.archive.mk

Lines changed: 7 additions & 7 deletions

@@ -6,16 +6,16 @@
 # Changelog:
 # * Nov 2022: The archive is extracted again, then slides.pdf is removed if a patched slides-sc22.pdf is found (which includes an SC22 slide 0 title slide); and then repackaged
 .PHONY: all
-all: tut123-multi-gpu.tar.gz
+all: tut105-multi-gpu.tar.gz
 
-SOURCES=$(shell gfind . -maxdepth 1 -mindepth 1 -not -path "./.*" -not -name "tut123-multi-gpu.tar.gz" -printf '%P\n' | sort -h)
+SOURCES=$(shell gfind . -maxdepth 1 -mindepth 1 -not -path "./.*" -not -name "tut105-multi-gpu.tar.gz" -printf '%P\n' | sort -h)
 
-tut123-multi-gpu.tar.gz: $(shell find . -not -name "tut123-multi-gpu.tar.gz")
+tut105-multi-gpu.tar.gz: $(shell find . -not -name "tut105-multi-gpu.tar.gz")
 	sed -i '1 i***Please check GitHub repo for latest version of slides: https://github.com/FZJ-JSC/tutorial-multi-gpu/ ***\n' README.md
-	tar czf $@ --transform 's,^,SC24-tut123-Multi-GPU/,' --exclude=".*" $(SOURCES)
+	tar czf $@ --transform 's,^,ISC25-tut105-Multi-GPU/,' --exclude=".*" $(SOURCES)
 	tar xf $@
 	rm $@
-	find SC24-tut123-Multi-GPU/ -not -path './.*' -iname 'slides-*.pdf' -execdir rm slides.pdf \;
-	tar czf $@ SC24-tut123-Multi-GPU
-	rm -rf SC24-tut123-Multi-GPU
+	find ISC25-tut105-Multi-GPU/ -not -path './.*' -iname 'slides-*.pdf' -execdir rm slides.pdf \;
+	tar czf $@ ISC25-tut105-Multi-GPU
+	rm -rf ISC25-tut105-Multi-GPU
 	sed -i '1,2d' README.md
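After a rename like this, the retargeted archive rule can be exercised locally (a sketch, not part of the commit; it assumes a POSIX shell with GNU grep and tar, plus the gfind binary the Makefile already relies on):

    # review any remaining references to the previous event/tutorial ID by hand
    grep -rIin --exclude-dir=.git -e 'tut123' -e 'sc24' .

    # rebuild the archive and confirm entries are prefixed with ISC25-tut105-Multi-GPU/
    make -f .archive.mk all
    tar tzf tut105-multi-gpu.tar.gz | head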

.etc/jsccourse-bashrc.sh

Lines changed: 1 addition & 1 deletion

@@ -118,7 +118,7 @@ if [[ $- =~ "i" ]]; then
 
 	echo ""
 	echo "*******************************************************************************"
-	echo " Welcome to the SC24 Tutorial on Multi-GPU Computing for Exascale! "
+	echo " Welcome to the ISC25 Tutorial on Multi-GPU Computing for Exascale! "
 	# echo " A default call to get a batch system allocation is stored in \$JSC_ALLOC_CMD!"
 	# echo " Use it with \`eval \$JSC_ALLOC_CMD\`. The value of \$JSC_ALLOC_CMD is:"
 	# echo -n " "

.gitignore

Lines changed: 1 addition & 1 deletion

@@ -1,3 +1,3 @@
-tut123-multi-gpu.tar.gz
+tut105-multi-gpu.tar.gz
 *-sc24.pdf
 tut*

.zenodo.json

Lines changed: 8 additions & 8 deletions

@@ -29,21 +29,21 @@
 
 "title": "Efficient Distributed GPU Programming for Exascale",
 
-"publication_date": "2024-11-17",
+"publication_date": "2025-06-13",
 
 "description": "<p>Over the past decade, GPUs became ubiquitous in HPC installations around the world, delivering the majority of performance of some of the largest supercomputers (e.g. Summit, Sierra, JUWELS Booster). This trend continues in the recently deployed and upcoming Pre-Exascale and Exascale systems (JUPITER, LUMI, Leonardo; El Capitan, Frontier, Aurora): GPUs are chosen as the core computing devices to enter this next era of HPC.To take advantage of future GPU-accelerated systems with tens of thousands of devices, application developers need to have the proper skills and tools to understand, manage, and optimize distributed GPU applications.In this tutorial, participants will learn techniques to efficiently program large-scale multi-GPU systems. While programming multiple GPUs with MPI is explained in detail, also advanced tuning techniques and complementing programming models like NCCL and NVSHMEM are presented. Tools for analysis are shown and used to motivate and implement performance optimizations. The tutorial teaches fundamental concepts that apply to GPU-accelerated systems in general, taking the NVIDIA platform as an example. It is a combination of lectures and hands-on exercises, using a development system for JUPITER (JEDI), for interactive learning and discovery.</p>",
 
-"notes": "Slides and exercises of tutorial presented at SC24 (The International Conference for High Performance Computing, Networking, Storage, and Analysis 2024); https://sc24.conference-program.com/presentation/?id=tut123&sess=sess412",
+"notes": "Slides and exercises of tutorial presented at ISC High Performance 2025; https://isc.app.swapcard.com/widget/event/isc-high-performance-2025/planning/UGxhbm5pbmdfMjU4MTc5Ng==",
 
 "access_right": "open",
 
-"conference_title": "SC 2024",
-"conference_acronym": "SC24",
-"conference_dates": "17 November-22 November 2024",
-"conference_place": "Atlanta, Georgia, USA",
-"conference_url": "https://sc24.supercomputing.org/",
+"conference_title": "ISC 2025",
+"conference_acronym": "ISC25",
+"conference_dates": "10 June-13 June 2025",
+"conference_place": "Hamburg, Germany",
+"conference_url": "https://www.isc-hpc.com/",
 "conference_session": "Tutorials",
-"conference_session_part": "Day 1",
+"conference_session_part": "Afternoon",
 
 "upload_type": "lesson"
 }
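The touched Zenodo metadata can be sanity-checked locally before the record is refreshed (a sketch; assumes the jq command-line JSON processor is installed):

    # fails if .zenodo.json is not valid JSON; otherwise prints the fields changed in this commit
    jq '{publication_date, conference_acronym, conference_dates, conference_place, conference_session_part}' .zenodo.json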

CITATION.cff

Lines changed: 2 additions & 2 deletions

@@ -48,5 +48,5 @@ keywords:
 - NVSHMEM
 - Distributed Programming
 license: MIT
-version: '7.0-sc24'
-date-released: '2024-11-17'
+version: '8.0-isc25'
+date-released: '2025-06-13'
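Similarly, the bumped CITATION.cff can be checked against the Citation File Format schema before tagging the 8.0-isc25 release (a sketch; assumes the Python tool cffconvert is installed, for example via pip install cffconvert):

    # validate CITATION.cff in the current directory against the CFF schema
    cffconvert --validate

    # optionally preview the citation, e.g. as BibTeX
    cffconvert -f bibtex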

README.md

Lines changed: 14 additions & 14 deletions

@@ -1,15 +1,15 @@
-# SC24 Tutorial: Efficient Distributed GPU Programming for Exascale
+# ISC25 Tutorial: Efficient Distributed GPU Programming for Exascale
 
 [![DOI](https://zenodo.org/badge/409504932.svg)](https://zenodo.org/badge/latestdoi/409504932)
 
 
-Repository with talks and exercises of our Efficient GPU Programming for Exascale tutorial, to be held at [SC24](https://sc24.conference-program.com/presentation/?id=tut123&sess=sess412).
+Repository with talks and exercises of our Efficient GPU Programming for Exascale tutorial, to be held at [ISC25](https://isc.app.swapcard.com/widget/event/isc-high-performance-2025/planning/UGxhbm5pbmdfMjU4MTc5Ng==).
 
 ## Coordinates
 
-* Date: 17 November 2024
-* Occasion: SC24 Tutorial
-* Tutors: Simon Garcia de Gonzalo (SNL), Andreas Herten (JSC), Markus Hrywniak (NVIDIA), Jiri Kraus (NVIDIA), Lena Oden (Uni Hagen)
+* Date: 13 June 2025
+* Occasion: ISC25 Tutorial
+* Tutors: Simon Garcia de Gonzalo (SNL), Andreas Herten (JSC), Lena Oden (Uni Hagen), with support by Markus Hrywniak (NVIDIA) and Jiri Kraus (NVIDIA)
 
 
 ## Setup
@@ -20,21 +20,21 @@ Walk-through:
 
 * Sign up at JuDoor
 * Open Jupyter JSC: https://jupyter-jsc.fz-juelich.de
-* Create new Jupyter instance on JEDI, using training2446 account, on **LoginNode**
-* Source course environment: `source $PROJECT_training2446/env.sh`
+* Create new Jupyter instance on JEDI, using training25XX account, on **LoginNode**
+* Source course environment: `source $PROJECT_training25XX/env.sh`
 * Sync material: `jsc-material-sync`
 * Locally install NVIDIA Nsight Systems: https://developer.nvidia.com/nsight-systems
 
-Curriculum:
+Curriculum (Note: square-bracketed sessions are skipped at ISC25 because only ½ day was allocated to the tutorial):
 
 1. Lecture: Tutorial Overview, Introduction to System + Onboarding *Andreas*
 2. Lecture: MPI-Distributed Computing with GPUs *Simon*
 3. Hands-on: Multi-GPU Parallelization
-4. Lecture: Performance / Debugging Tools *Markus*
-5. Lecture: Optimization Techniques for Multi-GPU Applications *Simon*
+4. [Lecture: Performance / Debugging Tools]
+5. Lecture: Optimization Techniques for Multi-GPU Applications *Lena*
 6. Hands-on: Overlap Communication and Computation with MPI
-7. Lecture: Overview of NCCL and NVSHMEN in MPI *Lena*
-8. Hands-on: Using NCCL and NVSHMEM
-9. Lecture: Device-initiated Communication with NVSHMEM *Jiri*
-10. Hands-on: Using Device-Initiated Communication with NVSHMEM
+7. [Lecture: Overview of NCCL and NVSHMEN in MPI]
+8. [Hands-on: Using NCCL and NVSHMEM]
+9. [Lecture: Device-initiated Communication with NVSHMEM]
+10. [Hands-on: Using Device-Initiated Communication with NVSHMEM]
 11. Lecture: Conclusion and Outline of Advanced Topics *Andreas*
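Condensed into commands, the updated onboarding steps from the Setup walk-through amount to roughly the following on a JEDI login node (a sketch; training25XX is the placeholder project ID used in the diff, and env.sh as well as jsc-material-sync are course-provided helpers assumed to exist in that project):

    # inside a Jupyter JSC terminal on the JEDI login node (account: training25XX)
    source $PROJECT_training25XX/env.sh   # load the course environment
    jsc-material-sync                     # sync the tutorial material into the workspace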
