MPI-OpenMP 2025

Dates
• September 15 – 18, 2025

Location
• University of Bremen, Room MZH 5600

Content
• This course gives an introduction to parallel programming. The main focus is on the parallel programming models MPI and OpenMP. Exercises will be an essential part of the workshop.

Instructors
• The course will be given by Dr. Hinnerk Stüben (Regionales Rechenzentrum der Universität Hamburg) and Dr. Lars Nerger (BremHLR and Alfred Wegener Institute, Bremerhaven).

Participation
• This course is open to all interested students and members of the Alfred Wegener Institute for Polar and Marine Research, the University of Bremen, Jacobs University Bremen, Hochschule Bremerhaven, and associated institutions. In addition, this year we also accept registrations from HLRN and NHR users who are not affiliated with an institution in Bremen.

Prerequisites
• Solid fundamentals in Unix and in C and/or Fortran are essential!

Registration
• For registration please send an e-mail to: bremhlr@uni-bremen.de
• Deadline: September 10, 2025
• No registration fee!

Hands-on tutorials
• For the hands-on exercises, we ask the participants to bring their own notebook computers. They need to have a compiler (e.g. gcc) and an MPI library (e.g. OpenMPI) installed so that parallel programs can be compiled. For Linux and macOS, there are packages providing this software; for Windows we recommend installing it via Cygwin (www.cygwin.com). A minimal test program for checking the setup is sketched below.
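• To check the setup, it should be possible to compile and run a small MPI test program. A minimal sketch follows (assuming the mpicc/mpirun commands that come with common MPI packages such as OpenMPI; it is not part of the course material):

  /* hello.c – minimal MPI test program (sketch)
   * compile:  mpicc hello.c -o hello
   * run:      mpirun -np 4 ./hello
   */
  #include <stdio.h>
  #include <mpi.h>

  int main(int argc, char *argv[])
  {
      int rank, size;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* rank of this process */
      MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */

      printf("Hello from rank %d of %d\n", rank, size);

      MPI_Finalize();
      return 0;
  }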

Information for non-local participants
• Hotels close to the University are “7 Things”, “Atlantic Hotel Universum Bremen”, and “Hotel Munte”.
• The University can easily be reached from Bremen main station (Hauptbahnhof) by tram line 6 in the direction “Universität”. Please exit at the stop “Bremen Universität/Zentralbereich”.
• The workshop is held in the building “Mehrzweckhochhaus (MZH)”, which is centrally located on the campus.

Tentative Schedule

Monday, 10:00 – 16:30
• Overview
• Thinking Parallel (I) – Computer architectures and programming models
• Laplace equation (I) – A realistic application example
• Checking computer setup
• Programming – A parallel “Hello World” program
• MPI (I) – Basic functions, communicators, messages, basic data types
• Programming – Send and recv
• MPI (II)
  – Point-to-point communication (send and receive modes)
  – Collective communication
• Programming – Ring I (see the sketch below)
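The Ring I exercise above involves point-to-point communication between neighbouring processes. As a rough illustration only (the actual exercise will be specified in the course), a ring exchange could look like this sketch:

  /* ring.c – sketch of a ring exchange with point-to-point communication */
  #include <stdio.h>
  #include <mpi.h>

  int main(int argc, char *argv[])
  {
      int rank, size, left, right, token, received;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);

      right = (rank + 1) % size;          /* neighbour to send to */
      left  = (rank - 1 + size) % size;   /* neighbour to receive from */
      token = rank;

      /* MPI_Sendrecv combines a send and a receive and avoids the
         deadlock that naive blocking sends can cause in a ring */
      MPI_Sendrecv(&token, 1, MPI_INT, right, 0,
                   &received, 1, MPI_INT, left, 0,
                   MPI_COMM_WORLD, MPI_STATUS_IGNORE);

      printf("Rank %d received %d from rank %d\n", rank, received, left);

      MPI_Finalize();
      return 0;
  }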

Tuesday, 9:15 – 16:30
• Thinking Parallel (II)
  – Characterization of parallelism
  – Data dependence analysis
• OpenMP (I) with exercises
  – Concepts
  – Parallelizing loops (see the sketch below)
• Laplace equation (II) – Implementation with OpenMP
• Programming project (II) – OpenMP part
• OpenMP (II) with exercises
  – Synchronization
  – Loop scheduling
  – False sharing
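The OpenMP topics above (parallelizing loops, loop scheduling) can be illustrated by a small sketch; it is not one of the course exercises and only shows the general idea:

  /* omp_loop.c – sketch of a parallelized loop with reduction and scheduling
   * compile, e.g.:  gcc -fopenmp omp_loop.c -o omp_loop
   */
  #include <stdio.h>
  #include <omp.h>

  int main(void)
  {
      const int n = 1000000;
      double sum = 0.0;

      /* iterations are distributed across threads; the reduction clause
         gives each thread a private partial sum that is combined at the end */
      #pragma omp parallel for schedule(static) reduction(+:sum)
      for (int i = 0; i < n; i++) {
          sum += 1.0 / (i + 1);
      }

      printf("sum = %f (up to %d threads)\n", sum, omp_get_max_threads());
      return 0;
  }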

Wednesday, 9:15 – 16:30
• MPI (III)
  – Derived data types
  – Reduction operations (see the sketch below)
• Programming – Ring II
• Laplace equation (III) – Laplace example with MPI
• Thinking Parallel (III) – Performance considerations
• MPI (IV) – Virtual topologies and communicator splitting
• Programming – Advanced ring communication
• MPI (V) – One-sided communication
• MPI (VI) – Parallel I/O with MPI-IO
• Programming – Parallel output with MPI-IO
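The reduction operations covered in MPI (III) can be illustrated by a small sketch (illustrative only, not a course exercise): every rank contributes a local value and rank 0 obtains the global sum.

  /* reduce.c – sketch of a global reduction with MPI_Reduce */
  #include <stdio.h>
  #include <mpi.h>

  int main(int argc, char *argv[])
  {
      int rank, size;
      double local, total;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);

      local = (double) rank;   /* each rank's contribution */

      /* combine the local values with MPI_SUM; the result is available on rank 0 */
      MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

      if (rank == 0)
          printf("Sum over %d ranks: %f\n", size, total);

      MPI_Finalize();
      return 0;
  }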

Thursday, 9:15 – 16:30
• Parallel programming bugs
• Hybrid parallelization – Joint use of MPI and OpenMP
• Programming – Hybrid “Hello World” program (see the sketch below)
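A hybrid “Hello World” combining MPI and OpenMP might look like the following sketch (the actual course exercise may differ): each MPI process starts an OpenMP thread team.

  /* hybrid_hello.c – sketch of a hybrid MPI + OpenMP hello world
   * compile, e.g.:  mpicc -fopenmp hybrid_hello.c -o hybrid_hello
   */
  #include <stdio.h>
  #include <mpi.h>
  #include <omp.h>

  int main(int argc, char *argv[])
  {
      int rank, provided;

      /* request thread support; MPI_THREAD_FUNNELED suffices when only
         the main thread makes MPI calls */
      MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      #pragma omp parallel
      {
          printf("Hello from thread %d of %d on MPI rank %d\n",
                 omp_get_thread_num(), omp_get_num_threads(), rank);
      }

      MPI_Finalize();
      return 0;
  }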