[Lecture] Heterogeneous and Parallel Computing
Course Language: German
The course will start with an overview of current processor systems and of development trends in computer hardware towards increased heterogeneity and specialisation, driven by the need for higher compute performance and greater energy efficiency. The first section of the course will provide basic knowledge of processor architecture from a performance perspective.
In the second section, the principles of parallelisation will be elaborated at all levels, from large-scale computing systems, such as high-performance computing and clouds, down to multi- and many-core processors. This covers the principles of parallel programming and programming models, such as OpenMP, MPI and Partitioned Global Address Space (PGAS), as well as their limitations, such as Amdahl's law and the impact of data locality.
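To make one of these limitations concrete, Amdahl's law in its usual textbook form (notation assumed here, not taken from the course materials) bounds the speedup S achievable with N processors when only a fraction p of the runtime can be parallelised:

    S(N) = \frac{1}{(1 - p) + p/N}, \qquad \lim_{N \to \infty} S(N) = \frac{1}{1 - p}

Even with p = 0.95, the speedup can therefore never exceed 20, regardless of the number of processors, which is one reason the serial fraction and data locality receive so much attention.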
The third section will address the specialisation of systems, ranging from embedded devices and multi-core systems to specialised co-processors such as GPUs. The impact of specialisation on performance and energy efficiency, as well as on programmability and portability, will be elaborated. Future trends towards fully heterogeneous setups at all levels will be examined and assessed.
The lecture will conclude with an outlook on how processors will likely develop in the future and what this means for the programmability and portability of software.
[Lecture] Introduction to High-Performance Computing
Course Language: German
The field of High-Performance Computing (HPC) deals with the efficient and fast execution of large simulations on modern supercomputers. The lecture “Introduction to High-Performance Computing” covers the theoretical and practical foundations of HPC and parallel scientific computing. First, current parallel computer architectures are considered, whose structure gives rise to two different types of parallelism (shared memory and distributed memory). Starting from basic computational operations such as matrix-vector and matrix-matrix multiplication, more complex parallel numerical methods for solving systems of linear equations are introduced. Speedup, efficiency, and parallel scalability serve as metrics for the quality of the algorithms. For the practical implementation, introductions to message passing with MPI and to shared-memory parallel programming with OpenMP are given. In addition, various software packages that can be used for efficient parallel scientific computing are introduced.
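As a flavour of the practical part, a minimal sketch of a shared-memory parallel matrix-vector multiplication with OpenMP might look as follows; the function and variable names are illustrative, and the exercises in the course may use different interfaces and data layouts.

    #include <stdio.h>
    #include <stdlib.h>
    #include <omp.h>

    /* y = A * x for a dense n x n matrix stored row-major in a 1-D array. */
    void matvec(int n, const double *A, const double *x, double *y)
    {
        /* Each thread computes a disjoint block of rows. */
        #pragma omp parallel for schedule(static)
        for (int i = 0; i < n; ++i) {
            double sum = 0.0;
            for (int j = 0; j < n; ++j)
                sum += A[(size_t)i * n + j] * x[j];
            y[i] = sum;
        }
    }

    int main(void)
    {
        const int n = 1000;
        double *A = malloc((size_t)n * n * sizeof *A);
        double *x = malloc(n * sizeof *x);
        double *y = malloc(n * sizeof *y);

        for (int i = 0; i < n; ++i) {
            x[i] = 1.0;
            for (int j = 0; j < n; ++j)
                A[(size_t)i * n + j] = (i == j) ? 2.0 : 0.0;  /* 2 * identity */
        }

        double t = omp_get_wtime();
        matvec(n, A, x, y);
        t = omp_get_wtime() - t;

        printf("y[0] = %f, time = %f s, threads = %d\n",
               y[0], t, omp_get_max_threads());

        free(A); free(x); free(y);
        return 0;
    }

Compiled with, e.g., gcc -fopenmp, measuring the runtime for different thread counts directly yields the metrics named above: speedup S_p = T_1 / T_p and parallel efficiency E_p = S_p / p.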
[Lecture] Performance Engineering
Course Language: German
Nowadays, the development of efficient software is relevant in almost all scientific, industrial, and social areas. Examples are aircraft or automotive design, weather forecasting, crisis management, and analyses of satellite or market data.
Software is efficient if it makes the best possible use of today's, usually parallel, computing resources.
To develop efficient code, a basic understanding of possible hardware performance bottlenecks and relevant software optimization techniques is required. Code transformations enable the optimized use of computing resources.
This lecture covers a structured, model-based performance-engineering approach to software optimization. This approach enables incremental software optimization by considering both software and hardware aspects. Even simple performance models such as the Roofline model allow accurate runtime predictions and deep insights into optimized hardware usage.
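For reference, the Roofline model in its basic form (standard notation, assumed here rather than taken from the lecture) predicts the attainable performance P of a loop from the machine's peak performance P_peak, its memory bandwidth b_s, and the arithmetic intensity I of the code in flops per byte of data traffic:

    P = \min(P_{\mathrm{peak}},\; I \cdot b_s)

Kernels with low arithmetic intensity are limited by the bandwidth roof I \cdot b_s, while compute-bound kernels are limited by P_peak; the model therefore indicates which optimizations can pay off at all.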
After a brief introduction to parallel processor architectures and massively parallel computing on distributed-memory systems, this lecture covers model-based performance engineering for simple numerical operations such as sparse matrix-vector multiplication. For massively parallel computers with distributed memory, communication-hiding and communication-avoiding methods are presented. Finally, the importance of performance engineering for parallel software tools, e.g., from rocket engine or aircraft design and from analyses of Earth observation or space debris data, will be discussed.
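To illustrate one of the operations named above, the following is a minimal serial sketch of a sparse matrix-vector multiplication in the common compressed row storage (CSR) format; the array names are illustrative, and the lecture may use different conventions.

    #include <stdio.h>

    /* y = A * x, with A stored in compressed row storage (CSR):
     * row_ptr[i] .. row_ptr[i+1]-1 index the nonzeros of row i,
     * col_idx[k] is the column and val[k] the value of the k-th nonzero. */
    void spmv_csr(int n, const int *row_ptr, const int *col_idx,
                  const double *val, const double *x, double *y)
    {
        for (int i = 0; i < n; ++i) {
            double sum = 0.0;
            for (int k = row_ptr[i]; k < row_ptr[i + 1]; ++k)
                sum += val[k] * x[col_idx[k]];   /* indirect access to x */
            y[i] = sum;
        }
    }

    int main(void)
    {
        /* 3x3 example:  [2 0 1]
         *               [0 3 0]
         *               [4 0 5] */
        int    row_ptr[] = {0, 2, 3, 5};
        int    col_idx[] = {0, 2, 1, 0, 2};
        double val[]     = {2.0, 1.0, 3.0, 4.0, 5.0};
        double x[]       = {1.0, 1.0, 1.0};
        double y[3];

        spmv_csr(3, row_ptr, col_idx, val, x, y);
        printf("y = [%g %g %g]\n", y[0], y[1], y[2]);  /* expected: 3 3 9 */
        return 0;
    }

With roughly two flops per stored matrix entry but at least 12 bytes of data traffic (the value, its column index, and the indirect access to x), the arithmetic intensity of this kernel is low, which is exactly why it is a typical candidate for the Roofline analysis sketched above.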
[Seminar] Research Trends in Parallel and Distributed Systems
Course Language: German
In this seminar, a range of emerging topics in the fields of parallel and heterogeneous computing (system architecture for current and future high-performance computing systems) and distributed computing systems (e.g. Cloud and Edge Computing) is offered, based on primary literature from major conferences and journals in the field.
The task for the participants is inspired by the process of writing a scientific publication. Starting from a review of the provided literature, the participant identifies additional relevant material, such as scientific publications but also technical reports from major vendors, to establish a solid baseline of the state of the art and current developments. Based on a topic outline, a written report and an oral presentation, given as part of a full-day seminar, are required to successfully pass the seminar.
We plan to publish selected reports as an open access seminar series.
[Seminar] Rendering and Simulation with Graphics Processors
Course Language: German