HPC Courses at UoC
Courses offered in WS 2025/2026
[Lecture] Performance Engineering
Dr.-Ing. Achim Basermann, Institute for Software Technology, German Aerospace Center (DLR)
The development of efficient software is relevant in almost all scientific, industrial, and social fields today.
Examples include aircraft and automobile design, weather forecasting, crisis management, and analysis of satellite or market data. Software is efficient when it makes the best possible use of today's computer resources, which are usually parallel.
To develop efficient software code, a fundamental understanding of possible hardware performance bottlenecks and relevant software optimization techniques is required.
Code transformations enable the optimized use of computing resources.
This lecture covers a structured, model-based performance engineering approach to software optimization. This approach enables incremental software optimization by taking both software and hardware aspects into account.
Even simple performance models such as the Roofline model allow accurate runtime predictions and deep insights into optimized hardware usage.
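The core idea of the Roofline model can be sketched in a few lines: attainable performance is the minimum of the machine's peak floating-point rate and its memory bandwidth multiplied by the kernel's arithmetic intensity. The peak and bandwidth figures below are illustrative assumptions, not measurements of any particular machine.

```python
def roofline(peak_flops, mem_bw, intensity):
    """Attainable performance (FLOP/s) under the Roofline model.

    peak_flops: peak floating-point rate of the machine (FLOP/s)
    mem_bw:     sustained memory bandwidth (byte/s)
    intensity:  arithmetic intensity of the kernel (FLOP/byte)
    """
    return min(peak_flops, mem_bw * intensity)

# Illustrative machine: 3 TFLOP/s peak, 200 GB/s memory bandwidth.
PEAK, BW = 3.0e12, 200.0e9

# A kernel needs intensity >= PEAK / BW (here 15 FLOP/byte) to be
# compute-bound; sparse matrix-vector multiplication sits far below that.
print(roofline(PEAK, BW, 0.2))    # memory-bound: 4.0e10 FLOP/s
print(roofline(PEAK, BW, 20.0))   # compute-bound: 3.0e12 FLOP/s
```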
After a brief introduction to parallel processor architectures and massively parallel computing on distributed memory systems, this lecture covers model-based performance engineering for simple numerical operations such as sparse matrix-vector multiplication.
For massively parallel computers with distributed memory, communication-hiding and communication-avoiding methods are presented. Finally, the importance of performance engineering for parallel software tools, e.g., from rocket engine or aircraft design and from analyses of Earth observation or space debris data, is discussed.
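As a small illustration of the kind of kernel discussed here, sparse matrix-vector multiplication in the common CSR (compressed sparse row) format can be sketched as follows; this is a plain serial sketch, not the lecture's implementation.

```python
def spmv_csr(values, col_idx, row_ptr, x):
    """Compute y = A @ x for a sparse matrix A in CSR format.

    values:  nonzero entries, stored row by row
    col_idx: column index of each nonzero
    row_ptr: start offset of each row in values (length = rows + 1)
    """
    y = [0.0] * (len(row_ptr) - 1)
    for i in range(len(y)):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y

# A = [[4, 0],
#      [1, 3]] stored in CSR:
values, col_idx, row_ptr = [4.0, 1.0, 3.0], [0, 0, 1], [0, 1, 3]
print(spmv_csr(values, col_idx, row_ptr, [1.0, 2.0]))  # [4.0, 7.0]
```

The indirect access `x[col_idx[k]]` is exactly what makes this kernel memory-bound and a natural target for performance engineering.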
In the exercise, model-based performance engineering techniques are demonstrated using simple benchmark codes.
[Lecture] Software Engineering
Acting Prof. Dr. Mersedeh Sadeghi, Institute of Computer Science
Developing good, successful software requires more than just programming skills. Software engineering deals with the systematic use of principles, methods, and tools for the collaborative, engineering-based development and application of large-scale software systems. This includes topics such as:
- Requirements
- Software design and software architecture
- Programming techniques and guidelines
- Maintenance and evolution
- Quality assurance
- Testing
- Development processes
[Lecture] Software Quality
Prof. Dr. Michael Felderer, Institute for Software Technology, German Aerospace Center (DLR)
Quality is a decisive success factor for the development and operation of software systems and requires the application of appropriate quality assurance techniques to ensure it. This lecture provides an overview of software quality characteristics, constructive and analytical quality assurance techniques, and their application in specific areas of use.
Topics covered in the course include modern software development processes; software quality attributes such as reliability, usability, security, and maintainability, and their measurement; software testing methods; software analysis; and quality assurance in specific application areas such as intelligent, distributed, or safety-critical systems.
[Lecture] Heterogeneous and Parallel Computing
Prof. Dr. Stefan Wesner with Dr. Lutz Schubert and Robert Keßler, Institute of Computer Science
The course will start with an overview of current processor systems and development trends in computer hardware towards increased heterogeneity and specialisation, driven by the need for more compute performance and increased energy efficiency. The first section of the course will provide a base knowledge of processor architecture from a performance perspective.
In the second section, the principles of parallelisation will be elaborated on all levels, from large-scale computing systems, such as high performance computing and clouds, down to multi- and many-core processors. This covers the principles of parallel programming and programming models, such as OpenMP, MPI, and Partitioned Global Address Space (PGAS), as well as their limitations, such as Amdahl's law and the impact of data locality.
The third section will address the specialisation of systems, ranging from embedded devices and multi-core systems to specialised co-processors, such as GPUs. The impact of specialisation on performance and energy efficiency, but also on programmability and portability, will be elaborated. Future trends towards completely heterogeneous setups on all levels will be examined and assessed.
The lecture will conclude with an outlook on how processors will likely develop in the future and what this means for the programmability and portability of software.
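Amdahl's law, one of the limitations mentioned above, can be stated in a few lines of code; the serial fractions and processor counts below are illustrative, not taken from the course.

```python
def amdahl_speedup(serial_fraction, n_procs):
    """Amdahl's law: speedup achievable when a fixed fraction of the
    work is inherently serial and the rest parallelises perfectly."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_procs)

# Even a small serial fraction caps the speedup: with 5% serial work
# the limit is 20x, no matter how many processors are used.
print(amdahl_speedup(0.05, 16))     # ~9.14
print(amdahl_speedup(0.05, 1024))   # ~19.6
```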
[Seminar] High-Performance Computing with GPUs
Prof. Dr. Stefan Wesner with Dr. Andreas Herten and Robert Keßler, Institute of Computer Science
GPUs are ubiquitous in High-Performance Computing, delivering the majority of performance in the fastest supercomputers around the world. The platform is enabled by highly parallel applications, suitable programming models, a close combination of software and hardware, and advanced hardware designs. The seminar covers topics relevant to all components of the HPC GPU ecosystem, such as effective implementation of GPU algorithms, investigations into programming models, performance analysis, benchmarking of applications, and understanding hardware features.
[Seminar] Programming Principles of Distributed Systems
Prof. Dr. Stefan Wesner with Dr. Lutz Schubert and Robert Keßler, Institute of Computer Science
This seminar covers emerging topics in parallel and distributed computing. The scope spans from tightly coupled high performance computing systems to loosely coupled cloud and edge computing systems. Special emphasis is placed on advanced system architectures with heterogeneous processor and memory technologies.
During the seminar, students will work together in a small group to reproduce the results of a previously published research paper from the above-mentioned scientific domain. Relevant publications must have been peer-reviewed at a major conference or in a journal and feature an open-source codebase (see the literature below for examples of representative papers). To build, run, and reproduce the results of the paper, both local and remote resources can be used, which are provided by the seminar lecturer as needed. Optionally, students may also improve or optimize the solution proposed in the chosen paper. If sufficient interest is expressed, we plan to send students who have demonstrated outstanding quality and dedication to a European Reproducibility Challenge.
[Seminar] Development with Game Engines
Prof. Dr. Stefan Wesner and Paul Benölken, Institute of Computer Science
Trade fairs such as Cologne's GamesCom impressively demonstrate, with their visitor numbers, the unbroken fascination that computer games (video games) continue to exert. Having now outgrown their infancy, games are increasingly finding their way into professional environments beyond the entertainment industry under the heading of serious games.
The fields of application now range from education and training to cultural heritage, medicine, architecture, and the automotive and aviation sectors. As with modeling and animation, professional tools such as game engines are now also used for the development of new games. Using a specific application, the possibilities of a game engine will be explored and utilized using the example of the Unreal Engine.
To this end, participants will develop a joint project in groups, with each group responsible for a specific task. The seminar is suitable for students from the 4th semester onwards. Basic knowledge of computer graphics and knowledge of an object-oriented programming language (C++ or Java) are advantageous. For capacity reasons, the number of participants is limited to 12.
Courses offered in SS 2025
[Lecture] High Performance Computing for Machine Learning
Dr. Janine Weber, Mathematical Institute
High Performance Computing (HPC) is concerned with the efficient and fast execution of large simulations on modern supercomputers. It makes use of state-of-the-art technologies (such as GPUs, low-latency interconnects, etc.) to efficiently solve complex scientific and data-driven problems. One of the key factors for the current success of machine learning models is the ability to perform calculations with many model parameters and large amounts of training data on modern computers. However, in their simplest form, current machine learning libraries make only limited efficient use of available HPC resources. The aim of this lecture is therefore to examine theoretical and practical aspects of the efficient training of machine learning and, in particular, deep learning models on modern HPC resources.
With this in mind, the first part of the lecture covers techniques that are typically used for the performance optimization of software on supercomputers. After a short introduction to HPC, we will deal specifically with GPUs (graphics processing units), various memory models, and performance optimization models, and give a practical introduction to CUDA, a programming interface developed by Nvidia for GPU programming.
In the second part of the lecture, the techniques and concepts learned will be applied to the efficient training of machine learning and deep learning models. Different data-parallel and model-parallel training methods for efficient training on GPUs will be demonstrated, both algorithmically and with a practical orientation, using various examples from applications.
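The idea behind data-parallel training can be sketched without any GPU or ML framework: each worker computes a gradient on its own shard of the data, the gradients are averaged (an all-reduce), and all workers apply the same update. The tiny least-squares model below is purely illustrative.

```python
def gradient(w, shard):
    """Gradient of the mean squared error 0.5*(w*x - y)**2 over one shard."""
    return sum((w * x - y) * x for x, y in shard) / len(shard)

def data_parallel_step(w, shards, lr=0.1):
    """One synchronous data-parallel SGD step."""
    grads = [gradient(w, s) for s in shards]   # independent per-worker compute
    avg_grad = sum(grads) / len(grads)         # all-reduce: average the gradients
    return w - lr * avg_grad                   # every worker applies the same update

# Data drawn from y = 2*x, split across two "workers".
shards = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0), (4.0, 8.0)]]
w = 0.0
for _ in range(200):
    w = data_parallel_step(w, shards)
print(round(w, 3))  # 2.0, the true slope
```

On a real system the averaging step is a collective communication (e.g. an all-reduce across GPUs) rather than a Python loop, but the arithmetic is the same.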
[Lecture] Compute Continuum
Prof. Dr. Stefan Wesner with Dr. Lutz Schubert and Robert Keßler, Institute of Computer Science
Modern computing has moved away from the desktop computer to the cloud, where resources and data are shared alike. Yet the Cloud computing paradigm suffers from the scope and complexity of modern compute scenarios, where data may reside anywhere, be produced and consumed anytime in any amount, and where users are mobile and distributed all over the world. To reduce the load on servers and the network, Fog and Edge computing were introduced: forms of distributed computing with flexible and variable allocation and load. This course will introduce the concept of the compute continuum, which aims at executing distributed applications flexibly over any infrastructure, adapting immediately to different usage contexts. The compute continuum targets scenarios arising from connected smart homes, smart cities, global logistics networks, etc.
Within this lecture, we will investigate the relevant technologies to realise such an environment, when it can be used, and what its obstacles are. The lecture is essentially segmented into three parts: The first part focuses on the hardware layer, including the types of processors, embedded system architectures, and their connectivity. In the second part, we will discuss the main principles of distributed computing, including how data is distributed and processed and how different use-case criteria are met. The third part focuses on adaptive execution in the compute continuum, including embedded operating systems, virtualisation, and containerisation.
[Seminar] High-Performance Computing with GPUs
Dr. Andreas Herten, Institute of Computer Science (JSC)
GPUs are ubiquitous in High-Performance Computing, delivering the majority of performance in the fastest supercomputers around the world. The platform is enabled by highly parallel applications, suitable programming models, a close combination of software and hardware, and advanced hardware designs. The seminar covers topics relevant to all components of the HPC GPU ecosystem, such as effective implementation of GPU algorithms, investigations into programming models, performance analysis, benchmarking of applications, and understanding hardware features.
[Seminar] Programming Principles of Distributed Systems
Prof. Dr. Stefan Wesner with Dr. Lutz Schubert and Robert Keßler, Institute of Computer Science
This seminar covers emerging topics in parallel and distributed computing. The scope spans from tightly coupled high performance computing systems to loosely coupled cloud and edge
computing systems. Special emphasis is placed on advanced system architectures with heterogeneous processor and memory technologies.
During the seminar, students will work together in a small group to reproduce the results of a previously published research paper from the above-mentioned scientific domain. Relevant publications must have been peer-reviewed at a major conference or in a journal and feature an open-source codebase (see the literature below for examples of representative papers). To build, run, and reproduce the results of the paper, both local and remote resources can be used, which are provided by the seminar lecturer as needed. Optionally, students may also improve or optimize the solution proposed in the chosen paper. If sufficient interest is expressed, we plan to send students who have demonstrated outstanding quality and dedication to a European Reproducibility Challenge.
Courses offered in WS 2024/2025
[Lecture] Performance Engineering
Dr.-Ing. Achim Basermann, Institute for Software Technology (DLR)
Course Language: German
Nowadays, the development of efficient software is relevant in almost all scientific, industrial, and social areas. Examples are aircraft or automotive design, weather forecasting, crisis management, and analyses of satellite or market data.
Software is efficient if it makes the best possible use of today's, usually parallel, computing resources.
To develop efficient software code, a basic understanding of possible hardware performance bottlenecks and relevant software optimization techniques is required. Code transformations enable the optimized use of computer resources.
This lecture covers a structured, model-based performance engineering approach to software optimization. This approach enables incremental software optimization by considering both software and hardware aspects. Even simple performance models like the Roofline model allow accurate runtime predictions and deep insights into optimized hardware usage.
After a brief introduction to parallel processor architectures and massively parallel computing on distributed-memory systems, this lecture covers model-based performance engineering for simple numerical operations such as sparse matrix-vector multiplication. For massively parallel computers with distributed memory, communication-hiding and communication-avoiding methods are presented. Finally, the importance of performance engineering for parallel software tools, e.g., from rocket engine or aircraft design and from analyses of Earth observation or space debris data, will be discussed.
[Lecture] Heterogeneous and Parallel Computing
Prof. Dr. Stefan Wesner with Dr. Lutz Schubert and Robert Keßler, Institute of Computer Science
Course Language: German
The course will start with an overview of current processor systems and development trends in computer hardware towards increased heterogeneity and specialisation, driven by the need for more compute performance and increased energy efficiency. The first section of the course will provide a base knowledge of processor architecture from a performance perspective.
In a second section, the principles of parallelisation will be elaborated on all levels, from large scale computing systems, such as high performance computing and clouds, down to multi- and many-core processors. This covers the principles of parallel programming and programming models, such as OpenMP, MPI and Partitioned Global Address Space (PGAS). This will also cover their limitations, such as Amdahl's law and the impact of data locality.
The third section will address specialisation of systems, ranging from embedded devices and multi-core systems to specialised co-processors, such as GPUs. The impact of specialisation on performance and energy efficiency, but also on programmability and portability will be elaborated. The future trends towards completely heterogeneous setups on all levels will be examined and assessed.
The lecture will conclude with an outlook on how processors will likely develop in the future and what this means for the programmability and portability of software.
[Seminar] Research Trends in Parallel and Distributed Systems
Dr. Lutz Schubert with Robert Keßler and Laslo Hunhold, Institute of Computer Science
Course Language: German
In this seminar, a range of emerging topics in the field of parallel, heterogeneous computing (system architectures for current and future high performance computing systems) and distributed computing systems (e.g., Cloud and Edge computing) is offered, based on primary literature from major conferences and journals in the field.
The task for the participants is inspired by the process of writing a scientific publication. Starting from a review of the provided literature, each participant identifies additional relevant material, such as scientific publications but also tech reports from major vendors, to establish a good baseline of the state of the art and current developments. Based on a topic outline, a written report and an oral presentation as part of a full-day seminar are required to successfully pass the seminar.
We plan to publish selected reports as an open access seminar series.
Courses offered in SS 2024
[Lecture] Compute Continuum
Prof. Dr. Stefan Wesner with Dr. Lutz Schubert and Robert Keßler, Institute of Computer Science
Course Language: German
Modern computing has moved away from the desktop computer to the cloud, where resources and data are shared alike. Yet the Cloud computing paradigm suffers from the scope and complexity of modern compute scenarios, where data may reside anywhere, be produced and consumed anytime in any amount, and where users are mobile and distributed all over the world. To reduce the load on servers and the network, Fog and Edge computing were introduced: forms of distributed computing with flexible and variable allocation and load. This course will introduce the concept of the compute continuum, which aims at executing distributed applications flexibly over any infrastructure, adapting immediately to different usage contexts. The compute continuum targets scenarios arising from connected smart homes, smart cities, global logistics networks, etc.
Within this lecture, we will investigate the relevant technologies to realise such an environment, when it can be used, and what its obstacles are. The lecture is essentially segmented into three parts: The first part focuses on the hardware layer, including the types of processors, embedded system architectures, and their connectivity. In the second part, we will discuss the main principles of distributed computing, including how data is distributed and processed and how different use-case criteria are met. The third part focuses on adaptive execution in the compute continuum, including embedded operating systems, virtualisation, and containerisation.
[Lecture] High-Performance Computing for Advanced Students
Dr. Martin Lanser, Mathematical Institute
Course Language: German
High-Performance Computing (HPC) is about the efficient and fast execution of large simulations on modern supercomputers. In the lecture High Performance Computing for Advanced Students, advanced theoretical and practical aspects of HPC and parallel scientific computing are considered. Building upon the knowledge gained in the previous lecture Introduction to High Performance Computing, the focus will be on shared-memory parallel programming using OpenMP and on hybrid programming models that are ideally suited to modern supercomputers. By considering complex model problems from the field of the numerical solution of partial differential equations, concrete applications (also in the form of larger programming projects) will be implemented. Another focus of the lecture will be the increasingly important area of machine learning: the algebra required and the methods used for machine learning are ideally suited to execution on GPUs (graphics processing units) or accelerators.
Courses offered in WS 2023/2024
[Lecture] Heterogeneous and Parallel Computing
Prof. Dr. Stefan Wesner with Dr. Lutz Schubert, Institute of Computer Science
Course Language: German
The course will start with an overview of current processor systems and development trends in computer hardware towards increased heterogeneity and specialisation, driven by the need for more compute performance and increased energy efficiency. The first section of the course will provide a base knowledge of processor architecture from a performance perspective.
In a second section, the principles of parallelisation will be elaborated on all levels, from large scale computing systems, such as high performance computing and clouds, down to multi- and many-core processors. This covers the principles of parallel programming and programming models, such as OpenMP, MPI and Partitioned Global Address Space (PGAS). This will also cover their limitations, such as Amdahl's law and the impact of data locality.
The third section will address specialisation of systems, ranging from embedded devices and multi-core systems to specialised co-processors, such as GPUs. The impact of specialisation on performance and energy efficiency, but also on programmability and portability will be elaborated. The future trends towards completely heterogeneous setups on all levels will be examined and assessed.
The lecture will conclude with an outlook on how processors will likely develop in the future and what this means for the programmability and portability of software.
[Lecture] Introduction to High-Performance Computing
Dr. Martin Lanser, Mathematical Institute
Course Language: German
The field of High-Performance Computing (HPC) deals with the efficient and fast execution of large simulations on modern supercomputers. In the lecture “Introduction to High-Performance Computing” the theoretical and practical basics of HPC and parallel scientific computing are covered. First, current parallel computer architectures are considered, from whose structure the necessity of two different types of parallelism (shared memory and distributed memory) arises. After basic computational operations such as matrix-vector and matrix-matrix multiplications, complex parallel numerical methods for solving systems of linear equations are introduced. Speedup, efficiency, and parallel scalability are employed as metrics for the quality of algorithms. Introductions to the concept of message passing using MPI and shared-memory parallel programming using OpenMP are given for practical implementations. In addition, various software packages that can be used for efficient parallel scientific computing will be introduced.
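The quality metrics mentioned above are simple ratios of measured runtimes; the timings in the sketch below are illustrative, not measurements.

```python
def speedup(t_serial, t_parallel):
    """Speedup S(p) = T(1) / T(p)."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_procs):
    """Parallel efficiency E(p) = S(p) / p; 1.0 means ideal scaling."""
    return speedup(t_serial, t_parallel) / n_procs

# Illustrative timings: 100 s on one process, 16 s on 8 processes.
print(speedup(100.0, 16.0))        # 6.25
print(efficiency(100.0, 16.0, 8))  # 0.78125
```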
[Lecture] Performance Engineering
Dr.-Ing. Achim Basermann, Institute for Software Technology (DLR)
Course Language: German
Nowadays, the development of efficient software is relevant in almost all scientific, industrial, and social areas. Examples are aircraft or automotive design, weather forecasting, crisis management, and analyses of satellite or market data.
Software is efficient if it makes the best possible use of today's, usually parallel, computing resources.
To develop efficient software code, a basic understanding of possible hardware performance bottlenecks and relevant software optimization techniques is required. Code transformations enable the optimized use of computer resources.
This lecture covers a structured, model-based performance engineering approach to software optimization. This approach enables incremental software optimization by considering both software and hardware aspects. Even simple performance models like the Roofline model allow accurate runtime predictions and deep insights into optimized hardware usage.
After a brief introduction to parallel processor architectures and massively parallel computing on distributed-memory systems, this lecture covers model-based performance engineering for simple numerical operations such as sparse matrix-vector multiplication. For massively parallel computers with distributed memory, communication-hiding and communication-avoiding methods are presented. Finally, the importance of performance engineering for parallel software tools, e.g., from rocket engine or aircraft design and from analyses of Earth observation or space debris data, will be discussed.
[Seminar] Research Trends in Parallel and Distributed Systems
Prof. Dr. Stefan Wesner with Dr. Lutz Schubert, Institute of Computer Science
Course Language: German
In this seminar, a range of emerging topics in the field of parallel, heterogeneous computing (system architectures for current and future high performance computing systems) and distributed computing systems (e.g., Cloud and Edge computing) is offered, based on primary literature from major conferences and journals in the field.
The task for the participants is inspired by the process of writing a scientific publication. Starting from a review of the provided literature, each participant identifies additional relevant material, such as scientific publications but also tech reports from major vendors, to establish a good baseline of the state of the art and current developments. Based on a topic outline, a written report and an oral presentation as part of a full-day seminar are required to successfully pass the seminar.
We plan to publish selected reports as an open access seminar series.
[Seminar] Rendering and Simulation with Graphics Processors
PD Dr. Stefan Zellmann, Institute of Computer Science
Course Language: German
Courses offered in SS 2023
[Lecture] High Performance Computing for Advanced Students
Dr. Janine Weber, Mathematical Institute
Course Language: German
High Performance Computing (HPC) is about the efficient and fast execution of large simulations on modern supercomputers. In the lecture High Performance Computing for Advanced Students, advanced theoretical and practical aspects of HPC and parallel scientific computing are considered. Building upon the knowledge gained in the previous lecture Introduction to High Performance Computing, the focus will be on shared-memory parallel programming using OpenMP and on hybrid programming models that are ideally suited to modern supercomputers. By considering complex model problems from the field of the numerical solution of partial differential equations, concrete applications (also in the form of larger programming projects) will be implemented. Another focus of the lecture will be the increasingly important area of machine learning: the algebra required and the methods used for machine learning are ideally suited to execution on GPUs (graphics processing units) or accelerators.
[Seminar] Research Trends in Parallel and Distributed Systems
Prof. Dr. Stefan Wesner with Dr. Lutz Schubert, Institute of Computer Science
Course Language: German
In this seminar, a range of emerging topics in the field of parallel, heterogeneous computing (system architectures for current and future high performance computing systems) and distributed computing systems (e.g., Cloud and Edge computing) is offered, based on primary literature from major conferences and journals in the field.
The task for the participants is inspired by the process of writing a scientific publication. Starting from a review of the provided literature, each participant identifies additional relevant material, such as scientific publications but also tech reports from major vendors, to establish a good baseline of the state of the art and current developments. Based on a topic outline, a written report and an oral presentation as part of a full-day seminar are required to successfully pass the seminar.
We plan to publish selected reports as an open access seminar series.
[Lecture] Cloud and Edge Computing Systems
Prof. Dr. Stefan Wesner with Dr. Lutz Schubert, Institute of Computer Science
Course Language: German
The lecture is divided into three main parts.
The first part discusses key concepts and technologies of cross-organisational Cloud computing systems. Besides operating models and virtualization and container technologies, it discusses in particular elasticity, scalability, and why Cloud infrastructures are able to respond dynamically to changing requirements. Along an application use case, both the potential and the limitations of scalability are discussed.
In the second part of the lecture, data centre architecture and technology are presented, outlining how the capabilities presented in the first part can be realised. This covers data centre system design, data centre components, and software solutions that realise the Cloud properties discussed in the first part. It also covers cost considerations and performance benchmarking.
The third part of the lecture covers multi-Cloud and Edge Computing Systems and their particular capabilities and challenges.
Courses offered in WS 2022/2023
[Lecture] Heterogeneous and Parallel Computing
Prof. Dr. Stefan Wesner with Dr. Lutz Schubert, Institute of Computer Science
Course Language: German
The course will start with an overview of current processor systems and development trends in computer hardware towards increased heterogeneity and specialisation, driven by the need for more compute performance and increased energy efficiency. The first section of the course will provide a base knowledge of processor architecture from a performance perspective.
In the second section, the principles of parallelisation will be elaborated on all levels, from large-scale computing systems, such as high performance computing and clouds, down to multi- and many-core processors. This covers the principles of parallel programming and programming models, such as OpenMP, MPI, and Partitioned Global Address Space (PGAS), as well as their limitations, such as Amdahl's law and the impact of data locality.
The third section will address the specialisation of systems, ranging from embedded devices and multi-core systems to specialised co-processors, such as GPUs. The impact of specialisation on performance and energy efficiency, but also on programmability and portability, will be elaborated. Future trends towards completely heterogeneous setups on all levels will be examined and assessed.
The lecture will conclude with an outlook on how processors will likely develop in the future and what this means for the programmability and portability of software.
[Lecture] Introduction to High-Performance Computing
Dr. Janine Weber, Mathematical Institute
Course Language: German
The field of High-Performance Computing (HPC) deals with the efficient and fast execution of large simulations on modern supercomputers. In the lecture “Introduction to High-Performance Computing” the theoretical and practical basics of HPC and parallel scientific computing are covered. First, current parallel computer architectures are considered, from whose structure the necessity of two different types of parallelism (shared memory and distributed memory) arises. After basic computational operations such as matrix-vector and matrix-matrix multiplications, complex parallel numerical methods for solving systems of linear equations are introduced. Speedup, efficiency, and parallel scalability are employed as metrics for the quality of algorithms. Introductions to the concept of message passing using MPI and shared-memory parallel programming using OpenMP are given for practical implementations. In addition, various software packages that can be used for efficient parallel scientific computing will be introduced.
[Lecture] Performance Engineering
Dr.-Ing. Achim Basermann, Institute for Software Technology (DLR)
Course Language: German
Nowadays, the development of efficient software is relevant in almost all scientific, industrial, and social areas. Examples are aircraft or automotive design, weather forecasting, crisis management, and analyses of satellite or market data.
Software is efficient if it makes the best possible use of today's, usually parallel, computing resources.
To develop efficient software code, a basic understanding of possible hardware performance bottlenecks and relevant software optimization techniques is required. Code transformations enable the optimized use of computer resources.
This lecture covers a structured approach to software optimization through a model-based performance engineering approach. This approach enables incremental software optimization by considering software and hardware aspects. Even simple performance models like the Roofline model allow accurate runtime predictions and deep insights into optimized hardware usage.
After a brief introduction to parallel processor architectures and massively parallel computing on distributed-memory systems, this lecture covers model-based performance engineering for simple numerical operations such as sparse matrix-vector multiplication. For massively parallel computers with distributed memory, communication-hiding and communication-avoiding methods are presented. Finally, the importance of performance engineering for parallel software tools, e.g., from rocket engine or aircraft design and from analyses of Earth observation or space debris data, will be discussed.
[Seminar] Research Trends in Parallel and Distributed Systems
Prof. Dr. Stefan Wesner with Dr. Lutz Schubert, Institute of Computer Science
Course Language: German
In this seminar, a range of emerging topics in the field of parallel, heterogeneous computing (system architectures for current and future high performance computing systems) and distributed computing systems (e.g., Cloud and Edge computing) is offered, based on primary literature from major conferences and journals in the field.
The task for the participants is inspired by the process of writing a scientific publication. Starting from a review of the provided literature, each participant identifies additional relevant material, such as scientific publications but also tech reports from major vendors, to establish a good baseline of the state of the art and current developments. Based on a topic outline, a written report and an oral presentation as part of a full-day seminar are required to successfully pass the seminar.
We plan to publish selected reports as an open access seminar series.
[Seminar] Rendering and Simulation with Graphics Processors
PD Dr. Stefan Zellmann, Institute of Computer Science
Course Language: German