
Talk by Prof. Jacob B. Schroder — 14 May 2025

Date: May 14, 2025, 16:00–17:30

Speaker: Prof. Jacob B. Schroder 

Location:

• Weyertal 86–90, 50931 Cologne
• Mathematical Institute (Site Map UoC, OpenStreetMap)
• Seminar Room 1 (Room 0.05)

Title: Multigrid Reduction in Time: From PDE Applications to Machine Learning

Abstract:

The need for parallel-in-time algorithms is currently being driven by the rapidly changing nature of computer architectures. Future speedups will come from ever-increasing parallelism (e.g., more processing units or cores), not from faster clock speeds, which are stagnant. Previously, increasing clock speeds compensated for traditional sequential time-stepping algorithms as problem sizes grew. This is no longer the case, leading to a sequential time-integration bottleneck and the need to parallelize in time. In this talk, we examine an optimal-scaling parallel time-integration method, multigrid reduction in time (MGRIT). MGRIT applies multigrid to the time dimension by solving the (non)linear systems that arise when solving for multiple time steps simultaneously. The result is a versatile approach that is nonintrusive and wraps existing time-evolution codes. MGRIT allows for various time discretizations (e.g., Runge-Kutta and multistep methods) and for adaptive refinement/coarsening in time and space. Recent theoretical results, as well as practical results for a variety of PDE problems, will be presented, e.g., nonlinear diffusion, power-grid systems, advection, compressible Navier-Stokes, and chaotic problems. Lastly, a specific focus on the application of time-parallelism to deep learning is presented, where the layer dimension of deep neural networks is explicitly parallelized by an MGRIT method.
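The core idea can be illustrated with a toy two-level scheme. The sketch below is an illustration, not code from the talk: it runs a parareal-style iteration, which coincides with two-level MGRIT using F-relaxation, on the scalar test problem u' = λu. An "expensive" fine propagator is applied to all coarse intervals (this step is parallel in time), and a cheap sequential coarse propagator corrects the solution at the coarse time points.

```python
import math

# Toy parareal iteration for u' = lam * u on [0, T] -- a minimal sketch of
# the parallel-in-time idea (two-level MGRIT with F-relaxation reduces to parareal).
lam = -1.0
T, n_coarse, n_fine = 2.0, 10, 10    # 10 coarse intervals, 10 fine steps each
dT = T / n_coarse
dt = dT / n_fine

def fine_prop(u):
    # "expensive" fine propagator: n_fine backward-Euler steps of size dt
    for _ in range(n_fine):
        u = u / (1 - lam * dt)
    return u

def coarse_prop(u):
    # cheap coarse propagator: a single backward-Euler step of size dT
    return u / (1 - lam * dT)

u = [1.0] * (n_coarse + 1)           # initial guess at the coarse time points
for _ in range(5):                   # parareal / MGRIT iterations
    # fine sweeps over all intervals are independent -> parallel in time
    fine = [fine_prop(u[i]) for i in range(n_coarse)]
    new = [1.0]
    for i in range(n_coarse):        # sequential coarse-grid correction
        new.append(coarse_prop(new[i]) + fine[i] - coarse_prop(u[i]))
    u = new

print(abs(u[-1] - math.exp(lam * T)))  # small error vs. the exact solution
```

After a few iterations the coarse-point values agree with the sequential fine solution; the remaining gap to exp(λT) is the fine discretization error.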

SciML Talk by Prof. Dr. Roman Klinger — 22 May 2025

Date: May 22, 2025, 17:45–19:15

Speaker: Prof. Dr. Roman Klinger (University of Bamberg)

Location:

• Albertus-Magnus-Platz, 50923 Cologne
• University Main Building
• Lecture Hall XVIII (ground floor; the direct path is currently blocked due to a construction site)

Title: Adapting Language Models for the Analysis of Real World Textual Data: Train the model, change the prompt, or adapt the task?

The talk is organized by Prof. Dr. Nils Reiter.

SciML Talk by Prof. Dr. Stefan Kesselheim — 09 July 2025

Date: July 9, 2025, 16:00–17:30

Speaker: Prof. Dr. Stefan Kesselheim (Forschungszentrum Jülich)

Location:

• Zülpicher Str. 77, 50937 Cologne
• Institute of Physics II (building plan)
• Seminar Room II

Topic: Machine Learning in Science: From supervised learning to manifolds and probabilities

Abstract: TBA

Past Seminars

Talk by Dr.-Ing. Arnd Koeppe — 13 December 2024

Date: December 13, 2024, 12:00–13:30

Speaker: Dr.-Ing. Arnd Koeppe (Karlsruhe Institute of Technology)

Location:

• Innere Kanalstraße 15, 50823 Cologne
• Triforum (Google Maps, OpenStreetMap). Use north entrance next to Weinsbergstraße.
• Room 5.18 (left corridor coming from the staircase)

Title: Unifying Simulations, Research Data Management, and Artificial Intelligence

Abstract: Materials research progresses through iterative loops at multiple levels, from design, study, and optimization down to equilibrium iterations in physics-based simulations. Each level can be interpreted as a scientific workflow that enables efficient and guided investigations of research questions. These workflows exist in various forms, from traditional hand-written experimental protocols to software code and fully digitalized workflows supported by research data platforms like Kadi4Mat. Accelerating these loops is critical for advancing materials research, and Machine Learning (ML) and Artificial Intelligence (AI) offer powerful tools to enhance efficiency at all levels.

At the physics-based simulation level, iterative loops address, e.g., nonlinear material behavior and multi-scale phenomena. ML can considerably improve the performance of physics-based simulations by approximating complex nonlinear solutions, reducing computational costs, and enabling faster iteration. Neural networks, for instance, can be embedded into simulations as surrogate models, constitutive models, or hybrid approaches to achieve speed-ups while maintaining accuracy and flexibility. At the design, study, and optimization level, a tight integration of ML with Research Data Management (RDM) facilitates automated data preprocessing, pattern recognition, and hypothesis generation, enhancing the utility and interpretability of large datasets.

By integrating ML and AI into both simulations and RDM workflows, researchers can streamline their research processes and uncover insights more efficiently.

[1] A. Koeppe, F. Bamer, and B. Markert, "An intelligent nonlinear meta element for elastoplastic continua: deep learning using a new Time-distributed Residual U-Net architecture," Computer Methods in Applied Mechanics and Engineering, vol. 366, p. 113088, Jul. 2020.
[2] D. Rajagopal et al., "Data-Driven Virtual Material Analysis and Synthesis for Solid Electrolyte Interphases," Advanced Energy Materials, vol. 13, p. 2301985, Sep. 2023.

Talk by Prof. Dr. Uwe Naumann — 06 November 2024

Date: November 6, 2024, 16:00–17:30

Speaker: Prof. Dr. Uwe Naumann (RWTH Aachen University)

Location:

• Weyertal 86–90, 50931 Cologne
• Mathematical Institute (Google Maps, OpenStreetMap)
• Seminar Room 1 (Room 0.05)

Title: Differential Inversion

Abstract: Differential inversion denotes the computation of a Newton step for a differentiable function (the residual) with an invertible Jacobian. The residual is assumed to be implemented as a differentiable program, implying the applicability of algorithmic differentiation (AD).

Inspired by adjoint AD, the product of the inverse Jacobian with a vector can often be evaluated efficiently by a backpropagation-like algorithm. We distinguish between structural and symbolic approaches to reducing the computational cost of this method. The former aims to exploit sparsity of invertible local Jacobians for a given decomposition of the residual into differentiable elemental functions. A case study based on banded elemental Jacobians is discussed in [1]. The latter applies analytic insight into the mathematical properties of the residual. An application to differential inversion of the implicit Euler scheme is presented in [2]. A reduction of the computational cost by an order of complexity can be reported for both scenarios.
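As a minimal dense-algebra illustration of the Newton step itself (not the matrix-free method of [1]), the sketch below solves J(x) s = -r(x) for a small hand-coded residual. In practice, the product of the inverse Jacobian with a vector would be evaluated by the backpropagation-like algorithm described above, and the Jacobian would come from AD rather than by hand.

```python
import numpy as np

# Newton step for a residual r with invertible Jacobian J: solve J(x) s = -r(x).
# This is an illustrative example, not code from the talk or the cited papers.
def residual(x):
    # example residual: r(x) = [x0^2 - x1, x1 - 1]
    return np.array([x[0] ** 2 - x[1], x[1] - 1.0])

def jacobian(x):
    # hand-coded Jacobian; algorithmic differentiation would produce this
    return np.array([[2.0 * x[0], -1.0],
                     [0.0,        1.0]])

x = np.array([2.0, 2.0])
for _ in range(20):
    s = np.linalg.solve(jacobian(x), -residual(x))  # the "differential inversion"
    x = x + s

print(x)  # Newton iteration converges to the root [1, 1] of the residual
```

The structural and symbolic approaches discussed in the talk replace the dense `np.linalg.solve` with cheaper evaluations that exploit sparsity of the local Jacobians or analytic knowledge of the residual.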

[1] U. Naumann: A Matrix-Free Exact Newton Method. SIAM Journal on Scientific Computing 46 (3), A1423-A1440, 2024.
[2] U. Naumann: Differential Inversion of the Implicit Euler Method. Under Review. See also arXiv preprint arXiv:2409.05445, 2024.

Talk by Prof. Dr. Uwe Naumann — 17 April 2024

This talk had to be canceled.

Date: April 17, 2024, 12:00–13:30

Speaker: Prof. Dr. Uwe Naumann (RWTH Aachen University)

Location:

• Weyertal 86–90, 50931 Cologne
• Mathematical Institute (Google Maps, OpenStreetMap)
• Seminar Room 1 (Room 0.05)

Title: Derivatives and Applications for Learning Derivatives

Abstract: Surrogates for numerical simulations can be learned (by "machines") by sampling inputs to the simulation to obtain the desired outputs. Adequate matching of the latter is likely not sufficient for numerical methods (e.g., calibration or optimal control) to be applied to the simulations and hence to their surrogates. Consistency of simulation and surrogate up to a certain order of differentiation may be required: the "machine" needs to learn values as well as first and possibly higher derivatives. Algorithmic differentiation [1] helps to generate this data.

We discuss two rather different recent applications of differential machine learning (also known as Sobolev training). Reference [2] addresses pruning of oversized artificial neural networks based on interval adjoint significance analysis. A new exact matrix-free Newton method is presented in [3]; its extension to surrogates of appropriate structure is the subject of ongoing research.
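As a minimal illustration of the Sobolev-training idea (not the setup of [2] or [3]), the sketch below fits a cubic polynomial surrogate by least squares so that it matches both sampled values and sampled first derivatives of a target function, here sin(x).

```python
import numpy as np

# Sobolev fitting sketch: the design matrix stacks value rows and
# derivative rows, so the least-squares loss penalizes both mismatches.
xs = np.linspace(-1.0, 1.0, 9)
f, df = np.sin(xs), np.cos(xs)          # sampled values and first derivatives

# surrogate p(x) = sum_k c_k x^k, k = 0..3
V = np.vander(xs, 4, increasing=True)   # rows evaluate p(xs)
D = np.zeros_like(V)
for k in range(1, 4):
    D[:, k] = k * xs ** (k - 1)         # rows evaluate p'(xs)

A = np.vstack([V, D])                   # "Sobolev" design matrix
b = np.concatenate([f, df])
c, *_ = np.linalg.lstsq(A, b, rcond=None)

err_val = np.max(np.abs(V @ c - f))     # value mismatch of the surrogate
err_der = np.max(np.abs(D @ c - df))    # derivative mismatch of the surrogate
```

Dropping the derivative rows (`D`, `df`) recovers ordinary value-only regression; the Sobolev variant trades a slightly worse value fit for a surrogate whose derivatives are also trustworthy, which is what downstream calibration or optimal control needs.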

[1] Naumann: The Art of Differentiating Computer Programs. SIAM 2012.
[2] Kichler, Afghan, Naumann: Towards Sobolev Pruning. Under review for PASC'24. See also arXiv:2312.03510.
[3] Naumann: A Matrix-Free Exact Newton Method. To appear in SISC, SIAM. See also arXiv:2305.01669.