[Lecture] Machine Learning
This course introduces students to the fundamental concepts, techniques, and algorithms in machine learning. It covers the mathematical and theoretical foundations, supervised and unsupervised learning techniques, evaluation methods, and advanced aspects such as robustness, uncertainty, privacy, and fairness. Students will gain hands-on experience in implementing, training, and optimizing machine learning models using real-world datasets.
The tentative list of topics is as follows:
- Introduction
- Probabilistic Inference
- Trees and Forests
- Neighbor-based methods
- Linear models
- (Convex) Optimization
- Gradient-based Optimization
- SVMs
- Kernels
- Basics of Deep Learning: MLPs, CNNs, GNNs
- Dimensionality Reduction: PCA & t-SNE
- SVD & Matrix Factorization
- k-Means and GMMs
- Hierarchical Clustering
- Robustness
- Uncertainty
- Privacy
- Fairness
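As a taste of the neighbor-based methods on the topic list, here is a minimal k-nearest-neighbors classifier sketched in plain Python. The function name and the toy dataset are illustrative, not taken from the course materials:

```python
from collections import Counter
import math

def knn_predict(train_X, train_y, query, k=3):
    """Predict the label of `query` by majority vote among the
    k training points closest in Euclidean distance."""
    dists = sorted(
        (math.dist(x, query), y) for x, y in zip(train_X, train_y)
    )
    votes = Counter(y for _, y in dists[:k])
    return votes.most_common(1)[0][0]

# Toy 2-D dataset with two classes
train_X = [(0.0, 0.0), (0.1, 0.2), (1.0, 1.0), (0.9, 1.1)]
train_y = ["a", "a", "b", "b"]

print(knn_predict(train_X, train_y, (0.2, 0.1)))  # two of the three nearest points are class "a"
```

The method has no training phase at all; every prediction scans the stored data, which is exactly why the course pairs such baselines with questions of evaluation and efficiency.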
[Lecture] Mathematics of Data Science - An Introduction
With the continuously growing importance and widespread use of automated simulations, decision-making processes, and AI, new challenges arise in the analysis and processing of data. In particular, the growing complexity of the tasks and the amount of data require new and more efficient approaches from the fields of data science, data mining, and machine learning in general.
In this lecture, theoretical and algorithmic principles of modern data processing and analysis will be covered. The lecture focuses mainly, but not exclusively, on the literature given below. Among other topics, the following will be studied:
- Techniques for dimension reduction (singular value decomposition / PCA / robust PCA)
- Classical regression
- Clustering algorithms
- Classification with Support Vector Machines and Linear Discriminant Analysis
- Classification with Classification Trees and Random Forest
- Classical Neural Networks and an introduction to Deep Learning
- Introduction to Reinforcement Learning
- Reduced Order Models (ROM)
The focus will be on the algorithmic and mathematical underpinnings of the mentioned methods and their application-oriented implementation, and less on statistical methods, which are also part of data science.
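To illustrate the application-oriented flavor of the dimension-reduction topic, the sketch below approximates the first principal component of a covariance matrix via power iteration, using only the standard library. Matrix, function name, and parameters are illustrative assumptions, not part of the lecture's code:

```python
import math
import random

def power_iteration(A, iters=200, seed=0):
    """Approximate the dominant eigenvector of a symmetric matrix A
    (for a covariance matrix this is the first principal component)
    by repeated matrix-vector multiplication and normalization."""
    n = len(A)
    rng = random.Random(seed)
    v = [rng.random() for _ in range(n)]
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

# 2x2 covariance matrix with most variance along the first axis
A = [[4.0, 1.0],
     [1.0, 2.0]]
v = power_iteration(A)
print(v)  # points mostly along the first coordinate axis
```

In practice one would use a library SVD instead, but the iteration makes the "algorithmic feasibility" angle of the lecture concrete: the dominant direction emerges from nothing more than repeated multiplication.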
[Lecture] High Performance Computing for Machine Learning
High Performance Computing (HPC) is concerned with the efficient and fast execution of large simulations on modern supercomputers. It makes use of state-of-the-art technologies (such as GPUs, low-latency interconnects, etc.) to efficiently solve complex scientific and data-driven problems. One of the key factors behind the current success of machine learning models is the ability to perform calculations on modern computers with many model parameters and large amounts of training data. However, in their simplest form, current machine learning libraries make only limited use of the available HPC resources. The aim of this lecture is therefore to examine theoretical and practical aspects of the efficient training of machine learning, and in particular deep learning, models on modern HPC resources.
With this in mind, the first part of the lecture covers techniques typically used for the performance optimization of software on supercomputers. After a short introduction to HPC, we will deal specifically with GPUs (graphics processing units), various memory models, and performance optimization strategies, followed by a practical introduction to CUDA, a programming interface developed by Nvidia for GPU programming.
In the second part of the lecture, the techniques and concepts learned will be applied to the efficient training of Machine Learning and Deep Learning models. Different data- and model-parallel training methods for efficient training on GPUs will be demonstrated, both algorithmically and in practice, using various examples from applications.
[Seminar] Limitations of Large Language Models
This seminar explores the critical limitations of Large Language Models (LLMs) through the study of:
- Jailbreaking: how LLMs can be intentionally manipulated to bypass safeguards and restrictions, leading to unintended or unethical outputs.
- Hallucinations: the tendency of LLMs to generate confidently incorrect or fabricated information, undermining their reliability.
- Reasoning: gaps in logical coherence and contextual understanding that affect the models' ability to perform consistent and accurate reasoning.
- Scalability: challenges related to the increasing computational and environmental costs of training larger models, and the diminishing returns on performance improvements.
We will also examine other aspects that underscore the limitations of LLMs, providing a comprehensive perspective on their current capabilities and future directions.
[Seminar] Periodical Solutions in Mathematical Models for Neural Nets
As you read these lines, millions of neurons are generating electrical signals in your brain. The exchange – sending and receiving – of electrical signals between neurons creates vibrating nerve networks that perform complex oscillations. According to the latest findings in neuroscience, oscillations in brain activity play an important role in many of our brain's functions. They influence our attention, for example. Oscillations also play an important role in artificial neural networks. Artificial neural networks, which mimic networks of natural neurons, are successfully used in artificial intelligence. In this seminar we will get to know mathematical models for networks of artificial neurons with time-delayed interaction. The corresponding models consist of coupled nonlinear differential equations with time delay. Among other things, we will examine the problem of the existence and nonexistence of periodic solutions and the significance of negative coupling parameters in the emergence of oscillations.
[Seminar] Current Trends in Visualization
This seminar covers current research on the principles of information visualization and its application in practice. Topics include the visual design of graphs, regressions, and hierarchical as well as temporal data; the connection between machine learning and visualization; interaction; perception; and the evaluation of visualization techniques and their application in practice.
[Seminar] Methods of Mathematical Modeling in Life Sciences
The seminar will discuss recent work on applications of mathematical modeling in the life sciences. The focus is on current developments of machine learning and artificial intelligence methods for industrial problems in the fields of pharmaceuticals and agricultural sciences. This seminar discusses different aspects, such as the mathematical methodology behind each method, its computational complexity, and possible applications. In individual cases, publicly available methods will also be applied and the results discussed.
[Seminar] Mathematical Foundations of Natural Language Processing
This seminar deals with the mathematical basics of algorithmic language processing. The goal is to develop a solid understanding of methods used for processing natural language. If time permits, small applications from practice are demonstrated.
[Seminar] Seminar for Teachers at Grammar and Comprehensive Schools: AI Algorithms in Teaching
This seminar is targeted at student teachers who are interested in a realistic, youth-oriented teaching structure for the high-school level. It covers current algorithms used in Artificial Intelligence (AI) and Machine Learning (ML), specifically for regression and classification: different variants of neural networks, ChatGPT, the Nearest Neighbor algorithm, decision-tree-based algorithms, and more.
For these algorithms and mathematical models, teaching modules will be created that can supplement the current curricula. The lectures will present the required mathematical basics and a suitable didactic concept.