Duration: 2 semesters | Frequency of offer: each year, can be started in winter or summer semester | Credit points: 8
Course of studies, specific field and terms: - Master Robotics and Autonomous Systems 2019 (compulsory), Compulsory courses, 1st and 2nd semester
Classes and lectures: - CS4295-V: Deep Learning (lecture, 2 SWS)
- CS4295-Ü: Deep Learning (exercise, 1 SWS)
- CS4575-V: Sequence Learning (lecture, 2 SWS)
- CS4575-Ü: Sequence Learning (exercise, 1 SWS)
Workload: - 120 hours work on project
- 120 hours private studies
- 60 hours in-classroom exercises
- 60 hours in-classroom work
Contents of teaching: - Foundations and Deep Learning Basics (Learning Paradigms, Classification and Regression, Underfitting and Overfitting)
- Shallow Neural Networks (Basic Neuron Model, Multilayer Perceptrons, Backpropagation, Computational Graphs, Universal Approximation Theorem, No-Free-Lunch Theorems, Inductive Biases; see the backpropagation sketch after this list)
- Optimization (Stochastic Gradient Descent, Momentum Variants, Adaptive Optimizers; see the momentum sketch after this list)
- Convolutional Neural Networks (1D Convolution, 2D Convolution, 3D Convolution, ReLUs and Variants, Down- and Upsampling Techniques, Transposed Convolution)
- Regularization (Early Stopping, L1 and L2 Regularization, Label Smoothing, Dropout Strategies, Batch Normalization; see the dropout sketch after this list)
- Very Deep Networks (Highway Networks, Residual Blocks, ResNet Variants, DenseNets)
- Dimensionality Reduction (PCA, t-SNE, UMAP, Autoencoder)
- Generative Neural Networks (Variational Autoencoder, Generative Adversarial Networks, Diffusion Models)
- Graph Neural Networks (Graph Convolutional Networks, Graph Attention Networks)
- Fooling Deep Neural Networks (Adversarial Attacks, White Box and Black Box Attacks, One-Pixel Attacks)
- Physics-Aware Deep Learning (Physical Knowledge as Inductive Bias, PINN, PhyDNet, Neural ODE, FINN)
- Introduction to Sequence Learning (Formalisms, Metrics, Recapitulation of Relevant Machine Learning Techniques)
- Recurrent Neural Networks (Simple RNN Models, Backpropagation Through Time)
- Gated Recurrent Networks (Vanishing Gradient Problem in RNNs, Long Short-Term Memories, Gated Recurrent Units, Stacked RNNs)
- Important Techniques for RNNs (Teacher Forcing, Scheduled Sampling, h-Detach)
- Bidirectional RNNs and related concepts
- Hierarchical RNNs and Learning on Multiple Time Scales
- Online Learning and Learning without BPTT (Real-Time Recurrent Learning, e-Prop, Forward Propagation Through Time)
- Reservoir Computing (Echo State Networks, Deep ESNs; see the echo state network sketch after this list)
- Spiking Neural Networks (Spiking Neuron Models, Learning in SNNs, Neuromorphic Computing, Recurrent SNNs)
- Temporal Convolutional Networks (Causal Convolution, Temporal Dilation, TCN-ResNets; see the causal convolution sketch after this list)
- Introduction to Transformers (Sequence-to-Sequence Learning, Basics on Attention, Self-Attention and the Query-Key-Value Principle, Large Language Models; see the self-attention sketch after this list)
- State Space Models (Structured State Space Sequence Models, Mamba)
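
Backpropagation sketch: a minimal, hypothetical NumPy example of backpropagation through a one-hidden-layer network, written out as the chain rule over the forward computational graph. The toy task, shapes, and constants are illustrative assumptions, not course material.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(64, 3))               # 64 samples, 3 features
    y = np.sin(X.sum(axis=1, keepdims=True))   # toy regression target

    W1 = rng.normal(scale=0.5, size=(3, 8)); b1 = np.zeros(8)
    W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
    lr = 0.1

    for step in range(500):
        # forward pass: each line is one node of the computational graph
        z1 = X @ W1 + b1        # linear layer
        h1 = np.tanh(z1)        # nonlinearity
        yhat = h1 @ W2 + b2     # linear output
        loss = np.mean((yhat - y) ** 2)

        # backward pass: traverse the graph in reverse, applying the chain rule
        d_yhat = 2 * (yhat - y) / len(X)          # dL/dyhat
        dW2 = h1.T @ d_yhat; db2 = d_yhat.sum(axis=0)
        d_h1 = d_yhat @ W2.T                      # dL/dh1
        d_z1 = d_h1 * (1 - h1 ** 2)               # tanh'(z) = 1 - tanh(z)^2
        dW1 = X.T @ d_z1; db1 = d_z1.sum(axis=0)

        # plain gradient descent update
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2

    print(f"final MSE: {loss:.4f}")

Automatic differentiation in deep learning frameworks performs exactly this reverse traversal of the recorded forward graph.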
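
Momentum sketch: a hypothetical comparison of vanilla gradient descent and heavy-ball momentum on an ill-conditioned quadratic f(w) = 0.5 * w^T A w; setting beta = 0 recovers the vanilla update. All constants are illustrative assumptions.

    import numpy as np

    A = np.diag([1.0, 25.0])      # ill-conditioned quadratic bowl

    def grad(w):
        return A @ w              # gradient of f(w) = 0.5 * w^T A w

    def sgd(w, lr=0.03, beta=0.0, steps=100):
        v = np.zeros_like(w)
        for _ in range(steps):
            v = beta * v + grad(w)   # momentum accumulates past gradients
            w = w - lr * v
        return w

    w0 = np.array([1.0, 1.0])
    print("vanilla :", sgd(w0))            # slow along the flat direction
    print("momentum:", sgd(w0, beta=0.9))  # faster progress along it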
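
Dropout sketch: a hypothetical implementation of inverted dropout at training time. Units are zeroed with probability p and the survivors are rescaled by 1/(1 - p) so the expected activation is unchanged, which makes the layer an identity at inference. Names and defaults are illustrative assumptions.

    import numpy as np

    def dropout(h, p=0.5, training=True, rng=np.random.default_rng(0)):
        if not training or p == 0.0:
            return h                        # identity at inference time
        mask = rng.random(h.shape) >= p     # keep each unit with prob 1 - p
        return h * mask / (1.0 - p)         # rescale to preserve expectation

    h = np.ones((2, 4))
    print(dropout(h))    # surviving entries become 2.0, the rest 0.0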
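
Echo state network sketch: a hypothetical minimal reservoir computer. The recurrent weights stay fixed and random, rescaled to a spectral radius below 1 for the echo state property; only the linear readout is trained, here by ridge regression on one-step-ahead prediction of a sine wave. Sizes and constants are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    n_res, washout, ridge = 200, 50, 1e-6

    # fixed random reservoir, rescaled to spectral radius 0.9
    W = rng.normal(size=(n_res, n_res))
    W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()
    W_in = rng.uniform(-0.5, 0.5, size=(n_res, 1))

    # drive the reservoir with the input sequence (no training here)
    u = np.sin(0.2 * np.arange(1000))[:, None]
    states = np.zeros((len(u), n_res))
    x = np.zeros(n_res)
    for t in range(len(u) - 1):
        x = np.tanh(W @ x + W_in @ u[t])
        states[t + 1] = x                  # states[t+1] has seen u[0..t]

    # train only the readout, discarding the initial washout transient
    S, Y = states[washout:], u[washout:]   # predict u[t] from u[0..t-1]
    W_out = np.linalg.solve(S.T @ S + ridge * np.eye(n_res), S.T @ Y)
    print("train MSE:", np.mean((S @ W_out - Y) ** 2))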
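
Causal convolution sketch: a hypothetical NumPy implementation of causal 1D convolution with temporal dilation, the core operation of a TCN. Left padding guarantees that y[t] depends only on x[t], x[t-d], x[t-2d], and so on, never on future inputs. Function and parameter names are assumptions.

    import numpy as np

    def causal_dilated_conv1d(x, kernel, dilation=1):
        k = len(kernel)
        pad = (k - 1) * dilation                # left-pad to keep causality
        xp = np.concatenate([np.zeros(pad), x])
        y = np.zeros(len(x))
        for t in range(len(x)):
            # taps reach strictly backwards in strides of `dilation`
            taps = xp[t + pad - np.arange(k) * dilation]
            y[t] = taps @ kernel
        return y

    x = np.arange(8, dtype=float)
    print(causal_dilated_conv1d(x, np.array([1.0, 1.0])))              # x[t] + x[t-1]
    print(causal_dilated_conv1d(x, np.array([1.0, 1.0]), dilation=2))  # x[t] + x[t-2]

Stacking such layers with exponentially growing dilation lets the receptive field cover long histories with few layers.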
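
Self-attention sketch: a hypothetical single-head scaled dot-product self-attention in NumPy, following the query-key-value principle: inputs are projected to queries, keys, and values, and each output is an attention-weighted mixture of the values. Dimensions and weight names are illustrative assumptions.

    import numpy as np

    def softmax(z, axis=-1):
        z = z - z.max(axis=axis, keepdims=True)   # numerical stability
        e = np.exp(z)
        return e / e.sum(axis=axis, keepdims=True)

    def self_attention(X, Wq, Wk, Wv):
        Q, K, V = X @ Wq, X @ Wk, X @ Wv          # project to Q, K, V
        scores = Q @ K.T / np.sqrt(Q.shape[-1])   # scaled pairwise similarity
        weights = softmax(scores, axis=-1)        # each row sums to 1
        return weights @ V                        # mixture of values

    rng = np.random.default_rng(0)
    T, d_model, d_k = 5, 8, 4                     # sequence length, widths
    X = rng.normal(size=(T, d_model))
    Wq, Wk, Wv = (rng.normal(size=(d_model, d_k)) for _ in range(3))
    print(self_attention(X, Wq, Wk, Wv).shape)    # -> (5, 4)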
Qualification-goals/Competencies: - Students gain a fundamental understanding of deep learning basics such as backpropagation, computational graphs, and automatic differentiation
- Students understand the implications of inductive biases
- Students gain a comprehensive understanding of the most relevant deep learning approaches
- Students learn to analyze the challenges in deep learning tasks and to identify well-suited approaches to solve them
- Students will understand the pros and cons of various deep learning models
- Students know how to analyze models and results, improve model parameters, and interpret model predictions and their relevance
- Students gain a comprehensive understanding of the most relevant sequence learning approaches
- Students learn to analyze the challenges in sequence learning tasks and to identify well-suited approaches to solve them
- Students will understand the pros and cons of various sequence learning models
- Students can implement common and custom sequence learning models for time series analysis, classification, and forecasting
- Students know how to analyze models and results, improve model parameters, and interpret model predictions and their relevance
Grading through: - Written or oral exam as announced by the examiner
Responsible for this module / Teachers: - Staff members of the institute
- Prof. Dr. Sebastian Otte
Literature: - Goodfellow, I., Bengio, Y., & Courville, A.: Deep Learning - MIT Press (2016), ISBN 978-0262035613
- Nakajima, K., & Fischer, I.: Reservoir Computing: Theory, Physical Implementations, and Applications - Springer Nature Singapore (2021), ISBN 978-9811316869
- Sun, R., & Giles, C.: Sequence Learning: Paradigms, Algorithms, and Applications - Springer Berlin Heidelberg (2001), ISBN 978-3540415978
- Bishop, C. M.: Pattern Recognition and Machine Learning - Springer (2006), ISBN 978-0387310732
- Sutton, R., & Barto, A.: Reinforcement Learning: An Introduction - The MIT Press (2018), ISBN 978-0262039246
- François-Lavet, V., Henderson, P., Islam, R., Bellemare, M., & Pineau, J.: An Introduction to Deep Reinforcement Learning - Now Publishers Inc (2018), ISBN 978-1680835380
- Recent publications on related topics
Language:
Notes: Admission requirements for taking the module: - None
Admission requirements for participation in module examination(s): - Successful completion of exercise assignments as specified at the beginning of the semester
Module Exam(s): - CS4295-L1: Deep Learning, exam, 90 min, 50% of the module grade
- CS4575-L1: Sequence Learning, exam, 90 min, 50% of the module grade
Last change: 7.2.2024