Research Seminar "Machine Learning Theory"
This is the research seminar of Ulrike's group.

When and where
Each Thursday, 14:00 - 15:00, seminar room on the 2nd floor, MvL6.

What
Most sessions take the form of a reading group: everybody reads the assigned paper before the meeting, and we then discuss it together. Sometimes we also have talks by guests or members of the group.

Who
PhD students and researchers of the University of Tübingen. We do not mind people dropping in and out depending on whether they find the current session interesting.

Upcoming meetings
- 24.10.2024 Paper discussion (Robi): Towards Monosemanticity: Decomposing Language Models With Dictionary Learning, 2023. link
- 31.10.2024 Paper discussion: Least Squares Regression Can Exhibit Under-Parameterized Double Descent, by Xinyue Li and Rishi Sonthalia, NeurIPS 2024
- 7.11.2024 Talk by Hidde Fokkema (University of Amsterdam): title and abstract to come.
- 14.11.2024 MSc defense of Anna Vollweiter; afterwards: quick discussion about our GitHub page
- 21.11.2024 (15:00!!! 2nd floor glassroom) Discussion: Counterexamples in explainability (everybody please update the Google doc)
- 28.11.2024 (paper discussion, who?) Angelina Wang, Sayash Kapoor, Solon Barocas, Arvind Narayanan: Against Predictive Optimization: On the Legitimacy of Decision-making Algorithms That Optimize Predictive Accuracy. ACM Journal of Responsible Computing (!!!) 2024 pdf
- Monday (!) 2.12.2024, 14:30 - 15:30 (glassroom 2nd floor), Talk by Nil Ayday (TU Munich). Title: Generalisation Error for Semi-Supervised Learning Using Graph Neural Networks. Abstract: Graph Neural Networks (GNNs) have become powerful tools for modeling complex relationships in graph-structured data across various domains. The success of GNNs comes from their message-passing mechanism, which allows information to propagate through the graph structure, enabling each node to aggregate information from its neighbors. This process uses graph information (the connections between nodes) and node features (attributes specific to each node), leading to representations that can be used for various tasks. In this presentation, we investigate how much the graph and the node features each contribute to the predictions of GNNs in a semi-supervised learning setting. We derive the exact generalization error for linear GNNs under a theoretical framework in which node features and the graph convolution are partial spectral observations of the underlying data. We use the generalization error to evaluate the learning capabilities of Graph Convolutional Networks (GCNs), a specific type of GNN that employs graph convolution operations. A key insight from our analysis is that GCNs fail to utilize graph and feature information when the two are not aligned. We conclude with ongoing work on extending our analysis to other GNNs and graph attention mechanisms, and on developing architectures that better exploit graph and feature information. (For a minimal illustration of the graph convolution operation, see the sketch after this list.)
- 12.12.2024 No reading group (NeurIPS)
- 19.12.2024 Christmas coffee?
- 9.1.2025 tba (possibly discuss long-term teaching plans: WS 2025/26 algorithms, SS 2026 SML, WS 2026/27 maths for ML)
- 16.1.2025 tba
- 23.1.2025 tba
- 30.1.2025 no reading group (many of us in Oberwolfach)
- 6.2.2025 tba
- 13.2.2025 (last in-person meeting before Ulrike leaves)
- 20.2.2025 no reading group
- 27.2.2025 (hybrid), tba
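To make the graph convolution mentioned in Nil's abstract concrete, here is a minimal sketch of a single linear GCN layer H' = Â H W, where Â is the adjacency matrix with self-loops, symmetrically normalized by node degrees. This is our own illustration under standard GCN assumptions, not code from the talk; the function name linear_gcn_layer and the toy data are hypothetical.

```python
import numpy as np

def linear_gcn_layer(adj, features, weights):
    """One linear GCN layer: H' = D^{-1/2} (A + I) D^{-1/2} @ H @ W.

    adj:      (n, n) adjacency matrix of the graph
    features: (n, d) node feature matrix H
    weights:  (d, k) weight matrix W (here random, for illustration)
    """
    adj_loops = adj + np.eye(adj.shape[0])          # add self-loops
    deg = adj_loops.sum(axis=1)                     # degrees incl. self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))        # D^{-1/2}
    adj_norm = d_inv_sqrt @ adj_loops @ d_inv_sqrt  # normalized adjacency
    return adj_norm @ features @ weights            # aggregate, then transform

# Toy usage: a 3-node path graph with 2-dimensional node features.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
H = np.random.randn(3, 2)
W = np.random.randn(2, 2)
print(linear_gcn_layer(A, H, W))  # each node now mixes its neighbors' features
```

Stacking such layers (with nonlinearities in between) yields a full GCN; the linear setting analyzed in the talk corresponds to omitting the nonlinearity.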
Past meetings
Listed here.

Suggested papers for future meetings
Feel free to make suggestions! If you do, please (i) select short conference papers rather than 40-page journal papers; (ii) put your name next to your suggestion (this does not mean you have to present the paper, but it tells us where the suggestion comes from); (iii) provide a link, not just a title.
- Why do random forests work? Understanding tree ensembles as self-regularizing adaptive smoothers. pdf (Ulrike)
- Robust Explanation for Free or At the Cost of Faithfulness. ICML 2023. link (Ulrike)
- Trade-off Between Efficiency and Consistency for Removal-based Explanations, NeurIPS 2023 link (Ulrike)
- Locally Invariant Explanations: Towards Stable and Unidirectional Explanations through Local Invariant Learning link (Ulrike)
- Getting Aligned on Representational Alignment, 2023 pdf (David)
- On Provable Copyright Protection for Generative Models, ICML 2023 pdf (Peru)
- Causal Abstractions of Neural Networks, NeurIPS 2021, pdf (Gunnar)
- A theory of interpretable approximations, COLT 2024, pdf (Gunnar)
- Benign overfitting in ridge regression, by Alexander Tsigler and Peter Bartlett pdf
- Infinite Limits of Multi-head Transformer Dynamics, NeurIPS 2024 pdf (Moritz)