Prof Ambedkar Dukkipati

IISc Bangalore

Synopsis of Research Activities:

Department of Computer
Science and Automation
IISc Bangalore

Prof. Ambedkar Dukkipati heads the Statistics and Machine Learning group at the Department of Computer Science and Automation, IISc. His research interests include statistical network analysis, network representation learning, spectral graph methods, machine learning in low data regime, sequential decision-making under uncertainty and deep reinforcement learning.

Prof Debarghya Ghoshdastidar

TU Munich

Synopsis of Research Activities:

Technical University of Munich
Department of Informatics
& Munich Data Science Institute

Debarghya Ghoshdastidar works on the statistical theory of machine learning. The goal of his research is to provide a mathematical understanding of machine learning and deep learning models, thereby grounding black-box AI tools in formal statistical principles. He is particularly interested in understanding how AI learns from unlabeled data, for instance in transductive inference and in unsupervised learning problems. His work uses concepts from optimization, probability and high-dimensional statistics to rigorously explain the behavior and performance of graph neural networks, kernel machines and spectral algorithms. Apart from his primary research in machine learning theory, Debarghya Ghoshdastidar also works on statistical tools for network analysis and collaborates on learning problems arising in physics.


Dr Amit Sharma

Microsoft Research Bangalore

Synopsis of Research Activities:

Senior Researcher
Microsoft Research India, Bangalore

Amit Sharma is a Senior Researcher at Microsoft Research India. His work bridges causal inference techniques with data mining and machine learning, with the goal of making machine learning models generalize better, be explainable and avoid hidden biases. To this end, Amit has co-led the development of the open-source DoWhy library for causal inference and the DiCE library for counterfactual explanations of machine learning models. His work has received many awards, including a Best Paper Award at CHI 2021 (ACM Conference on Human Factors in Computing Systems), a Best Paper Honorable Mention Award at CSCW 2016 (ACM Conference on Computer Supported Cooperative Work and Social Computing), the 2012 Yahoo! Key Scientific Challenges Award, and the 2009 Honda Young Engineer and Scientist Award. Amit received his Ph.D. in computer science from Cornell University and his B.Tech. in Computer Science and Engineering from the Indian Institute of Technology (IIT) Kharagpur.
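The counterfactual-explanation idea behind a library like DiCE can be sketched in a few lines of plain Python. This toy is an illustration only and uses none of the DiCE API; the linear model, the feature meanings and the greedy search strategy are all assumptions made for the example:

```python
# Toy counterfactual-explanation search (illustrative; NOT the DiCE API):
# given a linear classifier and an input with an unfavorable prediction,
# nudge the most influential feature until the decision flips.

def predict(w, b, x):
    """Linear classifier: 1 (favorable) if w.x + b >= 0, else 0."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else 0

def counterfactual(w, b, x, step=0.1, max_steps=1000):
    """Greedily change the single highest-weight feature until the
    prediction flips; returns the counterfactual input, or None."""
    x_cf = list(x)
    for _ in range(max_steps):
        if predict(w, b, x_cf) == 1:
            return x_cf
        i = max(range(len(w)), key=lambda j: abs(w[j]))
        x_cf[i] += step if w[i] > 0 else -step
    return None

w, b = [2.0, -1.0], -3.0   # hypothetical model: 2*income - debt - 3
x = [1.0, 0.5]             # rejected applicant: 2*1 - 0.5 - 3 < 0
x_cf = counterfactual(w, b, x)
print(predict(w, b, x), predict(w, b, x_cf))   # 0 1
```

The returned `x_cf` tells the applicant which feature change would have flipped the decision, which is exactly the "counterfactual explanation" contract, though real libraries also enforce proximity and diversity constraints.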

Prof Benjamin Gess

Max Planck Institute for Mathematics in the Sciences Leipzig

Synopsis of Research Activities:

Faculty of Mathematics, Bielefeld University & Max Planck Institute for Mathematics in the Sciences, Leipzig

Optimization in machine learning
Our research addresses the convergence of stochastic optimization algorithms, such as stochastic gradient descent, in machine learning. In this context, the optimization problems are typically non-convex and non-smooth, posing challenging mathematical questions in proving quantified convergence estimates.
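A minimal sketch of the object of study, with all details (the double-well objective and the Gaussian noise standing in for minibatch sampling) chosen for illustration rather than taken from the text:

```python
import random

# Stochastic gradient descent on a non-convex objective:
# the double well f(x) = (x^2 - 1)^2, with gradient 4x(x^2 - 1).
# Gaussian noise models the randomness of minibatch gradients.

def stochastic_grad(x, noise_scale=0.1):
    return 4 * x * (x * x - 1) + random.gauss(0.0, noise_scale)

def sgd(x0, lr=0.01, steps=5000):
    x = x0
    for _ in range(steps):
        x -= lr * stochastic_grad(x)
    return x

random.seed(0)
x_final = sgd(x0=0.5)
print(x_final)   # close to 1.0, one of the two local minimizers
```

Quantifying how fast, and to which minimizer, such noisy dynamics converge is exactly the kind of question the non-convex, non-smooth setting makes hard.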

Scaling limits in machine learning and interacting systems
Universal behavior in scaling limits of large system size (so-called overparametrization), large batch size and small learning rate could serve as a guiding principle in understanding the dynamics and generalization of machine learning. The investigation and identification of such scaling limits is part of our research interest.

Statistical physics
Large neural networks can be understood as large systems of interacting particles. Stochastic optimization algorithms lead to stochastic dynamics on these particle systems. This interpretation shows resemblance to concepts of statistical physics and their scaling limits.

Stochastic dynamics
Employing techniques from the field of stochastic dynamics to stochastic optimization algorithms may help to better understand their qualitative and quantitative performance. We aim to transfer knowledge from this classical mathematical field to the field of machine learning.

(Stochastic) partial differential equations
Scaling limits of large networks lead to continuum descriptions of optimization dynamics. In the case of stochastic optimization, these dynamics are stochastic, leading to scaling limits in terms of stochastic partial differential equations, posing a variety of mathematical and applied challenges.
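One well-known instance of such a continuum description, cited here only as background from the mean-field literature on two-layer networks and not as a claim of this synopsis: as the number of neurons tends to infinity, (stochastic) gradient descent training converges to a gradient flow on the space of parameter distributions,

```latex
\partial_t \rho_t = \nabla_\theta \cdot \bigl( \rho_t \, \nabla_\theta \Psi(\theta; \rho_t) \bigr),
\qquad
\Psi(\theta; \rho) = V(\theta) + \int U(\theta, \theta') \, \rho(\mathrm{d}\theta'),
```

where V captures the interaction of a single neuron with the data and U the pairwise interaction between neurons. Retaining the sampling noise at finite system size adds a fluctuating term to this equation, which is one route to the stochastic partial differential equations mentioned above.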

Dr Praneeth Netrapalli

Google Research Bangalore

Synopsis of Research Activities:

Research Scientist

Praneeth Netrapalli’s research interests are in the areas of reliable machine learning and optimization. In the first topic, his research has identified reasons for the brittleness of neural networks in practice and is currently focused on developing training procedures that yield more robust models. In the second topic, his research focuses on understanding stochastic and nonconvex optimization algorithms and extending them to new and important settings, such as optimization with dependent data (as in time series and reinforcement learning), minimax optimization and quantum optimization.
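As a self-contained illustration of why minimax optimization is a setting of its own (this is a textbook phenomenon, not Praneeth Netrapalli's specific method): on the bilinear saddle problem f(x, y) = x*y, plain gradient descent-ascent spirals away from the saddle point (0, 0), while the extragradient method converges to it.

```python
# Gradient descent-ascent (GDA) vs. extragradient on f(x, y) = x * y.
# The minimizer controls x, the maximizer controls y; (0, 0) is the saddle.

def gda(x, y, lr=0.1, steps=2000):
    for _ in range(steps):
        x, y = x - lr * y, y + lr * x
    return x, y

def extragradient(x, y, lr=0.1, steps=2000):
    for _ in range(steps):
        xh, yh = x - lr * y, y + lr * x    # lookahead half-step
        x, y = x - lr * yh, y + lr * xh    # update using lookahead gradients
    return x, y

gx, gy = gda(1.0, 1.0)
ex, ey = extragradient(1.0, 1.0)
print(gx * gx + gy * gy > 2.0)    # True: GDA moves away from (0, 0)
print(ex * ex + ey * ey < 1e-3)   # True: extragradient converges to (0, 0)
```

The single lookahead step is what breaks the rotation that makes GDA diverge; designing and analyzing such corrections is a core theme of minimax optimization research.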

Dr Caterina De Bacco

Max Planck Institute for Intelligent Systems Stuttgart

Synopsis of Research Activities:

Cyber Valley, Max Planck Institute
for Intelligent Systems

The main research goal is understanding, optimizing and predicting relations between the microscopic and macroscopic properties of complex large-scale interacting systems. This is done by addressing application-oriented problems of inference and optimization on networks via models and algorithms derived from statistical physics principles. Two main motivations lie behind this interest. The first is the pressing need for theory to be grounded in concrete applications in order to solve relevant scientific problems rigorously, improving both methodological and domain-specific knowledge. The second is that, in recent years, statistical physics, probabilistic modelling and machine learning have provided new insights and novel approaches to problems across a broad range of disciplines.

Our research approach is a combination of developing theoretically grounded models, assessing their properties and limitations, and tackling interdisciplinary applications involving domain experts from other disciplines, in particular social science. One of our main research objectives is in fact to make rigorous models accessible to practitioners who may not have strong mathematical expertise but do have domain knowledge in other disciplines. To facilitate this, we always release open-source implementations of our code online.

Dr Siddharth Barman

IISc Bangalore

Synopsis of Research Activities:

Associate Professor Department of Computer Science and Automation, IISc Bangalore

Siddharth’s research spans algorithmic and foundational aspects of Microeconomics, Machine Learning, and Artificial Intelligence. Across these fields, he focuses on theoretical problems that capture real-world applications. His current research focus lies in Algorithmic Game Theory (AGT) and, in particular, addresses fairness in AI and resource-allocation settings. The list below highlights some of Siddharth’s recent contributions:

- Discrete Fair Division: A recent joint work of Siddharth establishes that, in the context of allocating indivisible goods, economic efficiency is not sacrificed by imposing fairness. This result is conceptually surprising since it shows that the seemingly incompatible properties of fairness and economic efficiency can be achieved together in pseudo-polynomial time. Interestingly, this work employs a novel design paradigm wherein a microeconomic result stands in the service of algorithm design.

This work has the potential for direct impact in terms of guiding allocation policies in resource-allocation settings. Siddharth has established multiple other results in discrete fair division that consider, for instance, strategic agents, partial-information settings, and complementary notions of fairness.
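For concreteness, one classical routine from discrete fair division can be sketched in a few lines. This is the well-known round-robin allocation, which produces allocations that are envy-free up to one good (EF1); it is an illustration of the area, not the pseudo-polynomial algorithm described above:

```python
# Round-robin allocation of indivisible goods: agents take turns picking
# their favorite remaining good.  The result is envy-free up to one good
# (EF1): any envy disappears after removing a single good from the
# envied bundle.

def round_robin(valuations):
    """valuations[i][g] is agent i's value for good g."""
    n, m = len(valuations), len(valuations[0])
    remaining = set(range(m))
    bundles = [[] for _ in range(n)]
    turn = 0
    while remaining:
        i = turn % n
        g = max(remaining, key=lambda g: valuations[i][g])  # favorite good
        bundles[i].append(g)
        remaining.remove(g)
        turn += 1
    return bundles

vals = [[5, 3, 2, 1],    # hypothetical valuations for 2 agents, 4 goods
        [4, 4, 1, 6]]
print(round_robin(vals))   # [[0, 1], [3, 2]]
```

Round-robin guarantees EF1 but not economic efficiency; reconciling fairness with efficiency, as in the work above, requires substantially more machinery.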

- Fairness Audit of Classifiers: Siddharth has also worked on developing a framework to audit as well as mitigate bias in classification settings. This work can be applied in settings wherein automated (and data-driven) tools are used to make medical or insurance-related decisions.

Prof Gitta Kutyniok

Ludwig Maximilian University of Munich

Synopsis of Research Activities:

Bavarian AI Chair on “Mathematical Foundations of Artificial Intelligence”

Gitta Kutyniok’s research work is at the interface of mathematics and computer science. One of her current main goals is to develop a theoretical foundation for deep learning. Some of her most well-known contributions are in expressivity, namely the approximation power of deep neural networks; for instance, she provided an analysis and construction of memory-optimal deep neural networks using classical approximation theory. The remarkable generalization ability of deep neural networks is another of her research interests; for graph convolutional neural networks, for instance, she fully explained this behavior in the setting of graphs describing the same phenomenon. Explainability, namely opening the black box of deep learning, is also among her research directions; one of her contributions is the first theoretically founded approach to the explainability of deep neural networks using information theory. Recently, she has also become interested in the robustness of deep neural networks, primarily targeted at applications in robotics.

A second main research direction is the development of theoretically founded approaches to mathematical problems such as inverse problems in imaging science and the numerical analysis of partial differential equations. Related to the first topic, and building on her previous work in imaging science, foremost the introduction of the directional multiscale system of shearlets, which is now exploited by various research groups worldwide, she aims to optimally combine model- and AI-based approaches for specific application settings such as computed tomography. In this vein, she, for instance, developed a state-of-the-art algorithm for the limited-angle computed tomography problem using a combination of deep neural networks and sparse regularization by shearlets. Related to the second topic, she focuses on developing a theoretical understanding of the ability of deep neural networks to beat the curse of dimensionality, with one of her contributions being a theoretical and numerical analysis of using deep neural networks to solve parametric partial differential equations.

Prof Sunita Sarawagi

IIT Bombay

Synopsis of Research Activities:

Computer Science and Engineering & Center for Machine Intelligence and Data Science, IIT Bombay

Neural models for sequence prediction, with applications to dialog generation, translation, grammar correction and semantic parsing.

Domain Adaptation, Domain Generalization and model calibration.

Forecasting models for temporal sequences

Dr Krikamol Muandet

Max Planck Institute for Intelligent Systems Stuttgart

Synopsis of Research Activities:

Max Planck Institute for
Intelligent Systems

"How can we build machines that can learn to generalize in the real world from past observations?" My research aims to answer this question by addressing three challenges. First, real-world generalization must cope with changes not only in the observed data but also in the probability distributions that generate them, i.e., distributional shift. Second, understanding cause-effect relationships enables machines to generalize better in the real world and is a prerequisite for consequential decision making; however, it requires experimental data, which can be expensive, time-consuming or even unethical to collect, so machine learning models are mostly trained on non-experimental data alone. Lastly, the omnipresence of machine learning systems and the scarcity of resources in society can create complex human-machine interactions that may hinder the deployment of these systems.
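One classical answer to the first challenge, covariate shift, can be sketched concretely. The setup below (Gaussian input distributions, a threshold classifier) is an assumption made for illustration, not taken from the synopsis: reweighting each training loss by p_test(x) / p_train(x) turns the training sample into a consistent estimate of the test risk.

```python
import math
import random

# Importance weighting under covariate shift: train inputs ~ N(0, 1),
# test inputs ~ N(1, 1), shared labeling rule y = (x > 0.5).
# Weighting each training loss by p_test(x) / p_train(x) estimates
# the *test* risk from training data alone.

def gaussian_pdf(x, mu, sigma=1.0):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

random.seed(0)
train_x = [random.gauss(0.0, 1.0) for _ in range(20000)]

def zero_one_loss(x, threshold):
    """Loss of the classifier 'predict positive iff x > threshold'."""
    return float((x > threshold) != (x > 0.5))

def weighted_risk(threshold):
    """Self-normalized importance-weighted estimate of the test risk."""
    weights = [gaussian_pdf(x, 1.0) / gaussian_pdf(x, 0.0) for x in train_x]
    losses = [zero_one_loss(x, threshold) for x in train_x]
    return sum(w * l for w, l in zip(weights, losses)) / sum(weights)

# For threshold 0.0, the true test risk is P_test(0 < x <= 0.5) ~ 0.15.
print(weighted_risk(0.0))
```

This works only when the shift is confined to the inputs; shifts in the labeling mechanism itself are one reason the causal considerations in the second challenge become necessary.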

Prof Nihat Ay

Hamburg University of Technology

Synopsis of Research Activities:

Professor and Head of the Institute for Data Science Foundations

Hamburg University of Technology

    Research interests:

  • Complexity and information theory
  • Mathematical theory of learning in neural networks and cognitive systems
  • Graphical models (Bayesian networks) and their application to causality theory
  • Information geometry and its applications to complexity and network robustness
  • Geometric structure in quantum theory (non-commutative state spaces)

Nihat Ay studied mathematics and physics at the Ruhr University Bochum and received his Ph.D. in mathematics from Leipzig University in 2001. In 2003 and 2004, he was a postdoctoral fellow at the Santa Fe Institute and at the Redwood Neuroscience Institute (now the Redwood Center for Theoretical Neuroscience at UC Berkeley). After his postdoctoral stay in the USA, he became an assistant professor (wissenschaftlicher Assistent) at the Mathematical Institute of the Friedrich Alexander University in Erlangen. From September 2005 to March 2021, he worked as a Max Planck Research Group Leader at the Max Planck Institute for Mathematics in the Sciences in Leipzig, where he headed the group Information Theory of Cognitive Systems. As a part-time professor at the Santa Fe Institute, he is involved in research on complexity and robustness theory. He is also affiliated with Leipzig University as an honorary professor for information geometry. Nihat Ay has co-authored a comprehensive mathematics book on information geometry and written numerous articles on the subject. Furthermore, he serves as the Editor-in-Chief of the Springer journal Information Geometry. In April 2021, he joined the Hamburg University of Technology as a professor and head of the newly founded Institute for Data Science Foundations.