Hessian Center for Artificial Intelligence
My team and I would like to make computers learn as much about the world, as rapidly and flexibly, as humans do. This poses many deep and fascinating scientific problems: How can computers learn with less help from us and less data? How can computers reason about and learn with complex data such as graphs and uncertain databases? How can pre-existing knowledge be exploited? How can computers decide autonomously which representation is best for the data at hand? Can learned results be physically plausible or be made understandable by us? How can computers learn together with us in the loop? To this end, my team and I develop novel machine learning (ML) and artificial intelligence (AI) methods, i.e., novel computational methods that combine, for example, search, logical, and probabilistic techniques as well as (deep) (un)supervised and reinforcement learning methods. Currently, we focus specifically on probabilistic circuits, causality, explanatory interactive learning, probabilistic programming, and ethics in AI in order to push the third wave of AI.
Most AI in use today falls under the first two waves of AI research. First-wave AIs follow clear rules, written by people, that aim to cover every eventuality. Second-wave AIs use statistical learning to arrive at an answer for a certain type of problem—think of image recognition systems. The third wave of AI envisions a future in which machines are more than just tools that execute human-programmed rules or generalize from human-curated data sets. Rather, the machines it envisions will function more as colleagues than as tools. AI systems of this third wave can acquire human-like communication and reasoning capabilities, with the ability to recognize new situations and adapt to them. For example, a third-wave AI might note that a speed limit of 120 km/h does not make sense when entering a small village.
Department of Computer Science
Technical University of Munich (TUM)
The research activity of Professor Piazza focuses on rehabilitation and assistive robotics. Her main research interests include the study of human movement and the mechatronic design of artificial devices based on soft robotics technologies. The aim is to develop simple and reliable technologies, grounded in neuroscientific theoretical principles, to assist people with limb loss or motor disabilities. An additional focus is the development of advanced control techniques based on non-invasive methods, such as the use of surface electromyographic (sEMG) sensors. The ultimate goal is to build intelligent prostheses and promote the natural integration of bionic limbs. She also has experience in designing and conducting clinical trials with subjects with limb loss, in close collaboration with national and international clinical partners. Prof. Piazza is also interested in exploring novel assessments for upper limb rehabilitation and providing innovative solutions to patients based on emerging technologies, such as virtual reality.
Prof. Piazza received her PhD in Robotics at the University of Pisa, Italy, before moving to Chicago (USA), where she worked as a Postdoctoral Researcher at the Department of Physical Medicine and Rehabilitation, Northwestern University, and the Regenstein Foundation Center for Bionic Medicine, Shirley Ryan AbilityLab (formerly the Rehabilitation Institute of Chicago). Since November 2020, she has been Professor for Healthcare and Rehabilitation Robotics at the Technical University of Munich (TUM).
Technical University of Munich
Department of Informatics
& Munich Data Science Institute
Debarghya Ghoshdastidar works on the statistical theory of machine learning. The goal of his research is to provide a mathematical understanding of machine learning and deep learning models, thereby grounding black-box AI tools in formal statistical principles. He is particularly interested in understanding how AI learns from unlabeled data, for instance, in transductive inference and in unsupervised learning problems. His work uses concepts from optimization, probability, and high-dimensional statistics to rigorously explain the behavior and performance of graph neural networks, kernel machines, and spectral algorithms. Apart from his primary research in machine learning theory, Debarghya Ghoshdastidar also works on statistical tools for network analysis and collaborates on learning problems arising in physics.