Asia Biega, Max Planck Institute for Security and Privacy
Responsible Information Access Systems
Information access systems (such as search or recommendation systems) are much more than the algorithms that power them. They are sociotechnical: they learn from user behaviour, process personal data, and influence people's views and decisions. Because of their potential negative impacts, these systems are increasingly regulated and redesigned to respect various societal and ethical constraints. In the first part, this lecture will discuss potential negative societal and individual impacts of information access systems and identify their sources. We will then learn about different computational and non-computational impact mitigation strategies. In the second part, we will take a deep dive into some of the recent research operationalizing legal constraints in information access systems, demonstrating the key challenges and research opportunities in this area.
Soheil Feizi, University of Maryland
Natural and Adversarial Robustness in Deep Learning
In the last couple of years, much progress has been made in understanding various fundamental aspects of deep models. A key question is how to measure success in deep learning. A classical answer is to evaluate the performance of trained models on a held-out test set. However, it has been shown that this measure, although important, does not tell the whole story: models with impressive test-set accuracy can be extremely fragile against natural or adversarial noise. In this talk, I will present our recent results on the natural and adversarial robustness of deep models. For robustness to adversarial attacks, I will present defenses for a novel “perceptual” adversarial threat model that generalize well against many types of unseen Lp and non-Lp adversarial attacks. For robustness to outlier samples, I will introduce robust optimal transport and discuss its applications in GANs and domain adaptation.
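The fragility described above can be illustrated with a minimal sketch. The example below is not from the talk: it uses a toy linear classifier and a standard FGSM-style L-infinity perturbation (the weights, input, and epsilon are all illustrative) to show a confidently correct prediction flipping under a small signed-gradient step.

```python
import numpy as np

# Toy linear classifier: p(y=1|x) = sigmoid(w.x + b). Weights are illustrative.
w = np.array([2.0, -1.0])
b = 0.0

def predict(x):
    return 1 / (1 + np.exp(-(w @ x + b)))

# A correctly classified point with label y = 1.
x = np.array([1.0, 0.5])
y = 1.0

# Gradient of the cross-entropy loss w.r.t. the input is (p - y) * w.
grad_x = (predict(x) - y) * w

# FGSM-style L-infinity attack: step in the sign of the gradient.
eps = 0.9
x_adv = x + eps * np.sign(grad_x)

clean_pred = predict(x)    # > 0.5: confidently class 1
adv_pred = predict(x_adv)  # < 0.5: the prediction flips
```

Clean accuracy says nothing about what happens in this epsilon-ball, which is exactly the gap between test-set performance and robustness.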
Joe Halpern, Cornell University
Actual Causality: A Survey
What does it mean to say that an event C “actually caused” event E? The problem of defining actual causation goes beyond mere philosophical speculation. For example, in many legal arguments, it is precisely what needs to be established in order to determine responsibility. (What exactly was the actual cause of the car accident or the medical problem?) The philosophy literature has been struggling with the problem of defining causality since the days of Hume, in the 1700s. Many of the definitions have been couched in terms of counterfactuals. (C is a cause of E if, had C not happened, then E would not have happened.) In 2001, Judea Pearl and I introduced a new definition of actual cause, using Pearl’s notion of structural equations to model counterfactuals. The definition has been revised twice since then, extended to deal with notions like “responsibility” and “blame”, and applied in databases and program verification. In this talk, I survey the last 15 years of work, including joint work with Judea Pearl, Hana Chockler, and Chris Hitchcock. The talk will be completely self-contained.
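The counterfactual definition in the abstract can be made concrete with a small structural-equations sketch. The forest-fire example below is a standard illustration of why the naive but-for test needs refinement (the encoding here is my own simplification, not the formal Halpern-Pearl definition):

```python
# Structural equation: the forest fire (FF) occurs if either lightning (L)
# strikes or an arsonist drops a match (M).
def forest_fire(L, M):
    return L or M

# Actual world: both L and M occur, and the fire happens (overdetermination).
actual = forest_fire(L=True, M=True)           # True

# Naive but-for test for L: intervene to set L = False, holding M at its
# actual value. The fire still happens, so the test says L is NOT a cause.
counterfactual = forest_fire(L=False, M=True)  # True

# Under the contingency M = False, flipping L does change the outcome --
# the kind of consideration the Halpern-Pearl definition builds in.
contingency = forest_fire(L=False, M=False)    # False
```

Intuitively, we want to count the lightning as a cause even though the match would have sufficed, and handling such cases is what the revised definitions address.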
Katherine J. Kuchenbecker, Max Planck Institute for Intelligent Systems
Understanding Robot Motion
Robots have the potential to support humans in a wide variety of tasks; for example, my team works on robot-assisted surgery, rehabilitation robotics, and even hugging robots. Making such robots move the way you want them to is a key challenge in any of these projects. At the core, one needs to understand how to represent positions and orientations in three-dimensional space. I will explain good ways to think about rotation matrices, Euler angles, and axis-angle formulations, also showing how they relate to one another using animated visualizations. Then we will talk about some fundamental aspects of robot movement, including revolute and prismatic joints, forward kinematics, inverse kinematics (briefly), and velocity Jacobians. By the end of this pair of lectures, I hope you will see both the beauty and the utility of these core principles from robotics.
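The ideas of rotation matrices and forward kinematics can be previewed with a minimal planar example (my own illustration, not from the lectures; link lengths are assumed to be 1): a two-revolute-joint arm whose end-effector position follows by chaining 2D rotations.

```python
import numpy as np

def rot2d(theta):
    """2D rotation matrix for angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s],
                     [s,  c]])

def forward_kinematics(theta1, theta2, l1=1.0, l2=1.0):
    """End-effector position of a planar arm with two revolute joints.

    The elbow sits at the tip of link 1, rotated by theta1; the second link
    is rotated by the cumulative angle theta1 + theta2.
    """
    elbow = rot2d(theta1) @ np.array([l1, 0.0])
    tip = elbow + rot2d(theta1 + theta2) @ np.array([l2, 0.0])
    return tip

# Fully stretched along x: both joints at 0 puts the tip at (l1 + l2, 0).
print(forward_kinematics(0.0, 0.0))        # [2. 0.]
# Elbow bent 90 degrees: the tip moves to (l1, l2).
print(forward_kinematics(0.0, np.pi / 2))  # approximately [1. 1.]
```

The 3D story in the lectures replaces `rot2d` with 3x3 rotation matrices (or Euler-angle and axis-angle parameterizations of them), but the chaining structure is the same.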
Dave Mount, University of Maryland
Andrew Myers, Cornell University
We know that many security vulnerabilities result from the choice of programming language. How much could security be improved if the language helped with security instead of getting in the way? I will discuss what it means, formally, for a system to be secure, and show how language features and program analyses can enforce important security properties such as integrity, confidentiality, and availability with high assurance. These methods can be applied at different levels of the system, ranging from low-level hardware design up to decentralized distributed systems.
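One flavor of the idea, enforcing confidentiality via information-flow labels, can be sketched dynamically in a few lines. This is purely illustrative (the class names and labels are mine): security-typed languages in this line of work enforce such policies statically, at compile time.

```python
# Minimal dynamic information-flow sketch: values carry a confidentiality
# label, labels join on computation, and a public sink rejects secret data.
SECRET, PUBLIC = "secret", "public"

class Labeled:
    def __init__(self, value, label):
        self.value, self.label = value, label

    def __add__(self, other):
        # The result is secret if either operand is (the label join).
        label = SECRET if SECRET in (self.label, other.label) else PUBLIC
        return Labeled(self.value + other.value, label)

def publish(x):
    """Public sink: only PUBLIC-labeled data may flow out."""
    if x.label == SECRET:
        raise PermissionError("illegal flow: secret data to a public sink")
    return x.value

salary = Labeled(90000, SECRET)
bonus = Labeled(5000, PUBLIC)

publish(bonus)             # allowed: public data
# publish(salary + bonus)  # raises PermissionError: the sum is tainted secret
```

A static analysis gets the same guarantee without runtime checks, and catches implicit flows (through control flow) that this toy runtime monitor would miss.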
Huaishu Peng, University of Maryland
Wearable Computing, Mixed Reality, and Interactive Fabrication
Human-computer interaction (HCI) is about how we shape the ways we live with interactive technologies. For instance, one may envision a future where smart sensors and IoT devices live on the human body or clothing to help monitor our health 24/7; mixed reality devices provide not only immersive digital renderings but also realistic in-situ haptic feedback; and personal fabrication machines empower us to design and build customized functional artifacts at home rather than in a factory. In these lectures, we will explore various research projects that contribute to such visions. We will examine recent publications in HCI, learn the technology behind the scenes, and discuss how these interactive technologies can lead to a future different from today's.