Machine and Reinforcement Learning, Robust and Distributed Optimal Control, Robotics, Convex Optimization, Cyber-Physical Systems
Machine learning techniques – bolstered by successes in video games, sophisticated robotic simulations, and Go – are now being applied to plan and
control the behavior of autonomous systems interacting with physical environments. Such systems, which include self-driving vehicles, distributed sensor
networks, and agile robots, must interact with complex environments that
are ever changing and difficult to model, strongly motivating the use of data-driven decision making and control. However, if machine learning techniques
are to be applied in these new settings, it is critical that they be accompanied
by guarantees of reliability, robustness, and safety, as failures could be catastrophic. To address these challenges, my research is focused on developing learning-based control strategies for the design of safe and robust autonomous networked systems. Please see my publications page for current research projects, and the talks below for an accessible introduction (aimed at a general engineering audience) to some of the ideas behind my work. The first talk (given as part of the Everhart Lecture Series at Caltech) focusses on more control theoretic ideas, whereas the second (given at the University of Illinois - Chicago) and third (given as part of the ETHz Autonomy Talks series) presents some of our more recent work on what makes learning-enabled control easy and hard.
I organized and hosted the first edition of the Northeast Systems and Control Workshop at Penn!
I gave a special invited lecture on What Makes Learning to Control Easy or Hard at the Boston Dynamics AI Institute.
I gave an ISL Colloquium on What Makes Learning to Control Easy or Hard at Stanford University.
I gave a virtual Control and Dynamical Systems (CDS) Invited Lecture on What Makes Learning to Control Easy or Hard hosted by the University of Maryland.
I gave a Robot Autonomy Seminar on What Makes Learning to Control Easy or Hard at Lehigh University.
I gave a CCI-AAI-MHI Seminar on What Makes Learning to Control Easy or Hard at USC.
I was awarded an AFOSR YIP for FY24 aimed at developing a statistical learning theory for nonlinear control!
We put on a tutorial on Statistical Learning Theory for Control at IEEE CDC 2023.
I gave an MCE Seminar on Representation Learning for Dynamics and Control at Caltech.
I gave an EE Special Seminar on Representation Learning for Dynamics and Control at USC.
I gave a talk on Representation Learning for Dynamics and Control at the UPenn Optimization Seminar.
I gave a talk on Meta-Learning Linear Operators to Optimality from Multi-Task Non-IID Data at MOPTA 2023.
Our papers on fundamental limitations of learning LQR controllers and using robust MPC as a safety shield for NN dynamics have been accepted to CDC 2023.
Our paper on data-driven dynamics aware trajectory generation for under-actuated robotic systems has been accepted to IROS 2023.
I'm serving on the ICCPS 2024 Program Committee.
I gave a DCL talk at GaTech on "What makes learning to control easy or hard?"
I gave a talk at the Microsoft Research NY Reinforcement Learning Seminar Series on "What makes learning to control easy or hard?"
I attended ITA, as well as co-organized and spoke at a special session on statistical learning theory for control.
I gave a talk at the Penn ASSET Seminar Series on "What makes learning to control easy or hard?"
I gave a talk at the University of Toronto Robotics Institute on "What makes learning to control easy or hard?"
I am attending the NSF CPS PI Meeting in Arlington, VA for our grant on robust perception-based control.
I gave a special LIDS seminar at MIT EECS on "What makes learning to control easy or hard?"
I gave a talk at the Harvard EE departmental seminar on "What makes learning to control easy or hard?"
I was elevated to Senior Member of the IEEE.
I gave a talk as part of the ETHz Autonomy Talks on "What makes learning to control easy or hard?", see here for the recording!
I gave a talk at the 2022 Allerton Conference on TaSIL: Taylor Series Imitation Learning as part of an invited session on Learning, Dynamics, and Control.
I attended the Frontiers of Engineering symposium hosted by the National Academy of Engineering and Amazon in Seattle.
Our paper TaSIL: Taylor Series Imitation Learning was accepted to NeurIPS!
We had four papers accepted to the 2022 IEEE Conference on Decision and Control! Be sure to check them out in Cancun!
I gave a talk at Caltech on "What makes learning to control easy or hard?"
I gave a talk at the 2022 Stockholm Workshop on Emerging Topics in Systems and Control on TaSIL: Taylor Series Imitation Learning.
I gave a talk at the 2022 IEEE ICRA Workshop on Safe and Reliable Robot Autonomy under Uncertainty about what makes learning to control easy or hard!
I gave a talk at the University of Illinois Chicago as part of the ECE Departmental Seminar Series on Robust Learning for Safe Control.
Carmen Amo Alonso was selected as a Rising Star in CPS! Congratulations Carmen!
Our L4DC paper On the Sample Complexity of Stability Constrained Imitation Learning was selected for an oral presentation (16/176 submissions).
Three papers accepted to L4DC! Paper 1: joint w/ M. Posa's group, we study the generalization properties of implicit learning of nearly discontinuous functions; Paper 2: we show that adversarially robust stability certificates are easy to learn if your system is incrementally stable; Paper 3: we study the sample complexity of imitation learning through the lens of robust nonlinear stability.
I was awarded the George S. Axelby Award by the IEEE Control Systems Society for our TAC paper on System Level Synthesis.
Co-organized a workshop on Robust Deep Learning-based Control at IEEE CDC 2021.
I gave a talk at the University of Delaware as part of their Signal Processing, Communication, and Control Seminar series on Robust Learning for Safe Control.
I was selected as an Outstanding Reviewer at NeurIPS 2021.
I joined the Robotics Group at Google Brain as a Visiting Faculty Researcher.
I gave a talk to PNNL Optimization and Control group on Robust Learning for Safe Control.
Stephen Tu and I gave a combined 8hr (!) mini-course on our recent results on learning to control nonlinear dynamical systems at the EPFL and ETHZ Summer School on Foundations and Mathematical Guarantees for Data-Driven Control.
Participating in the IEEE CSS Workshop on Control for Societal Challenges as a panelist for Decision-Making with Real-Time and Distributed Data.
I gave a talk at SIAM DS21 as part of an invited session on Data-Driven Analysis and Control of Dynamical Systems on our work on Safe Imitation Learning.
I gave a talk at ACC 2021 as part of a workshop on cognition, learning, and control on Robust Guarantees for Perception-Based Control.
I gave a talk at UC San Diego as part of their control systems seminar series on Robust Learning for Safe Control.
I gave a talk at KTH as part of their control systems seminar series on Robust Learning for Safe Control.
I gave a talk at the Max Planck Institute for Intelligent Systems on Robust Learning for Safe Control.
I gave a talk at the University of Maryland as part of the Lockheed Martin Robotics Seminar Series on Robust Learning for Safe Control.
I participated in the NSF Next Big Research Challenges in Cyber-Physical Systems Workshop.
I received a Google Research Scholar Award!
I gave a talk in an invited session on learning and control on our learning CBFs work at CISS 2021.
I was the external reviewer for Ingvar Ziemann's licentiate thesis defense.
I gave a talk on our recent work on safe imitation learning at the Google Machine Learning and Robot Safety Workshop.
I received the NSF CAREER Award! See here and here for some Penn press about it.
New paper on Learning Robust Hybrid Control Barrier Functions for Uncertain Systems.
I gave a talk on Learning Control Barrier Functions from Data at the CDC 2020 Workshop on Data-Driven Control.
I gave a webinar talk on Safety and Robustness Guarantees with Learning in the Loop hosted by the IEEE joint control, robotics, and cybernetics chapter of the Vancouver section.
We have two papers being presented at CoRL 2020! Check out our work on learning stability certificates and hybrid control barrier functions from data!
I gave a talk on Learning CBFs from Expert Demos at INFORMS 2020 as part of an invited session on Recent Advances in Learning, Optimization, and Control. Check out an extended version of the talk here!
I gave a talk on Learning CBFs from Expert Demos at CCTA 2020 as part of a tutorial session on Control/Optimization in ML/AI. Check out an extended version of the talk here!
I am serving on the program committee for CoRL 2020 and have joined the Conference Editorial Board for the IEEE Control Systems Society.
All four of our papers were accepted to CDC! Check out our work on Learning CBFs from Expert Demos, Robust MPC, Distributed MPC, and Explicit Distributed MPC!
Paper on Evaluating Robust, Perception-Based Controllers with Quadrotors accepted to IROS 2020!
Papers on Robust Perception-Based Control and the Sample Complexity of Kalman Filtering for Unknown Systems accepted to L4DC 2020! Our KF paper will be one of 14 oral spotlight presentations!
Dec 11-13: I will be giving a tutorial on self-tuning control and reinforcement learning at CDC 2019 in Nice, France!
Nov 21: I will be discussing recent developments, challenges, and new opportunities in data-driven control and optimization of CPS at the Mini-Workshop on Learning for Control at the 2019 NSF CPS PI Meeting.
I am serving on the program committees for HSCC 2020 and L4DC 2020.
Nov 4: Presenting our work on safety and robustness guarantees with learning in the loop as part of the ETHz Control Seminar Series, at ETH Zurich.
Oct 16: Presenting our work on safety and robustness guarantees with learning in the loop at the LCSR at Johns Hopkins University.
Oct 14-15: Attending the NSF-sponsored Robot Learning Workshop at Lehigh University to present our work on robust guarantees for perception based control. (Presentation Video)
Sep 24-26: Attending Allerton to present our work on efficient learning of distributed controllers.
New papers on distributed MPC using SLS, robust performance guarantees for SLS, and efficient learning of distributed controllers.
May 30-31: Attending the inaugural Learning for Dynamics and Control (L4DC) conference at MIT.
Our paper A System Level Approach to Controller Synthesis has been accepted for publication in IEEE TAC.
Dec 13-18: IEEE CDC 2018, Miami, FL
Dec 2-8: NeurIPS, Montreal, Canada