Scholarly Colloquia and Events

  • 12/18 Certifiable Neural Control for Safe Autonomy

    Please join us for the following talk by faculty candidate Wei Xiao on Monday, Dec. 18, at 9:30 am in ITE 336 (or via WebEx).

    Certifiable Neural Control for Safe Autonomy 

    Wei Xiao, Postdoctoral Associate, MIT CSAIL

    Abstract: Safety is central to autonomous systems, since a single failure can lead to catastrophic results. In complex, unstructured environments where system states and environment information are not available, the safety-critical control problem becomes much more challenging. In this talk, I will first discuss safety from a control-theoretic perspective using Control Barrier Functions (CBFs). CBFs capture the evolution of the safety requirements during the execution of a control system and can be used to guarantee safety for all time because they render the safe set forward invariant. Next, the talk will introduce an approach that extends CBFs to machine learning-based control via differentiable CBFs that are end-to-end trainable and adaptively guarantee safety based on environmental dependencies. These novel safety layers give rise to new neural network (NN) architectures, such as what we have termed BarrierNet. In machine learning, and especially in safety-critical control applications, the interpretability of an NN is crucial. The talk will therefore introduce a novel method called invariance set propagation through the NN, which enables causal manipulation of the NN's parameters or inputs with respect to output specifications and introduces guarantees. Finally, the talk will show how safety-critical planning and control can be achieved with more powerful generative AI, such as diffusion models, for generalizable autonomy. These techniques have been applied to a variety of robots, including autonomous ground vehicles, vessels, flight vehicles, legged robots, robot swarms, soft robots, and manipulators.
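
    For reference, the standard CBF safety condition (a general background sketch, not taken from the talk materials): given control-affine dynamics \dot{x} = f(x) + g(x)u and a safe set \mathcal{C} = \{ x : h(x) \ge 0 \} defined by a continuously differentiable function h, any Lipschitz controller satisfying

        \nabla h(x)^\top \big( f(x) + g(x)u \big) + \alpha\big( h(x) \big) \ge 0,

    where \alpha is an extended class-\mathcal{K} function, renders \mathcal{C} forward invariant: trajectories that start in \mathcal{C} remain in \mathcal{C} for all time. This is the forward-invariance property the abstract refers to; differentiable versions of such constraints are what safety layers like BarrierNet embed inside a neural network.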

    Bio: Wei Xiao is currently a postdoctoral associate at the Computer Science and Artificial Intelligence Laboratory (CSAIL), Massachusetts Institute of Technology. He received his Ph.D. from Boston University, Brookline, MA, USA, in 2021. His research interests include safety-critical control theory and trustworthy machine learning, with particular emphasis on robotics and on traffic network optimization and control. He received an Outstanding Dissertation Award at Boston University, an Outstanding Student Paper Award and a Best Student Paper nomination at the 2020 IEEE Conference on Decision and Control, and a Best Paper nomination at ACM/IEEE ICCPS 2021.

    For more information, contact Brandy Ciraldo at brandy.ciraldo@uconn.edu.