Draguna Vrabie

Draguna Vrabie is a Chief Data Scientist in the Data Sciences and Machine Intelligence Group at Pacific Northwest National Laboratory (PNNL), where she serves as Team Lead for the Autonomous Learning and Reasoning Team and Thrust Lead for the Data Model Convergence Initiative. Before joining PNNL in 2015, she was a senior scientist in the Control Systems Group at United Technologies Research Center in East Hartford, Connecticut, from 2010 to 2015. Vrabie received a Ph.D. in Electrical Engineering from the University of Texas at Arlington for her work on adaptive optimal control based on reinforcement learning principles. She received Dipl. Ing. and M.S. degrees in Automatic Control and Computer Engineering from the Gheorghe Asachi Technical University in Iasi, Romania. Her work focuses on the design of adaptive and predictive control systems, and her current research centers on control and deep learning. She applies her expertise to high-performance cyber-physical energy systems and autonomous scientific discovery. Vrabie has co-authored two books on optimal control, reinforcement learning, and differential games, and has published over seventy peer-reviewed journal and conference papers. Her work has been recognized with Best Paper awards and two corporate awards for outstanding achievement and operational excellence. She holds seven patents, received an R&D 100 Award in 2021, and is a member of the IEEE.

Control design based on deep learning

Abstract: Control systems with learning abilities could cost-effectively address societal issues such as energy reliability, decarbonization, and climate security, and could enable autonomous scientific discovery. Recent investigations focus on longstanding challenges such as the robustness, uncertainty, and safety of complex engineered systems. Most importantly, innovation in deep learning methods, tools, and technology offers an unprecedented opportunity to transform control engineering practice and bring much excitement to control systems theory research. In this talk, I will introduce recent results in modeling dynamic systems with deep learning representations that embed domain knowledge. I will also discuss differentiable predictive control, a data-driven approach that uses physics-informed deep learning representations to synthesize predictive control policies. I will illustrate the concepts with examples from various engineering applications, and I will close by considering the implications of differentiable programming for the broader control systems context.
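To make the core idea behind differentiable predictive control concrete, here is a minimal, hypothetical sketch (not the method presented in the talk): a parametric feedback policy is tuned by gradient descent through a rollout of a differentiable dynamics model. The scalar linear model, linear policy form, cost weights, and learning rate below are all illustrative placeholders.

```python
# Illustrative sketch of the differentiable-predictive-control idea:
# tune a parametric feedback policy by gradient descent through a rollout
# of a differentiable dynamics model. The scalar linear model, linear
# policy, and quadratic cost are hypothetical placeholders.

def rollout_loss(k, a=0.9, b=0.5, x0=1.0, horizon=20, r=0.1):
    """Closed-loop quadratic cost of the policy u = -k * x over one rollout."""
    x, loss = x0, 0.0
    for _ in range(horizon):
        u = -k * x                  # parametric control policy
        loss += x * x + r * u * u   # stage cost on state and control effort
        x = a * x + b * u           # model rollout: x_{t+1} = a*x_t + b*u_t
    return loss

def train_policy(k=0.0, lr=0.05, steps=200, eps=1e-5):
    """Gradient descent on the policy gain k; a central finite difference
    stands in here for the automatic differentiation used in practice."""
    for _ in range(steps):
        grad = (rollout_loss(k + eps) - rollout_loss(k - eps)) / (2 * eps)
        k -= lr * grad
    return k

k_star = train_policy()  # learned gain; closed-loop factor is a - b*k_star
```

In the setting the abstract describes, the dynamics model would be a physics-informed deep network and the policy a neural network, with gradients obtained by automatic differentiation rather than finite differences; the sketch only conveys the pattern of differentiating a rollout cost with respect to policy parameters.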