Optimal Distributed Control

Evolving from the quintessential and fully understood class of optimization problems where the dynamics are Linear, the loss function is Quadratic, and the process noise is Gaussian (LQG), control theory has turned its focus towards safety-critical and distributed control applications. Due to the required reliability, the high dimensionality of models, the stochasticity, and the nonlinearities, these problems are often NP-hard.

Motivated by the recent success of reinforcement learning (RL), we study the potential of online regret minimization for distributed control architectures. By computing local control policies that minimize the worst-case regret with respect to a benchmark policy that knows the coupling trajectory in advance, our methods can adapt online to the realized coupling trajectories. Further, local controllers do not need to know the dynamical model generating the couplings, and the computations for policy design can be fully parallelized.
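As a toy illustration of the regret benchmark only (not of the distributed methods above), the sketch below compares a causal LQR feedback law with a clairvoyant controller that minimizes the same finite-horizon quadratic cost while knowing the disturbance trajectory in advance. The scalar system and all numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
a, T = 0.9, 50
w = rng.normal(0.0, 1.0, T)          # realized disturbance ("coupling") trajectory

# Stacked dynamics for x_{t+1} = a x_t + u_t + w_t with x_0 = 0:
# collecting x_1..x_T gives x = G u + H w with lower-triangular G = H.
G = np.tril([[a ** (i - j) for j in range(T)] for i in range(T)])
H = G.copy()

def cost(u):
    x = G @ u + H @ w
    return x @ x + u @ u             # sum of x_t^2 + u_t^2

# Clairvoyant benchmark: exact minimizer of the quadratic cost given w.
u_star = -np.linalg.solve(G.T @ G + np.eye(T), G.T @ (H @ w))

# Causal baseline: stationary LQR feedback u_t = -K x_t (ignores future w).
p = 1.0
for _ in range(500):                 # fixed-point iteration of the scalar DARE
    p = 1.0 + a * a * p - (a * p) ** 2 / (p + 1.0)
K = a * p / (p + 1.0)
x, u_online = 0.0, np.zeros(T)
for t in range(T):
    u_online[t] = -K * x
    x = a * x + u_online[t] + w[t]

regret = cost(u_online) - cost(u_star)
print(f"regret of causal LQR vs clairvoyant benchmark: {regret:.3f}")
```

Since the benchmark is the exact minimizer of the realized cost, the regret is nonnegative for any causal policy; regret minimization seeks policies that keep this gap small for the worst-case disturbance.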

Reliable Deep Neural Networks (DNNs) for Optimal Nonlinear Control

The complexity of general optimal distributed control problems motivates considering control policies that are parametrized as DNNs. A main challenge of DNN controllers is that they are not dependable during and after training, that is, the closed-loop system may be unstable, and the training may fail due to vanishing and exploding gradients. Not only should the learned control policy be safe and distributed, but dependability should be guaranteed during the learning process itself, that is, during the exploration phase.

We strive to characterize general classes of DNN-based control policies that guarantee robustness and safety during training, while maintaining favorable numerical properties that ensure convergence to high-performance solutions. Furthermore, we plan to tackle the curse of dimensionality, thus scaling up and parallelizing the methods to extremely large-scale systems.
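One elementary way to obtain dependability for any network weights, not only after training converges, is to make safety hold by construction: if the plant is Schur stable and the policy output is saturated, the closed-loop state admits an explicit bound even for a randomly initialized network. The toy numpy sketch below illustrates this idea; the plant, the two-layer policy, and the bound are illustrative assumptions, not the parametrizations studied here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy plant x_{t+1} = A x_t + B u_t with A Schur stable (spectral norm < 1).
A = np.array([[0.7, 0.2], [0.0, 0.6]])
B = np.array([[0.0], [1.0]])

# Untrained two-layer policy; the tanh output layer enforces |u| <= u_max
# for *any* weights, so exploration with random weights is already safe.
W1, W2 = rng.normal(size=(16, 2)), rng.normal(size=(1, 16))
u_max = 1.0

def policy(x):
    return u_max * np.tanh(W2 @ np.tanh(W1 @ x))

x = np.array([5.0, -3.0])
traj = [x]
for _ in range(200):
    x = A @ x + (B @ policy(x)).ravel()
    traj.append(x)

# Bounded-input/bounded-state certificate:
# ||x_t|| <= ||A||^t ||x_0|| + sum_k ||A||^k ||B|| u_max
norm_A = np.linalg.norm(A, 2)
bound = np.linalg.norm(traj[0]) + np.linalg.norm(B, 2) * u_max / (1 - norm_A)
print("state bound respected:", max(np.linalg.norm(z) for z in traj) <= bound)
```

The certificate holds uniformly over the weight space, which is the property one wants during the exploration phase; richer parametrizations aim to retain such guarantees without sacrificing expressiveness.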

Safety-Critical Data-Driven Control

The mathematical models of large-scale cyber-physical systems (CPSs) are often unreliable or simply unavailable, and therefore, one can only base the safe control design on noise-corrupted input-output data gathered during past experiments.

Driven by the increasing ubiquity of low-cost devices capable of sensing, storing, and communicating data, we develop model-free methods for distributed control of unknown systems that learn safe and globally optimal control policies in a sample-efficient way.
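A minimal example of the model-free viewpoint, in the spirit of behavioral (Willems-lemma-style) data representations, is sketched below for a noise-free toy LTI system: a one-step predictor is built directly from input-state experiment data, without ever identifying the system matrices. The matrices appear in the code only to generate the data and are an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(2)

# "Unknown" system, used only to generate the past experiment (noise-free here).
A_true = np.array([[0.8, 0.3], [-0.2, 0.9]])
B_true = np.array([[0.0], [1.0]])

# Past experiment: persistently exciting random inputs from a random state.
T = 30
U0 = rng.normal(size=(1, T))
X = np.zeros((2, T + 1))
X[:, 0] = rng.normal(size=2)
for t in range(T):
    X[:, t + 1] = A_true @ X[:, t] + (B_true @ U0[:, t:t + 1]).ravel()
X0, X1 = X[:, :T], X[:, 1:]

# Data-based one-step predictor: solve [X0; U0] g = [x; u] and read off
# the next state as X1 g -- no (A, B) is identified explicitly.
D = np.vstack([X0, U0])

def predict_next(x, u):
    g, *_ = np.linalg.lstsq(D, np.concatenate([x, [u]]), rcond=None)
    return X1 @ g

x_test, u_test = np.array([1.0, -2.0]), 0.5
x_pred = predict_next(x_test, u_test)
x_true = A_true @ x_test + (B_true * u_test).ravel()
print("data-driven prediction matches plant:", np.allclose(x_pred, x_true))
```

With noisy data the exact interpolation above is replaced by regularized or robust formulations, which is where the safety and sample-efficiency questions raised in this line of work become central.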


We plan to validate our methodologies on realistic engineering systems, including wide-area power network systems, mobility-on-demand, wind-farm control for green power generation, and control of fleets of autonomous vehicles.