Dr. Kristian Hengster-Movrić: Cooperative control of multi-agent systems: Stability, optimality and robustness

Fri, 11/15/2013

Everyone is welcome to attend a talk given by Dr. Kristian Hengster-Movrić, a new member of the AA4CC group at the Department of Control Engineering, FEE CTU in Prague. The talk will take place in room KN:E-14 at Karlovo namesti 13/E, Prague. It will start at 2 pm and will last 60 minutes, including a discussion.

Abstract: The last two decades have witnessed increasing interest in networked cooperative multi-agent systems, inspired by the natural occurrence of flocking and formation forming. Such systems are applied to formations of spacecraft, unmanned aerial vehicles and mobile robots, to distributed sensor networks, etc. Early work on networked cooperative systems in continuous and discrete time generally addressed consensus without a leader. By adding a leader that pins to a group of other agents, one obtains synchronization to a command trajectory generated by a virtual leader; this approach is also known as pinning control. Necessary and sufficient conditions for synchronization are given by the master stability function and the related concept of the synchronizing region. For continuous-time systems, synchronization was guaranteed using optimal state feedback derived from the continuous-time Riccati equation. It was shown that using the Riccati design for the feedback gain of each node guarantees a synchronizing region containing an unbounded right-half-plane region of the s-plane.

This talk is concerned with synchronization for agents described by linear time-invariant discrete-time dynamics. The interaction graph is directed and assumed to contain a directed spanning tree. Pinning control is used to achieve consensus and synchronization to a leader or control node. The concept of the synchronizing region is instrumental in analyzing the synchronization properties of cooperative control systems. The synchronizing region is the region in the complex plane within which the graph Laplacian eigenvalues must reside to guarantee synchronization. The crucial difference between continuous-time and discrete-time systems is the form of the stability region. For continuous-time systems the stability region is the left half of the s-plane, which is unbounded, and a feedback matrix can be chosen such that the synchronizing region for the associated matrix pencil is also unbounded. The discrete-time stability region, on the other hand, is the interior of the unit circle in the z-plane, which is inherently bounded; therefore the synchronizing regions are bounded as well. This accounts for stricter synchronizability conditions in discrete time.
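
For readers unfamiliar with the synchronizing-region argument, a minimal sketch of the standard setup follows; the coupling gain c, feedback matrix K and pinning gains g_i are introduced here for illustration only, since the abstract does not fix any notation.

```latex
% Identical discrete-time agents pinned to a leader/control node x_0:
x_i(k+1) = A x_i(k) + B u_i(k), \qquad
u_i(k) = cK\sum_{j} a_{ij}\bigl(x_j(k)-x_i(k)\bigr) + c\,g_i K\bigl(x_0(k)-x_i(k)\bigr).

% A state transformation reduces synchronization to Schur stability of the modes
A - c\lambda_i BK, \qquad \lambda_i \in \operatorname{spec}(L+G).

% The synchronizing region is the set
S = \{\sigma \in \mathbb{C} : A - \sigma BK \ \text{is Schur}\},

% and the synchronization condition reads c\lambda_i \in S for all i.  In
% continuous time "Schur" is replaced by "Hurwitz", and the Riccati-based gain K
% makes S contain an unbounded right-half-plane region; in discrete time S is
% bounded, which leads to the stricter synchronizability conditions noted above.
```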

If perfect information about the states of the neighbouring systems is not available, output measurements are assumed and cooperative observers are designed specifically for the multi-agent system. Potential applications include distributed observation, sensor fusion, dynamic output regulators for synchronization, etc. Conditions for cooperative observer convergence and for synchronization of the multi-agent system are shown to be related by a duality concept for distributed systems on directed graphs. It is also shown that both cooperative control design and cooperative observer design can be approached by decoupling the graph structure from the design procedure using a Riccati-based design. Sufficient conditions are derived that guarantee observer convergence as well as synchronization. This derivation is facilitated by the concept of the convergence region of a distributed observer, which is analogous, and in a sense dual, to the synchronizing region defined for a distributed synchronization controller. Furthermore, the proposed observer feedback design has a robustness property similar to the one found in the controller design.
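
For concreteness, one common form of such a cooperative observer is sketched below; the observer gain F and the notation for the local output estimation error are assumptions made here for illustration, and the exact structure used in the talk may differ.

```latex
% Each agent measures y_i = Cx_i and runs the distributed observer
\hat x_i(k+1) = A\hat x_i(k) + Bu_i(k)
  + cF\Bigl[\sum_{j} a_{ij}\bigl(\tilde y_i(k)-\tilde y_j(k)\bigr) + g_i\,\tilde y_i(k)\Bigr],
\qquad \tilde y_i = y_i - C\hat x_i .

% The estimation errors e_i = x_i - \hat x_i are then governed by the modes
A - c\lambda_i FC, \qquad \lambda_i \in \operatorname{spec}(L+G),

% which are duals of the control modes A - c\lambda_i BK (the roles of A, B, K
% are played by their transposed counterparts and F), so the convergence region
% for the observer gain F is defined exactly as the synchronizing region for K.
```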

Cooperative optimal control has recently been considered by many authors. Optimality of a control protocol gives rise to desirable characteristics, such as gain and phase margins, that guarantee robustness in the presence of certain types of disturbances. The common difficulty, however, is that in the general case optimal control is not distributed: the solution of a global optimization problem generally requires centralized, i.e. global, information. In order to obtain local distributed control that is optimal in some sense, one can, for example, let each agent optimize its own local performance index. This is done in receding horizon control and in distributed games on graphs, where the relevant notion of optimality is the Nash equilibrium. Some authors phrase the LQR problem as an optimization problem with LMI constraints imposed by the communication graph topology; this is a constrained optimization that takes the local character of the interactions among agents into account. It is also possible to use a local observer to obtain the global information needed to solve the global optimal problem.

Optimal control for multi-agent systems is further complicated by the fact that the graph topology interplays with the system dynamics. The problems that the communication topology causes in the design of globally optimal controllers with distributed information can be approached using the notion of inverse optimality. There, one chooses an optimality criterion related to the communication graph topology in order to obtain distributed optimal control. This connection between the graph topology and the structure of the performance criterion is what allows for distributed optimal control. In the case that the agents' integrator dynamics contain topological information, there exists a performance criterion with respect to which the original distributed control is optimal.
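
A standard single-integrator example, assumed here only to illustrate the idea of a graph-dependent criterion, is sketched below; the criteria treated in the talk are more general.

```latex
% Single integrators \dot x_i = u_i on a connected undirected graph with
% Laplacian L.  The consensus protocol
u = -Lx
% satisfies the algebraic Riccati equation  0 = Q - P^2  with  P = L,  Q = L^2,
% and is therefore optimal, in a partial-stability sense (relative to the
% consensus subspace rather than the origin), with respect to the
% graph-dependent performance index
J = \int_0^\infty \bigl( x^{\top} L^{2} x + u^{\top} u \bigr)\,dt .
```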

In this talk, theorems on partial stability and inverse optimality are presented in a form useful for applications to cooperative control, where the synchronization manifold may be noncompact. As a first contribution, these results are used to solve the globally optimal cooperative regulator and cooperative tracker problems for agents with identical linear time-invariant dynamics. It is found that globally optimal linear quadratic regulator (LQR) performance cannot be achieved with distributed linear control protocols on arbitrary digraphs. A sufficient condition on the graph topology is given for the existence of distributed linear protocols that solve a global optimal LQR control problem. As a second contribution, a new class of digraphs is defined, namely those whose Laplacian matrix is simple, i.e. has a diagonal Jordan form. On these graphs the globally optimal LQR problem has a distributed linear protocol solution. If this condition is satisfied, distributed linear protocols that solve the globally optimal LQR problem exist only if the performance indices are of a certain form that captures the topology of the graph; that is, the achievable optimal performance depends on the graph topology. The third contribution is an investigation of the allowed forms of global performance indices, which depend on the graph Laplacian matrix.
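
As a small illustration of the second contribution, the sketch below (written for this announcement, not taken from the talk) uses sympy to test whether a given digraph Laplacian is simple, i.e. diagonalizable, which is the graph class for which a distributed globally optimal LQR protocol is shown to exist.

```python
# Minimal sketch: build the Laplacian L = D - A of a unit-weight digraph and
# test whether it is "simple" (diagonal Jordan form).  The node numbering, edge
# list and helper function are illustrative assumptions.
import sympy as sp

def digraph_laplacian(n, edges):
    """Laplacian of a unit-weight digraph; row i collects the in-neighbours of
    node i, the convention used in consensus protocols."""
    A = sp.zeros(n, n)
    for (j, i) in edges:        # edge j -> i: node i receives information from j
        A[i, j] = 1
    D = sp.diag(*[sum(A.row(i)) for i in range(n)])
    return D - A

# Example: a directed cycle 0 -> 1 -> 2 -> 0 with one extra edge 0 -> 2
# (this digraph contains a directed spanning tree).
L = digraph_laplacian(3, [(0, 1), (1, 2), (2, 0), (0, 2)])

print("Laplacian:")
sp.pprint(L)
print("eigenvalues:", L.eigenvals())
print("simple (diagonalizable):", L.is_diagonalizable())
```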