Abstract List

 

Program & abstracts booklet

 

Polynomial approximation of Isaacs' equation and applications to control under uncertainty,

Dante Kalise, Imperial College London.

We propose a numerical scheme for the approximation of high-dimensional, nonlinear Isaacs PDEs arising in robust optimal feedback control of nonlinear dynamics. The numerical method consists of a global polynomial ansatz together with separability assumptions for the calculation of high-dimensional integrals. The resulting Galerkin residual equation is solved by means of an alternating Newton-type/policy iteration method for differential games. We present numerical experiments illustrating the applicability of our approach to the robust optimal control of nonlinear parabolic PDEs.
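
For orientation, a minimal sketch (not necessarily the exact formulation used in the talk) of a stationary Isaacs equation for an infinite-horizon robust control problem with dynamics f, running cost \ell, discount \lambda > 0, control set U and disturbance set W:

\lambda V(x) = \min_{u \in U} \max_{w \in W} \big\{ \nabla V(x) \cdot f(x,u,w) + \ell(x,u,w) \big\}, \qquad x \in \mathbb{R}^d.

A polynomial ansatz V(x) \approx \sum_i c_i \phi_i(x) turns this into a Galerkin residual equation for the coefficients c_i, which is then solved by the alternating Newton-type/policy iteration mentioned above.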

 

 

Emergent behavior of a Cucker-Smale model with distributed time delay,

Cristina Pignotti, Università dell'Aquila

We analyze a Cucker-Smale type model with distributed time delay where individuals interact with each other through normalized communication weights. Based on a Lyapunov functional approach, we provide sufficient conditions for the velocity alignment behavior. We then show that, as the number of individuals N tends to infinity, the N-particle system can be well approximated by a delayed Vlasov alignment equation. Furthermore, we establish the global existence of measure-valued solutions for the delayed Vlasov alignment equation and study its large-time asymptotic behavior. Joint work with Young-Pil Choi, Inha University, Republic of Korea.
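
As a point of reference, one common form of a Cucker-Smale system with normalized communication weights and distributed delay (the precise model in the talk may differ) reads, for i = 1, ..., N,

\dot{x}_i(t) = v_i(t), \qquad \dot{v}_i(t) = \int_0^{\tau} h(s) \sum_{j \neq i} \frac{\psi(|x_j(t-s) - x_i(t-s)|)}{\sum_{k \neq i} \psi(|x_k(t-s) - x_i(t-s)|)} \big( v_j(t-s) - v_i(t) \big) \, ds,

where \psi is a nonnegative communication rate and h is a probability density supported on the delay interval [0, \tau]. Velocity alignment means that the velocity diameter \max_{i,j} |v_i(t) - v_j(t)| tends to zero while relative positions stay bounded.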

 

 

Converse Lyapunov Theorems for Discrete-Time Switching Systems with Given Switches Digraphs,

Pierdomenico Pepe, Università dell'Aquila

In this talk it is shown that the existence of a suitable multiple Lyapunov function is a necessary and sufficient condition for a discrete-time, fully nonlinear switching system with a given switches digraph to be globally asymptotically stable. The same result is provided for input-to-state stability. The fewer the edges in the switches digraph, the fewer the inequalities involved in the necessary and sufficient Lyapunov conditions provided.
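
To fix ideas, a hedged sketch of the type of condition involved: for subsystems x^+ = f_i(x), with i ranging over a finite index set and an edge (i, j) of the switches digraph meaning that mode j is allowed to follow mode i, one asks for functions V_i and class-K_\infty bounds \alpha_1, \alpha_2, \alpha_3 such that

\alpha_1(|x|) \le V_i(x) \le \alpha_2(|x|), \qquad V_j(f_i(x)) - V_i(x) \le -\alpha_3(|x|) \quad \text{for every edge } (i, j).

Only pairs (i, j) that actually appear as edges generate inequalities, which is why fewer edges mean fewer inequalities; the exact conditions in the talk may differ.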

 

 

Modeling and Optimal Control of an Octopus Tentacle,

Simone Cacace, Università di Roma Tre

We present a control model for an octopus tentacle, based on the dynamics of an inextensible string with curvature constraints and curvature controls. We derive the equations of motion together with an appropriate set of boundary conditions, and we characterize the corresponding equilibria. The model results in a system of fourth-order, evolutive, nonlinear controlled PDEs generalizing the classical Euler dynamic elastica equation, which we approximate and solve numerically by introducing a consistent finite difference scheme. We then investigate a reachability optimal control problem associated with our tentacle model. We first focus on the stationary case, establishing a relation with the celebrated Dubins car problem, and we propose an augmented Lagrangian method for its numerical solution. Finally, we address the evolutive case: we obtain first-order optimality conditions and then solve the optimality system numerically by means of an adjoint-based gradient descent method. Joint work with Anna Chiara Lai and Paola Loreti.

 

 

Nonlocal optimal control problems: Lagrangian and Eulerian formulations,

Giulia Cavagnari, University of Pavia, Italy

This talk aims to exploit the relations between various formulations of an optimal control problem for interacting multi-particle systems. Different research fields come into play: optimal control and transport theory, to set out the variational model and analyze the underlying principles, and a random-variable approach, to deal with the problem in its various Lagrangian formulations. In particular, we consider an abstract parametrization space for the mass of agents. Here, we are interested in the time evolution of a random variable subject to nonlocal dynamics in which the control enters in different forms. We consider related nonlocal cost functionals and we study the equivalence of their infima. We then state a suitable Eulerian formulation of the problem, i.e. an optimal control problem for the corresponding laws in the space of probability measures, and we discuss conditions ensuring equivalence with the corresponding value function. Finally, we deal with stability and Gamma-convergence results relating the problems involving a finite number of agents to the mean-field ones. This is a joint work with Stefano Lisini (University of Pavia), Carlo Orrieri (University of Trento) and Giuseppe Savaré (University of Pavia).

 

 

Control design of Cyber-Physical Systems with logic specifications,

Giordano Pola, Università dell'Aquila

A challenging paradigm in the design of modern engineered systems is that of Cyber-Physical Systems (CPS). CPS are complex, heterogeneous, spatially distributed systems where physical processes interact with distributed computing units through non-ideal communication networks. Key features of CPS are heterogeneity and complexity: while physical processes are generally described by differential equations, computing units are generally described by finite state machines. The paradigm of symbolic models is a promising approach for coping with the inherent heterogeneity of CPS. Symbolic models are abstract descriptions of control systems in which each state corresponds to an aggregate of continuous states and each control label to an aggregate of control inputs. Since symbolic models are of the same nature as the mathematical models of the computing units, they offer a sound approach for solving control problems in which software and hardware interact with the physical world, as in the case of CPS. Furthermore, by using symbolic models, one can address a wealth of novel logic specifications that are difficult to enforce by means of conventional control design methods. In this talk I will describe some results I obtained in this area and present an approach based on symbolic models for the control design of CPS. In particular, I will show how a symbolic model can be constructed that approximates a nonlinear control system with any desired accuracy. I will then show how this symbolic model can be used to design digital and quantized controllers enforcing complex logic specifications on the original nonlinear control system. Some extensions of these results to more general classes of control systems will also be briefly discussed.
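
As an illustration only (a hypothetical toy construction, not the approximation algorithm with accuracy guarantees discussed in the talk), a symbolic model can be obtained by quantizing states and inputs and mapping each one-step successor back to the grid:

import numpy as np

def f(x, u, dt=0.1):
    # placeholder nonlinear dynamics: a damped pendulum with torque input
    return x + dt * np.array([x[1], -np.sin(x[0]) - 0.1 * x[1] + u])

def build_symbolic_model(x_grid, u_values):
    """transitions[(state index, input index)] = successor state index."""
    transitions = {}
    for i, x in enumerate(x_grid):
        for k, u in enumerate(u_values):
            x_next = f(np.array(x), u)
            # abstract the continuous successor to the nearest grid point
            j = min(range(len(x_grid)),
                    key=lambda m: np.linalg.norm(x_next - np.array(x_grid[m])))
            transitions[(i, k)] = j
    return transitions

# usage: a coarse grid on [-1, 1]^2 and three quantized input levels
grid = [(a, b) for a in np.linspace(-1, 1, 5) for b in np.linspace(-1, 1, 5)]
T = build_symbolic_model(grid, u_values=[-1.0, 0.0, 1.0])

The finite transition system T can then be composed with an automaton encoding the logic specification; the approximation guarantees mentioned in the talk rely on a much more careful choice of the state and input quantization.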

 

 

Optimal control over communication networks and large scale automation systems,

Alessandro D'Innocenzo, Università dell'Aquila

In this talk we present some recent results on optimal control techniques over communication networks, characterised by non-idealities induced by the communication protocol and channel, and over large-scale automation systems, characterised by a large number of system variables and by practical challenges in deriving a physics-based dynamical model. In the first part we describe the challenges arising when co-designing the control algorithm and the communication network configuration parameters, taking into account the non-idealities induced by the communication channel. We overview some recent results on this topic, focusing mainly on a novel stochastic modeling framework leveraging the class of time-homogeneous Markov jump linear systems (MJLSs), where the Markov chain transition probability matrix is assumed to be time-variant within an arbitrary polytopic set of stochastic matrices. For this class of systems we derive necessary and sufficient stability conditions and characterise optimal Linear Quadratic Regulation. We finally show that the MJLS model derived from an accurate Markov channel model makes it possible to discover and overcome the challenging subtleties arising from bursty behaviour; in particular, it can guarantee stability of the closed loop where other approaches based on a simplified channel model fail.

In the second part we describe the challenges arising when deriving a dynamical model of a large-scale automation system, providing two practical examples: energy optimisation in the Heating, Ventilation and Air Conditioning (HVAC) system of a building, and semi-active structural control of a building. Building a physics-based dynamical model for a large-scale system such as those mentioned above is often cost- and time-prohibitive. To overcome this problem we propose a methodology that exploits machine learning techniques (i.e. regression trees and random forests) to build a state-space Markov Switching Affine dynamical model of a complex system from historical data, and we apply standard Model Predictive Control (MPC) techniques. We compare our methodology with an optimal MPC benchmark and with related techniques on an energy management system.
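
For reference, a hedged sketch of the class of models involved: a Markov jump linear system

x_{k+1} = A_{\theta_k} x_k + B_{\theta_k} u_k, \qquad \Pr(\theta_{k+1} = j \mid \theta_k = i) = [P_k]_{ij},

where the mode \theta_k (e.g. the state of the communication channel) evolves according to a transition probability matrix P_k that is only known to lie in a polytope \mathrm{conv}\{P^{(1)}, \dots, P^{(L)}\} of stochastic matrices; the stability conditions and the LQR characterization mentioned above are stated over this polytopic uncertainty.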

 

 

Bilinear Control of Evolution Equations: Theory and Applications.

Cristina Urbani, GSSI

The talk will be devoted to the bilinear control of PDEs. We present some existing and new results in this field. The proofs of the results we show rely on a linearization argument and on the resolution of a moment problem. This last step requires a sharp hypothesis on the Fourier coefficients of a map involving the eigenfunctions of the second-order operator. We further discuss a sufficient condition for constructing examples of such a map.
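
The prototypical problem behind the talk (stated here only as a hedged sketch) is the bilinear control of an abstract evolution equation

y'(t) = A y(t) + p(t) B y(t), \qquad y(0) = y_0,

with a scalar control p(t), A a second-order operator with eigenpairs (\lambda_k, \varphi_k) and B a bounded perturbation; after linearization, controllability reduces to a moment problem whose solvability hinges on lower bounds for Fourier coefficients of the form \langle B \varphi_j, \varphi_k \rangle.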

 

 

Boltzmann-type optimal control problems.

Giacomo Albi, University of Verona

We are interested in a Boltzmann-type framework for the optimal control of large particle systems. We will start by reviewing suboptimal approaches based on the control of binary interaction dynamics. We will then tackle directly the optimal control of the Boltzmann equation, in particular showing its relation to mean-field optimal control problems. Finally, we will propose a stochastic hybrid algorithm able to mitigate the numerical complexity of these problems. Numerical examples will be presented in the context of consensus dynamics and swarming models.
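
As a hedged illustration of the binary-interaction viewpoint (details may differ from the talk), a controlled consensus-type interaction between two agents with states v and v_* takes the form

v' = v + \gamma (v_* - v) + u, \qquad v_*' = v_* + \gamma (v - v_*) + u_*,

where \gamma is the interaction strength and the controls (u, u_*) are chosen by minimizing a cost over the single binary interaction; a Boltzmann-type equation then describes the evolution of the agent density under a large number of such controlled interactions.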

 

 

Primal-dual forward-backward methods for PDE-constrained optimization problems

Teresa Scarinci, University of Vienna

In this talk we investigate some direct methods for solving a class of optimization problems in a Hilbert space framework. In particular, we discuss methods based on proximal splitting techniques and propose some applications to PDE-constrained optimization. We conclude the talk with a discussion of stochastic methods for problems with random data, numerical simulations, and some future research directions. The talk is based on a joint work with Caroline Geiersbach.
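
As a minimal finite-dimensional sketch of the forward-backward idea (a stand-in for the Hilbert-space, PDE-constrained setting of the talk, not the method presented there), consider min_x 0.5*||Ax - b||^2 + lam*||x||_1:

import numpy as np

def soft_threshold(z, t):
    # proximal operator of t * ||.||_1
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def forward_backward(A, b, lam, n_iter=500):
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # step size <= 1/L, L = Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)             # forward (explicit gradient) step on the smooth term
        x = soft_threshold(x - step * grad, step * lam)   # backward (proximal) step on the nonsmooth term
    return x

# usage on random data
rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 50)), rng.standard_normal(20)
x_hat = forward_backward(A, b, lam=0.1)

In the PDE-constrained case each gradient step requires one forward and one adjoint PDE solve, which is where the primal-dual and stochastic variants mentioned above become relevant.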

 

 

A Dynamic Programming approach for PDE-constrained optimal control on a tree structure

Luca Saluzzi, GSSI

The classical Dynamic Programming (DP) approach to optimal control problems is based on the characterization of the value function as the unique viscosity solution of a Hamilton-Jacobi-Bellman (HJB) equation. The DP scheme for the numerical approximation of viscosity solutions of these equations is typically based on a time discretization projected onto a fixed space triangulation of the numerical domain. The time discretization can be done by a one-step scheme for the dynamics, and the projection onto the grid typically uses polynomial interpolation. In this talk we will discuss a new approach for finite horizon optimal control problems in which we compute the value function on a tree structure built directly from the time-discrete dynamics, avoiding the use of a space triangulation to solve the HJB equation. This allows us to drop the cost of space interpolation, and the tree guarantees a perfect matching with the discrete dynamics. We also provide error estimates for the algorithm when the dynamics is discretized with an Euler method. Furthermore, this approach has been extended to high-order schemes, and we will show some examples of second-order approximation schemes. Finally, we will show the effectiveness of the method for the control of PDEs. This is a joint work with Maurizio Falcone (La Sapienza, Roma) and Alessandro Alla (PUC, Rio de Janeiro).
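
A minimal sketch of the tree idea, under simplifying assumptions (scalar dynamics, explicit Euler, a small finite control set; not the authors' code and without the pruning needed to keep the tree tractable):

import numpy as np

def f(x, u):
    return np.array([-x[0] + u])              # hypothetical one-dimensional dynamics

def running_cost(x, u):
    return float(x @ x + 0.1 * u * u)

def tree_value(x0, controls, dt, n_steps):
    # level k stores all states reachable in k Euler steps; no space grid or interpolation is used
    levels = [[np.array(x0)]]
    for _ in range(n_steps):
        levels.append([x + dt * f(x, u) for x in levels[-1] for u in controls])
    # backward dynamic programming on the tree
    V = [float(x @ x) for x in levels[-1]]    # terminal cost at the leaves
    M = len(controls)
    for k in range(n_steps - 1, -1, -1):
        V = [min(dt * running_cost(x, u) + V[M * i + j] for j, u in enumerate(controls))
             for i, x in enumerate(levels[k])]
    return V[0]

print(tree_value([1.0], controls=[-1.0, 0.0, 1.0], dt=0.1, n_steps=5))

Without pruning the tree has |U|^n leaves after n steps, so the practical scheme merges nodes that are sufficiently close to each other, which the error estimates mentioned above take into account.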

 

 

Long time behavior of first order Mean Field Games on Euclidean space

Cristian Mendico, GSSI L’Aquila and Paris Dauphine University

The aim of this talk is to present the results obtained by the speaker on the long time behavior of solutions to deterministic mean field games systems on Euclidean space. This problem was addressed on the torus T^n in [P. Cardaliaguet, Long time average of first order mean field games and weak KAM theory, Dyn. Games Appl. 3 (2013), 473–488], where solutions are shown to converge to the solution of a certain ergodic mean field games system on T^n. By adapting the approach in [A. Fathi, E. Maderna, Weak KAM theorem on non compact manifolds, NoDEA Nonlinear Differential Equations Appl. 14 (2007), 1–27], we identify structural conditions on the Lagrangian under which the corresponding ergodic system can be solved in R^n. We then show that time-dependent solutions converge, in some sense, to the solution of such a stationary system on all compact subsets of the whole space.
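
For reference (signs and conventions in the quoted literature may differ slightly), the time-dependent first-order MFG system on R^n reads

-\partial_t u + H(x, Du) = F(x, m(t)), \qquad \partial_t m - \mathrm{div}\big(m\, D_p H(x, Du)\big) = 0,

with an initial condition on m and a terminal condition on u, while the ergodic system has the form H(x, Du) = F(x, m) + \bar{\lambda}, \ \mathrm{div}(m\, D_p H(x, Du)) = 0, for a constant \bar{\lambda} playing the role of the long-time average cost; the structural conditions on the Lagrangian guarantee that this stationary system can be solved on the whole of R^n rather than only on the torus.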

 

 

Optimal control under controlled loss constraints via reachability approach and compactification

Athena Picarelli, Università degli studi di Verona

We study optimal control problems under controlled-loss constraints at several fixed dates. It is well known that for such problems the characterization of the value function by a Hamilton-Jacobi-Bellman equation requires additional strong assumptions involving an interplay between the set of constraints and the dynamics of the controlled system. To treat the problem in the absence of these assumptions, we first translate it into a state-constrained stochastic target problem and then apply a level-set approach to describe the reachable set. The main advantage of our approach is that it allows us to easily handle the state constraints by an exact penalization. However, this target problem involves a new set of control variables that are unbounded. A "compactification" of the problem is then performed.
 
 
Economic MPC: Stability and Performance
 
Mario Zanon, IMT School for Advanced Studies Lucca
 
Model Predictive Control (MPC) has recently gained popularity due to the development of efficient algorithms, which have made it possible to solve optimal control problems at unprecedented rates. One of the most attractive advantages of optimization-based control is the possibility of explicitly optimizing a prescribed performance criterion, which often relates to an economic gain. Schemes directly optimizing performance have therefore been named economic MPC, though this class of problems includes all formulations that do not explicitly penalize the deviation from a given setpoint or trajectory (e.g. minimum-time problems). The main challenge for such schemes is twofold: the algorithmic complexity is increased, and it is not trivial to prove some form of stability. We will present the main challenges faced when attempting to prove stability and propose a practical approach for stability-enforcing approximate economic MPC.
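
Schematically (a hedged sketch, not the specific formulation of the talk), at each sampling instant an MPC controller solves

\min_{u_0, \dots, u_{N-1}} \ \sum_{k=0}^{N-1} \ell(x_k, u_k) + V_f(x_N) \quad \text{s.t.} \quad x_{k+1} = f(x_k, u_k), \ x_0 = \hat{x},

and applies u_0. In tracking MPC the stage cost is positive definite about a setpoint, e.g. \ell(x, u) = \|x - x_{\mathrm{ref}}\|_Q^2 + \|u - u_{\mathrm{ref}}\|_R^2, which is what standard stability proofs exploit; in economic MPC \ell is an arbitrary economic criterion, and the loss of this positive definiteness about the optimal steady state is precisely what makes stability nontrivial.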
 
 
Uncertainty damping in kinetic models of collective phenomena
 
Mattia Zanella, Politecnico di Torino
 
We develop a hierarchical description of controlled multi-agent systems in the presence of uncertain quantities by means of kinetic-type control strategies, with applications to social and traffic models. Binary feedback controls are designed at the level of agent-to-agent interactions and then upscaled to the global flow via a kinetic approach based on the Boltzmann equation. The passage to hydrodynamic equations for constrained kinetic models of collective behavior is discussed, taking into account several closure methods. The action of the control is capable of restraining structural uncertainties naturally embedded in realistic dynamics and of promoting effective decision-making tasks.

 
Hamilton-Jacobi-Bellman Equation for Control Systems with Friction
 
Fabio Tedone, Gran Sasso Science Institute
 
In this talk we propose a new framework for modeling control systems in which dynamic friction occurs. The model consists of a controlled differential inclusion with a dissipative, upper semicontinuous right-hand side, which still preserves existence and uniqueness of the solution for each given input function u(t). Under general hypotheses, we are able to derive the Hamilton-Jacobi-Bellman equation for the related free-time optimal control problem.


 
