EECI IGSC 2025 Summaries

The document outlines a series of courses offered by the International Graduate School on Control in 2025, covering topics such as data-driven control design, modeling and control of continuum soft robots, analysis and design methods for time-delay systems, sliding mode controllers, neural networks for optimal control, and learning-based predictive control. Each course includes a summary, course outline, and details about the instructors. The courses aim to equip participants with advanced theoretical and practical knowledge in various control systems and methodologies.


2025

International Graduate School on Control


www.eeci-igsc.eu

M01 PARIS-SACLAY
Data-driven Control Design
27/01/2025 - 31/01/2025

Claudio De Persis Pietro Tesi


University of Groningen University of Florence
https://www.rug.nl/staff/c.de.persis/
https://cercachi.unifi.it/p-doc2-0-0-A-3f2b342f322930.html
Summary of the course
Data-driven control is concerned with the use of data to design controllers for dynamical systems whose model is unknown or
uncertain. This course focuses on a recently introduced method for the so-called direct design of data-driven controllers for
linear and nonlinear systems. The design is made possible by input-state or input-output data that are collected during offline
experiments and some a priori information about the system to be controlled. The adjective “direct” refers to the
characteristic that data are used to formulate data-dependent convex programs whose solution “directly” returns the
controller that solves the desired control problem, without explicitly identifying the dynamics of the system.
The course will review a few of the results that have been obtained in the last few years. These results are based on basic
control-theoretical tools, which will be briefly reviewed during the course to make the latter as self-contained as possible.
Some numerical tools to solve the convex programs that return the controllers will also be discussed. The programme below
contains some of the topics that might be covered. The precise contents will be decided during the course based on the
interest of the attendees and the time available.

Outline
1. Introduction to data-driven control, non-parametric representations of systems and their implication in data-
driven control design.
2. Data-driven control for linear systems: the case of unperturbed data
2.1 Data-dependent representations of linear closed-loop systems
2.2 The design of data-dependent state-feedback stabilizers
2.3 Designing controllers from input-output data: the output feedback stabilization problem
2.4 Stabilization and optimality: The design of the Linear Quadratic Regulator via data
2.5 Additional aspects: discrete- vs continuous-time systems, software for data-driven control design
3. Data-driven control design with perturbed data
3.1 Data-dependent representations in the case of perturbed data
3.2 Matrix Elimination results and robust control design with perturbed data
3.3 Statistical noise on data: results in probability and sample complexity
4. Data-driven control design for nonlinear systems
4.1 Data and Lyapunov’s methods: stabilization of the first approximation
4.2 Control of nonlinear systems expressed via libraries of functions: data-driven feedback linearization
methods
4.3 Data-driven control design for special classes of nonlinear systems: bilinear, Lur’e and polynomial
systems
4.4 Additional results (design via contraction, tracking)
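
To make the "direct design" idea above concrete, the following is a minimal sketch (not course material) of a data-driven state-feedback design of the kind covered in the course, for noiseless input-state data from a discrete-time linear system; the system matrices, data length and the use of CVXPY are illustrative choices.

import numpy as np
import cvxpy as cp

# "Unknown" system, used here only to generate the experimental data.
A = np.array([[1.0, 0.5],
              [0.0, 1.1]])
B = np.array([[0.0],
              [1.0]])
n, m, T = 2, 1, 10

rng = np.random.default_rng(1)
U0 = rng.standard_normal((m, T))              # exciting input sequence
X = np.zeros((n, T + 1))
X[:, 0] = rng.standard_normal(n)
for t in range(T):
    X[:, t + 1] = A @ X[:, t] + B @ U0[:, t]
X0, X1 = X[:, :T], X[:, 1:]                   # input-state data matrices

# Direct design: find Q such that P = X0 Q > 0 and the data-dependent
# Lyapunov LMI [[P, X1 Q], [(X1 Q)^T, P]] > 0 holds; then K = U0 Q P^{-1}.
Q = cp.Variable((T, n))
M = cp.Variable((2 * n, 2 * n), symmetric=True)
constraints = [M[:n, :n] == X0 @ Q,
               M[:n, n:] == X1 @ Q,
               M[n:, n:] == X0 @ Q,
               M >> 1e-6 * np.eye(2 * n)]
cp.Problem(cp.Minimize(0), constraints).solve()

K = U0 @ Q.value @ np.linalg.inv(X0 @ Q.value)
# The true matrices are used only to verify the outcome of the direct design.
print("closed-loop eigenvalues:", np.linalg.eigvals(A + B @ K))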

M02 LILLE
Modeling and Control of Continuum Soft Robots
10/03/2025 - 14/03/2025

Cosimo Della Santina Daniel Feliu Talegon


TU Delft TU Delft
http://cosimodellasantina.eu [email protected]
[email protected]

Abstract of the course


Animals still substantially outperform classical robots in performance, reliability, and efficiency. Interestingly,
their physical characteristics differ substantially from those of robots. Elastic tendons, ligaments, and muscles
enable animals to interact robustly with the external world and perform dynamic tasks. In contrast,
traditional robots have generally been very stiff and heavy. Robotics researchers have therefore
departed from the as-stiff-as-possible principle in favor of lightweight and compliant structures. Taking
inspiration from the natural example, elastic and soft components are included in the robot design, yielding
articulated and continuum soft robots. This course focuses on the latter, which are entirely made of
continuously deformable elements, bringing them close to invertebrate animals. This recent explosion of new
robotic concepts has opened up the challenge of developing effective control strategies for the soft body: a
nonlinear mechanical system with a large, possibly infinite, number of DOFs and, as a result, a large
degree of underactuation from the control point of view. This course aims to introduce this control
challenge. We will review established results in the field, introduce the most recent advances, and discuss
interesting open issues.

The course will include practical sessions in MATLAB.

Topics
▪ Introduction to robotics beyond rigid robots
▪ Modelling soft robots:
▪ constant curvature,
▪ strain discretization,
▪ general form of equations
▪ Controlling soft robots:
▪ shape regulation (general case and subclasses),
▪ shape tracking,
▪ task-space control
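
As a small companion to the constant-curvature item above, here is a minimal sketch (simplified planar case, illustrative values, not the course's code) of the kinematics of a single constant-curvature segment.

import numpy as np

def cc_tip_pose(kappa, length):
    """Planar constant-curvature segment of arc length 'length' and curvature
    'kappa': returns the tip position and the tip orientation angle."""
    theta = kappa * length                    # total bending angle
    if abs(kappa) < 1e-9:                     # straight-segment limit
        return np.array([0.0, length]), 0.0
    x = (1.0 - np.cos(theta)) / kappa
    y = np.sin(theta) / kappa
    return np.array([x, y]), theta

pos, ang = cc_tip_pose(kappa=2.0, length=0.5)
print("tip position:", pos, "tip angle [rad]:", ang)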

M03 PARIS-SACLAY
Analysis and Design Methods for Time-Delay Systems
17/03/2025 - 21/03/2025

Wim Michiels Silviu-Iulian Niculescu


Department of Computer Science Université Paris-Saclay, CNRS, CentraleSupélec
KU Leuven, Belgium Laboratoire des Signaux et Systèmes (L2S)
https://www.kuleuven.be/wieiswie/en/person/00011378 Inria team «DISCO»
[email protected] https://cv.archives-ouvertes.fr/silviu-iulian-niculescu
[email protected]
Summary of the course
The aim of this course is to describe fundamental properties of systems that include time-delays in their
representation, and to present an overview of methods and techniques for their analysis and control design. The
focus lies on systems described by functional differential equations and on frequency-domain techniques,
grounded in numerical linear algebra (e.g., eigenvalue computations) and optimization, but the main principles
behind time-domain methods are addressed as well. Several examples (from chemical to mechanical
engineering, from haptic systems and tele-operation to communication networks, from biological systems to
population dynamics and genetic regulatory networks) complete the presentation. In particular, the synergies
of analytic and computational tools for analysis and design are highlighted. The course is complemented with
homework assignments in which analysis and control design problems are solved using dedicated software tools.

Outline
Theory:
• Classification and representation
• Definition and properties of solutions of delay systems
• Spectral properties of linear time-delay systems
Analysis:
• Frequency-domain approaches
• Stability domains in parameter spaces
• Relative stability and synchronization
• Robustness and performance measures
• Time-domain, Lyapunov-based criteria, Lyapunov matrices and converse theorems
Control design:
• Fundamental limitations of delays in control loops
• Structured stabilizing and optimal H-2 and H-infinity
controllers (fixed-order, PID, decentralized,…)
• Delay compensation using predictor and periodic feedback
• Improving stability and performance by using delays as control parameters
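
As a minimal numerical illustration of why delays matter for stability (a sketch, not the dedicated software used for the homework): the scalar equation x'(t) = -x(t - tau) is exponentially stable for tau < pi/2 and unstable beyond, which the simple simulation below reproduces.

import numpy as np

def simulate_delay(tau, t_end=40.0, dt=1e-3):
    """Forward-Euler simulation of x'(t) = -x(t - tau) with constant history x = 1."""
    d = int(round(tau / dt))                  # delay expressed in time steps
    n = int(round(t_end / dt))
    x = np.ones(n + d)                        # the first d entries store the history
    for k in range(d, n + d - 1):
        x[k + 1] = x[k] + dt * (-x[k - d])
    return x[d:]

for tau in (1.0, 2.0):                        # pi/2 ~ 1.57 separates the two regimes
    x = simulate_delay(tau)
    print(f"tau = {tau}: |x(t_end)| = {abs(x[-1]):.3e}")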

M04 ILMENAU
Lyapunov Based Design of Sliding Mode Controllers
31/03/2025 - 04/04/2025

Jaime Moreno Leonid Fridman


Instituto de Ingeniería Facultad de Ingeniería
Universidad Nacional Autónoma de México Universidad Nacional Autónoma de México
[email protected] [email protected]

Summary of the course


Sliding mode algorithms (SMA) have proved to be effective in dealing with complex dynamical systems affected
by disturbances and/or uncertainties. These robustness properties have also been exploited in the development of
nonlinear observers for state and unknown-input estimation. Higher-order sliding mode algorithms have been
developed to force the switching function and a number of its time derivatives to zero in finite time.

The proposed course reflects recent results of the authors on novel types of discontinuous, continuous
and Lipschitz sliding mode controllers and their properties.

Outline
1. MATHEMATICAL TOOLS
Solutions of equations with discontinuous right-hand side ; Stability rates: finite-, fixed- and predefined-time
convergence ; Matched and unmatched perturbations/uncertainties
2. FIRST ORDER SLIDING MODE ALGORITHMS (FOSMA)
Relay FOSMA ; Unit FOSMA
3. SLIDING SURFACES DESIGN FOR FOSMA
Forced sliding surfaces design ; Integral sliding modes ; Nominal Lyapunov function based surface design ;
Control Lyapunov functions for sliding mode controller design
4. SECOND ORDER SMA (SOSMA)
Discontinuous SMA: Twisting, Terminal, Quasi-Continuous ; Lipschitz continuous SMA for systems with relative
degree one ; Super-twisting algorithm ; Robust Exact Differentiator (RED)
5. LYAPUNOV BASED DESIGN OF HIGHER ORDER SMA (HOSMA)
Homogeneity: homogeneity weights and degrees ; Discontinuous HOSMA ; Continuous HOSMA (CHOSMA)
6. OUTPUT BASED DESIGN OF HIGHER ORDER SMA
Arbitrary order RED. Output based HOSMA and CHOSMA
7. SLIDING MODE BASED OBSERVERS
Strong observability and detectability ; RED based observers for LTI, LTV and nonlinear systems ; RED based
identification of uncertainties and parameters
8. CHATTERING
Chattering analysis caused by continuous, discontinuous and Lipschitz SMA ; CHOSMA gain design minimizing
chattering and energy consumption
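
To fix ideas on one algorithm from the outline, here is a minimal simulation sketch of the super-twisting algorithm rejecting a disturbance with bounded derivative on a relative-degree-one plant; the plant, disturbance and gain choices are illustrative, not a tuning recipe.

import numpy as np

dt, t_end = 1e-4, 10.0
L = 0.5                                    # bound on the disturbance derivative
k1, k2 = 1.5 * np.sqrt(L), 1.1 * L         # commonly used (illustrative) gain choice

s, v = 1.0, 0.0                            # sliding variable and integral term
for k in range(int(t_end / dt)):
    t = k * dt
    d = 0.5 * np.sin(t)                    # matched disturbance with |d'(t)| <= L
    u = -k1 * np.sqrt(abs(s)) * np.sign(s) + v
    v += dt * (-k2 * np.sign(s))           # super-twisting integral action
    s += dt * (u + d)                      # plant: s' = u + d(t)

print("sliding variable after 10 s:", s)   # settles in a small neighbourhood of zero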

M05 LAUSANNE
Neural Networks for Optimal Control
31/03/2025 - 04/04/2025

Giancarlo Ferrari Trecate Danilo Saccani Leonardo Massai


EPFL EPFL EPFL
https://people.epfl.ch/giancarlo.ferraritrecate
https://people.epfl.ch/danilo.saccani
https://people.epfl.ch/l.massai

Summary of the course


The effectiveness of control algorithms in large-scale cyber-physical systems relies not only on advancements in sensing,
computation, and communication but also on the availability of methods to design controllers capable of stabilizing nonlinear
systems under nominal operating conditions. However, stabilization alone is insufficient; achieving satisfactory performance
is equally critical. In Optimal Control (OC), performance is typically encoded in the cost function that the control policy aims to
minimize. This highlights the need for OC algorithms that leverage Neural Network (NN) models to enable sophisticated
closed-loop behaviors, such as collision avoidance or waypoint tracking in robot swarms. The challenge lies in constraining the
search for high-performance controllers to those that ensure closed-loop guarantees, such as stability and robustness.
This course will equip PhD students with contemporary theoretical and computational tools for designing and deploying NN-
based controllers with embedded theoretical guarantees for closed-loop systems. The course begins with a focus on recent
optimal control methods for linear systems, emphasizing the direct design of closed-loop maps rather than control policies
(e.g., Youla parametrizations, Internal Model Control, System-Level Synthesis). We then review stability tools for nonlinear
systems, including L2 gains, dissipativity, and the small-gain theorem. Building on the first part, we teach a recent approach to
nonlinear OC, termed "performance boosting," which utilizes NNs and automatic differentiation to enhance closed-loop
system performance without compromising existing properties. The final section extends performance boosting to large-scale
systems, where multiple nonlinear local systems interact dynamically, relying solely on local measurements for control
deployment.
Lectures will be supplemented with exercise papers and coding exercises.

Outline
1. Designing Optimal Closed-Loop Maps for Linear Systems
• Stable transfer matrices, internal stability, Youla parametrization, Internal Model Control (IMC)
• Convex optimal control over all stabilizing policies: guarantees for both model-based and model-free cases
• Finite-dimensional approximations and state-space implementations
2. Performance Boosting for Nonlinear Optimal Control
• Signal-space notation, nonlinear stable operators, L2 gains, and the small-gain theorem
• IMC parametrization of stabilizing nonlinear policies, robustness for uncertain models
• NN parametrizations of stabilizing controllers
3. Performance Boosting at Scale
• Dissipativity for interconnected systems
• Distributed Performance Boosting
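
The following is a generic sketch of the computational pattern underlying the course, namely optimizing a neural controller by differentiating a closed-loop cost through simulated rollouts with automatic differentiation. It does not reproduce the specific parametrizations with closed-loop guarantees taught in the course; the plant, horizon and network are illustrative.

import torch
import torch.nn as nn

torch.manual_seed(0)
A = torch.tensor([[1.0, 0.1], [0.0, 1.0]])      # illustrative linear plant
B = torch.tensor([[0.0], [0.1]])

policy = nn.Sequential(nn.Linear(2, 16), nn.Tanh(), nn.Linear(16, 1))
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)

for it in range(300):
    x = torch.tensor([[2.0, 0.0]])              # rollout initial condition
    cost = torch.zeros(())
    for t in range(50):                         # differentiate through the rollout
        u = policy(x)
        cost = cost + (x ** 2).sum() + 0.1 * (u ** 2).sum()
        x = x @ A.T + u @ B.T
    opt.zero_grad()
    cost.backward()
    opt.step()

print("closed-loop cost after training:", float(cost))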

M06 ZURICH
Learning-Based Predictive Control
07/04/2025-11/04/2025

Melanie N. Zeilinger Lorenzo Fagiano Lukas Hewing


ETH Zurich, Switzerland Politecnico di Milano, Italy The Exploration Company, Germany
[email protected] [email protected] [email protected]

Summary of the course

Learning-based Model Predictive Control (MPC) provides advanced control systems with the capability to exploit real-time collected information to improve performance in the face of uncertainty, while at the same time maintaining high safety standards. This is a crucial requirement of next-generation control applications, such as autonomous passenger cars, or autonomous aerial, marine and ground drones for civil applications. In established industrial systems, such a capability can also bring significant benefits, reducing commissioning time and cost, and the effects of product/process variability.

After a brief review of the fundamentals of MPC, the course presents an overview of existing learning-based MPC methods, followed by a deep dive into the theory and applications of selected techniques for different problem settings. These include stochastic and unknown-but-bounded uncertainty, and reactive techniques. A discussion of advanced topics and active research directions concludes the module.
Outline

I. Review of (learning-based) MPC


1. Fundamentals of MPC
2. Classification of learning-based extensions

II. Set membership methods in MPC


1. Introduction to set membership estimation
2. Model learning with guarantees
3. Adaptive MPC via on-line set membership identification

III. Stochastic methods in MPC


1. Stochastic model learning
(Bayesian linear regression/Kalman Filtering/GPs)
2. Stochastic MPC based on scenario optimization

IV. Model predictive safety filters


1. Invariance-based safe learning
2. Nominal predictive safety filter
3. Robust extensions

V. Advanced topics and research directions

M07 ROME
Static and Dynamic Optimisation
07/04/2025-11/04/2025

Giordano Scarciotti Thulasi Mylvaganam


CAP Group, EEE Department Department of Aeronautics,
Imperial College London, United Imperial College London, United
Kingdom Kingdom
[email protected] [email protected]
Abstract

Optimisation is a cornerstone of science and engineering: we are constantly striving to find the "best" solution to a variety of problems. In this IGSC, starting from a "static perspective", we consider tools to formulate and solve general constrained and unconstrained optimisation problems. Necessary and sufficient conditions of optimality and basic optimisation algorithms are covered in a mathematically rigorous manner. We then consider more advanced topics, including convex optimisation and multi-objective optimisation, from an applications-driven mindset. The relevance of the theory to a variety of practical domains, such as fitting, finance, classification, biology and advertising, is explored, and solutions implemented*. We then consider the "dynamic perspective", by turning our attention to dynamical systems and the question of how to design control laws to optimise one or more performance criteria. Such problems, termed dynamic optimisation problems, include optimal control and differential games, and are (with a few exceptions) notoriously difficult to solve in practice. We consider the two main approaches to characterise their solutions, i.e. dynamic programming and Pontryagin's minimum principle. In addition to a review of the classical theory, we outline various systematic strategies to construct approximate solutions at relatively low computational cost.

*Basic knowledge of Python is required for this part. If in doubt, please email Dr Scarciotti, who can suggest online resources to get you up to speed quickly.
Course outline
• Introduction to optimisation
• Necessary and sufficient conditions of optimality
• Basic optimisation algorithms
• Convex optimisation
• Multi-objective optimisation
• Use of CVX to pose and solve practical optimisation problems.
• Introduction to dynamic optimisation
• Classical theory of dynamic optimisation
• Computationally-efficient strategies to solve dynamic optimisation problems
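
Since the hands-on part is run in Python (see the footnote above), the sketch below shows the kind of workflow involved, using CVXPY as an illustrative Python counterpart of CVX on a small constrained least-squares problem; data and constraints are made up for the example.

import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 5))
b = A @ np.array([1.0, -0.5, 0.0, 2.0, 0.3]) + 0.05 * rng.standard_normal(30)

x = cp.Variable(5)
objective = cp.Minimize(cp.sum_squares(A @ x - b))   # least-squares fitting cost
constraints = [x >= 0, cp.sum(x) <= 3]               # illustrative convex constraints
cp.Problem(objective, constraints).solve()
print("optimal x:", np.round(x.value, 3))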

M08 LONDON
Multi-agent optimization and learning: resilient and adaptive solutions
06/05/2025 - 09/05/2025

Nicola Bastianello Ruggero Carli Luca Schenato


KTH, Stockholm, Sweden University of Padova, Italy University of Padova, Italy
[email protected] [email protected] [email protected]
https://bastianello.me
https://automatica.dei.unipd.it/ruggero-carli/
https://automatica.dei.unipd.it/luca-schenato/

Course website https://sites.google.com/view/eeci-multi-agent-optimization (schedule, venue/travel, logistics)


Summary
Recent technological advances have enabled the widespread adoption of intelligent devices in many applications, such as
healthcare, edge computing, transportation, robotics, smart grids. These devices are equipped with communication and
computational resources, which allow them to learn from the data they collect. However, to improve the accuracy of
the models they train, the paradigm of decentralized learning is increasingly being deployed. There is therefore a need for
algorithmic advances that can support cooperative learning and optimization. The course will provide a thorough introduction
to the state of the art in decentralized learning, both with federated and peer-to-peer communication architectures. The
course will cover different algorithmic approaches, e.g. gradient-based and dual methods. A particular emphasis will be given
to the practical challenges that arise in this context, such as asynchrony and limited communications.
Outline
1. Introduction and motivating examples (healthcare, edge/fog computing, transportation, robotics, smart grids)
2. Decentralized learning and optimization
• From centralized to decentralized
• Practical challenges
• Decentralized cooperative architectures
3. Federated learning
• Deep learning
• Privacy and resilience to attacks
4. Consensus and distributed optimization
• The consensus algorithm: standard, accelerated, push-sum/ratio, broadcast w/ faulty communications
• Consensus-based distributed optimization: gradient tracking and Newton
• Non-expansive operators for optimization: background, operator-based algorithms (proximal gradient, ADMM,
primal-dual, …)
• Application to decentralized asynchronous and lossy networks: a stochastic operators approach
5. Current trends
• Online distributed optimization (prediction-correction, control-theoretical approaches)
• Data-driven optimization, privacy, human-in-the-loop
• Frontiers in applications
6. Hands-on coding experiences
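
As a minimal illustration of the consensus algorithm listed in item 4 (a sketch on an illustrative four-node ring graph): repeatedly averaging with one's neighbours through a doubly stochastic weight matrix drives all agents to the average of their initial values.

import numpy as np

W = np.array([[0.5 , 0.25, 0.0 , 0.25],      # doubly stochastic weights on a ring graph
              [0.25, 0.5 , 0.25, 0.0 ],
              [0.0 , 0.25, 0.5 , 0.25],
              [0.25, 0.0 , 0.25, 0.5 ]])

x = np.array([1.0, 3.0, -2.0, 6.0])          # local values held by the agents
for _ in range(50):
    x = W @ x                                # each agent averages with its neighbours
print("states after 50 iterations:", np.round(x, 4), "(initial average is 2.0)")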

M09 PARIS-SACLAY
Dissipativity in Optimal Control - Turnpikes, Predictive Control, and Uncertainty
12/05/2025-16/05/2025

Lars Grüne Timm Faulwasser


University of Bayreuth TU Hamburg
https://num.math.uni-bayreuth.de/en/team/lars-gruene/
https://www.tuhh.de/ics/teampages/timm-faulwasser

Summary of the course


The optimal control twin breakthroughs, i.e. Pontryagin's maximum principle and Bellman's dynamic programming principle,
and the dissipativity notion for open systems conceived by Jan C. Willems, are supporting pillars of systems and control. On
this canvas, this course explores the constitutive relations between optimal control and dissipativity.
The week commences with a brief and example-driven introduction to optimal control formulations in continuous and
discrete time, and we comment on the challenges that arise from infinite-horizon problems. We then turn towards
dissipativity, discussing how optimal control has been at the very core of the concept since its inception. We comment on the
surprisingly rich set of systems-and-control problems that admit a dissipativity-based analysis.
After this introduction we explore the turnpike phenomenon in optimal control – the first observations of which can be traced
back to John von Neumann and Frank P. Ramsey. We discuss the deep link between dissipativity notions for optimal control
problems and the turnpike phenomenon as well as the relation to the optimality system implied by the maximum principle.
Moving from open-loop to feedback considerations, we show how dissipativity helps to analyze the properties of receding-
horizon approximations to infinite-horizon problems, i.e., we close the loop with model predictive control. Furthermore, we
explore how the dissipativity-based framework can be extended to stochastic problems. Throughout the week our discussions
are illustrated with examples from different application domains such as process control, mechanics, thermodynamics, and
energy. Moreover, the students will conduct numerical experiments in class. The course concludes with an outlook on open
problems and on ongoing research.

Outline
1. Introduction
• Optimal Control
• Dissipativity and Optimal Control
• The Turnpike Phenomenon
2. Turnpike and Dissipativity
• Detectability and Turnpikes
• Turnpikes and the Maximum Principle
• Infinite-horizon optimal control and dissipativity
3. Predictive Control
• Economic MPC and Dissipativity
• Stochastic Turnpike and MPC
4. Advanced topics
• Discounted Optimal Control and turnpike
• port-Hamiltonian systems and symmetries
5. Summary and Outlook
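
As a small numerical illustration of the turnpike phenomenon discussed above (a sketch with illustrative scalar dynamics and cost): in the finite-horizon problem below, the optimal trajectory spends most of the horizon near the optimal steady state of the associated static problem, despite the prescribed initial and terminal states.

import numpy as np
import cvxpy as cp

N, a, b = 60, 0.9, 1.0                       # horizon and scalar dynamics x+ = a x + b u
x = cp.Variable(N + 1)
u = cp.Variable(N)
cost = cp.sum(cp.square(x[:N] - 2.0) + 0.1 * cp.square(u))   # economic stage cost
constraints = [x[0] == 0.0, x[N] == 0.0]
constraints += [x[t + 1] == a * x[t] + b * u[t] for t in range(N)]
cp.Problem(cp.Minimize(cost), constraints).solve()

print("states in the middle of the horizon:", np.round(x.value[25:35], 3))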

M10 ISTANBUL
Quantify your uncertainties! The input-to-state stability framework
12/05/2025 - 16/05/2025

Antoine Chaillet Iasson Karafyllis


CentraleSupélec National Technical University of Athens
https://sites.google.com/site/antoinechaillet
https://scholar.google.com/citations?user=bwRBLesAAAAJ
Summary of the course

The notion of Input-to-State Stability (ISS) plays a fundamental role in modern nonlinear control theory. It makes it possible to
quantify the impact of an exogenous disturbance on the system's performance. The success of ISS is explained by
powerful Lyapunov-based characterizations, which offer valuable means to establish it in practice. This is also true
for some variants of ISS, including integral-ISS (which measures the disturbance's influence through its energy
rather than its magnitude) or Input-to-Output Stability (which focuses only on specific outputs). This module
provides the theoretical background for these notions and illustrates their use in important control problems (e.g.,
backstepping, Lyapunov redesign and high-gain observer design). While covering most of the central notions of the
ISS framework, this course makes an intensive use of simple examples to intuitively grasp the presented concepts,
and illustrates their use in various application domains such as neuroscience, spacecraft dynamics, robotics, or
bioprocesses. The course is presented in the context of systems modeled by ordinary differential equations, but
some recent extensions to time-delay systems and systems ruled by partial differential equations will also be
mentioned.

Outline

1. Introduction: what can go wrong with nonlinear dynamics?


2. Stability notions in the absence of disturbances
3. Input-to-State Stability (ISS): definition, Lyapunov-based and solutions-based
characterizations, control laws to induce ISS, interconnection (cascade and feedback)
4. Integral ISS (iISS) and Strong iISS
5. Input-to-Output Stability (IOS) notions
6. Time-delay and PDE extensions.
7. Applications to control problems
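
For readers meeting ISS for the first time, the defining estimate can be recalled informally (standard material, stated here only as a reminder): for x' = f(x, d), ISS asks for a class-KL function beta and a class-K function gamma such that |x(t)| <= beta(|x(0)|, t) + gamma(sup_{0<=s<=t} |d(s)|) for all t >= 0. A classical scalar example is x' = -x + d, whose solutions satisfy |x(t)| <= e^{-t} |x(0)| + sup_{0<=s<=t} |d(s)|, so this system is ISS with linear gain gamma(r) = r.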

M11 LIEGE
Fast and flexible multi-agent decision-making
19/05/2025 - 23/05/2025

Anastasia Bizyaeva Alessio Franci


Cornell University University of Liège
https://anastasiabzv.github.io/ http://sites.google.com/site/francialessioac/
Summary of the course
Advancing our understanding of the decision-making behavior of multi-agent systems inspires challenging questions and
benefits many research areas and practical applications. A fundamental and unifying question is how a large group of
interacting agents makes decentralized choices that enhance performance in the presence of uncertainty and dynamically
changing context even when individuals are limited in sensing, computation, and actuation. Examples of multi-agent decision-
making in engineering include safe, efficient navigation in multi-vehicle networks, coordination of multi-robot teams, human-
robot collaboration, and multi-robot task allocation. In biological science, examples include collective motion in animal
groups, phenotypic differentiation and social behaviors in microorganisms, and the dynamics of cognitive computations in
neural systems. In social science, examples include the role of social networks and social behavior in governance, in financial
trading, in international diplomacy, in political polarization, and in public opinion shaping.
A central concern of this course is that a multi-agent system should be capable of decision-making that is fast and flexible if it
is to successfully manage the uncertainty, variability, and dynamic change encountered when operating in the real world.
Decision-making is fast if it breaks indecision as quickly as indecision becomes costly. This requires fast divergence away from
indecision in addition to fast convergence to a decision. Decision-making is flexible if it adapts to signals important to
successful operation, even if they are weak or rare. This requires tunable sensitivity to input for modulating regimes in which
the system is ultrasensitive and in which it is robust. Nonlinearity and feedback in the decision-making process are necessary
to meet these requirements. This course reviews theoretical principles, mathematical tools, analytical results, related
literature, and applications of decentralized nonlinear opinion dynamics that enable fast and flexible decision-making among
multiple options for multi-agent systems interconnected by communication and belief system networks. The theory and tools
provided form a principled and systematic framework for analyzing and designing decision-making in engineered, biological,
and social systems. At the end of this course the students will understand the basics of bifurcation theory and its role in
decision making and apply this knowledge to analyze biological and social multi-agent decision-making, as well as to
design fast and flexible multi-agent decision-making in engineered systems.
Outline
1. Theoretical principles
• Pitchfork Bifurcation as a Principle for Two-option Indecision-breaking
• Generalization to Multiple Options
• Model and Model-independent Approaches to Indecision-breaking
2. Fast and flexible multi-agent decision-making dynamics and analytical results
• Opinions, Attention, Networks, Inputs and Biases
• Nonlinear Multi-agent Multi-option Opinion and Attention Dynamics
• Analytical results for Two Options
• Generalization to Multiple Options
3. Connections to existing decision-making dynamics
• Weighted Averaging, Consensus Dynamics, Variations and Extensions
• Honeybee Decision-Making Dynamics
• Connections to Cognitive Science and Neuroscience
4. Technological applications: robotics, machine learning, neuromorphic engineering
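
As a minimal illustration of the pitchfork mechanism for indecision-breaking in item 1 (a sketch of the scalar normal form, not the course's multi-agent model; values are illustrative):

import numpy as np

def simulate(u, b, x0=0.01, dt=1e-2, steps=2000):
    """Pitchfork normal form x' = u*x - x**3 + b: the bifurcation parameter u and a
    small input b determine whether and how the indecision state x = 0 is broken."""
    x = x0
    for _ in range(steps):
        x += dt * (u * x - x ** 3 + b)
    return x

print("u = -1, b = 0     ->", simulate(-1.0, 0.0))    # indecision persists (x stays near 0)
print("u = +1, b = 0     ->", simulate(+1.0, 0.0))    # a decision forms, sign set by the initial state
print("u = +1, b = -0.05 ->", simulate(+1.0, -0.05))  # a weak input biases the outcome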

M12 BARCELONA
An Overview of Observer Design Methods for Nonlinear Systems
19/05/2025 - 23/05/2025

Vincent Andrieu Daniele Astolfi


CNRS CNRS
sites.google.com/site/vincentandrieu/ sites.google.com/site/astolfidaniele/home

Summary of the course


The purpose of this course is to give an overview of the main synthesis techniques of state observers for nonlinear dynamical
systems. The lectures will start by addressing some general comments on the "estimation problem", that is, reconstructing the
full information of a dynamical process on the basis of partially observed data. We will then introduce a particular type of
algorithm: the asymptotic observer. Some necessary conditions that ensure convergence of the estimate toward the state of
the system will be introduced: detectability and its infinitesimal characterization.

Then, based on a characterization of detectability, a first class of observers will be presented. Some methods to design such
observers will be introduced based on numerical methods.

The next part of the course presents the three main families of observers based on stronger observability
properties:
● Kalman and Kalman-like observers for state-affine systems, based on persistence of excitation of the observability
Gramian;
● High-gain observers and differentiators, based on differential observability assumptions;
● Kazantzis/Kravaris-Luenberger observers, based on backward distinguishability conditions.
We will show that each class of observer relies on transforming the plant's dynamics into a particular normal form which allows
the design of an observer. We will explain how each observability condition guarantees the invertibility of its associated
transformation and the convergence of the observer. The most important and informative proofs will be detailed, and the
advantages/drawbacks of each design discussed.

At the end of the course, we will study some implementation issues and
open problems.
For instance, the case of time discretization of the output will be
considered. Another issue related to the left inversion problem in
observers will also be discussed.
Finally, we show how an estimate given by the observer may be used in
combination with a stabilizing state feedback in order to guarantee
asymptotic stabilization of the origin by means of output feedback.

Throughout the course, the various concepts encountered will be


illustrated with examples and followed by homework assignments
designed to enhance their understanding.
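
To make the high-gain construction above concrete, here is a minimal simulation sketch (illustrative plant, input and gains) of a high-gain observer estimating the velocity of a double integrator from position measurements only.

import numpy as np

dt, steps, eps = 1e-3, 5000, 0.05             # eps is the high-gain parameter
x = np.array([1.0, 0.0])                      # true state: position and velocity
xh = np.zeros(2)                              # observer estimate

for k in range(steps):
    u = np.sin(k * dt)                        # known input
    y = x[0]                                  # measured output: position only
    e = y - xh[0]
    x = x + dt * np.array([x[1], u])                                   # plant
    xh = xh + dt * (np.array([xh[1], u])
                    + np.array([2.0 / eps, 1.0 / eps ** 2]) * e)       # observer

print("true state:", x)
print("estimate  :", xh)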

M13 DELFT
Formal Methods for Multi-Agent Feedback Control Systems
02/06/2025 - 06/06/2025

Lars Lindemann Dimos V. Dimarogonas


University of Southern California KTH Royal Institute of Technology
https://sites.google.com/view/larslindemann/main-page https://people.kth.se/~dimos/
Abstract of the course
Multi-agent control systems are found in manufacturing, transportation, and multi-agent robotics, e.g., drone fleets
for surveillance. Such systems are often safety-critical, e.g., avoiding collisions with other drones, while they should
accomplish sophisticated system specifications, e.g., surveilling different areas in certain time intervals. The formal
methods community has proposed spatiotemporal logics by extending Boolean logic with temporal modalities to
express such spatial and temporal system requirements. Over the past decade, a new community has formed that
works at the intersection of formal methods and control theory to design control algorithms to satisfy
spatiotemporal logic specifications. Arguably, the biggest challenge in formal methods for multi-agent control is of a
computational nature, as existing techniques are not scalable. This course introduces scalable feedback control
design techniques to tackle these computational bottlenecks. Our first goal is to provide an introduction to signal
temporal logic (STL), and to discuss how feedback control laws, based on control barrier functions and funnel
control, can be designed to satisfy a global (i.e., collaborative) multi-agent STL specification. We then discuss how
decentralized control laws can be derived so that each agent can calculate its control input locally, and how the case
of local (i.e., individual) and potentially adversarial specifications is dealt with. This course is based on the book:
https://mitpress.mit.edu/9780262049719/formal-methods-for-multi-agent-feedback-control-systems/

Course outline
Part I: Preliminaries
• Introduction to signal temporal logic (STL)
• Control barrier functions and funnel control
Part II: Feedback control for spatiotemporal logics
• Encoding STL specifications into control barrier functions
• Encoding STL specifications into funnels
Part III: Decentralized control
• Decentralized control under global specifications
• Decentralized control under individual agent specifications
Part IV: Miscellaneous
• Timed automata for planning under STL specifications
• Applications and future research directions
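
As a small illustration of the quantitative (robustness) semantics of STL used throughout the course (a sketch on a sampled scalar signal; the formula and signal are illustrative):

import numpy as np

t = np.linspace(0.0, 10.0, 1001)
x = 2.0 * np.exp(-0.5 * t)                    # sampled scalar trajectory

rho_pred = 1.0 - x                            # robustness of the predicate "x < 1"
rho_eventually = np.max(rho_pred[t <= 4.0])   # "eventually within [0, 4], x < 1"
rho_always = np.min(rho_pred[t >= 4.0])       # "always on [4, 10], x < 1"

print("eventually robustness:", round(rho_eventually, 3))   # > 0 means satisfied
print("always robustness    :", round(rho_always, 3))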

M14 LOUVAIN-LA-NEUVE
Hybrid Control Systems
02/06/2025 - 06/06/2025

Ricardo G. Sanfelice


Department of Electrical and Computer Engineering
University of California, Santa Cruz, USA
https://hybrid.soe.ucsc.edu
[email protected]

Course Overview:
Hybrid dynamical systems, when broadly understood, encompass dynamical systems where states or dynamics can change continuously as well as instantaneously. Hybrid control systems arise when hybrid control algorithms (algorithms which involve logic, timers, clocks, and other digital devices) are applied to classical dynamical systems or to systems that are themselves hybrid. Hybrid control may be used for improved performance and robustness properties compared to classical control, and hybrid dynamics may be unavoidable due to the interplay between digital and analog components of a system.

The course has two main parts. The first part presents various modeling approaches to hybrid dynamics, focuses on a particular framework which combines differential equations with difference equations (or inclusions), and presents key analysis tools. The ideas are illustrated in several applications. The second part presents control design methods for such a rich class of hybrid dynamical systems, such as supervisory control, CLF-based control, invariance-based control, and passivity. A particular goal of the course is to reveal the key steps in carrying over such methodologies to the hybrid dynamics setting. Each proposed module/lecture is designed to present key theoretical concepts as well as applications of hybrid control of current relevance.
Course Outline:
• Part 1: Introduction, examples, and modeling.
- Theoretical topics: hybrid inclusions; solution concept, existence, and
uniqueness.
- Applications: hybrid automata, networked systems, and cyber-physical systems.
• Part 2: Dynamical properties.
- Theoretical topics: continuous dependence of solutions, Lyapunov stability notion and sufficient conditions, invariance principles, and converse theorems.
- Applications: synchronization of timers and state estimation over a network.
References available at https://hybrid.soe.ucsc.edu/biblio and in the 2021 Princeton University Press book "Hybrid Feedback Control".
• Part 3: Supervisory control, uniting control, throw-catch, and event-
triggered control.
- Theoretical topics: logic-based switching, uniting control, throw-and-catch
control, supervisory control, and event-triggered control.
- Applications: aggressive control for aerial vehicles, control of the
pendubot, obstacle avoidance, control of robotic manipulators.
• Part 4: Synergistic control, CLF-based control, invariance-based control,
passivity-based control, and hybrid model predictive control
- Theoretical topics: synergistic control, control Lyapunov functions,
stabilizability, Sontag-like universal formula for hybrid systems, selection
theorems, invariance and invariance-based control, passivity-based control,
and hybrid model predictive control.
- Applications: control for DC/DC conversion and for mechanical systems
with impacts.
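
The bouncing ball is a convenient warm-up for the flow/jump modelling framework described above; the sketch below (illustrative parameters) simulates its flow map (free fall) and jump map (velocity reset at impact).

import numpy as np

dt, g, lam = 1e-4, 9.81, 0.8                  # restitution coefficient lam in (0, 1)
h, v = 1.0, 0.0                               # height and vertical velocity
t, bounces = 0.0, 0

while t < 3.0:
    h += dt * v                               # flow map: free fall
    v += dt * (-g)
    if h <= 0.0 and v <= 0.0:                 # jump map: impact with the ground
        h = 0.0
        v = -lam * v
        bounces += 1
    t += dt

print(f"after 3 s: height = {h:.3f} m, bounces = {bounces}")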

M15 ROME
Dynamic Control Allocation
16/06/2025 - 20/06/2025

Andrea Serrani Mario Sassano Sergio Galeani


The Ohio State University University of Rome, Tor Vergata University of Rome, Tor Vergata
[email protected] [email protected] [email protected]

Abstract

Several modern control applications involve a large number of actuators and sensors, typically chosen with the objective of ensuring a certain level of redundancy and reliability. Such a redundancy is a challenge as well as a valuable opportunity for the designer. The aim of this course is to introduce the student to the framework of dynamic control allocation, which constitutes an effective strategy for addressing and tackling such scenarios. In particular, the topics discussed in the course range from the analysis of allocated control schemes (with points of view borrowed from different contexts, such as geometric tools or frequency-domain approaches) to detailed synthesis algorithms (based on advanced hybrid and optimization techniques, or viable also in the presence of uncertain systems). Moreover, links to intimately connected topics, such as optimal control, anti-windup control, and the output regulation problem, are introduced and further explored.

The lectures will take place in Villa Mondragone (Monte Porzio Catone), a beautiful 1573 villa just outside Rome, which is the congress and event center of Tor Vergata University.
Outline

I. Introduction to Dynamic Control Allocation


1. Motivating examples
2. Objectives and main assumptions
3. Characterization of input redundancy

II. Dynamic Control Allocation Framework

4. Main control scheme
5. Complementarity with anti-windup techniques
6. Comparison with static control allocation

III. Structural Insights on Allocated Control Schemes


7. The geometric control point of view
8. The frequency-domain approach
9. Connections with optimal control and the output regulation problems
10. Revisiting the key building blocks: annihilators, optimizers and steady-
state generators

IV. Advanced Topics


11. MPC-based strategies and reference governor
12. Hybrid control allocation for output regulation
13. Data-driven control allocation
14. Extensions to nonlinear systems

M16 OXFORD
The Scenario Approach: Data Science for Systems, Control and Machine Learning
16/06/2025 - 20/06/2025

Marco C. Campi Simone Garatti


Department of Information Engineering Dip. di Elettronica, Informazione e Bioingegneria
University of Brescia, Italy Politecnico di Milano, Italy
https://marco-campi.unibs.it/ https://garatti.faculty.polimi.it/
[email protected] [email protected]

Abstract of the course

Data are ubiquitous in today's science and engineering. In this course,


we introduce the “scenario approach”, which is a general methodology for
data-driven decision making, and discuss its application to various fields
(including machine learning, data-driven system design and control). We
also present the most recent developments of its powerful generalization
theory, which allows the user to accurately evaluate the out-of-sample
robustness and keep control on the risk associated with the data-driven
solution.

A gradual presentation of all the practical and theoretical aspects will allow
for an easy comprehension of the material, while virtually no prior
knowledge is required to follow the course.

Topics: - Scenario Approach


- Data-driven design
- Risk evaluation
- Application to systems, control and supervised learning
- Presentation of open problems that offer an opportunity for research
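
As a toy illustration of a scenario program (a sketch with illustrative data, not course material): the smallest interval containing N sampled scenarios of an uncertain quantity is a data-driven convex design, and the generalization theory presented in the course quantifies the probability that a new, unseen scenario falls outside it.

import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
scenarios = rng.normal(loc=1.0, scale=0.5, size=200)   # N = 200 sampled scenarios

c = cp.Variable()                  # interval centre
r = cp.Variable(nonneg=True)       # interval half-width
constraints = [cp.abs(scenarios - c) <= r]             # one constraint per scenario
cp.Problem(cp.Minimize(r), constraints).solve()

print(f"scenario interval: [{c.value - r.value:.3f}, {c.value + r.value:.3f}]")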

M17 PARIS-SACLAY
Introduction to Nonlinear Systems and Control
23/06/2025 - 27/06/2025

Abstract of the course


This is a first course in nonlinear control with the target
audience being engineers from multiple disciplines
(electrical, mechanical, aerospace, chemical, etc.) and
applied mathematicians.
The course is suitable for practicing engineers or graduate
students who did not take such an introductory course in their
programs.
Prerequisites: Undergraduate-level knowledge of
Hassan Khalil differential equations and control systems.
Dept. Electrical & Computer
Engineering
Michigan State University, USA
The course is designed around the text book:
http://www.egr.msu.edu/~khalil/ H.K. Khalil, Nonlinear Control, Pearson Education, 2015
[email protected]

Outline
1. Introduction and second-order systems (phase portraits; multiple equilibrium
points; limit cycles)
2. Stability of equilibrium points (basic concepts; linearization; Lyapunov's method;
the invariance principle; region of attraction; time-varying systems)
3. Perturbed systems; ultimate boundedness; input-to-state stability
4. Passivity and input-output stability
5. Stability of feedback systems (passivity theorems; the small-gain theorem; Circle &
Popov criteria)
6. Normal and controller forms
7. Stabilization (concepts; linearization; feedback linearization; backstepping;
passivity-based control)
8. Robust stabilization (sliding mode control; Lyapunov redesign)
9. Observers (observers with linear-error dynamics; Extended Kalman Filter, high-gain
observers, sliding mode observers)
10. Output feedback stabilization (linearization; passivity-based control; observer-
based control; robust stabilization, extended high-gain observer as disturbance
estimator)
11. Tracking & regulation (feedback linearization; sliding mode control; integral control)
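
As a small companion to item 1 of the outline (an illustrative sketch, not course material), the Van der Pol oscillator is a standard second-order example whose trajectories converge to a limit cycle:

import numpy as np
from scipy.integrate import solve_ivp

def vdp(t, z, mu=1.0):
    x, y = z
    return [y, mu * (1.0 - x ** 2) * y - x]   # Van der Pol equations

sol = solve_ivp(vdp, (0.0, 30.0), [0.1, 0.0], max_step=0.01)
amplitude = np.max(np.abs(sol.y[0][sol.t > 20.0]))   # after transients have died out
print("approximate limit-cycle amplitude:", round(amplitude, 2))   # close to 2 for mu = 1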

M18 DUBROVNIK
Control and Machine Learning
30/06/2025 - 04/07/2025

Enrique Zuazua Martin Lazar


Friedrich-Alexander-Universität University of Dubrovnik
[email protected] https://www.martin-lazar.from.hr

Summary of the course


Control is a classical field at the intersection of Applied Mathematics and Engineering, arising in most applications to other sciences, industry and new technologies. Nowadays the field of Control experiences a revival due to its strong links with the broad and dynamic field of Machine Learning (ML). On the one hand, classical mathematical and computational methods developed in Control are complemented with new techniques emanating from ML, thus improving their performance. On the other hand, the sometimes amazing efficiency of the computational methods developed in ML, e.g. in Supervised and Reinforcement Learning, is not yet well understood analytically, and the knowledge accumulated over decades in the area of Control provides powerful tools to gain such understanding. This course aims to introduce some of the fundamental tools in control theory and machine learning and their computational counterparts, showing how they can be combined and employed to address applications efficiently, in a holistic manner, interrogating the know-how in each of these areas.

Outline
• Historical preliminaries
• Control of linear finite-dimensional systems
• Control of parameter dependent problems
• Neural ODEs
• Control formulation of supervised learning
• The universal approximation theorems
• Simultaneous controllability of neural differential equations
• Width versus depth
• Introduction to unsupervised learning
• Introduction to federated learning
• ML in control of parameter dependent systems
• Turnpike, control, and ML
• Introduction to Physics-Informed Neural Networks (PINNs)
• Solving differential equations by PINNs.

Theoretical presentations will be combined with practical numerical exercises.
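
As a minimal sketch of the PINN idea in the last two outline items (a toy problem with illustrative settings, not course material): a small network u(t) is trained so that both the ODE residual u' + u and the initial condition u(0) = 1 are driven to zero, approximating the solution of u' = -u.

import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

for it in range(2000):
    t = torch.rand(64, 1, requires_grad=True)            # collocation points in [0, 1]
    u = net(t)
    du = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    residual = du + u                                     # residual of u' = -u
    ic = net(torch.zeros(1, 1)) - 1.0                     # initial condition u(0) = 1
    loss = (residual ** 2).mean() + (ic ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print("PINN value u(1):", float(net(torch.ones(1, 1))))   # exact solution: exp(-1) ~ 0.368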



M19 MILAN
Deep Learning for System Identification
30/06/2025 - 04/07/2025

Dario Piga Marco Forgione


Dalle Molle Institute for Artificial Intelligence, Dalle Molle Institute for Artificial Intelligence,
SUPSI, Lugano, Switzerland SUPSI, Lugano, Switzerland
https://leon.idsia.ch/people https://www.marcoforgione.it/

Summary of the course


In recent years, deep learning has advanced at a tremendous pace and is now the core
methodology behind cutting-edge technologies such as image classification and captioning,
autonomous driving, natural language processing and generation. One exciting and challenging
application field for deep learning is the learning of dynamical systems, also known as system
identification. In this field, tailor-made model architectures and fitting criteria should be designed
to: retain structural physical knowledge when available; introduce regularization to avoid
overfitting or enforce known relationships among variables; and optimize training efficiency by
leveraging parallelization as much as possible. The objective of this course is to introduce deep
learning concepts and recently developed tools for system identification. The course combines
theoretical lectures and hands-on practical sessions in PyTorch.

Outline
1. Introduction to system identification and deep learning
2. Feedforward and recurrent neural networks for system identification
3. Numerical optimization algorithms for training neural networks
4. Neural state space models
5. Integrating system theory in deep learning
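
As a minimal sketch of the neural state-space idea in item 4 (illustrative data, model and training settings, not course material): a small network f is fitted so that the one-step-ahead prediction x_{k+1} ~ f(x_k, u_k) matches data generated by an "unknown" nonlinear system.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Identification data from an "unknown" nonlinear system (illustrative).
N = 2000
u = torch.randn(N, 1)
x = torch.zeros(N + 1, 1)
for k in range(N):
    x[k + 1] = 0.8 * x[k] + 0.5 * torch.tanh(x[k]) * u[k] + 0.1 * u[k]

f = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))   # x_{k+1} = f(x_k, u_k)
opt = torch.optim.Adam(f.parameters(), lr=1e-2)

inputs = torch.cat([x[:N], u], dim=1)
targets = x[1:]
for epoch in range(500):
    pred = f(inputs)
    loss = ((pred - targets) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print("one-step-ahead training MSE:", float(loss))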

Target audience
Students, researchers, and practitioners who want to understand how complex system
identification problems can be formulated and solved using modern deep learning techniques.

Basic knowledge of Python can be beneficial to better follow hands-on practical sessions.
