Here you will find the complete list of guides and tutorials developed by the DyCon ERC Project's research team and visitors. All of the content has been classified according to the project's Work Packages.

In this work we address the optimal control of parameter-dependent systems. In particular, we study the dynamics and averaged controllability properties of heat equations with random non-negative diffusivities.
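
As an illustration of the averaged dynamics studied in that work, the sketch below solves a 1D heat equation spectrally for sampled non-negative diffusivities and forms the Monte Carlo average of the state. The uniform distribution of the diffusivity, the initial datum, and all discretization sizes are illustrative choices, not the tutorial's actual setup.

```python
import numpy as np

# Averaged dynamics of u_t = alpha * u_xx on (0,1) with Dirichlet BCs,
# where alpha >= 0 is a random diffusivity (distribution is an illustrative choice).
rng = np.random.default_rng(0)
nx, n_modes, n_samples = 128, 32, 500
x = np.linspace(0, 1, nx)
k = np.arange(1, n_modes + 1)
phi = np.sin(np.pi * np.outer(k, x))   # Dirichlet eigenfunctions sin(k pi x)

u0 = np.sin(np.pi * x)                 # initial datum = first eigenfunction
c0 = np.zeros(n_modes)
c0[0] = 1.0                            # its sine-series coefficients

def solve(alpha, t):
    # Each Fourier mode decays like exp(-alpha (k pi)^2 t)
    return (c0 * np.exp(-alpha * (np.pi * k) ** 2 * t)) @ phi

# Monte Carlo approximation of the averaged state E[u(., t)]
alphas = rng.uniform(0.5, 1.5, n_samples)
t = 0.1
u_avg = np.mean([solve(a, t) for a in alphas], axis=0)
print(np.max(u_avg))
```

Because the initial datum is a single eigenfunction, the averaged state here is just that eigenfunction scaled by the expected decay factor, which makes the effect of averaging over the random diffusivity easy to see.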

Simultaneous control of parameter-depending systems using stochastic optimization algorithms

Formulation of the optimal control problem for rotor imbalance, with an explanation of the code used to solve the problem numerically.

In this short tutorial, we explain how to use Riccati’s theory to solve an LQ control problem with targets.
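
To give a concrete flavor of Riccati's theory in the LQ setting, the sketch below solves the algebraic Riccati equation for a hypothetical double-integrator system and builds the optimal feedback gain; the matrices are illustrative, not the tutorial's example, and steering toward a steady target amounts to applying the feedback to the deviation from it.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical double-integrator x' = A x + B u (illustrative system)
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)           # state cost weight
R = np.array([[1.0]])   # control cost weight

# Solve the algebraic Riccati equation A^T P + P A - P B R^{-1} B^T P + Q = 0
P = solve_continuous_are(A, B, Q, R)

# Optimal feedback gain: u = -K (x - x_target) for a steady target
K = np.linalg.solve(R, B.T @ P)
print(K)
```

The closed-loop matrix A - B K is then stable, so the state converges to the target; for a finite horizon one would instead integrate the differential Riccati equation backward in time.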

This tutorial is part of the series on control under state constraints. We present the main features of the controllability of bistable reaction-diffusion equations with heterogeneous drifts.

This tutorial is part of the series on control under state constraints. We simulate different control strategies toward the same target by minimizing different functionals.

In this tutorial we study the inverse design problem for time-evolution Hamilton-Jacobi equations. More precisely, for a given observation of the viscosity solution at time $T>0$, we construct all the possible initial data that could have led the solution to the observed state. We note that these initial data are not in general unique.

A short Python implementation of POD and DMD for a 2D Burgers equation using FEniCS and SciPy.
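
The core of POD is independent of the PDE solver: stack solution snapshots as columns of a matrix, take a thin SVD, and keep the leading left singular vectors as modes. The sketch below uses synthetic rank-two snapshot data in place of Burgers solutions (so it needs only NumPy, not FEniCS); the energy threshold is an illustrative choice.

```python
import numpy as np

# Snapshot matrix: columns are solution snapshots in time.
# Here synthetic data stands in for the Burgers snapshots of the tutorial.
rng = np.random.default_rng(0)
nx, nt = 200, 50
x = np.linspace(0, 1, nx)
t = np.linspace(0, 1, nt)
snapshots = (np.outer(np.sin(np.pi * x), np.cos(2 * np.pi * t))
             + 0.1 * np.outer(np.sin(2 * np.pi * x), np.sin(4 * np.pi * t))
             + 1e-3 * rng.standard_normal((nx, nt)))

# POD: thin SVD of the snapshot matrix; left singular vectors are the modes
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)

# Truncate to r modes capturing 99.9% of the "energy" (squared singular values)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1
modes = U[:, :r]

# Reduced-order reconstruction and its relative error
recon = modes @ (modes.T @ snapshots)
err = np.linalg.norm(snapshots - recon) / np.linalg.norm(snapshots)
print(r, err)
```

With two dominant structures in the data, the energy criterion selects two modes and the projection error is at the level of the added noise; DMD starts from the same snapshot matrix but additionally fits a linear time-advance operator between consecutive snapshots.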

Numerical implementation of the moving control strategy for a two dimensional heat equation with memory

In this tutorial, we show the simulation of the fractional heat equation.
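
One simple way to simulate the spectral fractional heat equation in 1D is to expand the initial datum in the Dirichlet eigenbasis and let each mode decay with the fractional power of its eigenvalue. The sketch below does this on (0,1); the exponent s, the initial datum, and the truncation level are illustrative choices.

```python
import numpy as np

# Spectral sketch of u_t + (-Laplacian)^s u = 0 on (0,1),
# homogeneous Dirichlet BCs; s and the datum are illustrative choices.
s = 0.5
nx, n_modes = 256, 64
x = np.linspace(0, 1, nx)
dx = x[1] - x[0]
u0 = x * (1 - x)   # initial datum, vanishing at the boundary

# Sine-series coefficients of u0 (simple quadrature on the grid)
k = np.arange(1, n_modes + 1)
phi = np.sin(np.pi * np.outer(k, x))         # eigenfunctions sin(k pi x)
coeffs = 2 * (phi * u0).sum(axis=1) * dx     # approximate <u0, phi_k>

def solve(t):
    # Each mode decays like exp(-((k pi)^2)^s t) under the spectral fractional Laplacian
    decay = np.exp(-((np.pi * k) ** (2 * s)) * t)
    return (coeffs * decay) @ phi

u_half = solve(0.5)
print(np.max(np.abs(u_half)))
```

For s = 1 this recovers the classical heat semigroup; smaller s slows the decay of the high frequencies relative to the low ones.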

In this DyCon Toolbox tutorial, we present how to use the OptimalControl environment to control a consensus model describing complex emergent dynamics over a given network.

In this tutorial we apply the DyCon Toolbox to find a control for the semi-discrete semilinear heat equation.

The Multilevel Selective Harmonic Modulation problem is recast from the perspective of optimal control.

One of the most successful mathematical theories of recent years connects Deep Learning with dynamical systems. The aim is to open the black box of Deep Learning and gain an intuition of what these models are doing.
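
The core observation of that connection is that a residual layer x_{k+1} = x_k + h f(x_k, theta_k) is exactly a forward-Euler step of the controlled ODE x'(t) = f(x(t), theta(t)), with depth playing the role of time. A minimal sketch, with random (untrained) weights purely for illustration:

```python
import numpy as np

# A residual network read as forward Euler for x'(t) = f(x(t), theta(t)):
# each layer performs x <- x + h * f(x, theta_k). Weights are random, illustrative only.
rng = np.random.default_rng(1)
dim, depth, h = 4, 10, 0.1

weights = [rng.standard_normal((dim, dim)) / np.sqrt(dim) for _ in range(depth)]
biases = [0.1 * rng.standard_normal(dim) for _ in range(depth)]

def resnet(x):
    # Each layer = one Euler step of the dynamics, step size h, vector field tanh(Wx + b)
    for W, b in zip(weights, biases):
        x = x + h * np.tanh(W @ x + b)
    return x

x0 = np.ones(dim)
print(resnet(x0))
```

In this reading, training the network amounts to an optimal control problem: choose the time-dependent parameters theta(t) so that the flow maps the data to the desired outputs.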

In this tutorial we show how to implement the Q-learning algorithm in simple settings where the state space and the control space are finite. In this case, the Q-function can be represented by a table and therefore belongs to a finite-dimensional vector space. We illustrate the algorithm by solving the problem of finding the shortest path between an arbitrary point in a discretized domain and a target area. We then consider the same problem in a domain affected by a potential.
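
The tabular setting described above can be sketched in a few lines: on a small grid, a cost of -1 per move makes the greedy policy of the learned Q-table follow a shortest path to the target. The grid size, rewards, and hyperparameters below are illustrative choices, not the tutorial's.

```python
import numpy as np

# Tabular Q-learning on an n x n grid: learn shortest paths to a target cell.
rng = np.random.default_rng(0)
n = 5
target = n * n - 1                               # bottom-right corner
actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]     # up, down, left, right

def step(s, a):
    i, j = divmod(s, n)
    di, dj = actions[a]
    i2 = min(max(i + di, 0), n - 1)              # clamp at the walls
    j2 = min(max(j + dj, 0), n - 1)
    s2 = i2 * n + j2
    reward = 0.0 if s2 == target else -1.0       # -1 per move favors short paths
    return s2, reward, s2 == target

Q = np.zeros((n * n, len(actions)))              # the Q-table
alpha, gamma, eps = 0.5, 0.95, 0.1

for episode in range(2000):
    s = int(rng.integers(n * n))                 # random start state
    for _ in range(100):
        # epsilon-greedy exploration
        a = int(rng.integers(4)) if rng.random() < eps else int(np.argmax(Q[s]))
        s2, r, done = step(s, a)
        # Q-learning update: bootstrap on the greedy value of the next state
        Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) * (not done) - Q[s, a])
        s = s2
        if done:
            break

# Follow the greedy policy from the opposite corner
s, path_len = 0, 0
while s != target and path_len < 50:
    s, _, _ = step(s, int(np.argmax(Q[s])))
    path_len += 1
print(path_len)
```

From the top-left corner the shortest path has 2(n-1) = 8 moves; after training, the greedy rollout reaches the target in that many steps. Adding a potential, as in the second part of the tutorial, only changes the reward returned by `step`.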

Tutorial on optimal control of the inverted pendulum using symbolic MATLAB.