In this area we consider the control problem in which a state-dependent (feedback) control is sought. This approach yields the solution of the control problem for every initial condition. Here we study the dynamic programming principle, the Hamilton-Jacobi-Bellman equation, and the Reinforcement Learning methodology.
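As a brief point of reference (generic symbols, not tied to any one tutorial below): for dynamics $\dot x(t) = f(x(t), u(t))$, running cost $\ell$, and terminal cost $g$, the value function $V$ of the finite-horizon problem satisfies the Hamilton-Jacobi-Bellman equation

```latex
-\partial_t V(x,t) = \min_{u}\,\bigl\{\, \ell(x,u) + \nabla_x V(x,t) \cdot f(x,u) \,\bigr\},
\qquad V(x,T) = g(x).
```

The minimizing $u$ at each $(x,t)$ yields the state-dependent feedback control.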

#### Available tutorials:

#### Q-learning for finite-dimensional problems

In this tutorial we show how to implement the Q-learning algorithm in simple settings where the state space and the control space are finite. In this case the Q-function can be represented by a table, and therefore it belongs to a finite-dimensional vector space. We illustrate the algorithm by solving the problem of finding the shortest path between an arbitrary point in a discretized domain and a target area. We then consider the same problem in a domain affected by a potential.
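As a minimal illustration of the tabular setting, the sketch below runs Q-learning on a small grid shortest-path problem (grid size, rewards, and hyperparameters are illustrative choices, not the tutorial's actual values):

```python
import random

N = 5                      # 5x5 grid, states indexed (row, col)
GOAL = (4, 4)              # target cell
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(s, a):
    """Apply action a in state s; stay in place at the boundary."""
    r = min(max(s[0] + a[0], 0), N - 1)
    c = min(max(s[1] + a[1], 0), N - 1)
    s2 = (r, c)
    # Reward: -1 per move, 0 on reaching the target (episode ends there).
    return s2, (0.0 if s2 == GOAL else -1.0), s2 == GOAL

# Q-table: one entry per (state, action) pair -- a finite-dimensional object
Q = {((r, c), i): 0.0 for r in range(N) for c in range(N)
     for i in range(len(ACTIONS))}

alpha, gamma, eps = 0.5, 0.95, 0.1
random.seed(0)
for episode in range(2000):
    s, done = (0, 0), False
    while not done:
        # epsilon-greedy action selection
        if random.random() < eps:
            i = random.randrange(len(ACTIONS))
        else:
            i = max(range(len(ACTIONS)), key=lambda j: Q[(s, j)])
        s2, rew, done = step(s, ACTIONS[i])
        best_next = max(Q[(s2, j)] for j in range(len(ACTIONS)))
        # Q-learning temporal-difference update
        Q[(s, i)] += alpha * (rew + gamma * best_next - Q[(s, i)])
        s = s2

# Greedy rollout from (0,0); the shortest path has length 8 (Manhattan distance)
s, path_len = (0, 0), 0
while s != GOAL and path_len < 50:
    i = max(range(len(ACTIONS)), key=lambda j: Q[(s, j)])
    s, _, _ = step(s, ACTIONS[i])
    path_len += 1
print(path_len)
```

The potential-affected variant of the tutorial would only change the reward function; the algorithm itself is untouched.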

#### Optimal control for inverted pendulum

Tutorial on the optimal control of an inverted pendulum using symbolic MATLAB.
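A language-agnostic sketch of the same design idea (in Python rather than MATLAB, with illustrative parameters `g`, `l`, `m` and weights `Q`, `R` that are assumptions, not the tutorial's values): linearize the pendulum about the upright position and compute the LQR feedback gain, here by integrating the Riccati ODE forward until it settles at the stabilizing solution of the algebraic Riccati equation:

```python
import numpy as np

g, l, m = 9.81, 1.0, 1.0
A = np.array([[0.0, 1.0],
              [g / l, 0.0]])       # linearization about the (unstable) upright state
B = np.array([[0.0],
              [1.0 / (m * l**2)]])
Q = np.eye(2)                      # state weight
R = np.array([[1.0]])              # control weight

# Integrate dP/dt = A'P + PA - P B R^{-1} B' P + Q forward from P = 0;
# under stabilizability this converges to the stabilizing ARE solution.
P = np.zeros((2, 2))
dt = 1e-3
for _ in range(20000):
    dP = A.T @ P + P @ A - P @ B @ np.linalg.inv(R) @ B.T @ P + Q
    P = P + dt * dP

K = np.linalg.inv(R) @ B.T @ P     # optimal feedback gain, u = -K x
Acl = A - B @ K                    # closed-loop dynamics
print(np.linalg.eigvals(Acl))      # both eigenvalues have negative real part
```

The symbolic MATLAB route of the tutorial instead manipulates the optimality conditions exactly; the numerical sketch above only illustrates the resulting feedback structure.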

#### Optimal control of a graph evolving in discrete time

Stabilizing a system evolving on a graph by minimizing a discrete-time LQR cost that drives it to a reference state.
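A hedged sketch of the discrete-time LQR machinery on a small example (a path graph with illustrative dynamics and weights, not the tutorial's actual model). Tracking a reference state reduces to stabilizing the tracking error, so the sketch stabilizes the origin:

```python
import numpy as np

n = 5
L = np.zeros((n, n))               # Laplacian of the path graph 1-2-...-5
for i in range(n - 1):
    L[i, i] += 1.0; L[i + 1, i + 1] += 1.0
    L[i, i + 1] -= 1.0; L[i + 1, i] -= 1.0

dt = 0.1
A = np.eye(n) + dt * L             # unstable discrete-time dynamics (illustrative)
B = np.zeros((n, 1)); B[0, 0] = 1.0  # actuate the first node only
Q, R = np.eye(n), np.array([[1.0]])

# Value iteration on the discrete Riccati equation until convergence
P = np.eye(n)
for _ in range(5000):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P_new = Q + A.T @ P @ (A - B @ K)
    if np.abs(P_new - P).max() < 1e-10:
        P = P_new
        break
    P = P_new

Acl = A - B @ K                    # closed loop with u_k = -K x_k
rho = max(abs(np.linalg.eigvals(Acl)))
print(rho)                         # spectral radius below 1: stabilized
```

Controlling a single node suffices here because the path graph is controllable from an end node; the tutorial's graph and actuation pattern may differ.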

#### LQR control of a fractional reaction diffusion equation

Design of an LQR controller for the stabilization of a fractional reaction-diffusion equation.
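Since the fractional Laplacian diagonalizes in the Dirichlet sine basis, a spectral view makes the design transparent. The sketch below uses illustrative parameters and assumes, unlike a realistic actuator, that every mode is directly controlled; it shows that only finitely many modes are unstable and gives the per-mode scalar LQR gains in closed form:

```python
import numpy as np

# Modal form of u_t = -(-Laplacian)^s u + c u + control on (0, pi) with
# Dirichlet conditions: each sine mode obeys  y_k' = (c - k^{2s}) y_k + v_k.
s, c = 0.4, 2.0                    # fractional order and reaction rate (illustrative)
q, r = 1.0, 1.0                    # per-mode LQR weights
modes = np.arange(1, 21)           # first 20 sine modes
a = c - modes.astype(float) ** (2 * s)   # modal growth rates; a > 0 means unstable

print("unstable modes:", modes[a > 0])

# Scalar ARE  2 a p - p^2 / r + q = 0  ->  stabilizing solution and gain
p = r * (a + np.sqrt(a**2 + q / r))
k = p / r                          # feedback v_k = -k_k y_k
closed = a - k                     # closed-loop rate = -sqrt(a^2 + q/r) < 0
print("max closed-loop rate:", closed.max())
```

The closed-loop rate is negative for every mode, including the stable ones, so the truncated modal controller stabilizes the discretization; the tutorial's actual design works with the spatially discretized operator instead.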