# Pontryagin maximum principle for optimal nonpermanent control problems on time scales

Loïc Bourdin, University of Limoges (France)

The Pontryagin Maximum Principle (PMP for short) is a fundamental result of optimal control theory. In its classical statement, the control of the dynamical system is assumed to be permanent, in the sense that its value may be modified at any real time. As a consequence, in many problems, achieving the optimal trajectory requires a permanent modification of the value of the control. However, such a requirement is not feasible in many practical situations, neither for human operators nor for mechanical or numerical devices. For this reason, piecewise constant controls (called sampled-data controls), whose number of authorized modifications is finite, are widely used in automatic control and engineering. Sampled-data controls constitute a first example of nonpermanent controls. Another example concerns dynamical systems whose trajectories cross no-control areas (such as a mobile phone or a GPS device passing through a tunnel). In order to encompass these various situations of nonpermanent control, we will use time scale calculus. Moreover, we will see that this mathematical tool allows us to deal simultaneously with continuous and discrete dynamics.
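As a brief reminder of why time scale calculus unifies the continuous and discrete settings (standard background, not specific to [1]): on a time scale $\mathbb{T}$ with forward jump operator $\sigma(t) = \inf\{s \in \mathbb{T} : s > t\}$, the $\Delta$-derivative is defined by

```latex
f^{\Delta}(t) \;=\; \lim_{\substack{s \to t \\ s \in \mathbb{T},\, s \neq \sigma(t)}} \frac{f(\sigma(t)) - f(s)}{\sigma(t) - s}.
% On T = R:   sigma(t) = t,     and f^Delta = f'  (classical derivative).
% On T = hZ:  sigma(t) = t + h, and f^Delta(t) = (f(t+h) - f(t))/h  (forward difference).
```

so a single dynamical equation $x^{\Delta}(t) = f(x(t), u(t))$ covers both ordinary differential equations and difference equations as particular cases.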

In this talk, we will present a new version of the PMP, recently obtained in [1], that can handle optimal nonpermanent control problems on time scales. Numerous properties are well known in the literature for optimal permanent controls (such as the continuity of the corresponding Hamiltonian function, or the saturation of the control constraint set in the case of an affine Hamiltonian function). We will discuss whether these properties are preserved when nonpermanent controls are considered. In the linear-quadratic setting (see [2]), we will show that this new version of the PMP allows us to prove the convergence of the optimal sampled-data controls to the optimal permanent control when the distances between consecutive sampling times converge uniformly to zero. We will also show that this new PMP allows us to express the optimal sampled-data control explicitly as a function of the state (closed-loop control). Let us mention that this last result has already been obtained in the literature from a dynamic programming approach. Hence our work completes the Riccati theory for linear-quadratic problems with sampled-data controls.
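The convergence phenomenon can be observed numerically on a toy problem. The sketch below is ours, not taken from [2]: we take a hypothetical scalar linear-quadratic problem (minimize $\int_0^1 (x^2 + u^2)\,dt$ subject to $\dot{x} = x + u$, $x(0) = 1$), restrict $u$ to be constant on each of $N$ uniform sampling intervals, and solve the resulting finite-dimensional quadratic problem exactly on a fine Euler grid. Since the feasible sets are nested as $N$ increases, the optimal sampled-data cost decreases toward the permanent-control cost.

```python
import numpy as np

def sampled_lq_cost(N, M=1024, a=1.0, b=1.0, x0=1.0, T=1.0):
    """Optimal cost of a toy scalar LQ problem (illustrative example,
    not from the references) when the control is piecewise constant
    on N uniform sampling intervals.

    Dynamics xdot = a*x + b*u, cost integral of x^2 + u^2 over [0, T],
    discretized by explicit Euler on an M-point grid (N must divide M).
    """
    h = T / M
    A, B = 1.0 + h * a, h * b
    # Euler trajectory: x_i = A**i * x0 + sum_{j<i} A**(i-1-j) * B * u_j
    powers = A ** np.arange(M + 1)
    c = powers * x0                          # free response, length M+1
    G = np.zeros((M + 1, M))                 # forced-response matrix
    for i in range(1, M + 1):
        G[i, :i] = powers[:i][::-1] * B      # entries A**(i-1-j) * B
    # Expand the N sampled values to the fine grid (u constant per block).
    S = np.kron(np.eye(N), np.ones((M // N, 1)))
    GS = G @ S
    # Minimize h * (||c + GS u||^2 + ||S u||^2): normal equations in u.
    H = GS.T @ GS + S.T @ S
    u = np.linalg.solve(H, -GS.T @ c)
    x = c + GS @ u
    return h * (x @ x + (S @ u) @ (S @ u))
```

For example, `sampled_lq_cost(4) >= sampled_lq_cost(16) >= sampled_lq_cost(64)`, with the gaps shrinking as the sampling intervals refine, consistent with the convergence result of [2].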

We will close the discussion with a recent work [3] which focuses on optimal sampled-data control problems but with free sampling times. In this case the sampling times become parameters to be optimized as well. We will see that the corresponding necessary optimality condition coincides with the continuity of the Hamiltonian function.
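Schematically, and with notation that is ours rather than that of [3]: writing $H(x,p,u)$ for the Hamiltonian, $u_i$ for the control value on the sampling interval $[t_i, t_{i+1})$, and $p$ for the costate, the additional necessary condition at an optimal interior sampling time $t_i$ takes the form

```latex
H\bigl(x(t_i),\, p(t_i),\, u_{i-1}\bigr) \;=\; H\bigl(x(t_i),\, p(t_i),\, u_i\bigr),
```

i.e. the two one-sided values of $t \mapsto H(x(t), p(t), u(t))$ agree at $t_i$, which is exactly the continuity of the Hamiltonian function at the sampling times.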

### References

[1] L. Bourdin and E. Trélat. Optimal sampled-data control, and generalizations on time scales. *Mathematical Control and Related Fields*, 6(1):53–94, 2016.

[2] L. Bourdin and E. Trélat. Linear-quadratic optimal sampled-data control problems: convergence and Riccati theory. *Automatica*, 79:273–281, 2017.

[3] L. Bourdin and G. Dhar. Continuity/constancy of the Hamiltonian function in a Pontryagin maximum principle for optimal sampled-data control problems with free sampling times. *Submitted*, 2018.