Let $f: \mathbb{R}^n \to \mathbb{R}$ be a smooth function and let $x_0$ be its unique minimizer.

A time-continuous optimization method is an ODE whose solution converges to $x_0$ as the time variable goes to infinity.
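As a minimal illustration (not part of the talk itself), the prototypical time-continuous method is the gradient flow $\dot{x}(t) = -\nabla f(x(t))$; the sketch below integrates it with forward Euler on an assumed quadratic objective, whose minimizer is known in closed form:

```python
import numpy as np

# Hypothetical example: gradient flow dx/dt = -grad f(x) for the
# quadratic f(x) = 0.5 x^T A x - b^T x, whose unique minimizer x_0
# solves A x = b. Forward Euler with step h approximates the flow.
A = np.array([[3.0, 1.0], [1.0, 2.0]])  # symmetric positive definite
b = np.array([1.0, -1.0])
x_star = np.linalg.solve(A, b)          # the minimizer x_0

def grad_f(x):
    return A @ x - b

x = np.zeros(2)
h = 0.05
for _ in range(2000):
    x = x - h * grad_f(x)               # Euler step along -grad f
```

For this well-conditioned quadratic the discrete trajectory contracts toward $x_0$ at every step, mirroring the convergence of the continuous flow as $t \to \infty$.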

In the first part of the talk I will describe a family of methods obtained by perturbing a second-order conservative ODE with a dissipation term.

I will show that the gradient method falls within this class.
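A standard instance of this construction (shown here only as an assumed illustration, not necessarily the family discussed in the talk) is the "heavy ball" system $\ddot{x} + \gamma\,\dot{x} + \nabla f(x) = 0$: for $\gamma = 0$ the ODE is conservative, and the friction term $\gamma\,\dot{x}$ dissipates energy so that trajectories settle at the minimizer. A semi-implicit Euler sketch:

```python
import numpy as np

# Hedged sketch of a dissipation-perturbed conservative ODE:
# the heavy-ball system  x'' + gamma * x' + grad f(x) = 0.
# gamma = 0 gives the conservative (energy-preserving) dynamics;
# gamma > 0 dissipates energy, driving x(t) toward the minimizer.
def heavy_ball(grad_f, x0, gamma, h, steps):
    x, v = x0.copy(), np.zeros_like(x0)
    for _ in range(steps):
        v = v + h * (-gamma * v - grad_f(x))  # damped velocity update
        x = x + h * v                          # position update
    return x

# Same assumed quadratic test problem: f(x) = 0.5 x^T A x - b^T x.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
x_star = np.linalg.solve(A, b)

x = heavy_ball(lambda x: A @ x - b, np.zeros(2),
               gamma=3.0, h=0.05, steps=4000)
```

In the strong-friction regime the velocity equilibrates quickly and the dynamics track the gradient flow, which is one common sense in which the gradient method arises from this class.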

In the second part I will introduce an accelerated gradient scheme based on an optimal control problem: the direction of the gradient is corrected so as to reach an $\varepsilon$-ball centered at $x_0$ in the least possible time (this method is called the Time-Optimal Gradient, TOG).
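One plausible formulation of this control problem, assuming the correction enters as a bounded additive control $u$ on the gradient dynamics (the bound $\delta$ and the exact constraint structure are assumptions, not taken from the talk), is:

```latex
% Hedged sketch of a time-optimal control formulation (assumed):
% u corrects the gradient direction, delta bounds its size.
\begin{aligned}
\min_{u(\cdot)}\ & T \\
\text{s.t. }\ & \dot{x}(t) = -\nabla f(x(t)) + u(t),
  \qquad \|u(t)\| \le \delta, \\
& \|x(T) - x_0\| \le \varepsilon .
\end{aligned}
```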

In the third part I will propose a "piecewise-conservative" optimization scheme, inspired by the study of some aspects of the TOG.