A piecewise conservative method for unconstrained convex optimization

Title: A piecewise conservative method for unconstrained convex optimization
Publication Type: Journal Article
Year of Publication: 2022
Authors: Scagliotti, A., Colli Franzone, P.
Volume: 81
Issue: 1
Pagination: 251–288
Date Published: 2022/01/01
ISSN: 1573-2894
Abstract

We consider a continuous-time optimization method based on a dynamical system in which a massive particle, starting at rest, moves in the conservative force field generated by the objective function, without any friction. We formulate a restart criterion based on the mean dissipation of the kinetic energy, and we prove a global convergence result for strongly convex functions. Using the symplectic Euler discretization scheme, we obtain an iterative optimization algorithm. We consider a discrete mean-dissipation restart scheme, and we also introduce a new restart procedure that ensures at each iteration a decrease of the objective function greater than the one achieved by a step of the classical gradient method. For the discrete conservative algorithm, this last restart criterion guarantees a qualitative convergence result. We apply the same restart scheme to the Nesterov Accelerated Gradient method (NAG-C), and we use this restarted NAG-C as a benchmark in the numerical experiments. In the smooth convex problems considered, our method shows a faster convergence rate than the restarted NAG-C. We also propose an extension of our discrete conservative algorithm to composite optimization: in numerical tests involving non-strongly convex functions with $\ell^1$-regularization, it outperforms the well-known Fast Iterative Shrinkage-Thresholding Algorithm (FISTA) accelerated with an adaptive restart scheme.
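The abstract describes the discrete algorithm only in words. The following minimal Python sketch illustrates the general idea: a symplectic Euler step for the frictionless dynamics x'' = -∇f(x), paired with a restart rule that falls back when the conservative step does not decrease the objective at least as much as a classical gradient step. The step sizes `h` and `lr`, the fallback action (resetting the velocity and taking a gradient step), and all function names are assumptions made for illustration, not the authors' exact algorithm.

```python
import numpy as np

def conservative_method(f, grad_f, x0, h=0.1, lr=0.1, max_iter=1000, tol=1e-8):
    """Sketch of a symplectic-Euler conservative optimizer with a
    gradient-decrease restart rule (details are assumptions, not the
    paper's exact scheme)."""
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)                      # particle starts at rest
    for _ in range(max_iter):
        g = grad_f(x)
        if np.linalg.norm(g) < tol:
            break
        # Symplectic Euler step for the frictionless dynamics x'' = -grad f(x)
        v_new = v - h * g
        x_new = x + h * v_new
        # Decrease achieved by one step of the classical gradient method
        x_grad = x - lr * g
        if f(x) - f(x_new) >= f(x) - f(x_grad):
            x, v = x_new, v_new               # conservative step accepted
        else:
            x, v = x_grad, np.zeros_like(x)   # restart from rest with a gradient step
    return x

# Usage on a simple strongly convex quadratic
if __name__ == "__main__":
    A = np.diag([1.0, 10.0])
    f = lambda x: 0.5 * x @ A @ x
    grad_f = lambda x: A @ x
    x_star = conservative_method(f, grad_f, x0=np.array([3.0, -2.0]))
    print(x_star)
```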

URL: https://doi.org/10.1007/s10589-021-00332-0
Journal: Computational Optimization and Applications
