Plenary Talk 01
An optimal algorithm for constrained differentiable
convex optimization
Clovis Gonzaga
Abstract: We describe two algorithms for solving differentiable convex optimization problems constrained to simple sets in Rⁿ, i.e., sets on which it is
easy to project an arbitrary point. The algorithms are optimal in the sense
that they achieve an absolute precision of ε in relation to the optimal value
in O(1/√ε) iterations using only first-order information. This complexity
depends on a Lipschitz constant L for the function derivatives and on a
strong convexity constant µ ≥ 0. The first algorithm extends to the constrained case a well-known method devised by Nesterov for unconstrained
problems, but the complexity analysis follows a simpler geometric approach.
The second algorithm has several enhancements, including line searches to
improve the performance and a novel way of guessing the unknown values
of L and µ.
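
For orientation, here is a minimal sketch of an accelerated projected-gradient iteration of the general kind the abstract refers to (Nesterov-style momentum plus projection onto the simple set). It is an illustration under stated assumptions, not the speaker's algorithm; the callables grad_f and project and the constants L and mu are assumed given.

    import numpy as np

    def accelerated_projected_gradient(grad_f, project, x0, L, mu=0.0, iters=100):
        # Nesterov-style accelerated projected gradient (generic sketch).
        # grad_f  : gradient oracle for f
        # project : projection onto the simple feasible set
        # L, mu   : Lipschitz constant of grad_f, strong convexity constant
        x = y = np.asarray(x0, dtype=float)
        theta = 1.0
        for _ in range(iters):
            x_new = project(y - grad_f(y) / L)   # projected gradient step from y
            if mu > 0.0:
                # strongly convex case: constant momentum coefficient
                beta = (np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))
            else:
                # merely convex case: classical theta-sequence momentum
                theta_new = (1.0 + np.sqrt(1.0 + 4.0 * theta**2)) / 2.0
                beta = (theta - 1.0) / theta_new
                theta = theta_new
            y = x_new + beta * (x_new - x)       # extrapolation (momentum) step
            x = x_new
        return x

With mu > 0 this scheme attains the O(1/√ε) iteration bound mentioned above; the abstract's second algorithm additionally estimates L and mu on the fly.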
Plenary Talk 02
Some Remarks on Constrained Optimization
José M. Martı́nez
Abstract: Different features of the practical solution of nonlinear programming problems will be discussed. We focus on global optimization properties,
stopping criteria, optimality conditions and extensions of algorithms to some
nonsmooth problems by means of reformulations.
Plenary Talk 03
A numerical algorithm for finding solutions of
a generalized Nash Equilibrium Problem
Yuan J. Yun
Abstract: A family of nonempty closed convex sets is built by using the
data of the Generalized Nash equilibrium problem (GNEP). The sets are
selected iteratively such that the intersection of the selected sets contains
solutions of the GNEP. The algorithm introduced by Iusem-Sosa (2003) is
adapted to obtain solutions of the GNEP. Finally, some numerical experiments are given to illustrate the numerical behavior of the algorithm.
Mini-course 01
Golub-Kahan Iterations in Large-Scale
Inverse Problems
Fermin S.V. Bazán
Abstract: Inverse problems appear in many branches of the applied sciences. We present an overview of inverse problems and the basic ideas of regularization theory for discrete ill-posed problems, including modern methods for large-scale problems such as those arising in image restoration, super-resolution, and inverse scattering. A central part of the mini-course will be the description of methods based on Golub-Kahan iterations, as embedded in publicly available software packages, including RegularizationTools by P. C. Hansen and FP-tools developed by F. S. V. Bazán and collaborators.
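
As a companion to the abstract, a minimal sketch of the Golub-Kahan bidiagonalization iteration itself, the building block of LSQR-type regularization methods; this is an illustrative NumPy version, not the implementation in FP-tools or RegularizationTools.

    import numpy as np

    def golub_kahan(A, b, k):
        # k steps of Golub-Kahan bidiagonalization of A started from b.
        # Produces U, V with orthonormal columns (in exact arithmetic) and
        # the bidiagonal coefficients alphas, betas such that A V ≈ U B.
        beta = np.linalg.norm(b)
        u = b / beta
        v = A.T @ u
        alpha = np.linalg.norm(v)
        v = v / alpha
        U, V, alphas, betas = [u], [v], [alpha], [beta]
        for _ in range(k - 1):
            u = A @ v - alpha * u        # next left Lanczos vector
            beta = np.linalg.norm(u)
            u = u / beta
            v = A.T @ u - beta * v       # next right Lanczos vector
            alpha = np.linalg.norm(v)
            v = v / alpha
            U.append(u); V.append(v)
            alphas.append(alpha); betas.append(beta)
        return np.column_stack(U), np.column_stack(V), alphas, betas

Regularized solutions are then sought in the Krylov subspace spanned by the columns of V, which is what makes the approach attractive for large-scale problems.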
Plenary Talk 04
On diagonal subdifferential operators in
nonreflexive Banach spaces
Alfredo Iusem
Abstract: Consider a real-valued bifunction f defined on C × C, where C
is a closed and convex subset of a Banach space X, which is concave in its
first argument and convex in its second one. We study its subdifferential
with respect to the second argument, evaluated at pairs of the form (x, x),
and the subdifferential of −f with respect to its first argument, evaluated
at the same pairs. We prove that if f vanishes whenever both arguments
coincide, these operators are maximal monotone, under rather undemanding
continuity assumptions on f . We also establish similar results under related
assumptions on f , e.g. monotonicity and convexity in the second argument.
These results were known for the case in which the Banach space is reflexive
and C = X. Here we use a different approach, based upon a recently
established sufficient condition for maximal monotonicity of operators, in
order to cover the nonreflexive and constrained case (C ≠ X). Our results
have consequences in terms of the reformulation of equilibrium problems as
variational inequality ones.
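
In symbols, using f(x, x) = 0 on the diagonal, the two operators studied can be written as follows; this is a notational sketch consistent with the literature on diagonal subdifferentials, not necessarily the talk's exact notation:

    T(x) = \partial f(x, \cdot)(x)
         = \{ u \in X^* : f(x, y) \ge \langle u, y - x \rangle \ \ \forall y \in C \},
    \qquad
    S(x) = \partial \big( -f(\cdot, x) \big)(x), \qquad x \in C.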
Plenary Talk 05
Local Convergence of Levenberg-Marquardt
Methods
Andreas Fischer
Abstract: It has been known for a decade that Levenberg-Marquardt methods applied to problems with nonisolated solutions converge locally superlinearly under suitable conditions. These conditions basically require that the problem provides an error bound and is sufficiently smooth. To relax these conditions, constrained Levenberg-Marquardt methods can be employed. Their subproblems are strongly convex quadratic optimization problems with convex (usually linear) constraints. The talk will highlight some of the related results and proof techniques for inexact versions of Levenberg-Marquardt methods.
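
For orientation, a minimal sketch of a basic (unconstrained, exact) Levenberg-Marquardt iteration for F(x) = 0; the choice mu_k = ||F(x_k)||^2 is one standard regularization used in local analyses for nonisolated solutions. The callables F and J (the Jacobian) are assumed given; this is not the constrained or inexact variant discussed in the talk.

    import numpy as np

    def levenberg_marquardt(F, J, x0, iters=50, tol=1e-10):
        # Basic Levenberg-Marquardt iteration for F(x) = 0 (illustrative sketch).
        x = np.asarray(x0, dtype=float)
        for _ in range(iters):
            Fx, Jx = F(x), J(x)
            if np.linalg.norm(Fx) < tol:
                break
            mu = np.linalg.norm(Fx) ** 2    # LM parameter tied to the residual
            # Solve the regularized normal equations (J^T J + mu I) d = -J^T F
            d = np.linalg.solve(Jx.T @ Jx + mu * np.eye(x.size), -Jx.T @ Fx)
            x = x + d
        return x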
Plenary Talk 06
On the cycles in the method of periodic
projections
Patrick L. Combettes
Abstract: The method of periodic projections consists in iterating projections onto m closed convex subsets of a Hilbert space according to a periodic
sweeping strategy. In the presence of m ≥ 3 sets, a long-standing question
going back to the 1960s is whether the limit cycles obtained by such a process can be characterized as the minimizers of a certain functional. In this
paper we answer this question in the negative. Projection algorithms that
minimize smooth convex functions over a product of convex sets are also
discussed.
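
A minimal sketch of the method of periodic projections itself, assuming the m projection operators are available as callables:

    def periodic_projections(projections, x0, sweeps=100):
        # Iterate projections onto the m sets in a fixed cyclic (periodic) order.
        # When the sets do not all intersect, the iterates typically settle into
        # a limit cycle; characterizing those cycles is the question above.
        x = x0
        for _ in range(sweeps):
            for P in projections:   # one periodic sweep over the m sets
                x = P(x)
        return x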
Plenary Talk 07
On regularization methods of EM-Kaczmarz type
Antonio Leitão
Abstract: We consider regularization methods of Kaczmarz type in connection with the expectation-maximization (EM) algorithm for solving ill-posed
equations. For noisy data, our methods are stabilized extensions of the well-established ordered-subsets expectation-maximization iteration (OS-EM).
We show monotonicity properties of the methods and present a numerical
experiment which indicates that the extended OS-EM methods we propose
are much faster than the standard EM algorithm.
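
For context, a minimal sketch of the classical multiplicative EM iteration for Ax = b with nonnegative data; OS-EM applies the same update cyclically over subsets of the rows of A. This is the textbook iteration, not the stabilized extension proposed in the talk.

    import numpy as np

    def em_iteration(A, b, x0, iters=100):
        # Classical multiplicative EM iteration for Ax = b with A, b, x >= 0.
        x = np.asarray(x0, dtype=float)
        col_sums = A.sum(axis=0)                 # A^T 1, assumed positive
        for _ in range(iters):
            ratio = b / (A @ x)                  # elementwise data-fit ratio
            x = x * (A.T @ ratio) / col_sums     # multiplicative update
        return x

The ordered-subsets variant replaces each full sweep by several cheaper updates over row blocks, which is what makes it much faster in early iterations.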
Mini-course 02
Complexity in Convex Optimization
Clovis Gonzaga
Abstract: The course begins with an introduction to the concept of complexity, followed by some examples. We briefly examine the general convex feasibility problem and the center-of-mass method. We study the linear programming problem and describe how interior-point algorithms work. We then pass to nonlinear problems: minimization of a quadratic function using Krylov spaces and, finally, modern “optimal complexity” algorithms for convex optimization problems.
Above all, we explore the geometric aspects of all the methods, with as little algebraic manipulation as possible. We assume that students have basic knowledge of linear algebra and of calculus in Rⁿ.
Plenary Talk 08
Evolution of the Algencan Project:
Advances and Challenges
Ernesto Birgin
Abstract: Augmented Lagrangian methods are useful tools for solving large
scale constrained optimization problems. Recent theory ensures that some
PHR-like (Powell-Hestenes-Rockafellar) implementations enjoy global convergence to KKT points under quite weak constraint qualifications. Our
work is concentrated on a particular implementation called Algencan. In
this talk we review: global minimization issues, convergence to second-order
stationary points, special treatment for the greediness phenomenon, nonmonotone management of the penalty parameters and other practical issues. Exhaustive numerical experiments showing the effectiveness of the
Augmented Lagrangian approach will be presented.
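
As background, a minimal sketch of a generic PHR augmented Lagrangian outer loop, illustrative only and not Algencan's actual code; the inner solver minimize_box is an assumed callable for the subproblem minimizations, and the crude tenfold penalty update stands in for the careful management discussed in the talk.

    import numpy as np

    def phr_augmented_lagrangian(f, h, g, minimize_box, x0, rho=10.0, iters=20):
        # Generic PHR outer loop for min f(x) s.t. h(x) = 0, g(x) <= 0.
        x = np.asarray(x0, dtype=float)
        lam = np.zeros(len(h(x)))               # equality multipliers
        mu = np.zeros(len(g(x)))                # inequality multipliers (>= 0)

        def L(x):
            # PHR augmented Lagrangian: shifted quadratic penalty terms
            return (f(x)
                    + 0.5 * rho * np.sum((h(x) + lam / rho) ** 2)
                    + 0.5 * rho * np.sum(np.maximum(0.0, g(x) + mu / rho) ** 2))

        for _ in range(iters):
            x = minimize_box(L, x)              # inner (box-constrained) solve
            lam = lam + rho * h(x)              # first-order multiplier updates
            mu = np.maximum(0.0, mu + rho * g(x))
            rho = rho * 10.0                    # naive update; real codes increase
                                                # rho only when feasibility stalls
        return x, lam, mu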
Plenary Talk 09
Sharp local convergence analysis of Newtonian
(and not-so-Newtonian) methods for
constrained optimization
Mikhail Solodov
Abstract: In this talk, we shall discuss some recent advances in convergence analysis of several primal-dual Newton-type methods for optimization. These include several variants of the sequential quadratic programming
algorithm (quasi-Newton, truncated, stabilized), the linearly constrained
(augmented) Lagrangian method, and some others. A perturbed Josephy-Newton framework and a perturbed SQP framework are introduced, which allow us to treat the algorithms in question in a unified way, dispensing with some of the previously common assumptions (such as strict complementarity and linear independence of gradients of active constraints). For stabilized SQP, by a somewhat different Newtonian analysis, we show superlinear convergence under the second-order sufficient condition alone, without any assumptions on the constraints at all (i.e., no constraint qualifications). Moreover, rather surprisingly, yet another variation of the Newtonian line of analysis applies to the augmented Lagrangian algorithm (the method of multipliers), which is, in principle, not commonly recognized as a Newton-type method. And, again, some significantly stronger-than-usual results are
obtained.
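
For reference, the (unperturbed) Josephy-Newton iteration for a generalized equation, around which such perturbed frameworks are built, reads as follows; here Phi is smooth and N is a set-valued mapping (for KKT systems, N is a normal cone map and the iteration reduces to SQP):

    0 \in \Phi(z) + N(z),
    \qquad
    0 \in \Phi(z^k) + \Phi'(z^k)(z^{k+1} - z^k) + N(z^{k+1}).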
Plenary Talk 10
VI decomposition methods with application
to electricity markets
Claudia Sagastizábal
Abstract: Due to the presence of relatively few companies generating power
in a given region, electricity markets are naturally cast in an oligopolistic
competition framework. In this setting, game-theoretic formulations are
useful tools for analyzing strategic behaviour of agents. There are different
game models, depending on the horizon (short/long term), on whether or not uncertainty and risk-hedging are modelled, etc. Most importantly, models differ on the role of the Independent System Operator (ISO) coordinating the game: the ISO can be seen as a leader with the agents following, or simply as another player of the game, on equal terms with the agents. Under a bounded rationality assumption, in which players do not know the strategies of their adversaries and all decisions are taken simultaneously, the model is formulated as a generalized Nash game, possibly with a shared constraint, yielding a (quasi-)variational inequality (VI) or a complementarity problem, in which each player's problem is an optimization problem. To ease the exposition, we consider some deterministic variants of this last situation and present a Dantzig-Wolfe decomposition method especially suitable in the presence of shared constraints.
Joint work with Juan Pablo Luna and Mikhail Solodov.
Plenary Talk 11
Constant positive generators:
a new constraint qualification
Paulo J. S. Silva
Abstract: We present a new constraint qualification that extends the relaxed constant rank constraint qualification. We relax the assumption that the rank of all subsets of gradients of active inequality constraints and equality constraints must remain constant to a single, easily computed subset of such gradients. Our new constraint qualification also extends the recently proposed relaxed constant positive linear dependence condition and ensures the convergence of penalty-based methods, like the augmented Lagrangian, and of a sequential quadratic programming algorithm.
Joint work with Roberto Andreani, María Laura Schuverdt, and Gabriel Haeser.
Plenary Talk 12
Continuous time approach to asymptotic value
of repeated games
Sylvain Sorin
Abstract: We consider the sequences of values of two-person zero-sum repeated games as the expected duration goes to infinity. In the discounted case the value is a fixed point of the Shapley operator, which allows one to prove convergence in some classes. For the general evaluation case, where this property does not hold, we extend the value function to the product: state space × time. The family of values then corresponds to the discretization of a game in continuous time, and we prove convergence using comparison properties. This is joint work with P. Cardaliaguet and R. Laraki.
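
To fix notation, in the discounted case with discount factor λ the value v_λ is the unique fixed point of the Shapley operator; in one standard formulation (with state space Ω, stage payoff g, and transition probabilities p — notation assumed here, not taken from the talk):

    v_\lambda(\omega) = \operatorname{val}_{x, y}
      \Big[ \lambda\, g(\omega, x, y)
        + (1 - \lambda) \sum_{\omega' \in \Omega}
          p(\omega' \mid \omega, x, y)\, v_\lambda(\omega') \Big],

where val denotes the value of the one-shot zero-sum game over the players' mixed actions x, y.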
Plenary Talk 13
Exact penalization algorithms in constrained
optimization
Juan Peypouquet
Abstract: In this talk we present a simple and fairly general exact penalization scheme and comment on some inexact proximal- and gradient-type
algorithms for constrained optimization problems.
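
A prototypical exact penalization scheme of this kind replaces the constrained problem by an unconstrained one with a distance penalty; under suitable assumptions (for instance, β larger than a Lipschitz constant of f, with C closed), the two problems share solutions. This is a standard sketch, not necessarily the speaker's scheme:

    \min_{x \in C} f(x)
    \quad \rightsquigarrow \quad
    \min_{x} \; f(x) + \beta\, \operatorname{dist}(x, C),
    \qquad
    \operatorname{dist}(x, C) = \inf_{c \in C} \| x - c \|.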
Plenary Talk 14
Local and Semi-local Convergence of
Newton's Method
Orizon P. Ferreira
Abstract: A local analysis of Newton's method for solving nonlinear equations will be presented, under a majorant condition. We will establish convergence of the sequence generated by the method, the largest radius of the ball in which the solution is unique, the optimal convergence radius, and results on the rate of convergence. Moreover, interesting special cases are presented as applications. A semi-local analysis of Newton's method will also be discussed.
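
For reference, Newton's method for F(x) = 0 is the iteration below; a majorant condition, in one common form, controls the variation of F' by a scalar function f (an assumption-level sketch of the notion, not necessarily the exact condition of the talk):

    x_{k+1} = x_k - F'(x_k)^{-1} F(x_k),
    \qquad
    \| F'(x_0)^{-1} [\, F'(y) - F'(x) \,] \|
      \le f'(\|y - x\| + \|x - x_0\|) - f'(\|x - x_0\|).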
Mini-course 03
Fixed Point Algorithms in Convex Optimization
Patrick L. Combettes
Abstract: The objective of this course is to give an overview of the use
of fixed point algorithms to develop algorithms for smooth and nonsmooth
convex optimization. To be discussed in particular are subgradient projection methods for solving systems of convex inequalities, projection methods for solving best approximation problems, the forward-backward and forward-backward-forward algorithms for solving minimization problems involving a smooth function, and the Douglas-Rachford algorithm and its extensions for solving general nonsmooth convex problems.
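
As a concrete instance, a minimal sketch of the forward-backward iteration for minimizing f + g, with f smooth (explicit gradient step) and g prox-friendly (implicit proximal step); the l1 prox is included as an example, and the step size should be small enough relative to the Lipschitz constant of the gradient (e.g., at most 1/L).

    import numpy as np

    def forward_backward(grad_f, prox_g, x0, step, iters=200):
        # Forward-backward splitting: explicit gradient step on f,
        # implicit (proximal) step on g.
        x = np.asarray(x0, dtype=float)
        for _ in range(iters):
            x = prox_g(x - step * grad_f(x), step)
        return x

    def prox_l1(z, step, lam=1.0):
        # Proximal operator of lam * ||.||_1 (soft-thresholding).
        return np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)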
Mini-course 04
Discrete and continuous dynamics for evolution
equations governed by maximal monotone operators
Sylvain Sorin
Abstract: The course is devoted to the asymptotic behavior of solutions
of evolution equations generated by maximal monotone operators in Hilbert
spaces. The emphasis is on the comparison of the continuous-time trajectories with sequences generated by implicit or explicit discrete-time schemes (prox and Euler). The presentation covers weak convergence of the averaged process and of the process itself, as well as strong convergence, and aims at highlighting the main ideas and unifying the proofs.
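
In symbols, for an evolution equation governed by a maximal monotone operator A on a Hilbert space, the two discretizations compared are (with step sizes λ_k > 0, and a selection from A(x_k) understood in the explicit case):

    -\dot{x}(t) \in A(x(t)), \quad t \ge 0;
    \qquad
    x_{k+1} = (I + \lambda_k A)^{-1} x_k \ \ \text{(implicit/prox)},
    \qquad
    x_{k+1} = x_k - \lambda_k A(x_k) \ \ \text{(explicit/Euler)}.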