Keynote Speakers
Constrained Min-Max Optimization: Last-Iterate Convergence and Acceleration

Yang Cai
Yale University
Abstract:
Min-max optimization plays a central role in numerous machine learning (ML) applications, ranging from generative adversarial networks and adversarial examples to robust optimization and multi-agent reinforcement learning. These ML applications pose new challenges for min-max optimization, requiring algorithms to (i) exhibit last-iterate convergence, as opposed to the more traditional average-iterate convergence, and (ii) be robust in nonconvex-nonconcave environments. In this talk, we first focus on the last-iterate convergence rates of two of the most classical and popular algorithms for solving convex-concave min-max optimization: the extragradient (EG) method by Korpelevich (1976) and the optimistic gradient (OG) method by Popov (1980). Despite their long history and the intensive attention they have received from the optimization and machine learning communities, the last-iterate convergence rates of both algorithms remained open. We resolve this open problem by showing that both EG and OG exhibit a tight O(1/\sqrt{T}) last-iterate convergence rate under arbitrary convex constraints. In the second part of the talk, we discuss recent progress on obtaining accelerated and optimal first-order methods for structured nonconvex-nonconcave min-max optimization. Our results are obtained via a new approach based on sum-of-squares programming, which may be useful for analyzing other iterative methods.
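The talk is about convergence rates, but the two updates it names are short enough to state. Below is a minimal sketch, not from the talk, of the EG and OG iterations on a toy constrained bilinear game f(x, y) = x^T A y over unit-ball constraints; the operator F, the projection, and the conservative step size are standard textbook choices, and the helper names (F, proj) are mine.

```python
import numpy as np

# Toy constrained bilinear game f(x, y) = x^T A y over unit Euclidean balls.
# Its saddle point is (0, 0), so the norm of the last iterate measures convergence.
rng = np.random.default_rng(0)
d = 5
A = rng.standard_normal((d, d))

def F(z):
    # Monotone operator of the game: F(x, y) = (grad_x f, -grad_y f) = (A y, -A^T x).
    x, y = z[:d], z[d:]
    return np.concatenate([A @ y, -A.T @ x])

def proj(z):
    # Project each player's block onto the unit ball (the convex constraint set).
    def ball(v):
        n = np.linalg.norm(v)
        return v if n <= 1.0 else v / n
    return np.concatenate([ball(z[:d]), ball(z[d:])])

eta = 0.2 / np.linalg.norm(A, 2)  # conservative step size, well below 1/L for L = ||A||_2
T = 2000

# Extragradient (Korpelevich, 1976): extrapolate, then update with the midpoint gradient.
z = proj(rng.standard_normal(2 * d))
for _ in range(T):
    z_half = proj(z - eta * F(z))   # extrapolation step
    z = proj(z - eta * F(z_half))   # update step, using the gradient at the midpoint
print("EG last iterate, distance to saddle:", np.linalg.norm(z))

# Optimistic gradient (Popov, 1980): one fresh gradient per iteration, reused once.
z = proj(rng.standard_normal(2 * d))
w = z.copy()
for _ in range(T):
    g = F(w)
    z_next = proj(z - eta * g)      # update the base sequence
    w = proj(z_next - eta * g)      # leading sequence reuses the same (now stale) gradient
    z = z_next
print("OG last iterate, distance to saddle:", np.linalg.norm(z))
```

Note that EG pays two gradient evaluations per iteration while OG pays one, which is the practical reason OG-style optimism is popular in ML training loops.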
Bio:
Yang Cai is an Associate Professor of Computer Science and Economics (secondary appointment) at Yale University. He received his PhD in Computer Science from MIT and his BSc in EECS from Peking University. His research interests lie in theoretical computer science and its interface with economics, probability, learning, and statistics. He has been honored with the Sloan Research Fellowship, the NSF CAREER Award, and the William Dawson Scholarship.