DIPPA: An Improved Method for Bilinear Saddle Point Problems
This paper studies bilinear saddle point problems min_x max_y g(x) + x^⊤Ay − h(y), where the functions g and h are smooth and strongly convex. When gradient and proximal oracles for g and h are accessible, optimal algorithms have already been developed in the literature <cit.>. However, the proximal operator is not always easy to compute, especially in constrained zero-sum matrix games <cit.>. This work proposes a new algorithm that only requires access to the gradients of g and h. Our algorithm achieves a complexity upper bound of 𝒪̃(‖A‖_2/√(μ_x μ_y) + √(κ_x κ_y (κ_x + κ_y))), which has optimal dependency on the coupling condition number ‖A‖_2/√(μ_x μ_y) up to logarithmic factors.
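To make the gradient-oracle model concrete, the sketch below runs the classical extragradient method on a small instance with g(x) = (μ_x/2)‖x‖² and h(y) = (μ_y/2)‖y‖², so the unique saddle point is (0, 0). This is a baseline first-order method, not the DIPPA algorithm of the paper; all names and step sizes are illustrative assumptions.

```python
import numpy as np

def extragradient(A, mu_x, mu_y, x0, y0, eta=0.05, iters=2000):
    """Extragradient on min_x max_y (mu_x/2)||x||^2 + x^T A y - (mu_y/2)||y||^2.

    Baseline method using only gradient oracles -- NOT the paper's DIPPA
    algorithm; it illustrates the oracle model the paper assumes.
    """
    x, y = x0.copy(), y0.copy()
    for _ in range(iters):
        # extrapolation step to a midpoint
        gx = mu_x * x + A @ y           # gradient in x
        gy = A.T @ x - mu_y * y         # gradient in y (ascent direction)
        xm, ym = x - eta * gx, y + eta * gy
        # corrected step using the midpoint gradients
        gxm = mu_x * xm + A @ ym
        gym = A.T @ xm - mu_y * ym
        x, y = x - eta * gxm, y + eta * gym
    return x, y

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
x, y = extragradient(A, mu_x=1.0, mu_y=1.0,
                     x0=rng.standard_normal(3), y0=rng.standard_normal(3))
print(np.linalg.norm(x), np.linalg.norm(y))  # both tiny: iterates reach the saddle (0, 0)
```

For this strongly-convex–strongly-concave instance the iterates contract toward the saddle point at a linear rate, provided the step size is small relative to the smoothness of the coupled gradient field.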