International Joint Conference On Theoretical Computer Science – Frontier of Algorithmic Wisdom

August 15-19, 2022, City University of Hong Kong, Hong Kong


Invited Speakers

Track H


Escaping Saddle Points: from Agent-based Models to Stochastic Gradient Descent

Fang-Yi Yu

George Mason University

Abstract:
We study a large family of stochastic processes that make only a small update in each step. One such family is agent-based models, where a single agent updates at a time, so the state changes only slightly per step. A key question is how well such stochastic processes are approximated by their mean-field approximations. Prior work shows that these stochastic processes escape repelling fixed points and saddle points in polynomial time.
We provide a tight analysis of the above question. Additionally, we demonstrate the power of our results by applying them to several settings:
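As a toy illustration of this setting (not the talk's actual model), consider n agents, each holding a binary opinion. In each step one uniformly random agent resamples its opinion to 1 with probability f(x) = 3x² − 2x³, where x is the current fraction of 1-opinions. The mean-field ODE x' = f(x) − x has attracting fixed points at 0 and 1 and a repelling fixed point at x = 1/2; the stochastic process started at 1/2 is pushed off by fluctuations and then escapes. The function names and parameters below are illustrative choices, not from the abstract:

```python
import random

def f(x):
    # smooth majority rule: resample opinion to 1 with probability 3x^2 - 2x^3
    return 3 * x * x - 2 * x ** 3

def simulate(n=1000, steps=200000, seed=1):
    """Run the agent-based dynamic from the repelling fixed point x = 1/2."""
    rng = random.Random(seed)
    ones = n // 2  # number of agents holding opinion 1
    for _ in range(steps):
        x = ones / n
        was_one = rng.random() < x      # opinion of a uniformly random agent
        now_one = rng.random() < f(x)   # its resampled opinion
        ones += int(now_one) - int(was_one)
    return ones / n
```

Since each step moves the state by at most 1/n, the drift matches the mean-field ODE x' = f(x) − x = x(2x − 1)(1 − x); after enough steps the fraction ends up near one of the attracting fixed points 0 or 1, having escaped the repelling fixed point at 1/2.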
- Evolutionary game theory
- Opinion formation dynamics on social networks
- Stochastic gradient descent in a finite-dimensional space
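For the SGD setting, a minimal sketch (my own illustrative example, not the paper's analysis) shows how gradient noise lets the iterate escape a saddle point: on f(x, y) = x² − y², the origin is a saddle, and noisy gradient steps started exactly there drift away along the repelling y-direction. All step sizes and noise scales below are assumptions for illustration:

```python
import random

def grad(x, y):
    # f(x, y) = x^2 - y^2: attracting along x, repelling along y,
    # with a saddle point at the origin
    return (2 * x, -2 * y)

def sgd_escape(steps=2000, eta=0.01, noise=0.01, seed=0):
    """Noisy gradient descent started exactly at the saddle point."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    for _ in range(steps):
        gx, gy = grad(x, y)
        # small stochastic update per step, matching the abstract's setting
        x -= eta * (gx + rng.gauss(0.0, noise))
        y -= eta * (gy + rng.gauss(0.0, noise))
    return x, y
```

Along x the dynamic contracts toward 0, while along y each step multiplies the coordinate by (1 + 2η), so any noise-induced displacement is amplified exponentially and the iterate leaves the saddle's neighborhood.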

Bio:
Fang-Yi Yu is an assistant professor in the Computer Science Department at George Mason University. He was a postdoctoral fellow at the Harvard School of Engineering and Applied Sciences and received his Ph.D. in Computer Science from the University of Michigan. His research is broadly situated at the interface of machine learning, artificial intelligence, and economics. His recent work focuses on machine learning with strategic agents.