Joint Seminar of IEEE Tallahassee PES Chapter and the Center for Advanced Power Systems
Title: Accelerating distributed optimization under communication constraints: the case for expanding quantizers and maximal dissent gossiping algorithms
Presenter: Marcos Vasconcelos, PhD, Research Assistant Professor, Commonwealth Cyber Initiative,
Department of Electrical and Computer Engineering, Virginia Tech.
Time: 2:00 – 3:00 PM, Tuesday, April 19, 2022
Location: Center for Advanced Power Systems, Research Foundation Building A, Seminar Room 120, 2000 Levy Avenue, Tallahassee, Florida 32310
Refreshments will be served
Abstract
Next-generation cyber-physical systems will seamlessly incorporate machine learning capabilities to guarantee performance in non-stationary environments. In decentralized cyber-physical systems, real-time, collaborative machine learning methods require a collection of agents to train a global model efficiently based on local processing and information exchange over a network. Such problems can be posed using the framework of distributed optimization.
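For readers unfamiliar with the framework, a generic formulation is sketched below; the notation is illustrative and the specific problem classes treated in the talk may differ.

```latex
% Generic distributed optimization problem over n networked agents.
% Agent i holds a local convex cost f_i and may exchange information
% only with its neighbors over the communication network.
% (Illustrative notation only, not taken from the talk.)
\min_{x \in \mathbb{R}^d} \; F(x) \;=\; \sum_{i=1}^{n} f_i(x)
```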
This talk will discuss distributed optimization under two types of communication constraints among the networked agents. First, we will consider a distributed two-time-scale gradient method for solving distributed convex optimization problems over a network of agents when the communication bandwidth is limited, so that the information exchanged between nodes must be quantized. Using a novel analysis technique, we show that our two-time-scale algorithm improves the convergence rate compared to existing works. In the second part of the talk, we will discuss a set of agents collaboratively and asynchronously solving a distributed convex optimization problem under the constraint that each agent can communicate with only one of its neighbors at a time. In that case, we would like each agent to communicate with the neighbor holding the most informative local estimate. We propose a new algorithm in which the agents with maximal dissent average their estimates, leading to an information-mixing mechanism that often displays faster convergence to an optimal solution. Unlike most existing algorithms, however, the resulting scheme is state-dependent, which complicates the convergence analysis. Nevertheless, we prove the convergence of max-dissent subgradient methods using a unified framework that can also be applied to other state-dependent distributed optimization algorithms.
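To make the max-dissent idea concrete, the sketch below shows one possible gossip-plus-subgradient iteration: the pair of neighboring agents whose estimates disagree the most average their estimates, and then every agent takes a local subgradient step. The function and parameter names (max_dissent_step, subgrad, step) are illustrative assumptions, not the speaker's implementation. Note that the selected edge depends on the current estimates, which is exactly the state dependence mentioned above.

```python
import numpy as np

# Illustrative sketch only: one max-dissent gossip subgradient iteration
# for minimizing sum_i f_i(x) over an undirected communication graph.
# Names, step-size handling, and the subgradient oracle are assumptions.

def max_dissent_step(x, edges, subgrad, step):
    """One iteration: gossip over the edge with maximal dissent, then a
    local subgradient step at every agent.

    x       : (n, d) array, row i is agent i's current estimate
    edges   : list of (i, j) pairs, the communication links
    subgrad : callable, subgrad(i, x_i) -> a subgradient of f_i at x_i
    step    : positive step size for this iteration
    """
    # Pick the pair of neighbors whose estimates disagree the most.
    i, j = max(edges, key=lambda e: np.linalg.norm(x[e[0]] - x[e[1]]))

    # The two agents with maximal dissent average their estimates.
    avg = 0.5 * (x[i] + x[j])
    x[i] = avg
    x[j] = avg

    # Each agent then takes a subgradient step on its local objective.
    for k in range(x.shape[0]):
        x[k] = x[k] - step * subgrad(k, x[k])
    return x
```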
Brief Bio of Dr. Marcos M. Vasconcelos
Marcos M. Vasconcelos has been a Research Assistant Professor with Virginia Tech's Commonwealth Cyber Initiative (CCI), with a courtesy appointment in the Department of Electrical and Computer Engineering, since 2021. From 2016 to 2020, he was a postdoctoral research associate in the Department of Electrical Engineering at the University of Southern California, where he worked with Prof. Urbashi Mitra. He received his Ph.D. in control theory and optimization from the University of Maryland, College Park, in 2016, under the guidance of Prof. Nuno C. Martins. His research interests are in the general area of multi-agent systems (both natural and artificial) and in the design of intelligent devices that can learn and adapt to make provably optimal distributed decisions, with applications in robotics and sensor networks.