Xiao Li
Assistant Professor
School of Data Science
The Chinese University of Hong Kong, Shenzhen

A little about me: Since summer 2020, I have been an Assistant Professor at the School of Data Science at The Chinese University of Hong Kong, Shenzhen. Before that, I obtained my Ph.D. degree at The Chinese University of Hong Kong from 2016 to 2020, supervised by Prof. Thierry Blu and Prof. Anthony Man-Cho So. I completed my undergraduate studies at Zhejiang University of Technology from 2012 to 2016.
Contact: lixiao at cuhk.edu.cn
Office: 506a, Daoyuan Building, The Chinese University of Hong Kong, Shenzhen.
Abstract (of research): I work at the intersection of (continuous) optimization, machine learning, and signal processing. In particular, I am interested in designing and analyzing deterministic and stochastic optimization algorithms, and in examining specific nonconvex (and possibly nonsmooth) formulations arising in machine learning and signal processing. I aim to distill simple ideas and results. See the “research” page for more details.
I am deeply grateful to my collaborators and students. I also gratefully acknowledge the funding agencies that have supported my research, including RGC (HK), NSFC, the Shenzhen Science and Technology Program, and AIRS.
News
- 24 May 2023. Our work “ReSync: Riemannian Subgradient-based Robust Rotation Synchronization” is available online. In this work, we design ReSync, a Riemannian subgradient method, for robust rotation synchronization. Under a random corruption model, we establish the initialization result, the weak sharpness property, and the contraction property, which together give a full guarantee on recovering the underlying rotations. The manuscript can be found here.
- 23 May 2023. Our work “Revisiting Subgradient Method: Complexity and Convergence Beyond Lipschitz Continuity” is available on arXiv. We establish complexity and convergence results for the subgradient method and its variants without the classical Lipschitz continuity assumption; a toy sketch of the plain subgradient update is included below. Details can be found here.
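For reference, here is a minimal, generic sketch of the classical subgradient iteration that the paper studies. The function names and the ℓ1 toy example are illustrative choices of mine, not taken from the paper; the point is simply that the update uses one subgradient per iteration and the loop itself does not rely on Lipschitz continuity.

```python
import numpy as np

def subgradient_method(subgrad, x0, step_sizes):
    """Plain subgradient iteration x_{k+1} = x_k - alpha_k * g_k, where g_k is an
    element of the subdifferential of f at x_k. `subgrad(x)` returns one such
    subgradient and `step_sizes` is an iterable of (typically diminishing) steps."""
    x = np.asarray(x0, dtype=float)
    for alpha in step_sizes:
        g = subgrad(x)        # any element of the subdifferential at x
        x = x - alpha * g     # the update itself needs no Lipschitz constant
    return x

# Toy example: minimize f(x) = ||x||_1; sign(x) is a valid subgradient of f.
x_final = subgradient_method(np.sign, np.array([3.0, -2.0]),
                             (1.0 / (k + 1) for k in range(200)))
print(x_final)
```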
- 17 April 2023. Our work “Randomized Coordinate Subgradient Method for Nonsmooth Optimization” is available on arXiv. We provide a coordinate extension of the subgradient method (see the sketch below). Complexity and convergence results are given in the setting where the subgradients grow linearly. Some useful technical lemmas and error bound results are also provided. One can download the manuscript here.
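As a rough illustration only: the sketch below updates a single randomly chosen coordinate per iteration. The method in the paper works with coordinate blocks and a more refined oracle; the function name `rcs_method` and the single-coordinate setup are hypothetical simplifications of mine.

```python
import numpy as np

def rcs_method(subgrad, x0, step_sizes, seed=0):
    """Toy randomized coordinate subgradient iteration: at each step, sample one
    coordinate uniformly at random and update only that coordinate using the
    corresponding component of a subgradient."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for alpha in step_sizes:
        i = rng.integers(x.size)   # sample one coordinate (a block of size 1)
        g = subgrad(x)             # a subgradient of the objective at x
        x[i] -= alpha * g[i]       # move along the chosen coordinate only
    return x
```

In practice one would evaluate only the i-th subgradient component rather than the full vector `g`; the full call above just keeps the sketch short.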
- 16 Mar 2023. I will attend the annual meeting of the Operations Research Society of China (ORSC 2023), 7-9 April, in Changsha, Hunan. I will give a talk on “A Unified Convergence Theorem for Stochastic Optimization Methods” (see also here).
- 12 Mar 2023. Our work “Distributed Random Reshuffling over Networks” is accepted for publication in IEEE Transactions on Signal Processing. In this work, we design a simple yet powerful decentralized random reshuffling method over networks, with complexity and convergence guarantees similar to those of the centralized random reshuffling method (sketched below). The authors’ copy is available here.
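For context, here is a minimal sketch of the centralized random reshuffling template that the guarantees are compared against. The decentralized method in the paper additionally mixes iterates with network neighbors, which is not shown here, and the function names and the quadratic toy example are mine.

```python
import numpy as np

def random_reshuffling(grads, x0, alpha, num_epochs, seed=0):
    """Centralized random reshuffling for f(x) = (1/n) * sum_i f_i(x): each epoch
    visits every component gradient exactly once, in a freshly sampled random
    order (i.e., sampling without replacement)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    n = len(grads)
    for _ in range(num_epochs):
        for i in rng.permutation(n):     # one shuffled pass over all components
            x = x - alpha * grads[i](x)  # incremental gradient step on f_i
    return x

# Toy example: f_i(x) = 0.5 * ||x - a_i||^2, so grad f_i(x) = x - a_i.
targets = [np.array([1.0, 0.0]), np.array([0.0, 2.0]), np.array([3.0, 1.0])]
grads = [lambda x, a=a: x - a for a in targets]
print(random_reshuffling(grads, np.zeros(2), alpha=0.1, num_epochs=50))
```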
- 27 Feb 2023. Prof. Andre Milzarek and I will organize the minisymposium “Recent Advances in Stochastic Optimization Methods for Machine Learning” at the SIAM Conference on Optimization (OP23), 31 May to 4 June, Seattle.
- 31 Jan 2023. Our work “Distributed Stochastic Optimization under a General Variance Condition” is available online. We establish the convergence of FedAvg in the nonconvex setting under a general variance condition. This also provides an alternative, informative measure for characterizing data heterogeneity in federated learning. Details can be found here.
- 19 Jan 2023. Our work “Finite-Time Analysis of Decentralized Single-Timescale Actor-Critic” is accepted for publication in Transactions on Machine Learning Research. Here, we show the \(\tilde {\mathcal O}(\varepsilon^{-2})\) sample complexity for the single-timescale AC algorithm (previous results are based on either double-loop or two-timescale schemes). The key step is to establish a smoothness condition for the optimal critic variable. The arXiv version is available here.
- 5 Jan 2023. Our work “Convergence of Random Reshuffling Under The Kurdyka-Lojasiewicz Inequality” is accepted for publication in SIOPT. Sequence convergence results are established for the (stochastic) random reshuffling method. The key insights are: 1) deriving subsequence convergence using diminishing step sizes, and 2) combining diminishing step sizes with the traditional KL analysis framework. Note that sequence convergence results for stochastic optimization methods are rather limited in the literature. An authors’ copy is available here.
This website is based on the al-folio theme.