【Speaker】: 侯园园 (Hou Yuanyuan)
【Abstract】: Distributed learning has been extensively studied and applied in a variety of machine-learning scenarios. This talk begins by revisiting several classical optimization algorithms, namely gradient descent, stochastic gradient descent (SGD), proximal gradient descent, and ADMM. We then extend these algorithms to centralized and decentralized distributed frameworks. In particular, we introduce distributed learning algorithms including the Communication-efficient Surrogate Likelihood (CSL) method and some basic decentralized consensus algorithms.
【Time & Venue】: March 6, 2024, 14:00, Room 417, Weiyu Building (位育楼)
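
【Example】: As an illustration of the basic decentralized consensus algorithms mentioned in the abstract, below is a minimal Python sketch of decentralized gradient descent (DGD): each node mixes its iterate with its neighbors' through a doubly stochastic matrix, then takes a local gradient step. The quadratic local losses, ring topology, and step size are illustrative assumptions, not taken from the talk.

import numpy as np

# Minimal DGD sketch -- illustrative only; the quadratic local losses,
# ring topology, and step size are assumptions, not from the talk.

n, d = 5, 3                                  # nodes, parameter dimension
rng = np.random.default_rng(0)
A = rng.standard_normal((n, d, d))
A = np.einsum('nij,nkj->nik', A, A) + np.eye(d)  # local PD matrices A_i
b = rng.standard_normal((n, d))                  # local linear terms b_i

def local_grad(i, x):
    """Gradient of node i's local loss f_i(x) = 0.5 x'A_i x - b_i'x."""
    return A[i] @ x - b[i]

# Doubly stochastic mixing matrix for a ring: average with both neighbors.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1.0 / 3.0

x = np.zeros((n, d))    # row i is node i's local iterate
alpha = 0.05            # constant step size
for _ in range(500):
    # Consensus step (mix with neighbors), then local gradient step.
    x = W @ x - alpha * np.array([local_grad(i, x[i]) for i in range(n)])

print("disagreement across nodes:", np.linalg.norm(x - x.mean(axis=0)))

With a constant step size, the iterates converge to a neighborhood of the global minimizer; consensus methods of this kind are a standard building block for the decentralized frameworks the talk covers.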