Eduard Gorbunov
Title
Cited by
Year
Distributed learning with compressed gradient differences
K Mishchenko, E Gorbunov, M Takáč, P Richtárik
arXiv preprint arXiv:1901.09269, 2019
45 · 2019
An accelerated method for derivative-free smooth stochastic convex optimization
E Gorbunov, P Dvurechensky, A Gasnikov
arXiv preprint arXiv:1802.09022, 2018
40* · 2018
A unified theory of sgd: Variance reduction, sampling, quantization and coordinate descent
E Gorbunov, F Hanzely, P Richtárik
International Conference on Artificial Intelligence and Statistics, 680-690, 2020
36 · 2020
The global rate of convergence for optimal tensor methods in smooth convex optimization
A Gasnikov, P Dvurechensky, E Gorbunov, E Vorontsova, ...
arXiv preprint arXiv:1809.00382, 2018
30* · 2018
Optimal decentralized distributed algorithms for stochastic convex optimization
E Gorbunov, D Dvinskikh, A Gasnikov
arXiv preprint arXiv:1911.07363, 2019
27 · 2019
Near Optimal Methods for Minimizing Convex Functions with Lipschitz p-th Derivatives
A Gasnikov, P Dvurechensky, E Gorbunov, E Vorontsova, ...
Conference on Learning Theory, 1392-1393, 2019
23 · 2019
On primal and dual approaches for distributed stochastic convex optimization over networks
D Dvinskikh, E Gorbunov, A Gasnikov, P Dvurechensky, CA Uribe
2019 IEEE 58th Conference on Decision and Control (CDC), 7435-7440, 2019
17* · 2019
Accelerated directional search with non-Euclidean prox-structure
EA Vorontsova, AV Gasnikov, EA Gorbunov
Automation and Remote Control 80 (4), 693-707, 2019
16* · 2019
Derivative-free method for composite optimization with applications to decentralized distributed optimization
A Beznosikov, E Gorbunov, A Gasnikov
arXiv preprint arXiv:1911.10645, 2019
13* · 2019
An Accelerated Directional Derivative Method for Smooth Stochastic Convex Optimization
P Dvurechensky, E Gorbunov, A Gasnikov
arXiv preprint arXiv:1804.02394, 2018
13* · 2018
Stochastic three points method for unconstrained smooth minimization
EH Bergou, E Gorbunov, P Richtarik
SIAM Journal on Optimization 30 (4), 2726-2749, 2020
11* · 2020
Stochastic optimization with heavy-tailed noise via accelerated gradient clipping
E Gorbunov, M Danilova, A Gasnikov
Advances in Neural Information Processing Systems 33 (NeurIPS 2020), 2020
9 · 2020
A stochastic derivative free optimization method with momentum
E Gorbunov, A Bibi, O Sener, EH Bergou, P Richtárik
8th International Conference on Learning Representations (ICLR 2020), 2019
7 · 2019
Stochastic spectral and conjugate descent methods
D Kovalev, E Gorbunov, E Gasanov, P Richtárik
arXiv preprint arXiv:1802.03703, 2018
7 · 2018
Recent theoretical advances in decentralized distributed convex optimization
E Gorbunov, A Rogozin, A Beznosikov, D Dvinskikh, A Gasnikov
arXiv preprint arXiv:2011.13259, 2020
4 · 2020
Linearly Converging Error Compensated SGD
E Gorbunov, D Kovalev, D Makarenko, P Richtárik
Advances in Neural Information Processing Systems 33 (NeurIPS 2020), 2020
4 · 2020
Local SGD: Unified theory and new efficient methods
E Gorbunov, F Hanzely, P Richtárik
arXiv preprint arXiv:2011.02828, 2020
3 · 2020
Accelerated gradient-free optimization methods with a non-Euclidean proximal operator
EA Vorontsova, AV Gasnikov, EA Gorbunov, PE Dvurechenskii
Automation and Remote Control 80 (8), 1487-1501, 2019
3 · 2019
On the Upper Bound for the Expectation of the Norm of a Vector Uniformly Distributed on the Sphere and the Phenomenon of Concentration of Uniform Measure on the Sphere
EA Gorbunov, EA Vorontsova, AV Gasnikov
Mathematical Notes 106, 2019
3* · 2019
Recent Theoretical Advances in Non-Convex Optimization
M Danilova, P Dvurechensky, A Gasnikov, E Gorbunov, S Guminov, ...
arXiv preprint arXiv:2012.06188, 2020
2 · 2020