Dmitry Kovalev
Title · Cited by · Year
Stochastic distributed learning with gradient quantization and variance reduction
S Horváth, D Kovalev, K Mishchenko, S Stich, P Richtárik
arXiv preprint arXiv:1904.05115, 2019
Cited by 59 · 2019
Don’t jump through hoops and remove those loops: SVRG and Katyusha are better without the outer loop
D Kovalev, S Horváth, P Richtárik
Algorithmic Learning Theory, 451-467, 2020
Cited by 55 · 2020
Acceleration for compressed gradient descent in distributed and federated optimization
Z Li, D Kovalev, X Qian, P Richtárik
arXiv preprint arXiv:2002.11364, 2020
Cited by 29 · 2020
From Local SGD to Local Fixed-Point Methods for Federated Learning
G Malinovskiy, D Kovalev, E Gasanov, L Condat, P Richtárik
International Conference on Machine Learning, 6692-6701, 2020
Cited by 19 · 2020
Revisiting stochastic extragradient
K Mishchenko, D Kovalev, E Shulgin, P Richtárik, Y Malitsky
International Conference on Artificial Intelligence and Statistics, 4573-4582, 2020
Cited by 18 · 2020
RSN: Randomized Subspace Newton
RM Gower, D Kovalev, F Lieder, P Richtárik
arXiv preprint arXiv:1905.10874, 2019
Cited by 18 · 2019
Stochastic Newton and cubic Newton methods with simple local linear-quadratic rates
D Kovalev, K Mishchenko, P Richtárik
arXiv preprint arXiv:1912.01597, 2019
Cited by 15 · 2019
Accelerated methods for composite non-bilinear saddle point problem
M Alkousa, D Dvinskikh, F Stonyakin, A Gasnikov, D Kovalev
arXiv preprint arXiv:1906.03620, 2019
Cited by 13 · 2019
Linearly Converging Error Compensated SGD
E Gorbunov, D Kovalev, D Makarenko, P Richtárik
arXiv preprint arXiv:2010.12292, 2020
Cited by 9 · 2020
Optimal and practical algorithms for smooth and strongly convex decentralized optimization
D Kovalev, A Salim, P Richtárik
Advances in Neural Information Processing Systems 33, 2020
Cited by 9 · 2020
Stochastic proximal Langevin algorithm: Potential splitting and nonasymptotic rates
A Salim, D Kovalev, P Richtárik
arXiv preprint arXiv:1905.11768, 2019
Cited by 8 · 2019
A linearly convergent algorithm for decentralized optimization: Sending less bits for free!
D Kovalev, A Koloskova, M Jaggi, P Richtárik, S Stich
International Conference on Artificial Intelligence and Statistics, 4087-4095, 2021
Cited by 7 · 2021
Stochastic spectral and conjugate descent methods
D Kovalev, E Gorbunov, E Gasanov, P Richtárik
arXiv preprint arXiv:1802.03703, 2018
Cited by 7 · 2018
Towards accelerated rates for distributed optimization over time-varying networks
A Rogozin, V Lukoshkin, A Gasnikov, D Kovalev, E Shulgin
arXiv preprint arXiv:2009.11069, 2020
Cited by 6 · 2020
Fast linear convergence of randomized BFGS
D Kovalev, RM Gower, P Richtárik, A Rogozin
arXiv preprint arXiv:2002.11337, 2020
Cited by 6 · 2020
Accelerated Methods for Saddle-Point Problem
MS Alkousa, AV Gasnikov, DM Dvinskikh, DA Kovalev, FS Stonyakin
Computational Mathematics and Mathematical Physics 60 (11), 1787-1809, 2020
Cited by 4 · 2020
Decentralized distributed optimization for saddle point problems
A Rogozin, A Beznosikov, D Dvinskikh, D Kovalev, P Dvurechensky, ...
arXiv preprint arXiv:2102.07758, 2021
Cited by 3 · 2021
Variance reduced coordinate descent with acceleration: New method with a surprising application to finite-sum problems
F Hanzely, D Kovalev, P Richtarik
International Conference on Machine Learning, 4039-4048, 2020
Cited by 3 · 2020
Distributed fixed point methods with compressed iterates
S Chraibi, A Khaled, D Kovalev, P Richtárik, A Salim, M Takáč
arXiv preprint arXiv:1912.09925, 2019
Cited by 3 · 2019
A hypothesis about the rate of global convergence for optimal methods (Newton's type) in smooth convex optimization
AV Gasnikov, DA Kovalev
Computer research and modeling 10 (3), 305-314, 2018
Cited by 3 · 2018