Dmitry Kovalev
Stochastic distributed learning with gradient quantization and variance reduction
S Horváth, D Kovalev, K Mishchenko, S Stich, P Richtárik
arXiv preprint arXiv:1904.05115, 2019
Cited by: 50
Don’t jump through hoops and remove those loops: SVRG and Katyusha are better without the outer loop
D Kovalev, S Horváth, P Richtárik
Algorithmic Learning Theory, 451-467, 2020
Cited by: 48
Acceleration for compressed gradient descent in distributed and federated optimization
Z Li, D Kovalev, X Qian, P Richtárik
arXiv preprint arXiv:2002.11364, 2020
Cited by: 21
Revisiting stochastic extragradient
K Mishchenko, D Kovalev, E Shulgin, P Richtárik, Y Malitsky
International Conference on Artificial Intelligence and Statistics, 4573-4582, 2020
Cited by: 17
RSN: Randomized Subspace Newton
RM Gower, D Kovalev, F Lieder, P Richtárik
arXiv preprint arXiv:1905.10874, 2019
Cited by: 15
From Local SGD to Local Fixed-Point Methods for Federated Learning
G Malinovskiy, D Kovalev, E Gasanov, L Condat, P Richtárik
International Conference on Machine Learning, 6692-6701, 2020
Cited by: 12
Stochastic Newton and cubic Newton methods with simple local linear-quadratic rates
D Kovalev, K Mishchenko, P Richtárik
arXiv preprint arXiv:1912.01597, 2019
Cited by: 12
Accelerated methods for composite non-bilinear saddle point problem
M Alkousa, D Dvinskikh, F Stonyakin, A Gasnikov, D Kovalev
arXiv preprint arXiv:1906.03620, 2019
Cited by: 11
Stochastic proximal Langevin algorithm: Potential splitting and nonasymptotic rates
A Salim, D Kovalev, P Richtárik
arXiv preprint arXiv:1905.11768, 2019
Cited by: 7
Stochastic spectral and conjugate descent methods
D Kovalev, E Gorbunov, E Gasanov, P Richtárik
arXiv preprint arXiv:1802.03703, 2018
Cited by: 7
Linearly Converging Error Compensated SGD
E Gorbunov, D Kovalev, D Makarenko, P Richtárik
arXiv preprint arXiv:2010.12292, 2020
Cited by: 4
Towards accelerated rates for distributed optimization over time-varying networks
A Rogozin, V Lukoshkin, A Gasnikov, D Kovalev, E Shulgin
arXiv preprint arXiv:2009.11069, 2020
Cited by: 4
Optimal and practical algorithms for smooth and strongly convex decentralized optimization
D Kovalev, A Salim, P Richtárik
arXiv preprint arXiv:2006.11773, 2020
Cited by: 4
Variance reduced coordinate descent with acceleration: New method with a surprising application to finite-sum problems
F Hanzely, D Kovalev, P Richtárik
International Conference on Machine Learning, 4039-4048, 2020
Cited by: 3
Distributed fixed point methods with compressed iterates
S Chraibi, A Khaled, D Kovalev, P Richtárik, A Salim, M Takáč
arXiv preprint arXiv:1912.09925, 2019
Cited by: 3
A hypothesis about the rate of global convergence for optimal methods (Newton's type) in smooth convex optimization
AV Gasnikov, DA Kovalev
Computer Research and Modeling 10 (3), 305-314, 2018
Cited by: 3
Accelerated Methods for Saddle-Point Problem
MS Alkousa, AV Gasnikov, DM Dvinskikh, DA Kovalev, FS Stonyakin
Computational Mathematics and Mathematical Physics 60 (11), 1787-1809, 2020
Cited by: 2
Fast linear convergence of randomized BFGS
D Kovalev, RM Gower, P Richtárik, A Rogozin
arXiv preprint arXiv:2002.11337, 2020
Cited by: 2
A Linearly Convergent Algorithm for Decentralized Optimization: Sending Less Bits for Free!
D Kovalev, A Koloskova, M Jaggi, P Richtárik, SU Stich
arXiv preprint arXiv:2011.01697, 2020
Cited by: 1
An Optimal Algorithm for Strongly Convex Minimization under Affine Constraints
A Salim, L Condat, D Kovalev, P Richtárik
arXiv preprint arXiv:2102.11079, 2021