Sotabase

Nicolas Flammarion

Career
· Researcher, Berkeley Artificial Intelligence Research Lab (BAIR), UC Berkeley · 2024–
· Alumnus, Berkeley Artificial Intelligence Research Lab · 2019–
· Postdoctoral Researcher, University of California, Berkeley (UC Berkeley) · 2017–
Publications (48)
· Harder, Better, Faster, Stronger Convergence Rates for Least-Squares Regression. Journal of Machine Learning Research · 2016 · 238 citations
· Sampling can be faster than optimization. Proceedings of the National Academy of Sciences of the United States of America · 2018 · 196 citations
· From Averaging to Acceleration, There is Only a Step-size. Annual Conference on Computational Learning Theory · 2015 · 143 citations
· Sparse-RS: a versatile framework for query-efficient sparse black-box adversarial attacks. AAAI Conference on Artificial Intelligence · 2020 · 128 citations
· Implicit Bias of SGD for Diagonal Linear Networks: a Provable Benefit of Stochasticity. Neural Information Processing Systems · 2021 · 117 citations
· Averaging Stochastic Gradient Descent on Riemannian Manifolds. Annual Conference on Computational Learning Theory · 2018 · 109 citations
· On the Theory of Variance Reduction for Stochastic Gradient Monte Carlo. International Conference on Machine Learning · 2018 · 87 citations
· Fast Mean Estimation with Sub-Gaussian Rates. Annual Conference on Computational Learning Theory · 2019 · 79 citations
· Is There an Analog of Nesterov Acceleration for MCMC? arXiv.org · 2019 · 79 citations
· Gradient flow dynamics of shallow ReLU networks for square loss and orthogonal inputs. Neural Information Processing Systems · 2022 · 76 citations
· Improved bounds for discretization of Langevin diffusions: Near-optimal rates without convexity. Bernoulli · 2019 · 76 citations
· Saddle-to-saddle dynamics in diagonal linear networks. Neural Information Processing Systems · 2023 · 48 citations
· Last iterate convergence of SGD for Least-Squares in the Interpolation regime. Neural Information Processing Systems · 2021 · 45 citations
· Label noise (stochastic) gradient descent implicitly solves the Lasso for quadratic parametrisation. Annual Conference on Computational Learning Theory · 2022 · 41 citations
· Optimal Robust Linear Regression in Nearly Linear Time. arXiv.org · 2020 · 38 citations
· Online Robust Regression via SGD on the l1 loss. Neural Information Processing Systems · 2020 · 36 citations
· An Efficient Sampling Algorithm for Non-smooth Composite Potentials. Journal of Machine Learning Research · 2019 · 29 citations
· Stochastic Composite Least-Squares Regression with Convergence Rate $O(1/n)$. Annual Conference on Computational Learning Theory · 2017 · 28 citations
· A Continuized View on Nesterov Acceleration. arXiv.org · 2021 · 24 citations
· (S)GD over Diagonal Linear Networks: Implicit Regularisation, Large Stepsizes and Edge of Stability. arXiv.org · 2023 · 24 citations