About

I’m an Associate Professor at the Blavatnik School of Computer Science and AI at Tel Aviv University and a Senior Research Scientist at Google Research, Tel Aviv. I received my PhD from the Technion – Israel Institute of Technology.

My research interests are in machine learning, optimization, and reinforcement learning.


Preprints

Flat Minima and Generalization: Insights from Stochastic Convex Optimization.
Matan Schliserman, Shira Vansover-Hager, Tomer Koren.
Oral presentation at OPT 2025
[arXiv]

Convergence and Sample Complexity of First-Order Methods for Agnostic Reinforcement Learning.
Uri Sherman, Tomer Koren, Yishay Mansour.
Preliminary version in ARLET 2025
[arXiv]

Benefits of Learning Rate Annealing for Tuning-Robustness in Stochastic Optimization.
Amit Attia, Tomer Koren.
Preliminary version in OPT 2025
[arXiv]

From Continual Learning to SGD and Back: Better Rates for Continual Linear Models.
Itay Evron, Ran Levinstein, Matan Schliserman, Uri Sherman, Tomer Koren, Daniel Soudry, Nathan Srebro.
[arXiv]

Complexity of Vector-valued Prediction: From Linear Models to Stochastic Convex Optimization.
Matan Schliserman, Tomer Koren.
[arXiv]

A General Reduction for High-Probability Analysis with General Light-Tailed Distributions.
Amit Attia, Tomer Koren.
[arXiv]


Publications

Fast Last-Iterate Convergence of SGD in the Smooth Interpolation Regime.
Amit Attia, Matan Schliserman, Uri Sherman, Tomer Koren.
NeurIPS 2025 (to appear)
[arXiv]

From Contextual Combinatorial Semi-Bandits to Bandit List Classification: Improved Sample Complexity with Sparse Rewards.
Liad Erez, Tomer Koren.
NeurIPS 2025 (to appear)
[arXiv]

Multiclass Loss Geometry Matters for Generalization of Gradient Descent in Separable Classification.
Matan Schliserman, Tomer Koren.
NeurIPS 2025 (to appear)
[arXiv]

Optimal Rates in Continual Linear Regression via Increasing Regularization.
Ran Levinstein, Amit Attia, Matan Schliserman, Uri Sherman, Tomer Koren, Daniel Soudry, Itay Evron.
NeurIPS 2025 (to appear)
[arXiv]

Multiplicative Reweighting for Robust Neural Network Optimization.
Noga Bar, Raja Giryes, Tomer Koren.
SIAM Journal on Imaging Sciences (to appear)
[arXiv]

Rapid Overfitting of Multi-Pass SGD in Stochastic Convex Optimization.
Shira Vansover-Hager, Tomer Koren, Roi Livni.
ICML 2025 (Spotlight)
[arXiv]

Convergence of Policy Mirror Descent Beyond Compatible Function Approximation.
Uri Sherman, Tomer Koren, Yishay Mansour.
ICML 2025
[arXiv]

Faster Stochastic Optimization with Arbitrary Delays via Adaptive Asynchronous Mini-Batching.
Amit Attia, Tomer Koren.
ICML 2025
[arXiv]

Nearly Optimal Sample Complexity for Learning with Label Proportions.
Robert Busa-Fekete, Travis Dick, Claudio Gentile, Haim Kaplan, Tomer Koren, Uri Stemmer.
ICML 2025
[arXiv]

Dueling Convex Optimization with General Preferences.
Aadirupa Saha, Tomer Koren, Yishay Mansour.
ICML 2025
[arXiv]

Locally Optimal Descent for Dynamic Stepsize Scheduling.
Gilad Yehudai, Alon Cohen, Amit Daniely, Yoel Drori, Tomer Koren, Mariano Schain.
AISTATS 2025
[arXiv]

The Dimension Strikes Back with Gradients: Generalization of Gradient Methods in Stochastic Convex Optimization.
Matan Schliserman, Uri Sherman, Tomer Koren.
ALT 2025 (Outstanding Paper Award); Oral presentation at OPT 2024
[arXiv]

Fast Rates for Bandit PAC Multiclass Classification.
Liad Erez, Alon Cohen, Tomer Koren, Yishay Mansour, Shay Moran.
NeurIPS 2024
[arXiv]

Private Online Learning via Lazy Algorithms.
Hilal Asi, Tomer Koren, Daogao Liu, Kunal Talwar.
NeurIPS 2024
[arXiv]

Rate-Optimal Policy Optimization for Linear Markov Decision Processes.
Uri Sherman, Alon Cohen, Tomer Koren, Yishay Mansour.
ICML 2024 (Oral)
[arXiv]

How Free is Parameter-Free Stochastic Optimization?
Amit Attia, Tomer Koren.
ICML 2024 (Spotlight)
[arXiv]

The Real Price of Bandit Information in Multiclass Classification.
Liad Erez, Alon Cohen, Tomer Koren, Yishay Mansour, Shay Moran.
COLT 2024
[arXiv]

Faster Convergence with Multiway Preferences.
Aadirupa Saha, Vitaly Feldman, Tomer Koren, Yishay Mansour.
AISTATS 2024
[arXiv]

Tight Risk Bounds for Gradient Descent on Separable Data.
Matan Schliserman, Tomer Koren.
NeurIPS 2023 (Spotlight)
[arXiv]

Improved Regret for Efficient Online Reinforcement Learning with Linear Function Approximation.
Uri Sherman, Tomer Koren, Yishay Mansour.
ICML 2023
[arXiv]

SGD with AdaGrad Stepsizes: Full Adaptivity with High Probability to Unknown Parameters, Unbounded Gradients and Affine Variance.
Amit Attia, Tomer Koren.
ICML 2023
[arXiv]

Near-Optimal Algorithms for Private Online Optimization in the Realizable Regime.
Hilal Asi, Vitaly Feldman, Tomer Koren, Kunal Talwar.
ICML 2023
[arXiv]

Regret Minimization and Convergence to Equilibria in General-sum Markov Games.
Liad Erez, Tal Lancewicki, Uri Sherman, Tomer Koren, Yishay Mansour.
ICML 2023
[arXiv]

Private Online Prediction from Experts: Separations and Faster Rates.
Hilal Asi, Vitaly Feldman, Tomer Koren, Kunal Talwar.
COLT 2023; Oral presentation at TPDP 2023
[arXiv]

Benign Underfitting of Stochastic Gradient Descent.
Tomer Koren, Roi Livni, Yishay Mansour, Uri Sherman.
NeurIPS 2022
[arXiv]

Rate-Optimal Online Convex Optimization in Adaptive Linear Control.
Asaf Cassel, Alon Cohen, Tomer Koren.
NeurIPS 2022
[arXiv]

Better Best-of-Both-Worlds Bounds for Bandits with Switching Costs.
Idan Amir, Guy Azov, Tomer Koren, Roi Livni.
NeurIPS 2022
[arXiv]

Stability vs Implicit Bias of Gradient Methods on Separable Data and Beyond.
Matan Schliserman, Tomer Koren.
COLT 2022
[arXiv]

Efficient Online Linear Control with Stochastic Convex Costs and Unknown Dynamics.
Asaf Cassel, Alon Cohen, Tomer Koren.
COLT 2022
[arXiv]

Uniform Stability for First-Order Empirical Risk Minimization.
Amit Attia, Tomer Koren.
COLT 2022
[arXiv]

Best-of-All-Worlds Bounds for Online Learning with Feedback Graphs.
Liad Erez, Tomer Koren.
NeurIPS 2021
[arXiv]

Optimal Rates for Random Order Online Optimization.
Uri Sherman, Tomer Koren, Yishay Mansour.
NeurIPS 2021 (Oral)
[arXiv]

Never Go Full Batch (in Stochastic Convex Optimization).
Idan Amir, Yair Carmon, Tomer Koren, Roi Livni.
NeurIPS 2021
[arXiv]

Asynchronous Stochastic Optimization Robust to Arbitrary Delays.
Alon Cohen, Amit Daniely, Yoel Drori, Tomer Koren, Mariano Schain.
NeurIPS 2021
[arXiv]

Algorithmic Instabilities of Accelerated Gradient Descent.
Amit Attia, Tomer Koren.
NeurIPS 2021
[arXiv]

SGD Generalizes Better Than GD (And Regularization Doesn’t Help).
Idan Amir, Tomer Koren, Roi Livni.
COLT 2021
[arXiv]

Lazy OCO: Online Convex Optimization on a Switching Budget.
Uri Sherman, Tomer Koren.
COLT 2021
[arXiv]

Online Markov Decision Processes with Aggregate Bandit Feedback.
Alon Cohen, Haim Kaplan, Tomer Koren, Yishay Mansour.
COLT 2021
[arXiv]

Private Stochastic Convex Optimization: Optimal Rates in L1 Geometry.
Hilal Asi, Vitaly Feldman, Tomer Koren, Kunal Talwar.
ICML 2021
[arXiv]

Online Policy Gradient for Model Free Learning of Linear Quadratic Regulators with $\sqrt{T}$ Regret.
Asaf Cassel, Tomer Koren.
ICML 2021
[arXiv]

Adversarial Dueling Bandits.
Aadirupa Saha, Tomer Koren, Yishay Mansour.
ICML 2021
[arXiv]

Dueling Convex Optimization.
Aadirupa Saha, Tomer Koren, Yishay Mansour.
ICML 2021

Stochastic Multi-Armed Bandits with Unrestricted Delay Distributions.
Tal Lancewicki, Shahar Segal, Tomer Koren, Yishay Mansour.
ICML 2021
[arXiv]

Bandit Linear Control.
Asaf Cassel, Tomer Koren.
NeurIPS 2020 (Spotlight)
[arXiv]

Stochastic Optimization for Laggard Data Pipelines.
Naman Agarwal, Rohan Anil, Tomer Koren, Kunal Talwar, Cyril Zhang.
NeurIPS 2020
[arXiv]

Can Implicit Bias Explain Generalization? Stochastic Convex Optimization as a Case Study.
Assaf Dauber, Meir Feder, Tomer Koren, Roi Livni.
NeurIPS 2020
[arXiv]

Prediction with Corrupted Expert Advice.
Idan Amir, Idan Attias, Tomer Koren, Roi Livni, Yishay Mansour.
NeurIPS 2020 (Spotlight)
[arXiv]

Logarithmic Regret for Learning Linear Quadratic Regulators Efficiently.
Asaf Cassel, Alon Cohen, Tomer Koren.
ICML 2020
[arXiv]

Private Stochastic Convex Optimization: Optimal Rates in Linear Time.
Vitaly Feldman, Tomer Koren, Kunal Talwar.
STOC 2020; preliminary version in NeurIPS’19 Workshop on “Privacy in Machine Learning” (PriML’19)
[arXiv]

Memory-Efficient Adaptive Optimization.
Rohan Anil, Vineet Gupta, Tomer Koren, Yoram Singer.
NeurIPS 2019
[arXiv]

Robust Bi-Tempered Logistic Loss Based on Bregman Divergences.
Ehsan Amid, Manfred K. Warmuth, Rohan Anil, Tomer Koren.
NeurIPS 2019
[arXiv]

Better Algorithms for Stochastic Bandits with Adversarial Corruptions.
Anupam Gupta, Tomer Koren, Kunal Talwar.
COLT 2019
[arXiv]

Learning Linear-Quadratic Regulators Efficiently with only $\sqrt{T}$ Regret.
Alon Cohen, Tomer Koren, Yishay Mansour.
ICML 2019
[arXiv]

Semi-Cyclic Stochastic Gradient Descent.
Hubert Eichner, Tomer Koren, Brendan McMahan, Nathan Srebro, Kunal Talwar.
ICML 2019
[arXiv]

Online Linear-Quadratic Control.
Alon Cohen, Avinatan Hassidim, Tomer Koren, Nevena Lazic, Yishay Mansour, Kunal Talwar.
ICML 2018
[arXiv]

Shampoo: Preconditioned Stochastic Tensor Optimization.
Vineet Gupta, Tomer Koren, Yoram Singer.
ICML 2018
[arXiv]

Multi-Armed Bandits with Metric Movement Costs.
Tomer Koren, Roi Livni, Yishay Mansour.
NIPS 2017
[arXiv]

Affine-Invariant Online Optimization and the Low-rank Experts Problem.
Tomer Koren, Roi Livni.
NIPS 2017
[pdf]

Bandits with Movement Costs and Adaptive Pricing.
Tomer Koren, Roi Livni, Yishay Mansour.
COLT 2017
[arXiv]

Tight Bounds for Bandit Combinatorial Optimization.
Alon Cohen, Tamir Hazan, Tomer Koren.
COLT 2017
[arXiv]

The Limits of Learning with Missing Data.
Brian Bullins, Elad Hazan, Tomer Koren.
NIPS 2016
[pdf]

Online Pricing With Strategic and Patient Buyers.
Michal Feldman, Tomer Koren, Roi Livni, Yishay Mansour, Aviv Zohar.
NIPS 2016
[pdf]

Online Learning with Feedback Graphs Without the Graphs.
Alon Cohen, Tamir Hazan, Tomer Koren.
ICML 2016
[arXiv]

Online Learning with Low Rank Experts.
Elad Hazan, Tomer Koren, Roi Livni, Yishay Mansour.
COLT 2016
[arXiv]

The Computational Power of Optimization in Online Learning.
Elad Hazan, Tomer Koren.
STOC 2016
[arXiv]

A Linear-Time Algorithm for Trust Region Problems.
Elad Hazan, Tomer Koren.
Mathematical Programming, 158(1-2): 363-381, 2016
[arXiv]

Fast Rates for Exp-concave Empirical Risk Minimization.
Tomer Koren, Kfir Levy.
NIPS 2015
[pdf]

Bandit Convex Optimization: $\sqrt{T}$ Regret in One Dimension.
Sébastien Bubeck, Ofer Dekel, Tomer Koren, Yuval Peres.
COLT 2015
[arXiv]

Online Learning with Feedback Graphs: Beyond Bandits.
Noga Alon, Nicolò Cesa-Bianchi, Ofer Dekel, Tomer Koren.
COLT 2015
[arXiv]

Oracle-Based Robust Optimization via Online Learning.
Aharon Ben-Tal, Elad Hazan, Tomer Koren, Shie Mannor.
Operations Research, 63(3), 628-638, 2015
[arXiv]

The Blinded Bandit: Learning with Adaptive Feedback.
Ofer Dekel, Elad Hazan, Tomer Koren.
NIPS 2014
[pdf] [full]

Chasing Ghosts: Competing with Stateful Policies.
Uriel Feige, Tomer Koren, Moshe Tennenholtz.
FOCS 2014 (Invited to SICOMP)
[arXiv]

Logistic Regression: Tight Bounds for Stochastic and Online Optimization.
Elad Hazan, Tomer Koren, Kfir Levy.
COLT 2014
[arXiv]

Online Learning with Composite Loss Functions.
Ofer Dekel, Jian Ding, Tomer Koren, Yuval Peres.
COLT 2014
[arXiv]

Bandits with Switching Costs: $T^{2/3}$ Regret.
Ofer Dekel, Jian Ding, Tomer Koren, Yuval Peres.
STOC 2014
[arXiv]

Distributed Exploration in Multi-Armed Bandits.
Eshcar Hillel, Zohar Karnin, Tomer Koren, Ronny Lempel, Oren Somekh.
NIPS 2013 (Spotlight)
[arXiv]

Almost Optimal Exploration in Multi-Armed Bandits.
Zohar Karnin, Tomer Koren, Oren Somekh.
ICML 2013
[pdf]

Linear Regression with Limited Observation.
Elad Hazan, Tomer Koren.
ICML 2012 (Best Student Paper Runner-up)
[arXiv]

Supervised System Identification Based on Local PCA Models.
Tomer Koren, Ronen Talmon, Israel Cohen.
ICASSP 2012
[pdf]

Beating SGD: Learning SVMs in Sublinear Time.
Elad Hazan, Tomer Koren, Nathan Srebro.
NIPS 2011
[pdf] [full]


Technical Reports and Open Problems

Open Problem: Tight Convergence of SGD in Constant Dimension.
Tomer Koren, Shahar Segal.
COLT 2020
[pdf]

Disentangling Adaptive Gradient Methods from Learning Rates.
Naman Agarwal, Rohan Anil, Elad Hazan, Tomer Koren, Cyril Zhang.
Manuscript; appeared in OPT 2019
[arXiv]

Scalable Second-Order Optimization for Deep Learning.
Rohan Anil, Vineet Gupta, Tomer Koren, Kevin Regan, Yoram Singer.
Manuscript; appeared in NeurIPS’19 Workshop on “Beyond First Order Methods in ML”
[arXiv]

A Unified Approach to Adaptive Regularization in Online and Stochastic Optimization.
Vineet Gupta, Tomer Koren, Yoram Singer.
Manuscript, 2017
[arXiv]

Open Problem: Fast Stochastic Exp-Concave Optimization.
Tomer Koren.
COLT 2013
[pdf]