About

I have been an Assistant Professor (equivalent to Senior Lecturer) in the Blavatnik School of Computer Science at Tel Aviv University since Fall 2019. Previously, I was a Senior Research Scientist at Google Brain, Mountain View. I received my PhD in 2017 from the Technion—Israel Institute of Technology, where my advisor was Prof. Elad Hazan.

My research interests are in machine learning and optimization.


Preprints

Regret Minimization and Convergence to Equilibria in General-sum Markov Games.
Liad Erez, Tal Lancewicki, Uri Sherman, Tomer Koren, Yishay Mansour.
[arXiv]

Benign Underfitting of Stochastic Gradient Descent.
Tomer Koren, Roi Livni, Yishay Mansour, Uri Sherman.
[arXiv]

Rate-Optimal Online Convex Optimization in Adaptive Linear Control.
Asaf Cassel, Alon Cohen, Tomer Koren.
[arXiv]

Private Online Prediction from Experts: Separations and Faster Rates.
Hilal Asi, Vitaly Feldman, Tomer Koren, Kunal Talwar.

Better Best-of-Both-Worlds Bounds for Bandits with Switching Costs.
Idan Amir, Guy Azov, Tomer Koren, Roi Livni.
[arXiv]

Dueling Convex Optimization with General Preferences.
Aadirupa Saha, Tomer Koren, Yishay Mansour.
[arXiv]

Multiplicative Reweighting for Robust Neural Network Optimization.
Noga Bar, Raja Giryes, Tomer Koren.
[arXiv]


Publications

Stability vs Implicit Bias of Gradient Methods on Separable Data and Beyond.
Matan Schliserman, Tomer Koren.
COLT 2022
[arXiv]

Efficient Online Linear Control with Stochastic Convex Costs and Unknown Dynamics.
Asaf Cassel, Alon Cohen, Tomer Koren.
COLT 2022
[arXiv]

Uniform Stability for First-Order Empirical Risk Minimization.
Amit Attia, Tomer Koren.
COLT 2022
[arXiv]

Best-of-All-Worlds Bounds for Online Learning with Feedback Graphs.
Liad Erez, Tomer Koren.
NeurIPS 2021
[arXiv]

Optimal Rates for Random Order Online Optimization.
Uri Sherman, Tomer Koren, Yishay Mansour.
NeurIPS 2021 (Oral)
[arXiv]

Never Go Full Batch (in Stochastic Convex Optimization).
Idan Amir, Yair Carmon, Tomer Koren, Roi Livni.
NeurIPS 2021
[arXiv]

Asynchronous Stochastic Optimization Robust to Arbitrary Delays.
Alon Cohen, Amit Daniely, Yoel Drori, Tomer Koren, Mariano Schain.
NeurIPS 2021
[arXiv]

Algorithmic Instabilities of Accelerated Gradient Descent.
Amit Attia, Tomer Koren.
NeurIPS 2021
[arXiv]

SGD Generalizes Better Than GD (And Regularization Doesn’t Help).
Idan Amir, Tomer Koren, Roi Livni.
COLT 2021
[arXiv]

Lazy OCO: Online Convex Optimization on a Switching Budget.
Uri Sherman, Tomer Koren.
COLT 2021
[arXiv]

Online Markov Decision Processes with Aggregate Bandit Feedback.
Alon Cohen, Haim Kaplan, Tomer Koren, Yishay Mansour.
COLT 2021
[arXiv]

Private Stochastic Convex Optimization: Optimal Rates in L1 Geometry.
Hilal Asi, Vitaly Feldman, Tomer Koren, Kunal Talwar.
ICML 2021
[arXiv]

Online Policy Gradient for Model Free Learning of Linear Quadratic Regulators with $\sqrt{T}$ Regret.
Asaf Cassel, Tomer Koren.
ICML 2021
[arXiv]

Adversarial Dueling Bandits.
Aadirupa Saha, Tomer Koren, Yishay Mansour.
ICML 2021
[arXiv]

Dueling Convex Optimization.
Aadirupa Saha, Tomer Koren, Yishay Mansour.
ICML 2021

Stochastic Multi-Armed Bandits with Unrestricted Delay Distributions.
Tal Lancewicki, Shahar Segal, Tomer Koren, Yishay Mansour.
ICML 2021
[arXiv]

Bandit Linear Control.
Asaf Cassel, Tomer Koren.
NeurIPS 2020 (Spotlight)
[arXiv]

Stochastic Optimization for Laggard Data Pipelines.
Naman Agarwal, Rohan Anil, Tomer Koren, Kunal Talwar, Cyril Zhang.
NeurIPS 2020
[arXiv]

Can Implicit Bias Explain Generalization? Stochastic Convex Optimization as a Case Study.
Assaf Dauber, Meir Feder, Tomer Koren, Roi Livni.
NeurIPS 2020
[arXiv]

Prediction with Corrupted Expert Advice.
Idan Amir, Idan Attias, Tomer Koren, Roi Livni, Yishay Mansour.
NeurIPS 2020 (Spotlight)
[arXiv]

Logarithmic Regret for Learning Linear Quadratic Regulators Efficiently.
Asaf Cassel, Alon Cohen, Tomer Koren.
ICML 2020
[arXiv]

Private Stochastic Convex Optimization: Optimal Rates in Linear Time.
Vitaly Feldman, Tomer Koren, Kunal Talwar.
STOC 2020; preliminary version in NeurIPS’19 Workshop on “Privacy in Machine Learning” (PriML’19)
[arXiv]

Memory-Efficient Adaptive Optimization.
Rohan Anil, Vineet Gupta, Tomer Koren, Yoram Singer.
NeurIPS 2019
[arXiv]

Robust Bi-Tempered Logistic Loss Based on Bregman Divergences.
Ehsan Amid, Manfred K. Warmuth, Rohan Anil, Tomer Koren.
NeurIPS 2019
[arXiv]

Better Algorithms for Stochastic Bandits with Adversarial Corruptions.
Anupam Gupta, Tomer Koren, Kunal Talwar.
COLT 2019
[arXiv]

Learning Linear-Quadratic Regulators Efficiently with only $\sqrt{T}$ Regret.
Alon Cohen, Tomer Koren, Yishay Mansour.
ICML 2019
[arXiv]

Semi-Cyclic Stochastic Gradient Descent.
Hubert Eichner, Tomer Koren, Brendan McMahan, Nathan Srebro, Kunal Talwar.
ICML 2019
[arXiv]

Online Linear-Quadratic Control.
Alon Cohen, Avinatan Hassidim, Tomer Koren, Nevena Lazic, Yishay Mansour, Kunal Talwar.
ICML 2018
[arXiv]

Shampoo: Preconditioned Stochastic Tensor Optimization.
Vineet Gupta, Tomer Koren, Yoram Singer.
ICML 2018
[arXiv]

Multi-Armed Bandits with Metric Movement Costs.
Tomer Koren, Roi Livni, Yishay Mansour.
NIPS 2017
[arXiv]

Affine-Invariant Online Optimization and the Low-rank Experts Problem.
Tomer Koren, Roi Livni.
NIPS 2017
[pdf]

Bandits with Movement Costs and Adaptive Pricing.
Tomer Koren, Roi Livni, Yishay Mansour.
COLT 2017
[arXiv]

Tight Bounds for Bandit Combinatorial Optimization.
Alon Cohen, Tamir Hazan, Tomer Koren.
COLT 2017
[arXiv]

The Limits of Learning with Missing Data.
Brian Bullins, Elad Hazan, Tomer Koren.
NIPS 2016
[pdf]

Online Pricing With Strategic and Patient Buyers.
Michal Feldman, Tomer Koren, Roi Livni, Yishay Mansour, Aviv Zohar.
NIPS 2016
[pdf]

Online Learning with Feedback Graphs Without the Graphs.
Alon Cohen, Tamir Hazan, Tomer Koren.
ICML 2016
[arXiv]

Online Learning with Low Rank Experts.
Elad Hazan, Tomer Koren, Roi Livni, Yishay Mansour.
COLT 2016
[arXiv]

The Computational Power of Optimization in Online Learning.
Elad Hazan, Tomer Koren.
STOC 2016
[arXiv]

A Linear-Time Algorithm for Trust Region Problems.
Elad Hazan, Tomer Koren.
Mathematical Programming, 158(1-2): 363-381, 2016
[arXiv]

Fast Rates for Exp-concave Empirical Risk Minimization.
Tomer Koren, Kfir Levy.
NIPS 2015
[pdf]

Bandit Smooth Convex Optimization: Improving the Bias-Variance Tradeoff.
Ofer Dekel, Ronen Eldan, Tomer Koren.
NIPS 2015 (Spotlight)
[pdf]

Bandit Convex Optimization: $\sqrt{T}$ Regret in One Dimension.
Sébastien Bubeck, Ofer Dekel, Tomer Koren, Yuval Peres.
COLT 2015
[arXiv]

Online Learning with Feedback Graphs: Beyond Bandits.
Noga Alon, Nicolò Cesa-Bianchi, Ofer Dekel, Tomer Koren.
COLT 2015
[arXiv]

Oracle-Based Robust Optimization via Online Learning.
Aharon Ben-Tal, Elad Hazan, Tomer Koren, Shie Mannor.
Operations Research, 63(3), 628-638, 2015
[arXiv]

The Blinded Bandit: Learning with Adaptive Feedback.
Ofer Dekel, Elad Hazan, Tomer Koren.
NIPS 2014
[pdf] [full]

Chasing Ghosts: Competing with Stateful Policies.
Uriel Feige, Tomer Koren, Moshe Tennenholtz.
FOCS 2014 (Invited to SICOMP)
[arXiv]

Logistic Regression: Tight Bounds for Stochastic and Online Optimization.
Elad Hazan, Tomer Koren, Kfir Levy.
COLT 2014
[arXiv]

Online Learning with Composite Loss Functions.
Ofer Dekel, Jian Ding, Tomer Koren, Yuval Peres.
COLT 2014
[arXiv]

Bandits with Switching Costs: $T^{2/3}$ Regret.
Ofer Dekel, Jian Ding, Tomer Koren, Yuval Peres.
STOC 2014
[arXiv]

Distributed Exploration in Multi-Armed Bandits.
Eshcar Hillel, Zohar Karnin, Tomer Koren, Ronny Lempel, Oren Somekh.
NIPS 2013 (Spotlight)
[arXiv]

Almost Optimal Exploration in Multi-Armed Bandits.
Zohar Karnin, Tomer Koren, Oren Somekh.
ICML 2013
[pdf]

Linear Regression with Limited Observation.
Elad Hazan, Tomer Koren.
ICML 2012 (Best Student Paper Runner-up)
[arXiv]

Supervised System Identification Based on Local PCA Models.
Tomer Koren, Ronen Talmon, Israel Cohen.
ICASSP 2012
[pdf]

Beating SGD: Learning SVMs in Sublinear Time.
Elad Hazan, Tomer Koren, Nathan Srebro.
NIPS 2011
[pdf] [full]


Technical Reports and Open Problems

Open Problem: Tight Convergence of SGD in Constant Dimension.
Tomer Koren, Shahar Segal.
COLT 2020
[pdf]

Disentangling Adaptive Gradient Methods from Learning Rates.
Naman Agarwal, Rohan Anil, Elad Hazan, Tomer Koren, Cyril Zhang.
Manuscript; appeared in OPT2019
[arXiv]

Scalable Second-Order Optimization for Deep Learning.
Rohan Anil, Vineet Gupta, Tomer Koren, Kevin Regan, Yoram Singer.
Manuscript; appeared in NeurIPS’19 Workshop on “Beyond First Order Methods in ML”
[arXiv]

A Unified Approach to Adaptive Regularization in Online and Stochastic Optimization.
Vineet Gupta, Tomer Koren, Yoram Singer.
Manuscript, 2017
[arXiv]

Open Problem: Fast Stochastic Exp-Concave Optimization.
Tomer Koren.
COLT 2013
[pdf]