
Omar Rivasplata
Senior Research Fellow, Department of Statistical Science, University College London
Noteworthy News ‖ CV ‖ Email ‖ Math Genealogy
Algorithmic Learning Theory. Machine Learning. Mathematics. Probability and Statistics.

I work at the Department of Statistical Science at UCL, where I lead the DELTA group.
My department is one of the two constituent departments of the upcoming Institute for Mathematical and Statistical Sciences, the other being the Department of Mathematics.
My outstanding colleagues include Ioanna Manolopoulou, Codina Cotar, Sam Livingstone, Brieuc Lehmann, FX Briol, Ricardo Silva, Serge Guillas, Gianluca Baio, and many others from my home department (Statistical Science), as well as Andrea Macrina, Timo Betcke, Helen Wilson, and many others from our sister department (Mathematics). I also have close connections and collaborations with outstanding colleagues across UCL, including the Centre for Artificial Intelligence, the Advanced Research Computing centre, and the Information, Inference and Machine Learning group. I am a member of the European Laboratory for Learning and Intelligent Systems and the London Mathematical Society, and a fellow of the Royal Statistical Society and the Institute of Mathematics and its Applications.
My work is in machine learning research. This field is fascinating! One of the things I enjoy most about it is the confluence of maths and stats with computer science experiments to answer research questions. Besides statistical learning, I am also interested in other learning frameworks such as online learning and reinforcement learning, and of course deep learning, which is quite popular these days. It seems that optimization is a pervasive theme across machine learning theory and practice, though it comes up in such a variety of flavours and colours that it never gets boring. It reminds me of the least action principle of Maupertuis, which says that "everything happens as if some quantity were to be made as small as possible." (This principle has led the optimists to believe that we live in the best possible world.) But optimization alone doesn't quite do it for machine learning... to really talk about learning, one has to pay attention to generalization!
I did research studies in statistical machine learning at the Department of Computer Science, University College London, sponsored by DeepMind. In parallel with these studies, I was affiliated with DeepMind as a research scientist intern for three years.
Outstanding ML/AI people with whom I have worked at UCL Computer Science include David Barber, Marc Deisenroth, Mark Herbster, María Pérez-Ortiz, and John Shawe-Taylor.
Outstanding ML/AI people with whom I have worked at DeepMind include Marcus Hutter, Laurent Orseau, Ilja Kuzborskij, Csaba Szepesvári, András György from my own team; and many others from friend teams, including Amal Rannen-Triki, Razvan Pascanu, Agnieszka Grabska-Barwińska, Thore Graepel, Arnaud Doucet, Benjamin Van Roy, and Geoffrey Irving.
I spent a year at the Department of Computing Science, University of Alberta. During this time I started building my mental model of the machine learning field and fine-tuning my hyperparameters. My host was Rich Sutton in theory, but in practice I was developing my research plans together with Csaba Szepesvári.

I spent some time with Mauricio Sacchi's group looking at problems related to seismic signal analysis. Before that I worked with Sasha Litvak and Nicole Tomczak-Jaegermann on the smallest singular value of a sparse random matrix, using methods from geometric functional analysis and probability. Even before that I worked with Byron Schmuland on reversibility of a Brownian motion with drift. As an undergrad, with Loretta Gasco I worked on a fun project about repeated two-player games with incomplete information on one side.

 Tighter risk certificates for (probabilistic) neural networks.
UCL Centre for AI.
Slides
Video
 Statistical Learning Theory: A Hitchhiker's Guide.
NeurIPS 2018 Tutorial. (with J. Shawe-Taylor)
Slides
Video
Conference & Journal Papers 
 I. Kuzborskij, Cs. Szepesvári, O. Rivasplata, A. Rannen-Triki, R. Pascanu,
On the Role of Optimization in Double Descent: A Least Squares Study.
NeurIPS 2021.
arXiv PDF
 M. Pérez-Ortiz, O. Rivasplata, J. Shawe-Taylor, Cs. Szepesvári,
Tighter risk certificates for neural networks.
JMLR, 22, 227 (2021), 1–40.
PDF /
revised PDF /
published PDF
 L. Orseau, M. Hutter, O. Rivasplata,
Logarithmic pruning is all you need.
NeurIPS 2020.
PDF
 O. Rivasplata, I. Kuzborskij, Cs. Szepesvári, J. Shawe-Taylor,
PAC-Bayes analysis beyond the usual bounds.
NeurIPS 2020.
PDF
 O. Rivasplata, E. Parrado-Hernández, J. Shawe-Taylor, S. Sun, Cs. Szepesvári,
PAC-Bayes bounds for stable algorithms with instance-dependent priors.
NeurIPS 2018.
PDF
 A.E. Litvak, O. Rivasplata,
Smallest singular value of sparse random matrices.
Studia Math., 212, 3 (2012), 195–218.
PDF
 O. Rivasplata, J. Rychtar, B. Schmuland,
Reversibility for diffusions via quasiinvariance.
Acta Univ. Carolin. Math. Phys., 48, 1 (2007), 3–10.
PDF
 O. Rivasplata, J. Rychtar, C. Sykes,
Evolutionary games in finite populations.
Pro Mathematica, 20, 39/40 (2006), 147–164.
PDF
 O. Rivasplata, B. Schmuland,
Invariant and reversible measures for random walks on Z.
Pro Mathematica, 19, 37/38 (2005), 117–124.
PDF
Preprints & Notes 
 O. Rivasplata,
A note on a confidence bound of Kuzborskij and Szepesvári.
(2021)
PDF
 O. Rivasplata,
Subgaussian random variables: An expository note.
(2012)
PDF
 O. Rivasplata, V. Tankasali, Cs. Szepesvári,
PAC-Bayes with Backprop.
(2019)
PDF
Probability Links (probably accessible) 
Stats Links (most significant) 
Peruvian Links 
My birth town is Trujillo, the marinera dance town. 
Sometimes people ask me about Machu Picchu; it's a great place to see. 
They ask me less about Arequipa, though it is also a great place to visit. 
The last link, in case you care to know, is about Pisco. 

