At this year’s ICML, some interesting work on Neural Processes was presented. In this blog post, I discuss what Neural Processes are and how they behave as a prior over functions.
I have recently finished my PhD in Statistical Machine Learning at the University of Oxford, as part of the OxCSML group in the Department of Statistics, where I was supervised by Christopher Yau and Chris Holmes. I currently split my time between the Alan Turing Institute (where I am a postdoctoral research fellow and a recipient of the Turing-Crick Biomedical Data Science Award) and Apple Health AI.
I have a broad interest in statistical machine learning and Bayesian inference techniques, with a focus on tackling real problems in genomics and healthcare. In particular, I have worked on various non-linear latent variable models (such as Gaussian Process Latent Variable Models and Variational Autoencoders) with the goal of enabling certain notions of feature-level interpretability.
In general, I am excited about research at the intersection of probabilistic modelling and deep learning, where, on the one hand, building blocks from the deep learning toolbox have led to advances in Bayesian inference, and, on the other hand, embedding Bayesian concepts within neural network models can bring benefits such as uncertainty quantification.
Prior to my PhD, I completed my BSc and MSc at the University of Tartu in Estonia. There I was part of the BIIT research group at the Institute of Computer Science, where I worked on statistical modelling in genomics under the supervision of Raivo Kolde and Leopold Parts.
R package implementing the Polya-Gamma augmentation scheme
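To give a flavour of what Pólya-Gamma augmentation builds on (the package itself is in R; this is just a minimal illustrative Python sketch, not its implementation): a PG(b, c) random variable can be written as an infinite weighted sum of Gamma(b, 1) variables, and its mean is b/(2c) · tanh(c/2), the identity that makes Gibbs sampling for logistic-type likelihoods tractable. The snippet below draws approximate PG samples by truncating that series and checks the mean empirically; the function name and truncation level K are my own choices for this sketch.

```python
import numpy as np

def sample_pg(b, c, n_samples, K=1000, rng=None):
    """Approximate PG(b, c) draws via the truncated series representation
    PG(b, c) = 1/(2*pi^2) * sum_{k>=1} g_k / ((k - 1/2)^2 + c^2/(4*pi^2)),
    where g_k ~ Gamma(b, 1) independently. Truncating at K terms gives an
    approximation whose error vanishes as K grows.
    """
    rng = np.random.default_rng(rng)
    k = np.arange(1, K + 1)
    denom = (k - 0.5) ** 2 + c ** 2 / (4 * np.pi ** 2)      # series weights, shape (K,)
    g = rng.gamma(shape=b, scale=1.0, size=(n_samples, K))  # Gamma(b, 1) draws
    return (g / denom).sum(axis=1) / (2 * np.pi ** 2)

# The mean of PG(b, c) is b/(2c) * tanh(c/2); check it on simulated draws.
b, c = 1.0, 1.0
samples = sample_pg(b, c, n_samples=20000, rng=0)
theory = b / (2 * c) * np.tanh(c / 2)
print(samples.mean(), theory)
```

In the augmentation scheme itself, draws like these serve as auxiliary variables that render the conditional posterior of logistic-regression coefficients Gaussian.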
Here you can find course material (in Estonian!) on Data Science and Visualisation, which I created together with Tanel Pärnamaa. This course, “Statistiline andmeteadus ja visualiseerimine” (Statistical Data Science and Visualisation), is centered around a number of interesting case studies, and it focuses on teaching good data science practices in R by applying statistical methods to solve these real-life problems.