At this year’s ICML, some interesting work was presented on Neural Processes. In this blog post, I discuss what Neural Processes are and how they behave as a prior over functions.
I recently finished my PhD in Statistical Machine Learning at the University of Oxford, as part of the OxCSML group in the Department of Statistics, where I was supervised by Christopher Yau and Chris Holmes. I currently share my time between the Alan Turing Institute (where I am a postdoctoral Research Fellow and a recipient of the Turing-Crick Biomedical Data Science Award) and Apple Health AI.
I have a broad interest in developing probabilistic machine learning methods (such as Deep Generative Models) with a focus on tackling real-world problems in genomics and healthcare.
My interests range from the intersection of Bayesian inference and deep learning (such as incorporating prior knowledge and structure into deep neural networks, and multi-task/meta-learning) to research on trustworthy ML for biomedical applications.
During my PhD, I developed extensions of Gaussian Process Latent Variable Models and Variational Autoencoders with the goal of enabling certain notions of feature-level interpretability in the analysis of high-dimensional tabular data. More recently, I have been working on the identification of rare clusters, as well as on developing targeted data augmentations to ensure fairness across demographic subpopulations.
R package implementing the Polya-Gamma augmentation scheme
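The package itself is in R, but the core idea can be illustrated in a short, self-contained Python sketch. The Pólya-Gamma distribution PG(b, c) admits an infinite-sum-of-Gammas representation, ω = (1/2π²) Σₖ gₖ / ((k − 1/2)² + c²/(4π²)) with gₖ ~ Gamma(b, 1); truncating this series gives an approximate sampler (production implementations use the exact Devroye-type sampler instead). The function name and truncation level below are illustrative, not part of the package's API.

```python
import numpy as np

def sample_pg(b, c, size, n_terms=200, rng=None):
    """Approximate draws from the Polya-Gamma PG(b, c) distribution via its
    infinite-sum-of-Gammas representation, truncated to `n_terms` terms.
    This is a sketch for illustration; exact samplers use the Devroye method."""
    rng = np.random.default_rng(rng)
    k = np.arange(1, n_terms + 1)                            # series index k = 1..K
    denom = (k - 0.5) ** 2 + c ** 2 / (4.0 * np.pi ** 2)     # (k - 1/2)^2 + c^2 / (4 pi^2)
    g = rng.gamma(shape=b, scale=1.0, size=(size, n_terms))  # g_k ~ Gamma(b, 1)
    return (g / denom).sum(axis=1) / (2.0 * np.pi ** 2)

# Sanity check against the known mean E[PG(b, c)] = (b / (2c)) * tanh(c / 2)
omega = sample_pg(b=1.0, c=1.0, size=50_000, rng=0)
expected = np.tanh(0.5) / 2.0
```

In Bayesian logistic models, conditioning on such ω draws makes the likelihood conditionally Gaussian in the regression coefficients, which is what the augmentation scheme exploits for Gibbs sampling.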
Here you can find course material (in Estonian!) on Data Science and Visualisation, which I created together with Tanel Pärnamaa. The course, “Statistiline andmeteadus ja visualiseerimine” (“Statistical Data Science and Visualisation”), is centered around a number of interesting case studies and teaches good practices of data science in R by applying statistical methods to these real-life problems.