Optimisation methods play a central role in machine learning. The high dimensionality of machine learning models and the large volume of training data introduce a variety of challenges, both from the
perspective of fundamental optimisation methodology and from that of distributed computation. In this talk, I will present techniques that allow us to accelerate the training of machine learning models in distributed computing
systems, and to approximately solve certain classes of submodular optimisation problems using simple surrogate functions. In both problems, the key idea is to combine lossy data compression with optimisation. Time permitting, I will also briefly discuss some recent results and open
questions that arise in online decision making under uncertainty, statistical relational learning, and inverse problems for stochastic processes on graphs.
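The abstract does not specify which compression scheme is used, but a common instance of combining lossy compression with distributed optimisation is top-k gradient sparsification, where each worker transmits only the largest-magnitude entries of its local gradient. The following is a minimal illustrative sketch, not the speaker's actual method:

```python
import numpy as np

def topk_compress(grad, k):
    """Lossy compression: keep only the k largest-magnitude entries."""
    idx = np.argsort(np.abs(grad))[-k:]  # indices of the k largest entries
    values = grad[idx]
    return idx, values                   # compact representation: O(k) numbers

def topk_decompress(idx, values, dim):
    """Reconstruct a dense gradient vector from the compressed form."""
    out = np.zeros(dim)
    out[idx] = values
    return out

# Each worker would compress its local gradient before communication,
# reducing the bytes sent per round from O(dim) to O(k).
rng = np.random.default_rng(0)
dim, k = 1000, 50
grad = rng.normal(size=dim)
idx, values = topk_compress(grad, k)
approx = topk_decompress(idx, values, dim)
# Despite 20x compression, much of the gradient's energy is retained.
ratio = np.linalg.norm(approx) / np.linalg.norm(grad)
```

The design trade-off is exactly the one the talk alludes to: the compressed gradient is a lossy surrogate for the true gradient, and the optimisation analysis must account for the resulting approximation error.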