Provable representation learning in deep learning

With Jason Lee (Princeton University)

Deep representation learning seeks to learn a data representation that transfers to downstream tasks. In this talk, we study two forms of representation learning: supervised pre-training and self-supervised learning.

Supervised pre-training uses a large labeled source dataset to learn a representation, then trains a classifier on top of the representation. We prove that supervised pre-training can pool the data from all source tasks to learn a good representation which transfers to downstream tasks with few labeled examples.
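Below is a minimal sketch of this two-stage pipeline, not the speaker's actual setup: a shared encoder is pre-trained on pooled data from several labeled source tasks, then frozen, and a linear classifier is fit on a handful of labeled downstream examples. The dimensions, task counts, and synthetic data are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Illustrative sizes (assumptions, not from the talk).
d, k, n_tasks, n_src, n_down, num_classes = 32, 8, 5, 200, 10, 3

# Shared encoder phi: R^d -> R^k, plus one linear head per source task.
encoder = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, k))
heads = nn.ModuleList([nn.Linear(k, num_classes) for _ in range(n_tasks)])

# Synthetic pooled source data: (inputs, labels) for each source task.
source_data = [
    (torch.randn(n_src, d), torch.randint(0, num_classes, (n_src,)))
    for _ in range(n_tasks)
]

# Stage 1: supervised pre-training on the pooled source tasks.
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(heads.parameters()), lr=1e-3
)
loss_fn = nn.CrossEntropyLoss()
for _ in range(100):
    opt.zero_grad()
    loss = sum(
        loss_fn(heads[t](encoder(x)), y) for t, (x, y) in enumerate(source_data)
    )
    loss.backward()
    opt.step()

# Stage 2: freeze the representation and fit a new linear classifier
# on only a few labeled downstream examples.
for p in encoder.parameters():
    p.requires_grad_(False)
x_down = torch.randn(n_down, d)
y_down = torch.randint(0, num_classes, (n_down,))
probe = nn.Linear(k, num_classes)
probe_opt = torch.optim.Adam(probe.parameters(), lr=1e-2)
for _ in range(200):
    probe_opt.zero_grad()
    probe_loss = loss_fn(probe(encoder(x_down)), y_down)
    probe_loss.backward()
    probe_opt.step()
```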

Self-supervised learning creates auxiliary pretext tasks that do not require labeled data to learn representations. These pretext tasks are created solely using the input features, such as predicting a missing image patch, recovering the colour channels of an image, or predicting missing words. Surprisingly, predicting this known information helps in learning a representation effective for downstream tasks. We prove that, under a conditional independence assumption, self-supervised learning learns representations that are effective for downstream tasks.
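The sketch below illustrates this setting under assumptions of my own: each input is split into an observed part x1 and a held-out part x2 (standing in for a masked patch or missing words), a network is trained on the pretext task of predicting x2 from x1, and the resulting representation is then frozen and probed with a linear classifier on a small labeled downstream set. It is a toy illustration of the pipeline, not the analysis presented in the talk.

```python
import torch
import torch.nn as nn

# Illustrative setup (assumptions): x1 is the observed part of the input,
# x2 the held-out part predicted by the pretext task; labels y are used
# only downstream. The conditional independence assumption concerns x1
# and x2 given the downstream label.
d1, d2, k, n, n_down, num_classes = 24, 8, 8, 500, 20, 3
x1 = torch.randn(n, d1)
x2 = torch.randn(n, d2)

# Pretext model: representation psi(x1) followed by a linear predictor of x2.
psi = nn.Sequential(nn.Linear(d1, 64), nn.ReLU(), nn.Linear(64, k))
predictor = nn.Linear(k, d2)

# Self-supervised stage: predict the known-but-held-out part x2 from x1.
opt = torch.optim.Adam(
    list(psi.parameters()) + list(predictor.parameters()), lr=1e-3
)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(predictor(psi(x1)), x2)
    loss.backward()
    opt.step()

# Downstream stage: freeze psi and fit a linear probe on a few labels.
for p in psi.parameters():
    p.requires_grad_(False)
x_down = torch.randn(n_down, d1)
y_down = torch.randint(0, num_classes, (n_down,))
probe = nn.Linear(k, num_classes)
probe_opt = torch.optim.Adam(probe.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()
for _ in range(200):
    probe_opt.zero_grad()
    probe_loss = loss_fn(probe(psi(x_down)), y_down)
    probe_loss.backward()
    probe_opt.step()
```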
