
CM20315 - Machine Learning

Prof. Simon Prince, Dr. Georgios Exarchakis and Dr. Andrew Barnes
1. Introduction

This is a VERY large lecture theatre. Please leave the back five rows empty!
Semester 1
Book
http://udlbook.com
Supervised learning
• Define a mapping from input to output
• Learn this mapping from paired input/output data examples
Regression

• Univariate regression problem (one output, real value)


• Fully connected network
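A minimal sketch of such a network in PyTorch (the layer widths are arbitrary choices for illustration, not values from the course):

```python
# Hypothetical sketch: a fully connected network for univariate regression.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(1, 32),   # one real-valued input -> 32 hidden units
    nn.ReLU(),
    nn.Linear(32, 32),
    nn.ReLU(),
    nn.Linear(32, 1),   # 32 hidden units -> one real-valued output
)

x = torch.randn(8, 1)   # batch of 8 scalar inputs
y_hat = model(x)        # batch of 8 scalar predictions, shape (8, 1)
```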
Graph regression

• Multivariate regression problem (>1 output, real value)


• Graph neural network
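Graph regression is usually built with a dedicated library (e.g., PyTorch Geometric), but one message-passing step can be sketched in plain PyTorch. The 5-node graph, edge, and feature sizes below are all made up for illustration:

```python
# Hypothetical sketch: one message-passing step on a tiny 5-node graph.
import torch
import torch.nn as nn

n_nodes, in_dim, out_dim = 5, 8, 3          # made-up sizes; out_dim > 1 => multivariate
A = torch.eye(n_nodes)                      # adjacency with self-loops
A[0, 1] = A[1, 0] = 1.0                     # one made-up edge between nodes 0 and 1
A = A / A.sum(dim=1, keepdim=True)          # row-normalize: average over neighbours

X = torch.randn(n_nodes, in_dim)            # one feature vector per node
layer = nn.Linear(in_dim, 16)
head = nn.Linear(16, out_dim)

H = torch.relu(layer(A @ X))                # aggregate neighbour features, then transform
y_hat = head((A @ H).mean(dim=0))           # pool over nodes -> out_dim regressed values
```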
Text classification

• Binary classification problem (two discrete classes)


• Transformer network
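A hedged sketch of this pattern using PyTorch's built-in transformer encoder; the vocabulary size, model width, and sequence length are placeholder values:

```python
# Hypothetical sketch: transformer encoder for binary text classification.
import torch
import torch.nn as nn

vocab_size, d_model = 1000, 64              # placeholder sizes

embed = nn.Embedding(vocab_size, d_model)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True),
    num_layers=2,
)
classify = nn.Linear(d_model, 2)            # two discrete classes

tokens = torch.randint(0, vocab_size, (8, 20))   # 8 sequences of 20 token ids
h = encoder(embed(tokens))                       # contextual representation per token
logits = classify(h.mean(dim=1))                 # pool over tokens -> 2 class scores
```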
Music genre classification

• Multiclass classification problem (discrete classes, >2 possible values)


• Recurrent neural network (RNN)
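One plausible shape for this model, sketched with a GRU; the 13 features per audio frame and the 10 genres are assumptions for illustration:

```python
# Hypothetical sketch: a GRU classifier over per-frame audio features.
import torch
import torch.nn as nn

num_genres = 10                                   # assumed number of classes
rnn = nn.GRU(input_size=13, hidden_size=64,       # 13 features per frame is an assumption
             batch_first=True)
head = nn.Linear(64, num_genres)

clips = torch.randn(4, 100, 13)                   # 4 clips, 100 frames each
_, h_final = rnn(clips)                           # final hidden state summarizes each clip
logits = head(h_final.squeeze(0))                 # one score per genre, shape (4, 10)
```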
Image classification

• Multiclass classification problem (discrete classes, >2 possible classes)


• Convolutional network
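A minimal convolutional classifier sketch (filter counts and the 10-class output are arbitrary):

```python
# Hypothetical sketch: a small convolutional network for image classification.
import torch
import torch.nn as nn

num_classes = 10                                  # arbitrary
cnn = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # RGB image -> 16 feature maps
    nn.ReLU(),
    nn.MaxPool2d(2),                              # halve spatial resolution
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),                      # global average pool to 1x1
    nn.Flatten(),
    nn.Linear(32, num_classes),                   # one logit per class
)

images = torch.randn(4, 3, 64, 64)                # batch of 4 RGB images
logits = cnn(images)                              # shape (4, num_classes)
```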
What is a supervised learning model?

• An equation relating input (age) to output (height)


• Search through family of possible equations to find one that fits training data well
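For example, with the simplest family of equations, a straight line height = a · age + b, the "search" is just least squares. The data below are hypothetical:

```python
# Hypothetical data: input age (years), output height (cm).
import numpy as np

age    = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
height = np.array([85.0, 100.0, 115.0, 128.0, 138.0])

# "Search through the family" height = a * age + b: least squares finds
# the member of the family that best fits the training data.
a, b = np.polyfit(age, height, deg=1)
print(f"height ~ {a:.1f} * age + {b:.1f}")
```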
What is a supervised learning model?

• Deep neural networks are just a very flexible family of equations


• Fitting deep neural networks = “Deep Learning”
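A sketch of that fitting process: gradient descent repeatedly nudges the network's parameters to reduce a loss that measures misfit. The sin-curve data here are a toy stand-in:

```python
# Toy sketch of "fitting": gradient descent on a small network.
import torch
import torch.nn as nn

x = torch.linspace(-3, 3, 200).unsqueeze(1)       # made-up inputs
y = torch.sin(x) + 0.1 * torch.randn_like(x)      # made-up noisy targets

model = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)
loss_fn = nn.MSELoss()

for step in range(1000):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)   # how badly the current equation fits
    loss.backward()               # gradient of the loss w.r.t. every parameter
    optimizer.step()              # adjust parameters to fit a little better
```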
Image segmentation

• Multivariate binary classification problem (many outputs, two discrete classes)


• Convolutional encoder-decoder network
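The same encoder-decoder pattern reappears in the next two slides. A minimal sketch (all sizes arbitrary) of how the encoder shrinks the image and the decoder expands it back to one prediction per pixel:

```python
# Hypothetical sketch: convolutional encoder-decoder for per-pixel prediction.
import torch
import torch.nn as nn

encoder = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1),   # 64x64 -> 32x32
    nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1),  # 32x32 -> 16x16
    nn.ReLU(),
)
decoder = nn.Sequential(
    nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1),  # 16x16 -> 32x32
    nn.ReLU(),
    nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),   # 32x32 -> 64x64
)

image = torch.randn(1, 3, 64, 64)
per_pixel_logits = decoder(encoder(image))      # shape (1, 1, 64, 64): one logit per pixel
```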
Depth estimation

• Multivariate regression problem (many outputs, continuous)


• Convolutional encoder-decoder network
Pose estimation

• Multivariate regression problem (many outputs, continuous)


• Convolutional encoder-decoder network
Terms
• Regression = continuous numbers as output
• Classification = discrete classes as output
• Two class and multiclass classification treated differently
• Univariate = one output
• Multivariate = more than one output
Translation
Image captioning
Image generation from text
What do these examples have in common?
• Very complex relationship between input and output
• Sometimes there may be many possible valid answers
• But outputs (and sometimes inputs) obey rules

• Language obeys grammatical rules
• Natural images also have “rules”
Idea
• Learn the “grammar” of the data from unlabeled examples
• Can use a gargantuan amount of data to do this (since no labels are needed)
• Make the supervised learning task easier by having a lot of knowledge of possible outputs
Unsupervised Learning
• Learning about a dataset without labels
• Clustering
• Finding outliers
• Generating new examples
• Filling in missing data
DeepCluster: Deep Clustering for Unsupervised Learning of Visual Features (Caron et al., 2018)
Unsupervised Learning
• Learning about a dataset without labels
• e.g., clustering
• Generative models can create examples
• e.g., generative adversarial networks
• Probabilistic generative models (PGMs) learn a distribution over the data
• e.g., variational autoencoders
• e.g., normalizing flows
• e.g., diffusion models
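A hedged sketch of the generative direction shared by these latent-variable models: sample a latent vector from a simple distribution and decode it into a new example. The sizes below are arbitrary and the decoder is an untrained stand-in:

```python
# Hypothetical sketch: the generative direction of a latent-variable model.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 784                  # assumed: 28x28 grayscale images

decoder = nn.Sequential(                        # untrained stand-in for a learned decoder
    nn.Linear(latent_dim, 128),
    nn.ReLU(),
    nn.Linear(128, data_dim),
    nn.Sigmoid(),                               # pixel intensities in [0, 1]
)

z = torch.randn(1, latent_dim)                  # sample from the latent prior N(0, I)
x_new = decoder(z)                              # a generated example
```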
Generative models
Latent variables
Why should this work?
Interpolation
Conditional synthesis
Reinforcement learning
• A set of states
• A set of actions
• A set of rewards

• Goal: take actions to change the state so that you receive rewards

• You don’t receive any data – you have to explore the environment yourself to gather data as you go
Example: chess
• States are valid states of the chess board
• Actions at a given time are valid possible moves
• Positive rewards for taking pieces, negative rewards for losing them
Why is this difficult?
• Stochastic
• Make the same move twice, the opponent might not do the same thing
• Rewards also stochastic (opponent does or doesn’t take your piece)
• Temporal credit assignment problem
• Did we get the reward because of this move? Or because we made good tactical decisions somewhere in the past?
• Exploration-exploitation trade-off (sketched in code below)
• If we found a good opening, should we use this?
• Or should we try other things, hoping for something better?
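A toy illustration tying these ideas together: tabular Q-learning on a 5-state walk, with an epsilon-greedy rule for the exploration-exploitation trade-off. The environment and all constants are made up for this sketch:

```python
# Toy sketch: tabular Q-learning on a 5-state walk (all numbers made up).
import random

states, actions = range(5), (1, -1)             # move right (+1) or left (-1)
Q = {(s, a): 0.0 for s in states for a in actions}
alpha, gamma, epsilon = 0.1, 0.9, 0.2           # step size, discount, exploration rate

for episode in range(500):
    s = 0
    while s != 4:                               # reward only for reaching state 4
        # Exploration-exploitation trade-off: usually act greedily,
        # but with probability epsilon try something else.
        if random.random() < epsilon:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda b: Q[(s, b)])
        s_next = min(max(s + a, 0), 4)
        r = 1.0 if s_next == 4 else 0.0
        # Update the value estimate from data gathered along the way;
        # the discount gamma spreads credit for the final reward backwards.
        best_next = max(Q[(s_next, b)] for b in actions)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s_next
```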
Landmarks in Deep Learning
• 1958 Perceptron (Simple ‘neural’ model)
• 1986 Backpropagation (Practical deep neural networks)
• 1989 Convolutional networks (Supervised learning)
• 2012 AlexNet Image classification (Supervised learning)
• 2014 Generative adversarial networks (Unsupervised learning)
• 2014 Deep Q-Learning -- Atari games (Reinforcement learning)
• 2016 AlphaGo (Reinforcement learning)
• 2017 Machine translation (Supervised learning)
• 2019 Language models ((Un)supervised learning)
• 2022 DALL-E 2 Image synthesis from text prompts ((Un)supervised learning)
• 2022 ChatGPT ((Un)supervised learning)
• 2023 GPT-4 Multimodal model ((Un)supervised learning)
2018 Turing award winners
This course
Deep neural networks
How to train them
How to measure their performance
How to make that performance better
This course

Networks specialized to images


Image classification
Image segmentation
Pose estimation
This course

Networks specialized to text


Text generation
Automatic translation
ChatGPT
