CM20315 - Machine Learning
Prof. Simon Prince and Dr. Harish Tayyar Madabushi
1. Introduction

Logistics (Semester 1)
• 2 lectures per week
• 1 lab session per week
• 5 weeks / consolidation week / 5 weeks
• 1 coursework
  • Set Monday 21st November
  • Due Monday 5th December
• 1 exam (Jan/Feb) – closed book
• Lectures will be recorded
• Slides and notes online
• Ask questions on the Moodle logistics message board

Feedback
• Please ask questions in the lectures – put your hand up
• There will be tutors to help in the labs – sometimes Python programming, sometimes problem sheets
• The solutions to the labs will be posted one week after each lab, so you can see how you are doing and clear up any misconceptions
• You can ask questions on Moodle
• Please answer other people's questions if you can!
• There is no such thing as a stupid question
• Anonymous feedback via: https://forms.office.com/r/UjVD6Yzz01

Lab sessions
• Python notebooks in Colab
  • You will need a Google account (make one before tomorrow)
  • Alternatively, set up Jupyter Notebooks yourself
  • NumPy, Matplotlib, PyTorch
• Problem sheets

How to pass this course
• The ideas are simple, but they build on one another
• Come to all the lectures (take notes)
• Come to all the lab sessions
  • Complete the Python notebooks
  • Complete the problems
• Work together on non-assessed work
• Read the notes (and take your own) after class
• Ask questions on the Moodle forums (and answer them)

Book
• Examinable unless specified: Chapters 1–11 and 13
• Not examinable unless specified: the notes at the end of each chapter

Landmarks in AI
• 2012 AlexNet (image classification)
• 2014 Generative adversarial networks (image generation)
• 2016 AlphaGo
• 2017 Machine translation
• 2019– Language models (BERT, GPT-3)
• 2022 DALL-E 2 (image synthesis from text prompts)

2018 Turing Award winners

Deep Learning
• Supervised learning
  • Tasks
  • Models
  • Deep learning models
• Unsupervised learning
  • Generative models
  • Probabilistic generative models
  • Latent variable models
• Reinforcement learning

Supervised learning
• Define a mapping from input to output
• Learn this mapping from paired input/output data examples

Five simple examples

Regression
• Univariate regression problem (one output, real value)
• Fully connected network

Graph regression
• Multivariate regression problem (>1 output, real value)
• Graph neural network

Text classification
• Binary classification problem (two discrete classes)
• Transformer network

Music genre classification
• Multiclass classification problem (discrete classes, >2 possible values)
• Convolutional network

Image classification
• Multiclass classification problem (discrete classes, >2 possible classes)
• Convolutional network
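The five examples above differ mainly in the type of output (continuous number vs. discrete class) and in the network used. Below is a minimal PyTorch sketch, not from the course materials, contrasting the regression and classification setups; the fully connected classifier is a simplified stand-in for the convolutional networks named above, and all inputs and labels are random placeholders.

```python
import torch
import torch.nn as nn

# Univariate regression: one real-valued output from a fully connected network.
regressor = nn.Sequential(
    nn.Linear(1, 32), nn.ReLU(),
    nn.Linear(32, 1),                 # one continuous output
)
x = torch.randn(8, 1)                 # batch of 8 scalar inputs (random stand-ins)
mse = nn.MSELoss()(regressor(x), torch.randn(8, 1))

# Multiclass classification: one score (logit) per class.
num_classes = 10
classifier = nn.Sequential(
    nn.Flatten(),                     # flatten each image into a vector
    nn.Linear(28 * 28, 64), nn.ReLU(),
    nn.Linear(64, num_classes),       # 10 logits, one per class
)
images = torch.randn(8, 1, 28, 28)    # fake 28x28 grayscale images
logits = classifier(images)           # shape (8, 10)
labels = torch.randint(0, num_classes, (8,))
ce = nn.CrossEntropyLoss()(logits, labels)  # compares logits to class labels
```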
What is a supervised learning model?
• An equation relating input (age) to output (height)
• Search through a family of possible equations to find one that fits the training data well (a minimal sketch of this idea follows)
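A minimal NumPy sketch of the age/height example above. The "family of equations" here is height = a * age + b, and "searching" means choosing the slope a and intercept b that best fit the training pairs; the data values are invented for illustration.

```python
import numpy as np

age    = np.array([2.0, 5.0, 8.0, 11.0, 14.0])    # inputs: age in years
height = np.array([90., 110., 128., 145., 162.])  # outputs: height in cm (made up)

# Least squares picks the member of the family with the smallest squared error.
a, b = np.polyfit(age, height, deg=1)
print(f"height ~= {a:.1f} * age + {b:.1f}")
print("predicted height at age 10:", a * 10.0 + b)
```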
What is a supervised learning model?
• Deep neural networks are just a very flexible family of equations
• Fitting deep neural networks = “Deep Learning” (see the sketch below)
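A minimal sketch of this point, assuming toy data and a tiny network (neither is from the course). The network is one member of a family of equations parameterised by its weights, and training searches that family by gradient descent.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.linspace(-3, 3, 100).unsqueeze(1)   # toy inputs
y = torch.sin(x) + 0.1 * torch.randn_like(x)  # noisy toy targets

model = nn.Sequential(                        # a small, very flexible "equation"
    nn.Linear(1, 16), nn.ReLU(),
    nn.Linear(16, 16), nn.ReLU(),
    nn.Linear(16, 1),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)

for step in range(2000):                      # "fitting" = iterative search
    loss = nn.MSELoss()(model(x), y)
    optimizer.zero_grad()
    loss.backward()                           # gradient of loss w.r.t. the weights
    optimizer.step()                          # move to a better member of the family

print("final training loss:", loss.item())
```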
Five more examples
• More complex outputs
• But constrained by input (labelling)

Image segmentation
• Multivariate binary classification problem (many outputs, two discrete classes)
• Multivariate regression problem (many outputs, continuous)
• Convolutional encoder-decoder network

Terms
• Classification = discrete classes as output
• Regression = continuous numbers as output
• Two-class and multiclass classification are treated differently
• Univariate = one output
• Multivariate = more than one output

Three more examples
• Translation
• Image captioning
• Image generation from text

What do these examples have in common?
• Very complex relationship between input and output
• Sometimes there may be many possible valid answers
• But outputs (and sometimes inputs) obey rules
• Language obeys grammatical rules; natural images also have “rules”

Idea
• Learn the “grammar” of the data from unlabeled examples
• Can use a gargantuan amount of data to do this (as it is unlabeled)
• Make the supervised learning task easier by having a lot of knowledge of possible outputs (a minimal sketch of this idea follows)
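A minimal sketch of this pretrain-then-reuse pattern, with a toy autoencoder standing in for the unsupervised “grammar-learning” step. The slides do not prescribe this particular method, and all data and sizes below are invented.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
unlabeled = torch.randn(1000, 20)   # large unlabeled set (random stand-in)

encoder = nn.Sequential(nn.Linear(20, 8), nn.ReLU())
decoder = nn.Linear(8, 20)

# Stage 1: unsupervised pretraining -- learn structure without any labels.
params = list(encoder.parameters()) + list(decoder.parameters())
opt = torch.optim.Adam(params, lr=1e-3)
for _ in range(200):
    recon = decoder(encoder(unlabeled))
    loss = nn.MSELoss()(recon, unlabeled)   # objective: reconstruct the inputs
    opt.zero_grad()
    loss.backward()
    opt.step()

# Stage 2: the supervised task gets easier -- reuse the pretrained encoder
# and fit only a small head on a much smaller labeled set.
labeled_x = torch.randn(50, 20)
labeled_y = torch.randint(0, 2, (50,))      # toy binary labels
head = nn.Linear(8, 2)
opt2 = torch.optim.Adam(head.parameters(), lr=1e-2)
for _ in range(200):
    features = encoder(labeled_x).detach()  # frozen pretrained features
    loss = nn.CrossEntropyLoss()(head(features), labeled_y)
    opt2.zero_grad()
    loss.backward()
    opt2.step()
```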