1 Introduction

The document provides an introduction to machine learning, discussing the black box model, its components, and various types of learning such as supervised, unsupervised, and reinforcement learning. It highlights the importance of machine learning in solving complex problems where traditional programming is insufficient, and outlines its historical development and applications. Additionally, it emphasizes the differences between machine learning and statistics, as well as the relationship between machine learning and artificial intelligence.


Introduction to Machine Learning
Session 2
Problems
Black Box Model
• A black box model consists of 3 components: input, model, and output.
• When one of these components is unknown, we get a new problem type.
Black box model: Optimisation
• The model and the desired output are known; the task is to find the inputs that produce that output.

Black box model: Optimisation
• Examples:
• Time tables for a university, call center, or hospital
• Design specifications
• Traveling salesman problem (TSP)
• Eight-queens problem, etc.

Black box model: Optimisation example 3: 8-queens problem
• Given an 8-by-8 chessboard and 8 queens
• Place the 8 queens on the chessboard without any conflict
• Two queens conflict if they share the same row, column, or diagonal
• Can be extended to an n-queens problem (n > 8)
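The conflict rule above can be sketched in code. This is a minimal illustrative helper (function and variable names are my own), representing a candidate placement as one queen per column so that column conflicts are impossible by construction:

```python
def conflicts(queens):
    """Count conflicting pairs of queens.

    queens[i] is the row of the queen placed in column i, so two
    queens can never share a column; only rows and diagonals are checked.
    """
    n = len(queens)
    count = 0
    for i in range(n):
        for j in range(i + 1, n):
            same_row = queens[i] == queens[j]
            same_diagonal = abs(queens[i] - queens[j]) == j - i
            if same_row or same_diagonal:
                count += 1
    return count

# A known 8-queens solution: zero conflicts.
print(conflicts([0, 4, 7, 5, 2, 6, 1, 3]))  # 0
# All queens on one row: every pair conflicts, C(8,2) = 28.
print(conflicts([0] * 8))                    # 28
```

An optimiser would search for an input (a placement) that drives this count to zero; the same function works unchanged for n > 8.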
Black box model: Modelling
• We have corresponding sets of inputs & outputs and seek a model that delivers the correct output for every known input.
• Note: modelling problems can be transformed into optimisation problems.
• Examples:
• Evolutionary machine learning
• Machine learning problems
• Predicting the stock exchange
• Voice control systems for smart homes
• Bank credit prediction
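As a toy illustration of the modelling setting (my own example, not from the slides): given known input/output pairs, find a model that reproduces the outputs. Here the model family is a line y = a·x + b, fitted by least squares:

```python
def fit_line(xs, ys):
    """Least-squares fit of y = a*x + b to known input/output pairs."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# Observed inputs and outputs of an unknown "black box" (here y = 2x + 1).
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]
a, b = fit_line(xs, ys)
print(a, b)  # 2.0 1.0
```

This also shows why modelling can be recast as optimisation: the fit minimises the squared error between the model's outputs and the known outputs.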
Black box model: Simulation
• We have a given model and wish to know the outputs that arise under different input conditions.
• Often used to answer what-if questions in evolving dynamic environments.
• Examples:
• Evolutionary economics, Artificial Life
• Weather forecast systems
• Impact analysis of new tax systems
• Traffic simulation
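A minimal what-if sketch (a toy model of my own, not from the slides): the model is fixed, and we run it under different input conditions to observe the outputs:

```python
def simulate_balance(deposit, rate, years):
    """Fixed model: yearly compound growth of a single deposit."""
    balance = deposit
    for _ in range(years):
        balance *= 1 + rate
    return balance

# What-if analysis: same model, different input conditions.
for rate in (0.01, 0.03, 0.05):
    print(rate, round(simulate_balance(1000, rate, 10), 2))
```

Real simulation problems (weather, traffic, tax policy) follow the same pattern with far richer models: the model is trusted, and the question is what it predicts under inputs we have not observed.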
Why Machine Learning?
• For many problems, it’s difficult to program the correct behavior by hand
• recognizing people and objects
• understanding human speech
• Machine learning approach: program an algorithm to automatically learn from data, or from experience
• Why might you want to use a learning algorithm?
• hard to code up a solution by hand (e.g. vision, speech)
• system needs to adapt to a changing environment (e.g. spam detection)
• want the system to perform better than the human programmers
What is Machine Learning?
• It’s similar to statistics…
• Both fields try to uncover patterns in data
• Both fields draw heavily on calculus, probability, and linear algebra, and share many of the same core algorithms
• But it’s not statistics!
• Stats is more concerned with helping scientists and policymakers draw good conclusions; ML is more concerned with building autonomous agents
• Stats puts more emphasis on interpretability and mathematical rigor; ML puts more emphasis on predictive performance, scalability, and autonomy
Relations to AI
• Nowadays, “machine learning” is often brought up with “artificial intelligence” (AI)
• AI does not always imply a learning-based system
• Symbolic reasoning
• Rule-based systems
• Tree search
• etc.
• A learning-based system learns from data, which gives it more flexibility and makes it good at solving pattern recognition problems.
Relations to human learning
• Human learning is:
• Very data efficient
• An entire multitasking system (vision, language, motor control, etc.)
• Takes at least a few years
• For serving specific purposes, machine learning doesn’t have to look like human learning in the end.
• It may borrow ideas from biological systems, e.g., neural networks.
• It may perform better or worse than humans.
Machine Learning
• Building algorithms which, to be useful, rely on a collection of examples of some phenomenon.
• These examples can come from nature, be handcrafted by humans, or be generated by another algorithm.
• Machine learning can also be defined as the process of solving a practical problem by
• 1) gathering a dataset, and
• 2) algorithmically building a statistical model based on that dataset.
Types of Learning

• Supervised Learning

• Unsupervised Learning

• Reinforcement Learning
Supervised Learning
• The dataset is a collection of labeled examples {(xi, yi)}, i = 1, …, N
• Each of the N elements xi is called a feature vector.
• A feature vector is a vector in which each dimension contains a value that describes the example somehow.

• Example of features for a person:

• Height

• Weight

• Gender

• …
Supervised Learning
• The label yi can be either an element belonging to
• a finite set of classes {1, 2, . . . , C},
• a real number, or
• a more complex structure, like a vector, a matrix, a tree, or a graph.
• Example for spam detection:
• Classes = {spam, not_spam}
Unsupervised Learning
• In unsupervised learning, the dataset is a collection of unlabeled examples, {xi}, i = 1, …, N
• The goal is to create a model that takes a feature vector x as input and either transforms it into another vector or into a value that can be used to solve a practical problem.


Unsupervised Learning
• Examples:
• Clustering
• the model returns the id of the cluster for each feature vector in the dataset
• Dimensionality Reduction
• the output of the model is a feature vector that has fewer features than the input x
• Outlier Detection
• the output is a real number that indicates how x is different from a “typical” example in the dataset
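The outlier-detection idea can be sketched with a z-score (one simple choice of score, picked by me for illustration): the model's output is a number saying how far x lies from a "typical" example, measured in standard deviations:

```python
def z_score(x, data):
    """How different x is from a "typical" example in data,
    measured in standard deviations from the mean."""
    n = len(data)
    mean = sum(data) / n
    variance = sum((v - mean) ** 2 for v in data) / n
    std = variance ** 0.5
    return abs(x - mean) / std

data = [9.8, 10.1, 10.0, 9.9, 10.2]
print(z_score(10.0, data))  # near 0: a typical example
print(z_score(15.0, data))  # large: likely an outlier
```

Note the score is computed from unlabeled data alone, which is what makes this an unsupervised method.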
Reinforcement Learning
• Reinforcement learning is a subfield of machine learning where
• the machine “lives” in an environment
• and is capable of perceiving the state of that environment as a vector of features.
• The machine can execute actions in every state.
• Different actions bring different rewards and could also move the machine to another state of the environment.
• The goal of a reinforcement learning algorithm is to learn a policy.
Reinforcement Learning
• Reinforcement learning solves a particular kind of problem
where decision making is sequential, and the goal is long-
term, such as

• game playing,

• robotics,

• resource management,

• or logistics
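The idea of learning a policy can be made concrete with tabular Q-learning on a tiny toy environment (entirely my own illustration; the environment, rewards, and hyperparameters are assumptions, not from the slides). The agent learns, for each state, which action maximises long-term reward:

```python
import random

# Toy environment: states 0..3 on a line; an action of +1 or -1 moves
# the agent; reaching state 3 yields reward 1 and ends the episode.
N_STATES, GOAL = 4, 3

def step(state, action):
    nxt = min(max(state + action, 0), GOAL)
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

# Tabular Q-learning.
random.seed(0)
q = {(s, a): 0.0 for s in range(N_STATES) for a in (-1, 1)}
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

for _ in range(500):
    state, done = 0, False
    while not done:
        if random.random() < epsilon:
            action = random.choice((-1, 1))                      # explore
        else:
            action = max((-1, 1), key=lambda a: q[(state, a)])   # exploit
        nxt, reward, done = step(state, action)
        best_next = max(q[(nxt, -1)], q[(nxt, 1)])
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = nxt

# The learned policy: the greedy action in each non-goal state.
policy = {s: max((-1, 1), key=lambda a: q[(s, a)]) for s in range(GOAL)}
print(policy)  # {0: 1, 1: 1, 2: 1} — always move right, toward the goal
```

Even in this tiny example the decision making is sequential and the reward is delayed, which is exactly the kind of problem the slide describes.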
History of machine learning
• 1957 — Perceptron algorithm (implemented as a circuit!)
• 1959 — Arthur Samuel wrote a learning-based checkers program that could defeat him
• 1969 — Minsky and Papert’s book Perceptrons (limitations of linear models)
• 1980s — Some foundational ideas
• Connectionist psychologists explored neural models of cognition
• 1984 — Leslie Valiant formalized the problem of learning as PAC learning
• 1986 — Backpropagation (re-)discovered by Geoffrey Hinton and colleagues
• 1988 — Judea Pearl’s book Probabilistic Reasoning in Intelligent Systems introduced Bayesian networks
History of machine learning
• 1990s — the “AI Winter”, a time of pessimism and low funding. But looking back, the ’90s were also sort of a golden age for ML research:
• Markov chain Monte Carlo
• variational inference
• kernels and support vector machines
• boosting
• convolutional networks
• reinforcement learning
History of machine learning
• 2000s — applied AI fields (vision, NLP, etc.) adopted ML
• 2010s — deep learning
• 2010–2012 — neural nets smashed previous records in speech-to-text and object recognition
• increasing adoption by the tech industry
• 2016 — AlphaGo defeated the human Go champion
• 2018–now — generating photorealistic images and videos
• 2020 — GPT-3 language model
• now — increasing attention to ethical and societal implications
ML Applications
Example for Supervised Learning
• Spam Detection
• Goal: decide whether an incoming email is spam or not
• You gather the data, for example, 10,000 email messages, each with the label “spam” or “not_spam”
• You have to convert each email message into a feature vector
• One common way to convert a text into a feature vector is called bag of words
Bag of Words
• the first feature is equal to 1 if the email message contains the word “a”; otherwise, this feature is 0;
• the second feature is equal to 1 if the email message contains the word “aaron”; otherwise, this feature equals 0;
• …
• the feature at position 20,000 is equal to 1 if the email message contains the word “zulu”; otherwise, this feature is equal to 0.
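The scheme above can be sketched as follows, using a tiny made-up vocabulary in place of the full 20,000-word list:

```python
# Tiny vocabulary for illustration; a real system would use ~20,000 words.
VOCAB = ["a", "aaron", "money", "free", "zulu"]

def bag_of_words(text):
    """Binary feature vector: 1 if the vocabulary word occurs in text."""
    words = set(text.lower().split())
    return [1 if w in words else 0 for w in VOCAB]

print(bag_of_words("Get free money now"))  # [0, 0, 1, 1, 0]
print(bag_of_words("a quiet afternoon"))   # [1, 0, 0, 0, 0]
```

Every email becomes a fixed-length 0/1 vector this way, which is the form a supervised learning algorithm expects.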
Supervised Algorithm
• We are going to find a 19,999-dimensional hyperplane (the high-dimensional analogue of a line) that separates examples with positive labels from examples with negative labels.

[Figure: labeled examples in the (x(1), x(2)) plane; the decision boundary is the line wx − b = 0, flanked by the margin lines wx − b = 1 and wx − b = −1.]
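The decision rule for such a hyperplane can be sketched as follows (the values of w and b here are made up for illustration; in practice a learning algorithm such as an SVM finds them from the labeled data):

```python
def predict(w, b, x):
    """Classify x by which side of the hyperplane wx - b = 0 it falls on."""
    score = sum(wi * xi for wi, xi in zip(w, x)) - b
    return "spam" if score >= 0 else "not_spam"

# Made-up 2D example; w and b would normally be learned from the dataset.
w, b = [1.0, 2.0], 3.0
print(predict(w, b, [4.0, 1.0]))  # wx - b = 3.0  -> "spam"
print(predict(w, b, [0.5, 0.5]))  # wx - b = -1.5 -> "not_spam"
```

The same rule works unchanged for 20,000-dimensional bag-of-words vectors; only the lengths of w and x grow.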
