Lecture 1 (Part 2)- Definitions and Examples of Machine Learning(1)

The document provides an introduction to machine learning, explaining its purpose, types, and applications. It discusses supervised, unsupervised, and reinforcement learning, along with examples such as chronic kidney disease prediction and various successful applications in fields like speech recognition and robotics. Additionally, it outlines the historical development of artificial intelligence and machine learning from the 1940s to the present.

Uploaded by

Sahlah Adesina

Introduction to Machine Learning


Professor Panos Liatsis
Department of Computer Science
D04208
Tel: 02-3123977
Email: [email protected]
ku.ac.ae

Machine Learning
• A machine learning system is a magic box that can be used to
• Automate a process
• Automate decision making
• Extract knowledge from data
• Predict future events
• Adapt systems dynamically to enable better user experiences
• …

• How do we build a machine learning system?



Machine Learning

• “The goal of machine learning is to make a computer learn just like a baby — it should get better at tasks with experience.”

• Basic idea:
• To represent experiences with data.
• To convert a task to a parametric model.
• To convert the learning quality to an objective function.
• To determine the model through optimizing an objective function.

• Machine learning research builds on optimisation theory, linear algebra, statistics…

Example: CKD dataset

• Doctors can identify the occurrence of chronic kidney disease by evaluating a variety of biomarkers and the presence of certain symptoms.

• 24 features: age, blood pressure, specific gravity, albumin, sugar, red blood
cells, pus cells, pus cell clumps, bacteria, blood glucose random, blood urea,
serum creatinine, sodium, potassium, hemoglobin, packed cell volume, white
blood cell count, red blood cell count, hypertension, diabetes mellitus,
coronary artery disease, appetite, pedal edema, anemia. Too many numbers!!

Can build a machine learning system to automatically identify the presence of CKD!

Example: CKD dataset

• Task: To identify the presence of chronic kidney disease based on physiological markers and the physical status of a patient!

❖ Collecting clinical measurements from patients in hospitals.

❖ Characterizing the presence of CKD with 24 features (11 numeric + 13 nominal).

(Figure: feature vectors paired with class labels.)

400 samples, split equally between the two classes, each characterized by 24 features.

Example: CKD dataset


❖ Design a mathematical model to predict the occurrence of CKD. The model below is controlled by 25 parameters: [w1, w2, …, w24, b]

$$\hat{y} = g(\mathbf{x}_j) = \begin{cases} 1, & \text{if } \sum_{i=1}^{24} w_i x_{j,i} + b \ge 0 \\ 0, & \text{if } \sum_{i=1}^{24} w_i x_{j,i} + b < 0 \end{cases}$$

❖ System training is the process of finding the best model parameters by minimizing a loss function:

$$[w_1^*, w_2^*, \ldots, w_{24}^*, b^*] = \operatorname{argmin}_{w_1, w_2, \ldots, w_{24}, b} \; O_{Loss}(w_1, w_2, \ldots, w_{24}, b)$$

where the loss function measures predictive inaccuracy.
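The thresholded linear model and its training can be sketched in a few lines of code. This is a minimal illustration only: it assumes 3 features instead of the 24 CKD biomarkers, invented data in place of real clinical measurements, and a perceptron-style update rule as the optimizer, which is one simple way to reduce a misclassification loss, not necessarily the procedure used in the lecture.

```python
# Minimal sketch of the thresholded linear model y_hat = g(x) above.
# Assumptions: 3 features instead of 24, toy data instead of real CKD
# measurements, and perceptron-style updates as the optimizer.

def predict(x, w, b):
    # y_hat = 1 if sum_i w_i * x_i + b >= 0, else 0
    s = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if s >= 0 else 0

def train(samples, labels, epochs=50, lr=0.1):
    # Nudge [w, b] whenever a sample is misclassified, driving down the
    # predictive-inaccuracy loss O_Loss over the training set.
    w, b = [0.0] * len(samples[0]), 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            err = y - predict(x, w, b)   # -1, 0, or +1
            if err != 0:
                w = [wi + lr * err * xi for wi, xi in zip(w, x)]
                b += lr * err
    return w, b

# Toy, linearly separable "biomarker" vectors: class 1 vs class 0.
X = [[0.9, 0.8, 0.7], [0.8, 0.9, 0.6], [0.1, 0.2, 0.1], [0.2, 0.1, 0.3]]
y = [1, 1, 0, 0]
w, b = train(X, y)
preds = [predict(x, w, b) for x in X]
```

On this separable toy set the learned [w, b] classifies every training sample correctly, mirroring the argmin formulation above.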

The World Generates Data!


• Data is recorded on real-world phenomena. The World is driven by data.
• Germany’s climate research center generates 10 petabytes per year.
• Google processes 24 petabytes per day.
• PC users watched over 300 billion videos in August 2014 alone, with an average of 202 videos and 952 minutes per viewer.
• There were 223 million credit card purchases in March 2016, with a total value of £12.6 billion in the UK.
• Photo uploads on Facebook number around 300 million per day.
• Approximately 2.5 million new scientific papers are published each year.
•…

• What might we want to do with that data?
• Prediction - what can we predict about this phenomenon?
• Description - how can we describe/understand this phenomenon in a new way?

• Humans can no longer manually handle data at this scale. A machine learning system can learn from the data and offer insights.

The Five Vs (Volume, Velocity, Variety, Veracity, Value)

Machine learning is important!

(Diagram: Machine Learning shown alongside Speech Recognition, Speech Synthesis, Robotics, Data Mining/Analysis/Engineering, Natural Language Processing, Computer Vision and Text Mining; all of these are subfields of Artificial Intelligence (A.I.).)

Learning Type: Supervised


• In supervised learning, there is a “teacher” who provides a target
output for each data pattern. This guides the computer to build a
predictive relationship between the data pattern and the target output.
• The target output can be a real-valued number, an integer, a symbol,
a set of real-valued numbers, a set of integers, or a set of symbols.

• A training example (also called a sample) is a pair consisting of an input data pattern (also called an object) and a target output.
• A test example is used to assess the strength and utility of a predictive relationship. Its target output is only used for evaluation purposes, and never contributes to the learning process.
• Typical supervised learning tasks include classification and regression.
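The training/test distinction above can be made concrete with a deliberately trivial sketch. The dataset and the majority-class "model" here are invented for illustration; the point is only the protocol: the model is fit on training pairs alone, and test targets appear solely in the evaluation step.

```python
# Sketch of the supervised train/test protocol. The data and the
# majority-class "model" are illustrative assumptions, not a real method.

def fit_majority(train_pairs):
    # "Learn" from (input pattern, target output) training examples only.
    labels = [y for _, y in train_pairs]
    return max(set(labels), key=labels.count)

def accuracy(predicted_label, test_pairs):
    # Test targets enter only here, for evaluation, never during fitting.
    hits = sum(1 for _, y in test_pairs if y == predicted_label)
    return hits / len(test_pairs)

data = [([0.1], 0), ([0.2], 0), ([0.8], 1), ([0.3], 0), ([0.9], 1)]
train_set, test_set = data[:3], data[3:]
pred = fit_majority(train_set)   # majority label in the training set
acc = accuracy(pred, test_set)   # fraction of test targets matched
```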

Classification Examples:
The target output is a category label.

• Medical diagnosis: x=patient data, y=positive/negative of some pathology
• Optical character recognition: x=pixel values and writing curves, y=‘A’, ‘B’, ‘C’, …
• Image analysis: x=image pixel features, y=scene/objects contained
in image

• Weather: x=current & previous conditions per location, y=tomorrow’s weather
… this list can never end; applications of classification are vast and extremely active!

Regression Examples:
The target output is a continuous number (or a set of such numbers).
• Finance: x=current market conditions and other possible side
information, y=tomorrow’s stock market price
• Social Media: x=videos the viewer is watching on YouTube,
y=viewer’s age
• Robotics: x=control signals sent to motors, y=the 3D location of a
robot arm end effector
• Medical Health: x=a number of clinical measurements, y=the amount
of prostate specific antigen in the body
• Environment: x=weather data, time, door sensors, etc., y=the
temperature at any location inside a building
… … … this list is never ending, applications of regression are vast and
extremely active!
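The regression setting, predicting a continuous y from an input x, can be sketched with ordinary least squares on a single input variable. The (x, y) pairs below are invented for illustration and do not come from any of the applications listed above.

```python
# Least-squares sketch of regression: fit y ≈ a*x + c to invented data.

def fit_line(xs, ys):
    # Closed-form simple linear regression (minimizes squared error).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    c = my - a * mx
    return a, c

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.1, 7.9]   # roughly y = 2x, with small noise
a, c = fit_line(xs, ys)
```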

Successful Applications
• Convert speech to text, translate from one language to another.

Successful Applications
• Face recognition

Successful Applications
• Object recognition, speech synthesis,
information retrieval.

Learning Type: Unsupervised


• In unsupervised learning, there is no explicit “teacher”.
• The systems form a natural “understanding” of the hidden structure
from unlabelled data.

• Typical unsupervised learning tasks include:
– Clustering: group similar data patterns together.
– Generative modelling: estimate the distribution of the observed data patterns.
– Unsupervised representation learning: remove noise, capture data statistics, capture inherent data structure.

(Figures: MATLAB clustering examples; network clustering from https://cambridge-intelligence.com/keylines-network-clustering/)
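The clustering task, grouping similar data patterns without any labels, can be sketched with a 1-D k-means. The two-cluster setup, the naive min/max initialization, and the data values are all illustrative assumptions.

```python
# 1-D k-means sketch: group similar points together with no labels.
# Two clusters, naive initialization, and toy data are assumptions.

def kmeans_1d(xs, iters=20):
    centers = [min(xs), max(xs)]          # naive 2-cluster initialization
    for _ in range(iters):
        # Assignment step: attach each point to its nearest center.
        groups = [[], []]
        for x in xs:
            j = 0 if abs(x - centers[0]) <= abs(x - centers[1]) else 1
            groups[j].append(x)
        # Update step: move each center to the mean of its group.
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers

data = [1.0, 1.2, 0.8, 9.0, 9.5, 8.8]
centers = kmeans_1d(data)                 # one center per natural group
```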

Successful Applications
• Document clustering and visualization

Learning Type: Reinforcement


• In reinforcement learning, there is a “teacher” who provides
feedback on the action of an agent, in terms of reward and
punishment.
• Examples:
– Helicopter manoeuvres: reward for following desired trajectory,
punishment for crashing.
– Manage an investment portfolio: reward for each $ in bank.
– Control a power station: reward for producing power, punishment
for exceeding safety thresholds.
– Make a humanoid robot walk: reward for forward motion,
punishment for falling over.
– Play many different Atari games better than humans: reward for
increasing score, punishment for decreasing score.
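The reward/punishment feedback loop can be sketched with tabular Q-learning on a made-up 5-state corridor, where the agent is rewarded only for reaching the right end. The environment, rewards, and hyperparameters are all illustrative assumptions, far simpler than the helicopter or Atari examples above.

```python
# Tabular Q-learning sketch of reward-driven learning. The 5-state
# corridor, reward scheme, and hyperparameters are toy assumptions.
import random

N_STATES = 5
ACTIONS = (-1, 1)                         # move left / move right
alpha, gamma, eps = 0.5, 0.9, 0.3         # step size, discount, exploration
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

random.seed(0)
for _ in range(2000):                     # training episodes
    s = 0
    while s != N_STATES - 1:
        if random.random() < eps:
            a = random.choice(ACTIONS)                       # explore
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])    # exploit
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0   # reward only at the goal
        target = r + gamma * max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s2

# Greedy policy after training: move right toward the reward everywhere.
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)]
```

After training, the learned values satisfy Q(s, right) > Q(s, left) in every non-terminal state, so the greedy policy always heads for the reward.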

Successful Applications
• Game playing, self-driving cars, trading strategies.

Historical Overview
• 1940s, Human reasoning / logic first studied as a formal subject within mathematics (Claude Shannon, Kurt Gödel et al).

• 1950s, The Turing Test is proposed: a test for true machine intelligence,
expected to be passed by year 2000. Various game-playing programs built.
1956, Dartmouth conference coins the phrase artificial intelligence. 1959,
Arthur Samuel wrote a program that learnt to play draughts (also known as
checkers).

• 1960s, A.I. funding increased (mainly military). Famous quote: “Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved.”

• 1970s, A.I. winter. Funding dries up as people realize it is hard. Limited computing power and dead-end frameworks.

Historical Overview
• 1980s, Revival through bio-inspired algorithms: Neural networks, Genetic
Algorithms. A.I. promises the world – lots of commercial investment –
mostly fails. Rule based expert systems used in medical / legal professions.

• 1990s, AI diverges into separate fields: Machine Learning, Computer Vision, Automated Reasoning, Planning systems, Natural Language Processing… Machine Learning begins to overlap with statistics / probability theory.

• 2000s, ML merging with statistics continues. Other subfields continue in parallel. First commercial-strength applications: Google, Amazon, computer games, route-finding, credit card fraud detection, etc. Tools adopted as standard by other fields, e.g. biology.

• 2010s, deep neural networks have led to significant performance improvements in speech recognition, reinforcement learning, image classification, machine translation, etc.

• Future?
