Supervised Learning: Week 1

Supervised learning involves using a dataset where the correct outputs are known to find relationships between inputs and outputs. It is categorized into regression problems which predict continuous outputs, and classification problems which predict discrete categories. Unsupervised learning approaches problems without known outputs, allowing structure to be derived from the relationships between variables through clustering or other methods. A cost function measures the accuracy of a hypothesis function by taking the average difference between its predicted outputs and the actual outputs.


Week 1

Supervised Learning

In supervised learning, we are given a data set and already know what
our correct output should look like, having the idea that there is a
relationship between the input and the output.

Supervised learning problems are categorized into "regression" and
"classification" problems. In a regression problem, we are trying to
predict results within a continuous output, meaning that we are trying to
map input variables to some continuous function. In a classification
problem, we are instead trying to predict results in a discrete output. In
other words, we are trying to map input variables into discrete
categories.

Example 1:

Given data about the size of houses on the real estate market, try to
predict their price. Price as a function of size is a continuous output, so
this is a regression problem.

We could turn this example into a classification problem by instead
making our output about whether the house "sells for more or less than
the asking price." Here we are classifying the houses based on price into
two discrete categories.
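
Here is a minimal sketch of the two framings using scikit-learn; the house sizes, prices, and asking prices below are invented numbers, used only to illustrate the continuous-versus-discrete distinction:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

sizes = np.array([[50.0], [80.0], [120.0], [200.0]])   # house size in m^2
prices = np.array([150.0, 240.0, 350.0, 560.0])        # price in $1000s

# Regression: predict the price itself (a continuous output).
reg = LinearRegression().fit(sizes, prices)
print(reg.predict([[100.0]]))  # estimated price for a 100 m^2 house

# Classification: predict whether the sale exceeds the asking price
# (a discrete output with two categories).
asking = np.array([160.0, 230.0, 360.0, 500.0])        # invented asking prices
above_asking = (prices > asking).astype(int)
clf = LogisticRegression().fit(sizes, above_asking)
print(clf.predict([[100.0]]))  # 0 = at/below asking, 1 = above asking
```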

Example 2:

(a) Regression - Given a picture of a person, we have to predict their
age on the basis of the given picture.

(b) Classification - Given a patient with a tumor, we have to predict
whether the tumor is malignant or benign.
Unsupervised Learning

Unsupervised learning allows us to approach problems with little or no idea what our results
should look like. We can derive structure from data where we don't necessarily know the effect of
the variables.

We can derive this structure by clustering the data based on relationships among the variables in
the data.

With unsupervised learning there is no feedback based on the prediction results.

Example:

Clustering: Take a collection of 1,000,000 different genes, and find a way to automatically group
these genes into groups that are somehow similar or related by different variables, such as
lifespan, location, roles, and so on.
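
As a hedged sketch of this clustering idea, here is a k-means example with scikit-learn; the gene "features" below are random numbers standing in for measurements like lifespan or location, purely for illustration:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
genes = rng.normal(size=(1000, 3))  # 1000 genes, 3 made-up numeric features

# Group the genes into 5 clusters based only on similarity of their
# features; no "correct" labels are ever provided.
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(genes)
print(kmeans.labels_[:10])  # cluster assignment for the first 10 genes
```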

Non-clustering: The "Cocktail Party Algorithm" allows you to find structure in a chaotic
environment (i.e., identifying individual voices and music from a mesh of sounds at a cocktail
party).
Cost Function
We can measure the accuracy of our hypothesis function by using a cost
function. This takes an average difference (actually a fancier version of an average) of all the
results of the hypothesis with inputs from x's and the actual output y's.
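
Written out for a training set of $m$ examples, this is the standard squared-error cost:

$$J(\theta_0, \theta_1) = \frac{1}{2m} \sum_{i=1}^{m} \bigl(h_\theta(x_i) - y_i\bigr)^2$$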

To break it apart, it is $\frac{1}{2}\bar{x}$ where $\bar{x}$ is the mean of the squares of $h_\theta(x_i) - y_i$, or the difference
between the predicted value and the actual value.
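
A minimal numpy sketch of this computation, for a linear hypothesis $h_\theta(x) = \theta_0 + \theta_1 x$; the training data here is invented for illustration:

```python
import numpy as np

def cost(theta0, theta1, x, y):
    """Squared-error cost J(theta0, theta1) over m training examples."""
    m = len(x)
    predictions = theta0 + theta1 * x   # h_theta(x_i) for every example
    errors = predictions - y            # h_theta(x_i) - y_i
    return (1 / (2 * m)) * np.sum(errors ** 2)

x = np.array([1.0, 2.0, 3.0])
y = np.array([1.0, 2.0, 3.0])
print(cost(0.0, 1.0, x, y))  # perfect fit on this data -> 0.0
print(cost(0.0, 0.5, x, y))  # worse fit -> positive cost
```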

This function is otherwise called the "Squared error function", or "Mean squared error". The
mean is halved ($\frac{1}{2}$) as a convenience for the computation of gradient descent,
as the derivative term of the square function will cancel out the $\frac{1}{2}$ term.
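
To see the cancellation, differentiate a single halved squared term with respect to $\theta_1$, using the chain rule:

$$\frac{\partial}{\partial \theta_1}\,\frac{1}{2}\bigl(h_\theta(x) - y\bigr)^2 = \bigl(h_\theta(x) - y\bigr)\cdot\frac{\partial h_\theta(x)}{\partial \theta_1} = \bigl(h_\theta(x) - y\bigr)\,x$$

The factor of 2 brought down by the chain rule cancels the $\frac{1}{2}$, leaving a cleaner gradient term.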
