Uploaded by Daniel Solomon

Decision Boundary and Hyperplanes

A decision boundary is the surface that separates different classes in the feature space, allowing us to distinguish between categories such as cats and dogs in a classification problem. A hyperplane is the generalization of a flat surface to an m-dimensional space: in 2D it is a line, in 3D it is a plane, and in higher dimensions it remains a flat division of space.
The equation for a hyperplane is given by:

wᵀx + w₀ = 0

where:

 w (vector) – the weights (the normal direction of the hyperplane).

 x (vector) – the input feature values.

 w₀ (scalar) – the bias term that shifts the hyperplane.

 wᵀx – the dot product (a linear combination of features and weights).

Decision Boundary and Classification


The hyperplane acts as a decision boundary by splitting the space into two regions:

 If wᵀx + w₀ > 0 → the point belongs to Class 1 (e.g., cats).

 If wᵀx + w₀ < 0 → the point belongs to Class 2 (e.g., dogs).

 If wᵀx + w₀ = 0 → the point lies exactly on the decision boundary.
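The sign rule above can be sketched as a small function. This is a minimal illustration, not from the original text; the weights, bias, and sample point are hypothetical.

```python
# Hypothetical sketch: classify a point by the sign of w·x + w0.

def classify(w, x, w0):
    """Return the class of point x relative to the hyperplane w·x + w0 = 0."""
    value = sum(wi * xi for wi, xi in zip(w, x)) + w0  # dot product plus bias
    if value > 0:
        return "Class 1"
    elif value < 0:
        return "Class 2"
    return "on the boundary"

# Illustrative call: w = (2, 3), w0 = -6, point (2, 2) gives 2*2 + 3*2 - 6 = 4 > 0.
print(classify([2, 3], [2, 2], -6))  # Class 1
```

Because the function only uses a dot product, the same code works unchanged in any number of dimensions.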

Example in 2D
Consider a 2D case where we classify data using the equation:

2x₁ + 3x₂ − 6 = 0

This equation represents a line (a 2D hyperplane) that separates two classes:

 If 2x₁ + 3x₂ − 6 > 0 → the point belongs to Class 1.

 If 2x₁ + 3x₂ − 6 < 0 → the point belongs to Class 2.


Classifying Given Points
For each point (x₁, x₂), we compute:

f(x₁, x₂) = 2x₁ + 3x₂ − 6

and classify accordingly:

Point      Computation         Value   Classification
A (2, 2)   2(2) + 3(2) − 6      4      Class 1
B (1, 1)   2(1) + 3(1) − 6     −1      Class 2
C (3, 1)   2(3) + 3(1) − 6      3      Class 1
D (0, 3)   2(0) + 3(3) − 6      3      Class 1
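The table above can be reproduced directly in code, using the same f(x₁, x₂) = 2x₁ + 3x₂ − 6 and the same four points:

```python
# Reproduce the classification table for f(x1, x2) = 2*x1 + 3*x2 - 6.

def f(x1, x2):
    return 2 * x1 + 3 * x2 - 6

points = {"A": (2, 2), "B": (1, 1), "C": (3, 1), "D": (0, 3)}
for name, (x1, x2) in points.items():
    value = f(x1, x2)
    label = "Class 1" if value > 0 else "Class 2"
    print(f"{name}: f = {value} -> {label}")  # A: 4, B: -1, C: 3, D: 3
```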

Visualizing the Decision Boundary


The equation 2x₁ + 3x₂ − 6 = 0 divides the plane into two regions:

 Above the line (f(x) > 0) → Class 1.

 Below the line (f(x) < 0) → Class 2.

1. The 2D Case: Line Equation


In two dimensions (x₁, x₂), the equation of a line is:

x₂ = mx₁ + c

where:

 x₁ is the independent variable (horizontal axis).

 x₂ is the dependent variable (vertical axis).

 m is the slope.

 c is the intercept (where the line crosses the x₂ axis).

Alternatively, this can be rewritten as:

mx₁ − x₂ + c = 0

which has the same structure as the hyperplane equation.

2. The General Case: Hyperplane Equation


For an m-dimensional feature space, the equation of a hyperplane generalizes to:

wᵀx + w₀ = 0

which expands to:

w₁x₁ + w₂x₂ + … + wₘxₘ + w₀ = 0

where:

 w = (w₁, w₂, …, wₘ) is the vector of weights.

 x = (x₁, x₂, …, xₘ) is the vector of feature values.

 w₀ is the bias (analogous to the intercept c in 2D).

 wᵀx is the dot product of the weight vector and the input vector.

3. How the Two Forms Are Related


 In 2D, if we set w₁ = m, w₂ = −1, and w₀ = c, then

w₁x₁ + w₂x₂ + w₀ = 0

simplifies to

x₂ = mx₁ + c

which is the familiar line equation.

 In higher dimensions (3D, 4D, and so on), we extend this by adding more variables x₃, x₄, …, but the fundamental linear separation remains the same.
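The substitution w₁ = m, w₂ = −1, w₀ = c can be checked numerically. The slope and intercept values below are illustrative, not from the text:

```python
# Check that the hyperplane form w1*x1 + w2*x2 + w0 = 0 agrees with
# the line x2 = m*x1 + c under the substitution w1 = m, w2 = -1, w0 = c.

m, c = 2.0, 1.0          # hypothetical slope and intercept
w1, w2, w0 = m, -1.0, c  # the substitution from the text

for x1 in [-1.0, 0.0, 3.0]:
    x2 = m * x1 + c                    # a point on the line
    residual = w1 * x1 + w2 * x2 + w0  # should be exactly zero
    assert residual == 0.0
print("hyperplane form agrees with the line equation")
```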

4. Why Use the Hyperplane Form?


 Works for any number of dimensions: unlike y = mx + c, which only applies in 2D, the hyperplane equation can represent decision boundaries in any number of dimensions.

 Consistent with linear algebra: the dot-product form wᵀx + w₀ = 0 allows efficient computation in high-dimensional spaces.

5. Example: 3D Hyperplane (A Plane)


In 3D space (x₁, x₂, x₃), a plane is given by:

w₁x₁ + w₂x₂ + w₃x₃ + w₀ = 0

which generalizes the concept of a line into three dimensions.
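As a small 3D illustration (the plane and points below are hypothetical, not from the text), consider the plane x₁ + x₂ + x₃ − 3 = 0, i.e. w = (1, 1, 1) and w₀ = −3:

```python
# Hypothetical 3D check: which side of the plane x1 + x2 + x3 - 3 = 0
# does a point fall on? Sign of w1*x1 + w2*x2 + w3*x3 + w0 decides.

w = (1.0, 1.0, 1.0)  # illustrative weights
w0 = -3.0            # illustrative bias

def side(x):
    """Return +1, -1, or 0 depending on which side of the plane x lies."""
    value = sum(wi * xi for wi, xi in zip(w, x)) + w0
    return (value > 0) - (value < 0)  # integer sign of value

print(side([1, 1, 1]))  # 0: exactly on the plane (1 + 1 + 1 - 3 = 0)
print(side([2, 2, 2]))  # +1: one side
print(side([0, 0, 0]))  # -1: the other side
```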
