
ML UCS-401

Topics: Probabilistic Models: Normal Distribution and Its Geometric Interpretations, Naïve Bayes Classifier, Discriminative Learning with Maximum Likelihood.
Normal Distribution in Probabilistic Models

The Normal Distribution (or Gaussian distribution) is a fundamental concept in probabilistic models, widely used due to its natural occurrence in real-world phenomena and its convenient mathematical properties.
Probability Density Function (PDF): The normal distribution is defined by its mean μ and standard deviation σ. The PDF is given by:
f(x; μ, σ^2) = (1 / √(2πσ^2)) · e^(−(x − μ)^2 / (2σ^2))
μ: Mean (center of the distribution)
σ^2: Variance (spread of the distribution)
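
A minimal sketch of evaluating this density in Python, assuming NumPy is available; normal_pdf is an illustrative helper name, not part of the course material.

import numpy as np

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Evaluate the normal density f(x; mu, sigma^2) at x."""
    coeff = 1.0 / np.sqrt(2.0 * np.pi * sigma**2)             # normalising constant
    return coeff * np.exp(-((x - mu) ** 2) / (2.0 * sigma**2))

# Example: the density of N(5, 2^2) evaluated at its mean, the peak of the bell curve.
print(normal_pdf(5.0, mu=5.0, sigma=2.0))                     # ~0.1995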
Geometric Interpretations:
1. Normal Distribution as a Curve:
The normal distribution appears as a bell-shaped curve in 2D, where the height of the curve at any point x is the probability density f(x).
2. Standard Normal Distribution and Z-Score:
When μ=0 and σ=1, the distribution is called
the standard normal distribution.
The Z-score transforms any normal distribution into
the standard normal:
Z = (X − μ) / σ.
Geometrically, this shifts and scales the original
distribution to have a mean of 0 and a variance of 1.​
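
A small sketch of this standardisation in Python, assuming NumPy; the sample values are invented for illustration.

import numpy as np

x = np.array([4.0, 6.0, 8.0, 10.0])    # hypothetical sample
mu, sigma = x.mean(), x.std()           # sample mean and standard deviation
z = (x - mu) / sigma                    # Z-scores: shifted to mean 0, scaled to variance 1

print(z.mean(), z.var())                # ~0.0 and ~1.0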
Naïve Bayes Classifier
The Naïve Bayes classifier is a probabilistic
machine learning algorithm based on Bayes'
Theorem. It assumes that features are
conditionally independent given the class label,
which simplifies the computation of the posterior
probabilities.
Bayes' Theorem:
P(C∣X) = P(X∣C) ⋅ P(C) / P(X).
Where:
P(C∣X) = Posterior probability of class C given
predictor X.
P(X∣C) = Likelihood of predictor X given class C.
P(C) = Prior probability of class C.
P(X) = Marginal probability of predictor X.
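
A worked numeric sketch of Bayes' theorem in Python; the prior and likelihood values below are invented purely for illustration.

# Hypothetical two-class problem: priors P(C) and likelihoods P(X|C) for one observed feature value X.
prior = {"spam": 0.4, "ham": 0.6}
likelihood = {"spam": 0.8, "ham": 0.1}

# Marginal P(X) by the law of total probability, then the posterior P(C|X).
p_x = sum(prior[c] * likelihood[c] for c in prior)
posterior = {c: prior[c] * likelihood[c] / p_x for c in prior}
print(posterior)   # {'spam': ~0.842, 'ham': ~0.158}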
Types of Naïve Bayes Classifiers:
Gaussian Naïve Bayes: Assumes that features
follow a Gaussian distribution.
Multinomial Naïve Bayes: Suitable for discrete
data (e.g., text classification).
Bernoulli Naïve Bayes: Suitable for binary/Boolean features.
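
A minimal Gaussian Naïve Bayes sketch, assuming scikit-learn and NumPy are installed; the toy data is invented for illustration.

import numpy as np
from sklearn.naive_bayes import GaussianNB

# Toy data: two continuous features per sample, two classes.
X = np.array([[1.0, 2.1], [1.2, 1.9], [5.0, 6.2], [5.3, 5.8]])
y = np.array([0, 0, 1, 1])

model = GaussianNB()                   # assumes each feature is Gaussian within each class
model.fit(X, y)
print(model.predict([[1.1, 2.0]]))     # -> [0]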
Applications:
•Spam Detection
•Sentiment Analysis
•Document Classification
•Medical Diagnosis
Discriminative Learning with Maximum Likelihood

Discriminative learning with maximum likelihood focuses on modeling the conditional probability P(Y∣X), where X is the input and Y is the output. This approach directly estimates the parameters of the conditional distribution by maximizing the likelihood of the observed data.
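
A compact sketch of discriminative maximum-likelihood training, using logistic regression fitted by gradient ascent on the conditional log-likelihood; the data, learning rate, and iteration count are illustrative assumptions, not prescribed values.

import numpy as np

# Toy binary-classification data (illustrative).
X = np.array([[0.5], [1.0], [1.5], [3.0], [3.5], [4.0]])
y = np.array([0, 0, 0, 1, 1, 1])
Xb = np.hstack([np.ones((len(X), 1)), X])     # prepend a bias column

w = np.zeros(Xb.shape[1])
for _ in range(2000):                          # gradient ascent on sum_i log P(y_i | x_i; w)
    p = 1.0 / (1.0 + np.exp(-Xb @ w))          # model's P(Y=1 | X) under current parameters
    w += 0.1 * Xb.T @ (y - p)                  # gradient of the conditional log-likelihood

print(w)   # parameters that (approximately) maximise the conditional likelihood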
Key Concepts:
1. Discriminative vs. Generative Models:
• Discriminative models (e.g., logistic regression, neural
networks) directly estimate P(Y∣X).
• Generative models estimate P(X∣Y)P(Y) and use Bayes'
theorem to compute P(Y∣X).
Applications:
•Logistic Regression: Maximizing the likelihood of the binary
classification labels.
•Conditional Random Fields (CRFs): Sequence labeling tasks.
•Neural Networks: Training with cross-entropy loss
corresponds to MLE in classification.
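
A tiny numeric check of the last point: for a single example, the cross-entropy loss against a one-hot label equals the negative log-likelihood of the true class, so minimising cross-entropy is maximum-likelihood estimation. The probability values are invented for illustration.

import numpy as np

p = np.array([0.7, 0.2, 0.1])                  # predicted class probabilities P(Y | X)
one_hot = np.array([1.0, 0.0, 0.0])            # true class is index 0

cross_entropy = -np.sum(one_hot * np.log(p))   # standard cross-entropy loss
neg_log_lik = -np.log(p[0])                    # negative log-likelihood of the observed label
print(np.isclose(cross_entropy, neg_log_lik))  # True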
