Logistic Regression

Logistic regression is a statistical algorithm used for binary classification that predicts the probability of an outcome based on input features. It transforms the output of a linear regression model into a probability using a logistic function. The model is trained on labeled data to learn the mapping from features to the probability of the positive class by minimizing the difference between predicted and true probabilities. Once trained, it can predict probabilities for new examples and convert them into binary predictions using a threshold.


Logistic regression is a statistical algorithm used for binary classification tasks, where the goal is to predict the probability of a certain outcome (such as whether a customer will purchase a product or not) based on input features. It is called "logistic" because it uses a logistic function to transform the output of a linear regression model into a probability value.

The logistic regression model is trained on a labeled dataset, where each example
is labeled with the binary outcome (0 or 1). The model then learns to map the input
features to the probability of the positive class (1). During training, the model
adjusts its parameters to minimize the difference between its predicted
probabilities and the true labels of the training data.
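
As a concrete illustration of this training step, the sketch below fits a model with scikit-learn; the library choice and the tiny synthetic dataset are assumptions made for illustration, not part of the original text.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Labeled training data: each row of X is an example, y holds its 0/1 label.
X = np.array([[0.5, 1.2], [1.5, 0.3], [3.0, 2.5], [2.8, 3.1]])
y = np.array([0, 0, 1, 1])

# Fitting adjusts the coefficients so that the predicted probabilities move
# toward the true labels of the training data.
model = LogisticRegression()
model.fit(X, y)

print(model.intercept_, model.coef_)      # learned coefficients (b0 and b1..bn)
print(model.predict_proba([[2.0, 2.0]]))  # class probabilities for a new example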

The logistic regression model can be represented mathematically as:

p = 1 / (1 + e^-(b0 + b1x1 + b2x2 + ... + bnxn))

where:

p is the probability of the positive class
b0, b1, b2, ..., bn are the coefficients of the model
x1, x2, ..., xn are the input features
e is the base of the natural logarithm

The logistic function (1 / (1 + e^-z)) ensures that the output is always between 0
and 1, which can be interpreted as the probability of the positive class. The
coefficients of the model are typically learned using gradient descent, which
involves iteratively adjusting the parameters to minimize the cost function, such
as the cross-entropy loss.
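
The following from-scratch sketch makes that gradient-descent loop explicit. It uses plain NumPy, and the function and variable names (sigmoid, learning_rate, and so on) are illustrative choices, not fixed by the text.

import numpy as np

def sigmoid(z):
    # Logistic function: maps any real-valued z into the interval (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic_regression(X, y, learning_rate=0.1, n_iters=1000):
    n_samples, n_features = X.shape
    b = np.zeros(n_features)   # coefficients b1, ..., bn
    b0 = 0.0                   # intercept b0
    for _ in range(n_iters):
        p = sigmoid(X @ b + b0)   # predicted probabilities
        error = p - y             # gradient of the cross-entropy loss w.r.t. z
        b -= learning_rate * (X.T @ error) / n_samples
        b0 -= learning_rate * error.mean()
    return b0, b

X = np.array([[0.5, 1.2], [1.5, 0.3], [3.0, 2.5], [2.8, 3.1]])
y = np.array([0, 0, 1, 1])
b0, b = fit_logistic_regression(X, y)
print(sigmoid(X @ b + b0))   # probabilities, driven toward the 0/1 labels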

Once the model is trained, it can be used to predict the probability of the
positive class for new examples. A decision threshold can be applied to convert the
probability value into a binary prediction. For example, if the threshold is set to
0.5, any probability greater than or equal to 0.5 is predicted as the positive
class, and any probability less than 0.5 is predicted as the negative class.
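
A short sketch of this thresholding step (the probability values here are made up for illustration):

import numpy as np

probabilities = np.array([0.12, 0.48, 0.50, 0.93])   # model outputs for new examples
threshold = 0.5
predictions = (probabilities >= threshold).astype(int)
print(predictions)   # [0 0 1 1]: >= 0.5 maps to the positive class, < 0.5 to the negative class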

Logistic regression is a simple and interpretable model that can be used for a
variety of binary classification tasks. However, it assumes a linear relationship
between the input features and the log-odds of the positive class, and may not
perform well if the relationship is non-linear. It is also sensitive to strongly
correlated (multicollinear) input features, which can make the estimated
coefficients unstable and harder to interpret.
