Machine Learning Algorithms: A Quick Revision
K-Means Clustering
Groups similar data points together. Imagine
sorting a pile of unsorted laundry: shirts with
shirts, pants with pants, and so on. K-Means does
the same with data, finding groups without needing any labels.
Examples:
- Figuring out what kind of customers a store has
(who buys what).
- Spotting weird activity on a computer network
(someone hacking?).
- Making images smaller by grouping similar
colors.
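The grouping idea above can be sketched from scratch in a few lines. This is a minimal NumPy-only version of Lloyd's algorithm; the data points and the "first k points" initialisation are invented for illustration (real libraries use smarter starts such as k-means++):

```python
import numpy as np

def kmeans(points, k, iters=20):
    """Minimal Lloyd's algorithm: repeatedly assign each point to its
    nearest center, then move each center to the mean of its points."""
    centers = points[:k].copy()  # naive init; real libraries use k-means++
    for _ in range(iters):
        # Distance from every point to every center.
        dists = np.linalg.norm(points[:, None] - centers[None, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = points[labels == j].mean(axis=0)
    return centers, labels

# Two obvious groups: points near (0, 0) and points near (10, 10).
data = np.array([[0.0, 0.2], [0.1, 0.0], [0.2, 0.1],
                 [10.0, 10.1], [10.2, 9.9], [9.9, 10.0]])
centers, labels = kmeans(data, k=2)
```

On this toy data the two clusters are recovered regardless of which points seed the centers, because the groups are far apart.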
Linear Regression
Predicts a number based on other numbers. Like
predicting someone's height based on their weight.
Examples:
- Guessing house prices from their size and
location.
- Predicting how much a company will sell based
on how much they advertise.
- Estimating how much a crop will grow based on
rain and sun.
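The house-price example can be sketched as an ordinary least-squares fit. The sizes and the "price = 50 × size + 10" relationship are made up purely for illustration:

```python
import numpy as np

# Toy data: house sizes (m^2) with an exact linear price trend,
# price = 50 * size + 10 (invented numbers for illustration).
sizes = np.array([30.0, 50.0, 70.0, 100.0])
prices = 50.0 * sizes + 10.0

# Fit y = w * x + b by ordinary least squares: solve X @ [w, b] ~= y.
X = np.column_stack([sizes, np.ones_like(sizes)])
(w, b), *_ = np.linalg.lstsq(X, prices, rcond=None)

predicted = w * 80.0 + b  # price estimate for an unseen 80 m^2 house
```

Because the toy data is exactly linear, the fit recovers the slope and intercept it was generated with.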
Decision Tree
Makes decisions like a flow chart. Asks a series of
yes/no questions to arrive at a conclusion.
Examples:
- Doctors diagnosing diseases based on symptoms.
- Banks deciding who gets a loan.
- Scientists classifying plants and animals.
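The "flow chart of yes/no questions" is literally nested if/else branches. Here is a hand-built tree for the diagnosis example; the questions and labels are invented and not medical advice:

```python
def diagnose(fever, cough, rash):
    """A hand-built decision tree: each branch is one yes/no question.
    Conditions and labels are invented purely for illustration."""
    if fever:
        if cough:
            return "flu-like illness"
        if rash:
            return "possible measles"
        return "unspecified fever"
    if rash:
        return "possible allergy"
    return "likely healthy"

result = diagnose(fever=True, cough=True, rash=False)
```

A learning algorithm such as CART builds this structure automatically by choosing, at each node, the question that best splits the training data.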
Logistic Regression
Predicts a "yes" or "no" answer. Like figuring out if
an email is spam or not spam.
Examples:
- Email spam filters.
- Fraud detection.
- Predicting if a customer will cancel a service.
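A minimal sketch of the idea: squash a weighted score through the sigmoid to get a probability of "yes", and fit the weights by gradient descent on the log loss. The one-dimensional "spam score" feature and its labels are invented for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy "spam score" feature: low scores are ham (0), high scores spam (1).
x = np.array([-2.0, -1.5, -1.0, 1.0, 1.5, 2.0])
y = np.array([0, 0, 0, 1, 1, 1], dtype=float)

# Fit p(spam) = sigmoid(w * x + b) by plain gradient descent on log loss.
w, b, lr = 0.0, 0.0, 0.5
for _ in range(2000):
    p = sigmoid(w * x + b)
    grad_w = ((p - y) * x).mean()  # d(log loss)/dw
    grad_b = (p - y).mean()        # d(log loss)/db
    w -= lr * grad_w
    b -= lr * grad_b

prob_spam = sigmoid(w * 3.0 + b)   # clearly "spammy" input
prob_ham = sigmoid(w * -3.0 + b)   # clearly "hammy" input
```

Thresholding the probability at 0.5 turns it into the yes/no answer described above.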
Support Vector Machine (SVM)
Finds the best line (or plane) to separate different
groups of data.
Examples:
- Recognizing images (cats vs. dogs).
- Sorting text into categories.
- Recognizing faces.
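A rough sketch of finding that separating line: a linear SVM trained by sub-gradient descent on the hinge loss with an L2 penalty. The two point clouds are invented, and this simple loop stands in for the quadratic-programming solvers real libraries use:

```python
import numpy as np

# Two linearly separable 2-D groups (made-up points for illustration).
X = np.array([[1.0, 1.0], [1.5, 2.0], [2.0, 1.5],   # class -1
              [5.0, 5.0], [5.5, 6.0], [6.0, 5.5]])  # class +1
y = np.array([-1, -1, -1, 1, 1, 1], dtype=float)

# Minimize (lam/2)*||w||^2 + mean hinge loss by sub-gradient descent.
w = np.zeros(2)
b = 0.0
lr, lam = 0.01, 0.01
for _ in range(4000):
    margins = y * (X @ w + b)
    viol = margins < 1  # points inside the margin (or misclassified)
    grad_w = lam * w - (y[viol, None] * X[viol]).sum(axis=0) / len(X)
    grad_b = -y[viol].sum() / len(X)
    w -= lr * grad_w
    b -= lr * grad_b

pred = np.sign(X @ w + b)  # which side of the learned line each point is on
```

The L2 penalty is what pushes the line toward the widest possible margin between the two groups.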
K-Nearest Neighbors (KNN)
Classifies a new data point by looking at what its
nearest neighbors are: if your 3 closest neighbors
all like pizza, you probably like pizza too.
- Simple to understand.
- No training phase needed.
- Can be slow with lots of data.
- Sensitive to irrelevant features.
Examples:
- Recommending movies or products.
- Recognizing images.
- Finding unusual data points.
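The pizza analogy above translates almost directly into code: measure distances, take the k closest, and vote. The 2-D "pizza/salad" training points are invented for illustration:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest train points.
    `train` is a list of ((x, y), label) pairs."""
    by_dist = sorted(train, key=lambda p: math.dist(p[0], query))
    votes = Counter(label for _, label in by_dist[:k])
    return votes.most_common(1)[0][0]

# Toy 2-D data: "pizza fans" cluster near (1, 1), "salad fans" near (5, 5).
train = [((1, 1), "pizza"), ((1, 2), "pizza"), ((2, 1), "pizza"),
         ((5, 5), "salad"), ((5, 6), "salad"), ((6, 5), "salad")]

guess = knn_predict(train, (1.5, 1.5), k=3)
```

Note how all the work happens at prediction time (sorting every training point by distance), which is exactly why KNN needs no training but gets slow on large datasets.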
Random Forest
Combines many decision trees, each trained on a
random sample of the data, and lets them vote to
make better, more stable predictions.
Examples:
- Predicting credit risk.
- Predicting stock prices.
- Diagnosing medical conditions.
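A toy sketch of the "many trees voting" idea for the credit-risk example. To stay short it uses decision stumps (one-question trees) instead of full trees, bootstrap-samples the data for each one, and majority-votes; the (income, debt) rows and labels are invented:

```python
import random

def fit_stump(rows):
    """Best single-feature threshold split (a one-question 'tree').
    rows is a list of ((feature0, feature1), label) with labels 0/1."""
    best = None
    for f in range(len(rows[0][0])):
        for x, _ in rows:
            t = x[f]
            left = [lab for xi, lab in rows if xi[f] <= t]
            right = [lab for xi, lab in rows if xi[f] > t]
            correct = (max(left.count(0), left.count(1))
                       + max(right.count(0), right.count(1)))
            if best is None or correct > best[0]:
                ml = 1 if left.count(1) >= left.count(0) else 0
                mr = 1 if right.count(1) >= right.count(0) else 0
                best = (correct, f, t, ml, mr)
    return best[1:]

def forest_fit(rows, n_trees=15, seed=0):
    rng = random.Random(seed)
    forest = []
    for _ in range(n_trees):
        sample = [rng.choice(rows) for _ in rows]  # bootstrap sample
        forest.append(fit_stump(sample))
    return forest

def forest_predict(forest, x):
    votes = sum(mr if x[f] > t else ml for f, t, ml, mr in forest)
    return 1 if votes * 2 >= len(forest) else 0

# Invented (income, debt) data: label 1 = high credit risk.
rows = [((1, 7), 0), ((2, 3), 0), ((3, 8), 0),
        ((7, 2), 1), ((8, 9), 1), ((9, 4), 1)]
forest = forest_fit(rows)
```

Real random forests also pick a random subset of features at each split, which further decorrelates the trees; that step is omitted here for brevity.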
Dimensionality Reduction
Makes data simpler by reducing the number of
features. Like turning a 3D object into a 2D drawing.
Examples:
- Making images smaller.
- Extracting the most important information from data.
- Showing complex data in a simple chart.
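The "3-D object into a 2-D drawing" idea is exactly what principal component analysis (PCA) does. A minimal NumPy sketch on synthetic data that mostly varies along one direction (the data-generating recipe is invented for illustration):

```python
import numpy as np

# Toy 3-D data that really varies along one main direction,
# so a couple of principal components capture nearly everything.
rng = np.random.default_rng(0)
t = rng.normal(size=(100, 1))
X = t @ np.array([[3.0, 2.0, 1.0]]) + 0.01 * rng.normal(size=(100, 3))

# PCA via SVD: center the data, keep the top right singular vectors.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
reduced = Xc @ Vt[:2].T  # project 3-D points down to 2-D

# Fraction of total variance kept by the first component alone.
explained = S[0] ** 2 / (S ** 2).sum()
```

Checking the explained-variance ratio is how one decides, in practice, how many dimensions can safely be dropped.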
Naive Bayes
A simple way to classify things based on
probabilities. Assumes all features are independent
of each other (which is "naive", hence the name).
Examples:
- Spam filtering.
- Classifying news articles.
- Figuring out someone's feelings from their writing.
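The spam-filtering example can be sketched as a multinomial Naive Bayes classifier with Laplace smoothing; the "naive" independence assumption shows up as simply adding word log-probabilities. The six tiny labeled word lists are invented for illustration:

```python
import math
from collections import defaultdict

# Tiny made-up training set: word lists labeled spam or ham.
docs = [(["win", "money", "now"], "spam"),
        (["free", "money", "offer"], "spam"),
        (["win", "free", "prize"], "spam"),
        (["meeting", "tomorrow", "agenda"], "ham"),
        (["lunch", "tomorrow", "team"], "ham"),
        (["project", "meeting", "notes"], "ham")]

vocab = {w for words, _ in docs for w in words}
labels = ["spam", "ham"]

# Count word occurrences per class.
counts = {c: defaultdict(int) for c in labels}
totals = {c: 0 for c in labels}
for words, c in docs:
    for w in words:
        counts[c][w] += 1
        totals[c] += 1

def classify(words):
    """Pick the class with the highest log-probability, treating each
    word as independent given the class (the 'naive' part)."""
    best, best_score = None, -math.inf
    for c in labels:
        score = math.log(0.5)  # both classes equally common here
        for w in words:
            # Laplace (+1) smoothing so unseen words don't zero things out.
            p = (counts[c][w] + 1) / (totals[c] + len(vocab))
            score += math.log(p)
        if score > best_score:
            best, best_score = c, score
    return best

verdict = classify(["free", "money"])
```

Working in log space avoids multiplying many tiny probabilities together, which would underflow on realistic documents.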