ML Module 1
A computer program is said to learn from experience E with respect to some task T and some
performance measure P, if its performance on T, as measured by P, improves with experience E.
A well-posed learning problem is one that is clearly defined and admits a reliable solution. The idea of a
"well-posed problem" comes from the French mathematician Jacques Hadamard, who described three
essential conditions for a problem to be well-posed:
1. Existence – There should be at least one solution to the problem.
2. Uniqueness – The solution should be unique (i.e., there shouldn’t be multiple conflicting
answers).
3. Stability – Small changes in the input data should not lead to wildly different results.
Example of a Well-Posed Learning Problem
Consider the task of predicting house prices based on various factors like:
Input features: Square footage, number of bedrooms, location, year built, etc.
Output (label): The price of the house.
If we have a well-defined dataset with sufficient data points, and the relationship between features and
house prices follows a logical pattern, then:
A solution (price prediction) exists.
The predicted price is unique for a given set of features.
Small changes in input (e.g., a minor change in square footage) should not drastically affect
the price, ensuring stability.
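The well-posed house-price example above can be sketched in code. This is a minimal illustration, not a real pricing model: it fits a line to invented (square footage, price) pairs by ordinary least squares, and shows that a prediction exists, is unique, and changes only slightly when the input changes slightly.

```python
# Minimal sketch: predicting house price from square footage with
# ordinary least squares on a single feature. All data values are
# made up for illustration.

def fit_line(xs, ys):
    """Return the slope and intercept that minimize squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

sqft = [1000, 1500, 2000, 2500]               # input feature
price = [200_000, 300_000, 400_000, 500_000]  # label

slope, intercept = fit_line(sqft, price)

def predict(x):
    return slope * x + intercept

# Existence and uniqueness: one definite price per input.
print(predict(1800))                    # → 360000.0
# Stability: a tiny input change moves the output only slightly.
print(predict(1801) - predict(1800))    # → 200.0
```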
Example of an Ill-Posed Learning Problem
Now, consider trying to predict house prices using irrelevant features, such as:
The color of the house
The number of trees in the backyard
The owner’s name
In this case, the problem is not well-posed because:
The solution may not exist (these features don’t determine price).
The solution may not be unique (houses with the same color can have different prices).
Small changes in input (e.g., changing the owner’s name) might drastically alter predictions,
violating stability.
Regression problem: Predict a student's final exam score based on study hours.
o Input: Number of hours studied, attendance percentage
1. Email Spam Classification
o Task: Classifying incoming emails as spam or non-spam.
o Experience: Training on labeled email datasets where emails are marked as spam or
non-spam.
Machine learning models analyze patterns in email content, such as keywords, sender details, and
message structure, to predict whether an email is spam. The model learns from past email
classifications and continuously improves its accuracy over time.
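The idea of learning from past email classifications can be sketched with a toy Naive Bayes filter. The training emails below are invented, and real spam filters use far richer features; the point is only to show a model counting word patterns in labeled examples and then predicting labels for new text.

```python
# Toy Naive Bayes spam filter (pure Python, made-up training emails).
import math
from collections import Counter

train = [
    ("win money now", "spam"),
    ("free prize win", "spam"),
    ("meeting at noon", "ham"),
    ("project report attached", "ham"),
]

# Learn word frequencies per class from the labeled examples.
word_counts = {"spam": Counter(), "ham": Counter()}
class_counts = Counter()
for text, label in train:
    class_counts[label] += 1
    word_counts[label].update(text.split())

vocab = {w for counts in word_counts.values() for w in counts}

def classify(text):
    scores = {}
    for label in class_counts:
        # log prior + log likelihood with Laplace smoothing
        score = math.log(class_counts[label] / sum(class_counts.values()))
        total = sum(word_counts[label].values())
        for w in text.split():
            score += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

print(classify("win free money"))   # → spam
print(classify("noon meeting"))     # → ham
```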
2. Game Playing
o Task: Learning to play a game and win against opponents.
o Experience: Playing simulated games and improving through trial and error.
The model can be trained using reinforcement learning, where it plays multiple games against itself or
other players and learns optimal strategies through trial and error. Over time, it improves by refining
its decision-making ability to maximize its win rate.
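Trial-and-error improvement can be sketched with an epsilon-greedy agent, a bandit-style simplification of the reinforcement learning described above. The two moves and their hidden win probabilities are invented; the agent only sees game outcomes, yet its choice converges on the move with the higher win rate.

```python
# Trial-and-error learning sketch: an epsilon-greedy agent discovering
# which of two moves wins more simulated games. Hypothetical moves and
# win probabilities, invented for illustration.
import random

random.seed(0)
win_prob = {"aggressive": 0.8, "defensive": 0.3}  # hidden from the agent
wins = {m: 0 for m in win_prob}
plays = {m: 0 for m in win_prob}

def play(move):
    """Simulate one game and record the outcome."""
    plays[move] += 1
    if random.random() < win_prob[move]:
        wins[move] += 1

for move in win_prob:      # try each move once so win rates are defined
    play(move)

for _ in range(2000):
    if random.random() < 0.1:                          # explore occasionally
        move = random.choice(list(win_prob))
    else:                                              # exploit best rate so far
        move = max(plays, key=lambda m: wins[m] / plays[m])
    play(move)

rate = {m: wins[m] / plays[m] for m in win_prob}
print(max(rate, key=rate.get))  # the move the agent learned is better
```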
3. Handwriting Recognition
o Task: Recognizing handwritten words from images.
This task involves training an ML model using image processing techniques. Convolutional Neural
Networks (CNNs) are commonly used for recognizing handwritten characters by learning from a
labeled dataset of different handwriting styles. This is useful in applications like digit recognition for
postal services and check processing in banks.
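A real CNN is too heavy for a short snippet, so here is a deliberately simplified stand-in for the same idea: classify a tiny hand-made 5x3 "image" by finding the nearest labeled training example. The bitmaps are invented for illustration; only the learn-from-labeled-images principle carries over to CNN-based recognizers.

```python
# Not a CNN: a nearest-neighbour stand-in showing how labeled images
# let a model classify new handwriting. 5x3 bitmaps, invented here.

TRAIN = {
    "1": "010" "010" "010" "010" "010",
    "7": "111" "001" "010" "010" "010",
}

def distance(a, b):
    """Number of pixels that differ (Hamming distance)."""
    return sum(p != q for p, q in zip(a, b))

def classify(image):
    """Return the label of the closest training bitmap."""
    return min(TRAIN, key=lambda label: distance(image, TRAIN[label]))

# A slightly noisy "1" (one pixel flipped) is still recognized.
noisy_one = "010" "010" "011" "010" "010"
print(classify(noisy_one))  # → 1
```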
4. Autonomous Driving (Robot Driving Problem)
o Task: Driving on highways using visual sensors.
Self-driving cars use deep learning models trained on thousands of hours of human driving data. They
process real-time images from cameras and sensors to make driving decisions like lane changes,
braking, and obstacle avoidance. The system improves its performance based on experience.
5. Fruit Prediction Problem
o Task: Identifying different types of fruits.
This problem involves using image classification models to distinguish between different fruits based
on their shape, color, texture, and size. A CNN can be trained using labeled fruit images, and the
model can be fine-tuned to improve accuracy by learning from diverse datasets.
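The shape/color/size idea can be sketched without a CNN by classifying on two hand-crafted numeric features. The feature values (a "redness" score and a diameter) and the labeled examples below are invented; a trained image model would extract far richer features, but the decision rule is the same.

```python
# Hedged sketch: 1-nearest-neighbour fruit classification on two
# invented features (redness 0-1, diameter in cm), not a CNN.

train = [
    ((0.9, 8.0), "apple"),
    ((0.2, 9.0), "orange"),      # low "redness" score in this toy scheme
    ((0.1, 18.0), "watermelon"),
]

def classify(features):
    """Return the label of the closest labeled example."""
    def dist(sample):
        (f, _) = sample
        return sum((a - b) ** 2 for a, b in zip(features, f))
    return min(train, key=dist)[1]

print(classify((0.85, 7.5)))   # → apple (close to the apple example)
```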
6. Face Recognition Problem
o Task: Identifying and categorizing different human faces.
Face recognition is used in security systems, social media tagging, and surveillance. ML models learn
facial features using deep learning techniques, such as feature extraction and facial embeddings, to
distinguish between individuals accurately.
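The embedding-comparison step mentioned above can be sketched directly: two photos are judged to show the same person when their feature vectors point in a similar direction. The three-dimensional vectors and the threshold here are invented; real facial embeddings are produced by a trained deep network and have hundreds of dimensions.

```python
# Sketch of embedding comparison in face recognition. Vectors and the
# decision threshold are made up for illustration.
import math

def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

alice_photo1 = [0.9, 0.1, 0.4]   # hypothetical embedding of one photo
alice_photo2 = [0.8, 0.2, 0.5]   # another photo of the same person
bob_photo    = [0.1, 0.9, 0.2]   # a different person

THRESHOLD = 0.9  # hypothetical match threshold
print(cosine_similarity(alice_photo1, alice_photo2) > THRESHOLD)  # → True
print(cosine_similarity(alice_photo1, bob_photo) > THRESHOLD)     # → False
```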
7. Automatic Translation of Documents
o Task: Translating text from one language to another.
Neural Machine Translation (NMT) models, such as Google Translate, use deep learning techniques
to understand the structure and grammar of different languages. The model learns from parallel
datasets where the same text is available in multiple languages, allowing it to generate high-quality
translations.
A learning system in machine learning works much like the way people learn. It follows these steps:
1. Training Data – The machine studies examples.
2. Features – It looks at important clues.
3. Model – It remembers patterns like a notebook.
4. Learning Algorithm – It learns and improves.
5. Error Checking – It fixes mistakes.
6. Optimization – It fine-tunes its knowledge.
7. Testing – It takes an exam with new data.
8. Predictions – It makes smart decisions.
9. Feedback – It keeps improving over time.
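The nine steps above can be sketched end to end as one tiny learning system: a linear model trained by gradient descent on made-up data, then tested on an input it has never seen.

```python
# The learning-system steps, sketched as a tiny gradient-descent model.
# Data values are invented; the true pattern is y = 2x + 1.

# 1. Training data: the machine studies examples.
train_x = [1.0, 2.0, 3.0, 4.0]
train_y = [3.0, 5.0, 7.0, 9.0]

# 2-3. Features and model: one feature x; parameters (w, b) are the
#      "notebook" where patterns are remembered.
w, b = 0.0, 0.0

# 4-6. Learning algorithm, error checking, optimization.
lr = 0.01
for _ in range(5000):
    for x, y in zip(train_x, train_y):
        error = (w * x + b) - y   # 5. measure the mistake
        w -= lr * error * x       # 6. fine-tune the parameters
        b -= lr * error

# 7-8. Testing and prediction: an exam with new data.
test_x = 5.0
prediction = w * test_x + b
print(round(prediction, 2))  # close to 2*5 + 1 = 11

# 9. Feedback: in practice, newly labeled data would rerun this loop.
```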