
Modeling and Evaluation

Dr.A.SENTHILKUMAR
Assistant Professor
Department of Computer Science with Data Analytics
Sri Ramakrishna College of Arts and Science
Coimbatore - 641 006
Tamil Nadu, India

23CDAP03 – Machine Learning with Lab
Introduction to Machine Learning

• Definition: Machine Learning is the study of algorithms that improve their performance at a task with experience.

• Key components: Performance (P), Task (T), Experience (E).

Importance of Modeling

Purpose of modeling in ML:
• To represent data relationships.
• To make predictions based on input data.

Types of models (a short sketch of the first two follows this list):
• Supervised Learning (e.g., regression, classification).
• Unsupervised Learning (e.g., clustering).
• Reinforcement Learning.
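
Below is a minimal sketch contrasting supervised and unsupervised learning, assuming scikit-learn is available; the dataset (load_iris) and the particular models (LogisticRegression, KMeans) are illustrative choices, not part of the original slide.

    # Hypothetical illustration: supervised vs. unsupervised learning with scikit-learn.
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.cluster import KMeans

    X, y = load_iris(return_X_y=True)

    # Supervised: the model is fit on inputs X together with known labels y.
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X, y)
    print("Supervised predictions:", clf.predict(X[:5]))

    # Unsupervised: the model sees only X and discovers structure (clusters) on its own.
    km = KMeans(n_clusters=3, n_init=10, random_state=0)
    km.fit(X)
    print("Cluster assignments:", km.labels_[:5])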
Steps in Model Development

1. Data Collection: Gather relevant data.
2. Data Preprocessing: Clean and prepare data for modeling.
3. Model Selection: Choose appropriate algorithms (e.g., decision trees, SVMs).
4. Training: Fit the model to the training data.
5. Testing: Evaluate model performance on unseen data (a minimal sketch of these steps follows).
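
A minimal end-to-end sketch of the five steps, assuming scikit-learn; the breast-cancer dataset and the decision-tree classifier are placeholder choices for illustration only.

    # Hypothetical sketch of the five development steps using scikit-learn.
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.metrics import accuracy_score

    # 1. Data Collection: a ready-made dataset stands in for gathered data.
    X, y = load_breast_cancer(return_X_y=True)

    # 2. Data Preprocessing: split, then scale features to zero mean and unit variance.
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
    scaler = StandardScaler().fit(X_train)
    X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

    # 3. Model Selection: a decision tree, one of the example algorithms above.
    model = DecisionTreeClassifier(max_depth=4, random_state=42)

    # 4. Training: fit the model to the training data.
    model.fit(X_train, y_train)

    # 5. Testing: evaluate on data the model has not seen.
    print("Test accuracy:", accuracy_score(y_test, model.predict(X_test)))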


Evaluation Metrics

• Common metrics for assessing model performance (computed in the sketch below):
• Accuracy: Proportion of correct predictions.
• Precision and Recall: Useful for imbalanced datasets.
• F1 Score: Harmonic mean of precision and recall.
• ROC-AUC: Area under the Receiver Operating Characteristic curve.
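
A short sketch computing the metrics above, assuming scikit-learn; the labels and scores are made-up values used only to show the function calls.

    # Hypothetical example: the listed metrics on a small, made-up set of predictions.
    from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                                 f1_score, roc_auc_score)

    y_true  = [0, 0, 1, 1, 1, 0, 1, 0]                   # ground-truth labels
    y_pred  = [0, 1, 1, 1, 0, 0, 1, 0]                   # hard class predictions
    y_score = [0.2, 0.6, 0.8, 0.9, 0.4, 0.1, 0.7, 0.3]   # predicted probabilities for class 1

    print("Accuracy :", accuracy_score(y_true, y_pred))    # proportion of correct predictions
    print("Precision:", precision_score(y_true, y_pred))   # of predicted positives, how many are correct
    print("Recall   :", recall_score(y_true, y_pred))      # of actual positives, how many are found
    print("F1 Score :", f1_score(y_true, y_pred))          # harmonic mean of precision and recall
    print("ROC-AUC  :", roc_auc_score(y_true, y_score))    # uses scores, not hard predictions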
Overfitting vs. Underfitting

• Overfitting: Model learns noise in the training data; performs poorly on test data.
• Underfitting: Model is too simple to capture underlying patterns; poor performance on both training and test data (both cases appear in the sketch below).
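
A hedged illustration of both failure modes, assuming scikit-learn: a depth-1 tree tends to underfit, an unrestricted tree tends to overfit, and the gap between training and test accuracy makes this visible. The toy dataset and depth values are arbitrary choices.

    # Hypothetical illustration: tree depth vs. over/underfitting on a noisy toy dataset.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=500, n_features=20, n_informative=5,
                               flip_y=0.1, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    for depth in (1, None):  # very shallow tree vs. unrestricted depth
        tree = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X_train, y_train)
        print(f"max_depth={depth}: train acc={tree.score(X_train, y_train):.2f}, "
              f"test acc={tree.score(X_test, y_test):.2f}")
    # Underfitting: both scores stay low. Overfitting: train score near 1.0, test score much lower.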
Cross-Validation Techniques

• Importance of cross-validation in evaluating model performance:
• K-Fold Cross-Validation
• Leave-One-Out Cross-Validation (LOOCV)
• Benefits include reducing overfitting and providing a better estimate of model performance (a K-Fold sketch follows).
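
A minimal K-Fold sketch, assuming scikit-learn; five folds and a logistic-regression model are illustrative choices. LOOCV can be run the same way by passing LeaveOneOut() as the cv argument.

    # Hypothetical sketch: K-Fold cross-validation with scikit-learn.
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import KFold, cross_val_score

    X, y = load_iris(return_X_y=True)
    model = LogisticRegression(max_iter=1000)

    # 5-fold CV: train on 4 folds, validate on the held-out fold, repeat 5 times.
    cv = KFold(n_splits=5, shuffle=True, random_state=0)
    scores = cross_val_score(model, X, y, cv=cv)
    print("Fold accuracies:", scores)
    print("Mean accuracy  :", scores.mean())

Each fold's score comes from data the model did not train on, which is why the mean of the fold scores is a less optimistic estimate than a single training-set score.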
