Class XII Model Life Cycle

Notes: Model Life Cycle of AI

Q1. What is model life cycle of AI?


Ans. The AI life cycle involves various stages, from data collection, data analysis, feature engineering
and algorithm selection to model building, tuning, testing, deployment, management, monitoring and
feedback loops for continuous improvement. Every AI project lifecycle encompasses three main stages:
project scoping, design or build phase, and deployment in production.

Stages of the AI Project Cycle


Q2. What are the stages of AI model life cycle?

Ans. There are five stages in building an AI-powered solution; each stage is discussed in turn below.

(Figure: the five stages of an AI project cycle)


Problem scoping
Understanding the problem statement and the business constraints is essential before jumping into developing a solution. Business constraints help you realize the quality and terms of the desired solution. For example, compare two situations. Suppose you are building "Google Translate"; what are the business constraints?

1. Your model needs to understand text data.
2. The result should be as close to the intended meaning as possible and grammatically correct; slight errors are tolerable.
3. The result should be displayed in milliseconds for a better user experience.
Now compare this to an AI solution that predicts the presence of a certain tumor from CT scans. What do the business constraints look like now?

1. Your model needs to find patterns in image data (pixels).
2. The consequences of false predictions can be fatal; errors can be deadly.
3. The result can take an hour to be displayed: "take your time, just be accurate."
See how different cases lead to different demands? The type of desirable output also changes: in the first case you want a text output, while in the second you want a classification, that is, the category a patient belongs to: healthy or needing further diagnosis.

Data Acquisition
Before you can analyse data, you first need to gather it from reliable sources. Real-life data can be messy and misleading. Human entries are always prone to errors: for example, someone mistyping 30.0 as 3.00, making spelling mistakes, or labelling the data wrongly.

Data can be collected from various sources, such as:

1. Databases
2. Web pages
3. Devices like cameras and sensors (e.g. in autopilots and weather prediction)
4. Public surveys, and records of purchases, transactions, registrations and more.
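As an illustration of gathering data from such sources, the sketch below reads a small, made-up CSV snippet (simulating a file of house records; the column names and values are assumptions, not real data) and sets aside mistyped entries like the ones mentioned above:

```python
import csv
import io

# A hypothetical CSV snippet simulating collected records; in practice
# this would come from a file, database, web page, or sensor feed.
raw = io.StringIO(
    "bedrooms,floor_area,price\n"
    "3,120.5,4500000\n"
    "2,abc,3100000\n"      # a mistyped entry, as real data often has
    "4,180.0,7200000\n"
)

rows, rejected = [], []
for record in csv.DictReader(raw):
    try:
        rows.append({
            "bedrooms": int(record["bedrooms"]),
            "floor_area": float(record["floor_area"]),
            "price": float(record["price"]),
        })
    except ValueError:
        rejected.append(record)  # keep bad entries aside for inspection

print(len(rows), len(rejected))  # 2 valid rows, 1 rejected
```

Keeping the rejected rows, rather than silently dropping them, makes it easier to spot systematic entry errors later.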

Data exploration
Once you have gathered data, you need to perform operations like data cleaning to find missing values and remove useless data, and carry out basic statistical analysis such as drawing plots and comparing different features of the data set. This entire process is known as EXPLORATORY DATA ANALYSIS (EDA).
It helps you see which features are more important and what the overall trend of the data is. For example, suppose you have a data set with the features of a house (number of bedrooms, bathrooms, floor area, etc.) and the final market price of the house.

You might expect the house price to vary linearly with these features in smaller cities, and quadratically in metropolitan areas, depending on location. A sea-facing duplex near Marine Drive in Mumbai will be far more expensive than a duplex located in a rural area.
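As a sketch of such exploration, the snippet below computes basic summary statistics (mean, minimum, maximum, missing count) for a tiny, made-up housing data set; real EDA would also include plots and feature comparisons:

```python
import statistics

# A tiny, made-up housing data set (features and market price in lakh),
# used only to illustrate exploratory data analysis.
houses = [
    {"bedrooms": 2, "floor_area": 80.0,  "price": 30.0},
    {"bedrooms": 3, "floor_area": 120.0, "price": 45.0},
    {"bedrooms": 4, "floor_area": None,  "price": 70.0},  # missing value
    {"bedrooms": 3, "floor_area": 110.0, "price": 42.0},
]

for feature in ("bedrooms", "floor_area", "price"):
    values = [h[feature] for h in houses if h[feature] is not None]
    missing = len(houses) - len(values)
    print(f"{feature}: mean={statistics.mean(values):.1f} "
          f"min={min(values)} max={max(values)} missing={missing}")
```

Even this minimal pass reveals the kind of issue data cleaning must handle: one house has a missing floor area that any later model would need filled in or dropped.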

Modelling
Now, having cleaned the data and understood the basic trends, a data scientist tries to formulate an approximate mathematical relation between the features and the final market price. Feature importances are used in deciding how much each feature contributes to the final price.

The ability to mathematically describe the relationship between parameters is the heart of every AI
model.
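As a minimal sketch of such a relation, the snippet below fits price ≈ slope × floor_area + intercept using the closed-form ordinary least squares solution for a single feature (the numbers are toy data, chosen to be perfectly linear, not real prices):

```python
# Fit price ≈ slope * floor_area + intercept by ordinary least squares,
# using the closed-form solution for a single feature.
areas  = [80.0, 100.0, 120.0, 140.0]   # floor area (sq. m), toy data
prices = [32.0, 40.0, 48.0, 56.0]      # market price (lakh), toy data

n = len(areas)
mean_x = sum(areas) / n
mean_y = sum(prices) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(areas, prices))
         / sum((x - mean_x) ** 2 for x in areas))
intercept = mean_y - slope * mean_x

print(f"price = {slope:.2f} * area + {intercept:.2f}")
# For this perfectly linear toy data: slope 0.40, intercept 0.00
```

A real model would use many features and a library such as scikit-learn, but the idea is the same: find parameters that mathematically describe the relationship between inputs and output.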
Evaluation
The final task is to test the trained AI model on new, real-life data and see how it performs. Depending on the problem statement, different "loss functions" are used to measure how much error the model is making. The purpose of these functions is to provide a mathematical estimate of how far we are from making correct predictions.

Finally, if the model performs well on unseen new data, the deployment stage (using it as a service in internet applications) is started.
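Two common such measures can be sketched by hand: mean squared error for regression problems (like house prices) and accuracy for classification problems (like the tumor example). The predictions below are assumed toy values, used only to show how the error estimate is computed:

```python
# Mean squared error: average of squared differences (regression).
def mse(y_true, y_pred):
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

# Accuracy: fraction of predictions that match the true label (classification).
def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

print(mse([3.0, 5.0, 7.0], [2.5, 5.0, 8.0]))       # (0.25 + 0 + 1) / 3
print(accuracy(["healthy", "tumor", "healthy"],
               ["healthy", "healthy", "healthy"]))  # 2 of 3 correct
```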

Q3. What do you mean by the 4Ws?


Ans. The 4Ws Canvas is a helpful tool in problem scoping: a set of questions that help us understand the problem in a better, more structured way.

• Who?: Refers to who is facing the problem, who the stakeholders of the problem are, and who is affected by it.

• What?: Refers to what the problem is and what you know about it. What is the nature of the problem? Can it be explained simply? How do you know it is a problem? What evidence supports its existence? What solutions are possible in this situation? At this stage, you need to determine the exact nature of the problem.

• Where?: Relates to the context, situation or location of the problem.

• Why?: Refers to the reason we need to solve the problem, and the benefits the stakeholders and society would get from the solution.

Q4. What do you mean by designing/building the model?


Ans. Design/Building the Model
The next part of the machine learning life cycle is the design or build phase, which can take from a few days to several months depending on the nature of the project, once the pertinent tasks have been chosen and adequately scoped. In essence, the design phase is an iterative process that includes all the steps necessary to build an AI or machine learning model: data acquisition, exploration, preparation, cleaning, feature engineering, testing, and running a number of models to look for patterns in the data or predict behaviours.

a. Open languages — Python is the most popular, with R and Scala also in the mix.
b. Open frameworks — Scikit-learn, XGBoost, TensorFlow, etc.
c. Approaches and techniques — classic ML techniques, from regression all the way to state-of-the-art GANs and reinforcement learning (RL).
d. Productivity-enhancing capabilities — visual modelling and AutoAI to help with feature engineering, algorithm selection and hyperparameter optimization.
e. Development tools — DataRobot, H2O, Watson Studio, Azure ML Studio, SageMaker, Anaconda, etc.

To aid development teams, most AI development platforms include substantial documentation. Depending on the platform you choose, the relevant documentation pages include:

a. Microsoft Azure AI Platform;
b. Google Cloud AI Platform;
c. IBM Watson Developer platform;
d. BigML;
e. Infosys Nia resources.

Q5. What do you mean by model validation?


Ans. Another key success factor to consider is model validation: how will you determine, measure, and evaluate the performance of each iteration with regard to the defined ROI objective?
During this phase, you also need to evaluate the various AI development platforms, e.g.:
1. Open languages — Python is the most popular, with R and Scala also in the mix.
2. Open frameworks — Scikit-learn, XGBoost, TensorFlow, etc.
3. Approaches and techniques — classic ML techniques, from regression all the way to state-of-the-art GANs and RL.
4. Productivity-enhancing capabilities — visual modelling and AutoAI to help with feature engineering, algorithm selection and hyperparameter optimization.
5. Development tools — DataRobot, H2O, Watson Studio, Azure ML Studio, SageMaker, Anaconda, etc.
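One common way to measure each iteration on data it was not trained on is k-fold cross-validation: the data is split into k parts, and each part takes a turn as the held-out test set. The sketch below shows only the index-splitting step, assuming nothing about the model itself (libraries such as scikit-learn provide this ready-made):

```python
# A minimal sketch of k-fold cross-validation indices: each sample is
# held out exactly once, so every iteration of the model can be scored
# on data it was not trained on.
def k_fold_indices(n_samples, k):
    fold_size, remainder = divmod(n_samples, k)
    folds, start = [], 0
    for i in range(k):
        # Spread any remainder over the first few folds.
        end = start + fold_size + (1 if i < remainder else 0)
        test_idx = list(range(start, end))
        train_idx = list(range(0, start)) + list(range(end, n_samples))
        folds.append((train_idx, test_idx))
        start = end
    return folds

for train_idx, test_idx in k_fold_indices(10, 3):
    print(len(train_idx), len(test_idx))
```

Averaging the loss across the k folds gives a more stable performance estimate for each model iteration than a single train/test split.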
Q6. Why is testing required while preparing an AI model?
Ans. While the fundamental testing concepts are fully applicable in AI development projects, there are additional considerations too. These are as follows:
1. The volume of test data can be large, which presents complexities.
2. Human biases in selecting test data can adversely impact the testing phase; therefore, data validation is important.
3. Your testing team should test the AI and ML algorithms keeping model validation, successful learnability, and algorithm effectiveness in mind.
4. Regulatory compliance testing and security testing are important, since the system might deal with sensitive data; moreover, the large volume of data makes performance testing crucial.
5. You are implementing an AI solution that will need to use data from your other systems; therefore, systems integration testing assumes importance.
6. Test data should include all relevant subsets of the training data, i.e., the data you will use for training the AI system.
7. Your team must create test suites that help you validate your ML models.
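Such a test suite can be sketched with Python's unittest module. Here a hypothetical threshold rule stands in for the real trained model (a real project would load its model instead); the tests check that outputs are valid labels and that accuracy on held-out data meets a minimum bar:

```python
import unittest

# A stand-in "model": a simple threshold rule plays the role of the
# trained model, which an actual project would load here instead.
def predict(ct_scan_score):
    return "tumor" if ct_scan_score > 0.5 else "healthy"

class TestModel(unittest.TestCase):
    def test_output_labels_are_valid(self):
        # Every prediction must be one of the expected categories.
        for score in (0.0, 0.4, 0.6, 1.0):
            self.assertIn(predict(score), {"healthy", "tumor"})

    def test_accuracy_on_held_out_data(self):
        # Assumed held-out (score, true label) pairs for illustration.
        held_out = [(0.9, "tumor"), (0.1, "healthy"),
                    (0.7, "tumor"), (0.3, "healthy")]
        correct = sum(predict(x) == y for x, y in held_out)
        self.assertGreaterEqual(correct / len(held_out), 0.75)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestModel)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("all tests passed:", result.wasSuccessful())
```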
