AI Final Report
ARTIFICIAL LEARNING
DEVELOPED BY
IN PARTIAL FULFILLMENT OF
DIPLOMA IN COMPUTER TECHNOLOGY
2023-2025
CERTIFICATE
ARTIFICIAL LEARNING
IN PARTIAL FULFILLMENT OF
ACADEMIC REQUIREMENT FOR THE YEAR 2023-2025
DATE :
PLACE : Pune
CERTIFICATE
This is to certify that Mr. Ramesh Popat Shelar from Dr. D.Y. Patil Vidyapeeth, Centre for
Online Learning, having PRN No. 23050203125, has completed the Final Year Project titled
“Artificial Learning” during the Academic Year 2023-2025. The project was completed under the guidance of the undersigned.
--------------------------------------------
Project Guide
Ramesh P Shelar
ARTIFICIAL LEARNING PROJECTS
Key Learning Outcomes
Topic Covered
PROJECT DESCRIPTION
Rock paper scissors is a hand game in which each player simultaneously forms one of three moves with an
outstretched hand. These moves are “rock”, “paper”, and “scissors”, as shown in the image.
Each move wins against one shape and loses to another:
• Rock ‘crushes’ scissors but is ‘covered’ by paper.
• Paper ‘covers’ rock but is ‘cut’ by scissors.
• Scissors are ‘crushed’ by rock but ‘cut’ paper.
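Although the project itself is built with graphical blocks on the platforms listed below, the win/lose rules above can be captured in a few lines of code. The following Python sketch is purely illustrative and is not part of the block-based project; it encodes which move beats which and decides the winner of one round.

import random

# Each move beats exactly one other move.
BEATS = {
    "rock": "scissors",     # rock crushes scissors
    "paper": "rock",        # paper covers rock
    "scissors": "paper",    # scissors cut paper
}

def play_round(player_move):
    """Play one round against a random computer move and report the result."""
    computer_move = random.choice(list(BEATS))
    if player_move == computer_move:
        return "Both chose " + player_move + ": it's a tie."
    if BEATS[player_move] == computer_move:
        return player_move + " beats " + computer_move + ": you win!"
    return computer_move + " beats " + player_move + ": you lose."

print(play_round("rock"))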
PREREQUISITES
Compatible platforms:
1. Machine Learning for Kids
2. Cognimates
3. PictoBlox
THE SOLUTION
PROJECT DESCRIPTION
You have to control a robot by identifying different hand gestures from the camera. You have already built mobile robots and controlled them using a keyboard, smartphone, joystick and so on. This time, let's level up and control the 2-Wheel Drive Robot using hand gestures (a small sketch of the control logic follows the list below). The controls are:
1. If we keep our palm straight, the robot should move forward.
2. If we tilt our hand in the left direction, the robot should turn left.
3. If we tilt our hand in the right direction, the robot should turn right.
4. If we close the fist, the robot should stop moving.
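The actual project implements this logic with graphical blocks (PictoBlox, MakeCode or Scratch) and sends commands to the robot over Bluetooth. As a minimal text-based sketch of the same control logic, the Python snippet below maps a recognized gesture label to a drive command sent over a serial link; the port name, baud rate and one-character command protocol are all assumptions, not part of the original project.

import serial  # pyserial; the robot is assumed to expose a Bluetooth serial port

# Assumed one-character command protocol understood by the robot's firmware.
GESTURE_TO_COMMAND = {
    "palm_straight": "F",  # move forward
    "tilt_left": "L",      # turn left
    "tilt_right": "R",     # turn right
    "fist": "S",           # stop
}

def send_command(port, gesture):
    """Translate a recognized gesture label into a drive command for the robot."""
    command = GESTURE_TO_COMMAND.get(gesture, "S")  # default to stop when unsure
    port.write(command.encode())

# '/dev/rfcomm0' is a placeholder for the robot's Bluetooth serial port.
with serial.Serial("/dev/rfcomm0", 9600, timeout=1) as robot:
    send_command(robot, "tilt_left")  # e.g. the gesture model detected a left tilt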
Learning Outcomes:
1. Physical interaction of ML
2. Controlling robot using Bluetooth
PREREQUISITES
Compatible platforms
1. PictoBlox (for Arduino or micro:bit)
2. MakeCode (for micro:bit)
3. Scratch 3.0 (for micro:bit)
https://tinkercademy.com/tutorials/krazy-kar-v2/
FLOW CHART
The following flow chart shows an overview of the program required to implement the gesture-controlled
robot using Arduino or micro:bit on the compatible platforms. You can find various graphical blocks in
the compatible platforms to build the gesture recognizer. Go ahead!
Option 1 - Code for Arduino Robot
The following code snippets show sample PictoBlox code for building the gesture-controlled Arduino robot.
This project shows how we can even control robots with gestures using Machine Learning.
THE PROBLEM
A new pizza restaurant is opening up in town. They want to make it simple for customers to place orders.
They feel the easier it is to order the pizza, the more likely customers will be to return to the restaurant. You
will take on the role of AI designer and design a chatbot for customers to use to order pizza.
Step 2: Enter your information into the fields. Click the icons on the right for more detailed information. Enter your country of residence, your first name and your last name (surname).
Enter your institution (school) issued email address, e.g. [email protected].
If you do not have an institution issued email address, additional verification may be required. You will
receive further instructions in your email.
Enter your birth month and year, your institution name, your expected graduation, or program completion
date. If you have a valid promo code* you can apply it in the bottom field. After completing the CAPTCHA,
click “Next”.
Step 3: Please review AWS Educate’s Terms and Conditions. Selecting “Agree” allows you to proceed.
Step 4: Check your email to follow the automated email verification process.
Step 5: After email verification, you will be prompted to set a password to log into the student portal. Make
sure your password meets the required security level.
Now you are able to log in!
(The AWS Educate Starter Account (ESA) offers you free access to AWS cloud resources without requiring a
credit card for payment. With an ESA, you will receive $100 in preloaded credits at member institutions, or
$30 in preloaded credits at non-member institutions. Your AWS Educate Starter Account will renew on an
annual basis as long as you are an active student and a member of AWS Educate. Account credit is
automatically added to your AWS Educate Starter Account once every 12 months from the date your
application is approved until your graduation/ program completion date.)
THE SOLUTION
How to proceed towards a solution:
Let’s start building our chatbot! Follow along closely with the instructions. We will walk through the steps of
getting set up together, and then you’ll have the chance to build your chatbot on your own.
Project the steps on the screen using the teacher computer. Model each step for the students using the teacher
computer and projector.
Step 1. Log in to AWS Educate.
• Go to awseducate.com.
• Log in to your AWS Educate starter account.
• Click Login to AWS Educate and sign in using your AWS Educate account.
• Students should have an AWS Educate login from the “Introduction to AWS Educate” module. If
not, please review that activity and create an account.
• Once logged in to the AWS Educate portal, select AWS Account.
Step 2. On the new screen, select AWS Educate Starter Account in the center of the screen
• Utterances: These are phrases that the user types or speaks to the chatbot. For example, “Alexa, what’s the weather?” or “Alexa, what time is it?”
• Prompts: These are questions the chatbot will ask the user to gather more information. For example, if you ask Alexa what the weather is, she will need to know where you are.
• Slots: These represent data the user provides to the chatbot in response to prompts.
• Confirmations: These confirm the information the user has input; each typically requires a yes/no response from the user.
• Fulfillment: This is the logic needed to complete the intent.
• Click the Get Started button.
Step 5. Set up your chatbot. Click Custom Bot on the left side of the main menu.
• Name the bot OrderPizza. (Note: There are no spaces in the name.) Make the following initial settings:
o Select None for voice output. Students can change this later on their own if they so
choose.
o Select 5 Minutes for the session timeout. This is how long the bot should wait before
ending the chat session.
o Select No under “COPPA: Please indicate if your use of this bot is subject to the
Children’s Online Privacy Protection Act (COPPA).” (Note: We are not creating a bot that
will gather private information on children younger than 13.)
• Click the blue Create button.
The AWS Management Console will now be displayed. This is a menu for all cloud services AWS provides.
Step 4. Navigate to Amazon Lex. Click the search bar, search for Amazon Lex, and then click it.
• You can also click Machine Learning and then click Amazon Lex. The Amazon Lex product page will now be displayed.
• This page provides more information about Amazon Lex; read it before continuing.
Step 7. Build your utterances. These are phrases the user will use to start the conversation with the chatbot.
• Think about what the intent means. Use those brainstormed ideas as we build utterances, slots, and fulfillments.
• Type the following utterances:
o I’d like to Order Pizza
o I want pizza
o Pizza Please
• Look at the left side of the console in the editor. Click + next to Slot types.
• Add additional values in the Value section by hitting the + sign to add additional rows. These are the different types of pizza users can order.
o Cheese
o Pepperoni
o Veggie
o Chicken
o One or two more types of pizza
• Click Add slot to intent.
• You should see your slot added to the intent under the slot heading in the dashboard as slotOne. Click in the box with slotOne to rename this PizzaType (no spaces).
• Change the prompt from “What city?” to “What type of pizza would you like?” This tells the chatbot to ask the user what type of pizza they would like when they type the original utterance.
• Make all three slots required by clicking the Required checkbox next to each slot.
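The steps above are all performed in the Amazon Lex console. For readers who prefer code, roughly the same PizzaType slot type and OrderPizza intent could be created programmatically with the boto3 "lex-models" client (the Lex V1 model-building API). The snippet below is only an illustrative sketch of that idea; the region, slot version and fulfillment settings are assumptions, and the console walkthrough above remains the intended path.

import boto3

# Model Building Service client for Amazon Lex (V1); the region is an assumption.
lex = boto3.client("lex-models", region_name="us-east-1")

# Custom slot type holding the kinds of pizza a user can order.
lex.put_slot_type(
    name="PizzaType",
    description="Types of pizza the restaurant offers",
    enumerationValues=[
        {"value": "Cheese"},
        {"value": "Pepperoni"},
        {"value": "Veggie"},
        {"value": "Chicken"},
    ],
)

# Intent with the sample utterances from Step 7 and the PizzaType slot.
lex.put_intent(
    name="OrderPizza",
    sampleUtterances=["I'd like to Order Pizza", "I want pizza", "Pizza Please"],
    slots=[
        {
            "name": "PizzaType",
            "slotConstraint": "Required",
            "slotType": "PizzaType",
            "slotTypeVersion": "$LATEST",
            "valueElicitationPrompt": {
                "messages": [{"contentType": "PlainText",
                              "content": "What type of pizza would you like?"}],
                "maxAttempts": 2,
            },
        }
    ],
    fulfillmentActivity={"type": "ReturnIntent"},
)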
Step 11. Create a confirmation prompt. This lets the user know that the action they have requested has been completed.
• Scroll down under Slots to the Confirmation prompt section. Select the checkbox next to Confirmation prompt.
• Use the names of the slots you created: PizzaType, PickupDate, and PickupTime. Type “Is this correct: you would like to order {PizzaType} pizza on {PickupDate} at {PickupTime}?”
Note: Be sure to type the slot names exactly as they are typed in the slots section. All words, capitalizations, and spacings must match exactly between confirmation and slots.
• Set the “No” answer to a response of “Ok, your pizza order is cancelled.”
THE PROBLEM
Governments and private enterprises generate a lot of data that is publicly accessible. The government
publishes data in a number of areas that are of significant public concern such as health indicators, list of
hospitals and primary care centers, weather data and warnings, vaccinations, access to public services such as
banks and others.
We now take one example of such an area that has a significant impact on citizens. The central and state
governments publish data about COVID-19. This data is available on the Department of Public Health
websites. However, certain sections of society, such as citizens with visual or hearing impairments, find it
difficult to access the data.
The aim of this project is to build a chatbot using a voice interface to make it possible for those with visual
impairments to access the data.
PRE-REQUISITE
1. Database and SQL concepts as explained in the Database and SQL sections of the
basic and step-up modules
a. How to use SELECT and GROUP BY to query data
b. How to use mathematical functions such as aggregations, min, max and average
2. Statistics concepts as explained in the Statistics section of the basic and step-up
modules
a. Concepts such as how to calculate min, max, average
3. Time series forecasting as explained in the forecasting section of the basic and
step-up modules
a. Train and predict with a forecasting model using methods such as exponential smoothing (a minimal sketch appears after this list)
4. Machine learning models for Text Processing as explained in the text processing
section of the basic and step-up modules
a. How to train and use an OCR model or service
b. How to use text summarization
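As an illustration of the forecasting prerequisite, the sketch below trains an exponential smoothing model and predicts a week ahead using statsmodels (one possible library choice); the daily case numbers are made up purely for illustration.

import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Illustrative daily new-case counts (made-up numbers, not real data).
cases = pd.Series(
    [120, 135, 150, 160, 172, 181, 195, 210, 220, 236, 249, 260, 274, 290],
    index=pd.date_range("2021-04-01", periods=14, freq="D"),
)

# Fit an exponential smoothing model with an additive trend.
model = ExponentialSmoothing(cases, trend="add").fit()

# Forecast the next 7 days of new cases.
print(model.forecast(7).round(0))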
1. All India COVID data is available from the Indian Statistical Institute, Bangalore
website - https://www.isibang.ac.in/~athreya/incovid19/data.html
2. Government bulletins and travel advisories can be downloaded from the Ministry of Health and
Family Welfare websites.
THE SOLUTION
For example, the first question in the previous section can be rephrased as “What is the weekly average of new
cases in Karnataka?”. The app should be able to respond to both variations with the same answer.
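One way to answer such a question is to translate the recognized intent into a SQL query and run it from Python. The sketch below assumes a hypothetical covid_cases table with state, report_date and new_cases columns in the Postgres database, and placeholder connection details; the actual schema depends on how the data is loaded.

import psycopg2

# Placeholder connection details for the Postgres instance.
conn = psycopg2.connect(host="localhost", dbname="covid", user="bot", password="secret")

# Weekly average of new cases for one state, using GROUP BY on the week.
QUERY = """
    SELECT date_trunc('week', report_date) AS week,
           AVG(new_cases) AS weekly_avg_new_cases
    FROM covid_cases
    WHERE state = %s
    GROUP BY week
    ORDER BY week;
"""

with conn, conn.cursor() as cur:
    cur.execute(QUERY, ("Karnataka",))
    for week, avg_cases in cur.fetchall():
        print(week.date(), round(avg_cases, 1))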
Text summarization
1. Every state has issued SOPs (standard operating procedures) for travel
2. People find it very difficult to identify the SOP that applies to their particular travel, for example, “What
is the quarantine requirement if I am 40 years of age and travelling by flight from Delhi to
Bangalore?”
3. Travel advisories and SOPs are published by each state as PDFs on the respective health
department websites
4. Use OCR and text summarization to summarize these PDFs
5. Extend the app to provide answer to questions such as “What is the latest travel advisory for
Karnataka?”. In this case, provide the summary of the latest travel advisory from Karnataka
government.
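Steps 4 and 5 above can be prototyped locally in Python before wiring up the chatbot. The sketch below assumes pdf2image (which needs poppler), pytesseract (which needs the Tesseract engine) and a Hugging Face summarization pipeline; the file name is a placeholder, and a full solution would instead call the OCR and text services on SCP mentioned in the outline below.

from pdf2image import convert_from_path   # requires poppler
import pytesseract                        # requires the Tesseract OCR engine
from transformers import pipeline         # Hugging Face summarization pipeline

PDF_PATH = "karnataka_travel_advisory.pdf"   # placeholder file name

# 1. OCR: render each PDF page to an image and extract its text.
pages = convert_from_path(PDF_PATH)
raw_text = "\n".join(pytesseract.image_to_string(page) for page in pages)

# 2. Summarize the extracted text (truncated here to keep the example small).
summarizer = pipeline("summarization")
summary = summarizer(raw_text[:3000], max_length=120, min_length=40)
print(summary[0]["summary_text"])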
Solution Outline
This solution outline provides details about one possible solution to the problem using SAP Conversational
AI. Students are encouraged to attempt to solve the problem without looking at the solution outline. However, if
they get stuck, the outline can provide valuable hints to help them proceed.
1. Create an account – Go to https://cai.tools.sap/ and Sign Up.
2. Familiarize yourself with the steps for building a bot - https://cai.tools.sap/blog/build-your-first-bot-with-sap-conversational-ai/
3. Create an SAP Cloud Platform (SCP) Trial Account
4. Create a Postgres database instance
5. Load the data from Indian Statistical Institute into the Postgres database
6. Follow the steps in https://cai.tools.sap/blog/nodejs-chatbot-movie-bot/ with
the following changes
a. Connect to the Postgres database on SCP instead of the movie database
b. Connect to the OCR and Text services on SCP
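Step 5 (loading the Indian Statistical Institute data into Postgres) could be prototyped with pandas and SQLAlchemy as in the sketch below; the CSV file name, table name and connection string are assumptions.

import pandas as pd
from sqlalchemy import create_engine

# Placeholder connection string for the Postgres instance.
engine = create_engine("postgresql://bot:secret@localhost:5432/covid")

# The downloaded state-wise case data (file name is an assumption).
df = pd.read_csv("india_statewise_cases.csv", parse_dates=["report_date"])

# Write the data into a table the chatbot backend can query.
df.to_sql("covid_cases", engine, if_exists="replace", index=False)
print("Loaded", len(df), "rows into covid_cases")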
TOPIC 2 - ADVANCED PROJECTS FOR AI
THE PROBLEM
A major problem in India is access to fresh drinking water. Fresh water sources such as rivers and lakes are
drying up at an unprecedented pace due to various reasons including climate change, pollution and overuse.
Policy makers need data-driven evidence to formulate policies to protect freshwater resources.
Satellite images are available that show the lakes across India. These satellite images are available for a
number of years. The task is to identify the increase or decrease in the area of these lakes across time. Lakes
that have the highest decrease in area across time can be identified. These lakes can be selected for
conservation efforts.
PREREQUISITES
The following tutorials and articles can be used by the student to understand how to train and apply object
detection models.
https://www.datacamp.com/community/tutorials/object-detection-guide
https://towardsdatascience.com/airplanes-detection-for-satellite-using-faster-rcnn-d307d58353f1
https://medium.com/intel-software-innovators/ship-detection-in-satellite-images-from-scratch-849ccfcc3072
https://www.pyimagesearch.com/2018/05/14/a-gentle-guide-to-deep-learning-object-detection/
https://towardsdatascience.com/data-science-and-satellite-imagery-985229e1cd2f?gi=e93ba19f0a56
https://medium.com/data-from-the-trenches/object-detection-with-deep-learning-on-aerial-imagery-2465078db8a9
2. Register as a new user if this is your first visit to the site
3. Select category “Theme/Products”
4. Select theme “Land and Terrain”
5. Select product “OCM: Surface Water Layer Products_2D”
6. Select year 2015 and month Jan
7. Download the image for period “Jan01to02-2015”
8. Select year 2016 and month Jan
9. Download the image for period “Jan01to02-2016”
10. Unzip the image ZIP files
Each satellite image covers the entire Indian sub-continent. The images are in black and white. Water bodies
such as lakes, rivers and reservoirs are shown in white while the rest of the surface is black.
THE SOLUTION
a. Install Python 3
b. Set up TensorFlow for object detection. Follow the steps in
https://christopherstoll.org/2018/12/12/tensorflow-object-detection-mac-setup.html
c. Install OpenCV library
5. The images all belong to just one class. Create a label map with just this one class.
6. We will use transfer learning to speed up the training process. Use an object detection training
pipeline from https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/configuring_jobs.md.
You can also find sample config files at https://github.com/tensorflow/models/tree/master/research/object_detection/samples/configs.
“ssd_mobilenet_v2_coco.config” is a good starting point.
8. It is also recommended to start the evaluation job during training. You can then monitor the
progress of the training and evaluation jobs by running TensorBoard on your local machine.
9. After finishing training, export the trained model to a single file (a TensorFlow graph proto).
11. Create the images for inference by selecting the Jan 2016 image and splitting it into tiles, just as
you did for the Jan 2015 image.
12. Ensure that you split the image such that each tile from Jan 2016 covers the exact same portion
of the map as the corresponding tile from Jan 2015.
14. You will now get the bounding boxes for the detected lakes. You can calculate the areas and
compare them to the corresponding areas calculated from the 2015 image.
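Because water shows up as white pixels and land as black in these images, the per-tile water area can also be compared directly with OpenCV and NumPy, as a sanity check alongside the object-detection pipeline. In the sketch below the file names and tile size are assumptions.

import cv2
import numpy as np

TILE = 512  # tile size in pixels (an assumption)

def water_area_per_tile(path):
    """Split a black-and-white satellite image into tiles and count white (water) pixels."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, binary = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)
    rows, cols = binary.shape[0] // TILE, binary.shape[1] // TILE
    areas = np.zeros((rows, cols), dtype=np.int64)
    for r in range(rows):
        for c in range(cols):
            tile = binary[r * TILE:(r + 1) * TILE, c * TILE:(c + 1) * TILE]
            areas[r, c] = np.count_nonzero(tile)  # white pixels = water area
    return areas

# File names are placeholders for the unzipped Jan 2015 and Jan 2016 images.
area_2015 = water_area_per_tile("surface_water_jan01to02_2015.png")
area_2016 = water_area_per_tile("surface_water_jan01to02_2016.png")

# Tiles with the largest decrease in water area between the two years.
change = (area_2016 - area_2015).ravel()
worst = np.argsort(change)[:10]
for idx in worst:
    r, c = divmod(int(idx), area_2015.shape[1])
    print("tile", (r, c), "change in water pixels:", change[idx])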
THE PROBLEM
The COVID-19 pandemic crisis has forced public transport to shut down completely, preventing people from
moving around for their livelihood. In Bengaluru, around 41% of the population uses public transport buses
to move around.
During an epidemic or a pandemic, public transport is shut down partially or completely to prevent the
spread of infection. This is especially important in densely populated cities, where public transport can quickly
spread the infection.
So, is it possible to operate the buses in a safe manner, without spreading the infection?
This is the question we will try to answer using data science and epidemic models.
Our goal is to leverage data science to enable effective operation of the Bengaluru Metropolitan Transport
Corporation (BMTC) as well as contain the spread of infection in Bengaluru City.
Data Requirements
Now that we have the tools set up and running, let's identify the data required for the project. Our objective is
to operate buses safely in Bengaluru, so we need data about all buses operated by Bengaluru's city
transportation operator, BMTC. We also need data about Bengaluru itself. Note that Bengaluru city is managed
by BBMP, a city corporation, and the city is divided into wards. Each ward has a clearly defined
geographic boundary and has data about its population. So, let's get the following data:
• BMTC data (Routes, Bus Stops, …)
• BBMP ward details (Zone, Number, Population, Geo Boundaries)
• Ward wise infection data
BMTC data can be found at https://github.com/geohacker/bmtc. Data is available for 2045 routes with
information about bus stops, the latitude and longitude of each bus stop, the sequence of stops, etc.
BBMP data can be found at https://github.com/openbangalore/bangalore, and the ward-wise population data
at https://indikosh.com/city/708740/bruhat-bengaluru-mahanagara-palike
Note that the BBMP wards and their geographic boundaries are required so that we can locate all the bus stops
within each ward. This is needed to create the Origin-Destination (OD) flow matrix for urban mobility, which
is explained in a bit.
Infection data is not readily available. The one source that could be found is
https://indianexpress.com/article/cities/bangalore/covid-19-101-cases-in-bengaluru-so-far-heres-the-list-of-wards-affected-6368503/
The project will use the Python language and its libraries.
THE SOLUTION
To analyze the spread of infection, we need data about the number of people moving between these
wards on a given day using buses. So, how can we get this information?
Well, we have the latitude and longitude of every bus stop, so we can find all the bus stops in a ward, and
since we also have the bus stop sequence of each route, we know the previous and the next bus stop.
Using this information, we can find all the origin and destination bus stops. Let's start creating the OD
flow matrix. All the code shown below is available on GitHub:
https://github.com/arcexe/btcat
Notebook – 1 “covid19_data_preparation”
In the code above, we read the BMTC data, which is a CSV file, using the pandas read_csv() function. We
then get all the route numbers and their bus stops. In the for loop we iterate through each bus stop of
the route and get its latitude and longitude. We then create origin-destination combinations between all
possible bus stops using the itertools.combinations() function.
Finally, the OD combinations are stored in the od.csv file.
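A condensed sketch of that cell's logic is shown below; the exact column names of the BMTC CSV are assumptions based on the description above.

import itertools
import pandas as pd

# Read the BMTC route data (column names are assumptions).
bmtc = pd.read_csv("bmtc_routes.csv")  # route_number, bus_stop, stop_sequence, latitude, longitude

od_rows = []
for route, stops in bmtc.sort_values("stop_sequence").groupby("route_number"):
    # All ordered origin-destination pairs of bus stops along this route.
    for origin, destination in itertools.combinations(stops["bus_stop"], 2):
        od_rows.append({"route_number": route,
                        "origin_stop": origin,
                        "destination_stop": destination})

# Store the OD combinations, mirroring the od.csv file mentioned above.
pd.DataFrame(od_rows).to_csv("od.csv", index=False)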
Now that we have generated the routes for Bangalore, we need to locate every OD combination for every
single ward in Bangalore. The heatmap on the dashboard shows the OD combinations, referred to here as
route counts, which are the total number of OD combinations per ward. This cell can be used to do that.
Tip – If you wish to run this cell, note that it takes around half a day to complete, so make sure
you have not made any errors while writing the code.
This concludes the first of the four notebooks we will be viewing and understanding.
Notebook – 2 “covid19_data_processing”
This notebook is the continuation of the notebook “covid19_data_preparation”
Now that we have generated the od_summary file in the last notebook, let's make use of it.
Data is rarely perfect, and that was the case here as well: some rows were empty and some column values
were missing, for example in the ‘origin_ward’ and ‘destination_ward’ columns.
The notnull() function, as you may have guessed, keeps only the rows that are not empty or null.
The cell above filters out all rows with missing values and stores the remaining rows in the file
“od_summary_final.csv”.
CSV stands for ‘comma separated values’, which is a file format. You may have seen other extensions while
saving a file, for example ‘.txt’, ‘.pptx’, ‘.pdf’.
The “df.to_csv(..., index=False)” call gets rid of the index while writing the CSV. Let’s move on to the next
cell.
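A condensed sketch of that filtering step, assuming the column names used in the notebook:

import pandas as pd

od = pd.read_csv("od_summary.csv")

# Keep only the rows where both ward columns are present (not null).
mask = od["origin_ward"].notnull() & od["destination_ward"].notnull()
od_final = od[mask]

# index=False drops the DataFrame index while writing the CSV.
od_final.to_csv("od_summary_final.csv", index=False)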
(If you cannot view the cell properly, please go to the GitHub link mentioned above and view it there.)
We now take the file generated in the cell above (using the df.to_csv() function), load the columns and
reindex them. Reindexing here swaps the data between the two ward columns: the data under origin_ward
goes to destination_ward and vice versa. Finally, we store the result as a CSV as well.
The “rename” function is used because we have swapped the data and do not want the column names to be
wrong.
Why was this cell made? Since buses travel in both directions, we cannot assume that people travel only one
way; that is why this reversal is performed (a small sketch is given below). The result is stored as a CSV as
well, since it will be used in later cells. Let's continue.
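One way to express that reversal, sketched with the ward columns described above (the output file name is an assumption):

import pandas as pd

od = pd.read_csv("od_summary_final.csv")

# Swap the two ward columns so every pair also exists in the opposite direction;
# rename() maps each original column name independently, so this is a clean swap.
od_reversed = od.rename(columns={"origin_ward": "destination_ward",
                                 "destination_ward": "origin_ward"})

# Store the reversed direction for use in the later cells.
od_reversed.to_csv("od_summary_reversed.csv", index=False)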
Now that we have created the reversed direction, let's make a small change that will be helpful later.
Whenever we generate CSVs there is an additional column called ‘^Unnamed’, and since we do not require it,
we are going to remove it. The cell above shows how, and since we do not want this change to be lost, the
result is saved as a CSV again.
This is an example of what it might look like if we open it:
The str.contains() function is used to test whether a pattern is contained within a string of a Series or Index;
it is an Index in our case. We use contains because the unnamed column follows a pattern (a number that
increments with every row). Run this and voilà, the additional column is removed. Let's move on.
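That cleanup step can be sketched as follows (file names are placeholders):

import pandas as pd

df = pd.read_csv("od_summary_reversed.csv")

# Drop every column whose name matches the '^Unnamed' pattern left over
# from writing CSVs with the index included.
df = df.loc[:, ~df.columns.str.contains("^Unnamed")]

# Save again so the cleaned version is not lost.
df.to_csv("od_summary_clean.csv", index=False)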
What does this cell do? It makes use of groupby(). A groupby operation involves some combination of
splitting the object, applying a function, and combining the results.
That is what we did here: we took some columns and counted the ‘route_number’ values; this count is the
route_counts mentioned earlier. Because we need that result as a regular column, reset_index() is called on
the groupby result, which turns the computed values into a new column. The column destination_ward is also
renamed to ward_no. A sketch of the operation follows.
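A sketch of that groupby cell, with column names assumed from the description:

import pandas as pd

od = pd.read_csv("od_summary_clean.csv")

# Count the route numbers per destination ward; this count is the 'route_counts'
# discussed earlier. reset_index() turns the grouped result back into columns.
route_counts = (
    od.groupby("destination_ward")["route_number"]
      .count()
      .reset_index()
      .rename(columns={"destination_ward": "ward_no",
                       "route_number": "route_counts"})
)
print(route_counts.head())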
Next cell
We read a CSV containing the population of the wards (taken from the ward population source mentioned
earlier), and we convert some of the columns, ‘ward_no’, ‘origin_ward’ and ‘route_counts’, to integers, as
they are floats (decimal numbers are called floats in programming languages). As noted earlier, route counts
are nothing but the total number of OD combinations.
We then merge the columns of df with the population data and calculate the population for the origin wards.
Merging two datasets is the process of bringing them together into one, aligning the rows from each based on
common attributes or columns. We insert the column we just calculated, ‘origin_population’, into the merged
data and store it as a CSV (see the sketch below).
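A sketch of the merge; the population file layout and the file names are assumptions:

import pandas as pd

route_counts = pd.read_csv("route_counts.csv")      # ward_no, origin_ward, route_counts
population = pd.read_csv("ward_population.csv")     # ward_no, population

# Cast the float-typed columns back to integers.
for col in ["ward_no", "origin_ward", "route_counts"]:
    route_counts[col] = route_counts[col].astype(int)

# Merge on the common ward number and keep the origin ward's population.
merged = route_counts.merge(population, on="ward_no", how="left")
merged = merged.rename(columns={"population": "origin_population"})
merged.to_csv("od_with_population.csv", index=False)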
Next cell
Note – Some files used here, such as ‘proportion_data’, were generated in another notebook, which has also
been published on GitHub.
This cell takes the merged data of the two files generated in that other notebook and simply produces a CSV
containing all the data together.
Next cell
Now, with all the data we have collected, let's generate the OD matrix, which describes people's movement in
a certain area; it will be useful in other notebooks (a sketch of building it appears below). Once the OD matrix
is generated, we need to plot the graph, and that will be our final step.
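One common way to build such an OD matrix is a pivot table of counts between origin and destination wards; in the sketch below the file and column names are assumptions.

import pandas as pd

od = pd.read_csv("od_summary_clean.csv")

# Rows = origin wards, columns = destination wards,
# values = number of routes connecting them (zero where none exist).
od_matrix = od.pivot_table(index="origin_ward",
                           columns="destination_ward",
                           values="route_number",
                           aggfunc="count",
                           fill_value=0)
print(od_matrix.shape)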
Notebook – 3 “covid19_modelling”
This notebook will mainly consist of plotting the graphs using modules such as matplotlib or plotly
Plotly
Plotly allows users to import, copy and paste, or stream data to be analyzed and visualized. For analysis and
styling graphs, Plotly offers a Python sandbox (NumPy supported), datagrid, and GUI. Python scripts can be
saved, shared, and collaboratively edited in Plotly.
The Plotly Python graphing library is a scientific graphing library. Graphs can be styled with Python and a
GUI, and shared with a URL for others to view, collaborate, or save a copy.
Matplotlib
Matplotlib is one of the most popular Python packages used for data visualization. It provides an object-
oriented API that helps in embedding plots in applications using Python GUI toolkits such as PyQt,
wxPython or Tkinter. It can also be used in Python and IPython shells, Jupyter notebooks and web application
servers.
The notebook itself
Predictive modeling uses statistics to predict outcomes. Most often the event one wants to predict is in the
future, but predictive modelling can be applied to any type of unknown event, regardless of when it occurred.
In my case it was predictive modelling for the future
I have used the compartmental SIR model, which stands for Susceptible, Infected and Recovered/Removed.
• Susceptible means a person who is likely to be infected by the virus,
• Infected means people who currently have COVID-19, and
• Recovered means a person who has returned to a normal state of health after being infected with COVID-19.
The cell below calculates the SIR population from the data we have given to the code.
Since we are using that model, we have to calculate the values it requires, and the cell above does just that:
it calculates the SIR values using the model's formulas (a sketch of the standard equations is given below).
Let's continue.
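For reference, the standard SIR equations are dS/dt = -beta*S*I/N, dI/dt = beta*S*I/N - gamma*I and dR/dt = gamma*I. The sketch below solves them numerically with SciPy; the population and the beta/gamma values are illustrative, not the fitted values from the notebook.

import numpy as np
from scipy.integrate import odeint

def sir(y, t, beta, gamma, n):
    """Standard SIR derivatives for susceptible, infected, recovered."""
    s, i, r = y
    ds = -beta * s * i / n
    di = beta * s * i / n - gamma * i
    dr = gamma * i
    return ds, di, dr

# Illustrative values, not Bengaluru's actual ward data.
N = 10_000               # population
I0, R0 = 10, 0           # initial infected and recovered
S0 = N - I0 - R0
beta, gamma = 0.3, 0.1   # assumed infection and recovery rates

t = np.linspace(0, 160, 161)  # days
S, I, R = odeint(sir, (S0, I0, R0), t, args=(beta, gamma, N)).T

print("Peak infections:", int(I.max()), "on day", int(t[I.argmax()]))

These S, I and R arrays are exactly the kind of values that the plotting cell later in the notebook draws as curves.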
This cell generates the normalized SIR using the formula mentioned above. If you are wondering what
normalized means here: the values are rescaled to a common range, typically by dividing each compartment
by the total population so that S, I and R become fractions that sum to 1.
The np.newaxis object is generally used with slicing. It indicates that you want to add an
additional dimension to the array.
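A tiny example of the effect:

import numpy as np

v = np.array([0.2, 0.5, 0.3])
print(v.shape)                 # (3,)   a one-dimensional vector
print(v[:, np.newaxis].shape)  # (3, 1) the same values as a column vector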
You probably guessed what this cell does: it plots the SIR graphs, displaying the Susceptible, Infected and
Removed values as curves.
The transpose of a matrix is obtained by moving the row data to the columns and the column data to the rows.
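A tiny illustration with NumPy:

import numpy as np

m = np.array([[1, 2, 3],
              [4, 5, 6]])
print(m.shape)  # (2, 3)
print(m.T)      # rows become columns: shape (3, 2)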
The cell below determines the total number of returns and departures for the BMTC buses (this data comes
from a dataset published by BMTC, the Bangalore Metropolitan Transport Corporation).
To determine mobility we also need data about the trips made, and here is how you can get it:
The next cell calculates the average number of departures and returns, which is then used to determine the
number of trips the buses make in a single day.
The cell below helps us determine the total number of route combinations for the whole of Bangalore; we
create this column and save it as ‘route_combinations_count.csv’.
We then read the bmtc_dump and calculate the counts of departures and returns, transpose the columns of the
resulting list, and finally store them as a CSV.
The data used here is Bangalore's commercial activity data. It is used because we cannot assume that people
move around randomly: the chance of them going to work is higher than the chance of them going someplace
else. We therefore compute the proportions from these values and save them as a CSV, which will be used in
other cells.