
VISHWAKARMA GOVERNMENT ENGINEERING COLLEGE – CHANDKHEDA

Gujarat Technological University, Ahmedabad

Electronics & Communication Department

Winter 2021-22

SEMESTER – 7

Introduction of Artificial Intelligence

(Subject Code: 3171105)

Name of Student:

Enrollment Number:

Certificate

This is to certify that Shri/Ms. __________ of Branch ELECTRONICS AND COMMUNICATION, Semester 7, Enrollment No. __________ has satisfactorily completed the term work in subject code 3171105, subject name INTRODUCTION OF ARTIFICIAL INTELLIGENCE, within the four walls of Vishwakarma Government Engineering College, Chandkheda, Ahmedabad.

Date of Submission:
LIST OF EXPERIMENTS

Exp. No.  TITLE                                                                    DATE

1   Create a program using the Pandas, NumPy library that implements grouping,
    filtering, sorting, merging operations.

2   Create a program using a sample dataset (e.g., housing, finance) to
    implement a decision tree algorithm.

3   Create a program to implement a backpropagation algorithm in Python.

4   Create a program to implement a simple stock market prediction based on
    historical datasets.

5   Create a program using NumPy to implement a simple perceptron model.

6   Create a program to perform sentiment analysis on a textual dataset (Twitter
    feeds, e-commerce reviews).

7   Create a program using any machine learning framework like TensorFlow, Keras
    to implement a linear regression algorithm.

8   Create a program using any machine learning framework like TensorFlow, Keras
    to implement a simple convolutional neural network.

9   Create a program using a convolutional neural network that identifies objects like
    water bottles, caps, books, etc. using the webcam.

10  Create a program using any machine learning framework like TensorFlow, Keras
    to implement a logistic regression algorithm.

EXPERIMENT : 1

AIM : To create a program using the Pandas, NumPy library that implements grouping, filtering, sorting, and merging operations.

1) GROUPING :

Code :

import pandas as pd

data = {
    'co2': [95, 90, 99, 104, 105, 94, 99, 104],
    'model': ['Citigo', 'Fabia', 'Fiesta', 'Rapid', 'Focus', 'Mondeo', 'Octavia', 'B-Max'],
    'car': ['Skoda', 'Skoda', 'Ford', 'Skoda', 'Ford', 'Ford', 'Skoda', 'Ford']
}

df = pd.DataFrame(data)

# mean of the numeric columns per group (recent pandas versions require
# numeric_only=True here, since 'car' is a string column)
print(df.groupby(["model"]).mean(numeric_only=True))

Output :

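Note that grouping by "model" is a degenerate case here, since every model appears only once; grouping by a column with repeated values, such as "car", shows the aggregation more clearly. A small illustrative variation (not part of the original code):

# average co2 per manufacturer: one row for 'Ford' and one for 'Skoda'
print(df.groupby(["car"])["co2"].mean())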

2) FILTERING :

Code :

import numpy as np

arr = np.array([411, 422, 433, 444])

x = arr[[True, False, True, False]]
print(x)

filter_arr = arr > 422

newarr = arr[filter_arr]

print(filter_arr)
print(newarr)

filter_arr = arr % 2 == 0

newarr = arr[filter_arr]

print(filter_arr)
print(newarr)

Output :


3) SORTING :

Code :

import numpy as np

arr = np.array([345, 122, 110, 111])
print(np.sort(arr))

arr = np.array(['banana', 'cherry', 'apple'])
print(np.sort(arr))

arr = np.array([True, False, True])
print(np.sort(arr))

arr = np.array([[355, 122, 114], [555, 50, 21]])
print(np.sort(arr))



Output :



4) MERGING :

Code :

import pandas as pd

data1 = {"name": ["rohan", "vansh", "jay"], "age": [50, 40, 30]}

data2 = {"name": ["rohan", "vansh", "jay"], "age": [77, 44, 22]}

df1 = pd.DataFrame(data1)
df2 = pd.DataFrame(data2)

newdf = df1.merge(df2, how='right')

print(newdf)

Output :





CONCLUSION : From this practical, I have studied the grouping, filtering, sorting and merging operations using the Pandas and NumPy libraries.


EXPERIMENT : 2

AIM : Create a program using a sample dataset (e.g., housing, finance) to implement a decision tree algorithm.

Code :

# Load libraries
import pandas as pd
from sklearn.tree import DecisionTreeClassifier  # Import Decision Tree Classifier
from sklearn.model_selection import train_test_split  # Import train_test_split function
from sklearn import metrics  # Import scikit-learn metrics module for accuracy calculation
import sklearn.datasets as datasets  # For loading iris dataset

# Loading the iris dataset
iris = datasets.load_iris()

# Forming the iris dataframe
df = pd.DataFrame(iris.data, columns=iris.feature_names)
print(df.head(10))

# Split dataset in features and target variable
X = df
y = iris.target
print(y)

# Split dataset into training set and test set in a ratio of 0.75:0.25
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=1)

# Create Decision Tree classifier object
classifier = DecisionTreeClassifier()

# Train Decision Tree Classifier
classifier = classifier.fit(X_train, y_train)

print("Decision Tree Classifier created successfully!")

# Visualize the decision tree
from sklearn.tree import export_graphviz
from six import StringIO
from IPython.display import Image
import pydotplus

dot_data = StringIO()
export_graphviz(classifier, out_file=dot_data,
                filled=True, rounded=True, special_characters=True,
                feature_names=iris.feature_names, class_names=['0', '1', '2'])
graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
graph.write_png('iris.png')
Image(graph.create_png())

# Predict the response for the test dataset
y_pred = classifier.predict(X_test)

# Compare predicted and actual class
df = pd.DataFrame({'Predicted Class': y_pred, 'Actual Class': y_test})
print(df.head(10))

# Model Accuracy
print("Accuracy of Decision Tree Classifier:", metrics.accuracy_score(y_test, y_pred))

# Creating confusion matrix and report
from sklearn.metrics import classification_report, confusion_matrix
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))

Output :

CONCLUSION : After using various functions, we were able to train a decision tree model with 97.23% accuracy on the test data. Furthermore, we were able to verify it using different evaluation metrics.

EXPERIMENT : 3

AIM : Create a program to implement a backpropagation algorithm in Python.

THEORY :

Artificial Neural Networks:

A neural network is a group of connected I/O units where each connection has a weight associated with it. It helps you to build predictive models from large databases. This model builds upon the human nervous system. It helps you to conduct image understanding, human learning, computer speech, etc.

Backpropagation:

Backpropagation is the essence of neural network training. It is the method of fine-tuning the weights of a neural network based on the error rate obtained in the previous epoch (i.e., iteration). Proper tuning of the weights allows you to reduce error rates and make the model reliable by increasing its generalization.

Backpropagation in a neural network is short for "backward propagation of errors." It is a standard method of training artificial neural networks. This method helps calculate the gradient of a loss function with respect to all the weights in the network.

How the Backpropagation Algorithm Works

The backpropagation algorithm in a neural network computes the gradient of the loss function for a single weight by the chain rule. It efficiently computes one layer at a time, unlike a naive direct computation. It computes the gradient, but it does not define how the gradient is used. It generalizes the computation in the delta rule.

Consider the following backpropagation neural network example to understand:


1. Inputs X arrive through the preconnected path.
2. The input is modeled using real weights W. The weights are usually randomly selected.
3. Calculate the output for every neuron from the input layer, through the hidden layers, to the output layer.
4. Calculate the error in the outputs:

   Error = Actual Output – Desired Output

5. Travel back from the output layer to the hidden layer to adjust the weights such that the error is decreased.

Keep repeating the process until the desired output is achieved.
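In gradient-descent terms, step 5 updates every weight against the gradient of the loss E (a generic statement of the rule; η denotes the learning rate):

w_ij ← w_ij − η · ∂E/∂w_ij

For example, with η = 0.1 and ∂E/∂w_ij = 0.5, the weight decreases by 0.05 on that iteration.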

Why We Need Backpropagation

The most prominent advantages of backpropagation are:

● Backpropagation is fast, simple and easy to program.
● It has no parameters to tune apart from the number of inputs.
● It is a flexible method as it does not require prior knowledge about the network.
● It is a standard method that generally works well.
● It does not need any special mention of the features of the function to be learned.

Disadvantages of using Backpropagation

● The actual performance of backpropagation on a specific problem is dependent on the input data.
● The backpropagation algorithm in data mining can be quite sensitive to noisy data.
● You need to use the matrix-based approach for backpropagation instead of mini-batch.
Code:

import numpy as np

# X = (hours sleeping, hours studying), y = test score of the student
X = np.array(([2, 9], [1, 5], [3, 6]), dtype=float)
y = np.array(([92], [86], [89]), dtype=float)

# scale units
X = X / np.amax(X, axis=0)  # maximum of X array
y = y / 100  # maximum test score is 100

class NeuralNetwork(object):
    def __init__(self):
        # parameters
        self.inputSize = 2
        self.outputSize = 1
        self.hiddenSize = 3

        # weights
        self.W1 = np.random.randn(self.inputSize, self.hiddenSize)  # (2x3) weight matrix from input to hidden layer
        self.W2 = np.random.randn(self.hiddenSize, self.outputSize)  # (3x1) weight matrix from hidden to output layer

    def feedForward(self, X):
        # forward propagation through the network
        self.z = np.dot(X, self.W1)  # dot product of X (input) and first set of weights
        self.z2 = self.sigmoid(self.z)  # activation function
        self.z3 = np.dot(self.z2, self.W2)  # dot product of hidden layer (z2) and second set of weights
        output = self.sigmoid(self.z3)
        return output

    def sigmoid(self, s, deriv=False):
        if deriv == True:
            return s * (1 - s)
        return 1 / (1 + np.exp(-s))

    def backward(self, X, y, output):
        # backward propagate through the network
        self.output_error = y - output  # error in output
        self.output_delta = self.output_error * self.sigmoid(output, deriv=True)

        self.z2_error = self.output_delta.dot(self.W2.T)  # z2 error: how much our hidden layer weights contribute to output error
        self.z2_delta = self.z2_error * self.sigmoid(self.z2, deriv=True)  # applying derivative of sigmoid to z2 error

        self.W1 += X.T.dot(self.z2_delta)  # adjusting first set (input -> hidden) weights
        self.W2 += self.z2.T.dot(self.output_delta)  # adjusting second set (hidden -> output) weights

    def train(self, X, y):
        output = self.feedForward(X)
        self.backward(X, y, output)

NN = NeuralNetwork()

for i in range(1000):  # trains the NN 1000 times
    if i % 100 == 0:
        print("Loss: " + str(np.mean(np.square(y - NN.feedForward(X)))))
    NN.train(X, y)

print("Input: " + str(X))
print("Actual Output: " + str(y))
print("Loss: " + str(np.mean(np.square(y - NN.feedForward(X)))))
print("\n")
print("Predicted Output: " + str(NN.feedForward(X)))

Output :
CONCLUSION : As we can see in the output, the predicted output depends on the input values; as we increase the number of training iterations, the difference between the predicted output and the actual output decreases, and the predicted value approaches the actual output.

Artificial neural networks use backpropagation as a learning algorithm to compute gradient descent with respect to the weights.



EXPERIMENT : 4

AIM : Create a program to implement a simple stock market prediction based on historical datasets.

THEORY :

Stocks are possibly the most popular financial instrument invented for building wealth and are the centerpiece of any investment portfolio. Advances in trading technology have opened up the markets so that nowadays nearly anybody can own stocks. Over the last few decades, there has been an explosive increase in the average person's interest in the stock market.

In a financially explosive market such as the stock market, it is important to have a very accurate prediction of a future trend. Because of financial crises and recording profits, it is compulsory to have a secure prediction of the values of the stocks.

This is a simple kernel in which we will forecast stock prices using Prophet (Facebook's library for time series forecasting). However, historical prices are no indication of whether a price will go up or down. I'll rather use my own variables and use machine learning for stock price prediction rather than just using historical prices as an indication of stock price increase.

About Prophet :

Analysts who can produce high-quality forecasting data are rare. This is one of the reasons why Facebook's research team came up with an easily approachable way of using advanced concepts for time series forecasting, and Python users can easily relate to this library since it uses a Scikit-Learn-like API.

There are several characteristics of Prophet:

● Hourly, daily, or weekly observations with at least a few months (preferably a year) of history
● Strong multiple "human-scale" seasonalities: day of week and time of year
● Important holidays that occur at irregular intervals that are known in advance (e.g. the Super Bowl)
● A reasonable number of missing observations or large outliers
● Historical trend changes, for instance due to product launches or logging changes
● Trends that are non-linear growth curves, where a trend hits a natural limit or saturates

Code :
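The original code survives only as screenshots, which are not reproduced here. As a stand-in, the following is a minimal sketch of the Prophet workflow described above, not the original program; the file name stock.csv and its Date/Close columns are assumptions for illustration:

import pandas as pd
from prophet import Prophet  # on older installs: from fbprophet import Prophet

# hypothetical CSV of daily closing prices with 'Date' and 'Close' columns
df = pd.read_csv('stock.csv')

# Prophet expects exactly two columns: ds (date) and y (value to forecast)
df = df.rename(columns={'Date': 'ds', 'Close': 'y'})

model = Prophet(daily_seasonality=True)
model.fit(df)

# extend the frame one year past the last observed date and predict
future = model.make_future_dataframe(periods=365)
forecast = model.predict(future)

print(forecast[['ds', 'yhat', 'yhat_lower', 'yhat_upper']].tail())
model.plot(forecast)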

Output :

CONCLUSION :

EXPERIMENT : 5

AIM : Create a program using NumPy to implement a simple perceptron model.

THEORY :

Artificial Neural Networks (ANNs) are the newfound love of all data scientists. The field has now shifted from classical machine learning techniques towards deep learning. Neural networks mimic the human brain, which passes information through neurons. The perceptron was the first neural network to be created. It was designed by Frank Rosenblatt in 1957. The perceptron is a single-layer neural network; it is the only neural network without any hidden layer. The perceptron is used in supervised learning, generally for binary classification.
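The learning rule that the following code implements is the classic perceptron update: for each training sample, every weight is nudged in proportion to the prediction error (η is the learning rate):

w_j ← w_j + η · (target − prediction) · x_j,   and for the bias: w_0 ← w_0 + η · (target − prediction)

When the prediction is correct, the update is zero, so correctly classified samples leave the weights unchanged.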

Code :

import numpy as np

class Perceptron(object):

    def __init__(self, learning_rate=0.01, n_iter=100, random_state=1):
        self.learning_rate = learning_rate
        self.n_iter = n_iter
        self.random_state = random_state

    def fit(self, X, y):
        rand = np.random.RandomState(self.random_state)
        self.weights = rand.normal(loc=0.0, scale=0.01, size=1 + X.shape[1])
        self.errors_ = []

        for _ in range(self.n_iter):
            errors = 0
            for x, target in zip(X, y):
                update = self.learning_rate * (target - self.predict(x))
                self.weights[1:] += update * x
                self.weights[0] += update
                errors += int(update != 0.0)
            self.errors_.append(errors)
        return self

    def net_input(self, X):
        z = np.dot(X, self.weights[1:]) + self.weights[0]
        return z

    def predict(self, X):
        # binary threshold: the perceptron outputs only 1 or -1
        return np.where(self.net_input(X) >= 0, 1, -1)

from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)

import matplotlib.pyplot as plt
import numpy as np

%matplotlib inline

# columns 0 and 1 of the iris data are sepal length and sepal width
plt.scatter(X[:50, 0], X[:50, 1],
            color='green', marker='x', label='setosa')
plt.scatter(X[50:100, 0], X[50:100, 1],
            color='red', marker='o', label='versicolor')
plt.xlabel('sepal length')
plt.ylabel('sepal width')
plt.legend(loc='upper right')
plt.show()

# note: iris has 3 classes (0, 1, 2) while this perceptron is binary, so the
# number of updates per epoch does not converge to zero on the full dataset
per = Perceptron(learning_rate=0.1, n_iter=100, random_state=1)
per.fit(X, y)

plt.plot(range(1, len(per.errors_) + 1), per.errors_, marker='o')
plt.xlabel('Epochs')
plt.ylabel('Number of updates')
plt.show()

Output :

CONCLUSION : A basic implementation of the perceptron algorithm in Python to classify the flowers in the iris dataset.

EXPERIMENT : 6

AIM : Create a program to perform sentiment analysis on a textual dataset.

THEORY :

What is sentiment analysis?

Sentiment analysis is the automated process of identifying and classifying subjective information in text data. This might be an opinion, a judgment, or a feeling about a particular topic or product feature. It's also known as opinion mining, i.e., deriving the opinion or attitude of a speaker.

The most common type of sentiment analysis is 'polarity detection', which involves classifying statements as Positive, Negative or Neutral.

Why sentiment analysis?

● Business: In the marketing field, companies use it to develop their strategies, to understand customers' feelings towards products or a brand, how people respond to their campaigns or product launches, and why consumers don't buy some products.
● Politics: In the political field, it is used to keep track of political views, and to detect consistency and inconsistency between statements and actions at the government level. It can be used to predict election results as well!
● Public Actions: Sentiment analysis is also used to monitor and analyse social phenomena, for the spotting of potentially dangerous situations and determining the general mood of the blogosphere.

Some common examples of sentiment analysis are:

● Customer Feedback
● Product Analysis
● Social Media Monitoring
● Emotion Recognition
● Chatbot reactions
● Threat Detection, etc.

Code :

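The original code for this experiment also survives only as screenshots. As a stand-in, here is a minimal sketch of polarity detection using NLTK's VADER analyzer (the original used an API-based approach, as the conclusion notes); the sample reviews are invented for illustration:

import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download('vader_lexicon')  # one-time download of the VADER lexicon

sia = SentimentIntensityAnalyzer()

reviews = [
    "This product is absolutely wonderful, I love it!",
    "Terrible quality, arrived broken and support was useless.",
    "It is okay, nothing special.",
]

for text in reviews:
    scores = sia.polarity_scores(text)
    # 'compound' lies in [-1, 1]; threshold it into the three polarity classes
    if scores['compound'] > 0.05:
        label = "Positive"
    elif scores['compound'] < -0.05:
        label = "Negative"
    else:
        label = "Neutral"
    print(label, ":", text)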

Output :

CONCLUSION : Sentiment analysis using an API is a good option, but we can build our own LSTM or classic RNN to get better results on our data by changing hyperparameters and the model architecture.

EXPERIMENT : 7

AIM : Create a program using any machine learning framework like TensorFlow, Keras to implement a linear regression algorithm.

THEORY :

Linear regression attempts to model the relationship between two variables by fitting a linear equation to observed data. One variable is considered to be an explanatory variable, and the other is considered to be a dependent variable. For example, a modeler might want to relate the weights of individuals to their heights using a linear regression model.

Before attempting to fit a linear model to observed data, a modeler should first determine whether or not there is a relationship between the variables of interest. This does not necessarily imply that one variable causes the other (for example, higher SAT scores do not cause higher college grades), but that there is some significant association between the two variables. A scatterplot can be a helpful tool in determining the strength of the relationship between two variables. If there appears to be no association between the proposed explanatory and dependent variables (i.e., the scatterplot does not indicate any increasing or decreasing trends), then fitting a linear regression model to the data probably will not provide a useful model. A valuable numerical measure of association between two variables is the correlation coefficient, which is a value between -1 and 1 indicating the strength of the association of the observed data for the two variables.

A linear regression line has an equation of the form Y = a + b*X, where X is the explanatory variable and Y is the dependent variable. The slope of the line is b, and a is the intercept (the value of y when x = 0).

In higher dimensions, when we have more than one input (x), the line is called a plane or a hyperplane. The representation therefore is the form of the equation and the specific values used for the coefficients (e.g., a and b in the above example).

It is common to talk about the complexity of a regression model like linear regression. This refers to the number of coefficients used in the model.

When a coefficient becomes zero, it effectively removes the influence of the input variable on the model and therefore from the prediction made from the model (0*X = 0). This becomes relevant if you look at regularization methods that change the learning algorithm to reduce the complexity of regression models by putting pressure on the absolute size of the coefficients, driving some to zero.

The linear regression is explained using this equation in the following practical as a Jupyter notebook script.



Code :

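The original notebook survives only as screenshots, so the following is a minimal stand-in sketch of fitting Y = a + b*X with Keras on synthetic data (the data and hyperparameters are assumptions for illustration):

import numpy as np
import tensorflow as tf

# synthetic data scattered around the line y = 2x + 1
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 1)).astype("float32")
y = 2.0 * X + 1.0 + rng.normal(0, 0.1, size=(200, 1)).astype("float32")

# a single Dense unit with no activation is exactly y = a + b*x
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.1), loss="mse")
model.fit(X, y, epochs=100, verbose=0)

b, a = model.layers[0].get_weights()  # kernel (slope) and bias (intercept)
print("learned slope b =", float(b[0][0]))
print("learned intercept a =", float(a[0]))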
CONCLUSION : Despite its simplicity, linear regression is one of the most commonly used machine learning algorithms in the industry, and some companies test how well you understand it.

EXPERIMENT : 8

AIM : Create a program using any machine learning framework like TensorFlow, Keras to implement a simple convolutional neural network.

THEORY :

In the past few years, Deep Learning has proved to be a very powerful tool due to its ability to handle huge amounts of data. The use of hidden layers exceeds traditional techniques, especially for pattern recognition. One of the most popular deep neural networks is the Convolutional Neural Network (CNN). A convolutional neural network (CNN) is a type of Artificial Neural Network (ANN) used in image recognition and processing, specially designed for processing pixel data.

TensorFlow is an open source artificial intelligence library, using data flow graphs to build models. It allows developers to create large-scale neural networks with many layers. TensorFlow is mainly used for: Classification, Perception, Understanding, Discovering, Prediction and Creation.

Keras is the high-level API of TensorFlow 2: an approachable, highly-productive interface for solving machine learning problems, with a focus on modern deep learning. It provides essential abstractions and building blocks for developing and shipping machine learning solutions with high iteration velocity.

matplotlib.pyplot is a collection of functions that make matplotlib work like MATLAB. Each pyplot function makes some changes to a figure: e.g., creates a figure, creates a plotting area in a figure, plots some lines in a plotting area, decorates the plot with labels, etc.

NumPy is the fundamental package for scientific computing in Python. NumPy arrays facilitate advanced mathematical and other types of operations on large numbers of data. Typically, such operations are executed more efficiently and with less code than is possible using Python's built-in sequences.

CIFAR10 is a dataset of 50,000 32x32 color training images and 10,000 test images, labeled over 10 categories.

Label   Description

0       airplane
1       automobile
2       bird
3       cat
4       deer
5       dog
6       frog
7       horse
8       ship
9       truck

Code :

In [1]:
import tensorflow as tf
from tensorflow.keras import datasets, layers, models
import matplotlib.pyplot as plt
import numpy as np

Load the dataset

In [2]:
(X_train, y_train), (X_test, y_test) = datasets.cifar10.load_data()
X_train.shape

Out[2]:
(50000, 32, 32, 3)

In [3]:
X_test.shape

Out[3]:
(10000, 32, 32, 3)

Here we see there are 50000 training images and 10000 test images.

In [4]:
y_train.shape

Out[4]:
(50000, 1)

In [5]:
y_train[:5]

Out[5]:
array([[6],
       [9],
       [9],
       [4],
       [1]], dtype=uint8)

y_train is a 2D array; for our classification a 1D array is good enough, so we will convert it to a 1D array.

In [6]:
y_train = y_train.reshape(-1,)
y_train[:5]

Out[6]:
array([6, 9, 9, 4, 1], dtype=uint8)

In [7]:
y_test = y_test.reshape(-1,)

In [8]:
classes = ["airplane", "automobile", "bird", "cat", "deer", "dog", "frog", "horse", "ship", "truck"]

Let's plot some images to see what they are.

In [9]:
def plot_sample(X, y, index):
    plt.figure(figsize=(15, 2))
    plt.imshow(X[index])
    plt.xlabel(classes[y[index]])

In [10]:
plot_sample(X_train, y_train, 0)

In [11]:
plot_sample(X_train, y_train, 1)

Normalize the images to a number from 0 to 1. An image has 3 channels (R, G, B) and each value in a channel can range from 0 to 255. Hence, to normalize to the 0-->1 range, we need to divide by 255.

Normalizing the training data

In [12]:
X_train = X_train / 255.0
X_test = X_test / 255.0

Build a simple artificial neural network for image classification

In [13]:
ann = models.Sequential([
    layers.Flatten(input_shape=(32, 32, 3)),
    layers.Dense(3000, activation='relu'),
    layers.Dense(1000, activation='relu'),
    layers.Dense(10, activation='softmax')
])

ann.compile(optimizer='SGD',
            loss='sparse_categorical_crossentropy',
            metrics=['accuracy'])

ann.fit(X_train, y_train, epochs=5)

Epoch 1/5
1563/1563 [==============================] - 2s 2ms/step - loss: 1.8074 - accuracy: 0.3561
Epoch 2/5
1563/1563 [==============================] - 2s 1ms/step - loss: 1.6208 - accuracy: 0.4285
Epoch 3/5
1563/1563 [==============================] - 2s 2ms/step - loss: 1.5380 - accuracy: 0.4585
Epoch 4/5
1563/1563 [==============================] - 2s 2ms/step - loss: 1.4808 - accuracy: 0.4806
Epoch 5/5
1563/1563 [==============================] - 2s 2ms/step - loss: 1.4326 - accuracy: 0.4928

Out[13]:
<tensorflow.python.keras.callbacks.History at 0x295ab873c10>

You can see that at the end of 5 epochs, accuracy is at around 49%.

In [14]:
from sklearn.metrics import confusion_matrix, classification_report
import numpy as np
y_pred = ann.predict(X_test)
y_pred_classes = [np.argmax(element) for element in y_pred]
print("Classification Report: \n", classification_report(y_test, y_pred_classes))

Classification Report:
               precision    recall  f1-score   support

           0       0.63      0.45      0.53      1000
           1       0.72      0.46      0.56      1000
           2       0.33      0.46      0.39      1000
           3       0.36      0.25      0.29      1000
           4       0.44      0.37      0.40      1000
           5       0.34      0.46      0.39      1000
           6       0.56      0.47      0.51      1000
           7       0.39      0.67      0.50      1000
           8       0.64      0.60      0.62      1000
           9       0.59      0.53      0.55      1000

    accuracy                           0.47     10000
   macro avg       0.50      0.47      0.47     10000
weighted avg       0.50      0.47      0.47     10000

Now let us build a convolutional neural network to train our images.

In [15]:
cnn = models.Sequential([
    layers.Conv2D(filters=32, kernel_size=(3, 3), activation='relu', input_shape=(32, 32, 3)),
    layers.MaxPooling2D((2, 2)),

    layers.Conv2D(filters=64, kernel_size=(3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),

    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(10, activation='softmax')
])

cnn.compile(optimizer='adam',
            loss='sparse_categorical_crossentropy',
            metrics=['accuracy'])

In [17]:
cnn.fit(X_train, y_train, epochs=10)

Epoch 1/10
1563/1563 [==============================] - 2s 2ms/step - loss: 1.4407 - accuracy: 0.4810
Epoch 2/10
1563/1563 [==============================] - 2s 2ms/step - loss: 1.1084 - accuracy: 0.6109
Epoch 3/10
1563/1563 [==============================] - 2s 2ms/step - loss: 0.9895 - accuracy: 0.6574
Epoch 4/10
1563/1563 [==============================] - 2s 2ms/step - loss: 0.9071 - accuracy: 0.6870
Epoch 5/10
1563/1563 [==============================] - 2s 2ms/step - loss: 0.8416 - accuracy: 0.7097
Epoch 6/10
1563/1563 [==============================] - 2s 2ms/step - loss: 0.7847 - accuracy: 0.7262
Epoch 7/10
1563/1563 [==============================] - 2s 2ms/step - loss: 0.7350 - accuracy: 0.7448
Epoch 8/10
1563/1563 [==============================] - 2s 2ms/step - loss: 0.6941 - accuracy: 0.7574
Epoch 9/10
1563/1563 [==============================] - 2s 1ms/step - loss: 0.6516 - accuracy: 0.7731
Epoch 10/10
1563/1563 [==============================] - 2s 2ms/step - loss: 0.6187 - accuracy: 0.7836

Out[17]:
<tensorflow.python.keras.callbacks.History at 0x296555783d0>

With the CNN, accuracy was already at around 70% at the end of 5 epochs, which is a significant improvement over the ANN. CNNs are best for image classification and give superb accuracy. Computation is also much less compared to a simple ANN, as max pooling reduces the image dimensions while still preserving the features.

In [18]:
cnn.evaluate(X_test, y_test)

313/313 [==============================] - 0s 1ms/step - loss: 0.9022 - accuracy: 0.7028

Out[18]:
[0.9021560549736023, 0.7027999758720398]

In [19]:
y_pred = cnn.predict(X_test)
y_pred[:5]

Out[19]:
array([[4.3996371e-04, 3.4844263e-05, 1.5558505e-03, 8.8400185e-01,
        1.9452239e-04, 3.5314459e-02, 7.2777577e-02, 6.9044131e-06,
        5.6417785e-03, 3.2224660e-05],
       [8.1062522e-03, 5.0841425e-02, 1.2453231e-07, 5.3348430e-07,
        9.1728407e-07, 1.0009186e-08, 2.8985988e-07, 1.7532484e-09,
        9.4089705e-01, 1.5346886e-04],
       [1.7055811e-02, 1.1841061e-01, 4.6799007e-05, 2.7727904e-02,
        1.0848254e-03, 1.0896578e-03, 1.3575243e-04, 2.8652203e-04,
        7.8895986e-01, 4.5202184e-02],
       [3.1300801e-01, 1.1591638e-02, 1.1511055e-02, 3.9592334e-03,
        7.7280165e-03, 5.6289224e-05, 2.3531138e-04, 9.4204297e-06,
        6.5178138e-01, 1.1968113e-04],
       [1.3230885e-05, 2.1221960e-05, 9.2594400e-02, 3.3585075e-02,
        4.4722903e-01, 4.1028224e-03, 4.2241842e-01, 2.8064171e-05,
        6.6392668e-06, 1.0745022e-06]], dtype=float32)

In [20]:
y_classes = [np.argmax(element) for element in y_pred]
y_classes[:5]

Out[20]:
[3, 8, 8, 8, 4]

In [21]:
y_test[:5]

Out[21]:
array([3, 8, 8, 0, 6], dtype=uint8)

In [22]:
plot_sample(X_test, y_test, 3)

In [23]:
classes[y_classes[3]]

Out[23]:
'ship'
CONCLUSION : After using various libraries, we were able to classify the CIFAR10 image dataset with 78.36% training accuracy and about 70% accuracy on the test data, and learned about Convolutional Neural Networks.
EXPERIMENT : 9

AIM : Create a program using a convolutional neural network that identifies objects using the webcam.

STEP 1: Preparing the data

● For data collection and data labeling we used Google Photos. We downloaded photos for 2 classes, namely "People with Helmet" and "People without Helmet".
● Then, after downloading the data, we annotated it and classified it into separate classes.
● And drew bounding boxes over them.

(This file has been improved:
-- removed unnecessary code for faster execution and training
-- added GPU support
Note: Go to Runtime --> Change runtime type --> GPU
(change to GPU if it is None))

In [ ]:
# mount drive with colab
from google.colab import drive
drive.mount('/content/drive')

Go to this URL in a browser: https://accounts.google.com/o/oauth2/auth?client_id=...

Enter your authorization code:
··········
Mounted at /content/drive

In [ ]:
!pwd
/content

In [ ]:
# copy the zip file from drive
# give the zip file name according to yours in 5 places  # 1
!cp -r drive/'My Drive'/yolo-v2-darknet-master.zip /content/

In [ ]:
from google.colab import drive
drive.mount('/content/drive')

Go to this URL in a browser: https://accounts.google.com/o/oauth2/auth?client_id=...

Enter your authorization code:
··········
Mounted at /content/drive

In [ ]:
# unzip the file  # 2

PART 2: Model training

● We used the YOLO algorithm, a pre-trained model used for object detection.
● To train our own custom data on this model we have to make some changes in the script.
● For example, we have to change the coco.names file and add some regularisation parameters.
● Then, after doing all the changes, we have to train only the model's last layer on Google Colab.
● After running through approx. 5k epochs we stopped model training and extracted the model's weights file.

PART 3: Testing

● After downloading the model and saving it into a folder, we used OpenCV to detect the face first.
● For face detection we first convert the whole frame into a black and white (grayscale) frame for better feature extraction (see the sketch after this list).
● Then the model classifies the people into 2 separate classes: with helmet or without helmet.
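A minimal sketch of that grayscale conversion plus face detection step, using OpenCV's bundled Haar cascade (an illustrative stand-in, not the original pipeline; the helmet classification itself is done by the YOLO model trained below, and sample.jpg is a hypothetical test image):

import cv2

# load OpenCV's bundled frontal-face Haar cascade
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

frame = cv2.imread('sample.jpg')                # hypothetical test image
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # grayscale frame for better feature extraction
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# draw a box around each detected face
for (x, y, w, h) in faces:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite('faces.jpg', frame)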

In [ ]:
!pwd
/content

In [ ]:
# 3
%cd yolo-v2-darknet-master
/content/yolo-v2-darknet-master

In [ ]:
!wget https://pjreddie.com/media/files/darknet19_448.conv.23

--2020-08-30 07:47:23-- https://pjreddie.com/media/files/darknet19_448.conv.23
Resolving pjreddie.com (pjreddie.com)... 128.208.4.108
Connecting to pjreddie.com (pjreddie.com)|128.208.4.108|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 79327120 (76M) [application/octet-stream]
Saving to: 'darknet19_448.conv.23'

darknet19_448.conv. 100%[===================>] 75.65M 3.74MB/s in 18s

2020-08-30 07:47:40 (4.29 MB/s) - 'darknet19_448.conv.23' saved [79327120/79327120]

In [ ]:
!pip install tensorflow-gpu==1.15.0

In [ ]:
import tensorflow as tf
device_name = tf.test.gpu_device_name()
print(device_name)
print("'sup!'")
!/usr/local/cuda/bin/nvcc --version

/device:GPU:0
'sup!'
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Sun_Jul_28_19:07:16_PDT_2019
Cuda compilation tools, release 10.1, V10.1.243

In [ ]:
!make

In [ ]:
!pwd
/content/yolo-v2-darknet-master

In [ ]:
# 5
%cd yolo-v2-darknet-master
/content/yolo-v2-darknet-master

!./darknet detector train data/obj.data yolov2-tiny.cfg darknet19_448.conv.23 -dont_show

yolov2-tiny
layer filters size/strd(dil) input output
0 conv 16 3 x 3/ 1 416 x 416 x 3 -> 416 x 416 x 16 0.150 BF
1 max 2x 2/ 2 416 x 416 x 16 -> 208 x 208 x 16 0.003 BF
2 conv 32 3 x 3/ 1 208 x 208 x 16 -> 208 x 208 x 32 0.399 BF
3 max 2x 2/ 2 208 x 208 x 32 -> 104 x 104 x 32 0.001 BF
4 conv 64 3 x 3/ 1 104 x 104 x 32 -> 104 x 104 x 64 0.399 BF
5 max 2x 2/ 2 104 x 104 x 64 -> 52 x 52 x 64 0.001 BF
6 conv 128 3 x 3/ 1 52 x 52 x 64 -> 52 x 52 x 128 0.399 BF
7 max 2x 2/ 2 52 x 52 x 128 -> 26 x 26 x 128 0.000 BF
8 conv 256 3 x 3/ 1 26 x 26 x 128 -> 26 x 26 x 256 0.399 BF
9 max 2x 2/ 2 26 x 26 x 256 -> 13 x 13 x 256 0.000 BF
10 conv 512 3 x 3/ 1 13 x 13 x 256 -> 13 x 13 x 512 0.399 BF
11 max 2x 2/ 1 13 x 13 x 512 -> 13 x 13 x 512 0.000 BF
12 conv 1024 3 x 3/ 1 13 x 13 x 512 -> 13 x 13 x1024 1.595 BF
13 conv 512 3 x 3/ 1 13 x 13 x1024 -> 13 x 13 x 512 1.595 BF
14 conv 35 1 x 1/ 1 13 x 13 x 512 -> 13 x 13 x 35 0.006 BF
15 detection
mask_scale: Using default '1.000000'
Total BFLOPS 5.345
Allocate additional workspace_size = 24.92 MB
Loading weights from darknet19_448.conv.23...
seen 32
Done! Loaded 16 layers from weights-file
Learning Rate: 0.001, Momentum: 0.9, Decay: 0.0005
Resizing 608 x 608
try to allocate additional workspace_size = 53.23 MB
CUDA allocate done!
Loaded: 5.595350 seconds
Region Avg IOU: 0.627353, Class: 0.500803, Obj: 0.499625, No Obj: 0.500780, Avg Recall: 0.900000, count: 10
Region Avg IOU: 0.538402, Class: 0.500120, Obj: 0.499223, No Obj: 0.500780, Avg Recall: 0.700000, count: 10
Region Avg IOU: 0.556594, Class: 0.499588, Obj: 0.499590, No Obj: 0.500779, Avg Recall: 0.750000, count: 8
Region Avg IOU: 0.522307, Class: 0.499648, Obj: 0.500532, No Obj: 0.500781, Avg Recall: 0.307692, count: 13
Region Avg IOU: 0.511505, Class: 0.499404, Obj: 0.499507, No Obj: 0.500774, Avg Recall: 0.461538, count: 13
Region Avg IOU: 0.560672, Class: 0.499632, Obj: 0.500007, No Obj: 0.500779, Avg Recall: 0.750000, count: 12
Region Avg IOU: 0.548019, Class: 0.500677, Obj: 0.499954, No Obj: 0.500775, Avg Recall: 0.615385, count: 13
Region Avg IOU: 0.680718, Class: 0.500640, Obj: 0.499684, No Obj: 0.500781, Avg Recall: 0.875000, count: 8

1: 29.775316, 29.775316 avg loss, 0.000000 rate, 3.492920 seconds, 64 images
Loaded: 6.188967 seconds
Region Avg IOU: 0.550686, Class: 0.499604, Obj: 0.499484, No Obj: 0.500775, Avg Recall: 0.666667, count: 9
Region Avg IOU: 0.605319, Class: 0.499000, Obj: 0.499849, No Obj: 0.500772, Avg Recall: 0.777778, count: 9
Region Avg IOU: 0.582052, Class: 0.499254, Obj: 0.499314, No Obj: 0.500775, Avg Recall: 0.727273, count: 11
Region Avg IOU: 0.530104, Class: 0.500334, Obj: 0.499795, No Obj: 0.500789, Avg Recall: 0.555556, count: 18
Region Avg IOU: 0.587147, Class: 0.501375, Obj: 0.499689, No Obj: 0.500787, Avg Recall: 0.750000, count: 8
Region Avg IOU: 0.616064, Class: 0.500162, Obj: 0.499818, No Obj: 0.500781, Avg Recall: 0.900000, count: 10
Region Avg IOU: 0.529849, Class: 0.499673, Obj: 0.500145, No Obj: 0.500782, Avg Recall: 0.454545, count: 11
Region Avg IOU: 0.562595, Class: 0.500093, Obj: 0.499835, No Obj: 0.500772, Avg Recall: 0.636364, count: 11

2: 29.768538, 29.774639 avg loss, 0.000000 rate, 1.710242 seconds, 128 images
Loaded: 4.741398 seconds
Region Avg IOU: 0.511447, Class: 0.500700, Obj: 0.500336, No Obj: 0.500778, Avg Recall: 0.583333, count: 12
Region Avg IOU: 0.545875, Class: 0.499549, Obj: 0.499389, No Obj: 0.500773, Avg Recall: 0.583333, count: 12
Region Avg IOU: 0.583438, Class: 0.499870, Obj: 0.499768, No Obj: 0.500774, Avg Recall: 0.700000, count: 10
Region Avg IOU: 0.561381, Class: 0.500571, Obj: 0.499926, No Obj: 0.500774, Avg Recall: 0.500000, count: 10
Region Avg IOU: 0.604184, Class: 0.498997, Obj: 0.499250, No Obj: 0.500781, Avg Recall: 0.666667, count: 9
Region Avg IOU: 0.517270, Class: 0.499957, Obj: 0.499748, No Obj: 0.500776, Avg Recall: 0.500000, count: 22
Region Avg IOU: 0.655324, Class: 0.500785, Obj: 0.499735, No Obj: 0.500781, Avg Recall: 1.000000, count: 11
Region Avg IOU: 0.569523, Class: 0.500028, Obj: 0.499365, No Obj: 0.500770, Avg Recall: 0.727273, count: 11

3: 29.979342, 29.795109 avg loss, 0.000000 rate, 1.761622 seconds, 192 images
Loaded: 6.386554 seconds
Region Avg IOU: 0.539232, Class: 0.500169, Obj: 0.499703, No Obj: 0.500783, Avg Recall: 0.555556, count: 9
Region Avg IOU: 0.574748, Class: 0.499734, Obj: 0.499518, No Obj: 0.500772, Avg Recall: 0.615385, count: 13
Region Avg IOU: 0.579219, Class: 0.500285, Obj: 0.499683, No Obj: 0.500777, Avg Recall: 0.750000, count: 12
Region Avg IOU: 0.571484, Class: 0.500268, Obj: 0.499574, No Obj: 0.500777, Avg Recall: 0.636364, count: 11
Region Avg IOU: 0.581703, Class: 0.500266, Obj: 0.500234, No Obj: 0.500780, Avg Recall: 0.750000, count: 8
Region Avg IOU: 0.586029, Class: 0.500628, Obj: 0.499909, No Obj: 0.500771, Avg Recall: 0.909091, count: 11
Region Avg IOU: 0.514878, Class: 0.500312, Obj: 0.500054, No Obj: 0.500786, Avg Recall: 0.555556, count: 9
Region Avg IOU: 0.516218, Class: 0.500147, Obj: 0.499465, No Obj: 0.500773, Avg Recall: 0.555556, count: 9

4: 29.758102, 29.791409 avg loss, 0.000000 rate, 1.687349 seconds, 256 images
Artificial intelligence
Continue training

1. Replace XXXX with your iteration number.

In [ ]:
!./darknet detector train data/obj.data yolov2-tiny.cfg backup/yolov2-tiny_XXXX.weights -dont_show
yolov2-tiny
layer filters size/strd(dil) input output
0 conv 16 3 x 3/ 1 416 x 416 x 3 -> 416 x 416 x 16 0.150 BF
1 max 2x 2/ 2 416 x 416 x 16 -> 208 x 208 x 16 0.003 BF
2 conv 32 3 x 3/ 1 208 x 208 x 16 -> 208 x 208 x 32 0.399 BF
3 max 2x 2/ 2 208 x 208 x 32 -> 104 x 104 x 32 0.001 BF
4 conv 64 3 x 3/ 1 104 x 104 x 32 -> 104 x 104 x 64 0.399 BF
5 max 2x 2/ 2 104 x 104 x 64 -> 52 x 52 x 64 0.001 BF
6 conv 128 3 x 3/ 1 52 x 52 x 64 -> 52 x 52 x 128 0.399 BF
7 max 2x 2/ 2 52 x 52 x 128 -> 26 x 26 x 128 0.000 BF
8 conv 256 3 x 3/ 1 26 x 26 x 128 -> 26 x 26 x 256 0.399 BF
9 max 2x 2/ 2 26 x 26 x 256 -> 13 x 13 x 256 0.000 BF
10 conv 512 3 x 3/ 1 13 x 13 x 256 -> 13 x 13 x 512 0.399 BF
11 max 2x 2/ 1 13 x 13 x 512 -> 13 x 13 x 512 0.000 BF
12 conv 1024 3 x 3/ 1 13 x 13 x 512 -> 13 x 13 x1024 1.595 BF
13 conv 512 3 x 3/ 1 13 x 13 x1024 -> 13 x 13 x 512 1.595 BF
14 conv 35 1 x 1/ 1 13 x 13 x 512 -> 13 x 13 x 35 0.006 BF
15 detection
mask_scale: Using default '1.000000'

Total BFLOPS 5.345
Allocate additional workspace_size = 24.92 MB
Loading weights from backup/yolov2-tiny_last.weights...
seen 64
Done! Loaded 16 layers from weights-file
Learning Rate: 0.001, Momentum: 0.9, Decay: 0.0005
Resizing 608 x 608
try to allocate additional workspace_size = 53.23 MB
CUDA allocate done!
Loaded: 8.172604 seconds Region
Avg IOU: 0.587063 , Class: 0.470875, Obj: 0.026575, No Obj: 0.017508, Avg Recall: 0
.625000, count: 8
, Class: 0.538545, Obj: 0.026367, No Obj: 0.017514, Avg Recall: 0
Region Avg IOU: 0.699091
.875000, count: 8 , Class: 0.516680, Obj: 0.026260, No Obj: 0.017503, Avg Recall: 0
Region Avg IOU: 0.593985
.750000, count: 8 , Class: 0.500792, Obj: 0.027091, No Obj: 0.017506, Avg Recall: 0
Region Avg IOU: 0.496399
.500000, count: 8 , Class: 0.506804, Obj: 0.027753, No Obj: 0.017512, Avg Recall: 0
Region Avg IOU: 0.599441
.750000, count: 8 , Class: 0.538516, Obj: 0.025524, No Obj: 0.017508, Avg Recall: 0
Region Avg IOU: 0.612027
.909091, count: 11 Region , Class: 0.482512, Obj: 0.025866, No Obj: 0.017512, Avg Recall: 0
Avg IOU: 0.521956 , Class: 0.522588, Obj: 0.027358, No Obj: 0.017508, Avg Recall: 0
.625000, count: 8
Region Avg IOU: 0.619575
.750000, count: 8

901: 0.778847, 0.778847 avg loss, 0.000659 rate, 5.390234 seconds, 57664 images
Loaded: 3.186666 seconds Region
Avg IOU: 0.635049 , Class: 0.533244, Obj: 0.025559, No Obj: 0.017489, Avg Recall: 0
.875000, count: 8
, Class: 0.533279, Obj: 0.025453, No Obj: 0.017501, Avg Recall: 0
Region Avg IOU: 0.517874
.555556, count: 9 , Class: 0.537263, Obj: 0.027130, No Obj: 0.017496, Avg Recall: 0
Region Avg IOU: 0.591678
.750000, count: 8 , Class: 0.529907, Obj: 0.026649, No Obj: 0.017502, Avg Recall: 0
Region Avg IOU: 0.597649
.625000, count: 8 , Class: 0.516808, Obj: 0.028722, No Obj: 0.017496, Avg Recall: 0
Region Avg IOU: 0.585413
.750000, count: 8 , Class: 0.500452, Obj: 0.028154, No Obj: 0.017497, Avg Recall: 0
Region Avg IOU: 0.607593
, Class: 0.518584, Obj: 0.027909, No Obj: 0.017499, Avg Recall: 0
.625000, count: 8
Region Avg IOU: 0.624009 , Class: 0.512007, Obj: 0.029247, No Obj: 0.017502, Avg Recall: 0
.625000, count: 8
Region Avg IOU: 0.607235
.750000, count: 8

Artificial intelligence
902: 0.753749, 0.776337 avg loss, 0.000662 rate, 4.540176 seconds, 57728 images
Loaded: 4.361846 seconds Region
Avg IOU: 0.535653 , Class: 0.487701, Obj: 0.027125, No Obj: 0.017468, Avg Recall: 0
.750000, count: 8
, Class: 0.506410, Obj: 0.028238, No Obj: 0.017476, Avg Recall: 0
Region Avg IOU: 0.718924
.875000, count: 8 , Class: 0.515400, Obj: 0.026513, No Obj: 0.017475, Avg Recall: 0
Region Avg IOU: 0.550562
.750000, count: 8 , Class: 0.506550, Obj: 0.026936, No Obj: 0.017480, Avg Recall: 0
Region Avg IOU: 0.552143
.500000, count: 8 , Class: 0.512358, Obj: 0.028463, No Obj: 0.017478, Avg Recall: 0
Region Avg IOU: 0.607738
.875000, count: 8 , Class: 0.539708, Obj: 0.024569, No Obj: 0.017481, Avg Recall: 0
Region Avg IOU: 0.566102
, Class: 0.529610, Obj: 0.027779, No Obj: 0.017482, Avg Recall: 0
.625000, count: 8
Region Avg IOU: 0.572899 , Class: 0.512464, Obj: 0.027969, No Obj: 0.017477, Avg Recall: 0
.625000, count: 8
Region Avg IOU: 0.498766
.500000, count: 8

903: 0.756722, 0.774375 avg loss, 0.000665 rate, 4.495909 seconds, 57792 images
Loaded: 5.057903 seconds
Region Avg IOU: 0.578272, Class: 0.500932, Obj: 0.028362, No Obj: 0.017439, Avg Recall:0
.750000, count: 8
Region Avg IOU: 0.493966, Class: 0.529457, Obj: 0.026211, No Obj: 0.017455, Avg Recall:0
.500000, count: 8

Artificial intelligence
Region Avg IOU: 0.502708 , Class: 0.470803, Obj: 0.025945, No Obj: 0.017444, Avg Recall: 0
.625000, count: 8
Region Avg IOU: 0.497024 , Class: 0.538885, Obj: 0.025197, No Obj: 0.017454, Avg Recall: 0
.555556, count: 9
Region Avg IOU: 0.514969 , Class: 0.494584, Obj: 0.026162, No Obj: 0.017447, Avg Recall: 0
.500000, count: 8
Region Avg IOU: 0.559263 , Class: 0.487798, Obj: 0.027270, No Obj: 0.017451, Avg Recall: 0
.500000, count: 8
, Class: 0.493622, Obj: 0.028932, No Obj: 0.017447, Avg Recall: 0
Region Avg IOU: 0.585084
.750000, count: 8 , Class: 0.529489, Obj: 0.027082, No Obj: 0.017442, Avg Recall: 0
Region Avg IOU: 0.622322
.875000, count: 8

904: 0.827233, 0.779661 avg loss, 0.000668 rate, 4.472591 seconds, 57856 images
Loaded: 3.518058 seconds Region
Avg IOU: 0.512012 , Class: 0.506623, Obj: 0.026626, No Obj: 0.017410, Avg Recall: 0
.625000, count: 8
, Class: 0.512592, Obj: 0.027801, No Obj: 0.017407, Avg Recall: 0
Region Avg IOU: 0.619694
.875000, count: 8 , Class: 0.500081, Obj: 0.026993, No Obj: 0.017414, Avg Recall: 0
Region Avg IOU: 0.530685
.500000, count: 8 , Class: 0.500337, Obj: 0.026982, No Obj: 0.017416, Avg Recall: 0
Region Avg IOU: 0.601141
.875000, count: 8 , Class: 0.513162, Obj: 0.027226, No Obj: 0.017410, Avg Recall: 0
Region Avg IOU: 0.612369
.875000, count: 8 , Class: 0.499073, Obj: 0.026793, No Obj: 0.017406, Avg Recall: 0
Region Avg IOU: 0.625290
[Per-batch "Region Avg IOU / Class / Obj / No Obj / Avg Recall / count" statistics omitted]

905: 0.825587, 0.784254 avg loss, 0.000671 rate, 4.555971 seconds, 57920 images
Loaded: 4.794248 seconds
906: 0.781782, 0.784007 avg loss, 0.000674 rate, 4.501444 seconds, 57984 images
Loaded: 5.522838 seconds
907: 0.725763, 0.778182 avg loss, 0.000677 rate, 4.534517 seconds, 58048 images
Loaded: 2.217117 seconds
908: 0.794273, 0.779791 avg loss, 0.000680 rate, 4.540304 seconds, 58112 images
Loaded: 5.422947 seconds
909: 0.718174, 0.773630 avg loss, 0.000683 rate, 4.553277 seconds, 58176 images
Loaded: 5.007428 seconds
910: 0.747126, 0.770979 avg loss, 0.000686 rate, 4.512347 seconds, 58240 images
Resizing 608 x 608
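The "Resizing 608 x 608" line is YOLO v2's multi-scale training at work: every few batches, darknet randomly picks a new input resolution (a multiple of 32, up to 608 x 608) so that the same network learns to detect objects across different image scales.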

Backup to Google Drive

In [ ]:
! cp -r /content/yolo-v2-darknet-master/backup/ /content/drive/'MyDrive'/

Get files back from Google Drive

In [ ]:
!mkdir backup

In [ ]:
! cp -r /content/drive/'My Drive'/backup/* /content/yolo-v2-darknet-master/backup/

Detecting Helmets Through the Webcam

In [ ]:
import cv2
from darkflow.net.build import TFNet
import numpy as np
import time

options = {
    'model': 'cfg/yolov2-tiny.cfg',
    'load': 'bin/yolov2-tiny_3000.weights',
    'threshold': 0.3,
    'gpu': 0.5
}

tfnet = TFNet(options)
colors = [tuple(255 * np.random.rand(3)) for _ in range(10)]

capture = cv2.VideoCapture(0)
capture.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)
capture.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)

while True:
    stime = time.time()
    ret, frame = capture.read()
    if ret:
        # Run the network on the current frame and draw each detection
        results = tfnet.return_predict(frame)
        for color, result in zip(colors, results):
            tl = (result['topleft']['x'], result['topleft']['y'])
            br = (result['bottomright']['x'], result['bottomright']['y'])
            label = result['label']
            confidence = result['confidence']
            text = '{}: {:.0f}%'.format(label, confidence * 100)
            frame = cv2.rectangle(frame, tl, br, color, 5)
            frame = cv2.putText(
                frame, text, tl, cv2.FONT_HERSHEY_COMPLEX, 1, (0, 0, 0), 2)
        cv2.imshow('frame', frame)
        print('FPS {:.1f}'.format(1 / (time.time() - stime)))
        # Press 'q' to quit
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

capture.release()
cv2.destroyAllWindows()

Conclusion : YOLO v2 is the fastest among all the YOLO versions considered here, and it uses logistic regression instead of softmax to perform multi-class classification within one anchor box.
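A quick numeric sketch of that difference (the class logits below are made up purely for illustration): softmax forces the class scores of a box to compete and sum to one, while independent logistic (sigmoid) outputs score each class on its own, so one box can be confident about several classes at once.

import numpy as np

# Made-up class logits for a single anchor box (assumed for illustration)
logits = np.array([2.0, 1.9, -1.0])

# Softmax: scores compete and must sum to 1, so only one class can "win"
softmax = np.exp(logits) / np.sum(np.exp(logits))

# Independent sigmoids: each class is scored on its own
sigmoid = 1 / (1 + np.exp(-logits))

print("softmax:", softmax)   # approx. [0.51, 0.46, 0.03]
print("sigmoid:", sigmoid)   # approx. [0.88, 0.87, 0.27]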

EXPERIMENT : 10

AIM : To implement a logistic regression algorithm using the TensorFlow and Keras libraries

THEORY :

Logistic regression is basically a supervised classification algorithm. In a classification problem, the target variable (or output), y, can take only discrete values for a given set of features (or inputs), X.

Contrary to popular belief, logistic regression IS a regression model: it builds a regression model to predict the probability that a given data entry belongs to the category numbered "1". Just as linear regression assumes that the data follow a linear function, logistic regression models the data using the sigmoid function, σ(z) = 1 / (1 + e^(−z)).
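As a minimal, self-contained sketch of this idea in Keras (the toy data, feature count and training settings below are assumptions made purely for illustration), logistic regression is just a single Dense unit with a sigmoid activation, trained with binary cross-entropy:

import numpy as np
import tensorflow as tf

# Toy data: 4 samples with 2 features each, binary labels (assumed for illustration)
X_toy = np.array([[0.1, 0.2], [0.9, 0.8], [0.2, 0.1], [0.8, 0.9]], dtype="float32")
y_toy = np.array([0, 1, 0, 1], dtype="float32")

# Logistic regression = one dense unit squashed by the sigmoid function
model = tf.keras.Sequential([
    tf.keras.layers.Dense(1, activation="sigmoid", input_shape=(2,))
])
model.compile(optimizer="sgd", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_toy, y_toy, epochs=200, verbose=0)

# Each output is the predicted probability that the sample belongs to class "1"
print(model.predict(X_toy))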

Here are the libraries required to implement a spam classifier for SMS messages:

import time
import pickle
import tqdm
import numpy as np
import tensorflow as tf

# Let TensorFlow allocate GPU memory on demand instead of grabbing it all at once
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    tf.config.experimental.set_memory_growth(gpus[0], enable=True)

from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.callbacks import ModelCheckpoint, TensorBoard
from sklearn.model_selection import train_test_split
from tensorflow.keras.layers import Embedding, LSTM, Dropout, Dense
from tensorflow.keras.models import Sequential
from tensorflow.keras.metrics import Recall, Precision

These are the hyperparameters of our model. They should be defined as constants before the model so that we can change them easily to optimize the model:

SEQUENCE_LENGTH = 100
EMBEDDING_SIZE = 100
TEST_SIZE = 0.25

BATCH_SIZE = 64
EPOCHS = 10

label2int = {"ham": 0, "spam": 1}
int2label = {0: "ham", 1: "spam"}

We have used the SMS Spam Collection dataset to train our model. It is a plain-text file in which each line contains a label followed by a sentence; the function below does some preprocessing on the data to convert it to an appropriate format for the tokenizing process.

def load_data():
    """Loads the SMS Spam Collection dataset."""
    texts, labels = [], []
    with open("SMSSpamCollection") as f:
        for line in f:
            # Each line is "<label> <message>"
            split = line.split()
            labels.append(split[0].strip())
            texts.append(' '.join(split[1:]).strip())
    return texts, labels

X, y = load_data()

In this process each word gets an integer index from the tokenizer (assigned by word frequency), so each sentence is converted from text into a sequence of numbers, as you can see for the first sentence (X[0]) below.

tokenizer = Tokenizer()
tokenizer.fit_on_texts(X)

X = tokenizer.texts_to_sequences(X)
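For instance, on a toy corpus (a sketch; the exact integer indices depend on the word frequencies of the real dataset):

demo_tokenizer = Tokenizer()
demo_tokenizer.fit_on_texts(["free prize waiting for you", "see you at home"])

# "you" appears twice, so it gets index 1; the rest are indexed in order seen
print(demo_tokenizer.texts_to_sequences(["free prize waiting for you"]))
# -> [[2, 3, 4, 5, 1]]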

Now these sequences should be converted into NumPy arrays, because TensorFlow and Keras expect array inputs. We also have to fix the length of each sentence by padding, which keeps the input shape uniform.

X = np.array(X)
y = np.array(y)
X = pad_sequences(X, maxlen=SEQUENCE_LENGTH)

X[0]

In the output we have labels like "spam" and "ham", so we have to convert them to 0/1 format:

y = [ label2int[label] for label in y ]


y = to_categorical(y)

This function splits the data into train and test datasets; here the test size is 25%.

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=TEST_SIZE,
                                                    random_state=7)  # seed assumed; the original line was truncated

print("X_train.shape:", X_train.shape)
print("X_test.shape:", X_test.shape)
print("y_train.shape:", y_train.shape)
print("y_test.shape:", y_test.shape)

Word embeddings are a type of word representation that allows words with similar meaning to have a similar representation.

Word embeddings are in fact a class of techniques where individual words are represented as real-valued vectors in a predefined vector space. Each word is mapped to one vector, and the vector values are learned in a way that resembles a neural network, hence the technique is often lumped into the field of deep learning.
def get_embedding_vectors(tokenizer, dim=100):
    # Read the pre-trained GloVe vectors into a word -> vector dictionary
    embedding_index = {}
    with open("glove.6B.100d.txt", encoding='utf8') as f:
        for line in tqdm.tqdm(f, "Reading GloVe"):
            values = line.split()
            word = values[0]
            vectors = np.asarray(values[1:], dtype='float32')
            embedding_index[word] = vectors

    # Build a matrix whose i-th row is the GloVe vector of the word with index i
    word_index = tokenizer.word_index
    embedding_matrix = np.zeros((len(word_index) + 1, dim))
    for word, i in word_index.items():
        embedding_vector = embedding_index.get(word)
        if embedding_vector is not None:
            embedding_matrix[i] = embedding_vector

    return embedding_matrix
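As a quick sanity check of the "similar meaning, similar representation" claim (a sketch; it assumes the chosen words actually occur both in the SMS vocabulary and in GloVe), related words should have a high cosine similarity:

embedding_matrix = get_embedding_vectors(tokenizer)

def cosine_similarity(u, v):
    # Cosine of the angle between two embedding vectors
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

i = tokenizer.word_index["win"]    # assumed to be in the vocabulary
j = tokenizer.word_index["prize"]  # assumed to be in the vocabulary
print(cosine_similarity(embedding_matrix[i], embedding_matrix[j]))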

This is the model architecture; we have used an LSTM to implement the text classification. LSTM stands for Long Short-Term Memory.

LSTM networks were designed specifically to overcome the long-term dependency problem faced by recurrent neural networks (RNNs) due to the vanishing gradient problem. LSTMs have feedback connections, which makes them different from traditional feedforward neural networks. This property enables LSTMs to process entire sequences of data (e.g. time series) without treating each point in the sequence independently; instead, they retain useful information about previous data in the sequence to help with the processing of new data points. As a result, LSTMs are particularly good at processing sequences of data such as text, speech and general time series.

def get_model(tokenizer, lstm_units):
    embedding_matrix = get_embedding_vectors(tokenizer)
    model = Sequential()
    # The pre-trained GloVe vectors are frozen (trainable=False)
    model.add(Embedding(len(tokenizer.word_index) + 1,
                        EMBEDDING_SIZE,
                        weights=[embedding_matrix],
                        trainable=False,
                        input_length=SEQUENCE_LENGTH))
    model.add(LSTM(lstm_units, recurrent_dropout=0.2))
    model.add(Dropout(0.3))
    model.add(Dense(2, activation="softmax"))
    model.compile(optimizer="rmsprop", loss="categorical_crossentropy",
                  metrics=["accuracy", Precision(), Recall()])
    model.summary()
    return model

model = get_model(tokenizer=tokenizer, lstm_units=128)

Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
embedding (Embedding)        (None, 100, 100)          901300
_________________________________________________________________
lstm (LSTM)                  (None, 128)               117248
_________________________________________________________________
dropout (Dropout)            (None, 128)               0
_________________________________________________________________
dense (Dense)                (None, 2)                 258
=================================================================
Total params: 1,018,806
Trainable params: 117,506
Non-trainable params: 901,300

model_checkpoint = ModelCheckpoint("spam_classifier_{val_loss:.2f}.h5",
                                   save_best_only=True, verbose=1)

model.fit(X_train, y_train,
          validation_data=(X_test, y_test),
          batch_size=BATCH_SIZE, epochs=EPOCHS,
          callbacks=[model_checkpoint],
          verbose=1)

Train on 4180 samples, validate on 1394 samples
Epoch 1/10
4160/4180 [===========================>..] - ETA: 0s - loss: 0.1636 - accuracy: 0.9404 - precision: 0.9404 - recall: 0.9404
Epoch 00001: val_loss improved from inf to 0.10619, saving model to spam_classifier_0.11.h5
4180/4180 [==============================] - 11s 3ms/sample - loss: 0.1641 - accuracy: 0.9404 - precision: 0.9404 - recall: 0.9404 - val_loss: 0.1062 - val_accuracy: 0.9613 - val_precision: 0.9613 - val_recall: 0.9613
Epoch 2/10
4160/4180 [===========================>..] - ETA: 0s - loss: 0.0982 - accuracy: 0.9668 - precision: 0.9668 - recall: 0.9668
Epoch 00002: val_loss improved from 0.10619 to 0.08249, saving model to spam_classifier_0.08.h5
4180/4180 [==============================] - 9s 2ms/sample - loss: 0.0990 - accuracy: 0.9667 - precision: 0.9667 - recall: 0.9667 - val_loss: 0.0825 - val_accuracy: 0.9720 - val_precision: 0.9720 - val_recall: 0.9720
Epoch 3/10
4160/4180 [===========================>..] - ETA: 0s - loss: 0.0748 - accuracy: 0.9757 - precision: 0.9757 - recall: 0.9757
Epoch 00003: val_loss improved from 0.08249 to 0.07019, saving model to spam_classifier_0.07.h5
4180/4180 [==============================] - 9s 2ms/sample - loss: 0.0748 - accuracy: 0.9756 - precision: 0.9756 - recall: 0.9756 - val_loss: 0.0702 - val_accuracy: 0.9763 - val_precision: 0.9763 - val_recall: 0.9763
Epoch 4/10
4160/4180 [===========================>..] - ETA: 0s - loss: 0.0636 - accuracy: 0.9803 - precision: 0.9803 - recall: 0.9803
Epoch 00004: val_loss did not improve from 0.07019
4180/4180 [==============================] - 9s 2ms/sample - loss: 0.0636 - accuracy: 0.9804 - precision: 0.9804 - recall: 0.9804 - val_accuracy: 0.9684 - val_precision: 0.9684 - val_recall: 0.9684
Epoch 5/10
4160/4180 [===========================>..] - ETA: 0s - loss: 0.0571 - accuracy: 0.9822 - precision: 0.9822 - recall: 0.9822
Epoch 00005: val_loss did not improve from 0.07019
4180/4180 [==============================] - 10s 2ms/sample - loss: 0.0570 - accuracy: 0.9823 - precision: 0.9823 - recall: 0.9823 - val_loss: 0.0713 - val_accuracy: 0.9749 - val_precision: 0.9749 - val_recall: 0.9749
Epoch 6/10
4160/4180 [===========================>..] - ETA: 0s - loss: 0.0494 - accuracy: 0.9851 - precision: 0.9851 - recall: 0.9851
Epoch 00006: val_loss improved from 0.07019 to 0.06254, saving model to spam_classifier_0.06.h5
4180/4180 [==============================] - 9s 2ms/sample - loss: 0.0494 - accuracy: 0.9852 - precision: 0.9852 - recall: 0.9852 - val_loss: 0.0625 - val_accuracy: 0.9778 - val_precision: 0.9778 - val_recall: 0.9778
Epoch 7/10
4160/4180 [===========================>..] - ETA: 0s - loss: 0.0420 - accuracy: 0.9887 –

Here is an example of the final results, showing how the model responds to each type of message:

def get_predictions(text):
    sequence = tokenizer.texts_to_sequences([text])
    sequence = pad_sequences(sequence, maxlen=SEQUENCE_LENGTH)
    prediction = model.predict(sequence)[0]
    # Return the label ("ham" or "spam") with the highest predicted probability
    return int2label[np.argmax(prediction)]

text = "You won a prize of 1,000$, click here to claim!"
get_predictions(text)

text = "Hi man, I was wondering if we can meet tomorrow."
print(get_predictions(text))
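Assuming the model has trained to a similar accuracy as in the log above, the prize message should come back labelled "spam" and the meeting message "ham".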

Conclusion: When one input can map to more than one output, you should use logistic (sigmoid) outputs instead of softmax; logistic regression is also well optimized and faster than softmax.
