
Sree Dattha Institute of Engineering & Science

Sheriguda(V), Ibrahimpatnam (M), Ranga Reddy District – 501510

MACHINE LEARNING LAB


(CS604PC)
LAB MANUAL

DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING


(2021-22)

Dr. M. VARAPRASAD RAO


INDEX

Name of the Experiment

1. The probability that it is Friday and that a student is absent is 3%. Since there are 5 school days in a week, the probability that it is Friday is 20%. What is the probability that a student is absent given that today is Friday? Apply Bayes' rule in Python to get the result.

2. Extract the data from the database using Python.

3. Implement k-nearest neighbours classification using Python.

4. Given the following data, which specifies classifications for nine combinations of VAR1 and VAR2, predict a classification for a case where VAR1 = 0.906 and VAR2 = 0.606, using the result of k-means clustering with 3 means (i.e., 3 centroids).

   VAR1    VAR2    CLASS
   1.713   1.586   0
   0.180   1.786   1
   0.353   1.240   1
   0.940   1.566   0
   1.486   0.759   1
   1.266   1.106   0
   1.540   0.419   1
   0.459   1.799   1
   0.773   0.186   1

5. The following training examples map descriptions of individuals onto high, medium and low creditworthiness.

   Income   Recreation   Job          Status    Age group   Home owner   Risk level
   medium   skiing       design       single    twenties    no           highRisk
   high     golf         trading      married   forties     yes          lowRisk
   low      speedway     transport    married   thirties    yes          medRisk
   medium   football     banking      single    thirties    yes          lowRisk
   high     flying       media        married   fifties     yes          highRisk
   low      football     security     single    twenties    no           medRisk
   medium   golf         media        single    thirties    yes          medRisk
   medium   golf         transport    married   forties     yes          lowRisk
   high     skiing       banking      single    thirties    yes          highRisk
   low      golf         unemployed   married   forties     yes          highRisk

   Find the unconditional probability of 'golf' and the conditional probability of 'single' given 'medRisk' in the dataset.

6. Implement linear regression using Python.

7. Implement the Naïve Bayes theorem to classify English text.

8. Implement an algorithm to demonstrate the significance of a genetic algorithm.

9. Implement a finite words classification system using the backpropagation algorithm.

10. Extract data from a text file, an Excel file, and a file at a remote location.
#1

The probability that it is Friday and that a student is absent is 3%. Since there are 5 school days in a week, the probability that it is Friday is 20%. What is the probability that a student is absent given that today is Friday? Apply Bayes' rule in Python to get the result.
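By the definition of conditional probability, P(Absent | Friday) = P(Friday and Absent) / P(Friday) = 0.03 / 0.20 = 0.15, i.e. a 15% chance.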

Source Code:

# Bayes' rule: P(Absent | Friday) = P(Friday and Absent) / P(Friday)
joint_prob = float(input("Enter P(Friday and Absent) in %: "))
marginal_prob = float(input("Enter P(Friday) in %: "))
posterior_prob = (joint_prob / marginal_prob) * 100
print('posterior_prob', posterior_prob)

Input/Dataset: keyboard

Enter P(Friday and Absent) in %: 3
Enter P(Friday) in %: 20

Output: posterior_prob 15.0
#2

Extract the data from the database using Python.

Source Code:

import pandas as pd

# Read the dataset from a local CSV file into a DataFrame
mydata = pd.read_csv("D:\\iris.csv")
print(mydata)

Output: the contents of iris.csv printed as a DataFrame.
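The listing above reads a flat CSV file. To extract the same kind of data from an actual database, as the experiment title suggests, a minimal sketch using Python's built-in sqlite3 module follows; the database file name and the table name 'iris' are assumptions for illustration:

import sqlite3
import pandas as pd

# Connect to a local SQLite database file (file name is an assumption)
conn = sqlite3.connect("iris.db")

# Read an entire table into a DataFrame; the table name 'iris' is hypothetical
mydata = pd.read_sql_query("SELECT * FROM iris", conn)
conn.close()

print(mydata)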
#3

Implement k-nearest neighbours classification using Python.

Source Code 1 (k-means clustering on synthetic blobs, shown for contrast with the k-NN classifiers in Source Code 2):

%matplotlib inline

import matplotlib.pyplot as plt

import seaborn as sns; sns.set() # for plot styling

import numpy as np

from sklearn.datasets import make_blobs

X, y_true = make_blobs(n_samples=300, centers=3, cluster_std=0.60, random_state=0)

plt.scatter(X[:, 0], X[:, 1], s=50);

from sklearn.cluster import KMeans

kmeans = KMeans(n_clusters=4)  # the data has 3 true centers; rerun with n_clusters=3 to compare

kmeans.fit(X)

y_kmeans = kmeans.predict(X)

plt.scatter(X[:, 0], X[:, 1], c=y_kmeans, s=50, cmap='viridis')

centers = kmeans.cluster_centers_

plt.scatter(centers[:, 0], centers[:, 1], c='black', s=200, alpha=0.5);

Output: scatter plots of the clustered data with centroids marked (one run with 4 clusters, one with 3).
Source Code 2 (k-NN classification on the iris dataset; three listings: predict on the test set, score accuracy, and plot accuracy against k):

# Import necessary modules

from sklearn.neighbors import KNeighborsClassifier

from sklearn.model_selection import train_test_split

from sklearn.datasets import load_iris

# Loading data

irisData = load_iris()

# Create feature and target arrays

X = irisData.data

y = irisData.target

# Split into training and test set

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state=42)

knn = KNeighborsClassifier(n_neighbors=7)

knn.fit(X_train, y_train)

# Predict on dataset which model has not seen before

print(knn.predict(X_test))

# Import necessary modules

from sklearn.neighbors import KNeighborsClassifier

from sklearn.model_selection import train_test_split

from sklearn.datasets import load_iris

# Loading data

irisData = load_iris()

# Create feature and target arrays

X = irisData.data

y = irisData.target

# Split into training and test set

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state=42)


knn = KNeighborsClassifier(n_neighbors=7)

knn.fit(X_train, y_train)

# Calculate the accuracy of the model

print(knn.score(X_test, y_test))

# Import necessary modules

from sklearn.neighbors import KNeighborsClassifier

from sklearn.model_selection import train_test_split

from sklearn.datasets import load_iris

import numpy as np

import matplotlib.pyplot as plt

irisData = load_iris()

# Create feature and target arrays

X = irisData.data

y = irisData.target

# Split into training and test set

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state=42)

neighbors = np.arange(1, 9)

train_accuracy = np.empty(len(neighbors))

test_accuracy = np.empty(len(neighbors))

# Loop over K values

for i, k in enumerate(neighbors):
    knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit(X_train, y_train)

    # Compute training and test data accuracy
    train_accuracy[i] = knn.score(X_train, y_train)
    test_accuracy[i] = knn.score(X_test, y_test)

# Generate plot

plt.plot(neighbors, test_accuracy, label = 'Testing dataset Accuracy')

plt.plot(neighbors, train_accuracy, label = 'Training dataset Accuracy')

plt.legend()

plt.xlabel('n_neighbors')
plt.ylabel('Accuracy')

plt.show()

Output:

Predicted classes: [1 0 2 1 1 0 1 2 2 1 2 0 0 0 0 1 2 1 1 2 0 2 0 2 2 2 2 2 0 0]

Accuracy: 0.9666666666666667

Accuracy plot
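As a usage note, the trained classifier can also label a single new measurement. A minimal self-contained sketch follows; the four sample values are illustrative assumptions, not from the manual:

from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier

irisData = load_iris()
knn = KNeighborsClassifier(n_neighbors=7).fit(irisData.data, irisData.target)

# Classify one new flower: sepal length/width, petal length/width in cm
sample = [[5.1, 3.5, 1.4, 0.2]]
print(irisData.target_names[knn.predict(sample)])  # e.g. ['setosa']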
#4

Given the following data, which specifies classifications for nine combinations of VAR1 and VAR2, predict a classification for a case where VAR1 = 0.906 and VAR2 = 0.606, using the result of k-means clustering with 3 means (i.e., 3 centroids).

VAR1 VAR2 CLASS


1.713 1.586 0
0.180 1.786 1
0.353 1.240 1
0.940 1.566 0
1.486 0.759 1
1.266 1.106 0
1.540 0.419 1
0.459 1.799 1
0.773 0.186 1

#importing the libraries

import numpy as np

import matplotlib.pyplot as plt

import pandas as pd

#importing the dataset with pandas

dataset = pd.read_csv(r'E:\Sree Dattha\Machine Learning\kmeansdata.csv')

print(dataset)

x = dataset.iloc[:, [1, 2]].values

print(x)

#Finding the optimum number of clusters for k-means classification

from sklearn.cluster import KMeans

wcss = []

for i in range(1, 8):
    kmeans = KMeans(n_clusters=i, init='k-means++', max_iter=300, n_init=10, random_state=0)
    kmeans.fit(x)
    wcss.append(kmeans.inertia_)

print(wcss)

#Plotting the results onto a line graph, allowing us to observe 'The elbow'

plt.plot(range(1, 8), wcss)

plt.title('The elbow method')

plt.xlabel('Number of clusters')

plt.ylabel('WCSS') #within cluster sum of squares

plt.show()

#Applying kmeans to the dataset / Creating the kmeans classifier

kmeans = KMeans(n_clusters = 3, init = 'k-means++', max_iter = 300, n_init = 10, random_state = 0)

print(kmeans)

y_kmeans = kmeans.fit_predict(x)

print(y_kmeans)

# Visualising the clusters
plt.scatter(x[y_kmeans == 0, 0], x[y_kmeans == 0, 1], s=100, c='red', label='Cluster 0')
plt.scatter(x[y_kmeans == 1, 0], x[y_kmeans == 1, 1], s=100, c='blue', label='Cluster 1')
plt.scatter(x[y_kmeans == 2, 0], x[y_kmeans == 2, 1], s=100, c='green', label='Cluster 2')

# Plotting the centroids of the clusters
plt.scatter(kmeans.cluster_centers_[:, 0], kmeans.cluster_centers_[:, 1], s=100, c='yellow', label='Centroids')

plt.legend()
plt.show()
Output:

[4.814377555555556, 2.0707095000000004, 0.517128, 0.29037616666666666, 0.14310949999999997, 0.027429000000000002, 0.000284500000000002]

KMeans(algorithm='auto', copy_x=True, init='k-means++', max_iter=300,

n_clusters=3, n_init=10, n_jobs=None, precompute_distances='auto',

random_state=0, tol=0.0001, verbose=0)

[0 1 1 0 2 0 2 1 2]
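The listing clusters the nine training points but never answers the stated question for VAR1 = 0.906, VAR2 = 0.606. A minimal follow-up sketch, with the table hard-coded instead of read from the CSV:

import numpy as np
from sklearn.cluster import KMeans

# The nine (VAR1, VAR2) points from the table above
x = np.array([[1.713, 1.586], [0.180, 1.786], [0.353, 1.240],
              [0.940, 1.566], [1.486, 0.759], [1.266, 1.106],
              [1.540, 0.419], [0.459, 1.799], [0.773, 0.186]])

kmeans = KMeans(n_clusters=3, init='k-means++', n_init=10, random_state=0).fit(x)

# Assign the query case to the cluster of its nearest centroid
print(kmeans.predict([[0.906, 0.606]]))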
#5

The following training examples map descriptions of individuals onto high, medium and low
creditworthiness.

Income   Recreation   Job          Status    Age group   Home owner   Risk level
medium   skiing       design       single    twenties    no           highRisk
high     golf         trading      married   forties     yes          lowRisk
low      speedway     transport    married   thirties    yes          medRisk
medium   football     banking      single    thirties    yes          lowRisk
high     flying       media        married   fifties     yes          highRisk
low      football     security     single    twenties    no           medRisk
medium   golf         media        single    thirties    yes          medRisk
medium   golf         transport    married   forties     yes          lowRisk
high     skiing       banking      single    thirties    yes          highRisk
low      golf         unemployed   married   forties     yes          highRisk

Find the unconditional probability of 'golf' and the conditional probability of 'single' given 'medRisk' in the dataset.
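For reference, both answers follow directly from the table: 4 of the 10 records have recreation = golf, so P(golf) = 4/10 = 0.4; 3 records are medRisk, and 2 of those are single, so P(single | medRisk) = (2/10) / (3/10) = 2/3 ≈ 0.667.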

import pandas as pd

# Unconditional probability P(golf): share of records whose recreation is golf
dataset = pd.read_csv('credit.csv')
total = len(dataset)

golf_count = dataset.recreation.value_counts().golf
unconditional_probability = golf_count / total * 100
print('Unconditional probability:', unconditional_probability)

# Conditional probability P(single | medRisk):
# count(single and medRisk) divided by count(medRisk), not by the total
med_risk = dataset[dataset['risk'] == 'medRisk']
single_and_med_risk = med_risk[med_risk['status'] == 'single']
conditional_probability = (len(single_and_med_risk) / len(med_risk)) * 100
print('Conditional probability:', conditional_probability)

Output:

Unconditional probability: 40.0
Conditional probability: 66.66666666666666
#6

Implement linear regression using Python.

Source Code:

import numpy as np
import matplotlib.pyplot as plt

def estimate_coef(x, y):
    # number of observations/points
    n = np.size(x)

    # mean of x and y vector
    m_x = np.mean(x)
    m_y = np.mean(y)

    # calculating cross-deviation and deviation about x
    SS_xy = np.sum(y*x) - n*m_y*m_x
    SS_xx = np.sum(x*x) - n*m_x*m_x

    # calculating regression coefficients
    b_1 = SS_xy / SS_xx
    b_0 = m_y - b_1*m_x

    return (b_0, b_1)

def plot_regression_line(x, y, b):
    # plotting the actual points as scatter plot
    plt.scatter(x, y, color="m", marker="o", s=30)

    # predicted response vector
    y_pred = b[0] + b[1]*x

    # plotting the regression line
    plt.plot(x, y_pred, color="g")

    # putting labels
    plt.xlabel('x')
    plt.ylabel('y')

    # function to show plot
    plt.show()

def main():
    # observations / data
    x = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
    y = np.array([1, 3, 2, 5, 7, 8, 8, 9, 10, 12])

    # estimating coefficients
    b = estimate_coef(x, y)
    print("Estimated coefficients:\nb_0 = {}\nb_1 = {}".format(b[0], b[1]))

    # plotting regression line
    plot_regression_line(x, y, b)

if __name__ == "__main__":
    main()

Input:

x = ([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])

y = ([1, 3, 2, 5, 7, 8, 8, 9, 10, 12])

Output:

Estimated coefficients:

b_0 = 1.2363636363636363

b_1 = 1.1696969696969697

Linear Regression Plot:
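As a sanity check, the same coefficients can be recovered with scikit-learn; a minimal sketch, assuming scikit-learn is installed:

import numpy as np
from sklearn.linear_model import LinearRegression

x = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]).reshape(-1, 1)  # features as a column vector
y = np.array([1, 3, 2, 5, 7, 8, 8, 9, 10, 12])

reg = LinearRegression().fit(x, y)
print(reg.intercept_, reg.coef_[0])  # ~1.2364 and ~1.1697, matching b_0 and b_1 above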


#7

Implement the Naïve Bayes theorem to classify English text.

Naïve Bayes Classification Example-1

%matplotlib inline

import numpy as np

import matplotlib.pyplot as plt

import seaborn as sns; sns.set()

from sklearn.datasets import make_blobs

X, y = make_blobs(100, 2, centers=2, random_state=2, cluster_std=1.5)

plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='RdBu');

Output NBC Plot:


Naïve Bayes Classification Example-2

Source Code:

%matplotlib inline

import numpy as np

import matplotlib.pyplot as plt

import seaborn as sns; sns.set()

from sklearn.datasets import make_blobs

X, y = make_blobs(100, 2, centers=2, random_state=2, cluster_std=1.5)

plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='RdBu');

from sklearn.naive_bayes import GaussianNB

model = GaussianNB()

model.fit(X, y);
rng = np.random.RandomState(0)

Xnew = [-6, -14] + [14, 18] * rng.rand(2000, 2)

ynew = model.predict(Xnew)

plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='RdBu')

lim = plt.axis()

plt.scatter(Xnew[:, 0], Xnew[:, 1], c=ynew, s=20, cmap='RdBu', alpha=0.1)

plt.axis(lim);

yprob = model.predict_proba(Xnew)

print(yprob[-8:].round(2))  # posterior class probabilities for the last 8 points

Output Plot:
Gaussian naïve Bayes Classification Example-3

# load the iris dataset

from sklearn.datasets import load_iris

iris = load_iris()

# store the feature matrix (X) and response vector (y)

X = iris.data

y = iris.target

# splitting X and y into training and testing sets

from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=1)

# training the model on training set

from sklearn.naive_bayes import GaussianNB

gnb = GaussianNB()

gnb.fit(X_train, y_train)

# making predictions on the testing set

y_pred = gnb.predict(X_test)

# comparing actual response values (y_test) with predicted response values (y_pred)

from sklearn import metrics

print("Gaussian Naive Bayes model accuracy(in %):", metrics.accuracy_score(y_test, y_pred)*100)

Output:

Gaussian Naive Bayes model accuracy (in %): 95.0
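The three examples above apply Gaussian naïve Bayes to numeric data. For the stated task of classifying English text, a minimal sketch using scikit-learn's CountVectorizer and MultinomialNB follows; the sample sentences and labels are illustrative assumptions:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Tiny illustrative corpus; texts and labels are assumptions, not from the manual
texts = ["great movie, loved it", "awful plot, boring", "fantastic acting", "terrible and dull"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

vec = CountVectorizer()
X = vec.fit_transform(texts)          # bag-of-words count features
clf = MultinomialNB().fit(X, labels)  # naive Bayes over word counts

print(clf.predict(vec.transform(["boring awful plot"])))  # expected: [0]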


#8

Implement an algorithm to demonstrate the significance of a genetic algorithm.

# Genetic algorithm to evaluate a binary string based on the number of 1's in the string. Example: a bit string with a length of 20 bits scores 20 when every bit is 1 (11111111111111111111 = 20, 11111111110000000000 = 10).

Source Code:

from numpy.random import randint
from numpy.random import rand
import matplotlib.pyplot as plt

# objective function: negative count of 1s, so minimising it maximises the 1s
def onemax(x):
    return -sum(x)

# tournament selection
def selection(pop, scores, k=3):
    # first random selection
    selection_ix = randint(len(pop))
    for ix in randint(0, len(pop), k-1):
        # check if better (e.g. perform a tournament)
        if scores[ix] < scores[selection_ix]:
            selection_ix = ix
    return pop[selection_ix]

# crossover two parents to create two children
def crossover(p1, p2, r_cross):
    # children are copies of parents by default
    c1, c2 = p1.copy(), p2.copy()
    # check for recombination
    if rand() < r_cross:
        # select crossover point that is not on the end of the string
        pt = randint(1, len(p1)-2)
        # perform crossover
        c1 = p1[:pt] + p2[pt:]
        c2 = p2[:pt] + p1[pt:]
    return [c1, c2]

# mutation operator
def mutation(bitstring, r_mut):
    for i in range(len(bitstring)):
        # check for a mutation
        if rand() < r_mut:
            # flip the bit
            bitstring[i] = 1 - bitstring[i]

# genetic algorithm
def genetic_algorithm(objective, n_bits, n_iter, n_pop, r_cross, r_mut):
    # initial population of random bitstrings
    pop = [randint(0, 2, n_bits).tolist() for _ in range(n_pop)]
    # keep track of best solution
    best, best_eval = pop[0], objective(pop[0])
    # enumerate generations
    for gen in range(n_iter):
        # evaluate all candidates in the population
        scores = [objective(c) for c in pop]
        # check for new best solution
        for i in range(n_pop):
            if scores[i] < best_eval:
                best, best_eval = pop[i], scores[i]
                print(">%d, new best f(%s) = %.3f" % (gen, pop[i], scores[i]))
        # select parents
        selected = [selection(pop, scores) for _ in range(n_pop)]
        # create the next generation
        children = list()
        for i in range(0, n_pop, 2):
            # get selected parents in pairs
            p1, p2 = selected[i], selected[i+1]
            # crossover and mutation
            for c in crossover(p1, p2, r_cross):
                # mutation
                mutation(c, r_mut)
                # store for next generation
                children.append(c)
        # replace population
        pop = children
    return [best, best_eval]

# define the total iterations
n_iter = 100
# bits per candidate string
n_bits = 20
# define the population size
n_pop = 100
# crossover rate
r_cross = 0.9
# mutation rate
r_mut = 1.0 / float(n_bits)

# perform the genetic algorithm search
best, score = genetic_algorithm(onemax, n_bits, n_iter, n_pop, r_cross, r_mut)
print('Done!')
print('f(%s) = %f' % (best, score))
plt.plot(best)  # plot the bit values of the best string found
Output:

>0, new best f([1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1]) = -12.000

>0, new best f([0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1]) = -14.000

>0, new best f([0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1]) = -15.000

>1, new best f([1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1]) = -16.000

>1, new best f([1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1]) = -17.000

>2, new best f([1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1]) = -18.000

>4, new best f([1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]) = -19.000

>7, new best f([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]) = -20.000

f([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]) = -20.000000


#9

Implement a finite words classification system using the backpropagation algorithm.

Source Code:

from math import exp
from random import seed
from random import random

# Initialize a network
def initialize_network(n_inputs, n_hidden, n_outputs):
    network = list()
    hidden_layer = [{'weights': [random() for i in range(n_inputs + 1)]} for i in range(n_hidden)]
    network.append(hidden_layer)
    output_layer = [{'weights': [random() for i in range(n_hidden + 1)]} for i in range(n_outputs)]
    network.append(output_layer)
    return network

# Calculate neuron activation for an input
def activate(weights, inputs):
    activation = weights[-1]
    for i in range(len(weights)-1):
        activation += weights[i] * inputs[i]
    return activation

# Transfer neuron activation (sigmoid)
def transfer(activation):
    return 1.0 / (1.0 + exp(-activation))

# Forward propagate input to a network output
def forward_propagate(network, row):
    inputs = row
    for layer in network:
        new_inputs = []
        for neuron in layer:
            activation = activate(neuron['weights'], inputs)
            neuron['output'] = transfer(activation)
            new_inputs.append(neuron['output'])
        inputs = new_inputs
    return inputs

# Calculate the derivative of a neuron output
def transfer_derivative(output):
    return output * (1.0 - output)

# Backpropagate error and store in neurons
def backward_propagate_error(network, expected):
    for i in reversed(range(len(network))):
        layer = network[i]
        errors = list()
        if i != len(network)-1:
            for j in range(len(layer)):
                error = 0.0
                for neuron in network[i + 1]:
                    error += (neuron['weights'][j] * neuron['delta'])
                errors.append(error)
        else:
            for j in range(len(layer)):
                neuron = layer[j]
                errors.append(expected[j] - neuron['output'])
        for j in range(len(layer)):
            neuron = layer[j]
            neuron['delta'] = errors[j] * transfer_derivative(neuron['output'])

# Update network weights with error
def update_weights(network, row, l_rate):
    for i in range(len(network)):
        inputs = row[:-1]
        if i != 0:
            inputs = [neuron['output'] for neuron in network[i - 1]]
        for neuron in network[i]:
            for j in range(len(inputs)):
                neuron['weights'][j] += l_rate * neuron['delta'] * inputs[j]
            neuron['weights'][-1] += l_rate * neuron['delta']

# Train a network for a fixed number of epochs
def train_network(network, train, l_rate, n_epoch, n_outputs):
    for epoch in range(n_epoch):
        sum_error = 0
        for row in train:
            outputs = forward_propagate(network, row)
            expected = [0 for i in range(n_outputs)]
            expected[row[-1]] = 1
            sum_error += sum([(expected[i]-outputs[i])**2 for i in range(len(expected))])
            backward_propagate_error(network, expected)
            update_weights(network, row, l_rate)
        print('>epoch=%d, lrate=%.3f, error=%.3f' % (epoch, l_rate, sum_error))

# Test training backprop algorithm
seed(1)
dataset = [[2.7810836, 2.550537003, 0],
           [1.465489372, 2.362125076, 0],
           [3.396561688, 4.400293529, 0],
           [1.38807019, 1.850220317, 0],
           [3.06407232, 3.005305973, 0],
           [7.627531214, 2.759262235, 1],
           [5.332441248, 2.088626775, 1],
           [6.922596716, 1.77106367, 1],
           [8.675418651, -0.242068655, 1],
           [7.673756466, 3.508563011, 1]]
n_inputs = len(dataset[0]) - 1
n_outputs = len(set([row[-1] for row in dataset]))
network = initialize_network(n_inputs, 2, n_outputs)
train_network(network, dataset, 0.5, 20, n_outputs)
for layer in network:
    print(layer)

Output:

>epoch=0, lrate=0.500, error=6.350

>epoch=1, lrate=0.500, error=5.531

>epoch=2, lrate=0.500, error=5.221

>epoch=3, lrate=0.500, error=4.951

>epoch=4, lrate=0.500, error=4.519

>epoch=5, lrate=0.500, error=4.173

>epoch=6, lrate=0.500, error=3.835

>epoch=7, lrate=0.500, error=3.506

>epoch=8, lrate=0.500, error=3.192

>epoch=9, lrate=0.500, error=2.898

>epoch=10, lrate=0.500, error=2.626

>epoch=11, lrate=0.500, error=2.377

>epoch=12, lrate=0.500, error=2.153

>epoch=13, lrate=0.500, error=1.953

>epoch=14, lrate=0.500, error=1.774

>epoch=15, lrate=0.500, error=1.614

>epoch=16, lrate=0.500, error=1.472

>epoch=17, lrate=0.500, error=1.346

>epoch=18, lrate=0.500, error=1.233

>epoch=19, lrate=0.500, error=1.132


[{'weights': [-1.4688375095432327, 1.850887325439514, 1.0858178629550297], 'output': 0.029980305604426185, 'delta': -0.0059546604162323625}, {'weights': [0.37711098142462157, -0.0625909894552989, 0.2765123702642716], 'output': 0.9456229000211323, 'delta': 0.0026279652850863837}]

[{'weights': [2.515394649397849, -0.3391927502445985, -0.9671565426390275], 'output': 0.23648794202357587, 'delta': -0.04270059278364587}, {'weights': [-2.5584149848484263, 1.0036422106209202, 0.42383086467582715], 'output': 0.7790535202438367, 'delta': 0.03803132596437354}]
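The listing trains the network but never uses it to classify. A small follow-on sketch, reusing forward_propagate, network and dataset from above, labels each row with the most active output neuron:

# Make a class prediction with the trained network
def predict(network, row):
    outputs = forward_propagate(network, row)
    return outputs.index(max(outputs))  # index of the most active output neuron

for row in dataset:
    print('Expected=%d, Got=%d' % (row[-1], predict(network, row)))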
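#10

Extract data from a text file, an Excel file, and a file at a remote location.

Source Code (a minimal sketch using pandas; the file names and the URL are illustrative assumptions):

import pandas as pd

# Text file: comma-delimited here; adjust the delimiter to match the file
text_data = pd.read_csv("data.txt", delimiter=",")
print(text_data)

# Excel file: reading .xlsx files requires the openpyxl package
excel_data = pd.read_excel("data.xlsx")
print(excel_data)

# Remote location: pandas can read a CSV directly from a URL (URL is hypothetical)
remote_data = pd.read_csv("https://example.com/data.csv")
print(remote_data)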
Project Ideas:

1. Stock Market Price Prediction
2. Digit Classification
3. Fake News Detection
4. Music Frequency Classification
5. Bitcoin Price Predictor
6. Human Personality Prediction
7. Handwritten Character Recognition
8. Credit Card Fraud Detection
9. Sign Language Recognition
10. Speech Emotion Recognition
11. Catching Illegal Fishing
12. Online Product Recommendation
