Project Report (Minor Project)

The document describes a project to create a machine learning model for the rock-paper-scissors game using Teachable Machine. It includes: 1) Code snippet of a JavaScript model that loads the trained model and uses a webcam to make predictions. 2) Details on converting the model to Keras .h5 format to make predictions on single images. 3) A shareable link and downloaded zip file of the trained Teachable Machine model.


PROJECT REPORT
ARTIFICIAL INTELLIGENCE TRAINING PROGRAM
(MINOR PROJECT)

Submitted by:

HANSIKA KOUR
PROJECT ASSIGNED:
Create a machine learning model with Teachable Machine for the
Rock/Paper/Scissors game. Teachable Machine is a fast, easy way to create
machine learning models for sites, apps, and more. The trained model should be
able to classify rock, paper, and scissors as its three classes.

MODEL CODE

1. Code snippet of the model in JavaScript:

<div>Teachable Machine Image Model</div>
<button type="button" onclick="init()">Start</button>
<div id="webcam-container"></div>
<div id="label-container"></div>
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@1.3.1/dist/tf.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/@teachablemachine/image@0.8.3/dist/teachablemachine-image.min.js"></script>
<script type="text/javascript">
    // More API functions here:
    // https://github.com/googlecreativelab/teachablemachine-community/tree/master/libraries/image

    // The link to your model provided by the Teachable Machine export panel
    const URL = "./my_model/";

    let model, webcam, labelContainer, maxPredictions;

    // Load the image model and set up the webcam
    async function init() {
        const modelURL = URL + "model.json";
        const metadataURL = URL + "metadata.json";

        // Load the model and metadata.
        // Refer to tmImage.loadFromFiles() in the API to support files from a file picker
        // or from your local hard drive.
        // Note: the image library adds a "tmImage" object to your window (window.tmImage)
        model = await tmImage.load(modelURL, metadataURL);
        maxPredictions = model.getTotalClasses();

        // Convenience function to set up a webcam
        const flip = true; // whether to flip the webcam
        webcam = new tmImage.Webcam(200, 200, flip); // width, height, flip
        await webcam.setup(); // request access to the webcam
        await webcam.play();
        window.requestAnimationFrame(loop);

        // Append elements to the DOM
        document.getElementById("webcam-container").appendChild(webcam.canvas);
        labelContainer = document.getElementById("label-container");
        for (let i = 0; i < maxPredictions; i++) { // and class labels
            labelContainer.appendChild(document.createElement("div"));
        }
    }

    async function loop() {
        webcam.update(); // update the webcam frame
        await predict();
        window.requestAnimationFrame(loop);
    }

    // Run the webcam image through the image model
    async function predict() {
        // predict() can take in an image, video, or canvas HTML element
        const prediction = await model.predict(webcam.canvas);
        for (let i = 0; i < maxPredictions; i++) {
            const classPrediction =
                prediction[i].className + ": " + prediction[i].probability.toFixed(2);
            labelContainer.childNodes[i].innerHTML = classPrediction;
        }
    }
</script>
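Browsers only grant webcam access to pages served over HTTPS or localhost, so the exported page should not simply be opened from disk. One way to serve it locally (assuming index.html and the my_model/ folder sit in the same directory, which is the layout the code above expects) is Python's built-in server:

```shell
# Run from the folder containing index.html and my_model/
python -m http.server 8000
# then open http://localhost:8000 in the browser and click Start
```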

2. Model converted to Keras .h5 format, used to predict a single image:

from keras.models import load_model
from PIL import Image, ImageOps
import numpy as np

# Load the model
model = load_model('keras_model.h5')

# Create the array of the right shape to feed into the Keras model.
# The 'length' or number of images you can put into the array is
# determined by the first position in the shape tuple, in this case 1.
data = np.ndarray(shape=(1, 224, 224, 3), dtype=np.float32)

# Replace this with the path to your image
image = Image.open('<IMAGE_PATH>')

# Resize the image to 224x224 with the same strategy as in TM2:
# resize so the image is at least 224x224, then crop from the center.
# Image.LANCZOS replaces Image.ANTIALIAS, which was removed in Pillow 10.
size = (224, 224)
image = ImageOps.fit(image, size, Image.LANCZOS)

# Turn the image into a numpy array
image_array = np.asarray(image)

# Normalize the image to the [-1, 1] range the model was trained on
normalized_image_array = (image_array.astype(np.float32) / 127.0) - 1

# Load the image into the array
data[0] = normalized_image_array

# Run the inference
prediction = model.predict(data)
print(prediction)
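The raw prediction printed above is just an array of per-class probabilities. A minimal sketch of turning it into a class name follows; the label order and the sample probabilities below are assumptions for illustration only (the real class order comes from the labels.txt file inside the exported zip):

```python
import numpy as np

# Assumed class order; in practice, read this from the exported labels.txt
labels = ["Rock", "Paper", "Scissors"]

# model.predict(data) returns shape (1, num_classes); a sample output is
# hard-coded here so the sketch is self-contained.
prediction = np.array([[0.05, 0.10, 0.85]])

# Index of the highest-probability class for the single image in the batch
best = int(np.argmax(prediction[0]))
print(f"{labels[best]} ({prediction[0][best]:.2f})")
```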
Shareable link of the model:
https://teachablemachine.withgoogle.com/models/F97w-Nbba/

The downloaded zip file of the model is also attached along with the Word file.

OUTPUT
- Scissors
- Rock
- Paper
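The classifier only recognizes the player's hand; to complete the Rock/Paper/Scissors game, the predicted class still has to be scored against a computer move. A minimal sketch of that game logic (the function name and the random computer move are assumptions for illustration, not part of the exported model):

```python
import random

# Each key beats the move it maps to
BEATS = {"Rock": "Scissors", "Paper": "Rock", "Scissors": "Paper"}

def play_round(player_move, computer_move=None):
    """Score one round; the player's move would come from the model's prediction."""
    if computer_move is None:
        computer_move = random.choice(list(BEATS))
    if player_move == computer_move:
        return "Draw: both chose " + player_move
    winner = "Player" if BEATS[player_move] == computer_move else "Computer"
    return winner + " wins: " + player_move + " vs " + computer_move

print(play_round("Rock", "Scissors"))  # → Player wins: Rock vs Scissors
```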
