Minor Project Report Sign Language Detection
A PROJECT REPORT
DEGREE OF
INFORMATION TECHNOLOGY
UNIVERSITY, DELHI 2019-2022
DECLARATION
Sahil Sharma
(02591102019)
CERTIFICATE
This is to certify that the project entitled “SIGN LANGUAGE DETECTION”, done by Sahil
Sharma, having university enrolment number 02591102019, is an authentic work carried out
by him at SRI GURU TEGH BAHADUR INSTITUTE OF MANAGEMENT & INFORMATION
TECHNOLOGY. The matter embodied in this project work has not been submitted earlier for
the award of any degree or diploma, to the best of my knowledge and belief. The
suggestions approved by the faculty were duly incorporated.
DATE: 04/12/2021
I am extremely grateful to my guide, DR. SUPREET KAUR SAHI, for being a source of
inspiration and for her constant support in the design, implementation and evaluation of
the project. I am thankful to her for her constant constructive criticism and invaluable
suggestions, which benefited me a lot while developing the project on “SIGN LANGUAGE
DETECTION”.
She has been a constant source of inspiration and motivation for hard work, and has been
very cooperative throughout this project work, with candour and pleasure. Through this
column, it is my utmost pleasure to express my warm thanks to her for the
encouragement, cooperation and consent without which I might not have been able to
accomplish this project.
Sahil Sharma
BCA-5th SEM
ABSTRACT
The project aims at building a program that can recognize the various hand gestures used for
different signs, in order to communicate or to help us carry out some contactless interaction with the
system. We take the help of different libraries such as MediaPipe, as it has the ability to perceive
the shape and motion of hands, which is the main requirement for the program. MediaPipe Hands is
a high-fidelity hand and finger tracking solution. It employs machine learning to infer 21 3D
landmarks of a hand from just a single frame. MediaPipe Hands utilizes an ML pipeline
consisting of multiple models working together: a palm detection model that operates on the full
image and returns an oriented hand bounding box, and a hand landmark model that operates on the
cropped image region defined by the palm detector and returns high-fidelity 3D hand keypoints.
We can assign different functions to hand gestures, such as clicking images, applying filters, or
controlling the volume, using hands only. The program can also recognize some sign language gestures.
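The finger-counting idea can be illustrated with a small, self-contained sketch. It assumes a list of 21 normalized (x, y) landmark positions indexed as in the MediaPipe hand model, with y increasing downward; the helper name and the thumb-less simplification are illustrative, not part of MediaPipe itself:

```python
# Fingertip and PIP-joint landmark indices in the 21-point MediaPipe hand model.
FINGER_TIPS = {"index": 8, "middle": 12, "ring": 16, "pinky": 20}
FINGER_PIPS = {"index": 6, "middle": 10, "ring": 14, "pinky": 18}

def count_raised_fingers(landmarks):
    """Count raised fingers from a list of 21 (x, y) tuples.

    Coordinates are assumed normalized with y increasing downward, so a
    finger counts as 'raised' when its tip lies above its PIP joint.
    (The thumb is ignored here, as it needs an x-axis comparison instead.)
    """
    count = 0
    for finger, tip in FINGER_TIPS.items():
        pip = FINGER_PIPS[finger]
        if landmarks[tip][1] < landmarks[pip][1]:
            count += 1
    return count
```

In the real program these coordinates would come from the MediaPipe Hands results rather than being built by hand.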
TABLE OF CONTENTS
1. INTRODUCTION
1.1. INTRODUCTION TO THE PROJECT
2. REQUIREMENT ANALYSIS
2.1. SOFTWARE REQUIREMENT ANALYSIS
2.2. USE CASE DIAGRAM
3. SOFTWARE DESIGN
3.1. INTRODUCTION
3.2. DATA FLOW DIAGRAM (LEVEL 0)
3.3. DATA FLOW DIAGRAM (LEVEL 1)
3.4. DATA FLOW DIAGRAM (LEVEL 2)
4. TESTING
4.1. INTRODUCTION
4.2. TEST CASES
5. FUTURE WORK
6. CONCLUSION AND ACHIEVEMENTS
APPENDICES
LIST OF FIGURES
FIGURE 1. USE CASE
FIGURE 2. DATA FLOW DIAGRAM LEVEL 0
FIGURE 3. DATA FLOW DIAGRAM LEVEL 1
FIGURE 4. ER DIAGRAM
CHAPTER 1
INTRODUCTION
Introduction
The posture of the body or the gesture of the hand denotes a sign, and the movement of the body
or the hand conveys some message. Gesture can be used as a tool of communication between
computer and human. It is greatly different from the traditional hardware-based methods, and it can
accomplish human-computer interaction through gesture recognition, as we can make a lot of
different signs using our hands. Hand gesture recognition has great value in many applications
such as sign language recognition, augmented reality (virtual reality), sign language interpreters
for the disabled, and a lot more.
The objective of our project is to provide a real-time hand finger counter that will also act as a sign
language detector and will provide some functions on specific hand gestures, helping us do
some tasks hands-free, in a contactless manner, without any special hardware
requirement.
We are using Python for the project as it provides a lot of features and comes with a large
number of libraries and modules that make the code small, easy to analyze, and easy to debug in
efficient ways, and it provides an easy GUI, which makes the program very friendly for the user.
CHAPTER 2
REQUIREMENT ANALYSIS
REQUIREMENT ANALYSIS
A basic working computer with a working camera is required, as the program does not need any
special hardware to run. As software requirements, the system should have Python with the
following libraries installed: Matplotlib, OpenCV and MediaPipe.
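A quick way to verify the environment before running the program is to probe for the required import names (cv2 for OpenCV, mediapipe, matplotlib). This is a minimal sketch; the helper name is illustrative:

```python
from importlib.util import find_spec

def missing_packages(names):
    """Return the subset of import names that are not installed."""
    return [name for name in names if find_spec(name) is None]

# Report anything that still needs to be installed with pip.
print(missing_packages(["cv2", "mediapipe", "matplotlib"]))
```

An empty list means all three libraries are importable; any name printed still needs a `pip install` (e.g. `opencv-python` provides `cv2`).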
Use Case Diagram
[Figure 1: Use case diagram — actors USER and SYSTEM]
USER: Log In, Start Webcam, Capture Video, Capture Gesture
SYSTEM: Translate Gesture, Extract Features, Recognize Gesture, Display Result
CHAPTER 3
SOFTWARE DESIGN
3.1 Introduction
This chapter will focus on the design of the system using diagrams to illustrate graphically
certain sections of software system. As information moves through software, it is modified
by a series of transformations. A data flow diagram is a graphical representation that depicts
information flow and the transforms that are applied as data move from input to output.
The data flow diagram may be used to represent a system or software at any level of
abstraction. In fact, DFDs may be partitioned into levels that represent increasing
information flow and functional detail. Therefore, the DFD provides a mechanism for
functional modelling as well as information flow modelling.
3.2 Data Flow Diagram (Level 0)
[Figure 2: Level 0 DFD — the system shown as a single COUNT FINGERS process]
3.3 Data Flow Diagram (Level 1)
[Figure 3: Level 1 DFD — elements: LOGIN/REGISTER, USER AUTHENTICATION, ADMIN, CAMERA, IMAGE CAPTURE, IMAGE INPUT, GIVES OUTPUT]
3.4 Data Flow Diagram (Level 2)
1) LOGIN/REGISTER
[Level 2 DFD for LOGIN/REGISTER: an EXISTING USER goes through LOGIN, which fetches user data and leads to the HOMEPAGE; a NEW USER goes through REGISTER, which saves the user ID and password into USER INFORMATION]
2) SIGN LANGUAGE DETECTION
[Level 2 DFD for SIGN LANGUAGE DETECTION: CAMERA provides image capture and input to the SIGN LANGUAGE DETECTION process, which gives output to the USER]
3) HAND GESTURE RECOGNITION
[Level 2 DFD for HAND GESTURE RECOGNITION: CAMERA provides hand movement input to the HAND GESTURE RECOGNITION process, which controls the volume for the USER]
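The volume-control step in this diagram can be sketched as a pure mapping from hand geometry to a volume level. The calibration range below (30-250 px between thumb and index fingertips) is an assumed example, not a value from the project; applying the result to the system mixer (e.g. via pycaw) is omitted here:

```python
import numpy as np

def distance_to_volume(thumb_index_dist, dist_range=(30, 250)):
    """Map the thumb-to-index fingertip pixel distance to a 0-100 volume level.

    dist_range is an assumed calibration: distances at or below 30 px map to
    0%, at or above 250 px to 100%, linearly interpolated in between.
    """
    return float(np.interp(thumb_index_dist, dist_range, (0, 100)))
```

In the full pipeline, the fingertip distance would be computed from the MediaPipe landmarks each frame and the resulting percentage passed to the audio endpoint.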
CHAPTER 4
TESTING
4.1 INTRODUCTION
The testing of software means measuring or assessing the software to determine its
quality. Testing is a measuring instrument for software quality, with the unit of
measurement being the number of defects found during testing. Testing activities also
help to achieve software quality. Testing is essential in the development of any
software system.
4.2 LEVELS OF TESTING
4.2.1 Unit Testing
Unit testing comprises the set of tests performed usually by the programmers prior to the
integration of the unit in to a large Program. This is the lowest level of testing and is done
by the programmer (Who develops it) who can test it in great detail.
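As an illustration of this lowest level, a small helper from the gesture recognizer can be tested in isolation with Python's built-in unittest module (the helper and its gesture table here are hypothetical):

```python
import unittest

def classify_gesture(fingers_up):
    """Hypothetical helper: map a raised-finger count to a gesture label."""
    labels = {0: "FIST", 2: "VICTORY SIGN", 5: "HIGH-FIVE SIGN"}
    return labels.get(fingers_up, "UNKNOWN")

class TestClassifyGesture(unittest.TestCase):
    def test_known_gestures(self):
        self.assertEqual(classify_gesture(5), "HIGH-FIVE SIGN")
        self.assertEqual(classify_gesture(0), "FIST")

    def test_unknown_count(self):
        self.assertEqual(classify_gesture(3), "UNKNOWN")

if __name__ == "__main__":
    unittest.main(exit=False)
```

Such tests exercise one unit without the camera or GUI, which is exactly what makes this level of testing fast and repeatable.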
4.2.2 System Testing
After integration testing is completed, the entire system is tested as a whole. The
functional specifications or requirements specification are used to derive the test cases. At
this level, system testing looks for errors in end-to-end functional quality and in
attributes such as performance, reliability, volume, stress tolerance, usability,
maintainability, security, etc. Independent testers can carry out this testing.
5. Future Work
6. Conclusion and Achievements
SCREENSHOTS
AUTHENTICATION:
HOMEPAGE:
1) FINGER COUNTER
2) SIGN LANGUAGE DETECTION
3) VOLUME CONTROL USING HAND GESTURE
APPENDICES
CODING:
from tkinter import *
import os
import cv2
import time
import numpy as np
import mediapipe as mp
import matplotlib.pyplot as plt
import math
from ctypes import cast, POINTER
from comtypes import CLSCTX_ALL
from pycaw.pycaw import AudioUtilities, IAudioEndpointVolume
import tkinter as tk

mp_hands = mp.solutions.hands
mp_drawing = mp.solutions.drawing_utils
    if results.multi_hand_landmarks and draw:
        # Iterate over the found hands.
        for hand_landmarks in results.multi_hand_landmarks:
            # Draw the hand landmarks on the copy of the input image.
            mp_drawing.draw_landmarks(image=output_image, landmark_list=hand_landmarks,
                                      connections=mp_hands.HAND_CONNECTIONS,
                                      landmark_drawing_spec=mp_drawing.DrawingSpec(color=(255, 255, 255), thickness=2, circle_radius=2),
                                      connection_drawing_spec=mp_drawing.DrawingSpec(color=(0, 255, 0), thickness=2, circle_radius=2))
    # Check if the original input image and the output image are specified to be displayed.
    if display:
        # Display the original input image and the output image.
        plt.figure(figsize=[15, 15])
        plt.subplot(121); plt.imshow(image[:, :, ::-1]); plt.title("Original Image"); plt.axis('off')
        plt.subplot(122); plt.imshow(output_image[:, :, ::-1]); plt.title("Output"); plt.axis('off')
    # Otherwise return the output image and the results of hands landmarks detection.
    else:
        return output_image, results
def delete_login_success():
    login_success_screen.destroy()
    login_screen.destroy()
    main_screen.destroy()

def qu():
    camera_video = cv2.VideoCapture(0)
    camera_video.set(3, 1280)
    camera_video.set(4, 960)
    cv2.namedWindow('Fingers Counter', cv2.WINDOW_NORMAL)
    while camera_video.isOpened():
        # Wait for 1 ms. If a key is pressed, retrieve the ASCII code of the key.
        k = cv2.waitKey(1) & 0xFF
    # Iterate until the webcam is accessed successfully.
    while camera_video.isOpened():
        # Read a frame.
        ok, frame = camera_video.read()
        # If the frame is not read properly, continue to the next iteration to read the next frame.
        if not ok:
            continue
        # Flip the frame horizontally for natural (selfie-view) visualization.
        frame = cv2.flip(frame, 1)
        # Get the height and width of the frame of the webcam video.
        frame_height, frame_width, _ = frame.shape
        frame, results = detectHandsLandmarks(frame, hands_videos, draw=False, display=False)
        print('ILOVEYOU')
        font = cv2.FONT_HERSHEY_SIMPLEX
        cv2.putText(frame, 'ILOVEYOU', (100, 150), font, 3, (0, 255, 255), 5, cv2.LINE_4)
            counter['SPIDERMAN SIGN'] = 0
        else:
            # Update the counter value to zero, as we are counting the consecutive frames with the SPIDERMAN hand gesture.
            counter['SPIDERMAN SIGN'] = 0
        if any(hand_gesture == "HIGH-FIVE SIGN" for hand_gesture in hands_gestures.values()):
            counter['HIGH-FIVE SIGN'] += 1
            # Update the counter value to zero.
            counter['NO'] = 0
        else:
            counter['NO'] = 0
        if any(hand_gesture == "PLAY" for hand_gesture in hands_gestures.values()):
            counter['PLAY'] += 1
            if counter['PLAY'] == num_of_frames:
                print('PLAY')
                font = cv2.FONT_HERSHEY_SIMPLEX
                cv2.putText(frame, 'PLAY', (100, 150), font, 3, (0, 255, 255), 5, cv2.LINE_4)
            counter['YES'] = 0
        if any(hand_gesture == "IHATEYOU" for hand_gesture in hands_gestures.values()):
            counter['IHATEYOU'] += 1
            if counter['IHATEYOU'] == num_of_frames:
                # Turn off the filter by updating the value of the filter status variable to False.
                print('IHATEYOU')
                font = cv2.FONT_HERSHEY_SIMPLEX
                cv2.putText(frame, 'IHATEYOU', (100, 150), font, 3, (0, 255, 255), 5, cv2.LINE_4)
        # Wait for 1 ms. If a key is pressed, retrieve the ASCII code of the key.
        k = cv2.waitKey(1) & 0xFF

frame = tk.Frame(parent)
frame.config(bg="blue")
frame.pack()
text_disp = tk.Button(frame, text="sign detection", command=write_text, bg="red")
# text_disp.pack(side=tk.LEFT)
text_disp.pack(side=tk.LEFT, padx=10)
exit_button = tk.Button(frame, text="Finger counter", bg="red", command=qu)
# exit_button.pack(side=tk.RIGHT)
exit_button.pack(side=tk.LEFT, padx=10)
button3 = tk.Button(frame, text="hand gesture", command=volume, bg="red")
# button3.pack(side=tk.BOTTOM)
button3.pack(side=tk.LEFT, padx=10)
parent.mainloop()
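The gesture counters in the code above implement a simple debounce: an action fires only once the same gesture has been observed for num_of_frames consecutive frames. A self-contained sketch of that logic (the function name is illustrative):

```python
def debounce_gestures(frame_gestures, num_of_frames=10):
    """Yield a gesture label once it has been seen in `num_of_frames`
    consecutive frames, mirroring the counter logic in the main loop."""
    counter = {}
    for gesture in frame_gestures:
        # Reset the counters of every gesture not seen in this frame.
        for other in list(counter):
            if other != gesture:
                counter[other] = 0
        counter[gesture] = counter.get(gesture, 0) + 1
        # Fire exactly once when the threshold is reached.
        if counter[gesture] == num_of_frames:
            yield gesture
```

Requiring several consecutive frames filters out the single-frame misclassifications that are common when a hand is mid-transition between two gestures.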