Batch - 01 Report
SPECIALLY ABLED
A PROJECT WORK PHASE II REPORT
Submitted by
DEVANAND M (113119UG03019)
DHINESH M (113119UG03020)
KOMESH S (113119UG03051)
VELURU BALAJI (113119UG03112)
In partial fulfilment for the award of the degree of
BACHELOR OF ENGINEERING
IN
COMPUTER SCIENCE AND ENGINEERING
VEL TECH MULTI TECH Dr. RANGARAJAN Dr. SAKUNTHALA
ENGINEERING COLLEGE, ALAMATHI ROAD, AVADI, CHENNAI-62
ANNA UNIVERSITY, CHENNAI 600 025.
MAY 2023
ANNA UNIVERSITY, CHENNAI 600 025
BONAFIDE CERTIFICATE
SIGNATURE SIGNATURE
HEAD OF THE DEPARTMENT SUPERVISOR
Dr. R. Saravanan, B.E., M.E. (CSE), Ph.D. Dr. R. Saravanan, B.E., M.E. (CSE), Ph.D.
PROFESSOR, PROFESSOR,
Department of Computer Science and Department of Computer Science and
Engineering, Engineering,
Vel Tech Multi Tech Dr. Rangarajan Vel Tech Multi Tech Dr. Rangarajan
Dr. Sakunthala Engineering College, Dr. Sakunthala Engineering College,
Avadi, Chennai-600 062 Avadi, Chennai-600 062
CERTIFICATE FOR EVALUATION
DEVANAND M (113119UG03019)
DHINESH M (113119UG03020)
KOMESH S (113119UG03051)
This project report was submitted for the viva voce held on __________
at Vel Tech Multi Tech Dr. Rangarajan Dr. Sakunthala Engineering College.
TABLE OF CONTENTS
6. SOFTWARE DESCRIPTION 24
6.1 Design and Implementation Constraints 25
6.1.1 Constraints in Analysis 25
6.1.2 Constraints in Design 25
6.2 System Features 25
6.2.1 User Interfaces 25
6.2.2 Hardware Interfaces 25
6.2.3 Software Interfaces 26
6.2.4 Communications Interfaces 26
6.3 User Documentation: 26
6.4 Software Quality Attributes: 26
6.4.1 User-friendliness 26
6.4.2 Reliability 26
6.4.3 Maintainability 26
6.5 Other Non-functional Requirements 27
6.5.1 Performance Requirements 27
6.5.2 Safety Requirements 29
6.5.3 Product Features: 29
6.5.4 Test Cases 29
7. CONCLUSION 31
7.1 CONCLUSION 32
7.2 FUTURE ENHANCEMENT 32
APPENDICES 33
APPENDIX-1 33
SCREENSHOTS 33
APPENDIX-2 38
IMPLEMENTATION CODE 42
REFERENCES 59
LIST OF FIGURES
FIGURE NO. NAME PAGE NO.
LIST OF TABLES
CHAPTER 1
INTRODUCTION
1.1 OVERVIEW
Sign language is a gesture-based language which involves hand movements, hand orientation and facial expressions instead of acoustic sound patterns. Such languages vary between communities and are not universal: Nepali sign language differs from Indian and American sign language, and even varies within different parts of Nepal. However, since most people have no prior knowledge of any form of sign language, it becomes increasingly difficult for deaf-mute people to communicate without a translator, and they consequently feel ostracized. Sign language recognition is now widely accepted as a communication model between deaf-mute people and hearing people. Recognition models are categorized into computer-vision-based and sensor-based systems. In computer-vision-based gesture recognition, a camera is used for input, and the input gestures are image-processed before recognition. The processed gestures are then recognized using algorithms such as Hidden Markov Models and neural network techniques. The main drawback of vision-based sign language recognition is that the image acquisition process is subject to many environmental constraints, such as camera placement, background conditions and lighting sensitivity, but it is easier and more economical than using sensors and trackers for data. Neural network techniques and Hidden Markov Models are, however, also used together with sensor data for higher accuracy. The major dataset available to date covers the American Sign Language alphabet. The gesture datasets are preprocessed using Python libraries and packages such as OpenCV and skimage, and then trained using the CNN VGG-16 model. The recognized input is converted into speech. This provides one-way communication, as a person who does not understand sign language will get the meaning of the hand signs shown to them. Furthermore, to make two-way communication possible, this report also presents text-to-sign-language conversion, which allows a person who does not understand sign language to convert text into sign language finger spelling that the signer would understand. A brief sketch of the preprocessing step is given below.
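As an illustration of the preprocessing step described above, the following sketch shows how OpenCV and skimage can be used to read a gesture image, resize it to the 224x224 input size expected by VGG-16 and scale its pixel values. It is a minimal sketch only; the dataset path in the usage comment is a hypothetical example, not the project's actual directory layout.

# Hedged sketch: preprocessing gesture images for a VGG-16-style CNN.
# The dataset path and the 224x224 size are assumptions, not taken from the report.
import cv2
import numpy as np
from skimage import transform

def preprocess(image_path):
    """Read a gesture image, resize it to 224x224 and scale pixels to [0, 1]."""
    img = cv2.imread(image_path)                 # BGR image from disk
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)   # VGG-16 expects RGB input
    img = transform.resize(img, (224, 224))      # skimage returns floats in [0, 1]
    return img.astype(np.float32)

# Example (hypothetical paths): build a small batch for training
# batch = np.stack([preprocess(p) for p in ["dataset/A/img1.jpg", "dataset/B/img2.jpg"]])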
1.2 OBJECTIVE
The recognition of sign language gestures from real-time video, and their successful classification into one of a list of categories, has been a popular and challenging field of research. Many researchers have worked on this field for a long time, so we decided to contribute to it through our final-year major project. Liang et al. [6] have also carried out research on this concept, which guided us throughout the implementation. Recognizing a sign language gesture and classifying it is the one-line definition of the task performed by the proposed system. Along with this, a text-to-ASL finger-spelling feature is also available, which makes two-way communication, from sign to text and from text to sign, possible. The steps described in the following chapters were taken while working on this project. Many vision-based and sensor-based techniques have been used for sign language recognition. The survey by Pavlovic et al., published in 1997, emphasizes the advantages, shortcomings and important differences between gesture interpretation approaches depending on whether a 3D model of the human hand or an image appearance model of the human hand is used. At the time that survey was done, 3D hand models offered a way of modeling hand gestures more elaborately, but led to computational hurdles that had not been overcome given the real-time requirements of HCI.
We will implement this program as a virtual camera on a device, allowing the sign language translator to translate hand gestures in real-time feeds from the primary camera and to output the translated video with subtitles through the virtual camera of the OBS software. By doing so, the proposed system will enable effective communication between hearing-impaired individuals and those who do not understand sign language. To evaluate the performance of the proposed system, we will conduct several experiments comparing its accuracy and response time with other state-of-the-art sign language translation systems; the results of these experiments will help us further improve the system.
In conclusion, our proposed work aims to develop a real-time sign language translator using YOLOv5 and PyTorch with the aid of the Nvidia CUDA toolkit, implemented as a virtual camera on a device. The system will enable effective communication between hearing-impaired individuals and others who do not understand sign language. The proposed system's performance will be evaluated through experiments, and the results will be used to further improve the system. A brief sketch of the detection step is given below.
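As a brief illustration of this pipeline (not the project's exact code, which is listed in Appendix II), the following sketch shows how a custom-trained YOLOv5 model can be loaded through PyTorch and run on a single webcam frame. The weights path follows the appendix listing and is an assumption here; YOLOv5 uses CUDA automatically when it is available.

# Hedged sketch: running a custom YOLOv5 model on one webcam frame.
# The weights path 'signLang/weights/best.pt' follows the appendix listing (assumption).
import cv2
import torch

model = torch.hub.load('ultralytics/yolov5', 'custom', path='signLang/weights/best.pt')

cap = cv2.VideoCapture(0)                     # index 0: default camera on the device
ret, frame = cap.read()
if ret:
    results = model(frame)                    # YOLOv5 AutoShape accepts numpy images
    detections = results.pandas().xyxy[0]     # one row per detected hand sign
    print(detections[['name', 'confidence']])
cap.release()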
CHAPTER 2
LITERATURE REVIEW
2.1 Introduction
Disability impacts human life negatively, and each disability presents its own specific barriers. These barriers exclude people with disabilities from services that would facilitate their tasks through interactive systems, because they find it difficult to communicate with the user interfaces of digital applications (web, mobile, desktop, TV, etc.). Different solutions have been proposed, but they remain insufficient and inefficient given the pervasive environment and the amount of contextual information it contains. Artificial Intelligence (AI), on the other hand, is an emerging technology that imitates the way the human brain thinks by integrating the computational power and speed of computing systems with human perception and intelligence. AI is growing and possesses the tools that could help users with disabilities access information. In fact, users with disabilities have to use interactive systems just as able-bodied users do, but they are often unable to, because the user interfaces of interactive systems are not adapted to their capabilities. Therefore, we need to improve adaptive interactive systems in order to make them accessible to disabled users. Accessibility of user interfaces (UIs) is likewise an emerging and important domain that needs more and more investment. The solutions given so far are insufficient, superficial and limited to elementary disabilities. Therefore, to overcome these difficulties and challenges, we need to propose solutions that cover most users with disabilities from different cultural environments, considering most of the platforms used for interaction. This chapter consolidates research findings at the intersection of accessibility, user interfaces and artificial intelligence, and finally presents a solution integrating all three. The transformative impact of artificial intelligence on our society will have far-reaching economic, legal, political and regulatory implications that we need to discuss and prepare for. Determining who is at fault if an autonomous vehicle hurts a pedestrian, or how to manage a global autonomous arms race, are just two examples of the challenges to be faced.
2.2.1 MERITS:
2.2.2 DEMERITS:
Users can be imprecise and inconsistent. There is typically a degree of uncertainty in the relation between user input and user intent.
2.3.2 DEMERITS
Cryptography alone is not enough to defend against adversaries and insiders; careful protocol design is also needed.
2.4.1 MERITS
It is used to send data packets securely from source to destination, without any interruption.
2.4.2 DEMERITS
Cryptography alone is not enough to defend against adversaries and insiders; careful protocol design is needed to protect the user information.
2.6.2 DEMERITS
Although logistic regression and naive Bayes share the same conditional class probability model, a major advantage of the logistic regression method is that it does not make any assumption about how x is generated.
2.7.1 MERITS
This paper presents a generic approach for the adaptation of UIs to the accessibility context, based on meta-model transformations.
2.7.2 DEMERITS
In fact, the adaptation process has to be automatic and dynamic to free users with disabilities from controlling UI changes themselves.
CHAPTER 3
SYSTEM DESIGN
3.1 ARCHITECTURE DIAGRAM
3.2 USECASE DIAGRAM
A use case is a set of scenarios describing an interaction between a user and a system. A use case diagram displays the relationships among actors and use cases. Here the source is a user and the destination is the receiver: the source sends a beacon signal to the receiver, and the receiver then sends a response back to the source node.
A use case diagram is a graphical depiction of a user's possible interactions with a system. It shows the various use cases and the different types of users the system has, and is often accompanied by other types of diagrams as well. The use cases are represented by circles or ellipses, and the actors are often shown as stick figures.
3.3 SEQUENCE DIAGRAM
Sequence diagrams show a detailed flow for a specific use case or even just part of a specific use case. The vertical dimension shows the sequence of messages/calls in the time order in which they occur; the horizontal dimension shows the object instances to which the messages are sent. The participating objects are localizability testing, structure analysis, network adjustment and localizability-aided localization.
3.4 COLLABORATION DIAGRAM
Sequence diagrams are typically associated with use case realizations in the 4+1 architectural view model of the system under development; they are sometimes also called event diagrams or event scenarios. A sequence diagram depicts multiple processes or objects that exist simultaneously as parallel vertical lines (lifelines), and the messages passed between them as horizontal arrows, in the order in which they occur. This enables the graphical specification of simple runtime scenarios.
State: a condition or situation in the life cycle of an object during which it satisfies some condition, performs some activity or waits for some event.
Transition: a relationship between two states indicating that an object in the first state performs some actions and, on some event, enters the next state.
3.5 ARCHITECTURE FLOW DIAGRAM
The wireless ad hoc and sensor networks deploy both localizable and non-localizable nodes. Localizability-aided localization consists of localizability testing, structure analysis and network adjustment using tree-prediction mobility; it is used to increase the mobility values and to obtain the ranging measurements.
Class diagrams model class structure and contents using design elements such as classes, packages and objects. They describe the different perspectives taken when designing a system: conceptual, specification and implementation. Classes are composed of three things: a name, attributes and operations. Class diagrams also display relationships such as containment, inheritance and association. The association relationship is the most common relationship in a class diagram; it shows the relationship between instances of classes.
3.6 DATA FLOW DIAGRAM
The data flow diagram is a graphic tool used for expressing system requirements in a graphical form. It describes the process step by step: it contains the source nodes, channel splitting, and beacons used to receive the signal in narrow areas and to send an acknowledgement to the sender. Localizability-aided localization carries out the process of modeling mobile network behavior and achieving the threshold.
CHAPTER 4
MODULES
4.1 MODULES LIST
Creating CNN Model
Testing CNN model
Creating Flask app
4.2.1 CREATING CNN MODEL
Fig 4.1 Train Model (sample code)
4.2.2 TESTING CNN MODEL
1. Using the YOLOv5 model we can predict the output of our project.
2. We use the OpenCV and PyTorch libraries to predict the hand gestures, and we also use this module to detect the hand in our image frame.
3. Using this, our image-predicting model can detect the intended output with higher accuracy.
4. Our CNN model is trained to localize the hand of the person who is trying to communicate, even against a cluttered background.
5. It is trained to capture the most relevant features so that hand tracking works with higher precision.
6. The '0' passed to the VideoCapture() method in the program below denotes the camera index, i.e. the default camera connected to the device.
7. maxHands=1 in the HandDetector() call denotes that we detect just one hand of the person/user; a minimal sketch of this capture-and-detection loop follows.
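The sketch below illustrates the capture-and-detection loop described in the steps above, assuming that the HandDetector() referred to is the cvzone HandTrackingModule detector (cvzone 1.5 or later); the quit key and window title are illustrative choices only, not taken from the project code.

# Hedged sketch: webcam capture with single-hand detection (assumed cvzone detector).
import cv2
from cvzone.HandTrackingModule import HandDetector

cap = cv2.VideoCapture(0)            # index 0: default camera on the device
detector = HandDetector(maxHands=1)  # track a single hand of the signer

while True:
    ret, frame = cap.read()
    if not ret:
        break
    hands, frame = detector.findHands(frame)   # draws landmarks and returns hand info
    if hands:
        x, y, w, h = hands[0]['bbox']           # bounding box of the detected hand region
    cv2.imshow("Hand tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):       # press 'q' to stop the loop
        break

cap.release()
cv2.destroyAllWindows()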
4.2.3 CREATING FLASK APP
1. Create a Flask app. Import line: from flask import Flask, render_template, Response, jsonify.
2. Add the required functions to render the templates.
3. Import the prediction program into the Flask app: import opencv.
4. Import the train list into the Flask app: import trainlist.
5. Add the required functions to render the templates.
6. Run app.py (the Flask app). A minimal skeleton of this app is sketched below; the full listing is given in Appendix II.
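The following is a minimal sketch of the Flask app outlined by these steps, not the project's full code (which is listed in Appendix II). The template names follow the appendix listing; the project-specific imports (the prediction program and the train list) are left as comments because their contents are not part of this sketch.

# Hedged sketch of the Flask app outlined above.
from flask import Flask, render_template, Response, jsonify

# import opencv     # prediction program (step 3) - project-specific module, assumed
# import trainlist  # train list (step 4) - project-specific module, assumed

app = Flask(__name__)

@app.route('/home')
def index():
    # Render the landing page template
    return render_template('index.html')

@app.route('/translate')
def translate():
    # Render the live translation view
    return render_template('video_out.html')

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000, debug=False)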
5.1 SOFTWARE REQUIREMENT SPECIFICATION
The software requirements specification is produced at the culmination of the analysis task. The function and performance allocated to the software as part of system engineering are refined by establishing a complete information description, a functional representation of system behavior, an indication of performance requirements and design constraints, and appropriate validation criteria.
RAM: 8 GB
GPU: Nvidia GPU with CUDA support
CHAPTER 6
SOFTWARE DESCRIPTION
6.1 DESIGN AND IMPLEMENTATION CONSTRAINTS
6.1.1 CONSTRAINTS IN ANALYSIS
Routers use routing tables to determine which interface to forward packets to (this can include the "null" interface, also known as the "black hole" interface, because data can go into it but no further processing is done for that data).
6.3 USER DOCUMENTATION
The application will have a user manual for helping and guiding users on how to interact with the system and perform its various functions. The core components and their usage will be explained in detail.
6.4 SOFTWARE QUALITY ATTRIBUTES
6.4.1 USER-FRIENDLINESS
The proposed system will be user-friendly, designed to be easy to use through a simple interface. The software can be used by anyone with basic computer knowledge, and it is built around an easy look-and-feel concept.
6.4.2 RELIABILITY
The system is designed not to crash or fail; in case of a system failure, recovery can be performed using advanced backup features.
6.4.3 MAINTAINABILITY
All code shall be fully documented. Each function shall be commented with pre- and post-conditions, and all program files shall include comments concerning the date of the last change. The code should be modular, to permit future modifications. For defects, the system maintains a solution database. An illustration of the commenting convention is given below.
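As an illustration of this commenting convention, a hypothetical helper function (not part of the project code) might be documented as follows:

def crop_hand(frame, bbox, offset=20):
    """Crop the detected hand region from a video frame.

    Pre-condition:  frame is a valid BGR image and bbox = (x, y, w, h) lies inside it.
    Post-condition: returns the cropped region enlarged by `offset` pixels on every side.
    """
    x, y, w, h = bbox
    return frame[max(0, y - offset):y + h + offset, max(0, x - offset):x + w + offset]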
6.5 OTHER NON-FUNCTIONAL REQUIREMENTS

FUNCTIONAL REQUIREMENTS

NON-FUNCTIONAL REQUIREMENTS
The non-functional requirements are:

NFR. NO    NON-FUNCTIONAL REQUIREMENT    DESCRIPTION
6.5.2 Safety Requirements
The software may be safety-critical. If so, there are issues associated with its integrity level. The software may not be safety-critical even though it forms part of a safety-critical system; for example, it may simply log transactions. If a system must be of a high integrity level and the software is shown to be of that integrity level, then the hardware must be of at least the same integrity level. There is little point in producing 'perfect' code in some language if the hardware and system software (in the widest sense) are not reliable. If a computer system is to run software of a high integrity level, then that system should not at the same time accommodate software of a lower integrity level. Systems with different safety-level requirements must be separated; otherwise, the highest level of integrity required must be applied to all systems in the same environment.
6.5.3 PRODUCT FEATURES
Communication -> Communication occurs and data is sent along a data forwarding path.
User-friendly -> The architecture is simple and allows users to access the project easily.
Table 6.3 Test Cases
CHAPTER 7
CONCLUSION AND
FUTURE ENHANCEMENT
7.1 CONCLUSION
The future work for this project will involve several stages, starting with the
development of a robust sign language recognition system. This will require the
collection of a large dataset of sign language gestures and the training of a machine
learning model to recognize these gestures accurately. The system will also need to be
able to distinguish between different dialects of sign language and adapt to the user's
individual signing style.
Once the sign language recognition system is developed, the next stage will involve the
integration of the system with a speech or text translation engine. This will allow the
system to translate sign language gestures into spoken or written language in real-time,
enabling seamless communication between people who use sign language and those who
do not.
The final stage of the project will involve testing and refining the system to ensure its
accuracy and usability. This will involve user testing with individuals who use sign
language and those who do not to ensure that the system is effective in facilitating
communication between the two groups.
Overall, the development of a real-time sign language translator has the potential to
transform the lives of people with hearing and speech disabilities, enabling them to
communicate more effectively with others and breaking down barriers to communication
and social interaction.
APPENDIX I
SCREEN SHOT 1
LANDING PAGE:
SCREEN SHOT 2
LOGIN PAGE :
SCREEN SHOT 3
SIGN UP PAGE :
SCREEN SHOT 4
PROFILE PAGE :
ABOUT PAGE :
SCREEN SHOT 5
PROFILE PAGE :
APPENDIX II
GRAPH:
The graph shows the ratio of data packets lost on the y-axis and the corresponding time on the x-axis.
VARIATION OF LOSSES
CNN MODEL SUMMARY
Table A.14 Confusion Matrix Scores
TESTING CNN MODEL ACCURACY USING DIFFERENT TEST IMAGES
IMPLEMENTATION CODE :
app.py (Flask application)
CODE :
# Assumed imports for the routes shown below (the report lists only the route code)
from flask import Flask, render_template, Response, jsonify, request
import webbrowser
import cv2

import body    # prediction module (body.py, listed later in this appendix)
import mongo   # database helper (mongo.py, listed later in this appendix)
import fpwd    # OTP / mail helper used by the signup and forgot-password routes

username, email, password, disability, role, dob, slink, llink, glink, bio, gender = \
    "", "", "", "", "", "", "", "", "", "", ""
video_camera = None
global_frame = None
image, pred_img, original, crop_bg, label = None, None, None, None, None

body.cap.release()
app = Flask(__name__)

# Index page
@app.route('/home')
def index():
    body.cap.release()
    return render_template('index.html')

# Translation page: reopen the webcam and render the live prediction view
@app.route("/translate")  # for translation
def translate():
    body.cap = body.cv2.VideoCapture(0, cv2.CAP_DSHOW)
    txt = label_text()   # label_text() is defined elsewhere in app.py (not shown in the report)
    return render_template('video_out.html', txt=txt.json)
# Sign-up page
@app.route("/signup")
def sign_up():
    body.cap.release()
    return render_template('sign_up.html')

# Validate login credentials against the database
@app.route('/validate', methods=['POST'])
def validate_sign():
    email = request.form['email']
    password = request.form['password']
    if mongo.validate(email, password):
        userinfo = mongo.show(email)
        return render_template('login.html', accept="success", userinfo=userinfo,
                               email=email, password=password)
    else:
        return render_template('login.html', accept="failed", userinfo=None,
                               email=email, password=None)
# Handle the sign-up form, check whether the email already exists and send an OTP
@app.route('/signup', methods=['POST'])
def getvalue():
    global username, email, password, disability, role, gender
    username = request.form['name']
    email = request.form['email']
    password = request.form['password']
    disability = request.form['inputDisability']
    role = request.form['inputRole']
    gender = request.form['gender']
    print(username, email, password, disability, role, gender)
    if username and email and password and disability and role:
        if not mongo.check(email):
            print("Email does exist")
            accept = "exist"
            return render_template('sign_up.html', accept=accept, email=email,
                                   username=username)
        else:
            fpwd.otp(email)
            print("Email does not exist")
            return render_template('sign_up.html', accept="otp", email=email,
                                   username=username)
    else:
        accept = "failed"
        return render_template('sign_up.html', accept=accept, email=email,
                               username=username)

# Verify the OTP entered by the user and create the account
@app.route('/otp', methods=['POST'])
def otp():
    print("OTP: ")
    print(fpwd.otp)
    otp = request.form['otp']
    print(otp)
    print(username, email, password, disability, role, gender)
    slink = "https://www.facebook.com/"
    llink = "https://www.linkedin.com/"
    glink = "https://www.github.com/"
    if otp == str(fpwd.otp):
        if mongo.insert(username, email, password, disability, role, dob, slink, llink,
                        glink, bio, gender):
            accept = "success"
            return render_template('sign_up.html', accept=accept, email=email,
                                   username=username)
    else:
        accept = "otp-failed"
        print("OTP failed")
        return render_template('sign_up.html', accept=accept, email=email,
                               username=username)

# Forgot-password page
@app.route('/forgot')
def forgot():
    return render_template('login.html', accept="forgot", userinfo=None)

# Send a password-reset mail if the email is registered
@app.route('/forgotpass', methods=['POST'])
def forgotpass():
    email = request.form['f_email']
    if not mongo.check(email):
        fpwd.sendmail(email)
        print("Email sent")
        accept = "sent"
        return render_template('login.html', accept=accept, userinfo=None,
                               email=email, password=None)
    else:
        print("Email not sent")
        accept = "not"
        return render_template('login.html', accept=accept, userinfo=None,
                               email=email, password=None)
# Change the user's password and reload the profile page
@app.route('/changepass', methods=['POST'])
def changepass():
    password = request.form['new_password']
    email = request.form['chg_email']
    print(password, email)
    mongo.updatepwd(email, password)
    userinfo = mongo.show(email)
    return render_template('profile.html', userinfo=userinfo)

# Update the user's profile details
@app.route('/update', methods=['POST'])
def update():
    name = request.form['name']
    email = request.form['up_email']
    role = request.form['role']
    disability = request.form['disability']
    dob = request.form['dob']
    bio = request.form['bio']
    slink = request.form['slink']
    llink = request.form['llink']
    glink = request.form['glink']
    gender = request.form['gender']
    print(name, email, role, disability, dob, bio, slink, llink, glink, gender)
    mongo.update(name, email, disability, role, dob, slink, llink, glink, bio, gender)
    userinfo = mongo.show(email)
    return render_template('profile.html', userinfo=userinfo)

# Open the app in the default browser and start the Flask server
webbrowser.open('http://127.0.0.1:5000/')
if __name__ == '__main__':
    app.run(host='0.0.0.0', threaded=True, port=5000, debug=False)
body.py (prediction file)
The "body.py" program utilizes a trained machine learning model to make
predictions about the hand signs being made by a user. This program uses
advanced algorithms to analyze the input data and accurately identify the
specific hand signs being performed.
CODE :
import cv2
import mediapipe as mp
import numpy as np
import torch

# Assumed MediaPipe setup (not shown in the report): holistic model and drawing helpers
mp_holistic = mp.solutions.holistic
mp_drawing = mp.solutions.drawing_utils
mp_drawing_styles = mp.solutions.drawing_styles
holistic = mp_holistic.Holistic(min_detection_confidence=0.5, min_tracking_confidence=0.5)

# Initialize the webcam
cap = cv2.VideoCapture(0, cv2.CAP_DSHOW)

# Load the custom-trained YOLOv5 sign-language model
model = torch.hub.load('ultralytics/yolov5',
                       'custom',
                       path='signLang/weights/best.pt', force_reload=True)

# Draw landmarks on the image
def draw_landmarks(image, results):
    # Drawing landmarks for the left hand
    mp_drawing.draw_landmarks(
        image,
        results.left_hand_landmarks,
        mp_holistic.HAND_CONNECTIONS,
        mp_drawing_styles.get_default_hand_landmarks_style())
    return image

letter = ""
offset = 1

# Collect a frame, draw the landmarks and run the YOLOv5 prediction
def collectData():
    ret, frame = cap.read()
    global letter
    black = np.zeros_like(frame)   # assumed blank canvas for the landmark-only image
    image = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    image.flags.writeable = False
    # The holistic result of the given image is stored in the results variable
    results = holistic.process(image)
    image.flags.writeable = True
    image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)
    # Draw landmarks on the camera image and on the blank canvas
    draw_landmarks(image, results)
    draw_landmarks(black, results)
    # Run the model on the landmark image
    results = model(black)
    pred_img = np.squeeze(results.render())
    # Crop the predicted bounding box from the PyTorch result and
    # keep the class when 80% confidence is reached
    for result in results.pandas().xyxy[0].iterrows():
        if result[1]['confidence'] > 0.8:
            letter = result[1]['name']   # assumed: remember the predicted class label
mongo.py (Database file)
This program contains all the necessary functions for database connectivity and the CRUD operations.
CODE :
from urllib.parse import quote_plus
from pymongo import MongoClient

# Credentials are placeholders in the report; quote_plus() escapes the password for the URI
user = quote_plus("[PASSWORD]")
client = MongoClient("mongodb+srv://[USER_NAME]:" + user + "@[DBNAME]"
                     ".mi8y86o.mongodb.net/?retryWrites=true&w=majority")
db = client["login_info"]
col = db["login0"]

# Update a user's profile details in the database
def update(name, email, disability, role, dob, slink, llink, glink, bio, gender):
    if not check(email):
        if name and email:
            col.update_many({"email": email}, {"$set":
                {"name": name, "disability": disability, "role": role, "dob": dob,
                 "slink": slink, "glink": glink, "llink": llink, "bio": bio,
                 "gender": gender}})
            return True
        else:
            return False
    else:
        return False

# Update a user's password
def updatepwd(email, password):
    if not check(email):
        if password and email:
            col.update_many({"email": email}, {"$set": {"password": password}})
            return True
        else:
            return False
    else:
        return False

# Mail / OTP helper used for account verification and password reset
# (this block references mongo.show(), suggesting it lives in the fpwd helper module)
sender = "[email protected]"
pwd = "mchsweaeifjwqxxr"
receiver = ""
otp = ""
user_otp = ""

def sendmail(email):
    global receiver
    receiver = email
    details = mongo.show(email)
    print(email)
    try:
        # ... (mail composition and SMTP send omitted in the report) ...
        pass
    finally:
        server.quit()
    print("OTP sent successfully")
SignCam.py (Virtual Camera for sign language translator)
Our real-time sign language translator was integrated into a virtual camera
using the pyvirtualcam library. This virtual camera was then connected to the
OBS (Open Broadcaster Software) platform to enable its usage in various
meeting and communication applications. This integration allows our translator
to work seamlessly with other software applications, making it more accessible
and convenient for users.
CODE :
import cv2
import mediapipe as mp
import numpy as np
import torch
import pyvirtualcam as pvc

# Assumed MediaPipe setup (not shown in the report)
mp_holistic = mp.solutions.holistic
mp_drawing = mp.solutions.drawing_utils
mp_drawing_styles = mp.solutions.drawing_styles
holistic = mp_holistic.Holistic(min_detection_confidence=0.5, min_tracking_confidence=0.5)

# Initialize the webcam at 1280x720, 30 fps
cap = cv2.VideoCapture(0, cv2.CAP_DSHOW)
cap.set(cv2.CAP_PROP_FPS, 30)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)

# Model loading
# model = torch.hub.load('ultralytics/yolov5',
#                        'custom',
#                        path='modelN800/weights/best.pt', force_reload=True)
model = torch.hub.load('ultralytics/yolov5', 'custom',
                       path='modelS/weights/best.pt', force_reload=True)

# Draw landmarks on the image
def draw_landmarks(image, results):
    # Drawing landmarks for the left hand
    mp_drawing.draw_landmarks(
        image,
        results.left_hand_landmarks,
        mp_holistic.HAND_CONNECTIONS,
        mp_drawing_styles.get_default_hand_landmarks_style())
    return image

letter = ""
offset = 1

# Capture frames, run the prediction and stream the subtitled video to the virtual camera
def collectData():
    with pvc.Camera(width=1280, height=720, fps=30) as cam:
        while True:
            ret, frame = cap.read()
            global letter
            image = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            image.flags.writeable = False
            # ... (holistic processing and YOLOv5 prediction omitted in the report) ...
            image.flags.writeable = True
            image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)
            image = original   # 'original' holds the unannotated frame elsewhere in the project

            # Draw a black banner along the bottom of the frame for the subtitle text
            start_point = (0, image.shape[0])
            end_point = (image.shape[1], image.shape[0])
            color = (0, 0, 0)
            thickness = 150
            line_type = cv2.LINE_AA
            cv2.line(image, start_point, end_point, color, thickness, line_type)

            # Subtitle font settings
            font = cv2.FONT_HERSHEY_DUPLEX
            fontScale = 1.5
            color = (255, 255, 255)
            thickness = 2
            lineType = cv2.LINE_AA
            # ... (subtitle rendering and cam.send() of the frame omitted in the report) ...

collectData()
cap.release()
cv2.destroyAllWindows()
REFERENCES
[1] "Real-Time Sign Language Translation Using Hand Tracking and Deep
in 2020.
[3] "Real-Time Sign Language Translation using Machine Learning and Deep
[5] "Real-Time Sign Language Translation using Machine Learning and Deep
[6] "Real-Time Sign Language Translation Using Machine Learning and Deep
[7] "Real-Time Sign Language Translation using Machine Learning and Deep
59
[8] "Real-Time Sign Language Translation using Deep Learning and Computer
[9] "Real-Time Sign Language Translation using Machine Learning and Deep
[10] "Real-Time Sign Language Translation using Machine Learning and Deep
60