Robotic Arm Using Raspberry Pi 3
5. Irene Jacob
Student, Department of EEE
Sapthagiri College of Engineering, Bengaluru, India
[email protected]
Abstract—This paper focuses on the design and control of a robotic arm with human hand gestures using computer vision techniques. Computer vision is a field of deep learning that enables machines to see, identify, and process images. Use of OpenCV (Python) for computer vision provides a powerful environment for learning, since it is easy to use compared to other techniques. In this technology, a camera reads the movement of the hand and communicates with the computer, which uses the gestures as input to the control devices.

Keywords—Robotic arm, hand gestures, computer vision, OpenCV, control devices.

I. INTRODUCTION

Robotics is an interdisciplinary branch of engineering that deals with the design, construction, operation, and use of robots, as well as the computer systems for their control, sensory feedback, and information processing. Computer vision provides the basis for applications such as automated image analysis, which enables us to determine any object or activity in a given image. Some of the tasks performed using computer vision are recognizing simple geometric objects, analyzing printed or hand-written characters, and identifying human faces and hand gestures.

Efficient control of the robotic arm is achieved using gesture recognition techniques. The captured gesture images can be processed by computer vision techniques [1]. Gesture recognition using the Kinect sensor has been applied in various fields, such as control of wheelchair movements and pick-and-place of objects; a microcontroller is used to control the rotation of the wheels [2]. The Kinect sensor traces skeletal movements to provide gestures for a robotic arm prototype, and Bluetooth is used for wireless transmission of signals to the robotic arm [3]. Gesture control is also used in remote rescue operations where no humans can go, and it has the ability to become a part of massive rescue operations [4]. Gesture recognition involving counting of the fingertips in the gesture can be implemented [5]. The use of a microcontroller to control the robotic arm is less efficient compared to an Arduino [6]. A robotic arm can be a great boon to the handicapped as a prosthetic arm [7]. A glove-based arm has been designed using a neural network to detect objects and perform pick-and-place operations [8] or to navigate the arm in space with appropriate gestures [9].

II. PROPOSED SYSTEM

We provide a system in which the user can navigate the wireless robot in the environment using various gesture commands. A camera is used to capture real-time hand gestures to generate commands for the robot. The gesture is taken as input and processed using image processing.

The Arduino takes the signal derived from the camera as input and generates output signals. This output signal generation depends on the gesture input: for every possible gesture input, a different output signal is generated. The servo motors take digital signals as input from the Arduino. Once a command signal is given to the robotic arm, it replicates the gesture.
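The gesture-to-servo mapping described above can be sketched in Python. The gesture labels, angle values, and the `gesture_to_servo_angles` helper below are illustrative assumptions, not the paper's actual firmware; in the real system these angles would be sent to the Arduino over a serial link to drive the servos.

```python
# Illustrative sketch: map a recognized gesture to per-servo angle commands.
# Gesture names and angle values are assumptions for demonstration only.
GESTURE_COMMANDS = {
    "open_palm":   {"base": 90, "elbow": 45, "gripper": 180},  # release object
    "fist":        {"base": 90, "elbow": 45, "gripper": 0},    # grip object
    "two_fingers": {"base": 45, "elbow": 90, "gripper": 0},    # rotate base left
}

def gesture_to_servo_angles(gesture):
    """Return servo angle commands (degrees) for a recognized gesture.

    Unknown gestures return None, so the arm holds its current pose
    instead of reacting to a misclassified frame.
    """
    return GESTURE_COMMANDS.get(gesture)

if __name__ == "__main__":
    print(gesture_to_servo_angles("fist"))
```

Returning None for unrecognized gestures is one simple way to keep a noisy classification from jerking the arm; the paper does not specify its own fallback behavior.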
Fig 1. Block diagram: gestures are captured by the camera and processed; the Arduino Uno (with its power supply) then drives the servo motors of the robotic arm.

1. Capturing Frames

The input frames (RGB format) are captured by the camera and converted into a grayscale image, and the region of interest is extracted, as shown in fig 1.

3. Thresholding process

Image thresholding is an important intermediary step in image processing pipelines. Thresholding can help us to remove lighter or darker regions and contours of images. Grabbing all pixels in the gray image greater than 225 and setting them to 0 (black) removes the background of the image. Thresholding is the process of assigning pixel intensities to 0's and 1's based on a particular threshold level, so that our object of interest alone is captured from the image.

Fig 3.3. Threshold process

4. Drawing contours, finding convex hull and convexity defects
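The frame-capture step above (RGB frame, grayscale conversion, region-of-interest extraction) can be sketched as follows. In the actual pipeline OpenCV's `cv2.VideoCapture` and `cv2.cvtColor` would supply the frames; here a NumPy array stands in for one captured RGB frame, and the ROI coordinates are illustrative assumptions.

```python
import numpy as np

# A captured RGB frame would normally come from cv2.VideoCapture;
# a random 120x160 array stands in for it here.
frame = np.random.randint(0, 256, size=(120, 160, 3), dtype=np.uint8)

def to_grayscale(rgb):
    """Convert an RGB frame to grayscale using the standard luminance weights."""
    weights = np.array([0.299, 0.587, 0.114])
    return (rgb @ weights).astype(np.uint8)

def extract_roi(gray, top, left, height, width):
    """Crop the region of interest (e.g. the area where the hand appears)."""
    return gray[top:top + height, left:left + width]

gray = to_grayscale(frame)
roi = extract_roi(gray, top=20, left=40, height=80, width=80)  # assumed ROI
print(gray.shape, roi.shape)  # (120, 160) (80, 80)
```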
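The thresholding step in section 3 (gray pixels above 225 set to 0) can be sketched without OpenCV; in practice `cv2.threshold` with the `THRESH_BINARY_INV` flag would perform the equivalent operation on the grayscale frame. The threshold level 225 comes from the text; the tiny test image is an assumption for illustration.

```python
import numpy as np

def threshold_background(gray, level=225):
    """Binary-inverse threshold: pixels brighter than `level` (the light
    background) become 0, everything else becomes 1, isolating the object
    of interest (the hand)."""
    return np.where(gray > level, 0, 1).astype(np.uint8)

gray = np.array([[230, 100],
                 [226, 225]], dtype=np.uint8)
mask = threshold_background(gray)
print(mask)  # background pixels (230, 226) -> 0; the rest -> 1
```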
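For the convex hull named in step 4, the paper's pipeline would call `cv2.convexHull` (and `cv2.convexityDefects`) on the hand contour; as a self-contained illustration of what that step computes, here is a minimal pure-Python convex hull (Andrew's monotone chain) over 2D points.

```python
def convex_hull(points):
    """Andrew's monotone chain convex hull over 2D points (x, y).

    Illustrates the 'finding convex hull' step; the paper's pipeline would
    use cv2.convexHull on a hand contour instead of this sketch.
    """
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); <= 0 means a clockwise turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # Drop the last point of each half-hull (it repeats in the other half).
    return lower[:-1] + upper[:-1]

# Interior point (1, 1) is discarded; only the square's corners remain.
print(convex_hull([(0, 0), (2, 0), (1, 1), (2, 2), (0, 2)]))
```

Convexity defects, the deviations of the contour from its hull, are what make fingertip counting possible [5], since the valleys between fingers show up as deep defects.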
VI. REFERENCES