
2018 4th International Conference on Control, Automation and Robotics

Computer Vision Based Object Grasping 6DoF Robotic Arm Using Picamera

Vishal Kumar
Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, China
e-mail: [email protected]

Qiang Wang
Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, China
e-mail: [email protected]

Wang Minghua
Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, China
e-mail: [email protected]

Syed Rizwan
School of Computer Science and Technology, Harbin Institute of Technology, Harbin, China
e-mail: [email protected]

SM Shaikh
Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, China
e-mail: [email protected]

Xuan Liu
Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, China
e-mail: [email protected]

Abstract—This article presents a project to design a robust robotic arm which can perform multifunctional tasks. The controller of the manipulator is based on an Arduino Mega 2560 microcontroller. The aim of the project is to drive all axes of the manipulator to lift, carry, and unload objects at a desired location. This requires precise drive motion control that incorporates electric motors as the drive system. Further experiments implement a camera-based 3D vision system, integrated with a computer vision algorithm, to recognize object deformation and spatial coordination and to control deviation from the original training. The 3D visualization system is able to detect objects as well as their distance from the end-effector and transmits the signals to the drive system. The vision system requires separate computing hardware capable of processing complex vision algorithms; we use a Raspberry Pi for processing the vision data, making the vision system capable of recognizing the specified object as per the program commands.

Keywords—robotic arm; computer vision system; color detection; object recognition

I. INTRODUCTION

Contemporary research shows that AI has successfully adapted to the evolving field of computer vision, and the evidence suggests that modern manufacturing is contingent on robotics [1]. A robot can be defined as a programmable, self-controlled device consisting of electronic, electrical, and mechanical units; generally, it is a machine performing in place of a living agent. Picking and placing objects is the major task in industrial environments [2]. Robotics involves the coordination of a wide range of disciplines and techniques, including kinematics, signal analysis, information theory, artificial intelligence, and probability theory. An automated arm is a robot manipulator, typically programmable, with capabilities comparable to a human arm.

In this study, a robot arm system is designed to perform multifunctional tasks. It detects and identifies a red object, grips it, and drops it at a desired location. An image of the items is taken through a camera, all items in the picture are recognized using image processing methods, and the coordinates of every detected item are computed on the computer and sent to the robot arm.

II. RELATED WORK

Computer vision is a field that incorporates techniques for acquiring, processing, analyzing, and understanding images. Through these techniques, high-dimensional data is converted to numerical or symbolic data [3], [4], [5]. In the realm of AI, computer vision attempts to emulate the capacities of human vision via electronic observation and understanding of images [2]. Computers are preprogrammed in many applications that use computer vision to perform a particular task; recently, learning-based methods have also come into common use for such applications [6], [7], [8]. In this context, computer vision means the conversion of visual images (the input of the retina) into descriptions from which a system can identify other points of view and trigger appropriate action. This image understanding can be viewed as the extraction of symbolic information from image data, using models built with the aid of geometry, physics, statistics, and learning theory.


Image segmentation, based on color intensity and texture, is used in many applications, for example region-based segmentation, feature-based edge detection, and thresholding-based or model-based methods [9]. Computer vision has also been described as the endeavor of automating and integrating a wide variety of processes and representations for visual perception. As a scientific discipline, computer vision is concerned with the theory behind artificial systems that extract information from images. This image data can take many forms, for example video sequences, views from multiple cameras, or multi-dimensional data from a medical scanner. Computer vision tries to apply its theories and models to the construction of computer vision systems [10].

There are various studies in the literature combining computer vision with a robot arm. One learning technique uses recognized points in at least two photos to enable a robot to grip an item; the overall accuracy of this technique was 87.8% [8]. In another study, computer vision was used to control a robot arm [11]: colored bottle stoppers were placed on the joints of the arm, and the joints were recognized through these stoppers using image recognition techniques. Two other robot models were built to play the game "rock, paper, scissors" against an opponent [12]. In these two investigations, an embedded camera captured pictures of the opponent's hand and inferred the contender's moves via computer vision; one of the robots played random moves [12], while in the other the robot successfully read the opponent's hand and configured its own fingers accordingly, thus beating the opponent [12]. In another work, the movements of a robot arm are controlled by a human arm's movements using a wireless connection and a vision system [13]. A further investigation presents an autonomous robot system including computer vision [14]; in that work, the robot arm can perform autonomous item sorting according to the shape, size, and color of the item [10]. In yet another work, an educational robotic arm performs the task of recognizing an arbitrarily placed object, picking it up, and moving it to a predefined container using computer vision [15].

III. HARDWARE ARCHITECTURE

Hardware architecture (the hardware design model) involves the identification of a system's components and their interrelationships. It allows hardware designers to understand how their components fit into the system architecture, and it provides software designers with the information they need for software development and integration. The hardware architecture of our system is shown in Fig. 1.

Figure 1. Block diagram of the system.

A. Picamera
The Picamera is an official product of the Raspberry Pi Foundation. It takes pictures, records video, and applies image effects; it is used as the input device.

B. Raspberry Pi
The Raspberry Pi is a series of small single-board computers; it is used to handle complex and heavy algorithms such as computer vision.

C. Arduino
Arduino is an open-source electronics platform based on easy-to-use hardware and software. It is mainly used to control the servos through programming.

D. 6DOF Manipulator Aluminum Robot Arm Kit Base w/Servo
The 6DOF aluminum robot arm kit is a robotic arm system with six degrees of freedom (6DOF) and a claw to pick up and manipulate objects. The mechanical links of the robotic arm are built from hard aluminum brackets of different shapes, with a multitude of predrilled holes for fixings and attachments. The list of all components in the robot arm kit is shown in Table I, and Fig. 2 shows the complete kit with the microcontroller and Picamera.

TABLE I. 6DOF MANIPULATOR ALUMINUM ROBOT ARM KIT BASE W/SERVO

S.no | Quantity/Parameter | Description
1    | 1x             | Mechanical claw
2    | 5x             | Hard aluminum alloy multi-bracket
3    | 3x             | Hard aluminum long U bracket
4    | 1x             | Hard aluminum L-shaped bracket
5    | 2x             | Hard aluminum U-beam
6    | 3x             | Imported bushing bearings
7    | 6x             | Metal helm
8    | 6x             | MG996R servos
9    | 3x             | Servo extension cords
10   | Package weight | 0.950 kg (2.09 lb)
11   | Package size   | 40 cm x 30 cm x 20 cm (15.75 in x 11.81 in x 7.87 in)

Figure 2. Robotic arm complete kit.
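
The paper does not list its capture code. As a minimal sketch of how the Picamera (Section III-A) could feed frames to OpenCV on the Raspberry Pi (Section III-B), assuming the legacy picamera Python library and an illustrative 640x480 resolution:

```python
# Sketch: pull frames from the Picamera as OpenCV-ready BGR arrays.
# Resolution, framerate, and window name are illustrative assumptions.
import time
import cv2
from picamera import PiCamera
from picamera.array import PiRGBArray

camera = PiCamera(resolution=(640, 480), framerate=30)
raw = PiRGBArray(camera, size=(640, 480))
time.sleep(2)  # let the sensor warm up and auto-exposure settle

for frame in camera.capture_continuous(raw, format="bgr", use_video_port=True):
    image = frame.array   # HxWx3 BGR numpy array, ready for OpenCV
    cv2.imshow("Picamera", image)
    raw.truncate(0)       # clear the stream buffer for the next frame
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
```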

IV. COMPUTER VISION ALGORITHM

Our system is based on two algorithms: Algorithm 1 and Algorithm 2.

Algorithm 1 performs red color detection and is used to identify red objects. For each pixel, the distance D between the pixel value and a reference color value is computed. The threshold is a constant set manually according to the tolerance given to the algorithm: a high threshold accepts "less clearly" red pixels as red, whereas a lower threshold makes the algorithm stricter. If the distance D is less than the threshold, the pixel is selected; otherwise it is not. If no red object is detected, the robotic arm performs no operation.
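
The paper gives no listing for Algorithm 1. Below is a minimal sketch of the per-pixel distance test it describes, assuming OpenCV and NumPy; the reference red and threshold values are illustrative, not the paper's:

```python
# Sketch of Algorithm 1: per-pixel Euclidean distance D to a reference
# red, thresholded into a binary mask, then the largest red region found.
import cv2
import numpy as np

REFERENCE_RED = np.array([40, 40, 200], dtype=np.float32)  # BGR, assumed
THRESHOLD = 120.0  # higher = more tolerant, lower = stricter (as in the text)

def detect_red(image_bgr):
    """Return the binary mask and the center of the largest red region."""
    diff = image_bgr.astype(np.float32) - REFERENCE_RED
    dist = np.sqrt((diff ** 2).sum(axis=2))          # distance D per pixel
    mask = (dist < THRESHOLD).astype(np.uint8) * 255  # pixel selected if D < T

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return mask, None                  # no red object: arm does nothing
    c = max(contours, key=cv2.contourArea)
    (x, y), r = cv2.minEnclosingCircle(c)
    return mask, (int(x), int(y), int(r))
```

The green marker circle described later in the paper would correspond to something like cv2.circle(image_bgr, (x, y), r, (0, 255, 0), 2) drawn at the returned center.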

Algorithm 2 identifies the object's position and runs after Algorithm 1. Once Algorithm 1 has identified the object, Algorithm 2 saves the object's location and compares it with the claw position. If the difference is greater than the allowed error, the claw moves one step and the locations are compared again; this repeats until the difference is less than the error, at which point the robotic arm picks up the object and drops it at the desired location.
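
A hedged sketch of Algorithm 2's compare-and-step loop follows. The helper functions stand in for the vision and servo interfaces, which the paper does not specify:

```python
# Sketch of Algorithm 2: step the claw toward the detected object until the
# positional difference falls below the allowed error, then pick and place.
ERROR_TOLERANCE = 10  # pixels; illustrative value, not from the paper

def center_claw_on_object(get_object_position, get_claw_position,
                          move_claw_one_step, pick_and_place):
    while True:
        obj = get_object_position()        # location saved by Algorithm 1
        if obj is None:
            return                         # no red object: no operation
        claw = get_claw_position()
        dx, dy = obj[0] - claw[0], obj[1] - claw[1]
        if abs(dx) < ERROR_TOLERANCE and abs(dy) < ERROR_TOLERANCE:
            pick_and_place()               # difference below error: grasp
            return
        move_claw_one_step(dx, dy)         # one step, then compare again
```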
Figure 3. Working flow of the system.

V. WORKING OF THE ROBOTIC ARM

This work successfully implements the intended functionality of the robot arm. A model robot that can rotate, magnetize an item, and lower and raise its arm, controlled throughout by the microcontroller, was constructed. The development board was assembled using the required method for correct operation of the controller, and it has been interfaced to the servo and DC motors.

Fig. 3 describes the working flow of the system. First, the computer vision detection model uses the camera to capture continuous video and detect red objects, applying different filters to the video to mark the center of the object with a circle. Second, the Raspberry Pi sends a serial signal to the Arduino, which in turn generates the command and signals the robotic arm. The robotic arm is then activated, collecting the object and dispatching it to the desired destination. These tasks are carried out by the six servos driven by the Arduino program.
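
The paper does not show the serial hand-off itself. On the Raspberry Pi side it might look like the following sketch, where the port name /dev/ttyACM0, the baud rate, and the "PICK x y" message format are all assumptions:

```python
# Sketch: the Raspberry Pi sends a short serial command to the Arduino,
# which drives the servos. Uses pyserial; protocol details are invented.
import serial

arduino = serial.Serial("/dev/ttyACM0", 9600, timeout=1)

def send_pick_command(x, y):
    """Tell the Arduino where the detected red object is."""
    arduino.write(f"PICK {x} {y}\n".encode("ascii"))
    return arduino.readline().decode("ascii").strip()  # e.g. an ACK line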

VI. EXPERIMENTAL RESULTS

A. Testing the Functionality of Vision System


Our project includes the functionality of the vision system. In the vision system, the algorithm first detects the color of the object within its specified range: if the object is red, the system marks a green circle at the center of the object in the original picture and sends a serial signal to the Arduino, which executes its preprogrammed command. When the detected object is red, the claw opens and bends down a little, then grips, lifts, and rotates the object towards the desired location. The claw then opens and places the object at the given coordinates. Afterwards, the claw closes and moves back to its original position, ready for the next object.
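
The grasp cycle above is an ordered sequence of claw and joint motions. One way such a sequence might be encoded is sketched below; the way-point angles and the send_angle interface are invented for illustration and would need tuning on the real arm:

```python
# Sketch: the grasp cycle as an ordered list of (servo, angle) way-points.
# All angles are hypothetical; send_angle(servo, angle) is a stand-in for
# the serial command shown earlier.
GRASP_SEQUENCE = [
    ("open_claw", [(6, 90)]),
    ("bend_down", [(3, 200), (4, 150)]),
    ("grip",      [(6, 20)]),
    ("lift",      [(3, 120), (4, 100)]),
    ("rotate",    [(1, 250)]),
    ("release",   [(6, 90)]),
    ("home",      [(1, 0), (3, 150), (4, 80), (6, 45)]),
]

def run_grasp_cycle(send_angle):
    for _name, waypoints in GRASP_SEQUENCE:
        for servo, angle in waypoints:
            send_angle(servo, angle)
```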

We also tested detection with an object that was not red and whose coordinates were not given. The arm does not move and does not pick up such an object, because the algorithm detects no color other than red; it waits for a red object and otherwise makes no movement in any direction.

Fig. 4 and Fig. 5 show the outputs after implementation. In Fig. 4 the system detects the red object and, after detection, marks a green circle at the center of the original image. In Fig. 5 the robotic arm picks up the object after detection.

Figure 4. Object identified using the computer vision algorithm.

Figure 5. Picking and dropping the object.


B. Testing the Maneuverability of the Arm/Manipulator
There are six servos in the robotic arm, since the project is based on six degrees of freedom (6 DOF). They can move in different directions, for example pitch, yaw, and roll at the elbow, and each servo is attached at a different place in the mechanical structure of the arm. Each servo moves through different angles to pick up the object and drop it at the desired location. Each servo has a life zone (the range in which it can move) and a dead zone (the range in which it cannot move). After experimentation we obtained the results shown in Fig. 6 and Table II, which describe the life and dead zones of the robotic arm.

TABLE II. THE LIFE AND DEAD ZONES OF THE SERVOS

Servo | Life zone    | Dead zone
1     | 0° to 310°   | 311° to 360°
2     | 40° to 320°  | 0° to 39°, 321° to 360°
3     | 50° to 290°  | 0° to 49°, 291° to 360°
4     | 0° to 230°   | 231° to 360°
5     | 0° to 300°   | 301° to 360°
6     | 0° to 180°   | 181° to 360°

Figure 6. The life and dead zones of the servos.
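
Table II suggests a natural software guard: clamp every commanded angle into the servo's life zone before sending it, so the arm never drives into a dead zone. The following sketch is ours, not the paper's; only the zone boundaries are taken from Table II:

```python
# Clamp commanded servo angles into the life zones of Table II.
LIFE_ZONES = {1: (0, 310), 2: (40, 320), 3: (50, 290),
              4: (0, 230), 5: (0, 300), 6: (0, 180)}  # degrees, per Table II

def clamp_to_life_zone(servo, angle):
    lo, hi = LIFE_ZONES[servo]
    return max(lo, min(hi, angle))

assert clamp_to_life_zone(2, 30) == 40    # 30° is in servo 2's dead zone
assert clamp_to_life_zone(4, 180) == 180  # within the life zone: unchanged
```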
C. Testing the Accuracy of the Grasping Object

TABLE III. TESTING THE ACCURACY OF THE GRASPING OBJECT

Parameters     | Robotic Arm
Number of axes | 6 axes
Payload        | 3 kg
Reach          | 500 mm
Weight         | 11 kg
Speed          | 1 m/s

VII. CONCLUSION

In this paper we present our experience with a 6 DOF articulated robotic manipulator and a 3D vision system. We developed a robotic arm which can perform multifunctional tasks with the help of computer vision: it directs all axes of the manipulator towards a desired object and unloads it at a desired location. A 3D vision system, based on a camera and a computer vision algorithm, recognizes object deformation and spatial coordination/deviation from the original training. The 3D visualization system is able to detect objects as well as their distance from the end-effector and transmits the signals to the drive system. The vision system requires separate computing hardware capable of processing complex vision algorithms.

VIII. FUTURE WORK

Today, robots are doing human labor in all kinds of places. Best of all, they are doing the jobs that are unhealthy or impractical for people, which frees up workers for more skilled jobs, including the programming, maintenance, and operation of robots.

Future work aims at implementing a wireless protocol so that an operator at one end can control the robotic arm wirelessly at the other end, and voice recognition to control the robotic arm by voice. The most advanced goal is a mind-controlled robotic arm, connecting to existing neurons or to electrodes implanted in the human brain to decode signals from the brain and use them to control the robotic arm.

ACKNOWLEDGMENT

This work is supported by the National Natural Science Foundation of China (No. 61174016).

REFERENCES

[1] A. E. Kuzucuoglu and G. Erdemir, "Development of a web-based control and robotic applications laboratory for control engineering education," Information Technology and Control, vol. 40, no. 4, pp. 352–358, 2011.
[2] C.-Y. Tsai, C.-C. Wong, C.-J. Yu, C.-C. Liu, and T.-Y. Liu, "A hybrid switched reactive-based visual servo control of 5-DOF robot manipulators for pick-and-place tasks," IEEE Systems Journal, vol. 9, no. 1, pp. 119–130, 2015.
[3] A. D. Kulkarni, Computer Vision and Fuzzy-Neural Systems. Prentice Hall PTR, 2001.
[4] M. Ebner, "A parallel algorithm for color constancy," Journal of Parallel and Distributed Computing, vol. 64, no. 1, pp. 79–88, 2004.
[5] D. Forsyth and J. Ponce, Computer Vision: A Modern Approach. Prentice Hall Professional Technical Reference, 2002.
[6] G. Bradski, A. Kaehler, and V. Pisarevsky, "Learning-based computer vision with Intel's open source computer vision library," Intel Technology Journal, vol. 9, no. 2, 2005.
[7] C. H. Lampert, H. Nickisch, and S. Harmeling, "Learning to detect unseen object classes by between-class attribute transfer," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2009), IEEE, 2009, pp. 951–958.
[8] A. Saxena, J. Driemeyer, and A. Y. Ng, "Robotic grasping of novel objects using vision," The International Journal of Robotics Research, vol. 27, no. 2, pp. 157–173, 2008.
[9] M. Kazemi, K. K. Gupta, and M. Mehrandezh, "Randomized kinodynamic planning for robust visual servoing," IEEE Transactions on Robotics, vol. 29, no. 5, pp. 1197–1211, 2013.
[10] D. Song, C. H. Ek, K. Huebner, and D. Kragic, "Task-based robot grasp planning using probabilistic inference," IEEE Transactions on Robotics, vol. 31, no. 3, pp. 546–561, 2015.
[11] R. Szabó and A. S. Gontean, "Full 3D robotic arm control with stereo cameras made in LabVIEW," in FedCSIS Position Papers, 2013, pp. 37–42.
[12] Y. Hasuda, S. Ishibashi, H. Kozuka, H. Okano, and J. Ishikawa, "A robot designed to play the game 'rock, paper, scissors'," in IEEE International Symposium on Industrial Electronics (ISIE 2007), IEEE, 2007, pp. 2065–2070.
[13] A. Shaikh, G. Khaladkar, R. Jage, and T. P. J. Taili, "Robotic arm movements wirelessly synchronized with human arm movements using real time image processing," in Texas Instruments India Educators' Conference (TIIEC 2013), IEEE, 2013, pp. 277–284.
[14] S. Manzoor, R. U. Islam, A. Khalid, A. Samad, and J. Iqbal, "An open-source multi-DOF articulated robotic educational platform for autonomous object manipulation," Robotics and Computer-Integrated Manufacturing, vol. 30, no. 3, pp. 351–362, 2014.
[15] T. P. Cabre, M. T. Cairol, D. F. Calafell, M. T. Ribes, and J. P. Roca, "Project-based learning example: controlling an educational robotic arm with computer vision," IEEE Revista Iberoamericana de Tecnologias del Aprendizaje, vol. 8, no. 3, pp. 135–142, 2013.

