Design Approach of Visual Image Detection in Rescue Robot System of Urban Search Using Bayesian's Logical Algorithm
I. INTRODUCTION
Visualization and mapping are very difficult tasks when rescuing in an earthquake or similar disaster environment. In a disaster situation, one of the most challenging operations is search and rescue. It is extremely difficult not only because of the highly cluttered but also because of the unstructured nature of the environment. It is also difficult to locate and find the exact position of obstacles and victims in such a highly cluttered environment. In addition, in some scenarios the task of rescuing victims from collapsed structures can be extremely hazardous due to asbestos and dust, the general instability of damaged structures, and, in some cases, the presence of toxic chemicals or radiation in the environment. Moreover, rescuing victims from collapsed structures sometimes requires entering through small voids.

Fig.1: Example of an unknown disaster environment and a rescue robot used in an earthquake-like disaster situation [1]

II. RELATED WORK

The authors of [1] proposed the development of a unique hierarchical reinforcement learning based controller, a semi-autonomous architecture for a rescue robot that searches for victims and explores a cluttered environment. It was also proposed to overcome the limitations of both teleoperated robots and fully autonomous robots. They built a robot equipped with real-time 3-D mapping sensors and five infrared sensors.
Fig.3: Path planning with the help of the Feature Extraction algorithm

Fig 4: Comparative analysis of different rescue robots

Sr. No | Type of robot used in rescue treatment | Cost | Reliability | Efficiency | Navigation
1 | A Learning-Based Semi-Autonomous Controller for Robotic Exploration | Low | Easy | More | More
2 | Alive human body detection system using an autonomous mobile rescue robot | High | Difficult | More | More
3 | Fully autonomous robot used in rescue operation | High | Easy | Moderate | More
4 | Multisensor Fusion-Based Mapping and Moving Object Detection for Intelligent Service Robots | High | Difficult | Moderate | Moderate
5 | 3-D Mapping With an RGB-D Camera | High | Easy | Less | Less

III. LOCATING THE VICTIMS AND OBSTACLES

Finding the exact position of the trapped victims is of first importance for their swift and safe approach and rescue. The methods in current use and their effectiveness are described as follows.

3.1. USE OF SOUND DETECTING DEVICES

These devices are quite effective provided the victims have retained consciousness and are able to produce sound that will aid their rescue. The absence of any other noise is also a necessary condition, something achieved with great difficulty due to both the crowd of people gathered around the ruins who are not participating in the rescue and the noise produced by machinery and vehicles.

In Fig. 2, we are designing a rescue robot which will build a map of the environment by itself.
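Since the robot is described as building a map of the environment by itself, a minimal occupancy-grid sketch may help illustrate the idea. The grid resolution, sensor model and update rule below are assumptions for illustration, not the method used in the paper.

```python
# Hedged sketch of self-built mapping: an occupancy grid updated from range
# readings (grid resolution and sensor model are assumptions for illustration).
import numpy as np

GRID = np.zeros((100, 100), dtype=np.int8)   # 0 = unknown, 1 = free, 2 = occupied
CELL = 0.1                                    # metres per grid cell (assumed)

def update_map(robot_xy, bearing_rad, range_m):
    """Mark cells along one range reading: free up to the hit, occupied at it."""
    steps = int(range_m / CELL)
    for i in range(steps + 1):
        x = int((robot_xy[0] + i * CELL * np.cos(bearing_rad)) / CELL)
        y = int((robot_xy[1] + i * CELL * np.sin(bearing_rad)) / CELL)
        if 0 <= x < GRID.shape[0] and 0 <= y < GRID.shape[1]:
            GRID[x, y] = 2 if i == steps else 1

# example: one infrared reading of 1.5 m straight ahead of the robot
update_map((5.0, 5.0), 0.0, 1.5)
```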
IV. IMPLEMENTATION
1.1. IMAGE SEGMENTATION
We are using a server-client model, which plays an important role in image segmentation. In the proposed software model, a wireless camera is built into the client side, that is, into our rescue robot, and the server consists of a broadcast wireless sensor network. Our rescue robot captures the image of any live victims or obstacles and then directly broadcasts it to the remote user system acting as the server. When the client side captures the image of victims or obstacles, it segments the image into small segments, draws the polyline (shown in yellow), builds a localized map, and sends it to the server side. The server reads the image segmentation, collects the information using the map, and finds the exact position of any victims in the disaster environment.
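A minimal sketch of this client-side step is given below, assuming OpenCV for segmentation and a plain TCP socket for the broadcast; the server address, threshold choice and "largest contour is the victim/obstacle" rule are illustrative assumptions, not the paper's exact implementation.

```python
# Client-side sketch: capture a frame, extract a victim/obstacle outline as a
# polyline, and broadcast it to the remote server over a socket (assumed setup).
import cv2
import json
import socket

SERVER_ADDR = ("192.168.1.10", 5000)   # assumed server IP and port

def capture_polyline(camera_index=0):
    cap = cv2.VideoCapture(camera_index)      # wireless camera on the robot
    ok, frame = cap.read()
    cap.release()
    if not ok:
        return None
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)   # assume largest blob is the detection
    poly = cv2.approxPolyDP(largest, 3.0, True)    # simplified polyline around it
    return [(int(x), int(y)) for [[x, y]] in poly]

def broadcast(polyline):
    msg = json.dumps({"polyline": polyline}).encode()
    with socket.create_connection(SERVER_ADDR) as s:   # send segment data to server
        s.sendall(msg)

if __name__ == "__main__":
    poly = capture_polyline()
    if poly:
        broadcast(poly)
```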
We propose free-hand reconstruction of fully 3-D mapping as well as 2-D mapping of the system. We also analyze the influence of many parameters, such as the CO2 sensor, sound detection sensor and obstacle detection sensors, which play a vital role in image acquisition and in segmenting the image into small segments. Segmentation of the given image is done in terms of a matrix format, which makes it easy to calculate the position of the detected victim in the image. We explain the choice of the various feature descriptors as well as the number of validation and visual features and images. The system is made fully available as open source for rescue operations and is widely used in many rescue robotic operations.
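The matrix-format segmentation mentioned above can be illustrated with a short sketch: the image is divided into a grid of cells, cells overlapping the detected victim mask are flagged, and the centroid of the flagged cells gives a coarse position. The grid size and centroid rule are assumptions for illustration.

```python
# Sketch of matrix-format segmentation: flag grid cells that contain part of
# the detection mask and return a coarse pixel position (assumed grid size).
import numpy as np

def matrix_segmentation(mask: np.ndarray, rows: int = 8, cols: int = 8):
    """mask: binary image (H x W) where nonzero pixels belong to the victim."""
    h, w = mask.shape
    grid = np.zeros((rows, cols), dtype=bool)
    for r in range(rows):
        for c in range(cols):
            cell = mask[r * h // rows:(r + 1) * h // rows,
                        c * w // cols:(c + 1) * w // cols]
            grid[r, c] = cell.any()          # cell contains part of the detection
    if not grid.any():
        return grid, None
    rs, cs = np.nonzero(grid)
    # centre of the occupied cells, converted back to pixel coordinates
    cy = (rs.mean() + 0.5) * h / rows
    cx = (cs.mean() + 0.5) * w / cols
    return grid, (cx, cy)
```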
Fig.5: Experimental demonstration of broadcasting the image to the remote server side

In the above figure, we use the map-management system, which localizes the environmental setup and gives the appropriate outcomes. With the help of the map-management system it is not only easy to find the position of obstacles and victims, but the system can also be used in any critical disaster environment to detect and navigate to victims. The semi-autonomous controller is based on a vision-based subsystem and is also used for exploring the environment. The use of a wireless sensor network increases the efficiency of the system, allowing it to operate for a long time without delays.
1.2 OUTCOMES
In fig 4, when the client robot captures the image of any victims, it segments the given image into small segments and sends it to the server side. On the server side there are a start point, a depth point and an end point, which locate the exact position, depth and distance of the victims; this makes it easy to locate the position and provide rescue treatment in the disaster environment. The yellow line marks the boundary of all the end points of the segmented image, and a map of the respective image is utilized. After image segmentation, our robot focuses on how the map should be built to find the exact position of the victims and also their location. The map is built up by joining all the end points of the polyline of the captured image of the given victims.

2.1 MAP-MANAGEMENT SYSTEM

In the map-management system, we propose the vision-based approach as well as the image segmentation and capturing phenomenon. In the map-management system, the sensing of localization and the visual detection of obstacles and victims, whether alive or dead, have recently become a versatile means of building a semi-autonomous robot.
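The outcomes above describe localizing a victim from start, end and depth points, and the map-management step then registers that position on the map. A minimal sketch of one way this could work is shown below; the geometry (image midpoint plus a range measurement projected by camera bearing) and all parameter values are assumptions, not the paper's exact formulation.

```python
# Hedged sketch: estimate a victim's location from start, end and depth points.
# The start/end points bound the segmented region in the image; the depth value
# carries the measured range toward it (field of view and image width assumed).
import math

def locate_victim(start, end, depth_m, fov_deg=60.0, image_width=640):
    """start, end: (x, y) pixel coordinates of the segment boundary.
    depth_m: measured distance to the victim in metres."""
    # midpoint of the segmented region in the image
    mx = (start[0] + end[0]) / 2.0
    my = (start[1] + end[1]) / 2.0
    # horizontal bearing of that midpoint relative to the camera axis
    bearing = math.radians((mx / image_width - 0.5) * fov_deg)
    # project the depth measurement into robot-centred x/y coordinates
    x = depth_m * math.cos(bearing)
    y = depth_m * math.sin(bearing)
    return {"pixel_mid": (mx, my), "bearing_rad": bearing, "position_m": (x, y)}

# example: segment from (200, 240) to (280, 260), measured 3.5 m away
print(locate_victim((200, 240), (280, 260), 3.5))
```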
2.2 OUTCOMES
The figure below shows how the map is made by joining all the end points of the polyline, shown in green. For making a suitable map we require a software part as well as a hardware part. With the help of the hardware connectivity, we focus directly on the output that comes through the map-management system. We require the baud rate and the IP address for the connectivity of the client and the server, as shown in the figure below.

Fig 6: Experimental demonstration of the map-management system
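A short sketch of the connectivity setup and the endpoint-joining step described in section 2.2 is given below. The serial port, baud rate, server address and the use of pyserial are placeholder assumptions, not the paper's actual configuration.

```python
# Sketch of client/server connectivity (hardware + software parts) and the
# joining of polyline end points into one map outline (values are placeholders).
import socket
import serial   # pyserial, assumed for the hardware (robot) link

SERVER_IP, SERVER_PORT = "192.168.1.10", 5000   # software part: client/server link
BAUD_RATE = 115200                              # serial baud rate to the robot hardware

def open_links():
    hw = serial.Serial("/dev/ttyUSB0", BAUD_RATE, timeout=1)    # hardware part
    net = socket.create_connection((SERVER_IP, SERVER_PORT))    # map-management server
    return hw, net

def build_map(polylines):
    """Join the end points of all received polylines into one map outline."""
    outline = []
    for poly in polylines:
        if poly:
            outline.append(poly[-1])    # take each polyline's end point
    return outline                      # drawn in green by the map viewer
```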
V. CONCLUSION
The main purpose of the proposed system is to segment the given image into small segments and to utilize the map of the given respective image, which makes it easy to find the exact location as well as the position of the victims. The detection and navigation approach maximizes time efficiency and makes the system smarter than a fully controlled robotic system. The outcomes show the analytical view of broadcasting the image using start, end and depth points. The robot will navigate through obstacles with the help of its sensors, camera and its other various mechanical and electrical components. It is also easy to find a live human body in the disaster environment.

VI. REFERENCES

[1] Y. Liu, B. Doroodgar, and G. Nejat, "A Learning-Based Semi-Autonomous Controller for Robotic Exploration of Unknown Disaster Scenes While Searching for Victims," IEEE Transactions on Cybernetics, 2014.
[2] J. Hess, F. Endres, J. Sturm, D. Cremers, and W. Burgard, "3-D Mapping With an RGB-D Camera," IEEE Transactions on Robotics, vol. 30, no. 1, February 2014.
[3] M. Peasgood, C. M. Clark, and J. McPhee, "A Complete and Scalable Strategy for Coordinating Multiple Robots Within Roadmaps," IEEE Transactions on Robotics, vol. 24, no. 2, April 2008.
[4] R. C. Luo and C. C. Lai, "Multisensor Fusion-Based Concurrent Environment Mapping and Moving Object Detection for Intelligent Service Robotics," IEEE Transactions on Industrial Electronics.
[5] S. Bhatia, H. S. Dhillon, and N. Kumar, "Alive human body detection system using an autonomous mobile rescue robot," Proc. Annual IEEE India Conference (INDICON), 2011.
[6] Z. Zhang, G. Nejat, H. Guo, and P. Huang, "A novel 3-D sensory system for robot-assisted mapping of cluttered urban search and rescue environments," Intelligent Service Robotics, vol. 4, no. 2, pp. 119-134, 2011.
[7] L. A. Jeni, Z. Istenes, P. Szemes, and H. Hashimoto, "Robot navigation framework based on reinforcement learning for intelligent space," in Proc. Conference on Human System Interaction, Krakow, Poland, 2008, pp. 761-766.
[8] K. Weekly, A. Tinka, L. Anderson, and A. Bayen, "Autonomous River Navigation Using the Hamilton-Jacobi Framework for Underactuated Vehicles," IEEE Transactions on Robotics, 2014.
[9] K. S. Jones, B. R. Johnson, and E. A. Schmidlin, "Teleoperation through apertures: Passability versus driveability," Journal of Cognitive Engineering and Decision Making, vol. 5, no. 1, pp. 10-28, 2011.
[10] E. A. Antonelo and B. Schrauwen, "On Learning Navigation Behaviors for Small Mobile Robots With Reservoir Computing Architectures," IEEE Transactions on Neural Networks and Learning Systems, 2014.
[11] B. Doroodgar, M. Ficocelli, B. Mobedi, and G. Nejat, "The Search for Survivors: Cooperative Human-Robot Interaction in Search and Rescue Environments Using Semi-Autonomous Robots," in Proc. IEEE ICRA, Anchorage, AK, USA, 2010, pp. 2858-2863.
[12] B. Doroodgar and G. Nejat, "A Hierarchical Reinforcement Learning Based Control Architecture for Semi-Autonomous Rescue Robots in Cluttered Environments," in Proc. IEEE CASE, Toronto, ON, Canada, 2010, pp. 948-953.