


International Journal of Engineering & Technology, 7 (2.32) (2018) 114-116

International Journal of Engineering & Technology

Website: www.sciencepubco.com/index.php/IJET
DOI: 10.14419/ijet.v7i2.32.15382

Research paper

Implementing Robots in Defence Through Motion Capture with Mixed Reality

Pachipala Yellamma1,5, Ch. Madhav Bharadwaj2, K. R. Krishna Sai3, Challa Narasimham4

1,2,3 Department of CSE, Koneru Lakshmaiah Educational Foundation, Vaddeswaram, Guntur, Andhra Pradesh, India
4 Department of CSE, Vignan's Institute of Information Technology, Duvada, Vishakhapatnam
5 Department of Computer Science, Bharathiar University, Coimbatore, Tamil Nadu (affiliation of the first author)
*Corresponding author e-mail: [email protected]

Abstract

Our soldiers fight for us at the risk of their lives, and people working in mines ruin their health. In this paper we show how mixed reality and motion capture technology can provide a solution for replacing humans with robots. This can save many human lives and is more cost-efficient. Today, motion capture is used to analyse the responses of military soldiers when testing their capabilities, and to create animations in movies. We extend these existing capabilities to implement a remote robot control system that allows us to replace humans with robots.

Keywords: Motion capture suit, Accelerometer, Robots, Motion Capture, Virtual Reality, Augmented Reality.

1. Introduction

We can observe the world changing rapidly; technology is evolving day by day. Today's world lets us create our own ideas, step inside them, and design our own worlds [1]. The technology that allows us to bridge the gap between the virtual world and the real world is virtual reality. Scanning real-world data and performing actions based on the scanned data is called augmented reality [2]. A small example of an augmented reality application: we give an image to the computer and program what it should do when it scans that image; from then on, whenever the computer sees that image through its camera, it performs those computations [3]. Virtual reality is used for architectural visualization, as a gaming platform, and in education [4]. Augmented reality is today being used to teach complex lessons with visual 3D models and animations.

Motion capture technology sends our skeleton data, i.e. the orientation of our skeleton (the location and rotation of its joints). A simple application of motion capture is the animated movies and video games that we play [5]. In the next phase of this paper we will see how much further motion capture technology can be used [6]. Robotics is a word we hear every day as engineers; it is a field in which robots are implemented instead of humans to do repetitive jobs [7]. Sensors are the physical devices we use to measure a physical quantity [8]. There are several ways to implement motion capture technology [9].

The proposed method is to design human-controlled intelligent devices that will work like soldiers during wartime. The research work is carried out to save human lives by substituting human beings with robots. The robot will act exactly like the person who is controlling it, regardless of that person's location: a soldier can control it from a remote location and all the fighting will be done by the robot. We combine the concepts of a remote-controlled robot arm with motion capture to create wireless motion-controlled robots.

2. Literature Survey

Remote-controlled robots are being implemented in various fields such as medicine and the automobile industry. Mastura binti Muhammed et al. (2006) introduced the MR-999-E wireless robotic arm, modifying a remote operating system for a robotic arm by means of infrared sensors for remote monitoring [10]. It has the constraint that infrared can only communicate over a small range. Stevens J. et al. (2015) focused particularly on providing groundwork for optimal visual modality in virtual and mixed reality simulations [11]. Amorim et al. (2013) worked on training to guarantee law and order while preparing soldiers and officers for interventions, including in urban areas [12][13]. To allow such training, this service relies on physically built sites where soldiers train how to enter houses, how to shoot at short range, and how to move and take shelter while going up a hill with many houses and corridors on the way. Joaquin Ortiz et al. (2005) focused on a robotic arm that can differentiate the colour of a golf ball, using LabVIEW as the program controlling the robot [14]. On the other hand, LabVIEW becomes inefficient when designing complex control algorithms, and this affects the results of the system [15]. Liarokapis et al. (2015) worked on the angles for a 3D representation of the human arm. The angles thus obtained are sent over a serial communication port to an Arduino microcontroller, which in turn generates signals that are sent to the servo motors [16]. The servo motors rotate based on the angles given as input, and the combined motion of the servos results in a complete robotic arm movement that mimics the human arm movement. Megalingam et al. (2013) observe the motion of the user's arm using a Kinect [17]. The skeletal image of the arm, obtained using the "Kinect Skeletal Image" project of the Kinect SDK, consists of three joints and the links connecting them. In this system there are components that directly interact with users, such as sensors and actuators, which play a major role [18]. Users manipulate data through these components [18].
We can implement motion capture using a readily available smart suit or using real-time depth sensors such as the Kinect. We will now discuss the components of the proposed system, starting with the data flow sequence.
3. Implementation

Methodology: The data from the motion capture suit is received in Unreal Engine 4, which stores the skeleton data, such as the position and rotation of each bone, in a cloud database. The robot accesses this cloud database and checks the orientation of its own skeleton. A delta function calculates the difference between the orientation of its bones and the data it receives; based on the result of the delta function, the robot knows the movements it must perform in order to exactly replicate the received human skeleton data. A sketch of such a delta function is given below.
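To make the delta function concrete, here is a minimal sketch in Python. The function names, the dictionary layout, and the use of per-bone pitch/yaw/roll angles in degrees are our illustrative assumptions; the paper does not give an implementation.

```python
def angle_delta(current: float, target: float) -> float:
    """Smallest signed difference between two angles, in degrees."""
    return (target - current + 180.0) % 360.0 - 180.0


def delta_function(robot_pose: dict, human_pose: dict) -> dict:
    """Per-bone (P, Y, R) corrections the robot must apply.

    Both arguments map bone names to (P, Y, R) rotations in degrees,
    following the P/Y/R convention used in Section 4.
    """
    deltas = {}
    for bone, target in human_pose.items():
        current = robot_pose.get(bone, (0.0, 0.0, 0.0))
        deltas[bone] = tuple(angle_delta(c, t) for c, t in zip(current, target))
    return deltas


# Example with two neck samples from Section 4: the robot must rotate its
# neck by roughly (+2.07, -1.19, -1.98) degrees to match the human pose.
corrections = delta_function(
    {"neck": (79.874191, 20.284521, 113.799759)},   # robot's current pose
    {"neck": (81.947327, 19.093269, 111.823235)},   # received human pose
)
```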
Figure 1 shows the data flow sequence of the proposed work. The robot carries a 360-degree camera, which streams its view to the mixed reality device: the robot sends the video data to the cloud, and the application on the user's system accesses the cloud database, retrieves the video, and displays it via the mixed reality device.

Figure 1: Data flow sequence
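As a rough illustration of this video path, the sketch below pushes frames from the robot side and polls them from the user side over HTTP. The endpoint URL, the `camera` and `mr_display` objects, and the per-frame polling design are hypothetical placeholders, not part of the paper.

```python
import time

import requests

CLOUD_URL = "https://example.invalid/robot-stream"  # hypothetical endpoint


def robot_side(camera) -> None:
    """Robot: push each 360-degree camera frame to the cloud store."""
    while True:
        frame: bytes = camera.read()  # raw encoded frame from the camera
        requests.post(f"{CLOUD_URL}/frames", data=frame, timeout=5)


def user_side(mr_display) -> None:
    """User system: poll the cloud store and show frames on the MR device."""
    while True:
        resp = requests.get(f"{CLOUD_URL}/frames/latest", timeout=5)
        if resp.ok:
            mr_display.show(resp.content)
        time.sleep(1 / 30)  # aim for roughly 30 frames per second
```

A real deployment would likely use a streaming protocol such as WebRTC or RTSP rather than per-frame HTTP requests, but the structure of the data flow is the same.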

The position, rotation, and movement of the robots are tracked using a gyroscope.
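One simple way to realize this tracking is to integrate the gyroscope's angular-velocity readings over time, as in the sketch below; the `gyro.read()` interface and the fixed sampling period are assumptions for illustration.

```python
def track_orientation(gyro, dt: float = 0.01):
    """Yield (P, Y, R) in degrees by integrating gyroscope rates.

    `gyro.read()` is assumed to return angular velocity in degrees per
    second about the X, Y and Z axes, sampled every `dt` seconds.
    """
    p = y = r = 0.0
    while True:
        wx, wy, wz = gyro.read()
        p += wx * dt  # pitch: rotation about the X axis
        y += wy * dt  # yaw:   rotation about the Y axis
        r += wz * dt  # roll:  rotation about the Z axis
        yield p, y, r
```

Pure integration drifts over time; a practical implementation would correct the estimate with an accelerometer or magnetometer, for example via a complementary or Kalman filter.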

Algorithm for synchronizing robot skeleton data with human skeleton data:

Step 1: Enter an infinite loop.
Step 2: Get the skeletal data through motion capture.
Step 3: Send it to cloud storage.
Step 4: The robot controller accesses the cloud storage.
Step 5: Calculate the difference between the current orientation of the robot and the received human skeletal data.
Step 6: Change the orientation of the robot skeleton so that it matches the orientation of the human skeleton.

The operation of the algorithm is shown in Figure 2.

Figure 2: Synchronizing human skeleton data with robot skeleton data
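The six steps can be summarized in a short control loop. The sketch below is illustrative only: `mocap`, `cloud`, and `robot` are hypothetical interfaces, `delta_function` is the sketch from Section 3, and in a real deployment Steps 2-3 would run on the operator's side while Steps 4-6 run on the robot controller.

```python
import time


def synchronize(mocap, cloud, robot, period: float = 1 / 30) -> None:
    """Steps 1-6: mirror the human skeleton onto the robot via the cloud."""
    while True:                                        # Step 1: infinite loop
        frame = mocap.capture_skeleton()               # Step 2: capture skeletal data
        cloud.upload("skeleton/latest", frame)         # Step 3: send to cloud storage

        target = cloud.download("skeleton/latest")     # Step 4: controller reads cloud
        deltas = delta_function(robot.current_pose(),  # Step 5: orientation difference
                                target)
        robot.apply_rotations(deltas)                  # Step 6: match the human skeleton

        time.sleep(period)
```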

4. Experimental Results

The bone orientations recorded during testing are listed below. Each sample gives the translation, rotation, and scale of one bone.

Bone       Sample   Translation (X, Y, Z)          Rotation (P, Y, R)                     Scale (X, Y, Z)
Left leg   1        -747.948, 358.812, 198.658     74.975349, -6.116202, 85.905746        1.000, 1.000, 1.000
Left leg   2        -754.910, 357.337, 190.549     19.244488, -3.306334, 85.766190        1.000, 1.000, 1.000
Left leg   3        -808.125, 361.939, 189.649     13.699016, -2.800477, 87.720711        1.000, 1.000, 1.000
Right leg  1        -752.058, 378.326, 192.364     -85.344604, -158.969406, -107.629066   1.000, 1.000, 1.000
Right leg  2        -755.751, 380.589, 185.959     -28.117653, -177.042282, -93.128624    1.000, 1.000, 1.000
Right leg  3        -808.405, 376.774, 184.261     -44.200928, 179.734100, -84.927734     1.000, 1.000, 1.000
Neck       1        -749.172, 368.652, 281.361     79.874191, 20.284521, 113.799759       1.000, 1.000, 1.000
Neck       2        -745.790, 368.021, 277.561     81.947327, 19.093269, 111.823235       1.000, 1.000, 1.000
Neck       3        -746.839, 373.060, 282.840     81.757683, -5.845208, 85.839424        1.000, 1.000, 1.000

Naming convention: X, Y, and Z are the translation and scale values along the coordinate axes. P, Y, and R are the rotations about the X, Y, and Z axes respectively.
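For reference, each row of the table can be held in a small record like the one sketched below; the class and field names are our own and only mirror the naming convention above.

```python
from dataclasses import dataclass


@dataclass
class BoneSample:
    """One captured bone orientation, i.e. one row of the table above.

    Translation and scale are (X, Y, Z); rotation is (P, Y, R), i.e.
    the rotations about the X, Y and Z axes, in degrees.
    """
    bone: str
    sample: int
    translation: tuple
    rotation: tuple
    scale: tuple = (1.000, 1.000, 1.000)


# First row of the table (left leg, sample 1):
left_leg_1 = BoneSample(
    bone="left leg",
    sample=1,
    translation=(-747.948, 358.812, 198.658),
    rotation=(74.975349, -6.116202, 85.905746),
)
```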
5. Applications

Our proposed algorithm is also applicable to real-life applications such as the following.

5.1. Substitute Humans via Robots

In this application we take the data from the motion capture devices and store it in the cloud. Generally this is the data from the depth sensor, which includes the skeleton offset, bone rotations, and so on. This data is received on the other side by accessing the common cloud storage and is passed to the control system of the robot. The control system calculates the difference between the present position and orientation of the robot and the data from the cloud, then performs the necessary movement operations to reach the desired orientation of the robot's skeleton. With this application we can send robots to do construction work. It can also be used for meetings: rather than travelling long distances to attend a meeting, one can use this application.
By using the above concepts we can implement robots in the military, where people fight from a place far away from the battlefield. Since motion capture is confined to a limited space, joysticks can be used for locomotion. If soldiers are trained in operating such robots, there will be no need for a human being to be on the battlefield.

5.2. Real-time Military Training via Battlefield Simulation

Using the above technologies, soldiers can be sent into a virtual battlefield and tested on their behaviour in battle. The experience from mixed reality will be close to the real world. As in the multiplayer games we play, soldiers can be divided into two groups that fight each other. In countries like Brazil, military simulations are already in use; at present they feed in static data, and the device performs calculations on that data to produce results such as the illusion of an enemy being present. By adding networking and motion capture to create a multiplayer environment, we can generate a real-time war situation in which one soldier fights against another.

5.3. Using Mixed Reality in Education

Mixed reality with motion capture can be used to simulate operations, showing students practically, in 3D, what they read about. This helps them visualize what they learn and enhances their understanding.

6. Conclusion

This paper mainly works on the effective utilization of mixed reality and motion capture. The robot accurately reproduces the actions of the human who is controlling it, regardless of that human's location: a soldier can operate it from a remote location and all the fighting is done by the robot. The system combines a remote-controlled robot arm with motion capture to create wireless motion-controlled robots. Through manual control, we can make robots perform dangerous tasks such as fighting a battle, working in coal mines, or working on construction sites. This work can also be applied to architectural visualization, gaming platforms, education (teaching complex lessons with visual 3D models and animations), and the film industry.

References
[1] Jacky C.P. Chan, Howard Leung, Jeff K.T. Tang, and Taku Komura, "A Virtual Reality Dance Training System Using Motion Capture Technology", IEEE Transactions on Learning Technologies, Vol. 4, No. 2, April-June 2011.
[2] Chris Bregler, "Motion Capture Technology for Entertainment", IEEE Signal Processing Magazine, November 2007.
[3] D. Vlasic, R. Adelsberger, G. Vannucci, J. Barnwell, M. Gross, W. Matusik, and J. Popović, "Practical motion capture in everyday surroundings", in Proc. SIGGRAPH 2007, ACM, San Diego, CA, 2007.
[4] S. Yabukami, H. Kikuchi, M. Yamaguchi, K. I. Arai, K. Takahashi, A. Itagaki, N. Wako, "Motion capture system of magnetic markers using three-axial magnetic field sensor", IEEE Transactions on Magnetics, Vol. 36, Issue 5, pp. 3646-3648, 2000.
[5] C. Chu, O. C. Jenkins, M. J. Mataric, "Markerless Kinematic Model and Motion Capture from Volume Sequences", Proceedings of IEEE Computer Vision and Pattern Recognition, Vol. 2, pp. 475-482, 2003.
[6] B. Rosenhahn, T. Brox, H. Seidel, "Scaled Motion Dynamics for Markerless Motion Capture", IEEE Conference on Computer Vision and Pattern Recognition, 2007.
[7] Ascension Technology, "MotionStar (Tethered Model)", 2011.
[8] J. Shotton, A. Fitzgibbon, M. Cook, T. Sharp, M. Finocchio, "Real-Time Human Pose Recognition in Parts from Single Depth Images", Computer Vision and Pattern Recognition, 2011.
[9] Michael Gleicher, "Animation From Observation: Motion Capture and Motion Editing", Computer Graphics, 33(4), pp. 51-54, Special Issue on Applications of Computer Vision to Computer Graphics.
[10] Muhammed, Mastura, Mohd Muji, Siti Zarina, Siti Rozaini Zakaria, and Mohd Zarar Mohd Jenu, "MR999-E wireless robotic arm", 2006.
[11] Stevens, J., Kincaid, P., and Sottilare, R., "Visual modality research in virtual and mixed reality simulation", The Journal of Defense Modeling and Simulation, 12(4), pp. 519-537, 2015.
[12] Insinna, V., "Defense simulation firms turn to commercial sector for inspiration", National Defense, February 2014, pp. 20-21.
[13] Amorim, J. A., Matos, C., Cuperschmid, A. R., Gustavsson, P. M., and Pozzer, C. T., "Augmented reality and mixed reality technologies: Enhancing training and mission preparation with simulations", in NATO Modelling and Simulation Group (MSG) Annual Conference 2013 (MSG-111), 2013.
[14] Joaquin Ortiz, Ivan Godinez, Rosa I. Peña, and Immanuel Edinbarough, "Robotic Arm Vision System", Texas, United States, 2005.
[15] Olivier Michel, "Cyberbotics Ltd. Webots™: Professional Mobile Robot Simulation".
[16] Liarokapis, M.V., Artemiadis, P.K., Kyriakopoulos, K.J., "Mapping human to robot motion with functional anthropomorphism for teleoperation and telemanipulation with robot arm hand systems".
[17] Megalingam, Rajesh Kannan, Nihil Saboo, Nitin Ajithkumar, Sreeram Unny, and Deepansh Menon, "Kinect based gesture controlled Robotic arm: A research work at HuT Labs", in Innovation and Technology in Education (MITE), 2013 IEEE International Conference, pp. 294-299, IEEE, 2013.
[18] Pachipala Yellamma, V. Saranya Manasa, A. Ramya, G. V. Kalyani, Challa Narasimham, "Controlling and monitoring home appliances through cloud using IoT", PONTE International Journal of Sciences and Research, Vol. 74, No. 2, Feb 2018. DOI: 10.21506/j.ponte.2018.2.8.
