Implementing Robots in Defence Through Motion Capture With Mixed Reality
Research paper
Abstract
Our soldiers risk their lives fighting for us, and people working in mines ruin their health. In this paper we show how the technologies of mixed reality and motion capture can provide solutions for replacing humans with robots in such situations. This can save many human lives and is more cost-efficient. Today, motion capture is already used to analyse the responses of military soldiers in order to test their capabilities, and to create animations in movies. We therefore extend these existing capabilities to implement a remote robot control system that allows humans to be replaced by robots.
Keywords: Motion capture suit, Accelerometer, Robots, Motion Capture, Virtual Reality, Augmented Reality.
… of Kinect SDK, consists of three joints and the links connecting them. In this system, certain components that interact directly with the users, such as sensors and actuators, play a major role [18]. Users manipulate data through these components [18].

We can implement the system either with a readily available smart motion-capture suit or with real-time depth sensors such as the Kinect sensor. We will now discuss the applications that can be built, starting with the data-flow sequence.
3. Implementation
Methodology: The data from the motion capture suit is received in Unreal Engine 4. Using Unreal Engine 4, the skeleton data, such as the position and rotation of the bones, is uploaded to a cloud database. The robot accesses this cloud database and checks the orientation of its own skeleton. A delta function calculates the difference between the orientation of its bones and the data it receives; based on the result of the delta function, the robot knows which adjustments it must perform in order to exactly replicate the human skeleton data it receives.
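As a concrete illustration of this delta function, the following Python sketch compares a target bone rotation received from the cloud with the robot's current rotation and returns the per-axis correction to apply. The field names, the degree-based representation and the simple per-axis subtraction are assumptions made for illustration, not the exact implementation used in this work.

from dataclasses import dataclass

@dataclass
class BoneRotation:
    """Rotation of one bone in degrees: P (about X), Y (about Y), R (about Z)."""
    p: float
    y: float
    r: float

def wrap_degrees(angle: float) -> float:
    """Wrap an angle difference into the range [-180, 180)."""
    return (angle + 180.0) % 360.0 - 180.0

def delta(target: BoneRotation, current: BoneRotation) -> BoneRotation:
    """Delta function: per-axis rotation the robot must apply so that its
    bone matches the human bone orientation received from the cloud."""
    return BoneRotation(
        p=wrap_degrees(target.p - current.p),
        y=wrap_degrees(target.y - current.y),
        r=wrap_degrees(target.r - current.r),
    )

# Example using two neck samples from the orientation listing in this paper.
target = BoneRotation(p=79.874191, y=20.284521, r=113.799759)
current = BoneRotation(p=81.947327, y=19.093269, r=111.823235)
print(delta(target, current))  # per-axis corrections the robot should apply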
Figure 1 shows the data-flow sequence of the proposed work. The robot carries a 360-degree camera that streams its view to the mixed reality device: the robot uploads the video data to the cloud, and the application on the user's system accesses the cloud database, retrieves the video and displays it through the mixed reality device.
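The following is a minimal sketch of the video leg of this data flow, assuming a hypothetical HTTP endpoint for the cloud service; the URL, the JPEG encoding and the frame-by-frame upload are illustrative choices rather than the implementation described here.

import cv2        # OpenCV, used to grab and encode camera frames
import requests   # simple HTTP client for the hypothetical cloud endpoint

CLOUD_URL = "https://example-cloud/robot/video"   # hypothetical endpoint

def stream_camera_to_cloud(device_index: int = 0) -> None:
    """Read frames from the robot's (360-degree) camera and upload them
    one by one to the cloud, where the mixed reality client fetches them."""
    capture = cv2.VideoCapture(device_index)
    try:
        while True:
            ok, frame = capture.read()
            if not ok:
                break
            # Encode the frame as JPEG before sending it over HTTP.
            ok, jpeg = cv2.imencode(".jpg", frame)
            if not ok:
                continue
            requests.post(CLOUD_URL, data=jpeg.tobytes(),
                          headers={"Content-Type": "image/jpeg"}, timeout=5)
    finally:
        capture.release()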
The skeleton data uploaded to the cloud database takes the form of per-bone orientation records, for example:

RIGHT LEG ORIENTATION 1:
Rotation: P=-85.344604; Y=-158.969406; R=-107.629066
Scale: X=1.000; Y=1.000; Z=1.000

RIGHT LEG ORIENTATION 2:
Translation: X=-755.751; Y=380.589; Z=185.959
Rotation: P=-28.117653; Y=-177.042282; R=-93.128624
Scale: X=1.000; Y=1.000; Z=1.000

RIGHT LEG ORIENTATION 3:
Translation: X=-808.405; Y=376.774; Z=184.261
Rotation: P=-44.200928; Y=179.734100; R=-84.927734
Scale: X=1.000; Y=1.000; Z=1.000

NECK ORIENTATION 1:
Translation: X=-749.172; Y=368.652; Z=281.361
Rotation: P=79.874191; Y=20.284521; R=113.799759
Scale: X=1.000; Y=1.000; Z=1.000

NECK ORIENTATION 2:
Translation: X=-745.790; Y=368.021; Z=277.561
Rotation: P=81.947327; Y=19.093269; R=111.823235
Scale: X=1.000; Y=1.000; Z=1.000

NECK ORIENTATION 3:
Translation: X=-746.839; Y=373.060; Z=282.840
Rotation: P=81.757683; Y=-5.845208; R=85.839424
Scale: X=1.000; Y=1.000; Z=1.000

NAMING CONVENTION:
X, Y and Z are the translation and scale values of the coordinates:
X - X coordinate
Y - Y coordinate
Z - Z coordinate
P, Y and R are the rotations about the X, Y and Z axes:
P - rotation about the X axis
Y - rotation about the Y axis
R - rotation about the Z axis
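As an illustration of how one such record might be serialized for the cloud database, the sketch below encodes a single sample as a JSON document; the field names and document shape are assumptions for illustration, not the schema used in this work.

import json

# One sample from the listing above (RIGHT LEG ORIENTATION 2), following the
# naming convention: X/Y/Z translation and scale, P/Y/R rotation.
sample = {
    "bone": "right_leg",
    "sample": 2,
    "translation": {"x": -755.751, "y": 380.589, "z": 185.959},
    "rotation": {"p": -28.117653, "y": -177.042282, "r": -93.128624},
    "scale": {"x": 1.0, "y": 1.0, "z": 1.0},
}

# Serialize for upload to the cloud database; the robot later deserializes the
# same document and feeds the rotation into its delta function.
payload = json.dumps(sample)
restored = json.loads(payload)
assert restored["rotation"]["p"] == sample["rotation"]["p"]
print(payload)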
By using the above concepts, we can employ robots in the military so that people fight from a place far away from the battlefield. Since motion capture is restricted to a limited space, joysticks can be used to command the robot's movements. If soldiers are trained to operate such robots, there will be no need for a human being to be present on the battlefield.

5.2. Real-time military training via battlefield simulation
Using the above technologies, soldiers can be sent into a virtual battlefield and their behaviour in battle can be tested. The experience delivered by mixed reality will be close to the real world. As in the multiplayer games we play, soldiers can be divided into two groups that fight against each other. In countries such as Brazil, military simulations are already in use. At present these simulations are fed static data, and a device performs calculations on that data to produce results such as the illusion of an enemy being present. By adding networking and motion capture to create a multiplayer environment, we can generate a real-time war situation in which one soldier fights against another.
5.3. Using Mixed reality in Education Technology in Education (MITE), 2013 IEEE International Confer-
Mixed reality combined with motion capture can be used to simulate operations, showing students practically in 3D what they otherwise only read about. This helps them visualize what they learn and enhances their understanding.
6. Conclusion
References
[1] Jacky C. P. Chan, Howard Leung, Jeff K. T. Tang, and Taku Komura, "A Virtual Reality Dance Training System Using Motion Capture Technology", IEEE Transactions on Learning Technologies, Vol. 4, No. 2, April-June 2011.
[2] Chris Bregler, "Motion Capture Technology for Entertainment", IEEE Signal Processing Magazine, November 2007.
[3] D. Vlasic, R. Adelsberger, G. Vannucci, J. Barnwell, M. Gross, W. Matusik, and J. Popović, "Practical motion capture in everyday surroundings", in Proc. SIGGRAPH 2007, ACM, San Diego, CA, 2007.
[4] S. Yabukami, H. Kikuchi, M. Yamaguchi, K. I. Arai, K. Takahashi,
A. Itagaki, N. Wako. "Motion capture system of magnetic markers
using three-axial magnetic field sensor". IEEE Transactions on
Magnetics, Vol: 36, Issue: 5, pp: 3646-3648. 2000.
[5] C. Chu, O. C. Jenkins, M. J. Matarić, "Markerless Kinematic Model and Motion Capture from Volume Sequences", Proceedings of IEEE Computer Vision and Pattern Recognition, Vol. 2, pp. 475-482, 2003.
[6] B. Rosenhahn, T. Brox, H. Seidel, "Scaled Motion Dynamics for Markerless Motion Capture", IEEE Conference on Computer Vision and Pattern Recognition, 2007.
[7] Ascension Technology, "MotionStar (Tethered Model)", 2011.
[8] J. Shotton, A. Fitzgibbon, M. Cook, T. Sharp, M. Finocchio, "Real-Time Human Pose Recognition in Parts from Single Depth Images", Computer Vision and Pattern Recognition, 2011.
[9] Michael Gleicher, "Animation From Observation: Motion Capture and Motion Editing", Computer Graphics, 33(4), pp. 51-54, Special Issue on Applications of Computer Vision to Computer Graphics.
[10] Muhammed, Mastura, Mohd Muji, Siti Zarina, Siti Rozaini Zakaria, and Mohd Zarar Mohd Jenu, "MR999-E wireless robotic arm", 2006.
[11] Stevens, J., Kincaid, P., & Sottilare, R. (2015). Visual modality research in virtual and mixed reality simulation. The Journal of Defense Modeling and Simulation, 12(4), 519-537.
[12] Insinna, V., "Defense simulation firms turn to commercial sector for inspiration", National Defense, February 2014, pp. 20-21.
[13] Amorim, J. A., Matos, C., Cuperschmid, A. R., Gustavsson, P. M., & Pozzer, C. T. (2013). Augmented reality and mixed reality technologies: Enhancing training and mission preparation with simulations. In NATO Modelling and Simulation Group (MSG) Annual Conference 2013 (MSG-111), 2013.
[14] Joaquin Ortiz, Ivan Godinez, Rosa I. Peña and Immanuel Edinbarough (2005). Robotic Arm Vision System. Texas, United States.
[15] Olivier Michel, "Cyberbotics Ltd. Webots™: Professional Mobile Robot Simulation".
[16] Liarokapis, M. V., Artemiadis, P. K., Kyriakopoulos, K. J., "Mapping human to robot motion with functional anthropomorphism for teleoperation and telemanipulation with robot arm hand systems".
[17] Megalingam, Rajesh Kannan, Nihil Saboo, Nitin Ajithkumar, Sreeram Unny, and Deepansh Menon, "Kinect based gesture controlled Robotic arm: A research work at HuT Labs", in Innovation and Technology in Education (MITE), 2013 IEEE International Conference in MOOC, pp. 294-299, IEEE, 2013.
[18] Pachipala Yellamma, V. Saranya Manasa, A. Ramya, G. V. Kalyani, Challa Narasimham, "Controlling and monitoring home appliances through cloud using IoT", PONTE International Journal of Sciences and Research, Vol. 74, No. 2, Feb 2018. DOI: 10.21506/j.ponte.2018.2.8