ABSTRACT Monitoring safe social distancing and then conducting efficient sterilization in potentially crowded public places are necessary but challenging tasks, especially during the COVID-19 pandemic. This work presents a 3D human-space-based surveillance system that enables a selective cleaning framework. To this end, the proposed AI-assisted perception techniques are deployed on a Toyota Human Support Robot (HSR) equipped with autonomous navigation, a Lidar, and an RGBD vision sensor. A human density map, represented as a heatmap, is constructed to identify areas where the risk of close interaction is likely to be high. The surveillance framework adopts a 3D human-joint tracking technique and an accumulated asymmetrical Gaussian distribution scheme that models human location, size, and direction to quantify human density. The HSR generates the human density map as a grid-based heatmap to perform the safe-distance monitoring task while navigating autonomously inside a pre-built map. The cleaning robot then uses the levels of the generated heatmap to sterilize according to a selective scheme. The framework was tested in public places, including a food court and a wet market. Analyzed with standard performance metrics across various map sizes, the proposed framework saves about 19% of the disinfection time and 15% of the disinfection liquid usage.
INDEX TERMS COVID-19, human space, social distance, cleaning robotics, human support robot.
distance measures is a manual process that can bring the surveillance staff into close proximity with people affected by COVID-19.

One significant aspect that has been studied widely in social safety for spatial interaction is the idea of personal space, or proxemics [4]–[8]. According to [4], people maintain different culturally defined interpersonal distances depending on the type of interaction and the relationship between them. Hall differentiated four zones of interaction distance as follows. Public interaction: public speeches in a crowd, more than 4 m away. Social interaction: business meetings, 1–4 m. Personal interaction: friendly interaction at roughly arm's length, 0.5–1 m. Intimate interaction: about 0.5 m apart.

In this paper, we specifically focus on the zone of personal space, as it is a culturally defined zone of ''spatial insulation'' that people maintain around themselves and others [9]. The work in [10] describes personal space, for the cases of both people approaching each other and standing in line, as asymmetric [11]. Furthermore, [12] argues that personal space is not constant: it depends on individual attributes such as volume, age, gender, and direction of interaction.

Automating the monitoring of social distancing measures is a viable solution. Several efforts worldwide have implemented ad-hoc frameworks to enforce social distancing rules. Some industries recently adopted lightweight wearable devices that employ ultra-wideband (UWB) technology to measure interpersonal distance automatically and alert wearers immediately if they come closer than the required distance [13]. Some countries have adopted ubiquitous technologies, such as Wi-Fi, cellular, and GNSS positioning (localization) systems, to monitor social distance and raise alerts in public and crowded areas [1], [14]. Recently, many countries worldwide have used drones, IoT, and AI-assisted techniques to monitor human density and to predict and signal safe-distance breaches in crowded indoor and outdoor areas [3]. However, these techniques have numerous limitations and perform poorly in dynamic and complicated indoor environments. With advanced high-speed and high-accuracy devices, captured 3D objects [15] have been applied in surveillance applications for quality assessment. Moreover, existing CCTV systems [16]–[18] consisting of monocular RGB cameras do not cover the whole workspace, and CCTV struggles to capture 3D human attributes. In [19], the authors address distance-time encounter patterns in a crowd, which allow a fixed surveillance system to identify social groups, such as families, by imposing adaptive thresholds on the distance-time contact patterns. On the other hand, in [20], an artificial-intelligence-based social distancing surveillance system is presented on the premise that detecting distances between humans and warning them can slow down the spread of the deadly disease. That work identified four essential ethical factors for such a surveillance system: preserving privacy, not targeting particular detected individuals, requiring no human supervisor, and being open source. The work of [21] proposed a deep learning technique to track social distance with a fixed overhead perception system. The system applied transfer learning to YOLOv3, an open-source deep learning object detection framework, using an overhead human data set of video sequences. The proposed system simply uses the Euclidean distance between the centroids of detected bounding boxes to estimate pairwise distances between people. Then, to detect social distance violations, an approximate mapping from physical distance to pixels is set as a fixed threshold. However, the quantification of 3D interaction between human spaces and the utilization of human interaction are not considered in the mentioned references.

In this context, service robots are a viable candidate for monitoring safe-distance measures. Robotics and several other autonomous technologies, such as AGVs, have made great strides in fighting the COVID-19 pandemic [22], [23]. Service robots use AI-assisted technology to deliver medicine to COVID-19 patients, perform safe-entry check-in body temperature measurement, sanitize infected areas, and frequently clean high-touch points such as hospital walls, floors, and door handles [16], [24], [25]. Hence, considering the advantages of service robots and AI-assisted technology, as well as their flexible navigation in vast and complex environments such as shopping malls, food courts, and wet markets, robot systems are gradually replacing humans in these tedious jobs. Performing monitoring and surveillance tasks while cooperating conveniently, through a common operation system, with another service robot such as a cleaning robot is a recent trend.

In this article, based on the literature survey on human space and multi-human interaction, we propose a human safe-distance monitoring technique using the Toyota Human Support Robot (HSR) and an AI-assisted 3D computer vision framework. The computer vision framework is built with a modified OpenPose 3D human tracking algorithm, a depth-image fusion technique, and a Gaussian heatmap scheme, and uses RGBD vision sensor data. The entire framework is built on top of the Robot Operating System (ROS) [26] and tested in real time with the Toyota HSR robot [27] deployed in crowded public areas of Singapore, including a food court and a wet market. The service robot navigates to clear the waypoints around the mapped indoor area and performs the SDO tasks, which include detecting clusters of people, the space between humans, human interaction poses, and safe-distance measures, and raising warning alerts to commuters who violate the safe-distance rule.

The main contributions of this paper are threefold: (1) the design and development of safe social-distance surveillance with a collaborative multi-robot cleaning system; (2) the development and implementation of a vision-based AI perception algorithm for the robot to generate a heatmap based on the 3D human interaction model; and (3) the development and testing of an adaptive velocity behaviour model for the multi-robot cleaning systems to clean the environment efficiently based on the generated heatmap. The efficiency of the proposed selective cleaning system was assessed with standard performance metrics, and the results are reported.
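For contrast with the 3D human-space model developed in the following sections, the fixed-camera baseline of [20], [21] reduces to pairwise Euclidean distances between bounding-box centroids checked against a fixed pixel threshold. The following minimal sketch illustrates that scheme; the centroid values, the pixels-per-metre scale, and the 1 m rule are illustrative assumptions, not values taken from [21]:

```python
import itertools

# Hypothetical values: centroids of detected person bounding boxes (pixels)
# and an assumed pixel-to-metre scale; [21] fixes such a threshold per camera.
centroids = [(320, 240), (350, 260), (600, 400)]
PIXELS_PER_METRE = 100.0          # assumed calibration constant
SAFE_DISTANCE_M = 1.0             # assumed safe-distancing rule

def violations(points, px_per_m, limit_m):
    """Return index pairs whose centroid distance falls below the limit."""
    limit_px = limit_m * px_per_m
    pairs = []
    for (i, p), (j, q) in itertools.combinations(enumerate(points), 2):
        dist_px = ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
        if dist_px < limit_px:
            pairs.append((i, j))
    return pairs

print(violations(centroids, PIXELS_PER_METRE, SAFE_DISTANCE_M))  # [(0, 1)]
```

As noted above, such a 2D pixel threshold ignores body volume and facing direction, which is precisely what motivates the 3D asymmetric kernel used in this work.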
FIGURE 1. The framework for human density mapping based on human attributes.
The paper is organized as follows. The context of the application is introduced in Section 2. The methodology of the proposed robot is detailed in Section 3. In Section 4, the HSR platform is presented. The optimal human-attribute estimation methods and the social distancing and density heatmap are validated in Section 5 and Section 6, respectively. The conclusion, together with potential future work, is explored in the last section, Section 7.

II. CONTEXT OF APPLICATION
Figure 1 depicts the workflow used in the proposed framework to map human density based on the human attributes detected by the service robot. The generated heatmap is a distribution that highlights the level of interaction between humans. Based on this heatmap distribution, the system identifies locations by their level of human interaction within a pre-built map of the environment. The user can set a threshold that determines the level of interaction at which a warning alert is issued whenever safe-distancing measures are violated. Moreover, the system helps deploy area sterilization by cleaning robot systems that activate adaptive cleaning based on the heatmap levels of each area. The interactive monitoring system has been deployed for trial evaluations at a testbed wet market in Singapore.

III. METHODOLOGY
This software framework is developed on a Toyota HSR robot equipped with an onboard AI-embedded perception system. Unlike traditional approaches that only monitor safe physical distancing based on a person's position from a fixed camera system, this study defines the degree of human-to-human interaction by quantifying social interaction with an asymmetrical Gaussian distribution whose shape is derived from human attributes. Note that the theoretical background for social interaction is based on the survey work of [4], which states that human interaction tends toward the human's facing direction and the space occupied. This fundamentally motivates us to quantify each human by the human space of 3D location, volume, and facing direction. The perception unit of the HSR outputs the human attributes, which include the 3D positions of the human joints in the map, the volume of space occupied by the detected person, and the person's movement direction.

To quantify human-to-human interactions, we propose a distribution kernel whose direction and magnitude are proportional to the attributes of the detected and tracked person, plotted on the map with respect to the person's position. To this end, we extract the color image data from the RGB-depth camera, and human joints are detected and marked by the AI-based OpenPose algorithm [28]. The 3D depth information in the camera frame is used to estimate the person's joint positions, and each position is then converted to the map frame to be tracked while it remains in the field of view of the HSR camera. We divide the pre-built map into grid cells and plot 3D directional distributions of the tracked human positions over the grid cells.

The asymmetrical Gaussian distribution has its peak value at the location where a human is detected and spreads gradually along the human volume and the moving direction of each person. After plotting all the human positions, each cell value in the map is updated by accumulating the human distribution values over the detections. Figure 2 shows how the proposed framework quantitatively models the interaction of three persons at the same distance: because they have different attributes, the interaction between them differs.

IV. HSR ROBOT PLATFORM
A. OVERVIEW OF HSR SYSTEM ARCHITECTURE
The Human Support Robot (HSR), as shown in Figure 3, is a research platform developed by Toyota Ltd. Co. HSR has been implemented in multiple applications such as
By taking the average of the accuracy over every joint, we filter out false detections.

FIGURE 5. Human skeleton by Openpose.

2) 3D JOINT TRACKING
DeepSORT is an improved version of the Simple Online and Real-time Tracking (SORT) algorithm. The DeepSORT tracking framework was built using a hypothesis-tracking technique with a Kalman filtering algorithm and a deep-learning-based association metric (hence Deep SORT). Further, the Hungarian algorithm is utilized to resolve the uncertainty between the estimated Kalman state and the newly received measurement. The tracking algorithm uses appearance data to improve the performance of DeepSORT [40], [41]. In this work, the 3D joint coordinates of human detections obtained from the aligned RGB and depth frames are fed into the modified DeepSORT network for tracking human movements. We then retrained, via transfer learning, the original DeepSORT object tracking algorithm, which was initially designed for 2D object tracking, to track the detected 3D joints. Based on the bounding-box coordinates and object appearance, DeepSORT assigns an ID to each human detection and performs the tracking in 3D camera-frame coordinates.
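The association step described above pairs Kalman-predicted track positions with new detections. The following is a minimal sketch of that data-association core, assuming SciPy is available and using 3D joint centroids as the track state; the Euclidean cost matrix, gating distance, and function names are illustrative, not the paper's implementation:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

GATE_M = 0.8  # assumed gating distance in metres; pairs beyond this stay unmatched

def associate(predicted, detected, gate=GATE_M):
    """Match Kalman-predicted track centroids to detected 3D joint centroids.

    predicted, detected: (N, 3) and (M, 3) arrays of [x, y, z] in metres.
    Returns (matches, unmatched_tracks, unmatched_detections).
    """
    if len(predicted) == 0 or len(detected) == 0:
        return [], list(range(len(predicted))), list(range(len(detected)))
    # Pairwise Euclidean distances form the assignment cost matrix.
    cost = np.linalg.norm(predicted[:, None, :] - detected[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)  # Hungarian algorithm
    matches = [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= gate]
    matched_r = {r for r, _ in matches}
    matched_c = {c for _, c in matches}
    return (matches,
            [r for r in range(len(predicted)) if r not in matched_r],
            [c for c in range(len(detected)) if c not in matched_c])

# Hypothetical example: two existing tracks, two new detections.
tracks = np.array([[1.0, 0.5, 2.0], [3.0, 1.0, 2.5]])
dets = np.array([[2.9, 1.1, 2.4], [1.1, 0.4, 2.1]])
print(associate(tracks, dets))  # track 0 -> det 1, track 1 -> det 0
```

In full DeepSORT, the cost additionally blends the Mahalanobis distance from the Kalman state with an appearance (deep cosine) metric [40], [41]; the plain Euclidean cost here is a simplification.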
FIGURE 6. In the left image, a person faces the camera directly, with the nose tracked by OpenPose; in the right image, a person faces away from the camera, with no nose joint.
C. HUMAN DIRECTION AND FACING
The human direction is a vector whose angle θ is set orthogonal to the vector linking the left and right shoulder joints. Once we estimate the positions of the left and right shoulder joints, we find the angle of the vector formed by those two joints. To deal with situations where the human faces toward or away from the camera, we classify the detected human direction into front-facing and rear-facing cases. The formula in Equation 4 is then used to find the direction when the detected human nose joint exists, and Equation 5 when it does not. Given that the left and right shoulders have tracked coordinates (x_ls, y_ls, z_ls) and (x_rs, y_rs, z_rs), the orientation value gives the angle of the line in the clockwise direction. Nevertheless, this angle can sometimes be wrong because the shoulders can only be detected within an interval of 180 degrees. To enhance the accuracy, using the known camera angle with respect to the tracked 3D human joints, we can estimate whether the human is facing the camera.

OpenPose tries to detect every joint in the human body. If a joint cannot be detected, that is, if its mean probability is lower than the threshold, OpenPose fills every column of the respective joint with an empty value such as null. This behavior is essential because, if the human faces away from the camera, there will be no face-related joints in the joint array. Taking that as a reference, we can tell whether the human is facing toward or away from the camera, so we modify the formula as follows, taking into account that 0 degrees corresponds to facing away from the camera.

If Joint 0 (Nose) exists:
θ = (atan2(|y_ls − y_rs|, |x_ls − x_rs|) × 180/π) + 180 (4)
else:
θ = atan2(|y_ls − y_rs|, |x_ls − x_rs|) × 180/π (5)
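A minimal sketch of this front/rear classification and of Equations 4 and 5 follows, assuming the shoulder and nose keypoints have already been extracted from the OpenPose output; the joint-dictionary layout and the confidence threshold are illustrative assumptions:

```python
import math

CONF_THRESHOLD = 0.3  # assumed minimum OpenPose keypoint confidence

def human_direction(joints):
    """Estimate facing angle (degrees) from shoulder joints, Eqs. (4)-(5).

    joints: dict like {'nose': (x, y, z, conf), 'l_shoulder': ..., 'r_shoulder': ...},
    with None when OpenPose could not detect the joint.
    """
    xls, yls, _, _ = joints['l_shoulder']
    xrs, yrs, _, _ = joints['r_shoulder']
    base = math.atan2(abs(yls - yrs), abs(xls - xrs)) * 180.0 / math.pi
    nose = joints.get('nose')
    if nose is not None and nose[3] >= CONF_THRESHOLD:
        return base + 180.0   # Eq. (4): nose visible -> facing the camera
    return base               # Eq. (5): no nose -> facing away (0 degrees)

# Hypothetical detections: one person facing the camera, one facing away.
front = {'nose': (0.02, -0.40, 2.0, 0.9),
         'l_shoulder': (0.20, 0.00, 2.0, 0.8),
         'r_shoulder': (-0.20, 0.01, 2.0, 0.8)}
back = {'nose': None,
        'l_shoulder': (0.20, 0.00, 2.5, 0.7),
        'r_shoulder': (-0.20, 0.02, 2.5, 0.7)}
print(round(human_direction(front)), round(human_direction(back)))  # ~181 and ~3
```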
VI. SOCIAL DISTANCING AND DENSITY HEAT MAP
A. DISTRIBUTION KERNEL
An asymmetric Gaussian integral function used for social robot interaction in [42] was deployed to calculate the distribution kernel. Typically, a 2-dimensional Gaussian distribution is symmetric along the x and y axes. However, to formulate a distribution function for personal space between people, a 2-dimensional asymmetric Gaussian function is necessary; this can be achieved with a shared σx and differing σy values.

ALGORITHM 1: Asymmetric Gaussian Based Heatmap Generation
1: α ← atan2(y − yc, x − xc) − θ + π/2
2: Normalize(α)
3: a ← (cos θ)²/(2σ²) + (sin θ)²/(2σs²)
4: b ← sin(2θ)/(4σ²) − sin(2θ)/(4σs²)
5: c ← (sin θ)²/(2σ²) + (cos θ)²/(2σs²)
6: return exp(−(a(x − xc)² + 2b(x − xc)(y − yc) + c(y − yc)²))

Algorithm 1 explains the computation of an arbitrarily rotated asymmetric Gaussian function at a detected human location (x, y). The following notation is used: θ is the direction of the distribution, taken from the human's estimated orientation; σh is the variance in the θ direction; σs is the variance to the sides (the θ ± π/2 directions); and σr is the variance to the back (the −θ direction). The parameters σh, σs, σr are set proportional to the detected human volume. Lines 1 and 2 calculate the normalized angle α of the point of interest relative to the human's facing direction; α points along the side of the function when 0 < α < π. The 2D Gaussian coefficients in Lines 3–5 are then evaluated for the point of interest (x, y), with σ chosen between σh and σr according to whether the point lies in front of or behind the person. In the case of α = 0, the point of interest is located directly to the side of the function center and depends only on σs. Figure 7 displays some views of an asymmetric Gaussian cost function as given in Equation 8. This function has a rotation of θ = π/6, is centered at (0, 0), and has variances σh = 2.0, σs = 4/3, and σr = 1. The maximum cost of this function is 1, at the center of the distribution.

f(x, y) = exp(−(a(x − xc)² + 2b(x − xc)(y − yc) + c(y − yc)²)) (8)

Figure 8 presents an example of a human detected at the origin (0, 0) facing upward, with θ = π rad, σs = 13.33, σr = 10.00, and σh = 20.00.

FIGURE 7. Asymmetric Gaussian distribution views located at (0, 0), oriented by θ = π/6 rad, and with variances σh = 2, σs = 4/3, and σr = 1.

B. HEAT MAP GENERATION
The heat map generation consists of the following steps:
1) OpenPose detects the humans and gives the joints of the skeleton.

FIGURE 9. Two persons facing opposite directions, and two persons facing each other.
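A minimal sketch of Algorithm 1 and of the grid accumulation described in Section III, written in plain NumPy; the grid resolution, the σ values, the front/back σ selection rule (taken from the Kirby-style kernel of [42]), and the alert threshold are illustrative assumptions, not the deployed parameters:

```python
import numpy as np

def asymmetric_gaussian(xx, yy, xc, yc, theta, sigma_h, sigma_s, sigma_r):
    """Algorithm 1: rotated asymmetric Gaussian centered at (xc, yc)."""
    alpha = np.arctan2(yy - yc, xx - xc) - theta + np.pi / 2.0
    alpha = (alpha + np.pi) % (2.0 * np.pi) - np.pi       # normalize to [-pi, pi)
    # sigma is sigma_h in front of the person, sigma_r behind (assumed, per [42]).
    sigma = np.where(alpha <= 0.0, sigma_r, sigma_h)
    a = np.cos(theta) ** 2 / (2 * sigma ** 2) + np.sin(theta) ** 2 / (2 * sigma_s ** 2)
    b = np.sin(2 * theta) / (4 * sigma ** 2) - np.sin(2 * theta) / (4 * sigma_s ** 2)
    c = np.sin(theta) ** 2 / (2 * sigma ** 2) + np.cos(theta) ** 2 / (2 * sigma_s ** 2)
    return np.exp(-(a * (xx - xc) ** 2 + 2 * b * (xx - xc) * (yy - yc)
                    + c * (yy - yc) ** 2))                # Eq. (8), peak value 1

# Assumed 20 m x 20 m pre-built map discretized into 0.1 m grid cells.
xs = np.arange(0.0, 20.0, 0.1)
xx, yy = np.meshgrid(xs, xs)
heatmap = np.zeros_like(xx)

# Hypothetical tracked people: (x, y, theta, sigma_h, sigma_s, sigma_r).
people = [(8.0, 10.0, 0.0, 2.0, 4 / 3, 1.0),
          (9.5, 10.0, np.pi, 2.0, 4 / 3, 1.0)]
for xc, yc, theta, sh, ss, sr in people:
    heatmap += asymmetric_gaussian(xx, yy, xc, yc, theta, sh, ss, sr)

ALERT_THRESHOLD = 1.2  # assumed interaction level that triggers a warning
print(heatmap.max(), (heatmap > ALERT_THRESHOLD).sum(), "cells above threshold")
```

Cells whose accumulated value exceeds the user-set threshold mark likely interaction hot spots, which drive both the warning alerts and the selective sterilization scheme.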
Travel time and disinfection liquid spent were recorded during the five trials for each testbed map. Table 1 reports the averaged comparison results across the testbed layouts during the trials. Per run, the selective method with the heatmap saves about 19%, 20%, and 16% of the time spent and 15%, 19%, and 12% of the disinfection liquid, respectively.
VIII. CONCLUSION
COVID-19 is the third pandemic of the 21st century. The COVID-19 pandemic spreads easily among people in close proximity, especially in crowds of mobile individuals (e.g., in a food court or wet market). The proposed social-distance monitoring and selective sterilization strategy has validated its efficiency in real public environments. The proposed system is an initial work deploying an adaptive multi-robot cleaning strategy, based on coverage path planning, that works in synergy with the human interaction heatmap generated by the safe social-distance monitoring system.

Our future work will focus on: redesigning the long-term autonomy framework; implementing autonomous path generation to re-clean parts of the surface according to the generated heatmap; and developing an optimization algorithm to control the generated heatmap so as to reduce the running time. Since deploying the system at a public food court in Tampines, Singapore, requires particular setups, comparisons between surveillance systems will also be considered as future work.
ACKNOWLEDGMENT
The authors would like to thank Jia Yin for insightful comments and discussion.
REFERENCES
[1] V. Chamola, V. Hassija, V. Gupta, and M. Guizani, "A comprehensive review of the COVID-19 pandemic and the role of IoT, drones, AI, blockchain, and 5G in managing its impact," IEEE Access, vol. 8, pp. 90225–90265, 2020.
[2] M. Saez, A. Tobias, D. Varga, and M. A. Barceló, "Effectiveness of the measures to flatten the epidemic curve of COVID-19. The case of Spain," Sci. Total Environ., vol. 727, Jul. 2020, Art. no. 138761.
[3] M. Gupta, M. Abdelsalam, and S. Mittal, "Enabling and enforcing social distancing measures using smart city and its infrastructures: A COVID-19 use case," 2020, arXiv:2004.09246. [Online]. Available: https://arxiv.org/abs/2004.09246
[4] E. T. Hall, The Hidden Dimension, vol. 609. Garden City, NY, USA: Doubleday, 1966.
[5] S. Weitz, Nonverbal Communication: Readings With Commentary. New York, NY, USA: Oxford Univ. Press, 1979.
[6] P. Mishra, "Proxemics: Theory and research," Perspect. Psychol. Res., vol. 6, pp. 10–15, 1983.
[7] P. Veerajagadheswar, K. Ping-Cheng, M. R. Elara, A. V. Le, and M. Iwase, "Motion planner for a tetris-inspired reconfigurable floor cleaning robot," Int. J. Adv. Robot. Syst., vol. 17, no. 2, 2020, Art. no. 1729881420914441.
[8] A. V. Le, P. T. Kyaw, R. E. Mohan, S. H. M. Swe, A. Rajendran, K. Boopathi, and N. H. K. Nhan, "Autonomous floor and staircase cleaning framework by reconfigurable sTetro robot with perception sensors," J. Intell. Robot. Syst., vol. 101, no. 1, pp. 1–19, Jan. 2021.
[9] J. K. Burgoon, D. B. Buller, and W. G. Woodall, Nonverbal Communication: The Unspoken Dialogue. New York, NY, USA: Harpercollins College Division, 1989.
[10] J. R. Aiello and D. E. Thompson, "Personal space, crowding, and spatial behavior in a cultural context," in Environment and Culture. Boston, MA, USA: Springer, 1980, pp. 107–178.
[11] Y. Nakauchi and R. Simmons, "A social robot that stands in line," Auton. Robots, vol. 12, no. 3, pp. 313–324, 2002.
[12] J. W. Burgess, "Interpersonal spacing behavior between surrounding nearest neighbors reflects both familiarity and environmental density," Ethol. Sociobiol., vol. 4, no. 1, pp. 11–17, Jan. 1983.
[13] M. M. Islam, S. Mahmud, L. J. Muhammad, M. R. Islam, S. Nooruddin, and S. I. Ayon, "Wearable technology to assist the patients infected with novel coronavirus (COVID-19)," Social Netw. Comput. Sci., vol. 1, no. 6, p. 320, Nov. 2020.
[14] A. A. R. Alsaeedy and E. K. P. Chong, "Detecting regions at risk for spreading COVID-19 using existing cellular wireless network functionalities," IEEE Open J. Eng. Med. Biol., vol. 1, pp. 187–189, 2020.
[15] U. Stenz, J. Hartmann, J.-A. Paffenholz, and I. Neumann, "High-precision 3D object capturing with static and kinematic terrestrial laser scanning in industrial applications—Approaches of quality assessment," Remote Sens., vol. 12, no. 2, p. 290, Jan. 2020.
[16] A. V. Le, R. Parween, P. T. Kyaw, R. E. Mohan, T. H. Q. Minh, and C. S. C. S. Borusu, "Reinforcement learning-based energy-aware area coverage for reconfigurable hRombo tiling robot," IEEE Access, vol. 8, pp. 209750–209761, 2020.
[17] A. V. Le, P. Veerajagadheswar, P. T. Kyaw, M. A. V. J. Muthugala, M. R. Elara, M. Kumar, and N. H. K. Nhan, "Towards optimal hydro-blasting in reconfigurable climbing system for corroded ship hull cleaning and maintenance," Expert Syst. Appl., vol. 170, May 2021, Art. no. 114519.
[18] S. K. Udgata and N. K. Suryadevara, COVID-19, Sensors, and Internet of Medical Things (IoMT). Singapore: Springer, 2021, pp. 39–53.
[19] C. A. S. Pouw, F. Toschi, F. van Schadewijk, and A. Corbetta, "Monitoring physical distancing for crowd management: Real-time trajectory and group analysis," PLoS ONE, vol. 15, no. 10, Oct. 2020, Art. no. e0240963.
[20] D. Yang, E. Yurtsever, V. Renganathan, K. A. Redmill, and Ü. Özgüner, "A vision-based social distancing and critical density detection system for COVID-19," 2020, arXiv:2007.03578. [Online]. Available: http://arxiv.org/abs/2007.03578
[21] I. Ahmed, M. Ahmad, J. J. P. C. Rodrigues, G. Jeon, and S. Din, "A deep learning-based social distance monitoring framework for COVID-19," Sustain. Cities Soc., vol. 65, Feb. 2021, Art. no. 102571.
[22] M. Antony, M. Parameswaran, N. Mathew, S. V. S. J. Joseph, and C. M. Jacob, "Design and implementation of automatic guided vehicle for hospital application," in Proc. 5th Int. Conf. Commun. Electron. Syst. (ICCES), Jun. 2020, pp. 1031–1036.
[23] Z. H. Khan, A. Siddique, and C. W. Lee, "Robotics utilization for healthcare digitization in global COVID-19 management," Int. J. Environ. Res. Public Health, vol. 17, no. 11, p. 3819, May 2020. [Online]. Available: https://www.mdpi.com/1660-4601/17/11/3819
[24] T. W. Teng, P. Veerajagadheswar, B. Ramalingam, J. Yin, R. E. Mohan, and B. F. Gómez, "Vision based wall following framework: A case study with HSR robot for cleaning application," Sensors, vol. 20, no. 11, p. 3298, Jun. 2020. [Online]. Available: https://www.mdpi.com/1424-8220/20/11/3298
[25] K. P. Cheng, R. E. Mohan, N. H. K. Nhan, and A. V. Le, "Multi-objective genetic algorithm-based autonomous path planning for Hinged-Tetro reconfigurable tiling robot," IEEE Access, vol. 8, pp. 121267–121284, 2020.
[26] M. Quigley, K. Conley, B. Gerkey, J. Faust, T. Foote, J. Leibs, R. Wheeler, and A. Y. Ng, "ROS: An open-source robot operating system," in Proc. ICRA Workshop Open Source Softw., Kobe, Japan, 2009, vol. 3, nos. 3–2, p. 5.
[27] U. Yamaguchi, F. Saito, K. Ikeda, and T. Yamamoto, "HSR, human support robot as research and development platform," in Proc. Abstr. Int. Conf. Adv. Mechatronics, Toward Evol. Fusion IT Mechatronics (ICAM), 2015, pp. 39–40.
[28] Z. Cao, G. Hidalgo, T. Simon, S.-E. Wei, and Y. Sheikh, "OpenPose: Realtime multi-person 2D pose estimation using part affinity fields," 2018, arXiv:1812.08008. [Online]. Available: http://arxiv.org/abs/1812.08008
[29] J. Yin, K. G. S. Apuroop, Y. K. Tamilselvam, R. E. Mohan, B. Ramalingam, and A. V. Le, "Table cleaning task by human support robot using deep learning technique," Sensors, vol. 20, no. 6, p. 1698, Mar. 2020.
[30] A. Le, P.-C. Ku, T. T. Tun, N. H. K. Nhan, Y. Shi, and R. Mohan, "Realization energy optimization of complete path planning in differential drive based self-reconfigurable floor cleaning robot," Energies, vol. 12, no. 6, p. 1136, Mar. 2019.
[31] A. V. Le, N. H. K. Nhan, and R. E. Mohan, "Evolutionary algorithm-based complete coverage path planning for tetriamond tiling robots," Sensors, vol. 20, no. 2, p. 445, Jan. 2020.
[32] A. Le, V. Prabakaran, V. Sivanantham, and R. Mohan, "Modified a-star algorithm for efficient coverage path planning in Tetris inspired self-reconfigurable robot with integrated laser sensor," Sensors, vol. 18, no. 8, p. 2585, Aug. 2018.
[33] A. V. Le, A. A. Hayat, M. R. Elara, N. H. K. Nhan, and K. Prathap, "Reconfigurable pavement sweeping robot and pedestrian cohabitant framework by vision techniques," IEEE Access, vol. 7, pp. 159402–159414, 2019.
[34] L. Yi, A. V. Le, A. A. Hayat, C. S. C. S. Borusu, R. E. Mohan, N. H. K. Nhan, and P. Kandasamy, "Reconfiguration during locomotion by pavement sweeping robot with feedback control from vision system," IEEE Access, vol. 8, pp. 113355–113370, 2020.
[35] Y. Shi, M. R. Elara, A. V. Le, V. Prabakaran, and K. L. Wood, "Path tracking control of self-reconfigurable robot hTetro with four differential drive units," IEEE Robot. Autom. Lett., vol. 5, no. 3, pp. 3998–4005, Jul. 2020.
[36] M. A. V. J. Muthugala, A. V. Le, E. S. Cruz, M. R. Elara, P. Veerajagadheswar, and M. Kumar, "A self-organizing fuzzy logic classifier for benchmarking robot-aided blasting of ship hulls," Sensors, vol. 20, no. 11, p. 3215, Jun. 2020.
[37] A. K. Lakshmanan, R. E. Mohan, B. Ramalingam, A. V. Le, P. Veerajagadeshwar, K. Tiwari, and M. Ilyas, "Complete coverage path planning using reinforcement learning for tetromino based cleaning and maintenance robot," Autom. Construct., vol. 112, Apr. 2020, Art. no. 103078.
[38] A. V. Le, R. Parween, R. E. Mohan, N. H. K. Nhan, and R. E. Abdulkader, "Optimization complete area coverage by reconfigurable hTrihex tiling robot," Sensors, vol. 20, no. 11, p. 3170, Jun. 2020.
[39] S. Kohlbrecher, O. von Stryk, J. Meyer, and U. Klingauf, "A flexible and scalable SLAM system with full 3D motion estimation," in Proc. IEEE Int. Symp. Saf., Secur., Rescue Robot., Nov. 2011, pp. 155–160.
[40] N. Wojke, A. Bewley, and D. Paulus, "Simple online and realtime tracking with a deep association metric," in Proc. IEEE Int. Conf. Image Process. (ICIP), Sep. 2017, pp. 3645–3649.
[41] N. Wojke and A. Bewley, "Deep cosine metric learning for person re-identification," in Proc. IEEE Winter Conf. Appl. Comput. Vis. (WACV), Mar. 2018, pp. 748–756.
[42] R. Kirby, "Social robot navigation," Ph.D. dissertation, Robot. Inst., Carnegie Mellon Univ., Pittsburgh, PA, USA, May 2010.

ANH VU LE received the B.S. degree in electronics and telecommunications from the Hanoi University of Technology, Vietnam, in 2007, and the Ph.D. degree in electronics and electrical from Dongguk University, South Korea, in 2015. He is currently with the Optoelectronics Research Group, Faculty of Electrical and Electronics Engineering, Ton Duc Thang University, Ho Chi Minh City, Vietnam. He is also a Postdoctoral Research Fellow with ROAR Laboratory, Singapore University of Technology and Design. His current research interests include robotics vision, robot navigation, human detection, action recognition, feature matching, and 3D video processing.

TRAN HOANG QUANG MINH received the B.S. degree in electronics and telecommunications from the Hanoi University of Technology, Vietnam, in 2007, and the Ph.D. degree in electronics and electrical from Dongguk University, South Korea, in 2015. He is currently with the Optoelectronics Research Group, Faculty of Electrical and Electronics Engineering, Ton Duc Thang University, Ho Chi Minh City, Vietnam. He is also a Postdoctoral Research Fellow with ROAR Laboratory, Singapore University of Technology and Design. His current research interests include robotics vision, robot navigation, human detection, action recognition, feature matching, and 3D video processing.

BRAULIO FÉLIX GÓMEZ received the bachelor's degree in mechatronics and the bachelor's degree in informatics engineering from the Technological Institute of Los Mochis, Mexico, in 2018 and 2019, respectively. He is a Research Assistant with ROAR Laboratory, Singapore University of Technology and Design (SUTD). His research interests include robotics vision, robotics control, and AI-based robotics.

RAJESH ELARA MOHAN received the B.E. degree from Bharathiar University, India, in 2003, and the M.Sc. and Ph.D. degrees from Nanyang Technological University, in 2005 and 2012, respectively. He is currently an Assistant Professor with the Engineering Product Development Pillar, Singapore University of Technology and Design. He is also a Visiting Faculty Member with the International Design Institute, Zhejiang University, China. He has published over 80 papers in leading journals, books, and conferences. His research interests include robotics with an emphasis on self-reconfigurable platforms and research problems related to robot ergonomics and autonomous systems. He was a recipient of the SG Mark Design Award in 2016 and 2017, the ASEE Best of Design in Engineering Award in 2012, and the Tan Kah Kee Young Inventors' Award in 2010.