
Robotics and Autonomous Systems 161 (2023) 104339
https://doi.org/10.1016/j.robot.2022.104339

Efficient surface detection for assisting Collaborative Robots



Simranjit Singh a,*, Mohit Sajwan a, Gurbhej Singh b, Anil Kumar Dixit c, Amrinder Mehta d

a School of Computer Science Engineering and Technology, Bennett University, Greater Noida, U.P., India
b Department of Mechanical Engineering, Amritsar Group of Colleges, Amritsar, Punjab, India
c Division of Research & Innovation, Uttaranchal University, Dehradun, Uttarakhand, India
d Division of Research and Development (DRD), Lovely Professional University, Phagwara, Punjab, India

* Corresponding author. E-mail address: [email protected] (S. Singh).

Article info

Article history: Available online 12 December 2022

Keywords: COBOTs; Robotics; Deep learning; CNN; LSTM

Abstract

Collaborative robots need to read the surfaces they are walking on to keep their dynamic equilibrium, regardless of whether the ground is flat or uneven. Although accelerometers are frequently employed for this task, previous efforts have centered on retrofitting quadruped robots with new sensors. A second approach is to collect large numbers of samples for machine learning algorithms, which is not widely implemented. Learning-based approaches have altered the traditional way of data analytics, and advanced deep learning algorithms provide better accuracy and prove more efficient when the data size is large. This paper introduces a novel Convolutional Neural Network architecture, a deep learning-based approach for efficiently classifying the surface on which a robot is walking. The dataset contains readings captured by Inertial Measurement Unit (IMU) sensors. The proposed model achieves an overall classification accuracy of 88%. The proposed architecture is compared with existing deep and machine learning techniques to show its effectiveness, and it can be installed on a collaborative robot's onboard processor to identify surfaces effectively.

© 2022 Elsevier B.V. All rights reserved.

1. Introduction

Depending on the specific use case, the term "collaborative robot" (or "cobot") can have a variety of different connotations. Traditional robots have not been constructed with human cooperation in mind [1,2]; a robot that physically interacts with humans in a collaborative work environment is one common definition of a cobot [2,3]. Cobotics is a coined word that combines collaboration with robotics; it was first used by Edward Colgate and Michael Peshkin [4] to envisage the direct connection between a robot and a person on a processor [5,6]. The discipline of cobotics refers to the methods used to create, analyze, and assess cobotic systems, specifically a collaborative workspace in which a robot and a human worker are integrated [6].

To provide a flexible production environment of humans and robots [7], and to aid people in completing jobs by decreasing physical effort and cognitive stress [8], industrial cobots are built for direct actuation with human coworkers. Industrial cobots are also utilized to assist employees with lifting, moving, and monitoring production workloads and assembly lines. In addition to helping out human operators [9], they can set down cargo in a brisk, accurate, and risk-free manner [10]. Many businesses have started using these kinds of technologies [11], including those dealing with food processing, aerospace exploration, healthcare, automobiles, building, biomanufacturing [12,13], and assembly.

Collaborative robots (cobots) [14] are employed in many different domains for assisting humans. They are a logical progression that may solve current issues in manufacturing and assembly jobs, and they are meant to be readily reprogrammed, even by non-experts, to play diverse roles in a constantly growing workflow [15]. By combining the capabilities of a human worker (judgment, reaction time, and planning) with those of a robot (repetition and strength), cobots are considered a viable option to boost output while cutting manufacturing costs.

Cobots can also be employed for mobile service. Veloso et al. [16] deployed them in a multi-floor building, where the bots provided robust real-time autonomous localization [17] based on WiFi data [18] and depth camera information [19]. Veloso et al. [20] also developed a cobot capable of providing mobile service to users, employing non-Markovian localization to handle challenging environments. The authors deployed multiple bots to check the robustness of the system; these bots can ask humans for help if they are unable to perform some tasks. Robotics also played a very important role in the COVID-19 pandemic: during the recent coronavirus epidemic, the Belgian firm ZoraBots created an interactive robot called CRUZR [21], which has been deployed in hospitals and other regions as a first-line controller. The robot can greet guests, take their temperature, and engage them in conversation in any of the 53 supported languages. It is capable of learning on the job,
automating mundane tasks, and freeing up medical professionals to focus on patient care. Su et al. [22] presented a smart robot with the ability to diagnose skin conditions using the robot's built-in camera. Patients can be informed of the diagnosis via spoken and textual outputs, and the robot can save data in the cloud, which can be used to expand telemedicine's reach.

Hoepflinger et al. [23] used measurements of ground contact force and currents in the joints to extrapolate information about the terrain's shape and topography. A multiclass AdaBoost machine learning technique was used for classification, and the results were promising. The stresses and torques of a six-legged walking robot were utilized in another terrain classification study [24], and the system correctly identified five surfaces 76% of the time.

In some environments, a legged robot might be more useful than one with wheels, but the robot's gait would need to be modified to accommodate the surface it is walking on. Information about the surface material, or fundamental features like stiffness, might help with this modification. State-of-the-art approaches to this issue include cameras, accelerometers, and force sensors in the knees and ankles.

Accelerometers record linear accelerations in the three axes, and software may infer additional characteristics from the signals' temporal fluctuations. Adding an Inertial Measurement Unit (IMU) to AIBO and applying the well-known Kalman filter to a MEMS sensor to assess the robot's position during locomotion was the method employed by Marcinkiewicz et al. [25]. In Huang et al.'s [26] work, the directional bias of an accelerometer was calculated by integrating the accelerometer's x and y measurements. Sinha and Bajcsi [27] found that leg slippage was a surface property by using an accelerometer linked to the robot's toes.

Unlike competing high-end IMUs, Sony's AIBO lacks a gyroscope to complement its accelerometer for more precise tilt computations; this presents a problem for the reliable detection of body oscillations when walking. In order to build AIBO's surface detection capabilities, Vail and Veloso [24] built a C4.5 decision tree classifier. While their approach did not require human adjustment, it did necessitate a huge sample database and a slower offline learning process. With the use of AIBO's built-in accelerometer, Chen et al. [28] created an artificial vestibular system to evaluate real-time posture, slope, and surface. The Oscillation Power (OP) characteristic, derived from the high-pass filtered accelerometer data, was used for their surface detection. Even though the OP descriptor has been developed, a thorough examination has yet to be conducted.

Surface detection was accomplished in previous research with other robots using motor currents, accelerometers, or ground contact forces, whereas early experiments with AIBO solely employed accelerometers. This work presents a state-of-the-art novel Convolutional Neural Network which effectively classifies various surfaces based on the input data. The dataset is prepared with the help of IMU sensors. Chen et al. [28] did not perform any machine learning-based experimentation for building their model, and Vail and Veloso [24] used C4.5 decision trees without a more thorough evaluation of available approaches. The proposed approach is compared against other deep learning and machine learning algorithms to show its effectiveness.

1.1. Contribution

Existing works implemented surface detection based on accelerometers, motor currents, or ground contact forces. This paper presents a novel CNN model, a deep learning-based framework that efficiently classifies various surfaces. The model employs two convolutional layers with 3 × 3 kernel size and one max-pooling layer with 2 × 2 kernel size. The proposed method is compared to multiple deep learning and machine learning methods to demonstrate its efficacy.

2. Background

2.1. Dataset

This study uses the surface detection dataset developed and made available by Tampere University [29]. Robots require environmental awareness for successful navigation and interaction; this method takes raw sensor data and distills it into usable information for more complex activities like navigation and route planning. The sensors may collect both visual and temporal information.

The task is the recognition of floor surfaces from IMU data. The information is gleaned from the IMU sensor readings of a small mobile robot driven across various floor surfaces at Tampere University. The readings, spanning ten sensor channels (acceleration, angular velocity, etc.), are used to categorize surfaces into nine types (carpet, tiles, concrete, etc.).
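To make the data layout concrete, the following minimal Python sketch shows how such a dataset can be arranged for a 2D CNN. The file names are illustrative assumptions; the shapes follow the sample dimensions reported in Section 4.1.

import numpy as np

# Illustrative assumption: the IMU recordings are available as NumPy arrays.
# X holds 3408 samples of 10 channels x 128 timesteps (see Section 4.1);
# y holds integer surface labels in the range 0..8 (nine surface types).
X = np.load("tau_surface_X.npy")   # shape: (3408, 10, 128)
y = np.load("tau_surface_y.npy")   # shape: (3408,)

# A 2D CNN expects a trailing channel axis, so each sample is treated
# as a 10 x 128 single-channel "image".
X = X.reshape((-1, 10, 128, 1))

# One-hot encode the nine classes for the categorical cross-entropy loss.
y_onehot = np.eye(9)[y]
print(X.shape, y_onehot.shape)     # (3408, 10, 128, 1) (3408, 9)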
2.2. Convolutional Neural Network

When given visual input data, a Convolutional Neural Network (CNN) establishes meaningful connections between its neurons. As long as the filters are used effectively, a CNN may learn spatial and temporal relationships. As demonstrated in Fig. 1, a convolution operation [30] is carried out by applying the appropriate filter to an image and multiplying the coinciding values of the filter and the image, summing the results.

Fig. 1. Convolution operation.

ConvNets are modeled after the neural connections in the human brain and take inspiration from the anatomy of the visual cortex. Each neuron's sensitivity to the visible world is measured by its "Receptive Field"; multiple such fields overlap to cover the whole viewing area.

By applying the right filters, a ConvNet may accurately capture the spatial and temporal connections contained in an image. With fewer parameters to tune and the flexibility to reuse weights, this architecture provides a more precise fit to an image dataset; that is to say, the network may be taught to recognize more subtleties in the image. Popular implementations of CNNs are LeNet, AlexNet (shown in Fig. 2), etc. Convolutional networks consist of convolutional layers, kernels, pooling layers, and fully connected layers.

Fig. 2. Popular architecture of convolutional neural network.
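The multiply-and-accumulate just described can be written directly in NumPy. This is a minimal sketch of a "valid" 2D convolution for illustration, not the implementation used in the paper.

import numpy as np

def conv2d_valid(image, kernel):
    # Slide the kernel over the image; at each position, multiply the
    # coinciding values and sum them, as illustrated in Fig. 1.
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.array([[1.0, 0.0], [0.0, -1.0]])
print(conv2d_valid(image, kernel).shape)   # (4, 4)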
Convolution Layers

A convolutional layer convolves the input, and the output is passed on to the following layer; this is analogous to how neurons in the visual cortex react to external input [9]. Each convolutional neuron only uses its receptive field to process information. Learning features and labeling data is also possible using fully connected feedforward networks, but for images even a shallow architecture needs a lot of neurons, since each pixel stands in for a separate input feature. Regularizing weights across a smaller set of parameters allows backpropagation to proceed without encountering the vanishing and exploding gradient problems that plague traditional neural networks [31,32].

Pooling Layer

Convolutional network architectures may also include pooling layers, both local and global. Pooling layers reduce the number of dimensions in the data by combining the outputs of several neurons in a lower layer into a single neuron in a higher layer. Small tiling sizes, such as 2 by 2, are typically employed when using local pooling to merge clusters; global pooling acts on all of the feature map's neurons [33,34]. The most popular kinds of pooling are maximum and average: max pooling takes the maximum value from each local cluster of neurons in the feature map, and average pooling takes the average.
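The two variants can be contrasted with a short NumPy sketch (non-overlapping 2 × 2 windows assumed):

import numpy as np

def pool2d(feature_map, size=2, mode="max"):
    # Downsample with non-overlapping size x size windows, keeping either
    # the maximum or the average of each local cluster of neurons.
    h, w = feature_map.shape
    fm = feature_map[:h - h % size, :w - w % size]   # trim to a multiple of size
    blocks = fm.reshape(h // size, size, w // size, size)
    return blocks.max(axis=(1, 3)) if mode == "max" else blocks.mean(axis=(1, 3))

fm = np.array([[1., 3., 2., 0.],
               [4., 6., 1., 1.],
               [0., 2., 5., 7.],
               [1., 1., 8., 6.]])
print(pool2d(fm, mode="max"))   # [[6. 2.] [2. 8.]]
print(pool2d(fm, mode="avg"))   # [[3.5 1. ] [1.  6.5]]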
Fully Connected Layer

All neurons in one layer are linked to those in the next layer in a fully connected structure; this is the same as a classical neural network built using multilayer perceptrons. The flattened feature matrix passes through one or more fully connected layers to sort the inputs into categories.

Weights

Each neuron in a neural network has a receptive field that receives values from the layer above it, and it uses this information to calculate an output value based on a function. A bias and a weight vector decide which function is applied to the incoming data (typically real numbers). Adjusting these weights and biases over time is part of what it means to learn.

3. Proposed CNN architecture

In this section, the novel CNN architecture is proposed, as shown in Fig. 3. The architecture employs two convolutional layers with 3 × 3 kernel size and one max-pooling layer with 2 × 2 kernel size. It is typical to stick with a kernel size of 3 × 3 or 5 × 5; larger kernel sizes are usually reserved for the first convolutional layer, where the small number of input channels makes their cost less of an issue. As a method for down-sampling feature maps, pooling layers provide a summary of feature presence in regions of the feature map. The two most frequent types of pooling are average pooling and max pooling, which summarize the average and maximum occurrence of a feature, respectively. We employed max pooling; afterwards, the obtained output is flattened and passed to fully connected layers followed by a softmax layer for classification. Finally, the output is compared with the ground truth values to find the error, and this process is repeated many times to minimize the error and obtain a trained CNN prediction model.

Fig. 3. Proposed CNN architecture.
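Based on Fig. 3 and the hyperparameters in Table 1, the proposed architecture can be sketched in Keras roughly as follows. The input shape follows the 10 × 128 sample dimensions of Section 4.1; the ReLU activations and "same" padding are assumptions, as the paper does not state them.

import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(10, 128, 1)),
    layers.Conv2D(32, (3, 3), padding="same", activation="relu"),  # Convolutional-1
    layers.Conv2D(64, (3, 3), padding="same", activation="relu"),  # Convolutional-2
    layers.MaxPooling2D((2, 2)),                                   # Max pooling
    layers.Flatten(),
    layers.Dense(128, activation="relu"),                          # Fully connected
    layers.Dense(9, activation="softmax"),                         # Nine surface classes
])
model.summary()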
Table 1
Hyperparameters employed in CNN.

Layer            Filters/Units   Kernel size
Convolutional-1  32              (3, 3)
Convolutional-2  64              (3, 3)
Max pooling      –               (2, 2)
Fully connected  128             –
Fully connected  9               –
Algorithm 1: Training of the CNN

Input: X_train, y_train: training samples; X_test, y_test: testing samples
Output: w_cp, b_cp: weights and biases of the convolution and pooling layers; w_fc, b_fc: weights and biases of the fully connected layers
begin
    Set the required parameters
    while t < maxtime and E(t) > targeterror do
        for the training set do
            p_train (predicted labels) is calculated from X_train with forward propagation
        end for
        E(t) is re-calculated as E(t) = (1/2) * Σ_x (p_train(x) − y_train(x))², where x runs over all training samples
        w_cp, b_cp, w_fc, b_fc are adjusted with the backpropagation algorithm
        t++
    end while
end
4. Results and analysis

The experiments are performed in a free cloud environment, Google Colab, with GPU enabled.

4.1. Pre-processing

The data obtained from the TAU surface detection dataset [29] has 3408 samples, each with dimension 10 × 128. The dataset is distributed as training and testing sets with corresponding ground truth values. Firstly, the data is prepared by combining the original training and testing sets. Afterward, data standardization is performed.
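A minimal scikit-learn sketch of this preparation step, continuing the loading snippet from Section 2.1 (per-channel standardization is an assumption about the granularity):

import numpy as np
from sklearn.model_selection import train_test_split

# Standardize each of the ten sensor channels to zero mean and unit
# variance, computed across all samples and timesteps.
mean = X.mean(axis=(0, 2, 3), keepdims=True)
std = X.std(axis=(0, 2, 3), keepdims=True) + 1e-8
X_std = (X - mean) / std

# 67:33 random train/test split, as described in Section 4.2.
X_train, X_test, y_train, y_test = train_test_split(
    X_std, y_onehot, test_size=0.33, random_state=42)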
4.2. Training and validation

The standardized dataset is then randomly divided into training and testing sets with a 67:33 train:test split. The proposed CNN architecture is then trained on 67% of the data to prepare the prediction model. The proposed CNN model architecture consists of two convolutional layers followed by one max-pooling layer and fully connected layers. Training is run for 100 epochs with categorical cross-entropy as the loss function. After this process, a prediction model is obtained, to which the remaining 33% of the data is passed for prediction. The predicted values are compared to the corresponding ground truth values to measure the model's efficiency. The plot of the CNN architecture is shown in Fig. 4(a). The CNN hyperparameters set during the experiment are shown in Table 1.
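A sketch of this training setup, continuing the Keras model from Section 3 (the loss and epoch count follow the text; the Adam optimizer and batch size are assumptions):

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
history = model.fit(X_train, y_train,
                    epochs=100,
                    batch_size=32,
                    validation_data=(X_test, y_test))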
4.3. Model evaluation

Finally, a trained model is generated by feeding in all the available training data. The model gives calculated predictions; in order to assess the model's reliability, these predictions are recorded and compared to the observed data. Precision, Recall, F1-score, and overall accuracy (OA) are employed to assess the quality of the developed model. The employed metrics are defined as:

Precision = P(T) / (P(T) + P(F))
Recall = P(T) / (P(T) + N(F))
F1 = 2 × (Precision × Recall) / (Precision + Recall)
OA = (P(T) + N(T)) / (P + N)

where P(T) denotes True Positives, P(F) False Positives, N(F) False Negatives, N(T) True Negatives, P all Positives, and N all Negatives.
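These metrics can be computed from the model's predictions with scikit-learn; a sketch, assuming the model and test split from the earlier snippets:

import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Convert softmax outputs and one-hot ground truth back to class indices.
y_pred = np.argmax(model.predict(X_test), axis=1)
y_true = np.argmax(y_test, axis=1)

# Macro-averaged Precision, Recall, and F1, plus overall accuracy (OA).
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro")
oa = accuracy_score(y_true, y_pred)
print(f"Precision={precision:.2f} Recall={recall:.2f} F1={f1:.2f} OA={oa:.2f}")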

The proposed CNN is compared with existing deep and machine learning models: Long Short-Term Memory networks (LSTM), Gated Recurrent Units (GRU), Support Vector Machines (SVM), and Decision Trees, as shown in Table 3. To have a fair comparison between the models, all the models are trained on 67% of the data and tested on the remaining 33%, and the model evaluation is performed on the testing data. LSTM and GRU are sequential recurrent neural networks that perform well when the data is sequential, like time series, audio, etc. The difference between the two is that LSTM uses more gates than GRU: in contrast to LSTM's three gates (input, output, forget), GRU has only two (reset and update). The model plots of the deep learning models GRU and LSTM are shown in Fig. 4(b) and (c), respectively. Similarly, SVM and Decision Trees are machine learning models; for these, the input data is reshaped to 2D for training and testing. The hyperparameters employed in the models are shown in Table 2. All the parameters are selected after parameter tuning: the grid search method is used to find the best parameters for all models. All the models are executed 10 times, and the evaluation measures are averaged to give the final results.

Fig. 4. Model plot of (a) CNN (b) GRU (c) LSTM.

Table 2
Hyperparameters.

SVM: Kernel = RBF; Gamma = 0.001; C = 10; Degree = 3
Decision Tree: Minimum leaf size = 4; Surrogate splits = Off; Max features = Auto; Criterion = Squared error
LSTM/GRU: No. of layers = 2; Loss function = Categorical cross-entropy; Dense layer neurons = 585; Dropout = 0.1; Hidden neurons = (128, 64); Activation = Softmax; Optimizer = Adam
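As an illustration, a grid search for the SVM baseline can be run with scikit-learn as sketched below. The candidate grid is an assumption; Table 2 reports the selected values (RBF kernel, gamma = 0.001, C = 10).

from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Flatten each 10 x 128 sample to a vector, as the text describes
# reshaping the input to 2D for the classical models.
X_train_2d = X_train.reshape(len(X_train), -1)
y_train_cls = y_train.argmax(axis=1)

param_grid = {"kernel": ["rbf"], "gamma": [1e-4, 1e-3, 1e-2], "C": [1, 10, 100]}
search = GridSearchCV(SVC(), param_grid, cv=3, n_jobs=-1)
search.fit(X_train_2d, y_train_cls)
print(search.best_params_)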
Table 3
Comparison of the proposed CNN with other existing deep and machine learning models. For each model, the columns give Precision / Recall / F1-score.

Class    CNN               GRU               LSTM              SVM               Decision Trees
0        0.96 0.96 0.96    0.83 0.92 0.87    0.88 0.87 0.87    0.71 0.94 0.81    0.66 0.75 0.71
1        0.68 0.54 0.60    0.55 0.33 0.41    0.52 0.41 0.46    0.57 0.20 0.30    0.47 0.44 0.45
2        0.89 0.83 0.86    0.82 0.80 0.81    0.75 0.78 0.77    0.46 0.35 0.40    0.73 0.67 0.70
3        0.84 0.92 0.88    0.74 0.90 0.81    0.76 0.88 0.82    0.51 0.98 0.67    0.83 0.77 0.80
4        0.81 0.81 0.81    0.77 0.60 0.67    0.79 0.63 0.70    0.50 0.06 0.11    0.67 0.60 0.63
5        0.91 0.93 0.92    0.83 0.86 0.85    0.81 0.89 0.85    0.59 0.59 0.59    0.77 0.79 0.78
6        0.86 0.91 0.88    0.86 0.77 0.81    0.77 0.73 0.75    0.43 0.26 0.33    0.80 0.78 0.79
7        0.92 0.81 0.86    0.85 0.81 0.83    0.83 0.76 0.79    0.51 0.22 0.31    0.89 0.84 0.86
8        0.78 0.92 0.84    0.69 0.63 0.66    0.61 0.58 0.59    0.38 0.24 0.29    0.30 0.34 0.32
Overall  0.85 0.85 0.85    0.77 0.74 0.75    0.75 0.73 0.73    0.52 0.43 0.42    0.68 0.66 0.67
OA       0.88              0.79              0.78              0.57              0.72

It can be seen from Table 3 that the proposed CNN architecture outperforms all the other popular models. The CNN model can effectively perform feature learning with the help of its convolutional and pooling layers, which the other models cannot do. The proposed CNN achieves an overall accuracy of 88% with an overall precision, Recall, and F1-score of 85%, which is far better than the other deep learning models, LSTM and GRU, with overall accuracies of 79% and 78%, respectively. The proposed CNN also performs well compared to the machine learning models SVM and Decision Trees, which have overall accuracies of 57% and 72%. The SVM model has the lowest performance scores due to the high dimension of the input data; the deep learning models perform better than the machine learning models because, when the dimensionality of the input data becomes high, the performance of machine learning models drops due to the curse of dimensionality. The proposed CNN model showed at most 14.28%, 16.43%, 13.33%, and 16.43% improvement in terms of overall accuracy, F1-score, precision, and Recall, respectively, over the deep learning models, and at most 54.38%, 102.38%, 63.46%, and 97.67% improvement in overall accuracy, F1-score, precision, and Recall, respectively, over the machine learning models. The predicted and ground truth values of all the models are captured and compared to find the errors/residuals, shown in Fig. 5 as violin plots. Each violin plot shows the residuals of the corresponding prediction model: the wider sections represent high probability, and the skinnier sections represent lower probability. It is visible from the plot that CNN shows a higher probability near zero, which implies that its predicted values are very close to the actual values; the plot indicates that CNN has the least error compared to the other models.

Fig. 5. Violin plot of the residuals.

5. Conclusion

This paper introduces a novel Convolutional Neural Network architecture for efficiently classifying the diverse surfaces perceived by a robot. The provided model can be stored in a robot's onboard processor to quickly determine the surface it is standing on. The proposed CNN model showed at most 14.28%, 16.43%, 13.33%, and 16.43% improvement in terms of overall accuracy, F1-score, precision, and Recall, respectively, over the deep learning models, and at most 54.38%, 102.38%, 63.46%, and 97.67% improvement in overall accuracy, F1-score, precision, and Recall, respectively, over the machine learning models.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Data availability

Data is referred to with a link.

References

[1] D. Bitonneau, T. Moulieres-Seban, J. Dumora, O. Ly, J.-F. Thibault, J.-M. Salotti, B. Claverie, Human-centered design of an interactive industrial robot system through participative simulations: application to a pyrotechnic tank cleaning workstation, in: 26th IEEE International Symposium on Robot and Human Interactive Communication, 2017.
[2] N. Shravani, S. Rao, Introducing robots without creating fear of unemployment and high cost in industries, Int. J. Eng. Technol. Sci. Res. 5 (1) (2018) 1128–1135.
[3] A. De Santis, Modelling and control for human-robot interaction: physical and cognitive aspects, in: 2008 IEEE International Conference on Robotics and Automation, Citeseer, 2008. Retrieved from the PHRIENDS website: http://www.phriends.eu/papers.htm.
[4] M. Peshkin, J.E. Colgate, Cobots, Ind. Robot: Int. J. (1999).
[5] A. Hentout, M. Aouache, A. Maoudj, I. Akli, Human–robot interaction in industrial collaborative robotics: a literature review of the decade 2008–2017, Adv. Robot. 33 (15–16) (2019) 764–799.
[6] T. Moulières-Seban, J. Salotti, B. Claverie, et al., Classification of cobotic systems for industrial applications, in: The 6th Workshop Towards a Framework for Joint Action, 2015, pp. 1–6.
[7] N.T. Pons, Estandarization in Human Robot Interaction (Ph.D. thesis), Universitat Politècnica de Catalunya, Escola Tècnica Superior d'Enginyeria …, 2013.
[8] S.S. Restrepo, G. Raiola, P. Chevalier, X. Lamy, D. Sidobre, Iterative virtual guides programming for human-robot comanipulation, in: 2017 IEEE International Conference on Advanced Intelligent Mechatronics, AIM, IEEE, 2017, pp. 219–226.
[9] P.J. Koch, M.K. van Amstel, P. Debska, M.A. Thormann, A.J. Tetzlaff, S. Bøgh, D. Chrysostomou, A skill-based robot co-worker for industrial maintenance tasks, Procedia Manuf. 11 (2017) 83–90.
[10] R. Meziane, P. Li, M.J.-D. Otis, H. Ezzaidi, P. Cardou, Safer hybrid workspace using human-robot interaction while sharing production activities, in: 2014 IEEE International Symposium on Robotic and Sensors Environments (ROSE) Proceedings, IEEE, 2014, pp. 37–42.
[11] Y.S. Liang, D. Pellier, H. Fiorino, S. Pesty, A framework for robot programming in cobotic environments: First user experiments, in: Proceedings of the 3rd International Conference on Mechatronics and Robotics Engineering, 2017, pp. 30–35.
[12] C. Prakash, S. Singh, R. Singh, S. Ramakrishna, B. Pabla, S. Puri, M. Uddin, Biomanufacturing, Vol. 20, Springer, 2019.
[13] S. Singh, C. Prakash, M. Singh, G.S. Mann, M.K. Gupta, R. Singh, S. Ramakrishna, Poly-lactic-acid: potential material for bio-printing applications, in: Biomanufacturing, Springer, 2019, pp. 69–87.
[14] L. Barbazza, M. Faccio, F. Oscari, G. Rosati, Agility in assembly systems: a comparison model, Assem. Autom. (2017).
[15] J.E. Colgate, J. Edward, M.A. Peshkin, W. Wannasuphoprasit, Cobots: Robots for collaboration with human operators, 1996.
[16] M. Veloso, J. Biswas, B. Coltin, S. Rosenthal, T. Kollar, C. Mericli, M. Samadi, S. Brandão, R. Ventura, CoBots: Collaborative robots servicing multi-floor buildings, in: 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2012, pp. 5446–5447.
[17] J. Biswas, B. Coltin, M. Veloso, Corrective gradient refinement for mobile robot localization, in: 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, IEEE, 2011, pp. 73–78.
[18] J. Biswas, M. Veloso, Wifi localization and navigation for autonomous indoor mobile robots, in: 2010 IEEE International Conference on Robotics and Automation, IEEE, 2010, pp. 4379–4384.
[19] J. Biswas, M. Veloso, Depth camera based indoor mobile robot localization and navigation, in: 2012 IEEE International Conference on Robotics and Automation, IEEE, 2012, pp. 1697–1702.
[20] M. Veloso, J. Biswas, B. Coltin, S. Rosenthal, Cobots: Robust symbiotic autonomous mobile service robots, in: Twenty-Fourth International Joint Conference on Artificial Intelligence, 2015.
[21] L. Lafranca, J. Li, Humans and robots in times of quarantine based on first-hand accounts, in: International Conference on Social Robotics, Springer, 2020, pp. 688–707.
[22] C.-Y. Su, H. Samani, C.-Y. Yang, O.N.N. Fernando, Doctor robot with physical examination for skin disease diagnosis and telemedicine application, in: 2018 International Conference on System Science and Engineering, ICSSE, IEEE, 2018, pp. 1–6.
[23] M.A. Hoepflinger, C.D. Remy, M. Hutter, L. Spinello, R. Siegwart, Haptic terrain classification for legged robots, in: 2010 IEEE International Conference on Robotics and Automation, IEEE, 2010, pp. 2828–2833.
[24] D. Vail, M. Veloso, Learning from accelerometer data on a legged robot, IFAC Proc. Vol. 37 (8) (2004) 822–827.
[25] M. Marcinkiewicz, R. Kaushik, I. Labutov, S. Parsons, T. Raphan, Learning to stabilize the head of a quadrupedal robot with an artificial vestibular system, in: 2009 IEEE International Conference on Robotics and Automation, IEEE, 2009, pp. 2512–2517.
[26] D.-J. Huang, W.-C. Teng, A gait based approach to detect directional bias of four-legged robots' direct walking utilizing acceleration sensors, in: International Conference on Knowledge-Based and Intelligent Information and Engineering Systems, Springer, 2007, pp. 681–688.
[27] Y. Qin, X. Zang, X. Wang, J. Zhao, H. Cai, Posture detection system based upon MEMS inertia sensor for robots, Chin. J. Sensor Actuators 20 (2007) 298–301.
[28] Y. Chen, C. Liu, Q. Chen, A vestibular system model for robots and its application in environment perception, in: 2010 International Conference on Computing, Control and Industrial Engineering, Vol. 2, IEEE, 2010, pp. 230–235.
[29] M. LLC, dataset kernel description, 2018.
[30] C.-C.J. Kuo, Understanding convolutional neural networks with a mathematical model, J. Vis. Commun. Image Represent. 41 (2016) 406–413.
[31] V.E. Balas, R. Kumar, R. Srivastava, Recent Trends and Advances in Artificial Intelligence and Internet of Things, Springer, 2020.
[32] R. Venkatesan, B. Li, Convolutional Neural Networks in Visual Computing: A Concise Guide, CRC Press, 2017.
[33] D.C. Ciresan, U. Meier, J. Masci, L.M. Gambardella, J. Schmidhuber, Flexible, high performance convolutional neural networks for image classification, in: Twenty-Second International Joint Conference on Artificial Intelligence, 2011.
[34] A. Krizhevsky, I. Sutskever, G.E. Hinton, Imagenet classification with deep convolutional neural networks, Commun. ACM 60 (6) (2017) 84–90.

Dr. Simranjit Singh is working as Assistant Professor in the School of Computer Science Engineering and Technology, Bennett University, Greater Noida. He received his Ph.D. degree from the Department of Computer Science and Engineering at the Thapar Institute of Engineering and Technology, Patiala, India. His research interests include deep learning, machine learning, hyperspectral image analysis, and security, and he has more than 6 years of research experience. He has developed various deep learning frameworks for the classification of hyperspectral data, as well as quantification frameworks for various soil attributes. He recently completed a project named "HT-Pred: A complete defensive machine learning tool for Hardware Trojan Detection" funded by the Data Security Council of India.

Dr. Mohit Sajwan is working as Assistant Professor in the Computer Science and Engineering Department, Bennett University, Greater Noida. He received his B.Tech. degree in Computer Science Engineering from Amrapali Institute of Technology, Uttarakhand Technical University, Dehradun, and his M.E. (Software Engineering) from Birla Institute of Technology, Mesra, Ranchi. He received his Ph.D. degree from the Department of Computer Science and Engineering at NIT Delhi, India. His research interests include wireless sensor networks, IoT, information security, machine learning, and image similarity, and he has developed various energy-efficient routing protocols for wireless sensor networks. He recently completed a project named "HT-Pred: A complete defensive machine learning tool for Hardware Trojan Detection" funded by the Data Security Council of India.

Dr. Gurbhej Singh is working as Assistant Professor in the Department of Mechanical Engineering, Amritsar Group of Colleges, Amritsar, Punjab, India. He received his Ph.D. degree from CT University, Ludhiana, India, in 2021. His research areas include surface engineering (microwave claddings), and he is currently working on microwave processing of materials. He has many SCI- and Scopus-indexed publications, was awarded a "Top Cited Paper Award India 2022" by IOP Publishing, United Kingdom, in the review category, and has also published many conference papers and book chapters.

Dr. Anil Kumar Dixit is working as Professor in the Department of Law, Uttaranchal University, Dehradun, India. He received his Ph.D. and master's degrees from Bundelkhand University, India. His research areas include constitutional law, media law, and sports law. He has also published many conference papers and book chapters.

Amrinder Mehta is an Assistant Superintendent in the Division of Research and Development (DRD) at Lovely Professional University, Phagwara, Punjab, India. He received his master's degree from Lovely Professional University in 2015 and is currently pursuing his Ph.D. His research interests include surface engineering and thermal spraying (HVOF, flame spray, cold spray, and plasma spray). He is currently working on nano-structured, multi-modal, and high-entropy-alloy coatings for high-temperature oxidation and corrosion resistance, thermal barrier coatings (TBCs), and microwave material processing. He has published numerous articles on thermal spray coatings in journals such as Materials Today Communications, Surface Review and Letters, Surface Topography: Metrology and Properties, and Journal of Failure Prevention and Control.
