
Defence Science Journal, Vol. 71, No. 2, March 2021, pp. 200-208, DOI: 10.14429/dsj.71.16236
© 2021, DESIDOC

Deep Convolutional Neural Network based Ship Images Classification


Narendra Kumar Mishra*, Ashok Kumar, and Kishor Choudhury
Weapons and Electronics Systems Engineering Establishment, New Delhi - 110 066, India
*E-mail: [email protected]

ABSTRACT
Ships are an integral part of maritime traffic, where they play both military and non-combatant roles. This vast maritime traffic needs to be managed and monitored by identifying and recognising vessels to ensure maritime safety and security. As an approach to finding an automated and efficient solution, a deep learning model exploiting the convolutional neural network (CNN) as a basic building block has been proposed in this paper. CNN has been predominantly used in image recognition due to its automatic high-level feature extraction capabilities and exceptional performance. We have used a transfer learning approach with pre-trained CNNs based on the VGG16 architecture to develop an algorithm that classifies different ship types. This paper adopts data augmentation and fine-tuning to further improve and optimise the baseline VGG16 model. The proposed model attains an average classification accuracy of 97.08%, compared to the average classification accuracy of 88.54% obtained from the baseline model.
Keywords: Ship classification; Convolutional neural network; Transfer learning; VGG16

Received: 25 August 2020, Revised: 19 January 2021, Accepted: 01 February 2021, Online published: 10 March 2021

1. INTRODUCTION
Apart from their conventional roles, modern naval forces are also actively involved in maritime security operations, including monitoring, tracking, detecting, and identifying ocean traffic, efficiently and effectively.
Vessel movements are currently monitored using the automatic identification system (AIS)¹, synthetic aperture radar (SAR)², satellite-based images³⁻⁴, and optical images captured by cameras. SAR or satellite images give a full view of maritime vessels and cover larger ocean areas than optical images. For maritime surveillance, optical image-based classification would be an efficient solution due to its simplicity and easy availability. However, its successful realisation using conventional methods faces many challenges, such as degraded image quality due to environmental factors, the resemblance in the look and form of different classes of ships, and the vastness of the ocean environment.
These factors call for a more reliable technology or system which can automatically classify ships based on their features, where artificial intelligence (AI) can play a significant role. An AI system is capable of automatically identifying and recognising marine vessels and the objects around them, like navigation aids, boats, etc., which can lead to enhanced situational awareness.

1.1 Convolutional Neural Network
With the advancement in technology, in terms of more robust algorithms, the availability of large volumes of structured datasets, and the capability of handling large volumes of data more efficiently through graphical processing units (GPUs), AI has turned up as one of the most promising technologies across diverse fields. In this paper, a CNN based deep learning algorithm has been studied, and its performance is evaluated and analysed.
In CNN based models, input images are minimally processed and fed directly to the system, where a suitable group of features is extracted through learning⁵. This capability allows for the cascading of several CNN layers, making the network a "deep" feature extractor that learns the essential features for the particular problem of interest. In deep learning models, convolutional layers learn more generic features in the initial stages and learn features specific to the input training dataset in the deeper stages; these features are further utilised to classify test images that were not part of the training dataset. CNN is predominantly used in a broad range of image recognition applications due to its automatic high-level feature extraction capabilities and exceptional performance⁶.
Fundamentally, a CNN architecture consists of sequences of layers that transform the pixel values of input images through various processes into final class scores. The process flow architecture of a typical CNN and its basic building blocks are shown in Fig. 1. The fundamental blocks of a CNN⁷ are described as follows:

Figure 1. Process flow architecture of a typical convolutional neural network.

(i) Convolutional Layer: This layer is responsible for learning the important features from input images. It consists of several learnable filters or kernels that slide spatially across the input image and compute dot products as a response, called the feature maps. A schematic implementation of the convolution operation is shown in Fig. 2. Two important features of the convolutional layer are local connectivity (at a time, the filter weights are multiplied with only a local area of the input image) and weight sharing (the same filter weights are applied at every spatial location of the input image). The convolutional layer output is fed to an activation function (e.g., ReLU) that introduces non-linearity into the artificial neural network. The output size of the convolutional layer depends on the following four hyperparameters:
• Number of filters, K
• Filter size, F×F
• Amount of zero padding, P
• Stride, S

Figure 2. Schematic representation of the convolution operation in a convolutional layer.

For an input volume of size W1×H1×D1, the convolutional layer produces an output volume of size W2×H2×D2, which can be calculated as

W2 = (W1 − F + 2P)/S + 1
H2 = (H1 − F + 2P)/S + 1
D2 = K
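As a worked illustration of these relations (not an example given in the paper): an input volume of 224×224×3 convolved with K = 64 filters of size F = 3, zero padding P = 1, and stride S = 1 gives W2 = H2 = (224 − 3 + 2×1)/1 + 1 = 224 and D2 = 64, i.e. an output volume of 224×224×64, which corresponds to the first convolutional layer of the standard VGG16 architecture.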
(ii) Pooling Layer: This layer downsamples the input image's spatial dimension and is placed in between two convolutional layers. Pooling minimises the computational complexity by reducing the number of learnable network parameters. MaxPooling and AveragePooling are the two prominently used pooling techniques, as depicted in Fig. 3.

Figure 3. Schematic representation of the pooling operation in a convolutional layer: MaxPooling (top) and AveragePooling (bottom).

(iii) Fully-Connected Layer: This layer performs the actual prediction (classification or regression) job. It consists of fully connected layers, as in a regular artificial neural network, followed by a Softmax layer (the final output layer) that provides the class scores. It consists of an input layer, an output layer, and a number of hidden layers, as shown in Fig. 4. The number of hidden layers and the number of nodes in each layer are hyperparameters.

Figure 4. Schematic representation of a neural network with fully connected layers.

1.2 Transfer Learning
It is very uncommon to train a convolutional network from scratch because doing so needs a sufficiently large training dataset and a GPU to execute and evaluate the deep learning model. Alternatively, a transfer learning approach can be used for a new classification task. In the transfer learning-based approach, pretrained model weights, which have already been trained optimally on similar problems, are used for the new image recognition task. The transfer learning approach is schematically represented in Fig. 5. In transfer learning, either a convolutional network pretrained on millions of images can be used as a fixed feature extractor (where the pretrained weights of the convolutional blocks are used as-is for the particular classification task of interest), or the weights of the pretrained network can be fine-tuned for the specific dataset/problem. The selection of a specific transfer learning approach depends upon several factors⁸, which include the size and similarity of the new dataset compared to the original dataset, as tabulated in Table 1. Due to the inadequacy of a sufficiently large dataset and of GPU availability, transfer learning has been used in the present study, considering Case 3 for implementation.

Figure 5. Schematic representation of the transfer learning approach.
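The two options described above can be sketched as follows. The paper does not name its software framework, so the snippet assumes TensorFlow/Keras with ImageNet-pretrained VGG16 weights and is purely illustrative.

```python
# Illustrative only: the paper does not state its framework.
# Assumes TensorFlow/Keras with ImageNet-pretrained VGG16 weights.
import tensorflow as tf

conv_base = tf.keras.applications.VGG16(weights="imagenet",
                                        include_top=False,
                                        input_shape=(224, 224, 3))

# Option 1 -- fixed feature extractor: all pretrained weights stay frozen.
conv_base.trainable = False

# Option 2 -- fine-tuning: unfreeze some (or all) convolutional blocks so
# their weights are updated on the new dataset, e.g. only block 5:
# for layer in conv_base.layers:
#     layer.trainable = layer.name.startswith("block5")
```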
There are several freely accessible top-performing models, like VGG16⁹, ResNet50¹⁰, Inception¹¹, Xception¹², InceptionResNet¹³, and DenseNet¹⁴, which can be readily integrated into a new image recognition task. In the present study, the transfer learning approach based on the standard VGG16 model has been used as the baseline model for ship image classification. Originally, VGG16 was trained using ImageNet, which consists of millions of images in 1000 categories. In our study, the marine vessel images are smaller in number and dissimilar in relation to the original dataset. Therefore, the standard VGG16 could not be used as a fixed feature extractor, where the pretrained weights of the convolutional blocks are used as-is for the particular classification task. Hence, to make the VGG16 model more relevant and specific to the classification of ship images, the last convolution block of the VGG16 network has been re-trained. However, the relatively small number of images used in training the model leads to overfitting, which has also been verified empirically. To mitigate this issue, we have used two important process improvement techniques: the first is 'BatchNormalisation' and the second is 'Dropout'. It is pertinent to mention that these process improvement techniques, BatchNormalisation and Dropout, were not implemented in the standard VGG16 model.
Process improvement techniques cannot be incorporated directly into the pretrained convolutional blocks of the standard VGG16 model. Therefore, an additional convolutional block, similar to the convolutional blocks of standard VGG16, has been appended in the proposed model to incorporate the process improvement techniques. An extra convolutional block would also lead to learning more specific features of the input training dataset as the convolutional layers go into deeper stages. Process improvement techniques have also been incorporated into the classification block consisting of fully-connected layers. The VGG16 model has been further built upon by data augmentation and fine-tuning of the network hyperparameters. The proposed model has been evaluated against the baseline model.

Table 1. Choice of transfer learning approach depending upon the similarity and size of the images in the new problem statement as compared to the original dataset

Case (1): New dataset similar to and smaller than the original dataset. Approach: train only a classifier layer using pretrained weights. Explanation: fine-tuning of the pretrained weights would lead to the overfitting problem due to the small dataset; as images in the new problem of interest have similarities with the original dataset, features learned by the pretrained weights would still be relevant.
Case (2): New dataset similar to and larger than the original dataset. Approach: the model can be fine-tuned through the full network. Explanation: since the new dataset is sufficiently large, re-training would not suffer overfitting issues.
Case (3): New dataset dissimilar to and smaller than the original dataset. Approach: a few convolutional layers, including the classifier layer, can be fine-tuned. Explanation: since the new dataset is small, only the classifier layer is to be re-trained; however, due to the new dataset's difference from the original dataset, a few convolutional layers need to be re-trained to learn the features specific to the new dataset.
Case (4): New dataset dissimilar to and larger than the original dataset. Approach: a model can be developed from scratch, or transfer learning can be utilised by fine-tuning through the entire network. Explanation: a model can be trained from scratch due to the availability of a large dataset; alternatively, transfer learning can be utilised by fine-tuning through the entire network.
2. RELATED WORK
In the recent past, several efforts have been made to classify optical images of maritime vessels using CNN based deep learning algorithms. In reference¹⁵, CNNs based on AlexNet, Inception, and ResNet50 have been trained using the MARVEL dataset¹⁶, a large-scale image dataset for maritime vessels. The MARVEL dataset is a huge collection of marine vessels consisting of 2 million images from a ship photos and ship tracker website¹⁷. Ship classification¹⁸ using the AlexNet model for ten categories of vessels, using images from the same website, has also been developed. More studies¹⁹⁻²⁰ have been undertaken using images from the same website. Although these studies have used transfer learning based architectures and images from the same website, a one-to-one performance comparison with the present study cannot be undertaken due to the lack of uniformity in the datasets.

3. EXPERIMENTAL DESIGN
3.1 Dataset
The first challenge in training and validating the proposed model was the availability of authentic and labelled images of ships for classification purposes. To ensure this, for our experiment we obtained the dataset by downloading ship images from the aforementioned website.
The website contains a large number of vessel images for each category. To reduce our model's processing complexity, we have compiled a class-balanced dataset comprising 2400 images of four classes: aircraft carriers, crude oil tankers, cruise ships & liners, and destroyers. A few images from the training dataset for each category are shown in Fig. 6. The complete dataset has been distributed in a proportion of 80:20 for training and testing of the proposed model. Twenty percent of the training dataset has been further utilised for validation purposes. The numbers of training and testing images in the dataset are enumerated in Table 2. All the images were saved with a pixel size of 224 for the image's largest dimension, without affecting the pixel quality, as the standard VGG16 model was developed using an input image size of 224×224. A sketch of this preparation step follows Fig. 6.

Figure 6. Sample images of the four classes of ships from the dataset: aircraft carriers, crude oil tankers, cruise ships and liners, and destroyers.
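The dataset preparation described above can be sketched as follows. The paper does not publish its scripts, so the folder layout, library choices (Pillow, scikit-learn), and file naming below are assumptions.

```python
# Hypothetical sketch of the dataset preparation described above.
# Paths, file format, and libraries are assumptions, not from the paper.
from pathlib import Path
from PIL import Image
from sklearn.model_selection import train_test_split

def resize_largest_side(path, size=224):
    """Scale an image so that its largest dimension becomes `size` pixels."""
    img = Image.open(path).convert("RGB")
    scale = size / max(img.size)                       # img.size is (width, height)
    return img.resize((round(img.width * scale), round(img.height * scale)))

classes = ["aircraft_carrier", "crude_oil_tanker", "cruise_ship_liner", "destroyer"]
files, labels = [], []
for idx, name in enumerate(classes):
    for f in Path("ships").joinpath(name).glob("*.jpg"):   # assumed folder layout
        files.append(f)
        labels.append(idx)

# 80:20 train/test split; 20% of the training part is later held out for validation.
train_files, test_files, train_y, test_y = train_test_split(
    files, labels, test_size=0.20, stratify=labels, random_state=42)
```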
Table 2. Description of the number of training and test images from the dataset

Class                      Train    Test
Aircraft carriers          480      120
Crude oil tankers          480      120
Cruise ships and liners    480      120
Destroyers                 480      120
Total                      1920     480

3.2 Neural Network Architecture
In this part, the baseline model based on the VGG16 architecture and the various techniques that we incorporated in the proposed model to improve the classification performance are described.

3.2.1 Baseline Model (VGG16)
The VGG16 model, developed by the Visual Geometry Group (VGG) at Oxford, has been used as the baseline model. It comprises a series of convolutional layers and MaxPooling layers as its primary elements, connected in a pattern as shown in Fig. 7. Convolution Blocks 1 and 2 each comprise two convolutional layers and one MaxPooling layer in succession, acting as a feature extractor. Similarly, Convolution Blocks 3, 4, and 5 each include three convolutional layers and one MaxPooling layer in sequence. The final Block 6 consists of three fully connected layers and a Softmax layer in succession as the classifier. It is important to note that the Dropout and BatchNormalisation steps were not implemented in the standard VGG16 model. Since data augmentation does not form part of the actual VGG16 model, it has also been incorporated into the baseline model.

Figure 7. Process flow architecture of the standard VGG16 model.
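A minimal sketch of such a baseline is given below, assuming a Keras implementation (the framework is not named in the paper); the ImageNet-pretrained convolutional blocks act as the feature extractor and a VGG16-style fully connected classifier ends in a four-way Softmax for the four ship classes.

```python
# Baseline sketch (assumed Keras code): VGG16 Blocks 1-5 plus a Block-6 classifier.
import tensorflow as tf
from tensorflow.keras import layers

conv_blocks = tf.keras.applications.VGG16(weights="imagenet",
                                          include_top=False,
                                          input_shape=(224, 224, 3))
conv_blocks.trainable = False        # pretrained feature extractor (Blocks 1-5)

baseline = tf.keras.Sequential([
    conv_blocks,
    layers.Flatten(),
    layers.Dense(4096, activation="relu"),   # Block 6: fully connected layers
    layers.Dense(4096, activation="relu"),   # (sizes follow standard VGG16)
    layers.Dense(4, activation="softmax"),   # Softmax layer for the 4 classes
])
```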

3.2.2 Data Augmentation and Fine-Tuning of the VGG16 Model
The performance of the VGG16 model has been further improved by incorporating various process improvement techniques, as discussed below.
(a) Data Augmentation: Data augmentation has been employed to achieve diverse feature learning by adding individual variations to the images, so that the same kinds of images are not fed in every epoch during the learning process. Data augmentation has been applied very carefully to generate a new set of images that augments the training dataset while preserving the basic features. The random variations applied to the training dataset include zooming, rotation, shifts, and horizontal/vertical flips.
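A brief sketch of such an augmentation pipeline is given below, assuming the Keras ImageDataGenerator API; the paper does not report its exact augmentation ranges, so the values shown are placeholders.

```python
# Illustrative augmentation sketch (assumed Keras API; ranges are placeholders).
from tensorflow.keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(
    rotation_range=15,          # random rotation (degrees)
    width_shift_range=0.1,      # random horizontal shift
    height_shift_range=0.1,     # random vertical shift
    zoom_range=0.2,             # random zoom
    horizontal_flip=True,       # random horizontal flip
    vertical_flip=True,         # random vertical flip
    rescale=1.0 / 255)

# train_flow = augmenter.flow_from_directory("ships/train", target_size=(224, 224),
#                                            batch_size=32, class_mode="categorical")
```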
(b) Re-training the weights of the VGG16 model: VGG16 was designed to extract fine-grained features of objects from 1,000 categories. As the higher-order features learned by the model correspond to the ImageNet dataset and may not be directly relevant to the classification of optical images of ships, we have re-trained some convolution blocks of VGG16 to fine-tune the weights for our classification task.
(c) Fine-Tuning the Model: The following hyperparameters have been fine-tuned to improve the performance of the baseline model:
(i) Number of Layers: Classification accuracy may be improved by increasing the number of hidden layers and the number of nodes in each layer, as this enhances the model capacity. However, it has been observed empirically that a deeper network leads to overfitting of the model, higher complexity, and more training time due to the increased number of learnable parameters. Therefore, the impact of the number of hidden layers and of the nodes in each layer was evaluated empirically, and optimal numbers were chosen accordingly.
accordingly. the Google Colab cloud server. The Hyperparameter values
(ii) Learning Rates: Learning rate is one of the vital have been tuned optimally in multiple iterations while training
Hyperparameter that needs careful selection. Through our model. Details of the final experimental parameters are
experiments, it has been observed that a small learning tabulated in Table 3.
rate causes the trapping and slow down of the
learning process; whereas, a large value of learning
rate leads to quick and non-optimal convergence. An
optimal value of the learning rate has been chosen
empirically.
(iii) Batchsize: Batchsize is the number of training samples
fed to the gradient descent algorithm in determining
the error gradient. It is a vital Hyperparameter that
influences the learning algorithm’s dynamics.
(iv) BatchNormalisation: BatchNormalisation performs
the normalisation (shifting and scaling) of the output
from a convolutional layer before feeding it to the next
layer that reduces the covariance shift of the network21.
It speeds up the learning process of an artificial neural
network with enhanced stability.
(v) Regularisation: A major challenge in the development
of any deep learning model is to overcome the
overfitting problem so that it may generalize well
on the new dataset. To mitigate this issue, two
prominent regularisation techniques, Dropout22 and
Early Stopping, have been used in this paper. These
techniques not only reduce overfitting but can also
lead to the faster optimisation and better overall
performance.
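As a small illustrative sketch (assuming a Keras implementation, which the paper does not specify), the normalisation and regularisation elements of items (iv) and (v) can be instantiated as follows; the patience value is an assumption, and the dropout rate is taken from the 0.2-0.5 range reported later in Table 3.

```python
# Illustrative only (assumed Keras API); the patience value is an assumption.
from tensorflow.keras import callbacks, layers

dropout = layers.Dropout(0.5)             # randomly zeroes 50% of activations
batch_norm = layers.BatchNormalization()  # normalises layer outputs batch-wise

early_stop = callbacks.EarlyStopping(
    monitor="val_loss",          # stop when validation loss stops improving
    patience=20,                 # assumed patience, not reported in the paper
    restore_best_weights=True)
```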

3.2.3 Proposed Model
During the performance analysis of the baseline model, several significant observations were made. During the training process, the cross-entropy loss first decreased; however, it started increasing after a certain number of epochs. It was also observed that there exists a substantial gap between the graphs of training and validation accuracy. The model achieved very high training accuracy but performed poorly on the test dataset. This behaviour clearly indicates overfitting of the model. To further improve the performance, the following modifications were incorporated in the proposed model:
(a) The weights of convolution Block-5 are re-trained so that the model becomes more suitable and efficient for the ship classification task.
(b) An additional convolution block, consisting of three consecutive convolutional layers and a MaxPooling layer, has been inserted before the block of fully connected layers. This block has been primarily used to incorporate BatchNormalisation and Dropout to avoid overfitting in the model and to assist in learning higher-order features.
(c) BatchNormalisation and Dropout have been embedded into Block-6, consisting of the fully connected layers.
The process flow architecture of the proposed model is presented in Fig. 8, and an illustrative sketch follows the figure.

Figure 8. Process flow architecture of the proposed model.
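A hedged sketch of how modifications (a)-(c) could be assembled is given below, assuming a Keras implementation (the paper does not publish its code); the filter counts of the added block, the dense-layer size, and the exact dropout rates are assumptions rather than reported values.

```python
# Illustrative sketch of the proposed architecture (assumed Keras code).
import tensorflow as tf
from tensorflow.keras import layers

vgg16 = tf.keras.applications.VGG16(weights="imagenet",
                                    include_top=False,
                                    input_shape=(224, 224, 3))
# (a) re-train only convolution Block-5 of VGG16; keep the rest frozen.
for layer in vgg16.layers:
    layer.trainable = layer.name.startswith("block5")

proposed = tf.keras.Sequential([
    vgg16,
    # (b) extra VGG16-style block with BatchNormalisation and Dropout.
    layers.Conv2D(512, 3, padding="same", activation="relu"),
    layers.Conv2D(512, 3, padding="same", activation="relu"),
    layers.Conv2D(512, 3, padding="same", activation="relu"),
    layers.BatchNormalization(),
    layers.MaxPooling2D(),
    layers.Dropout(0.3),                    # within the 0.2-0.5 range of Table 3
    # (c) classification block (Block-6) with BatchNormalisation and Dropout.
    layers.Flatten(),
    layers.Dense(512, activation="relu"),   # layer size is an assumption
    layers.BatchNormalization(),
    layers.Dropout(0.5),
    layers.Dense(4, activation="softmax"),
])
```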

3.2.4 Experimental Parameters
The proposed model has been trained and evaluated on the Google Colab cloud server. The hyperparameter values have been tuned optimally over multiple iterations while training our model. The final experimental parameters are tabulated in Table 3, and a sketch of the corresponding training configuration is given after the table.

Table 3. Hyperparameters selected for the training of the baseline and the proposed model

Experimental parameter    Value
Learning rate             0.0001
Momentum                  0.99
Batchsize                 32
Number of epochs          500 with early stopping
Dropout                   0.2-0.5
Optimizer                 Adam
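Table 3 maps directly onto a training configuration. The sketch below assumes a Keras implementation (the paper does not publish its code); the model and the dataset objects are placeholders, and interpreting the reported momentum of 0.99 as Adam's beta_1 is an assumption.

```python
# Sketch of the training set-up implied by Table 3 (assumed Keras code).
# `model` is a stand-in for the proposed network of Fig. 8; the momentum of
# 0.99 is interpreted here as Adam's beta_1, which the paper does not state.
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    tf.keras.Input(shape=(224, 224, 3)),
    layers.Flatten(),
    layers.Dense(4, activation="softmax"),
])  # placeholder model only

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4, beta_1=0.99),
    loss="categorical_crossentropy",
    metrics=["accuracy"])

early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss",
                                              restore_best_weights=True)

# history = model.fit(train_data, validation_data=val_data,
#                     epochs=500, batch_size=32, callbacks=[early_stop])
```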
4. ANALYSIS AND RESULTS
Both the baseline and the proposed models have been trained on Google Colab, using the hyperparameters listed in Table 3, on the same input dataset. The classification performance measures for both models are tabulated in Table 4. It is to be noted that the Early Stopping criterion takes almost twice the number of epochs to exit the training process in the proposed model. The gap of 4.3% between training and validation accuracy in the baseline model was reduced to 2.4% in the proposed model, showing reduced overfitting and better convergence. Evaluation on the test dataset shows a performance improvement of 8.54% in terms of classification accuracy compared to the baseline model.

Table 4. Classification performance measures for the baseline and the proposed model

Baseline: training loss 0.0609, training accuracy 97.53%; validation loss 0.2321, validation accuracy 93.23%; testing loss 0.4856, testing accuracy 88.54%; epochs 161 (max 500).
Proposed: training loss 0.0068, training accuracy 99.80%; validation loss 0.1728, validation accuracy 97.40%; testing loss 0.1347, testing accuracy 97.08%; epochs 327 (max 500).

The performance has also been evaluated by analysing the graphs of classification accuracy and cross-entropy loss during the training process. For the baseline model, the analysis of these graphs, shown in Fig. 9, reveals a considerable gap between training and validation, which confirms overfitting in the model. Regularisation (through Dropout) and Early Stopping have been included in the baseline model to reduce the impact of overfitting. The corresponding graphs for the proposed model, showing improved performance and better convergence between training and validation, are shown in Fig. 10.

Figure 9. Graphs of classification accuracy and cross-entropy loss for the baseline model.
Figure 10. Graphs of classification accuracy and cross-entropy loss for the proposed model.

The confusion matrix²³, which provides the matrix of true labels vs. predicted labels, is shown in Figs. 11 and 12 for the baseline and the proposed models, respectively. It represents the number of true classifications in each category through its diagonal elements. It has been observed that significant confusion occurs between the aircraft carrier and destroyer categories of images, which is attributed to the similarity of features between the two categories.
Six aircraft carrier images have been incorrectly predicted as destroyer-class in the proposed model. Another significant observation is that, out of 120 test images of the destroyer class, 118 images were correctly classified by the baseline model, while the number is 117 for the proposed model. However, the overall classification accuracy has improved significantly in the proposed model.

Figure 11. Confusion matrix for the baseline model.
Figure 12. Confusion matrix for the proposed model.
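For reference, a confusion matrix such as those shown in Figs. 11 and 12 can be computed as sketched below (assuming scikit-learn and NumPy; the model outputs are replaced by stand-in arrays, as the paper does not publish its evaluation code).

```python
# Hypothetical sketch: computing a confusion matrix for the four ship classes.
# Assumes scikit-learn/NumPy; `y_prob` and `y_true` are stand-in arrays.
import numpy as np
from sklearn.metrics import confusion_matrix

class_names = ["Aircraft carrier", "Crude oil tanker",
               "Cruise ship and liner", "Destroyer"]

# y_prob = model.predict(test_images)        # shape (480, 4): Softmax class scores
y_prob = np.random.rand(480, 4)              # stand-in predictions
y_true = np.random.randint(0, 4, size=480)   # stand-in true labels

y_pred = np.argmax(y_prob, axis=1)
cm = confusion_matrix(y_true, y_pred)        # rows: true labels, columns: predicted
print(cm)
```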
5. CONCLUSION
In the present study, ship classification has been addressed using a VGG16 based transfer learning architecture. Further, the addition of several performance improvement techniques and the fine-tuning of the neural network hyperparameters have been carried out to improve the baseline model. Evaluation and analysis of the proposed model have been carried out for four categories of ship images using a limited dataset. The CNN based proposed model shows promising results, with a classification accuracy of 97.08%, making it suitable for maritime security applications.
In all the experiments, it has been assumed that the input images belong to only one of the four categories. However, if an input image does not belong to any of the four categories, it can be classified as a member of an 'unknown' class by assigning a suitable threshold value (say, 0.5) to the class scores. Class scores are the values of the associated probabilities at the output of the Softmax layer, which is the final output layer of the artificial neural network. If the value of the highest class score is lower than the threshold value, then that particular output can be marked as the 'unknown' class, as sketched below.
As future work, this model can be further fine-tuned for use with satellite-based or SAR based ship images to create a robust system for ship classification. The study can also be extended to the case of multiple ships or objects in each input image.
REFERENCES
1. http://www.imo.org/en/OurWork/safety/navigation/pages/ais.aspx [Accessed on 28 Jun 2020].
2. Bentes, C.; Frost, A.; Velotto, D. & Tings, B. Ship-iceberg discrimination with convolutional neural networks in high resolution SAR images. In Proceedings of EUSAR 2016: 11th European Conference on Synthetic Aperture Radar, Hamburg, Germany, 2016, 1-4.
3. Rainey, Katie; Reeder, John D. & Corelli, Alexander G. Convolution neural networks for ship type recognition. In Proc. SPIE 9844, Automatic Target Recognition XXVI, 984409, 12 May 2016. doi: 10.1117/12.2229366
4. Shi, Qiaoqiao; Li, Wei; Tao, Ran; Sun, Xu & Gao, Lianru. Ship classification based on multifeature ensemble with convolutional neural network. Remote Sens., 2019, 11, 419. doi: 10.3390/rs11040419
5. Lecun, Y.; Haffner, P.; Bottou, L. & Bengio, Y. Object recognition with gradient-based learning. In Shape, Contour and Grouping in Computer Vision, Springer-Verlag, Heidelberg, Berlin, 1999, 319-345. doi: 10.1007/3-540-46805-6_19
6. Gonzalez, R.C. Deep convolutional neural networks [Lecture Notes]. IEEE Signal Processing Magazine, Nov 2018, 35(2), 79-87. doi: 10.1109/MSP.2018.2842646
7. https://cs231n.github.io/convolutional-networks/ [Accessed on 14 Nov 2020].
8. https://cs231n.github.io/transfer-learning/ [Accessed on 14 Nov 2020].
9. Simonyan, Karen & Zisserman, Andrew. Very deep convolutional networks for large-scale image recognition. 2015. arXiv:1409.1556v6
10. He, K.; Zhang, X.; Ren, S. & Sun, J. Deep residual learning for image recognition. In Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Las Vegas, NV, USA, Jun 2016, 770-778. doi: 10.1109/CVPR.2016.90
11. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J. & Wojna, Z. Rethinking the inception architecture for computer vision. In Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Las Vegas, NV, USA, Jun 2016, 2818-2826. doi: 10.1109/CVPR.2016.308
12. Chollet, F. Xception: Deep learning with depthwise separable convolutions. In Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Honolulu, HI, USA, Jul 2017, 1800-1807. doi: 10.1109/CVPR.2017.195
13. Szegedy, C.; Ioffe, S.; Vanhoucke, V. & Alemi, A. Inception-v4, Inception-ResNet and the impact of residual connections on learning. In Proc. AAAI Conf. Artif. Intell., San Francisco, CA, USA: AAAI Press, 2017, 4278-4284. doi: 10.5555/3298023.3298188
14. Huang, G.; Liu, Z.; Van Der Maaten, L. & Weinberger, K.Q. Densely connected convolutional networks. In Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Honolulu, HI, USA, Jul 2017, 2261-2269. doi: 10.1109/CVPR.2017.243
15. Leclerc, M.; Tharmarasa, R.; Florea, M.C.; Boury-Brisset, A.; Kirubarajan, T. & Duclos-Hindié, N. Ship classification using deep learning techniques for maritime target tracking. In 21st International Conference on Information Fusion (FUSION), Cambridge, 2018, 737-744. doi: 10.23919/ICIF.2018.8455679
16. Solmaz, Berkan; Gundogdu, Erhan; Yucesoy, Veysel & Koc, Aykut. Generic and attribute-specific deep representations for maritime vessels. IPSJ Transactions on Computer Vision and Applications, 2017. doi: 10.1186/s41074-017-0033-4
17. Ship photos and ship tracker. http://www.shipspotting.com [Accessed on 22 May 2020].
18. Bartan, Burak. Ship classification using an image dataset. 2017. Corpus ID: 29004678
19. Milicevic, Mario; Zubrinic, Krunoslav; Obradovic, Ines & Sjekavica, Tomo. Data augmentation and transfer learning for limited dataset ship classification. WSEAS Trans. Syst. Control, 2018, 13, 460-465.
20. Dao, Cuong; Xiaohui, Hua; Morère, Olivier. Maritime vessel images classification using deep convolutional neural networks. In Proceedings of the Sixth International Symposium on Information and Communication Technology, 2015, 276-281. doi: 10.1145/2833258.2833266
21. Ioffe, Sergey & Szegedy, Christian. Batch normalisation: accelerating deep network training by reducing internal covariate shift. 2015. arXiv:1502.03167
22. Srivastava, Nitish; Hinton, Geoffrey; Krizhevsky, Alex; Sutskever, Ilya & Salakhutdinov, Ruslan. Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learning Res., 2014, 15(56), 1929-1958.
23. https://en.wikipedia.org/wiki/Confusion_matrix [Accessed on 28 Jun 2020].

Contributors

Mr Narendra Kumar Mishra received his MTech in Communication Engineering from IIT Delhi, in 2018. He is presently working as Scientist 'D' at DRDO - Weapons and Electronics Systems Engineering Establishment, New Delhi. His field of research includes embedded systems, systems integration, signal processing and machine learning. He has contributed to the design and development of interface solutions for various ships and submarines.
In the current study, he conceived and designed the experiment, optimised the deep learning techniques used in the experiment, performed the software coding and results analysis, and prepared the manuscript.

Mr Ashok Kumar received his MTech in Radio Frequency Design and Technology from IIT Delhi, in 2017. He is presently working as Scientist 'D' at DRDO - Weapons and Electronics Systems Engineering Establishment, New Delhi. His field of research includes embedded systems, data communication and systems integration. He has contributed to the design and development of interface solutions for various ships and submarines.
In the current study, he performed the data preparation and helped in the formulation of the experiment and the preparation of the manuscript.

Mr Kishor Choudhury received his MTech in Computer Technology from IIT Delhi, in 2012. He is presently working as Scientist 'F' at DRDO - Weapons and Electronics Systems Engineering Establishment, New Delhi. He received the CNS Commendation in 2006 and the DRDO Technology Group Award in 2008. His field of research includes algorithms, embedded systems and computer vision. He has contributed to the design and development of interface units for various ships and submarines.
In the current study, he provided overall guidance in the conceptualisation and realisation of the experiment and the finalisation of the manuscript.
