
International Journal of Advanced Engineering Research and Science (IJAERS)

Peer-Reviewed Journal
ISSN: 2349-6495(P) | 2456-1908(O)
Vol-11, Issue-12; Dec, 2024
Journal Home Page Available: https://ijaers.com/
Article DOI: https://dx.doi.org/10.22161/ijaers.1112.10

Object Detection using ELAN


Sanjivani Sharma, Dr RK Sharma

Department of Computer Science, AKTU University, Lucknow, UP, India


[email protected], [email protected]

Received: 27 Nov 2024; Received in revised form: 20 Dec 2024; Accepted: 25 Dec 2024; Available online: 31 Dec 2024

©2024 The Author(s). Published by AI Publication. This is an open access article under the CC BY license (https://creativecommons.org/licenses/by/4.0/).

Keywords— Object detection, ODUELAN framework, deployment strategies, optimization techniques, training approaches.

Abstract— ODUELAN has long served as a de facto industry standard for efficient object detection. The ODUELAN community has grown significantly, extending its use across a wide range of hardware platforms and application scenarios. In this technical report, we make a determined effort to push its limits further, with a focus on practical deployment in production settings. We carefully review the most recent advances in object detection from academia and industry, taking into account the varying demands for accuracy in real-world environments. We incorporate concepts from contemporary network design, training approaches, testing strategies, and optimisation techniques, and we combine our ideas and experience into a set of deployment-oriented recommendations.

I. INTRODUCTION

Real-time object detection is a crucial problem in computer vision, since it is frequently a core component of larger vision systems. Examples include robotics [35, 58], autonomous driving [40, 18], multi-object tracking [94, 93], and medical image analysis [34, 46]. Real-time object detection is commonly run on a variety of neural processing units (NPUs), mobile CPUs or GPUs, and devices from well-known vendors. Examples of NPUs include the Intel Neural Compute Stick, kernels on AI SoCs, the MediaTek AI Processing Unit, the Qualcomm Neural Processing Engine, the Apple Neural Engine, the Google Edge TPU, and Nvidia's Jetson edge AI modules. Some of these edge devices focus on accelerating particular operations, such as MLP operations, depth-wise convolution, and vanilla convolution. In this research, we propose an object detector that primarily targets GPU machines and mobile GPUs, from the edge to the cloud.

In recent years, real-time object detectors have been developed for a variety of edge devices. For instance, the development of MCUNet [49, 48] and NanoDet [54] focused on producing lightweight single-chip models and improving inference speed on edge CPUs, while approaches such as YOLOX [21] and YOLOR [81] aim at improving inference speed on diverse GPUs. More recently, the design of efficient architectures for real-time object detection has taken centre stage. Real-time object detectors intended for CPUs [54, 88, 84, 83] largely base their architectures on MobileNet [28, 66, 27], ShuffleNet [92, 55], or GhostNet [25]; mainstream real-time detectors for GPUs [81, 21, 97] mostly build on ResNet [26], DarkNet [63], or DLA [87], and then optimise the architecture with the CSPNet [80] strategy. The detector proposed in this article differs from these mainstream detectors: in addition to the architecture, our solutions concentrate on improving the training procedure. We focus on modules and optimisation techniques that can increase object detection accuracy without raising the inference cost, at the price of a higher training cost; we refer to these modules and optimisation methods as "trainable bag-of-freebies". The key objective of this project is to develop a fast object detector for use in manufacturing systems, optimised for parallel computation rather than for a theoretical indicator of low computation (BFLOPs).
www.ijaers.com Page | 84
Sharma and Sharma International Journal of Advanced Engineering Research and Science, 11(12)-2024

We expect the proposed detector to be easy to train and use. For instance, anybody who trains and tests on a typical GPU should obtain real-time, high-quality, and convincing object detection results, like the YOLOv4 results shown in Figure 1. Our contributions are summarised as follows:

1. We develop an efficient and powerful object detection system. Anyone can use a 1080 Ti or 2080 Ti GPU to train an extremely fast and accurate object detector.

2. During detector training, we evaluate the impact of state-of-the-art bag-of-freebies and bag-of-specials object detection techniques.

3. We adapt state-of-the-art methods such as CBN [89], PAN [49], and SAM [85] to make them more efficient and better suited for training on a single GPU.

II. RELEVANT WORK

2.1 Object detection methods

A typical modern detector has two components: a backbone pre-trained on ImageNet, and a head that predicts object categories and bounding boxes. For detectors running on GPUs, the backbone may be VGG [68], ResNet [26], ResNeXt [86], or DenseNet [30]; for devices that operate on CPU platforms, the backbone may be MobileNet [28, 66, 27, 74], SqueezeNet [31], or ShuffleNet [97, 53]. Heads come in two main varieties: one-stage and two-stage object detectors. The R-CNN [19] family, which comprises Fast R-CNN [18], Faster R-CNN [64], R-FCN [9], and Libra R-CNN [58], is the most typical two-stage detector; a two-stage detector can also be made anchor-free, as in RepPoints [87]. The most common one-stage object detectors are YOLO [61, 62, 63], SSD [50], and RetinaNet [45]. Recently, anchor-free one-stage detectors such as FCOS [78], CornerNet [37, 38], and CenterNet [13] have also been developed.

Modern object detectors usually also insert layers between the backbone and the head; these layers are typically used to collect feature maps from different stages, and can be called the neck of the detector. A neck often consists of several top-down and bottom-up paths; the Feature Pyramid Network (FPN) [44] and the Path Aggregation Network (PAN) are two networks built this way. In addition to the models described above, other researchers have focused on building new backbones (DetNet [43], DetNAS [7], HitDetector [20]) or entirely new layouts (SpineNet [12], for instance).

In summary, an ordinary object detector is made up of the following elements:

• Input: image, patches, image pyramid
• Backbone: VGG16 [68], ResNet-50 [26], SpineNet [12], EfficientNet-B0/B7 [75], CSPResNeXt50 [81], CSPDarknet53 [81]
• Neck: additional blocks such as SPP [25], ASPP [5], RFB [47], and SAM [85]; path-aggregation blocks such as FPN [44], PAN [49], NAS-FPN [17], fully-connected FPN, BiFPN [77], ASFF [48], and SFAM [98]
• Head, dense prediction (one-stage): RPN [64], SSD [50], YOLO [61], RetinaNet [45] (anchor based); FCOS [78], CornerNet [37], CenterNet [13], MatrixNet [60] (anchor free)
• Head, sparse prediction (two-stage): Faster R-CNN [64], R-FCN [9], Mask R-CNN [23] (anchor based); RepPoints [87] (anchor free)

2.2 Model re-parameterization

Model re-parameterization techniques merge several computational units into one at the inference stage [71, 31, 75, 19, 33, 11, 4, 24, 13, 12, 10, 29, 14, 78]. They can be regarded as a form of ensembling and divided into two groups: module-level and model-level. There are two common model-level re-parameterization practices for obtaining the final inference model. One is to train several identical models on different sets of training data and then average their weights; the other is to take a weighted average of the model weights at different iteration counts. Module-level re-parameterization has recently become a popular research topic.


This type of technique splits a module into several identical or different branches during training, and consolidates the multiple branches into a single equivalent module at inference time. However, not every proposed re-parameterized module can be applied directly to arbitrary architectures. We therefore developed a new re-parameterization module, along with techniques for applying it to various architectures.

2.3 Model scaling

Model scaling allows a model to be scaled up or down to fit different computing devices [72, 60, 74, 73, 15, 16, 2, 51]. Scaling methods typically use factors such as resolution (input image size), depth (number of layers), width (number of channels), and stage (number of feature pyramids) to strike a suitable balance between the number of parameters, the amount of computation, inference speed, and accuracy. Network architecture search (NAS) is a commonly used model scaling approach: without defining overly complicated rules, NAS can automatically search the search space for suitable scaling factors. Its drawback is that the search is computationally very expensive. In [15], the researchers instead analysed the relationship between scaling factors and the numbers of parameters and operations, so as to estimate the scaling factors directly and derive some of the rules needed for model scaling. From our review of the literature, almost all model scaling strategies examine each scaling factor independently, and even methods in the compound scaling category optimise the factors separately. This is because most popular NAS architectures deal with scaling factors that are not strongly correlated. We observed, however, that in every concatenation-based architecture, such as DenseNet [32] and VoVNet [39], changing the depth of the model also scales the input width of certain layers. Since the architecture proposed here is concatenation-based, a new compound scaling method had to be developed.

III. CONSTRUCTION

3.1 Extended efficient layer aggregation network

In most studies on designing efficient architectures, the key factors considered are the number of parameters, the amount of computation, and the computational density, with characteristics such as memory access cost taken as a starting point. In ODUELAN, the architecture of the transition layer is unchanged; only the design of the computational block is altered. We propose using group convolution to expand the channels and cardinality of the computational blocks: the same group parameter and channel multiplier are applied to every computational block of a computational layer. The feature maps produced by the computational blocks are then shuffled into g groups, according to the set group parameter g, and concatenated; the number of channels in each group of feature maps is then the same as in the original architecture. Finally, the g groups of feature maps are added together to merge the cardinality. Besides preserving the original ELAN design, E-ELAN can also guide different groups of computational blocks to learn more diverse features.

3.2 Model scaling for concatenation-based models

Model scaling is typically used to adjust certain model attributes and produce models of different sizes to support different inference speeds. The EfficientNet scaling model [72], for instance, considers width, depth, and resolution, while scaled-YOLOv4 [79] adjusts the number of stages. Dollár et al. [15] analysed the influence of vanilla and group convolution on the numbers of parameters and computations when scaling width and depth, and used the results to design a corresponding scaling method. These observations show that for a concatenation-based model the scaling factors cannot be analysed independently; they must be considered jointly.
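The expand/shuffle/merge procedure of Section 3.1 can be sketched as follows. This is an illustrative stand-in, not the actual ODUELAN code: the grouped computational blocks are omitted so that only the cardinality-merging step shows, and all shapes are hypothetical.

```python
import numpy as np

# E-ELAN merge-cardinality sketch: the expanded feature map is split into
# g groups, each with the original channel count, and the groups are added
# together so the output width matches the original architecture.

def merge_cardinality(features, g):
    c, h, w = features.shape
    assert c % g == 0, "expanded channels must be divisible by g"
    groups = features.reshape(g, c // g, h, w)  # shuffle into g groups
    return groups.sum(axis=0)                   # add groups to merge cardinality

x = np.random.rand(8, 4, 4)    # feature map expanded to g * 4 channels, g = 2
y = merge_cardinality(x, g=2)
print(y.shape)                 # (4, 4, 4)
```

In the real architecture each group would first pass through a computational block implemented with grouped convolutions; the merge step above is what returns the channel count to that of the original ELAN design.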


For instance, scaling up the depth of a concatenation-based block changes the ratio between the input and output channels of the transition layer that follows it, which may in turn reduce the model's hardware utilisation. We therefore need a dedicated compound scaling rule for concatenation-based models: when scaling the depth factor of a computational block, the change in that block's output channels must also be calculated, and the same amount of change applied to the width factor of the transition layers, as shown in Figure 3(c). The proposed compound scaling method preserves the properties the model had at its initial design and thus maintains the optimal structure.
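As a rough numeric illustration of this joint rule (hypothetical channel counts, not values from the paper): when a block's depth grows, its concatenated output width grows by the same ratio, and that ratio is propagated to the transition layer width.

```python
# Compound scaling sketch for a concatenation-based block. Each layer of the
# block contributes `base_width` concatenated channels, so scaling the depth
# scales the block's output width; the transition layer is scaled by the
# same ratio. All numbers here are illustrative assumptions.

def compound_scale(base_width, base_depth, depth_factor, transition_width):
    depth = round(base_depth * depth_factor)
    out_before = base_width * base_depth     # concatenated output, unscaled
    out_after = base_width * depth           # concatenated output, scaled
    ratio = out_after / out_before
    return depth, out_after, round(transition_width * ratio)

depth, out_w, trans_w = compound_scale(
    base_width=64, base_depth=4, depth_factor=1.5, transition_width=256)
print(depth, out_w, trans_w)  # 6 384 384
```

Scaling depth and transition width together, as above, keeps the input/output channel ratio of the transition layer fixed, which is exactly the property the text argues is lost when the factors are scaled independently.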
IV. TRAINABLE BAG-OF-FREEBIES

4.1 Planned re-parameterized convolution

Although RepConv [13] performs very well on VGG [68], its accuracy drops significantly when it is applied directly to ResNet [26], DenseNet [32], and other architectures. We analysed how re-parameterized convolution should be combined with different networks by examining the flow of gradients, and designed our planned re-parameterized convolution accordingly.

RepConv is a convolutional layer that combines a 3x3 convolution, a 1x1 convolution, and an identity connection. Analysing the performance of RepConv in combination with other architectures, we found that the identity connection in RepConv destroys the residual in ResNet and the concatenation in DenseNet, which otherwise provide a greater diversity of gradients for different feature maps. For these reasons, we build the structure of the proposed re-parameterized convolution using RepConv without the identity connection (RepConvN). In our view, there should be no identity connection when a re-parameterized convolution replaces a convolution that already carries a residual or concatenation connection. Figure 4 shows our "planned re-parameterized convolution" applied to PlainNet and ResNet; the convergence analysis of re-parameterized models based on residual and concatenation connections is presented in the ablation study section.

4.2 Coarse for auxiliary and fine for lead loss

Deep supervision is a technique widely used in training deep networks [38]. Its basic idea is to add extra auxiliary heads in the middle layers of the network, with an assistant loss guiding the shallow network weights. Deep supervision [70, 98, 67, 47, 82, 65, 86, 50] can significantly improve model performance on many tasks, even for architectures that usually converge well, such as ResNet [26] and DenseNet [32]. Figures 5(a) and (b) show the object detector architecture "without" and "with" deep supervision, respectively. In this paper, the head that generates the final output is referred to as the lead head, while the head that assists training is referred to as the auxiliary head.

Regardless of whether it concerns the auxiliary head or the lead head, deep supervision must be trained with respect to the target objectives. While exploring soft-label assigner strategies, we unexpectedly encountered a new derived problem, namely "how to assign soft labels to the lead head and the auxiliary head". As far as we are aware, there is no relevant literature on this subject yet. Figure 5(c) shows the currently most popular method, which separates the lead head and the auxiliary head and lets each use its own prediction results together with the ground truth to carry out label assignment. This study instead proposes a novel label assignment strategy that employs the lead head prediction to guide both the lead head and the auxiliary head: lead head prediction is used to generate coarse-to-fine hierarchical labels, which are used for lead head and auxiliary head learning respectively. The two proposed deep supervision label assignment strategies are shown in Figures 5(d) and (e).

The lead head guided label assigner takes the ground truth and the lead head's prediction outputs as its two main inputs, and uses optimisation to produce soft labels. This set of soft labels serves as the training target for both the lead head and the auxiliary head.
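The lead-head-guided assignment just described can be sketched as below. The scoring is a hypothetical simplification (the paper's assigner runs an optimisation; here the lead head's prediction, gated by ground truth positives, stands in for the soft label), and the coarse variant simply relaxes which grid cells count as positive.

```python
# Sketch of lead-head-guided soft labels (hypothetical scoring, not the
# paper's optimiser). Grid cells are 1-D for simplicity.

def soft_labels(lead_pred, gt_mask, coarse=False, radius=1):
    n = len(lead_pred)
    pos = {i for i, m in enumerate(gt_mask) if m}
    if coarse:
        # loosen the positive-sample constraint: neighbouring grid cells
        # also become positive, raising the auxiliary head's recall
        pos |= {j for i in pos
                for j in range(max(0, i - radius), min(n, i + radius + 1))}
    return [lead_pred[i] if i in pos else 0.0 for i in range(n)]

pred = [0.9, 0.8, 0.1, 0.7, 0.2]   # lead head objectness per grid cell
gt   = [0,   1,   0,   0,   0]     # one ground-truth positive cell
print(soft_labels(pred, gt))               # fine labels
print(soft_labels(pred, gt, coarse=True))  # coarse labels, wider positives
```

The fine labels keep only the true positive cell, while the coarse labels also admit its neighbours, matching the fine-for-lead, coarse-for-auxiliary split the section describes.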


Since the lead head has a relatively strong learning capability, the soft labels generated from it should better represent the distribution of, and correlation between, the source data and the targets. We can also view this learning as a form of generalised residual learning: by letting the shallower auxiliary head directly learn the information the lead head has already learned, the lead head is freer to focus on the residual information that has not yet been learned.

The coarse-to-fine lead head guided label assigner likewise uses both the ground truth and the lead head's predictions to generate soft labels. In this process, however, two different sets of soft labels are created: a coarse label set and a fine label set, where the fine labels are identical to the soft labels produced by the lead head guided label assigner. The coarse labels are generated by relaxing the constraints of the positive sample assignment process, allowing more grid cells to be treated as positive targets. The reason is that the learning capability of an auxiliary head is lower than that of the lead head, so to avoid losing the information that needs to be learned, we focus on optimising the recall of the auxiliary head; for the output of the lead head, we can then filter the high-precision results out of the high-recall results. We must take care, however, that if the additional weight of the coarse labels is close to that of the fine labels, a poor prior may result at final prediction. To keep exceptionally coarse positive grids from producing overly strong soft labels, we place restrictions on the decoder so that their influence is reduced. The mechanism described above allows the relative importance of fine and coarse labels to be adjusted dynamically during learning, and guarantees that fine labels always have a higher optimisable upper bound than coarse labels.

4.3 Other trainable bag-of-freebies

In this section we list some further trainable bag-of-freebies. These are freebies we used in training, although the original ideas are not ours; the training details are elaborated in the Appendix. They include: (1) Batch normalisation in conv-bn-activation topology: this connects a batch normalisation layer directly to a convolutional layer, so that at the inference stage the mean and variance of batch normalisation can be folded into the weights and bias of the convolutional layer. (2) Implicit knowledge in YOLOR [81] combined with a convolution feature map: implicit knowledge in YOLOR can be reduced to a vector by pre-computation at the inference stage, and this vector can be combined with the bias and weights of the preceding or following convolutional layer. (3) EMA model: mean teachers employ the EMA approach [75]; in our system, only the EMA model is used as the final inference model.
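Item (1) above, folding batch normalisation into the preceding convolution at inference time, can be sketched as follows. Shapes are simplified to one weight per output channel; all numbers are illustrative.

```python
import numpy as np

# Conv-BN fusion: after folding, fused_conv(x) equals BN(conv(x)).

def fuse_conv_bn(w, b, gamma, beta, mean, var, eps=1e-5):
    scale = gamma / np.sqrt(var + eps)
    return w * scale, (b - mean) * scale + beta

w = np.array([2.0, -1.0]);    b = np.array([0.5, 0.0])
gamma = np.array([1.0, 2.0]); beta = np.array([0.1, -0.2])
mean = np.array([0.4, 0.3]);  var = np.array([1.0, 4.0])

x = np.array([3.0, 5.0])                       # a toy per-channel input
bn_out = gamma * ((w * x + b) - mean) / np.sqrt(var + 1e-5) + beta
fw, fb = fuse_conv_bn(w, b, gamma, beta, mean, var)
print(np.allclose(fw * x + fb, bn_out))        # True
```

The fusion works because BN at inference is an affine map per channel, so its scale and shift can be absorbed into the convolution's weights and bias, removing the BN layer entirely from the deployed model.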
V. RESULT

A comparison of the results with several state-of-the-art object detectors is shown in the figure. Our ODUELAN models lie on the Pareto-optimal frontier and are faster and more accurate than the previously fastest and most accurate detectors. Because different approaches use GPUs of different architectures for run-time verification, we run ODUELAN on commonly used GPUs of the Maxwell, Pascal, and Volta architectures and compare it with the other state-of-the-art methods. Table 8 lists the frame rate comparison results on either the GTX Titan X (Maxwell) or the Tesla M40 GPU; Table 9 lists the frame-per-second comparison on Pascal GPUs, including the Titan X (Pascal), Titan Xp, GTX 1080 Ti, and Tesla P100; Table 10 compares frame rates on Volta GPUs, either a Titan Volta or a Tesla V100.

VI. CONCLUSION

In this study we present a new real-time object detection architecture together with a corresponding model scaling method. We also found that the process of building object detection systems generates new research questions: in the course of the investigation, we identified the replacement issue of the re-parameterized module and the allocation issue of dynamic label assignment. To address these issues and increase detection accuracy, we propose the trainable bag-of-freebies method.


Based on the above, we developed the state-of-the-art ODUELAN object detection system.

ACKNOWLEDGEMENTS

The authors acknowledge the National Centre for High-performance Computing (NCHC) for providing the computational and storage resources.

REFERENCES

[1] Irwan Bello, William Fedus, Xianzhi Du, Ekin Dogus Cubuk, Aravind Srinivas, Tsung-Yi Lin, Jonathon Shlens, and Barret Zoph. Revisiting ResNets: Improved training and scaling strategies. Advances in Neural Information Processing Systems (NeurIPS), 34, 2021.
[2] Alexey Bochkovskiy, Chien-Yao Wang, and Hong-Yuan Mark Liao. YOLOv4: Optimal speed and accuracy of object detection. arXiv preprint arXiv:2004.10934, 2020.
[3] Yue Cao, Thomas Andrew Geddes, Jean Yee Hwa Yang, and Pengyi Yang. Ensemble deep learning in bioinformatics. Nature Machine Intelligence, 2(9):500–508, 2020.
[4] Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-end object detection with transformers. In Proceedings of the European Conference on Computer Vision (ECCV), pages 213–229, 2020.
[5] Kean Chen, Weiyao Lin, Jianguo Li, John See, Ji Wang, and Junni Zou. AP-loss for accurate one-stage object detection. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 43(11):3782–3798, 2020.
[6] Zhe Chen, Yuchen Duan, Wenhai Wang, Junjun He, Tong Lu, Jifeng Dai, and Yu Qiao. Vision transformer adapter for dense predictions. arXiv preprint arXiv:2205.08534, 2022.
[7] Jiwoong Choi, Dayoung Chun, Hyun Kim, and Hyuk-Jae Lee. Gaussian YOLOv3: An accurate and fast object detector using localization uncertainty for autonomous driving. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 502–511, 2019.
[8] Xiyang Dai, Yinpeng Chen, Bin Xiao, Dongdong Chen, Mengchen Liu, Lu Yuan, and Lei Zhang. Dynamic head: Unifying object detection heads with attentions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 7373–7382, 2021.
[9] Xiaohan Ding, Honghao Chen, Xiangyu Zhang, Kaiqi Huang, Jungong Han, and Guiguang Ding. Re-parameterizing your optimizers rather than architectures. arXiv preprint arXiv:2205.15242, 2022.
[10] Xiaohan Ding, Yuchen Guo, Guiguang Ding, and Jungong Han. ACNet: Strengthening the kernel skeletons for powerful CNN via asymmetric convolution blocks. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 1911–1920, 2019.
[11] Xiaohan Ding, Xiangyu Zhang, Jungong Han, and Guiguang Ding. Diverse branch block: Building a convolution as an inception-like unit. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10886–10895, 2021.
[12] Xiaohan Ding, Xiangyu Zhang, Ningning Ma, Jungong Han, Guiguang Ding, and Jian Sun. RepVGG: Making VGG-style ConvNets great again. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 13733–13742, 2021.
[13] Xiaohan Ding, Xiangyu Zhang, Yizhuang Zhou, Jungong Han, Guiguang Ding, and Jian Sun. Scaling up your kernels to 31x31: Revisiting large kernel design in CNNs. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022.
[14] Piotr Dollár, Mannat Singh, and Ross Girshick. Fast and accurate model scaling. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 924–932, 2021.
[15] Xianzhi Du, Barret Zoph, Wei-Chih Hung, and Tsung-Yi Lin. Simple training strategies and model scaling for object detection. arXiv preprint arXiv:2107.00057, 2021.
[16] Chengjian Feng, Yujie Zhong, Yu Gao, Matthew R Scott, and Weilin Huang. TOOD: Task-aligned one-stage object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 3490–3499, 2021.
[17] Di Feng, Christian Haase-Schütz, Lars Rosenbaum, Heinz Hertlein, Claudius Glaeser, Fabian Timm, Werner Wiesbeck, and Klaus Dietmayer. Deep multi-modal object detection and semantic segmentation for autonomous driving: Datasets, methods, and challenges. IEEE Transactions on Intelligent Transportation Systems, 22(3):1341–1360, 2020.
[18] Timur Garipov, Pavel Izmailov, Dmitrii Podoprikhin, Dmitry P Vetrov, and Andrew G Wilson. Loss surfaces, mode connectivity, and fast ensembling of DNNs. Advances in Neural Information Processing Systems (NeurIPS), 31, 2018.
[19] Zheng Ge, Songtao Liu, Zeming Li, Osamu Yoshie, and Jian Sun. OTA: Optimal transport assignment for object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 303–312, 2021.

