
Volume 9, Issue 12, December 2024, International Journal of Innovative Science and Research Technology, ISSN No: 2456-2165

Detecting the Unseen: A Study of CNNs for Real Time Abnormal Event Detection

Zainab Khan¹, Varsha K², Uzaira Sahar³, Vaibhavi M.G⁴, Velvizhi Ramya R⁵ (Guide, Professor)
Department of CSE, AMC Engineering College, Bengaluru, India

Abstract: Abnormal event detection is a critical task in various domains, including surveillance, healthcare, and industrial monitoring. This paper explores the application of Convolutional Neural Networks (CNNs) for detecting abnormal events in dynamic environments. By leveraging CNNs' ability to extract spatial and temporal features, we primarily aim to enhance the accuracy and efficiency of anomaly detection. To validate our approach, we employed a comparative analysis of supervised and unsupervised learning techniques. Extensive experimentation on our datasets demonstrated that CNNs consistently outperformed traditional methods in identifying anomalies with higher precision and recall rates. The results highlight the potential of CNNs as a robust solution for abnormal event detection, effectively balancing computational efficiency and detection accuracy. Our findings underline the importance of integrating domain-specific insights and using advanced architectures to address the challenges in anomaly detection.

Keywords: Event Detection, CNN, K-means, Logistic Regression, Supervised, Unsupervised, Model, NLP, Image Recognition.

I. INTRODUCTION

AI Integration for Enhanced Detection
AI-driven event detection relies mainly on data collection and preprocessing. [1] Data is generally acquired from varied sources such as surveillance systems, IoT devices, social media feeds, sensors, and satellite imagery. This raw data is cleaned and standardized to ensure it is fit for analysis; high-quality, authenticated data is essential for accurate detection. Once prepared, the data is fed to machine learning (ML) algorithms that are built to identify patterns indicative of important events. Real-time analytics is a hallmark of AI-powered event detection. In contrast to conventional systems that examine data after an event has occurred, AI allows for immediate detection of events as they unfold. This capability is crucial in time-critical situations, such as emergency responses, where delays can lead to severe outcomes. Technologies like edge computing improve real-time processing by carrying out computations near the data origin, minimizing latency and enabling rapid responses. The rapid advancements in Artificial Intelligence (AI) have revolutionized anomaly detection, a critical task in applications such as surveillance, healthcare, industrial monitoring, and autonomous systems. Among AI techniques, Convolutional Neural Networks (CNNs) have emerged as a powerful tool due to their ability to automatically learn hierarchical features from complex datasets. Abnormal event detection aims to identify rare, unexpected, or suspicious patterns in data, which often lack explicit labels and are difficult to capture using traditional methods.

II. TRANSFORMATION OF REAL TIME INSIGHTS IN THE REAL WORLD

AI-driven event detection is transforming industries by addressing critical challenges and unlocking new possibilities. In public safety, AI-powered surveillance systems analyze video feeds to detect suspicious activities or potential threats. By identifying unusual behavior in real time, these systems enhance security and prevent incidents before they escalate.

In healthcare, wearable devices and medical sensors continuously monitor patient data. AI algorithms process this data to detect critical health events, such as irregular heart rhythms or seizures, enabling timely interventions. This not only saves lives but also improves the quality of care and patient outcomes.

Environmental monitoring is another area where AI has made a profound impact. By analyzing data from satellites and sensors, AI systems detect natural events like earthquakes, floods, and wildfires. Early detection facilitates swift responses, minimizing damage and aiding in disaster management. Similarly, in finance, AI models monitor transactions and market trends to detect fraud or anomalies, ensuring the security and stability of financial systems.

Fig 1 Insights Representation

III. DRIVING AI INTEGRATION – KEY TECHNOLOGIES USED

Event detection relies on a combination of advanced technologies to achieve its capabilities. Machine learning is central to recognizing patterns and anomalies within data. Supervised learning models excel in detecting predefined events, such as mechanical failures in industrial equipment, by learning from labelled datasets. On the other hand, unsupervised learning models identify outliers or unexpected occurrences, making them invaluable for uncovering rare or new events. In applications involving textual data, natural language processing (NLP) plays a crucial role.

NLP algorithms analyze language patterns, sentiment, and context to extract meaningful insights. For instance, during natural disasters, NLP can process social media posts to identify early warnings, assess damage, and coordinate response efforts. This ability to analyze unstructured text data significantly expands the scope of event detection. Predictive analytics is another significant advancement enabled by AI. By analyzing historical data, AI models can forecast future events, empowering industries to take proactive measures. Predicting disease outbreaks, financial market fluctuations, or equipment malfunctions are just a few examples of how predictive capabilities enhance decision-making and preparedness.

IV. HARNESSING SUPERVISED LEARNING MODELS FOR ABNORMAL EVENTS

A. KNN (K Nearest Neighbours)
KNN is a simple, non-parametric, supervised learning algorithm. It classifies data based on the majority class of the 'K' nearest points to a new data point in the feature space. The distance between data points is typically measured using Euclidean distance or other distance metrics like Manhattan distance.

Application in Abnormal Event Detection:
[3] Detecting Abnormal Events: In anomaly detection, KNN works by comparing the distance between a new data point and its nearest neighbours. If the distance is significantly larger than that of most other data points, the new point can be classified as an anomaly. In other words, an anomaly is detected when it does not resemble the majority of data in the feature space.
Fig 2 KNN Train-Test Code
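To complement Fig 2, a minimal sketch of such a KNN train-test workflow, assuming a scikit-learn implementation and synthetic stand-in data (the features, labels, and variable names are illustrative, not the dataset used in the experiments), could look as follows:

```python
# Minimal KNN anomaly-detection sketch (illustrative; not the exact code in Fig 2).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import classification_report

rng = np.random.default_rng(42)
# Synthetic data: 0 = normal events (common), 1 = abnormal events (rare).
normal = rng.normal(loc=0.0, scale=1.0, size=(950, 8))
abnormal = rng.normal(loc=4.0, scale=1.5, size=(50, 8))
X = np.vstack([normal, abnormal])
y = np.array([0] * 950 + [1] * 50)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# K nearest neighbours with Euclidean distance (scikit-learn's default metric).
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)

y_pred = knn.predict(X_test)
print(classification_report(y_test, y_pred, target_names=["normal", "abnormal"]))
```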

Advantages:
• Simple and Intuitive: KNN is easy to understand and implement, and it doesn't require a complex training phase.

Challenges:
• Computational Complexity: KNN requires storing the entire dataset and computing distances for every prediction, which makes it inefficient for large datasets.

B. Support Vector Machines (SVMs)
[4] SVM is a supervised learning algorithm used for classification. It finds a hyperplane that best separates different classes in the feature space, maximizing the margin between the classes. In the context of anomaly detection, the SVM algorithm separates normal and abnormal data by constructing a hyperplane that best distinguishes normal data points from abnormal ones.

Application in Abnormal Event Detection:
• One-Class SVM: A variation of SVM, known as one-class SVM, is specifically used for anomaly detection. It learns a boundary around the normal data points and considers any data point that lies outside this boundary as abnormal.


Fig 3 SVM Graph Representation

Advantages:
• Effective in High-Dimensional Data: SVM works well in cases where the data is high-dimensional and non-linear. It uses kernel functions to transform data into higher dimensions for better separation.

Challenges:
• Parameter Tuning: The performance of SVM is highly sensitive to the choice of kernel, regularization parameter, and other hyperparameters, requiring careful tuning.

Fig 4 SVM Train-Test Code
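To complement Fig 4, a compact sketch of the one-class SVM variant described above, assuming scikit-learn and synthetic stand-in data (not the experimental dataset), is:

```python
# One-Class SVM sketch: learn a boundary around normal data only (illustrative).
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X_train_normal = rng.normal(0.0, 1.0, size=(800, 8))      # training: normal events only
X_test = np.vstack([rng.normal(0.0, 1.0, size=(190, 8)),   # mostly normal points...
                    rng.normal(5.0, 1.0, size=(10, 8))])   # ...plus a few anomalies

scaler = StandardScaler().fit(X_train_normal)

# nu bounds the fraction of training points allowed to fall outside the boundary.
ocsvm = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05)
ocsvm.fit(scaler.transform(X_train_normal))

# +1 = inside the learned boundary (normal), -1 = outside (abnormal).
pred = ocsvm.predict(scaler.transform(X_test))
print("flagged abnormal:", int((pred == -1).sum()), "of", len(X_test))
```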

C. Decision Trees
A Decision Tree is a tree-like model where each internal node represents a feature, and each branch represents a decision rule. The leaf nodes represent class labels or output values. The tree is built recursively by selecting the feature that provides the most significant information gain or minimizes impurity (e.g., Gini impurity) at each split.


Fig 5 Decision Tree Graph

Application in Abnormal Event Detection:
Decision Trees can classify events into normal or abnormal categories based on a series of feature-based decisions. For example, in financial fraud detection, a decision tree might use features like transaction amount, location, and frequency of transactions to classify whether an event is normal or abnormal.

Advantages:
• Interpretability: Decision Trees are highly interpretable and provide clear, human-readable decision-making paths.
• Handling Mixed Data Types: Decision Trees can handle both continuous and categorical features, making them versatile for different types of datasets.

Challenges:
• Overfitting: Decision Trees can easily overfit to noisy data, especially when the tree is very deep, leading to poor generalization.

Fig 6 Decision Tree Train-Test Code
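To complement Fig 6, the following sketch illustrates the fraud-style example from this section with a scikit-learn decision tree; the transaction features and the labelling rule are hypothetical, used only to show the train-test flow:

```python
# Decision-tree sketch for a fraud-style example (illustrative features and labels).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
n = 1000
# Hypothetical transaction features: amount, distance from home (km), transactions per day.
amount = rng.exponential(50, n)
distance = rng.exponential(5, n)
frequency = rng.poisson(3, n)
X = np.column_stack([amount, distance, frequency])
# Illustrative rule: very large amounts or unusually distant transactions are "abnormal".
y = ((amount > 200) | (distance > 20)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=1)

# max_depth limits tree depth to reduce the overfitting risk noted above.
tree = DecisionTreeClassifier(max_depth=4, class_weight="balanced", random_state=1)
tree.fit(X_train, y_train)

print("test accuracy:", tree.score(X_test, y_test))
print(export_text(tree, feature_names=["amount", "distance", "frequency"]))
```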

D. Logistic Regression
Logistic Regression is a statistical model used for binary classification. It models the probability of a binary outcome (0 or 1) using a logistic (sigmoid) function, which outputs values between 0 and 1. The model estimates the relationship between the input features and the probability of a certain outcome.

Application in Abnormal Event Detection:
Logistic Regression can classify events as normal (0) or abnormal (1) based on feature input.

Advantages:
• Simplicity and Efficiency: Logistic Regression is easy to implement, computationally efficient, and suitable for large datasets.
• Probability Output: Unlike some other classifiers, Logistic Regression provides probability estimates, which can be useful for ranking predictions or setting thresholds for detection.

Challenges:
• Linear Relationships: Logistic Regression assumes a linear relationship between input features and the output, which may not capture complex patterns in the data.


Fig 7 Logistic Regression Train-Test Code
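To complement Fig 7, a minimal logistic regression sketch, assuming scikit-learn and synthetic features (the feature weights and threshold are illustrative), is:

```python
# Logistic-regression sketch: probability that an event is abnormal (illustrative).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 6))
# Synthetic rule: a linear score plus noise, thresholded, defines the "abnormal" label.
score = X @ np.array([1.5, -2.0, 0.0, 1.0, 0.5, -1.0]) + rng.normal(0, 0.5, 1000)
y = (score > 2.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=2)

clf = LogisticRegression(class_weight="balanced", max_iter=1000)
clf.fit(X_train, y_train)

# predict_proba gives the probability estimates noted under "Advantages"; the
# decision threshold (0.5 here) can be tuned to trade precision against recall.
proba = clf.predict_proba(X_test)[:, 1]
y_pred = (proba >= 0.5).astype(int)
print("precision:", precision_score(y_test, y_pred),
      "recall:", recall_score(y_test, y_pred))
```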

V. EXPLORING UNSUPERVISED MODELS FOR EVENT DETECTION

A. K-Means
K-means is an unsupervised learning algorithm used for clustering. It partitions the data into 'K' clusters by minimizing the within-cluster variance (sum of squared distances from points to their centroids). [5] K-means assigns each data point to the nearest cluster center and iteratively refines the centroids until convergence.

Application in Abnormal Event Detection:
Detecting Abnormal Events: K-means can be used to cluster normal events together. Any data point that does not fit well into any cluster or lies far from the cluster centroids is flagged as an anomaly.

Fig 8 K-means Train-Test Code
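To complement Fig 8, a short sketch of the distance-to-centroid idea described above, assuming scikit-learn and synthetic clusters (the percentile-based threshold is an illustrative choice, not the method used in the experiments), follows:

```python
# K-means sketch: flag points far from their nearest centroid as anomalies (illustrative).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
normal = np.vstack([rng.normal(0, 1, (300, 4)), rng.normal(6, 1, (300, 4))])
anomalies = rng.uniform(-10, 16, (10, 4))
X = np.vstack([normal, anomalies])

# K-means++ initialization (the mitigation noted under "Challenges") is used here.
kmeans = KMeans(n_clusters=2, init="k-means++", n_init=10, random_state=3)
kmeans.fit(X)

# Distance of each point to its assigned centroid; large distances are suspicious.
dist = np.linalg.norm(X - kmeans.cluster_centers_[kmeans.labels_], axis=1)
threshold = np.percentile(dist, 98)          # simple percentile-based cut-off
flagged = np.where(dist > threshold)[0]
print("flagged indices:", flagged)
```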

Advantages:
• Scalability: K-means is efficient for large datasets and can handle high-dimensional data when combined with dimensionality reduction techniques like PCA (Principal Component Analysis).
• Simplicity: K-means is easy to implement and has a relatively low computational cost compared to other clustering methods.

Challenges:
• Choice of K: The number of clusters (K) must be pre-specified, and selecting an optimal K can be challenging. Methods like the elbow method or silhouette score can help, but they still require experimentation.
• [6] Sensitivity to Initialization: K-means is sensitive to the initial placement of centroids, and poor initialization can lead to suboptimal clustering results. To mitigate this, techniques like K-means++ can be used for better initialization.

B. Autoencoders
Autoencoders are commonly used for unsupervised learning, meaning they can learn patterns in data without labelled examples. For abnormal event detection, autoencoders are trained primarily on normal data. Once trained, they can identify abnormal events by measuring the reconstruction error. [7] A large reconstruction error indicates that the model was unable to reconstruct an input correctly, implying that the input is significantly different from the learned normal behaviour.

Application in Abnormal Event Detection:
Detecting unusual movements, activities, or behaviours in video streams. For example, autoencoders can flag suspicious behaviour such as trespassing or a fight by identifying abnormal events based on high reconstruction error.

Fig 9 Autoencoder Train-Test Code
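To complement Fig 9, a small dense-autoencoder sketch of the reconstruction-error idea, assuming TensorFlow/Keras and synthetic tabular data rather than actual video frames (layer sizes and the 99th-percentile threshold are illustrative), is shown below:

```python
# Autoencoder sketch: train on normal data, flag high reconstruction error (illustrative).
import numpy as np
from tensorflow.keras import layers, models

rng = np.random.default_rng(4)
X_normal = rng.normal(0, 1, (2000, 32)).astype("float32")        # "normal" training data
X_test = np.vstack([rng.normal(0, 1, (95, 32)),                  # mostly normal points...
                    rng.normal(5, 1, (5, 32))]).astype("float32")  # ...plus anomalies

# Small dense autoencoder: 32 -> 8 (latent) -> 32.
autoencoder = models.Sequential([
    layers.Input(shape=(32,)),
    layers.Dense(16, activation="relu"),
    layers.Dense(8, activation="relu"),      # compressed latent representation
    layers.Dense(16, activation="relu"),
    layers.Dense(32, activation="linear"),
])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X_normal, X_normal, epochs=20, batch_size=64, verbose=0)

# Reconstruction error per sample; errors far above those seen on normal data => abnormal.
train_err = np.mean((X_normal - autoencoder.predict(X_normal, verbose=0)) ** 2, axis=1)
test_err = np.mean((X_test - autoencoder.predict(X_test, verbose=0)) ** 2, axis=1)
threshold = np.percentile(train_err, 99)
print("flagged abnormal:", int((test_err > threshold).sum()), "of", len(X_test))
```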

Advantages:
• Versatility: Autoencoders can work with various types of data, including time-series data, images, and sensor readings. This makes them suitable for diverse applications, such as anomaly detection in images, video, and continuous sensor streams.
• Dimensionality Reduction: Autoencoders automatically learn a compressed, lower-dimensional representation of the input data in the latent space, which can help simplify the detection process by focusing on the most relevant features.

Challenges:
• Imbalanced Data: In many real-world scenarios, abnormal events are rare compared to normal data. This imbalance can make training difficult, as the model may struggle to generalize. Techniques like data augmentation or anomaly sampling help, but the imbalance remains.

VI. EXPLORING CNN-BASED APPROACHES IN DETECTION: A COMPREHENSIVE ANALYSIS

CNNs are a class of deep learning algorithms designed to process grid-like data, such as images and videos, by simulating the way human visual perception works. [8] CNNs are effective at automatically learning spatial hierarchies of features from data without requiring manual feature extraction, making them ideal for tasks like detecting abnormal events in unstructured data such as images and videos.

A. Key Reasons CNNs are Preferred for Abnormal Event Detection:

• Automatic Feature Extraction: CNNs automatically learn features (edges, textures, shapes, patterns) from raw data, reducing the need for manual feature engineering.
• Translation Invariance: CNNs are robust to the location of features within an input, which is particularly useful when abnormal events may appear in different parts of a video or image.
• High Performance on Image and Video Data: CNNs are capable of learning and detecting patterns in high-dimensional data like images or video frames, which are commonly used in surveillance or healthcare applications.
• Deep Learning Capabilities: CNNs can learn complex relationships between data points by utilizing multiple layers, which helps in distinguishing between normal and abnormal patterns.


Fig 10 Architecture of CNN [11]

B. Approach to Abnormal Event Detection Using CNN

Data Pre-processing and Preparation
• Video Frames or Image Sequences: In applications like surveillance, abnormal event detection typically involves analyzing video frames. CNNs are used to process these individual frames (or a sequence of frames) to detect anomalies.
• Normalization: Data is often normalized to ensure uniformity in input values, making it easier for the CNN to learn patterns.
• Augmentation: Data augmentation (like flipping, rotating, or scaling) can be used to artificially increase the size of the training dataset, making the CNN model more robust to various transformations.

CNN Architecture for Abnormal Event Detection
• [10] Convolutional Layers: These layers apply convolutional filters to the input data, extracting features like edges, textures, and shapes. For abnormal event detection, these features can help the CNN recognize unusual patterns or behaviours in the data.
• Pooling Layers: Pooling (such as max pooling) is used to downsample the feature maps, reducing dimensionality and computational complexity while retaining important features.
• [9] Fully Connected Layers: After extracting relevant features, the fully connected layers at the end of the network map the features to the final decision (normal vs. abnormal).
• Recurrent CNN: For video-based abnormal event detection, a recurrent CNN or a combination of CNN and Long Short-Term Memory (LSTM) networks can be employed. This architecture is effective in capturing temporal dependencies between consecutive frames and understanding the dynamics of abnormal events.

Training the CNN Model
• Loss Function: A key part of training a CNN is choosing the right loss function. In binary classification tasks, the model may use cross-entropy loss to minimize the difference between predicted and actual labels. For anomaly detection, the goal is to minimize false negatives (detecting abnormal events as normal).
• Optimization: Optimizers such as Adam or SGD (Stochastic Gradient Descent) are used to adjust the weights of the network to minimize the loss during training.

Evaluation and Performance Metrics
• Accuracy: Accuracy measures how often the model correctly classifies normal and abnormal events. However, in the case of abnormal event detection, accuracy might not be the best measure due to class imbalance (normal events often outnumber abnormal events).
• Precision and Recall: These metrics are important for evaluating how well the model detects abnormal events. High precision means fewer false positives, and high recall means fewer false negatives.
• F1-Score: The F1-score is the harmonic mean of precision and recall and is often used when there is an imbalance in the dataset (i.e., when abnormal events are much less frequent than normal events).


Fig 11 CNN Detection Code
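To complement Fig 11, the following condensed sketch shows a frame-level CNN classifier of the kind described in Section VI, assuming TensorFlow/Keras; the input shape, layer sizes, and synthetic data are illustrative rather than the exact configuration used in the experiments:

```python
# Frame-level CNN sketch for normal/abnormal classification (illustrative).
# Real projects would load labelled video frames instead of random arrays.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

rng = np.random.default_rng(5)
# Synthetic stand-in for 64x64 grayscale frames; labels: 0 = normal, 1 = abnormal.
X = rng.random((500, 64, 64, 1)).astype("float32")
y = rng.integers(0, 2, 500)

model = models.Sequential([
    layers.Input(shape=(64, 64, 1)),
    layers.Conv2D(16, (3, 3), activation="relu"),   # low-level features: edges, textures
    layers.MaxPooling2D((2, 2)),                    # downsample feature maps
    layers.Conv2D(32, (3, 3), activation="relu"),   # higher-level patterns
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.5),                            # regularization against overfitting
    layers.Dense(1, activation="sigmoid"),          # outputs P(abnormal)
])

model.compile(optimizer="adam",
              loss="binary_crossentropy",           # cross-entropy loss, as in the text
              metrics=["accuracy", tf.keras.metrics.Precision(), tf.keras.metrics.Recall()])

history = model.fit(X, y, validation_split=0.2, epochs=5, batch_size=32, verbose=0)
print({k: round(v[-1], 3) for k, v in history.history.items()})
```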


Fig 12 Metrics Representation of CNN

Fig 13 Accuracy Plot Graph

Fig 14 Loss Plot Graph
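To complement Figs 12-14, the sketch below shows how a classification report and accuracy/loss curves of this kind are typically produced, assuming the `model` and `history` objects from the CNN sketch above together with scikit-learn and matplotlib:

```python
# Sketch: classification report plus accuracy/loss curves from a Keras training history.
# Assumes `model`, `history`, `X`, `y` from the previous CNN sketch.
import matplotlib.pyplot as plt
from sklearn.metrics import classification_report, f1_score

y_prob = model.predict(X, verbose=0).ravel()
y_pred = (y_prob >= 0.5).astype(int)
print(classification_report(y, y_pred, target_names=["normal", "abnormal"]))
print("F1-score:", f1_score(y, y_pred))

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot(history.history["accuracy"], label="train")
ax1.plot(history.history["val_accuracy"], label="validation")
ax1.set_title("Accuracy"); ax1.set_xlabel("epoch"); ax1.legend()

ax2.plot(history.history["loss"], label="train")
ax2.plot(history.history["val_loss"], label="validation")
ax2.set_title("Loss"); ax2.set_xlabel("epoch"); ax2.legend()
plt.tight_layout()
plt.show()
```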

C. Challenges in Using CNN for Abnormal Event Detection

• Data Imbalance: In most real-world scenarios, abnormal events are rare, which leads to an imbalanced dataset. This can cause the model to be biased toward predicting normal events. Techniques like oversampling, undersampling, or using anomaly-specific loss functions can help address this.
• Complexity of Temporal Patterns: In video-based detection, CNNs alone might not be enough to capture the temporal dependencies between frames. Combining CNNs with RNNs (Recurrent Neural Networks) or LSTMs can help overcome this limitation by learning the sequential relationships in the data.
• Computational Cost: Training deep CNN models can be computationally expensive, requiring significant processing power and memory, especially for large datasets like high-resolution video.

D. CNN Model Optimisation for Better Accuracy
The performance of a Convolutional Neural Network (CNN) can be significantly improved through careful optimization. CNN optimization involves fine-tuning the architecture, training process, and hyperparameters to enhance accuracy while minimizing overfitting and computational costs. Below are key techniques and strategies for optimizing CNNs to achieve better accuracy:

Network Architecture Optimization
• Depth and Width of the Network: Increasing the number of layers (depth) or neurons in each layer (width) allows the network to learn more complex features. However, excessive depth can lead to vanishing gradients, making training difficult. Techniques like batch normalization or ResNet's skip connections can address this.
• Convolutional Kernel Size: Experimenting with different kernel sizes (e.g., 3x3, 5x5) can help the model capture different levels of feature granularity. Smaller kernels are often combined in deeper architectures for better feature extraction.
• Pooling Layers: Using max pooling or average pooling layers reduces spatial dimensions while retaining essential features. Adaptive pooling can dynamically adjust the pooling size based on input dimensions.

Hyperparameter Tuning
• Learning Rate Adjustment: The learning rate is crucial for model convergence. Using learning rate schedulers (e.g., step decay, exponential decay) or adaptive optimizers (e.g., Adam, RMSprop) can improve model performance.
• Batch Size: Smaller batch sizes can improve generalization, while larger batch sizes make training faster. A balance between the two is essential.
• Weight Initialization: Proper weight initialization prevents issues like exploding or vanishing gradients and ensures smooth training.

Data Augmentation
• Increase Training Data Variability: Techniques like random cropping, flipping, rotation, zooming, and color jittering introduce variability in the training data, helping the model generalize better.
• Synthetic Data: Generate synthetic data using GANs or other techniques to augment datasets for rare or underrepresented classes.

Regularization Techniques
• Dropout: Randomly deactivating a fraction of neurons during training prevents overfitting by ensuring the network doesn't rely too heavily on specific neurons.
• Weight Decay: Adding an L2 regularization term to the loss function penalizes large weights, reducing overfitting.

Transfer Learning
• Pre-trained Models: Fine-tuning pre-trained models like VGG, ResNet, or Inception on your dataset can leverage learned features and improve accuracy with less training data (a short sketch follows this list).
• Layer Freezing: Freeze the lower layers (feature extractors) of a pre-trained model and train only the top layers to adapt to your specific problem.

Fine-Tuning the Loss Function
• Custom Loss Functions: In some applications, standard loss functions like cross-entropy may not be ideal. Custom loss functions, tailored to specific tasks, can improve accuracy.
• Class Imbalance Handling: Use weighted loss functions or techniques like focal loss to handle imbalanced datasets effectively.

Improving Training Efficiency
• Batch Normalization: Normalizing intermediate layers reduces internal covariate shift, allowing for faster and more stable training.
• Gradient Clipping: For high-dimensional data, gradient clipping ensures stability by capping gradient values during backpropagation.

Model Validation and Testing
• Cross-Validation: Use techniques like k-fold cross-validation to assess model performance across different subsets of data.

Visualization and Debugging
• Feature Visualization: Visualize intermediate layers and feature maps to ensure the model is learning meaningful features.
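To make the augmentation and transfer-learning points above concrete, the sketch below assumes TensorFlow/Keras with a MobileNetV2 backbone chosen purely as an example; the image size, layer choices, and hyperparameters are illustrative, not a prescription:

```python
# Sketch: on-the-fly augmentation + frozen pre-trained backbone for abnormal-event frames.
import tensorflow as tf
from tensorflow.keras import layers, models

# Augmentation layers: flips, small rotations, and zooms of the input frames.
augment = models.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),
    layers.RandomZoom(0.1),
])

# Pre-trained backbone (downloads ImageNet weights on first use), frozen as a feature extractor.
base = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), include_top=False, weights="imagenet")
base.trainable = False  # layer freezing: keep the learned low-level features

inputs = layers.Input(shape=(96, 96, 3))
x = augment(inputs)
x = layers.Rescaling(1.0 / 127.5, offset=-1.0)(x)   # MobileNetV2 expects inputs in [-1, 1]
x = base(x, training=False)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dropout(0.3)(x)                          # regularization on the new head
outputs = layers.Dense(1, activation="sigmoid")(x)  # normal vs. abnormal head

model = models.Model(inputs, outputs)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```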

VII. DISCUSSION

Why CNNs Outperform Traditional Anomaly Detection Models
[12] Convolutional Neural Networks (CNNs) have become a dominant approach for anomaly detection, especially in domains involving high-dimensional and unstructured data such as images, videos, and spatial sensor data. This success is largely due to their ability to automatically extract and learn hierarchical features from data.

Additionally, the integration of Artificial Intelligence (AI) technologies has significantly enhanced the performance and adaptability of CNNs, making them far more effective than traditional anomaly detection models.

A. Why CNNs are More Efficient:

• Superior Feature Extraction: Traditional models such as KNN, SVM, and Decision Trees rely on handcrafted features or predefined input representations. These features are often limited in their ability to capture complex patterns in high-dimensional data. CNNs, on the other hand, automatically extract features at multiple levels of abstraction. Convolutional layers detect simple patterns like edges, while deeper layers identify complex structures, enabling robust anomaly detection without manual intervention.
• Scalability and High-Dimensional Data Handling: Traditional models struggle with high-dimensional data due to the curse of dimensionality, where performance deteriorates as input size increases. CNNs excel in processing high-dimensional data, such as images and videos, by reducing dimensionality through pooling layers and leveraging hierarchical feature extraction.
• Contextual Understanding: Traditional models lack the ability to understand spatial or temporal context, which is crucial for anomaly detection in structured data. CNNs inherently preserve spatial relationships within data, allowing them to detect subtle anomalies that depend on spatial patterns or temporal correlations.
• Adaptability: CNNs are highly adaptable to different types of data and tasks. For instance, they can handle image, video, and even time-series anomaly detection with minimal changes to architecture. Traditional models often need extensive modifications or domain-specific tuning to handle different data types effectively.
• Generalization: Traditional models often overfit small datasets, making them unsuitable for real-world anomaly detection tasks. With techniques like data augmentation and dropout, CNNs generalize well even on limited datasets, making them more robust in diverse environments.
• Performance Metrics: CNNs consistently outperform traditional models on key metrics such as accuracy, precision, recall, and F1 score in anomaly detection tasks, especially when dealing with complex datasets.

B. Use Cases Where CNNs Excel

• [12] Video Surveillance: Detecting suspicious activities in crowded places.
• Healthcare: Identifying anomalies in medical imaging (e.g., tumor detection in MRI scans).
• Industrial Monitoring: Detecting faults in machinery through vibration or thermal imaging.
• Cybersecurity: Spotting unusual patterns in network traffic or user behaviour.

VIII. CONCLUSION

In this research paper, we have explored and demonstrated the applications of various supervised and unsupervised learning algorithms for detecting abnormal events. Through a detailed analysis of algorithms such as K-Nearest Neighbours (KNN), Support Vector Machines (SVM), Decision Trees, Logistic Regression, K-Means Clustering, and Autoencoders, we assessed their effectiveness in various domains, including anomaly detection in images, sensor data, and time-series data.

While each algorithm presented certain strengths, we concluded that Convolutional Neural Networks (CNNs) stand out as the most effective and suitable approach for abnormal event detection in complex datasets. CNNs are particularly adept at recognizing intricate patterns in high-dimensional data, such as images or video streams, due to their ability to learn hierarchical features automatically.

Through extensive experimentation and the use of practical examples, we demonstrated the superior performance of CNNs in identifying abnormal events. Our code implementations and visualizations clearly highlighted how CNNs outperformed other models in terms of accuracy and robustness, particularly when handling spatial data like images.

The flexibility of CNNs, combined with their capacity for automatic feature extraction, makes them a highly effective tool for anomaly detection.

Overall, our research confirms that CNNs offer a significant advantage over traditional machine learning models, making them the optimal choice for complex event detection tasks, especially in domains where data is high-dimensional and contains intricate patterns.

Future research could explore further optimizations to
CNN architectures and their applications to even more
diverse anomaly detection problems.

ACKNOWLEDGEMENTS

We are deeply thankful to Dr. V Mareeswari Prasanna for her continuous support and encouragement. Her leadership and mentorship have provided us with the necessary resources and environment to pursue academic excellence.

Additionally, we extend our thanks to the participants and organizations who generously shared their time and insights, without whom this research would not have been possible.

REFERENCES

[1]. Abnormal Event Detection in Crowded Places, by Cong, Yang; Yuan, Junsong; Liu, Ji.
[2]. Abnormal Event Detection in Crowded Places, by Cong, Yang; Yuan, Junsong; Liu, Ji.
[3]. Margin Learning Embedded Prediction for Anomaly Detection, by Wen Liu, Weixin Luo, Zhenxgin Li, Pein Zhao, Shenghua Gao; Shanghai Tech University, Tencent AI Lab.
[4]. Margin Learning Embedded Prediction for Anomaly Detection, by Wen Liu, Weixin Luo, Zhenxgin Li, Pein Zhao, Shenghua Gao; Shanghai Tech University, Tencent AI Lab.
[5]. Unsupervised Clustering Methods for Identifying Rare Events, by Witcha Chimphlee, Abdul Hanan, Mohd Noor, Siripon Chimphlee, Surat.
[6]. Unsupervised Clustering Methods for Identifying Rare Events, by Witcha Chimphlee, Abdul Hanan, Mohd Noor, Siripon Chimphlee, Surat.
[7]. Event Detection in Videos Using Spatiotemporal Autoencoder, by Yong Shean Chong.
[8]. CNN and 1-Class Event Classifier, in 8th International Conference on Imaging, by Samir Buoindour, Mazen Hittawe, Sandy, Hichem Snoussi.
[9]. CNN and 1-Class Event Classifier, in 8th International Conference on Imaging, by Samir Buoindour, Mazen Hittawe, Sandy, Hichem Snoussi.
[10]. CNN and 1-Class Event Classifier, in 8th International Conference on Imaging, by Samir Buoindour, Mazen Hittawe, Sandy, Hichem Snoussi.
[11]. Image Architecture of Convolutional Neural Networks, by K. Eswaran, ResearchGate.
[12]. On Identifying Leaves: A Comparison of CNN with Classical ML Methods, by Abbas Hedjazi, Ikram Kourban, Yakup Genc.
[13]. On Identifying Leaves: A Comparison of CNN with Classical ML Methods, by Abbas Hedjazi, Ikram Kourban, Yakup Genc.

