

Varshith Sai Gali


Scientific Writing
Dr. Vishal Baloria

B.Tech 2020

BML Munjal University

Document Details

Submission ID: trn:oid:::1:2776243122
Submission Date: Dec 7, 2023, 8:53 PM GMT+5:30
Download Date: Dec 7, 2023, 9:29 PM GMT+5:30
File Name: IEEE_Paper_Format.pdf
File Size: 3.4 MB
Pages: 10
Words: 5,219
Characters: 30,235

AI Writing Overview

How much of this submission has been generated by AI?

36% of qualifying text in this submission has been determined to be generated by AI.

Caution: Percentage may not indicate academic misconduct. Review required. It is essential to understand the limitations of AI detection before making decisions about a student's work. We encourage you to learn more about Turnitin's AI detection capabilities before using the tool.

Frequently Asked Questions

What does the percentage mean?


The percentage shown in the AI writing detection indicator and in the AI writing report is the amount of qualifying text within the
submission that Turnitin's AI writing detection model determines was generated by AI.

Our testing has found that there is a higher incidence of false positives when the percentage is less than 20. In order to reduce the
likelihood of misinterpretation, the AI indicator will display an asterisk for percentages less than 20 to call attention to the fact that
the score is less reliable.

However, the final decision on whether any misconduct has occurred rests with the reviewer/instructor. They should use the
percentage as a means to start a formative conversation with their student and/or use it to examine the submitted assignment in
greater detail according to their school's policies.

What does 'qualifying text' mean?


Our model only processes qualifying text in the form of long-form writing. Long-form writing means individual sentences contained in paragraphs that make up a
longer piece of written work, such as an essay, a dissertation, or an article, etc. Qualifying text that has been determined to be AI-generated will be highlighted blue
on the submission text.

Non-qualifying text, such as bullet points, annotated bibliographies, etc., will not be processed and can create disparity between the submission highlights and the
percentage shown.

How does Turnitin's indicator address false positives?


Sometimes false positives (incorrectly flagging human-written text as AI-generated) can include lists without a lot of structural variation, text that literally repeats
itself, or text that has been paraphrased without developing new ideas. If our indicator shows a higher amount of AI writing in such text, we advise you to take that
into consideration when looking at the percentage indicated.

In a longer document with a mix of authentic writing and AI generated text, it can be difficult to exactly determine where the AI writing begins and original writing
ends, but our model should give you a reliable guide to start conversations with the submitting student.

Disclaimer
Our AI writing assessment is designed to help educators identify text that might be prepared by a generative AI tool. Our AI writing assessment may not always be accurate (it may misidentify
both human and AI-generated text) so it should not be used as the sole basis for adverse actions against a student. It takes further scrutiny and human judgment in conjunction with an
organization's application of its specific academic policies to determine whether any academic misconduct has occurred.



AI Based Workplace Safety Solution: Slip & Fall Detection

Harshita Nauhwar1*, Varshith Sai Gali2, Kriti Shrivastava3, Aishwarya4
1,2,3,4 Department of Computer Science and Engineering, BML-Munjal University, Haryana, India
Email - [email protected], [email protected], [email protected], [email protected]

Abstract— Slip and fall accidents pose significant threats to safety in various settings, necessitating advancements in slip and fall detection systems. This research focuses on the evolution of modelling techniques and datasets, highlighting the creation of a state-of-the-art, self-curated dataset. Additionally, the study investigates the trajectory of datasets used for training and evaluating detection models, scrutinising factors such as data collection methods, annotation strategies, and the inclusion of diverse real-world scenarios. A distinctive contribution lies in the detailed development and utilisation of our curated dataset for training and refining a slip and fall detection model. Synthesising existing literature, this project not only elucidates the strengths and limitations of diverse approaches but also identifies emerging trends. It serves as a valuable resource for comprehending the current slip and fall detection landscape, providing insights into potential directions for future research and development.

Keywords— YOLOv5, YOLOv7, Slip and Fall detection, Dataset Curation, Workplace Safety

I. INTRODUCTION

Falls are commonly defined as "inadvertently coming to rest on the ground, floor, or other lower levels, excluding intentional changes in position to rest in furniture, wall, or other objects." According to the World Health Organization (WHO), fall-related injuries are more prevalent among older individuals and constitute a significant cause of domestic accidents. Approximately 28-35% of people aged 65 and over experience falls annually, rising to 32-42% among those over 70. The incidence of falls varies among countries, such as China (6-31%) and Japan (20% of older adults annually).

Recognizing the importance of preventing and mitigating these incidents, researchers have dedicated considerable efforts to developing slip and fall detection systems capable of promptly identifying and alerting individuals. One crucial aspect of slip and fall detection systems is the modelling approach used to analyse sensor data and distinguish between normal activities and slip or fall events, in contrast to traditional methods relying on rule-based algorithms. However, these conventional approaches suffered from limitations, including high false positive rates and limited adaptability to diverse environments.

In recent years, there has been an integration of machine learning in slip and fall detection. Techniques such as support vector machines, random forests, and neural networks have been employed to extract meaningful features and classify slip and fall events. These approaches have demonstrated promising results by leveraging the power of data-driven models to automatically learn patterns and features from sensor data, enabling more robust and accurate detection.

To develop and evaluate slip and fall detection models, the availability of appropriate datasets plays a vital role. While simulated fall data cannot precisely replicate a fall, building datasets that collect data from volunteers simulating different falls remains the preferred option for evaluating fall detection systems. Datasets encompassing a wide range of scenarios, sensor modalities, and annotated labels are essential for training and testing the models.
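To make the classification idea above concrete, the sketch below shows how a generic slip/fall classifier could be trained on hand-crafted sensor features. It is purely illustrative: the synthetic data, the feature choices, and the use of an SVM are assumptions for demonstration, not the pipeline adopted in this paper (which, as later sections describe, relies on YOLO-based object detection over images).

```python
# Illustrative only: a minimal SVM baseline for fall vs. non-fall classification
# on tabular sensor features. The data here is synthetic; a real system would
# extract features from accelerometer/gyroscope windows or video frames.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import classification_report

rng = np.random.default_rng(42)

# Hypothetical feature vectors: [mean |acceleration|, peak |acceleration|, post-impact stillness]
falls = rng.normal(loc=[1.2, 3.5, 0.9], scale=0.3, size=(200, 3))
non_falls = rng.normal(loc=[1.0, 1.4, 0.2], scale=0.3, size=(200, 3))

X = np.vstack([falls, non_falls])
y = np.array([1] * 200 + [0] * 200)  # 1 = fall, 0 = non-fall

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)

# Standardise the features, then fit an RBF-kernel SVM.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)

print(classification_report(y_test, clf.predict(X_test), target_names=["non-fall", "fall"]))
```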
II. LITERATURE REVIEW

In recent years, the domain of Slip & Fall detection systems has experienced a surge in research activity, with researchers worldwide engaging in diverse methodologies and innovations aimed at addressing the critical challenge of fall detection. This section embarks on a journey through these studies, shedding light on their contributions and illustrating the evolving landscape of fall detection technology.

A. Foundational Research and Wearable Technology

Our journey commences with foundational research conducted by Villaseñor and colleagues in 2019 [4]. They provided a significant resource in the form of a multimodal dataset, serving as the cornerstone for subsequent studies. This dataset established a benchmark for evaluating the effectiveness of various fall detection systems. Complementing this dataset, Trkov et al. (2015) introduced a novel slip detection and prediction scheme employing wearable Inertial Measurement Units (IMUs) and extended Kalman filters, emphasising the pivotal role of wearable technology in enhancing fall detection [6].

B. Vision-Based Approaches and Machine Learning

As we delve deeper into the research landscape, we encounter the work of Nooruddin et al. (2021), who conducted a systematic review categorising fall detection solutions into single and multiple sensor-based systems [1]. Beddiar and colleagues (2021) take a prominent position with their emphasis on video-based fall detection, leveraging machine learning and pose estimation techniques for precise classification [2]. Inturi and co-authors (2022) introduce a vision-based approach employing AlphaPose for keypoint detection, emphasising its potential for in-home fall detection [3]. Wu et al. (2023) present an innovative weakly-supervised learning-based dual-modal network, promising improved efficiency and accuracy in fall detection [5]. These studies collectively reflect the persistent pursuit of enhanced detection algorithms and their potential for real-world applications.

Transitioning to the second phase of our journey, we explore recent international contributions to the field and broaden our perspective to encompass the broader implications of fall detection technology for safety and health.

1) International Innovations in Machine Learning and Sensing

Phand and team (2023) have harnessed the capabilities of Convolutional Neural Networks (CNN) for human fall detection through visual surveillance, illustrating the global adoption of advanced machine learning techniques [7]. Liao et al. (2011) propose a method incorporating the Integrated Spatiotemporal Energy (ISTE) map and Bayesian Belief Networks (BBN) for effective fall and slip detection, emphasising the importance of incorporating both spatial and temporal information in detection algorithms [8]. Trkov and colleagues (2019) introduce an innovative inertial sensor-based slip detection system for human walking, showcasing potential safety improvements [9]. Mobsite et al. (2023) pioneered a camera-based system that respects privacy concerns by utilising human silhouettes, highlighting the importance of addressing ethical and privacy considerations in system design [10].

2) Broader Safety and Health Implications

Our journey takes us to researchers who have developed an affordable motion-based method utilising pyroelectric infrared sensors (PIR) and Arduino technology for accurate fall detection [11]. This work serves as a reminder of the necessity for cost-effective solutions that can be deployed in resource-constrained settings. Shifting to a broader perspective on safety and health, we explore the meta-analyses conducted by Burke et al. (2006) and Christian et al. (2009), which emphasise the effectiveness of behavioural modelling, safety knowledge, and motivation in safety performance [12,13]. These studies offer valuable insights into the human factors that complement technological advancements in fall detection.

3) Ethical Considerations and Future Outlook

Returning to the international stage, researchers employ artificial intelligence and computer vision for accident rate reduction, addressing challenges posed by low lighting conditions. This work extends the application of computer vision beyond fall detection, showcasing its potential for broader safety improvements. Our journey culminates in a scoping review that underscores the transformative potential of AI in occupational safety and health across high-risk industries. It also acknowledges the surveillance concerns associated with certain AI applications, highlighting the ethical considerations that must accompany technological advancements. In Europe, researchers conduct a literature review and expert interviews, emphasising the need for safe and transparent Artificial Intelligence for Workplace Management (AIWM) systems to address occupational safety and health risks. This perspective broadens our understanding of the role of AI in shaping the future of workplace safety.

In conclusion, our journey through these research papers reveals a rich tapestry of innovation, collaboration, and exploration. The field of fall detection systems is continuously evolving, with researchers from diverse backgrounds contributing to its advancement. From wearable sensors to computer vision, from machine learning to human factors, these studies collectively paint a picture of a field poised to make significant strides in improving safety and health across the globe.

III. DATASET OVERVIEW

This section outlines the meticulous process employed to curate a comprehensive dataset for our slip and fall detection research. The dataset amalgamates various sources, leveraging both widely cited datasets and the versatile Roboflow online platform, ensuring a diverse representation of real-world scenarios.

A. Widely-Cited Datasets

Our dataset synthesis involved the incorporation of three key widely-cited datasets:

1) UR-Fall Detection Dataset (Interdisciplinary Centre for Computational Modelling, University of Rzeszow):

This dataset contains 70 sequences (30 falls + 40 activities of daily living (ADL)). The sequences are recorded at a frame rate of 30 fps and a resolution of 640x480 pixels. The dataset was recorded using two Microsoft Kinect cameras, which are RGB-D cameras that capture depth and colour information. The falls in the dataset include both forward and backward falls, as well as falls from standing and sitting positions. The ADLs in the dataset include a variety of activities, such as walking, running, and getting up from a chair. [14]

Fig.1 Image sample via UR-Fall Detection Dataset [14]

2) CAUCAFall Dataset (Universidad del Cauca, Universidad de la Amazonia):

The dataset contains 100 sequences, of which 50 are falls and 50 are activities of daily living (ADLs). The sequences are recorded at a frame rate of 30 fps and a resolution of 640x480 pixels using a single RGB-D camera. The falls include forward falls, backward falls, falls to the left, falls to the right, and falls arising from sitting. The ADLs performed by the participants are walking, hopping, picking up an object, sitting, and kneeling. [15]

Fig.2 Image sample via CAUCAFall Dataset [15]

3) The Le2i Dataset (Laboratoire Electronique, Informatique et Image):

A standard RGB-based fall detection dataset consisting of 130 fall and non-fall scenes recorded at 25 frames per second by 17 actors using a single camera. The average duration of the videos is about 1 minute, during which the person either falls or continues performing a daily activity. The videos are recorded in different locations such as "Home", "Coffee room", "Office", and "Lecture room", with high variance between fall and non-fall events. [16]

Fig.3 Image sample via Le2i Dataset [16]

B. Additional Data Sources

In the process of expanding our dataset, we harnessed online platforms, including Pixel, YouTube, and Roboflow repositories. We curated a collection of images and videos pertinent to the slip and fall domain from these platforms, encompassing diverse sources such as CCTV surveillance footage. This selection significantly augmented our dataset, providing a diverse representation of real-world challenges.

To enhance the model's discernment between fall and non-fall scenarios, we intentionally incorporated images portraying a spectrum of situations, ranging from plain backgrounds to instances of Activities of Daily Living (ADLs). This strategic inclusion ensures a robust dataset, which helps to foster a clearer distinction between fall and non-fall events in real-life scenarios.

In addition to online sources, our team captured videos and meticulously documented frame-by-frame images of controlled descents within a secure setup. We have seamlessly integrated this visual content into the dataset, thereby enhancing its diversity and comprehensiveness.

C. Dataset Annotation and Resizing

To optimise the utility of our dataset, acquired images and videos underwent meticulous annotation and resizing. Categorization of the dataset into "Fall" and "No-Fall" classes was performed, providing nuanced annotations that contribute




D. Dataset Split

1. Training Set (76%): This subset, consisting of 15,000 images, serves as the core for training our model. It provides the foundational data for the model to learn from and adapt to.

2. Validation Set (12%): With 1,200 images, this subset is designed for fine-tuning and optimisation of the model. It plays a crucial role in ensuring the model's performance is refined and optimised for real-world scenarios.

3. Testing Set (12%): Comprising 1,200 images, this subset is dedicated to evaluating the model's performance in real-world situations. It allows us to assess how well the model generalises to new and unseen data, providing a robust measure of its effectiveness. A directory-level sketch of this split is shown below.
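The sketch below illustrates one way such a split could be produced on disk. It is a hypothetical example: the directory names, file layout, and exact ratios are illustrative assumptions rather than a record of the tooling actually used (splits of this kind can also be generated directly within Roboflow).

```python
# Illustrative sketch: shuffle annotated images and copy them into
# train/val/test folders using an approximate 76/12/12 split.
# All paths are hypothetical placeholders; adapt to the real dataset layout.
import random
import shutil
from pathlib import Path

SOURCE = Path("dataset/images")   # all annotated images in one folder
DEST = Path("dataset_split")      # output root containing train/, val/, test/
RATIOS = {"train": 0.76, "val": 0.12, "test": 0.12}

images = sorted(SOURCE.glob("*.jpg"))
random.seed(42)
random.shuffle(images)

n_train = int(len(images) * RATIOS["train"])
n_val = int(len(images) * RATIOS["val"])
splits = {
    "train": images[:n_train],
    "val": images[n_train:n_train + n_val],
    "test": images[n_train + n_val:],
}

for split, files in splits.items():
    out_dir = DEST / split / "images"
    out_dir.mkdir(parents=True, exist_ok=True)
    for img in files:
        shutil.copy(img, out_dir / img.name)
    print(f"{split}: {len(files)} images")
```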

E. Image Types and Features

The dataset is made up of three batches of images:

1) Positive Samples: depicting instances of falls or fall-related events.

2) Negative Samples: showcasing normal activities without falls, such as activities of daily living (walking, sleeping, etc.).

3) Background Samples: images devoid of fall-related events.

F. Image Features

This carefully curated dataset serves as the foundation for robust slip and fall detection model development and evaluation. It encompasses a variety of factors such as lighting conditions, camera angles, indoor and outdoor environments, and diverse human subjects.

1) Image Size: ranging from 416x416 to 640x640 pixels, standardised to 640x640 pixels during preprocessing.

2) Augmentation: no explicit augmentation techniques were applied during dataset preparation.

The dataset contains two classes, "Fall" and "No-Fall", and is annotated accordingly. This diversity aimed to improve the system's ability to accurately detect falls in different scenarios and demographic groups.
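For context, the snippet below sketches what a dataset configuration for this two-class setup could look like in the YAML format consumed by the YOLOv5/YOLOv7 training scripts referenced in the Methods section. The file name, directory paths, and class index order are illustrative assumptions; only the class names and the 640x640 image size come from the dataset description above.

```python
# Illustrative sketch: write a YOLO-style dataset config for the two-class
# slip/fall dataset. Paths and file name are hypothetical placeholders.
from pathlib import Path

data_yaml = """\
# slipfall.yaml - dataset definition consumed by the YOLOv5/YOLOv7 train scripts
train: dataset_split/train/images
val: dataset_split/val/images
test: dataset_split/test/images

nc: 2                       # number of classes
names: ["Fall", "No-Fall"]  # assumed index order: 0 = Fall, 1 = No-Fall
"""

Path("slipfall.yaml").write_text(data_yaml)
print(Path("slipfall.yaml").read_text())
```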
G. Image Samples

Our dataset includes samples that illustrate the shortcomings we found in the other datasets, which included object/human occlusions, poor lighting conditions, blurry backgrounds, and backgrounds more prominent than the intended activity.

Fig.4 Sample image from the dataset showcasing background prominence over the main event
Fig.5 Sample image from our dataset showcasing challenging lighting conditions
Fig.6 Sample image from the dataset highlighting occlusions

IV. METHODS

In this study, we used two state-of-the-art object detection algorithms, YOLOv5 and YOLOv7. YOLOv5 is a single-stage object detector that is known for its accuracy and speed. YOLOv7 is a newer object detector that builds on the

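As a rough illustration of the training workflow this section introduces, the sketch below shows how a YOLOv5 run on the curated dataset is typically launched from the official Ultralytics YOLOv5 repository. The repository path, epoch count, and batch size are placeholder assumptions rather than the exact settings used in this work.

```python
# Illustrative sketch: launch YOLOv5 training on the curated slip/fall dataset.
# Assumes the Ultralytics YOLOv5 repo is cloned locally and a slipfall.yaml
# dataset config (see previous section) is available to it.
import subprocess

subprocess.run(
    [
        "python", "train.py",
        "--img", "640",             # input resolution matching the dataset preprocessing
        "--batch", "16",            # placeholder batch size
        "--epochs", "100",          # placeholder epoch count
        "--data", "slipfall.yaml",  # dataset config sketched in the previous section
        "--weights", "yolov5s.pt",  # start from pretrained COCO weights
    ],
    cwd="yolov5",                   # path to the cloned repository (assumed)
    check=True,
)
```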


Page 1 of 12 - Cover Page Submission ID trn:oid:::1:2776399708

Varshith Sai Gali


Scientific Writing
PS2

B.Tech2020

BML Munjal University

Document Details

Submission ID

trn:oid:::1:2776399708 10 Pages

Submission Date 5,265 Words

Dec 7, 2023, 11:09 PM GMT+5:30


29,940 Characters

Download Date

Dec 7, 2023, 11:11 PM GMT+5:30

File Name

Copy_of_AI_Workspace_Safety_Solutions_SW_Paper_Submission.pdf

File Size

3.4 MB

Page 1 of 12 - Cover Page Submission ID trn:oid:::1:2776399708


Page 2 of 12 - AI Writing Overview Submission ID trn:oid:::1:2776399708

How much of this submission has been generated by AI?

32%
Caution: Percentage may not indicate academic misconduct. Review required.

It is essential to understand the limitations of AI detection before making decisions


about a student's work. We encourage you to learn more about Turnitin's AI detection
capabilities before using the tool.
of qualifying text in this submission has been determined to be
generated by AI.

Frequently Asked Questions

What does the percentage mean?


The percentage shown in the AI writing detection indicator and in the AI writing report is the amount of qualifying text within the
submission that Turnitin's AI writing detection model determines was generated by AI.

Our testing has found that there is a higher incidence of false positives when the percentage is less than 20. In order to reduce the
likelihood of misinterpretation, the AI indicator will display an asterisk for percentages less than 20 to call attention to the fact that
the score is less reliable.

However, the final decision on whether any misconduct has occurred rests with the reviewer/instructor. They should use the
percentage as a means to start a formative conversation with their student and/or use it to examine the submitted assignment in
greater detail according to their school's policies.

How does Turnitin's indicator address false positives?


Our model only processes qualifying text in the form of long-form writing. Long-form writing means individual sentences contained in paragraphs that make up a
longer piece of written work, such as an essay, a dissertation, or an article, etc. Qualifying text that has been determined to be AI-generated will be highlighted blue
on the submission text.

Non-qualifying text, such as bullet points, annotated bibliographies, etc., will not be processed and can create disparity between the submission highlights and the
percentage shown.

What does 'qualifying text' mean?


Sometimes false positives (incorrectly flagging human-written text as AI-generated), can include lists without a lot of structural variation, text that literally repeats
itself, or text that has been paraphrased without developing new ideas. If our indicator shows a higher amount of AI writing in such text, we advise you to take that
into consideration when looking at the percentage indicated.

In a longer document with a mix of authentic writing and AI generated text, it can be difficult to exactly determine where the AI writing begins and original writing
ends, but our model should give you a reliable guide to start conversations with the submitting student.

Disclaimer
Our AI writing assessment is designed to help educators identify text that might be prepared by a generative AI tool. Our AI writing assessment may not always be accurate (it may misidentify
both human and AI-generated text) so it should not be used as the sole basis for adverse actions against a student. It takes further scrutiny and human judgment in conjunction with an
organization's application of its specific academic policies to determine whether any academic misconduct has occurred.

Page 2 of 12 - AI Writing Overview Submission ID trn:oid:::1:2776399708


Page 3 of 12 - AI Writing Submission Submission ID trn:oid:::1:2776399708

AI Based Workplace Safety Solution: Slip & Fall


Detection
Harshita Nauhwar1*, Varshith Sai Gali2, Kriti Shrivastava3 Aishwarya4

1,2,3,4
Department of Computer Science and Engineering, BML-Munjal University, Haryana, India
Email - [email protected], [email protected], [email protected],
*
4
[email protected]

Abstract— Slip and fall accidents pose significant threats to safety in various settings, necessitating advancements in slip and fall detection systems. This research focuses on the evolution of modelling techniques and datasets, highlighting the creation of a state-of-the-art, self-curated dataset. Additionally, the study investigates the trajectory of datasets used for training and evaluating detection models, scrutinising factors such as data collection methods, annotation strategies, and the inclusion of diverse real-world scenarios. A distinctive contribution lies in the detailed development and utilisation of our curated dataset for training and refining a slip and fall detection model. Synthesising existing literature, this project not only elucidates the strengths and limitations of diverse approaches but also identifies emerging trends. It serves as a valuable resource for comprehending the current slip and fall detection landscape, providing insights into potential directions for future research and development.

Keywords— YOLOv5, YOLOv7, Slip and Fall detection, Dataset Curation, Workplace Safety

I. INTRODUCTION

Falls are commonly defined as "inadvertently coming to rest on the ground, floor, or other lower levels, excluding intentional changes in position to rest in furniture, wall, or other objects." According to the World Health Organization (WHO), fall-related injuries are more prevalent among older individuals and constitute a significant cause of domestic accidents. Approximately 28-35% of people aged 65 and over experience falls annually, with the proportion rising to 32-42% in older age groups. The incidence of falls also varies among countries, such as in China (6-31%) and Japan (20% of older adults annually).

Recognising the importance of preventing and mitigating these incidents, researchers have dedicated considerable effort to developing slip and fall detection systems capable of promptly identifying and alerting individuals. A key feature of such systems, in contrast to traditional methods that rely on rule-based algorithms, is the modelling approach used to analyse sensor data and distinguish between normal activities and slip or fall events. There are, however, limitations to these traditional methods as well, such as high false positive rates and poor adaptability to diverse environments.

Machine learning has been incorporated into fall and slip detection in recent years. Techniques such as random forests, neural networks, and support vector machines have been employed to identify slip and fall incidents and extract relevant characteristics. By applying data-driven models that recognise patterns and features in sensor data, these methods have shown encouraging outcomes and contributed to more reliable and accurate detection.

Access to appropriate datasets is essential for the development and assessment of slip and fall detection models. Although fall simulation data cannot perfectly mimic a real fall, the best way to assess fall detection systems is to create datasets that gather information from volunteers while they simulate various falls. Training and testing the models requires datasets with annotated labels, sensor modalities, and a broad range of scenarios.

II. LITERATURE REVIEW

Over the last couple of years, there has been a noticeable rise in the amount of research dedicated to developing slip and fall detection systems. Researchers around the world have pursued different methods and innovative ideas to address the important problem of fall detection. This section examines these research endeavours, emphasising their impacts and illustrating the evolving landscape of technology aimed at identifying falls.

A. Foundational Research and Wearable Technology

Our journey starts with foundational research conducted by Villaseñor and colleagues in 2019 [4]. They introduced a comprehensive collection of data in the form of a multimodal dataset, serving as the foundation for future research efforts.


This dataset established a benchmark for evaluating the effectiveness of various fall detection systems. In addition to this dataset, Trkov and his team (2015) revealed a novel method for detecting and predicting falls using wearable Inertial Measurement Units (IMUs) and extended Kalman filters, highlighting the vital importance of wearable technology in the advancement of fall detection [6].

B. Vision-Based Approaches and Machine Learning

As we delve deeper into the research, we encounter the work of Nooruddin et al. (2021). This study systematically classifies different fall detection solutions, making a clear distinction between those that rely on a single sensor and those that make use of multiple sensors [1]. Beddiar and colleagues (2021) are notable for their emphasis on video-based fall detection, utilising machine learning and pose estimation techniques to achieve precise classification [2]. Inturi and team (2022) propose a vision-focused approach that incorporates AlphaPose for keypoint detection, highlighting its usefulness in identifying falls within domestic settings [3]. Wu and collaborators (2023) present an innovative dual-modal network based on weakly-supervised learning, demonstrating potential for improved efficiency and accuracy in fall detection [5]. Together, these studies demonstrate an ongoing dedication to enhancing detection algorithms and their potential real-world applications. Transitioning to the second phase of our journey, we explore recent international contributions to the field and broaden our perspective to encompass the wider implications of fall detection technology for safety and health.

1) International Innovations in Machine Learning and Sensing

Phand and his team in 2023 used Convolutional Neural Networks (CNNs) to spot human falls in visual surveillance, indicating the widespread adoption of advanced machine learning techniques [7]. In a study by Liao et al. in 2011, a method is suggested that combines the Integrated Spatiotemporal Energy (ISTE) map and Bayesian Belief Networks (BBN) to improve the effectiveness of fall and slip detection, highlighting the importance of integrating both spatial and temporal information into detection algorithms [8]. Trkov and colleagues in 2019 introduced an original slip detection system based on inertial sensors for human walking, showing potential improvements in safety [9]. Mobsite et al. in 2023 developed a privacy-conscious camera-based system that uses human silhouettes, emphasising the importance of addressing ethical and privacy concerns in the design of such systems [10].

Broader Safety and Health Implications

Our research has led us to scientists who have developed a cost-effective method using motion and pyroelectric infrared (PIR) sensors along with Arduino technology to accurately detect falls [11]. This effort highlights the importance of affordable solutions suitable for use in settings with limited resources. Expanding our scope to include safety and health more broadly, we examine the comprehensive analyses conducted by Burke et al. (2006) and Christian et al. (2009). These analyses emphasise the effectiveness of behavioural modelling, safety knowledge, and motivation in improving safety performance [12,13]. These studies offer valuable insights into the human aspects that complement technological advancements in fall detection.

2) Ethical Considerations and Future Outlook

The same low-cost PIR-and-Arduino approach to fall detection [11] underlines the importance of solutions that remain viable in resource-limited environments, while the behavioural studies by Burke et al. (2006) and Christian et al. (2009) [12,13] point to the human factors, such as behavioural modelling, safety knowledge, and motivation, that must complement technological advancements in the field of fall detection.

In brief, our examination of these scholarly papers reveals a wide range of originality, collaboration, and research. The field of fall detection systems is continuously advancing, with contributors from different disciplines adding to its development. From wearable sensors to computer vision, and from machine learning to human factors, these studies collectively illustrate a field poised to make significant strides in improving safety and health worldwide.

III. DATASET OVERVIEW

This section outlines the meticulous process employed to curate a comprehensive dataset for our slip and fall detection research. The dataset amalgamates various sources, leveraging both widely cited datasets and the versatile Roboflow online platform, ensuring a diverse representation of real-world scenarios.

A. Widely-Cited Datasets

Our dataset synthesis involved the incorporation of three key widely-cited datasets:
1) UR-Fall Detection Dataset (Interdisciplinary Centre for Computational Modelling, University of Rzeszow):

This dataset contains 70 sequences (30 falls + 40 activities of daily living (ADL)). The sequences are recorded at a frame rate of 30 fps and a resolution of 640x480 pixels using two Microsoft Kinect cameras, which are RGB-D cameras that capture depth and colour information. The falls in the dataset include both forward and backward falls, as well as falls from standing and sitting positions. The ADLs include a variety of activities, such as walking, running, and getting up from a chair. [14]

Fig.1 Image sample via UR-Fall Detection Dataset [14]

2) CAUCAFall Dataset (Universidad del Cauca, Universidad de la Amazonia):

The dataset contains 100 sequences, of which 50 are falls and 50 are activities of daily living (ADLs). The sequences are recorded at a frame rate of 30 fps and a resolution of 640x480 pixels using a single RGB-D camera. The data include forward falls, backward falls, falls to the left, falls to the right, and falls arising from sitting. The ADLs performed by the participants are walking, hopping, picking up an object, sitting, and kneeling. [15]

Fig.2 Image sample via CAUCAFall Dataset [15]

3) The Le2i Dataset (Laboratoire Electronique, Informatique et Image):

A standard RGB-based fall detection dataset consisting of 130 fall and non-fall scenes recorded at 25 frames per second by 17 actors using a single camera. The average duration of the videos is about 1 minute, during which the person either falls or continues performing a daily activity. The videos are recorded in different locations such as "Home", "Coffee room", "Office", and "Lecture room", with high variance between fall and non-fall events. [16]

Fig.3 Image sample via Le2i Dataset [16]

B. Additional Data Sources

In our quest to make the dataset bigger and better, we scoured the internet, checking sources such as Pixel, YouTube, and the Roboflow repositories. We handpicked pictures and videos related to slips and falls from various sources, including CCTV footage. This mix gave our dataset a real-world character, showcasing all sorts of challenges.

To make sure our model can tell the difference between someone falling and simply going about their day, we included a wide variety of situations. Our collection ranges from simple backgrounds to everyday activities, covering a broad spectrum of scenarios. This careful selection makes our dataset robust and helps the model perform well in real-life situations.

We did not stop there: our team also went hands-on, capturing videos and frame-by-frame pictures of controlled falls in a safe environment. We added this content to the dataset, making it even more diverse and comprehensive.

C. Dataset Annotation and Resizing

To make the dataset as useful as possible, we put the acquired images and videos through detailed annotation and resizing processes.

The dataset was categorised into "Fall" and "No-Fall" classes, with nuanced annotations that enhance the model's capability to understand intricate scenarios across a variety of environments.
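Since the YOLOv5 and YOLOv7 pipelines used in Section IV consume bounding-box labels in the normalised "class x_center y_center width height" text format, each annotated image in the curated set would carry one small label file of this kind. The sketch below illustrates that format; the class-index mapping and the coordinate values are hypothetical examples, not entries taken from our dataset:

# label_format.py - illustrative sketch of a YOLO-style annotation for one image
CLASS_NAMES = ["Fall", "No-Fall"]   # assumed mapping: index 0 = Fall, index 1 = No-Fall

def yolo_label_line(cls_id, x_center, y_center, width, height):
    """Format one bounding box as a YOLO-style label line; all values are normalised to [0, 1]."""
    return f"{cls_id} {x_center:.3f} {y_center:.3f} {width:.3f} {height:.3f}"

# e.g. a person lying on the floor, roughly centred in the lower half of the frame
print(yolo_label_line(0, 0.512, 0.704, 0.380, 0.265))   # -> "0 0.512 0.704 0.380 0.265"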

D. Dataset Composition

We carefully divided the resulting dataset into three subsets, each with a specific role in training and evaluating our slip and fall detection model:

1. Training Set (76%):


This subset, consisting of 15,000 images, serves as the core for training our model. It provides the foundational data from which the model learns and adapts.

2. Validation Set (12%):

With 1,200 images, this subset is used for fine-tuning and optimisation of the model. It plays a crucial role in ensuring the model's performance is refined and optimised for real-world scenarios.

3. Testing Set (12%):

Comprising 1,200 images, this subset is dedicated to evaluating the model's performance in real-world situations. It allows us to assess how well the model generalises to new and unseen data, providing a robust measure of its effectiveness.
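To make this composition concrete, the sketch below shows one way the training/validation/testing split could be materialised on disk in the directory layout read by the YOLOv5/YOLOv7 training scripts, together with the dataset configuration file that names the two classes. The folder names, paths, and random seed are illustrative assumptions rather than a description of our exact tooling:

# make_split.py - illustrative sketch of a 76/12/12 split and a YOLO-style dataset config
import random, shutil
from pathlib import Path

random.seed(0)                                       # assumed seed, so the sketch is reproducible
images = sorted(Path("raw/images").glob("*.jpg"))    # assumed source folder
random.shuffle(images)

n = len(images)
splits = {"train": images[: int(0.76 * n)],
          "valid": images[int(0.76 * n): int(0.88 * n)],
          "test":  images[int(0.88 * n):]}

for split, files in splits.items():
    out = Path("dataset") / split / "images"
    out.mkdir(parents=True, exist_ok=True)
    for img in files:
        shutil.copy(img, out / img.name)             # matching label .txt files would be copied the same way

# dataset configuration consumed by the detectors' training scripts
Path("dataset/data.yaml").write_text(
    "train: dataset/train/images\n"
    "val: dataset/valid/images\n"
    "test: dataset/test/images\n"
    "nc: 2\n"
    "names: ['Fall', 'No-Fall']\n"
)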

E. Image Types and Features

The dataset is made up of three batches of images:

1) Positive Samples: Depicting instances of falls or fall-related events.

2) Negative Samples: Showcasing normal activities without falls, such as daily life activities like walking and sleeping.

3) Background Samples: Background images devoid of fall-related events.
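A quick way to sanity-check this three-way composition is to count how many label files in a split contain at least one "Fall" box, how many contain only "No-Fall" boxes, and how many are background images with empty annotations. The snippet below sketches that check under the same assumed directory layout and class mapping as in the earlier examples; background images that have no label file at all would simply not appear in the counts:

# class_balance.py - illustrative check of positive / negative / background counts
from pathlib import Path

def batch_of(label_file):
    """Classify one YOLO-format label file as positive, negative, or background."""
    classes = {line.split()[0] for line in label_file.read_text().splitlines() if line.strip()}
    if "0" in classes:            # assumed mapping: class 0 = Fall
        return "positive"
    return "negative" if classes else "background"

counts = {"positive": 0, "negative": 0, "background": 0}
for label_file in Path("dataset/train/labels").glob("*.txt"):   # assumed path
    counts[batch_of(label_file)] += 1
print(counts)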

F. Image Features

This carefully curated dataset serves as the foundation for robust slip and fall detection model development and evaluation. It was assembled to encompass a variety of factors such as lighting conditions, camera angles, indoor and outdoor environments, and diverse human subjects.

1) Image Size: Ranging from 416x416 to 640x640 pixels, standardised to 640x640 pixels during preprocessing (a minimal sketch of this step follows the list below).

2) Augmentation: No explicit augmentation techniques were applied during dataset preparation.
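The standardisation mentioned under Image Size can be pictured as resizing every image to 640x640 before training. The snippet below is a minimal sketch of that step using OpenCV; the paths are placeholders, and the plain square resize is an assumption, since an aspect-ratio-preserving letterbox would serve equally well. Because YOLO-style labels are normalised to the image dimensions, a plain resize leaves the label files unchanged:

# resize_images.py - illustrative preprocessing sketch
import cv2
from pathlib import Path

TARGET = 640                                # target side length used during preprocessing

src_dir = Path("dataset/train/images")      # placeholder path
for img_path in src_dir.glob("*.jpg"):
    img = cv2.imread(str(img_path))
    if img is None:                         # skip unreadable files
        continue
    resized = cv2.resize(img, (TARGET, TARGET), interpolation=cv2.INTER_LINEAR)
    cv2.imwrite(str(img_path), resized)     # overwrite in place for simplicity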

The dataset contains two classes, "Fall" and "No-Fall", and is annotated accordingly. This diversity aimed to improve the system's ability to accurately detect falls across different scenarios and demographic groups.

G. Image Samples

Our dataset deliberately includes the shortcomings we found in the other datasets, including object/human occlusions, poor lighting conditions, blurry backgrounds, and backgrounds that are more prominent than the intended activity.

Fig.4 Sample image from the dataset showcasing background prominence over the main event.

Fig.5 Sample image from our dataset showcasing challenging lighting conditions

Fig.6 Sample image from the dataset highlighting occlusions

IV. METHODS

In this study, we used two state-of-the-art object detection algorithms, YOLOv5 and YOLOv7. YOLOv5 is a single-stage object detector known for its accuracy and speed. YOLOv7 is a newer object detector that builds on the success of YOLOv5 and is mainly used for real-time object detection.
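With the curated dataset in the layout sketched earlier, both detectors can be trained from their public repositories by pointing the training script at the dataset configuration with an input size of 640. The command below illustrates this for YOLOv5 (the YOLOv7 repository exposes an analogous train.py); the epoch count, batch size, and paths are illustrative assumptions rather than our reported settings:

# train_fall_detector.py - illustrative sketch of launching YOLOv5 training on the curated dataset
import subprocess

subprocess.run(
    [
        "python", "train.py",            # train.py from the public YOLOv5 repository
        "--img", "640",                  # matches the 640x640 preprocessing above
        "--batch", "16",                 # assumed batch size
        "--epochs", "100",               # assumed epoch count
        "--data", "dataset/data.yaml",   # the two-class (Fall / No-Fall) configuration
        "--weights", "yolov5s.pt",       # start from a pretrained checkpoint
        "--name", "slip_fall_yolov5s",   # run name for the results folder
    ],
    check=True,
)

The resulting weights can then be evaluated on the held-out testing set described in Section III.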



Cover Page - Submission ID trn:oid:::1:2776437860

Varshith Sai Gali
Scientific Writing
PS2
B.Tech2020
BML Munjal University

Document Details
Submission ID: trn:oid:::1:2776437860
Submission Date: Dec 7, 2023, 11:40 PM GMT+5:30
Download Date: Dec 7, 2023, 11:42 PM GMT+5:30
File Name: Copy_of_AI_Workspace_Safety_Solutions_SW_Paper_Submission.pdf
File Size: 3.4 MB
Pages: 10
Words: 5,247
Characters: 29,658


How much of this submission has been generated by AI?

*18% of qualifying text in this submission has been determined to be generated by AI.
* Low scores have a higher likelihood of false positives.

Caution: Percentage may not indicate academic misconduct. Review required.
It is essential to understand the limitations of AI detection before making decisions about a student's work. We encourage you to learn more about Turnitin's AI detection capabilities before using the tool.








Cover Page - Submission ID trn:oid:::1:2777700831

Harshita Nauhwar
Varshith_SC
PS2
B.Tech2020
BML Munjal University

Document Details
Submission ID: trn:oid:::1:2777700831
Submission Date: Dec 8, 2023, 11:10 PM GMT+5:30
Download Date: Dec 8, 2023, 11:13 PM GMT+5:30
File Name: Fuzzy-Logic-Plant-Leaf_SC_Paper_Submission.pdf
File Size: 1.2 MB
Pages: 15
Words: 5,014
Characters: 29,872


How much of this submission has been generated by AI?

55% of qualifying text in this submission has been determined to be generated by AI.

Caution: Percentage may not indicate academic misconduct. Review required.
It is essential to understand the limitations of AI detection before making decisions about a student's work. We encourage you to learn more about Turnitin's AI detection capabilities before using the tool.


Plant Leaf Health Classifier: Fuzzy Logic Approach

Varshith Sai Gali1
1 Department of Computer Science and Engineering, BML-Munjal University, Haryana, India
Email - *[email protected]

Abstract— This research examines plant disease identification through two modelling approaches: Fuzzy Logic and Convolutional Neural Networks (CNNs). Recognising the growing importance of disease detection in agriculture for minimising crop losses, the study offers a concise evaluation of the precision and effectiveness of the two models. The research strategy involves building and applying both Fuzzy Logic and CNN models on varied datasets: Fuzzy Logic relies on expert knowledge expressed as linguistic rules, whereas the CNN learns features directly through deep learning. Performance is assessed with metrics such as accuracy, precision, and recall, alongside computational efficiency and robustness. The results are intended to clarify the strengths and weaknesses of each methodology, assisting decision-makers in selecting the most suitable model for specific agricultural applications. This study contributes to the ongoing conversation about soft computing techniques in precision agriculture, with potential implications for advancing crop management and food production.

Keywords— Fuzzy Logic, Convolutional Neural Networks, Plant Disease Detection, Image Segmentation, Precision Agriculture

I. INTRODUCTION
Classifying plant health automatically is a crucial problem in agriculture. Detecting and categorising plant diseases plays a vital role in maintaining agricultural productivity and in ensuring both the quality and quantity of crops. Traditional methods of disease detection, such as manual inspection of plants, are time-consuming and prone to error. Machine learning techniques, including fuzzy logic, offer a way to automate disease detection and classification.

Fuzzy logic, a mathematical framework adept at handling uncertainty and imprecision in data, proves particularly well-suited
for plant health classification. By incorporating image processing techniques like image segmentation and feature extraction, the
accuracy of disease detection can be further enhanced. The proposed methodology involves capturing an image of the plant leaf,
preprocessing the image, conducting image segmentation, extracting features, and employing fuzzy logic for disease
classification.
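
As a concrete illustration of this pipeline, the sketch below shows one way the pre-processing, segmentation, and feature-extraction steps could be implemented with OpenCV and NumPy. The HSV colour ranges, image size, and feature names are illustrative assumptions, not values from this study.

```python
# Hedged sketch of the leaf pre-processing and feature-extraction stages.
# Colour thresholds and feature names are hypothetical and would need tuning.
import cv2
import numpy as np

def extract_leaf_features(image_path: str) -> dict:
    """Return simple colour/lesion features from a leaf photograph."""
    bgr = cv2.imread(image_path)
    if bgr is None:
        raise FileNotFoundError(image_path)

    # Pre-processing: resize and blur to suppress sensor noise.
    bgr = cv2.resize(bgr, (256, 256))
    bgr = cv2.GaussianBlur(bgr, (5, 5), 0)
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)

    # Segmentation: healthy tissue is roughly green, lesions are brown/yellow.
    healthy_mask = cv2.inRange(hsv, (35, 40, 40), (85, 255, 255))
    lesion_mask = cv2.inRange(hsv, (10, 40, 40), (34, 255, 255))

    # Feature extraction: fraction of lesioned leaf area and mean hue of the leaf.
    leaf_pixels = cv2.countNonZero(healthy_mask) + cv2.countNonZero(lesion_mask)
    lesion_ratio = cv2.countNonZero(lesion_mask) / max(leaf_pixels, 1)
    leaf_mask = cv2.bitwise_or(healthy_mask, lesion_mask)
    mean_hue = float(cv2.mean(hsv[:, :, 0], mask=leaf_mask)[0])

    return {"lesion_ratio": lesion_ratio, "mean_hue": mean_hue}
```

Features such as these can then be passed either to the fuzzy rule base or to a conventional classifier for disease labelling.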

The results obtained from classification can be utilised to grade the severity of the disease and provide guidance for effective
disease management. Previous research studies have demonstrated the effectiveness of using fuzzy logic in plant health
classification. This proposed methodology has the potential to contribute significantly to the advancement of precision
agriculture, aiming to maximise profits, optimise agricultural inputs, and minimise environmental impact by tailoring
agricultural practices to site-specific demands.

II. LITERATURE REVIEW

In the realm of agriculture, addressing plant diseases has evolved significantly, thanks to the adoption of soft computing
techniques that open up new possibilities for disease detection and classification. This review takes a deep dive into the
integration of soft computing methodologies, specifically emphasising the synergy between image processing and machine
learning techniques. These advancements play a crucial role in automating the complex task of identifying and categorising
plant diseases, addressing persistent challenges faced by the agricultural community.

Drawing parallels with challenges in various domains, the review clarifies the rationale for adopting soft computing techniques.
It further explores the comprehensive integration of these innovative methods, highlighting their essential role in enhancing the
efficiency of agricultural practices. The discussion extends to the use of unmanned vehicles in agriculture and how
hyper-spectral sensors show promise for early disease detection. The analysis critically examines the accuracy differences
observed among Convolutional Neural Networks (CNNs), deep learning models, and fuzzy logic models in plant disease
detection, shedding light on why fuzzy logic is emerging as the preferred approach.


Global agriculture faces a significant threat from plant diseases that endanger crop yields and food security. Traditional manual
approaches to disease detection are both laborious and financially burdensome, particularly for resource-constrained farmers. In
response to these challenges, automating disease detection through soft computing techniques emerges as a compelling solution.
By streamlining the identification process, these techniques make it economically viable for a broader agricultural community.
The foundation of these innovative techniques lies in establishing a robust training dataset, carefully curated and enriched with
images. These datasets serve as the basis for automated disease classification, with Multiclass Support Vector Machines (SVM)
playing a crucial role in producing precise results. Image segmentation and feature analysis are vital components facilitating the
autonomous identification of healthy and afflicted plant regions, a critical step in disease classification.
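
For illustration, the sketch below shows how a multiclass SVM could be trained on hand-crafted leaf features with scikit-learn. The toy feature matrix, class labels, and hyper-parameters are assumptions for demonstration only, not the configuration used in the works surveyed here.

```python
# Minimal multiclass SVM sketch over extracted leaf features (hypothetical data).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
X = rng.random((300, 2))            # e.g. [lesion_ratio, mean_hue] per leaf
y = rng.integers(0, 3, size=300)    # 3 example classes: healthy, blight, rust

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# SVC handles the multiclass case internally via a one-vs-one scheme.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```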

The comprehensive integration of soft computing techniques marks a significant leap forward in automated systems for precise
and rapid plant disease detection, offering substantial advantages for agricultural productivity. This holistic approach
encompasses elements such as image processing, segmentation, feature extraction, and machine learning algorithms.
Convolutional Neural Networks (CNNs) have gained prominence for their ability to discern intricate patterns within plant
diseases, while Support Vector Machines (SVM) complement the process by mapping data into high-dimensional spaces,
enabling the identification of optimal separation hyperplanes. This collaboration ensures that the disease identification process
is not only rapid but also characterised by precision, paving the way for a more resilient agricultural landscape.

The field of agricultural surveillance has undergone a revolutionary transformation with the deployment of unmanned aerial and
ground vehicles. These vehicles excel at capturing high-quality images, subsequently subjected to meticulous analysis using soft
computing techniques. The integration of deep convolutional network algorithms and SVM has significantly enhanced disease
identification, empowering farmers and researchers with a powerful tool for monitoring plant health. Furthermore, the
implementation of hyper-spectral sensors on Unmanned Aerial Systems (UAS) has elevated early plant disease identification,
enabling proactive disease management. These sensors, coupled with soft computing techniques, hold the promise of delivering
unprecedented precision in agriculture. This symbiotic relationship between technology and agriculture represents a new era in
the battle against plant diseases, with far-reaching implications for crop yields and food security.

In the ongoing pursuit of the most accurate model for plant disease detection, this work compares Convolutional Neural Networks (CNNs), other deep learning architectures, and fuzzy logic. CNNs are acclaimed for their proficiency in recognising intricate patterns within images, and deep, multi-layered architectures more generally provide a platform for rich feature extraction and representation learning, achieving remarkable success in image recognition tasks, including plant disease detection. In contrast, fuzzy logic models introduce a different perspective: they excel at managing the uncertainty and imprecision often associated with agricultural data and are particularly suitable where clear boundaries between healthy and diseased plant regions are difficult to establish.
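
To make the comparison concrete, the snippet below sketches a small convolutional classifier of the kind contrasted with fuzzy logic here. The layer sizes, input resolution, and 33-class output are illustrative assumptions, not the architecture evaluated in this study.

```python
# Toy CNN for leaf-disease classification (illustrative architecture only).
import torch
import torch.nn as nn

class LeafCNN(nn.Module):
    def __init__(self, num_classes: int = 33):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)              # (N, 64, 1, 1)
        return self.classifier(x.flatten(1))

# Sanity check on a batch of four 256x256 RGB leaf images.
logits = LeafCNN()(torch.randn(4, 3, 256, 256))
print(logits.shape)                       # torch.Size([4, 33])
```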

The preference for fuzzy logic in plant disease detection stems from its innate capacity to handle the ambiguity and vagueness of real-world agricultural settings. Unlike CNNs and other deep learning models, which rely heavily on precise data, fuzzy logic readily accommodates uncertainty and imprecision. This adaptability is crucial for diseases that present varying symptoms and gradations of severity. In particular, the continuous output membership functions of fuzzy systems allow a nuanced, graded approach to disease classification, improving the accuracy of disease identification. Combined with hyper-spectral data gathered by Unmanned Aerial Systems (UAS), this approach offers a promising avenue for earlier and more reliable disease detection and, ultimately, enhanced agricultural productivity.
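
The sketch below illustrates this graded, rule-based style of classification using the scikit-fuzzy control API. The input variables, universes, membership functions, and rules are hypothetical examples, not the rule base developed in this work.

```python
# Hedged sketch of fuzzy severity grading with scikit-fuzzy (illustrative rules).
import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

# Inputs: lesioned leaf-area fraction and degree of discoloration (0-100 %).
lesion = ctrl.Antecedent(np.arange(0, 101, 1), "lesion")
discolor = ctrl.Antecedent(np.arange(0, 101, 1), "discolor")
severity = ctrl.Consequent(np.arange(0, 101, 1), "severity")

for var in (lesion, discolor):
    var["low"] = fuzz.trimf(var.universe, [0, 0, 40])
    var["medium"] = fuzz.trimf(var.universe, [20, 50, 80])
    var["high"] = fuzz.trimf(var.universe, [60, 100, 100])

severity["healthy"] = fuzz.trimf(severity.universe, [0, 0, 35])
severity["moderate"] = fuzz.trimf(severity.universe, [25, 50, 75])
severity["severe"] = fuzz.trimf(severity.universe, [65, 100, 100])

rules = [
    ctrl.Rule(lesion["low"] & discolor["low"], severity["healthy"]),
    ctrl.Rule(lesion["medium"] | discolor["medium"], severity["moderate"]),
    ctrl.Rule(lesion["high"] | discolor["high"], severity["severe"]),
]

sim = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
sim.input["lesion"] = 30.0
sim.input["discolor"] = 55.0
sim.compute()
print(sim.output["severity"])   # defuzzified severity score on the 0-100 scale
```

Because the consequent is a continuous membership surface, the defuzzified output can be mapped to a severity grade rather than a hard healthy/diseased label.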

III. DATASET OVERVIEW


This section describes how we curated a substantial dataset for our research on classifying the health of plant leaves. To ensure a comprehensive and varied representation of diseases under real-world conditions, the dataset includes samples from more than 30 disease classes as well as healthy leaf samples.
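
For reference, one common way to organise and load such a multi-class image dataset (one sub-folder per class) is sketched below with torchvision. The directory name, image size, and batch size are assumptions for illustration only.

```python
# Hedged sketch of loading a class-per-folder leaf dataset with torchvision.
import torch
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
])

# Expects a layout such as  leaf_dataset/<class_name>/<image>.jpg
dataset = datasets.ImageFolder("leaf_dataset", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

print(len(dataset.classes), "classes found:", dataset.classes[:5])
```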

A. Kaggle Database

Kaggle acts as a central hub for data science enthusiasts, offering a collaborative platform where people can join competitions,
exchange datasets, and take part in machine learning projects. With a diverse array of datasets and challenges, Kaggle attracts a
global community of data scientists, researchers, and practitioners. The platform promotes learning through practical experience
by hosting competitions that address real-world issues, encouraging innovation and knowledge sharing within the data science
community. Participants have the opportunity to delve into and apply different machine learning techniques to overcome
challenges, contributing to the progress of data science and creating a collaborative space for skill development.
