Abstract: Due to the COVID-19 pandemic, there is now a demand for effective and accurate automatic face mask detection systems in public places to promote health safety. The development of such systems has been aided by computer vision and machine learning methods. Such systems focus on detecting individuals who are not wearing face masks and sending alerts to enforcement authorities for appropriate action.
In this paper, a system is demonstrated that uses TensorFlow and combines computer vision and deep neural networks for face mask detection and social distance measurement. Our application handles both zoomed and non-zoomed images, which suits the mask-use monitoring context of this paper; the solution is therefore proposed in response to a global societal need. The experimental results are promising and prove the effectiveness and the required accuracy of the developed system in the appropriate contexts, making it a practical tool where restrictions apply in public settings. We highlight some of the problems in creating accurate face mask detection algorithms, such as the diversity of face mask appearance, individual facial expressions, lighting, and camera angles.
In addition, we point out the privacy and ethical issues in the design and application of such systems and emphasize the need for real-time monitoring. Finally, we conclude with possible areas of further research and development on face mask detection systems, including the use of multiple modalities and increasing robustness to variations.
________________________________________________________________________________________________________
I. INTRODUCTION
The COVID-19 pandemic has caused unprecedented disruptions to daily life, with millions of people affected worldwide. As the
virus continues to spread, it is increasingly clear that effective public health measures are essential to contain its spread. Two key
measures recommended by health organizations worldwide are social distancing and the use of face masks. According to the World
Health Organization (WHO), as of May 10, 2023, there have been over 506 million confirmed cases of COVID-19 worldwide, with
over 6.5 million deaths. In addition, new variants of the virus continue to emerge, making it even more critical to implement
effective measures to control its spread.
One of the most effective ways to prevent the spread of the virus is by wearing face masks, as recommended by the Centers for
Disease Control and Prevention (CDC). However, enforcing the use of face masks in public areas has been a significant challenge.
Social distancing has been shown to be effective in slowing the spread of the virus. A study conducted by researchers at the
University of California, Berkeley found that social distancing measures reduced the spread of the virus by 30%. However,
enforcing social distancing guidelines in public spaces can also be challenging, and many people may not adhere to these guidelines
consistently.
To overcome these challenges, a face mask detection system combined with social distance monitoring can be an effective solution. Various existing systems use technologies such as artificial intelligence and computer vision to detect individuals who are not wearing masks or are not following social distancing guidelines, enabling more effective enforcement of public health measures. The technology we use to develop this high-accuracy system is TensorFlow, an open-source software library developed by Google for machine learning and artificial intelligence applications.
II. LITERATURE SURVEY

[1] Arjya Das, Mohammad Ansari, and Rohini Basak, "Covid-19 Face Mask Detection Using TensorFlow, Keras and OpenCV".
Approach: TensorFlow, Keras, OpenCV, and Scikit-Learn.
Findings: The method attains accuracy of up to 95.77% and 94.58% on two different datasets.
Limitations: The proposed method may not work well in low-light conditions or when the face is partially covered.

[2] Md. Rafiul Islam, Md. Zahangir Alom, Mst. Shamima Nasrin, and Tarek M. Taha (2021), "Comprehensive Review on Facemask Detection Techniques in the Context of Covid-19".
Approach: The paper evaluates the performance of different facemask detection algorithms based on metrics such as precision, recall, and F1 score.
Findings: It highlights the need for more research to build an efficient facemask detection system.
Limitations: The paper does not introduce any new facemask detection algorithm or dataset.

[3] Alwan Febri Putra, Riska Analia, Ika Karlina, and Laila Nur, "The Face Mask Detection For Preventing the Spread of COVID-19 at Politeknik Negeri Batam".
Approach: The YOLO V4 deep learning algorithm is used to detect face masks in real-time applications.
Findings: The experimental results show that the device can detect a single user wearing a face mask accurately even with some disturbance in the area.
Limitations: The dataset used to train the YOLO V4 algorithm only includes images of people wearing and not wearing face masks.

[4] Samuel Sanjaya, "Face Mask Detection Using MobileNetV2 in The Era of COVID-19 Pandemic".
Approach: Implementing the model in 25 cities to monitor compliance with face mask regulations and correlate it with the vigilance index of COVID-19.
Findings: MobileNetV2 can detect people who are wearing a face mask and those who are not at an accuracy of 96.85 percent.
Limitations: The model was trained and tested on a limited dataset, which may not be representative of all possible scenarios.

[5] Preeti Nagrath, Rachna Jain, Agam Madan, Rohan Arora, Piyush Kataria, and Jude Hemanth, "A real time DNN-based face mask detection system using single shot multibox detector and MobileNetV2".
Approach: The paper uses deep learning, TensorFlow, Keras, and OpenCV to develop a face mask detection model.
Findings: The proposed face mask detection model using the SSDMNV2 approach achieved an accuracy score of 0.9264 and an F1 score of 0.93.
Limitations: The dataset used in this paper is a combination of various open-source datasets and pictures, which may not be representative of all possible scenarios.

[6] Firas Amer, Mohammed Ali, and Mohammed AlTamimi, "Face Mask Detection Methods and Techniques: A Review".
Approach: The paper discusses the two-stage method, deep learning for object detection, and the model architecture of retina face detection.
Findings: The paper discusses the challenges of incorrect mask usage and the need for a reliable mask monitoring system.
Limitations: As it is a literature review, it does not provide any new experimental results or data; instead, it summarizes and compares existing techniques for face mask detection using AI and image processing.

[7] S Balaji, B Balamurugan, T Ananth Kumar, R Rajmohan, and P Praveen Kumar, "A brief Survey on AI Based Face Mask Detection System for Public Places".
Approach: Uses the VGG-16 CNN model to achieve 96% performance and accuracy.
Findings: Includes an alert system to remind people to wear masks.
Limitations: The real-time face detection device may not work properly in all possible situations, such as when the lighting conditions are poor or when the person's face is partially covered.

[8] Palangappa Mb, "Face Mask Detection by using Optimistic Convolutional Neural Network".
Approach: TensorFlow and Keras, a Convolutional Neural Network (CNN) model, and MobileNet.
Findings: The system achieved an accuracy of 98.5% on the test dataset.
Limitations: Cannot detect whether the mask is being worn correctly or not.

[9] Xinbei Jiang, Tianhan Gao, Zichen Zhu, and Yukang Zhao, "Real-Time Face Mask Detection Method Based on YOLOv3".
Approach: The proposed SE-YOLOv3 method uses an attention mechanism to improve the performance of mask detection.
Findings: SE-YOLOv3 outperforms YOLOv3 and other state-of-the-art detectors on the PWMFD dataset.
Limitations: SE-YOLOv3 was only evaluated on the PWMFD dataset, which is relatively small compared to other object detection datasets.

[10] Harish Adusumalli and M Pratap Teja, "Face Mask Detection Using OpenCV".
Approach: OpenCV and Keras.
Findings: The model was able to classify three categories of facemask-wearing conditions, namely correct facemask-wearing, incorrect facemask-wearing, and no facemask-wearing.
Limitations: The method may not work well in low-light conditions or when the face is partially covered.

[11] Mohamed Loey, Gunasekaran Manogaran, Mohamed Hamed N. Taha, and Nour Eldeen Khalifa (2021), "A hybrid deep transfer learning model with machine learning methods for face mask detection in the era of the COVID-19 pandemic".
Approach: ResNet50, decision trees, Support Vector Machine (SVM), and an ensemble algorithm.
Findings: The proposed hybrid model achieved high accuracy in detecting face masks on the three datasets used in the experiments.
Limitations: The model was tested only on images of faces with and without masks, and it may not perform well on other types of images.

[13] Shilpa Sethi, Mamta Kathuria, and Trilok Kaushik, "Face mask detection using deep learning: An approach to reduce risk of Coronavirus spread".
Approach: Three popular baseline models (ResNet50, AlexNet, and MobileNet) are explored for plugging into the proposed model to achieve highly accurate results in less inference time.
Findings: The proposed technique achieved high accuracy (98.2%) when implemented with ResNet50, with reported improvements of 11.07% and 6.44%.
Limitations: The proposed model has been evaluated under controlled conditions, and its performance may vary in different lighting conditions, camera angles, and other environmental factors.

[17] Alok Negi, Prachi Chauhan, R Rajput, and Krishan Kumar (2020), "Face Mask Detection Classifier and Model Pruning with Keras-Surgeon".
Approach: A CNN model using a Python script, TensorFlow, and a deep learning architecture.
Findings: The model achieved high precision, recall, and F1 score, indicating its effectiveness in detecting both masked and unmasked faces.
Limitations: The dataset used for training and validation is relatively small, which may affect the generalizability of the model to other datasets.
3.1.2 DELIVERABLES
The deliverables of this project include the installation and integration of hardware and software to detect face masks and
social distancing compliance. This includes cameras or sensors, computer vision or machine learning algorithms, and data
management software.
3.1.3 RISKS
The risks associated with this project include inaccurate detection of face masks or social distancing compliance, limited
field of view, interference from other technology, data security and privacy concerns, and false alarms or alerts.
V. PROJECT DIAGRAMS
Figure 1 depicts the high-level design of our system. The initial step is the collection of a relevant dataset containing images of masked and unmasked people for face mask detection and social distance monitoring. Data cleaning and feature extraction are performed as part of preprocessing in order to obtain higher accuracy for the model. The data is then used to train models built with algorithms and frameworks such as CNN, MobileNetV2, YOLO, and TensorFlow in order to create a high-accuracy model for real-time face mask detection and social distance monitoring. After training, the model is tested for the face mask detection and social distance monitoring tasks. Real-time images are captured through a camera, and the output indicating whether a person is wearing a mask or not is displayed.
Figure 2 shows the system architecture for the face mask detection functionality. As discussed earlier, the dataset of masked and unmasked images is used for training, and data visualization is performed. The model is built and trained using a CNN algorithm, which categorizes a person as wearing a mask or not wearing a mask in real time.
Figure 3 depicts the system architecture for the social distancing functionality of the system. The input for monitoring is taken in real time. If the input is a video, it is converted into frames so that the pretrained model for object detection can be used. The model determines a bounding box, with coordinate information, for every person in the frame. A centroid is calculated for each bounding box, i.e., for every person present in the input image; the initial bounding box is green in colour. The distance between each pair of bounding boxes is then calculated using the Euclidean distance. A threshold value for a safe distance is defined in advance, and the calculated distance is compared to it. If the calculated distance is less than the threshold, we conclude that it is a social distancing violation and display a red bounding box around that person in the frame. Similarly, if the calculated value is greater than the threshold, social distance is maintained properly and a green bounding box is displayed around that particular person.
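For illustration, this violation check can be sketched in a few lines of Python. The snippet below is a minimal sketch under our own assumptions: the boxes list of (x, y, w, h) detections and the pixel threshold are placeholders rather than values taken from the implementation.

```python
import math

SAFE_DISTANCE_PX = 75   # assumed safe-distance threshold, in pixels

def centroid(box):
    """Centroid of an (x, y, w, h) bounding box."""
    x, y, w, h = box
    return (x + w / 2.0, y + h / 2.0)

def find_violations(boxes, threshold=SAFE_DISTANCE_PX):
    """Return the set of box indices whose pairwise centroid
    distance falls below the safe-distance threshold."""
    centers = [centroid(b) for b in boxes]
    violators = set()
    for i in range(len(centers)):
        for j in range(i + 1, len(centers)):
            dx = centers[i][0] - centers[j][0]
            dy = centers[i][1] - centers[j][1]
            if math.hypot(dx, dy) < threshold:   # Euclidean distance of the centroids
                violators.update((i, j))
    return violators

# Indices in the returned set would be drawn with red bounding boxes,
# all remaining people with green ones, as described above.
```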
Figure 4 shows the real-time activity of our system and how it works with respect to face mask detection. The initial stage consists of image acquisition followed by preprocessing. The image dataset is then preprocessed and image augmentation is applied. The dataset is split into training and validation sets and trained using DNN models.
Once images are captured in real time, the face detector model is applied to them and the saved trained model for the mask detector is loaded. If a face is detected in real time, the model displays the output as either mask or no mask, and the detection is completed successfully.
Simultaneously, the centroid distances of all the bounding boxes are calculated using the Euclidean distance in order to add the social distance monitoring functionality. The distances are compared to the threshold, and accordingly we obtain the output in the form of either a green or a red bounding box, which depicts a safe social distance or a social distancing violation, respectively.
VI. IMPLEMENTATION
A combination of pre-trained Yolov3 and SSD models is utilized for detecting people and faces, respectively. Moreover, a
trained MobileNetV2 model is employed as a face mask classifier. The Euclidean Distance algorithm is employed to compute
social distancing violations in the crowded areas. Overall, this approach allows for accurate detection of people, faces, and their
compliance with face mask wearing and social distancing guidelines, making it a promising solution for ensuring public health and
safety.
6.1 METHODOLOGY
6.1.1 DATA PREPROCESSING
Data processing is a crucial step in the proposed method for face mask detection and social distancing. The input image is first
preprocessed by resizing and color conversion using the OpenCV library. Then, the pre-trained Yolov3 model is used to detect the
people in the image, and the pre-trained SSD model is employed for face detection. The faces detected are then passed through a
trained MobileNetV2 model, which classifies them into two categories based on the presence or absence of a face mask. The output
of the face mask classifier is combined with the output of the SSD model to determine whether each person in the image is wearing
a mask or not. Finally, the Euclidean Distance algorithm is used to calculate social distancing violations between the detected
people. In real time, a red square box is displayed on a person to indicate whether he or she is wearing the mask properly or not; additionally, a blue square box is also displayed if the individuals are maintaining the proper distance.
To prepare the face mask detection dataset for training, pre-processing of images is necessary. This involves resizing each
image to a standard size of 224x224 pixels, converting them to an array format, and scaling the pixel intensities in the input image
to the range of [-1, 1]. These steps are performed using the preprocess_input convenience function. The pre-processed images and
their corresponding labels, i.e., "with_mask" or "without_mask," are then appended to the data and labels lists, respectively. This
process is performed on the face mask detection dataset, which consists of 4000 samples, including 2000 images of people wearing
masks and 2000 images of people not wearing masks. By performing these pre-processing steps and using suitable deep learning
techniques such as transfer learning, a highly accurate face mask detection model can be trained and used in public places to ensure that people are wearing masks and maintaining social distance.
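As an illustration, this pre-processing stage might be sketched with standard Keras utilities as below. The dataset directory layout and folder names are assumptions, while the 224x224 resizing and the [-1, 1] scaling via preprocess_input follow the description above.

```python
import os
import numpy as np
from tensorflow.keras.preprocessing.image import load_img, img_to_array
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input

DATASET_DIR = "dataset"                      # placeholder path to the Kaggle dataset
CATEGORIES = ["with_mask", "without_mask"]   # one sub-folder per class

data, labels = [], []
for category in CATEGORIES:
    folder = os.path.join(DATASET_DIR, category)
    for filename in os.listdir(folder):
        image = load_img(os.path.join(folder, filename), target_size=(224, 224))
        image = img_to_array(image)          # convert the PIL image to an array
        image = preprocess_input(image)      # scale pixel intensities to [-1, 1]
        data.append(image)
        labels.append(category)

data = np.array(data, dtype="float32")       # stacked image tensor for training
```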
The research involved a multi-step process for training the model to detect face masks and calculate social distancing violations.
Firstly, a dataset of 4000 samples, consisting of 2000 images each for "With_mask" and "Without_mask" categories, was
downloaded from Kaggle. The images were then preprocessed, including resizing to 224x224 pixels, conversion to array format, and scaling of the pixel intensities.
For fine-tuning, the MobileNetV2 model was used. The pre-trained ImageNet weights were loaded, a new FC head was
constructed, and the base layers of the network were frozen. The model was compiled using the Adam optimizer, a learning rate
decay schedule, and binary cross-entropy. After the model was trained, it was evaluated on the test set, and a classification report
was printed in the terminal for inspection. Finally, the face mask classification model was serialized to disk.
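A minimal sketch of this fine-tuning setup is given below. The head architecture (pool size, 128-unit dense layer, dropout rate) and the hyperparameters (learning rate, epoch count, decay schedule) are illustrative assumptions; only the overall recipe (frozen ImageNet base, new FC head, Adam with a decaying learning rate, binary cross-entropy) follows the text.

```python
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.layers import AveragePooling2D, Dense, Dropout, Flatten, Input
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.optimizers.schedules import ExponentialDecay

INIT_LR, EPOCHS = 1e-4, 20   # assumed hyperparameters

# Load MobileNetV2 with ImageNet weights, leaving off its classification head.
base = MobileNetV2(weights="imagenet", include_top=False,
                   input_tensor=Input(shape=(224, 224, 3)))

# Construct a new fully connected head on top of the base.
head = AveragePooling2D(pool_size=(7, 7))(base.output)
head = Flatten()(head)
head = Dense(128, activation="relu")(head)
head = Dropout(0.5)(head)
head = Dense(2, activation="softmax")(head)   # with_mask / without_mask

model = Model(inputs=base.input, outputs=head)
for layer in base.layers:
    layer.trainable = False   # freeze the base layers during fine-tuning

# Adam optimizer with a learning-rate decay schedule and binary cross-entropy.
lr_schedule = ExponentialDecay(INIT_LR, decay_steps=1000, decay_rate=0.96)
model.compile(optimizer=Adam(learning_rate=lr_schedule),
              loss="binary_crossentropy", metrics=["accuracy"])
```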
In addition to the face mask classification model, the YOLOv3 pre-trained model was used to detect people, and the SSD pre-trained model was used to detect faces. The Euclidean distance was used to calculate social distancing violations. The coco.names file, which lists the 80 object classes that the pre-trained model is able to detect, was loaded alongside the model. The YOLOv3 pre-trained weights of the neural network were assigned to weightsPath, and the neural network architecture model was assigned to ConfigPath. The readNetFromDarknet function returned a Network object ready for forward passes and threw an exception in failure cases. The detection threshold value was set to 0.4. These steps collectively resulted in an accurate and reliable model for detecting face masks and calculating social distancing violations, which can be used to ensure public safety during pandemic-like situations.
Step 1: - Import all required library modules for this training script: TensorFlow functions, Keras, scikit-learn, imutils, and matplotlib.
scikit-learn (sklearn) is used for binarizing class labels, segmenting the dataset, and printing a classification report. The imutils paths implementation helps to find and list the images in the dataset, and matplotlib is used to plot the training curves.
Step 2: -
Step 3: - (Lines 56-58) One-hot encode our class labels; this accepts categorical data as input and returns a NumPy array. Using scikit-learn's convenience method (Lines 63-64), segment the data into 80% for training and the remaining 20% for testing.
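A sketch of this step with scikit-learn, assuming the data and labels lists built during pre-processing:

```python
from sklearn.preprocessing import LabelBinarizer
from sklearn.model_selection import train_test_split
from tensorflow.keras.utils import to_categorical

# Binarize the two string class labels, then one-hot encode them into a NumPy array.
lb = LabelBinarizer()
labels_bin = lb.fit_transform(labels)        # "with_mask"/"without_mask" -> 0/1 column
labels_onehot = to_categorical(labels_bin)   # shape (N, 2) one-hot matrix

# 80% training / 20% testing split, stratified so both classes stay balanced.
(trainX, testX, trainY, testY) = train_test_split(
    data, labels_onehot, test_size=0.20, stratify=labels_onehot, random_state=42)
```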
Step 4: - During training, we apply on-the-fly mutations to our images in an effort to improve generalization. This is known as data augmentation, where the random rotation, zoom, shear, shift, and flip parameters are established on Lines 77-84. We use the aug object at training time. Fine-tuning is then used to prepare MobileNetV2.
Load MobileNet with pre-trained ImageNet weights, leaving off head of network (Lines 77- 78)
Construct a new FC head, and append it to the base in place of the old head (Lines 82- 90)
Freeze the base layers of the network (Lines 94-95). The weights of these base layers will not be updated during
the process of backpropagation, whereas the head layer weights will be tuned.
Lines 98-100 compile our model with the Adam optimizer, a learning rate decay schedule, and binary cross-entropy. If
you’re building from this training script with > 2 classes, be sure to use categorical cross-entropy.
Face mask training is launched via Lines 104-109. Notice how our data augmentation object (aug) will be providing
batches of mutated image data.
Once training is complete, we evaluate the resulting model on the test set. Lines 113-117 make predictions on the test set, grabbing the highest-probability class label indices. Then, we print a classification report in the terminal for inspection.
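Step 4 might be sketched as follows, reusing the model, EPOCHS, train/test split, and label binarizer from the earlier sketches; the augmentation ranges, batch size, and output file name are assumed values.

```python
import numpy as np
from sklearn.metrics import classification_report
from tensorflow.keras.preprocessing.image import ImageDataGenerator

BATCH_SIZE = 32   # assumed batch size

# On-the-fly data augmentation: random rotation, zoom, shear, shift, and flip.
aug = ImageDataGenerator(rotation_range=20, zoom_range=0.15,
                         width_shift_range=0.2, height_shift_range=0.2,
                         shear_range=0.15, horizontal_flip=True,
                         fill_mode="nearest")

# Launch training; aug.flow() provides batches of mutated image data.
history = model.fit(aug.flow(trainX, trainY, batch_size=BATCH_SIZE),
                    steps_per_epoch=len(trainX) // BATCH_SIZE,
                    validation_data=(testX, testY),
                    epochs=EPOCHS)

# Evaluate on the test set: take the highest-probability class index per image.
pred_idx = np.argmax(model.predict(testX, batch_size=BATCH_SIZE), axis=1)
print(classification_report(testY.argmax(axis=1), pred_idx,
                            target_names=lb.classes_))

model.save("mask_detector.model")   # serialize the classifier to disk (assumed path)
```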
Step 5: -
The last step is to plot the accuracy and loss curves of this model. Once the plot is ready, it is automatically saved to disk using the --plot file path (Line 26, Step 2).
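A plotting sketch for Step 5, assuming the history object returned by model.fit in the previous sketch; the output file name stands in for the --plot argument.

```python
import numpy as np
import matplotlib.pyplot as plt

N = EPOCHS   # number of epochs used during training
plt.style.use("ggplot")
plt.figure()
plt.plot(np.arange(0, N), history.history["loss"], label="train_loss")
plt.plot(np.arange(0, N), history.history["val_loss"], label="val_loss")
plt.plot(np.arange(0, N), history.history["accuracy"], label="train_acc")
plt.plot(np.arange(0, N), history.history["val_accuracy"], label="val_acc")
plt.xlabel("Epoch")
plt.ylabel("Loss / Accuracy")
plt.legend(loc="lower left")
plt.savefig("plot.png")   # assumed default for the --plot file path
```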
6.2.2 MODULE 2
Step 1: -
Import all required libraries and one self-made module known as conf. This Python file, located at the same path, contains some global values such as NMS_THRES (the threshold value), MIN_CONF, and People_Counter.
Module 2 counts the total number of persons in a frame and returns that number to the output display.
6.2.3 MODULE 3
Step 1: -
Import all required libraries for this module, such as TensorFlow, Keras, NumPy, and SciPy. SciPy is used to compute the distance between points using the Euclidean distance.
Step 2: -
There is a file called coco.names that contains the list of 80 object classes that the pre-trained model is able to detect; the model has been trained only on these 80 object classes. Open this file (Lines 12-13); similarly, set the paths of the YOLOv3 pre-trained weights of the neural network (the .weights file) and the neural network architecture model (the .cfg file) to weightsPath and ConfigPath, respectively (Lines 20-21).
readNetFromDarknet returns a Network object that is ready to run forward passes and throws an exception in failure cases (Line 23); the detection threshold value is set to 0.4.
Load the GPU backend support (Lines 28-33) and load the pre-trained face detection model files, deploy_prototxt and caffemodel.
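The loading step might look like the sketch below. All file paths (coco.names, the YOLOv3 .weights/.cfg pair, and the Caffe face-detector files) are placeholders for wherever the pre-trained files are stored, and the CUDA backend lines assume an OpenCV build with GPU support.

```python
import cv2

# Class names for the 80 COCO object classes the YOLOv3 model was trained on.
labelsPath = "coco.names"
LABELS = open(labelsPath).read().strip().split("\n")

# YOLOv3 person detector: architecture (.cfg) plus pre-trained weights (.weights).
configPath = "yolov3.cfg"
weightsPath = "yolov3.weights"
net = cv2.dnn.readNetFromDarknet(configPath, weightsPath)

# Optional GPU backend, assuming OpenCV was built with CUDA support.
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)

# SSD face detector: Caffe deploy prototxt plus its caffemodel weights.
faceNet = cv2.dnn.readNetFromCaffe("deploy.prototxt",
                                   "res10_300x300_ssd_iter_140000.caffemodel")
```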
Step 3: -
To start the video streaming, set the camera path: the value '0' in VideoCapture() selects the system's built-in camera, while values 1, 2, ... select externally connected cameras, such as a CCTV camera or webcam, according to the number of connected cameras.
Start video streaming to load real-time images in order to detect faces and check that social distancing is maintained. After video streaming starts, the output window size is set to 720x640, and the binarized image is analysed for properties such as the presence, number, area, position, length, and direction of blobs (Lines 44-61).
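A sketch of the video-capture loop described in this step; the display width and window name are assumptions, and the detection and drawing logic of the later steps would run inside the loop before the frame is shown.

```python
import cv2
import imutils

# 0 selects the built-in camera; use 1, 2, ... for external cameras (CCTV, webcam).
vs = cv2.VideoCapture(0)

while True:
    grabbed, frame = vs.read()
    if not grabbed:
        break                                   # stream ended or camera unavailable
    frame = imutils.resize(frame, width=720)    # assumed display width
    # ... face/person detection and box drawing would happen here ...
    cv2.imshow("Output", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):       # press 'q' to quit
        break

vs.release()
cv2.destroyAllWindows()
```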
Then calculate the distance between two points using the formula d = √(x² + y²), where x and y are the horizontal and vertical differences between the two points, and convert the dictionary into a list.
Initialize the a and b lists (Lines 95-96) and append all points, such as (x, y) and (w, h), to them.
For every list element, draw a rectangle and text on the detected face indicating whether the person is wearing a mask or not. If yes, both the rectangle and the text are displayed in green; otherwise they are displayed in red, i.e., an alert.
Construct a blob, detect faces, and initialize lists, two of which the function returns.
Inside the loop, we filter out weak detections (Lines 146-149) and extract bounding boxes while ensuring
bounding box coordinates do not fall outside the bounds of the image (Lines 150-154).
After extracting the face ROIs and pre-processing them (Lines 156-161), the shape of the array is expanded by inserting a new axis at the required position.
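A condensed sketch of this face-detection step is shown below, using the faceNet loaded earlier. The blob parameters and the confidence threshold follow the common OpenCV SSD face-detector recipe and are assumptions rather than values quoted from the paper.

```python
import cv2
import numpy as np
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input

MIN_CONF = 0.5   # assumed minimum detection confidence

def detect_faces(frame, faceNet):
    """Return pre-processed face ROIs and their bounding boxes for one frame."""
    (h, w) = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 1.0, (300, 300), (104.0, 177.0, 123.0))
    faceNet.setInput(blob)
    detections = faceNet.forward()

    faces, locs = [], []
    for i in range(detections.shape[2]):
        confidence = detections[0, 0, i, 2]
        if confidence < MIN_CONF:                # filter out weak detections
            continue
        box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
        (startX, startY, endX, endY) = box.astype("int")
        # keep the bounding box inside the image bounds
        startX, startY = max(0, startX), max(0, startY)
        endX, endY = min(w - 1, endX), min(h - 1, endY)

        face = frame[startY:endY, startX:endX]   # extract the face ROI
        face = cv2.cvtColor(face, cv2.COLOR_BGR2RGB)
        face = cv2.resize(face, (224, 224))
        face = preprocess_input(face.astype("float32"))
        faces.append(np.expand_dims(face, axis=0))   # insert a new batch axis
        locs.append((startX, startY, endX, endY))
    return faces, locs
```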
To count the total number of persons in a frame and determine who maintains social distancing, there must be at least two people detections at a time (required in order to compute our pairwise distance maps). We extract all centroids from the results and compute the Euclidean distances between all pairs of centroids (Lines 168-186).
To evaluate the distance between two or more persons, the Euclidean distance method is used: d(p, q) = √(Σᵢ₌₁ⁿ (pᵢ − qᵢ)²).
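With SciPy, the pairwise centroid distances can be computed in a single call. In the sketch below, the results structure (one probability, bounding box, centroid tuple per detected person) and the MIN_DISTANCE constant are assumptions standing in for the values defined in the conf module.

```python
import numpy as np
from scipy.spatial import distance as dist

MIN_DISTANCE = 50   # assumed safe-distance threshold, in pixels

def find_violations(results):
    """results: list of (probability, bbox, centroid) tuples, one per detected person."""
    violations = set()
    if len(results) >= 2:                          # need at least two people for pairwise distances
        centroids = np.array([r[2] for r in results])
        D = dist.cdist(centroids, centroids, metric="euclidean")
        for i in range(D.shape[0]):
            for j in range(i + 1, D.shape[1]):
                if D[i, j] < MIN_DISTANCE:         # too close: mark both people as violators
                    violations.update((i, j))
    return violations
```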
After the frame is displayed, we capture key presses. If the user presses ‘q’ (quit), we break out of the loop and
perform housekeeping.
The described model has undergone rigorous training, validation, and testing processes on a dataset comprising 4000 images,
with 2000 images featuring masked subjects and 2000 images featuring unmasked subjects. Remarkably, this model has
achieved an accuracy rate of 98%, which is highly impressive. One of the key factors contributing to this success lies in the
implementation of MaxPooling, which provides basic translation invariance to the internal representation and helps to reduce
the number of parameters that the model must learn. Through a sample-based discretization process, the input representation of
the image is down-sampled by reducing its dimensionality. The optimized filter values and pool size are used to filter out the
key features of the image, enabling the model to accurately detect the presence of a mask without causing over-fitting.
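For illustration only, the down-sampling behaviour described here corresponds to a Keras max-pooling layer such as the one in this small, self-contained example; the filter count and pool size are illustrative, not the exact values of our network.

```python
from tensorflow.keras import Input, Model
from tensorflow.keras.layers import Conv2D, MaxPooling2D

inputs = Input(shape=(224, 224, 3))
x = Conv2D(32, (3, 3), activation="relu", padding="same")(inputs)  # illustrative filter count
x = MaxPooling2D(pool_size=(2, 2))(x)   # keeps the max of each 2x2 window: 224x224 -> 112x112
demo = Model(inputs, x)
demo.summary()                          # shows the halved spatial dimensions after pooling
```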
The system has demonstrated the ability to effectively detect partially occluded faces, whether by a mask, hair, or hand. It
assesses the degree of occlusion in four key regions: the nose, mouth, chin, and eyes, which allows it to differentiate between an
annotated mask and a face covered by a hand. As such, a mask covering the entire face, including the nose and chin, will only
be identified as "with mask" by the model. Nevertheless, the model is still faced with challenges such as varying angles and lack
of clarity. Moving, indistinct faces in video streams can also pose challenges, but analyzing the trajectories of several frames in
the video stream can help the model make better decisions regarding whether a subject is wearing a mask or not.
7.1 APPLICATIONS
7.1.1 RETAIL STORES:
Retail stores can use Face Mask Detection and Social Distancing technologies to monitor the number of
customers inside the store and ensure they are following social distancing guidelines. If too many customers are
in the store or are too close to each other, the system can alert store employees to take appropriate action.
7.1.2 PUBLIC TRANSPORTATION:
Public transportation systems can use Face Mask Detection and Social Distancing technologies to monitor
passengers and ensure they are wearing face masks and maintaining safe distances from each other. This can help
prevent the spread of the virus on buses, trains, and other forms of public transportation.
7.1.3 HOSPITALS:
Hospitals can use Face Mask Detection and Social Distancing technologies to monitor staff, patients, and visitors
and ensure they are following proper safety protocols. This can help prevent the spread of the virus within the
hospital and protect vulnerable patients.
7.1.4 AIRPORTS:
Airports can use Face Mask Detection and Social Distancing technologies to monitor passengers and ensure they
are wearing face masks and maintaining safe distances from each other. This can help prevent the spread of the
virus among travelers and airport staff.
7.1.5 PUBLIC GATHERINGS:
Face Mask Detection and Social Distancing technologies can be used to monitor public gatherings such as
concerts, festivals, and sporting events. The system can alert organizers if too many people are in one area or if
individuals are not following social distancing guidelines.
7.1.6 EDUCATIONAL INSTITUTIONS:
Schools and universities can use Face Mask Detection and Social Distancing technologies to monitor students and
ensure they are wearing face masks and maintaining safe distances from each other. This can help prevent the
spread of the virus in educational institutions.
7.2 FUTURE SCOPE
Face mask detection can be improved by combining multiple detection techniques such as image analysis, depth
sensing, and thermal imaging. This would increase the reliability of detection and reduce false positives and false
negatives.
To improve the accuracy of face mask detection, it's important to train the algorithms on diverse data sets that
include different types of masks, varied lighting conditions, and diverse demographics. This would help the system
recognize a wide range of masks and improve its performance in real-world scenarios.
Face mask detection technology can be made more effective by taking into account the context of the situation,
such as the location, crowd density, and the purpose of wearing a mask. For instance, in certain crowded areas, the
technology can be set to be more sensitive to mask detection.
A deep learning-based approach can also prove handy here to detect and limit the spread of disease by enhancing our proposed solution with body gesture analysis to understand whether an individual is coughing or sneezing in public places.
VIII. CONCLUSION
AI and ML are being utilized by companies to serve humanity during the pandemic. Digital product development firms have
launched mask detection API services that allow for the swift creation of a face mask detection system to aid in keeping public
places secure and individuals safe. Our proposed method employs computer vision and MobileNet V2 SSD architecture to
automatically monitor public areas, reducing the need for authorities to perform physical surveillance and assisting in the
containment of COVID-19.
The system can accurately and instantly detect faces of individuals wearing masks and can be easily integrated into existing
business systems while maintaining user data privacy. As lockdowns ease and businesses reopen, our system can efficiently
track public places. Our system is focused on monitoring social distancing and identifying face masks to ensure public health
and is a crucial solution for various industries such as healthcare, retail, temples, shopping centers, metro stations, airports, and
corporate sectors.
IX. REFERENCES
[1] M. A. Fauzi, A. Harjoko, and A. S. Nugroho, "Real-time face mask detector using YOLOv4," in 2021 International
Conference on Information Management and Technology (ICIMTech), 2021, pp.
1-6, doi: 10.1109/ICIMTech52681.2021.9469625.
[2] Liu, Y., Chen, X., Liu, C., & Chen, Z. (2021). SE-YOLOv3: A Mask Detector Based on Squeeze and Excitation
Mechanism. IEEE Access, 9, 111415-111424. https://doi.org/10.1109/ACCESS.2021.3107647
[3] Alaa Tharwat, "A Hybrid Deep Learning and Classical Machine Learning Approach for Face Mask Detection", IEEE
Access, vol. 9, pp. 99598-99607, 2021.
[4] S. K. Singh, D. K. Singh, S. K. Singh, and M. K. Gupta, "COVID-19 face mask detector using deep learning techniques,"
Journal of Ambient Intelligence and Humanized Computing, pp. 1-9, 2021.
[5] Khan, M. U., Khan, S. U., Khan, M. A., & Irshad, M. S. (2021). DeepMasknet: A Deep Learning Approach to Mask Detection and Recognition of the Masked Faces. IEEE Access, 9, 103739-103750.
[6] S. Gupta, A. Kumar, P. Kumar, and M. Varma, "Lightweight face mask detection in the wild using MobileNetV2," arXiv
preprint arXiv:2101.06004, 2021.
[7] M. A. Islam, M. A. Hossain, M. Hasan, and A. Almogren, "Detecting Facial Masks in Videos Using
Deep Learning," in IEEE Access, vol. 9, pp. 101042-101052, 2021, doi: 10.1109/ACCESS.2021.3099477.
[8] Su X, Gao M, Ren J, Li Y, Dong M, Liu X. Face mask detection and classification via deep transfer learning. Multimed
Tools Appl. 2022;81(3):4475-4494. doi: 10.1007/s11042-021-11772-5. Epub 2021 Dec 9. PMID: 34903950; PMCID:
PMC8656443.
[9] A. Negi, P. Chauhan, K. Kumar and R. S. Rajput, "Face Mask Detection Classifier and Model
Pruning with Keras-Surgeon," 2020 5th IEEE International Conference on Recent Advances and Innovations in
Engineering (ICRAIE), Jaipur, India, 2020, pp. 1-6, doi: 10.1109/ICRAIE51050.2020.9358337.
[10] I. B. Venkateswarlu, J. Kakarla and S. Prakash, "Face mask detection using MobileNet and Global Pooling Block,"
2020 IEEE 4th Conference on Information & Communication Technology (CICT), Chennai, India, 2020, pp. 1-5, doi:
10.1109/CICT51604.2020.9312083.
[11] Sharma, V. (2020). Face Mask Detection Using YOLOv5 for COVID-19. California State University San Marcos.
Retrieved from https://books.google.co.in/books?id=HuR2zgEACAAJ
[12] Kumar, V., Kumari, R., Kumar, V., & Verma, R. (2021). Face Mask Detection System using Resnet and RFMRD in
PyTorch. EAI Endorsed Transactions on Scalable Information Systems, 8(27), e5. https://doi.org/10.4108/eai.8-1
2021.167843
[14] Ullah, N., Javed, A., Ghazanfar, M. A., Alsufyani, A., & Bourouis, S. (2022). A novel
DeepMaskNet model for face mask detection and masked facial recognition. Journal of King Saud
University - Computer and Information Sciences, 34(10), 9905-9914. https://doi.org/10.1016/j.jksuci.2021.12.017
[15] Shilpa Sethi, Mamta Kathuria, and Trilok Kaushik. "Face mask detection using deep learning: An approach to reduce
risk of Coronavirus spread." Journal of Biomedical Informatics 120 (2021): 103848.
https://doi.org/10.1016/j.jbi.2021.103848.
[16] Kumar, A., Kalia, A., Verma, K., Sharma, A., & Kaushal, M. (2021). Scaling up face masks detection with YOLO on a
novel dataset. Optik, 239, 166744. https://doi.org/10.1016/j.ijleo.2021.166744.
[17] Loey, M., Manogaran, G., Taha, M. H. N., & Khalifa, N. E. M. (2021). A hybrid deep transfer learning model with
machine learning methods for face mask detection in the era of the COVID-19 pandemic. Measurement, 167, 108288.
https://doi.org/10.1016/j.measurement.2020.108288.
[18] H. Adusumalli, D. Kalyani, R. K. Sri, M. Pratapteja and P. V. R. D. P. Rao, "Face Mask Detection Using OpenCV,"
2021 Third International Conference on Intelligent Communication Technologies and Virtual Mobile Networks (ICICV),
Tirunelveli, India, 2021, pp. 1304-1309, doi: 10.1109/ICICV50876.2021.9388375.
[19] Xinbei Jiang, Tianhan Gao, Zichen Zhu, and Yukang Zhao. "Real-Time Face Mask Detection Method Based on
YOLOv3." Electronics 10, no. 7 (2021): 837. https://doi.org/10.3390/electronics10070837.
[20] K. Suresh, M. Palangappa and S. Bhuvan, "Face Mask Detection by using Optimistic Convolutional Neural Network,"
2021 6th International Conference on Inventive Computation Technologies (ICICT), Coimbatore, India, 2021, pp. 1084-
1089, doi: 10.1109/ICICT50816.2021.9358653.
[21] Beby, M. L. A., Arthy, M. V., Priyanka, M., Sindhu, K., & Shajina, R. (2021). Diabetic
Retinopathy Identification with Fuzzy C-Means Segmentation and Neural Network Classifier. Irish Interdisciplinary
Journal of Science & Research (IIJSR). Retrieved from SSRN: https://ssrn.com/abstract=3876254
[22] Amer, F., & Al-Tamimi, M. S. H. (2022). Face Mask Detection Methods and Techniques: A Review. The International Journal of Nonlinear Analysis and Applications (IJNAA), 13(1), 3811-3823. DOI: 10.22075/ijnaa.2022.6166.
[23] Nagrath, P., Jain, R., Madan, A., Arora, R., Kataria, P., & Hemanth, J. (2021). SSDMNV2: A real time DNN-based face
mask detection system using single shot multibox detector and MobileNetV2. Sustainable Cities and Society, 66, 102692.
https://doi.org/10.1016/j.scs.2020.102692.
[24] S. A. Sanjaya and S. Adi Rakhmawan, "Face Mask Detection Using MobileNetV2 in The Era of COVID-19 Pandemic,"
2020 International Conference on Data Analytics for Business and Industry: Way Towards a Sustainable Economy
(ICDABI), Sakheer, Bahrain, 2020, pp. 1-5, doi: 10.1109/ICDABI51230.2020.9325631.
[25] S. Susanto, F. A. Putra, R. Analia and I. K. L. N. Suciningtyas, "The Face Mask Detection For Preventing the Spread of
COVID-19 at Politeknik Negeri Batam," 2020 3rd International Conference on Applied Engineering (ICAE), Batam,
Indonesia, 2020, pp. 1-5, doi: 10.1109/ICAE50557.2020.9350556.