
FaceTrack - Automated Attendance Monitoring System Using Facial Detection

Raj More, Department of Computer Engineering, K. J. Somaiya Institute of Technology, Mumbai, India, [email protected]
Aryan Mandke, Department of Computer Engineering, K. J. Somaiya Institute of Technology, Mumbai, India, [email protected]
Pranav Lohar, Department of Computer Engineering, K. J. Somaiya Institute of Technology, Mumbai, India, [email protected]
Pratul Jagtap, Department of Computer Engineering, K. J. Somaiya Institute of Technology, Mumbai, India, [email protected]
Vandana Salve, Department of Computer Engineering, K. J. Somaiya Institute of Technology, Mumbai, India, [email protected]

Abstract—With the growing need for smarter workforce management, automating attendance tracking through facial recognition has emerged as a relevant and efficient solution. Developing such a system poses several challenges, particularly in ensuring high accuracy under dynamic lighting, facial expressions, and occlusions. This paper presents an Automated Attendance Monitoring System that integrates facial recognition with a Django-based backend for efficient attendance and payroll processing. Using libraries such as OpenCV, TensorFlow, and NumPy, the system captures live facial data, verifies identity, and logs attendance with minimal human intervention. The proposed model achieves a 92% recognition accuracy under diverse conditions, demonstrating practical applicability in real-world organizational settings.

Keywords: Facial Recognition, Attendance Automation, Computer Vision, OpenCV, Machine Learning, Django

I. INTRODUCTION

The growing demand for automated and secure workforce management solutions has led to the rapid adoption of facial recognition technology across multiple industries. Traditional methods of attendance tracking, such as biometric punch-in systems or manual registers, are often plagued with issues such as buddy punching, proxy attendance, and human error. These limitations not only compromise data integrity but also increase the workload of administrative personnel. In contrast, facial recognition offers a non-intrusive, contactless, and highly efficient alternative that ensures greater accuracy and transparency.

Facial recognition systems operate by capturing an image of a person's face, extracting unique facial features, and comparing them to a pre-registered database to verify identity. With advances in computer vision and machine learning, especially through deep learning frameworks, such systems have become increasingly capable of functioning in real time and under challenging conditions. The implementation of these technologies in an automated attendance system thus has the potential to revolutionize how organizations manage their workforce.

In this project, we propose a comprehensive facial recognition-based attendance system that integrates modern computer vision techniques with a web-based backend for data management. Developed using OpenCV, TensorFlow, and Django, our system captures live video streams, identifies individuals accurately, and logs their attendance seamlessly. By eliminating manual processes, the system not only reduces errors but also ensures a higher degree of compliance and security. Furthermore, the integration of a web framework like Django allows for real-time data access and streamlined HR operations. This paper discusses the motivation, methodology, implementation, and performance evaluation of our proposed system.

II. LITERATURE SURVEY

The field of facial recognition has experienced significant progress over the past few years. Early facial recognition systems used simpler techniques like Eigenfaces and Fisherfaces, but these had limitations in handling variations in lighting, pose, and occlusions. As machine learning gained prominence, particularly deep learning, models based on Convolutional Neural Networks (CNNs) began outperforming traditional approaches by extracting robust features directly from image data. A pivotal development in face detection came with the introduction of the Viola-Jones algorithm, which employed Haar-like features to detect faces in real time, making detection faster and more reliable.

In recent years, algorithms like the Multi-task Cascaded Convolutional Networks (MTCNN) have been utilized not only for detecting faces but also for aligning them, addressing many challenges in real-world applications, such as face orientation and occlusion. According to Zhang et al. (2016), MTCNN showed excellent performance in jointly detecting faces and aligning facial features across multiple face poses and varying lighting conditions, which significantly improved recognition accuracy.

One key study conducted by Klare et al. (2015) on the IARPA Janus Benchmark A (IJB-A) dataset demonstrated how advanced face recognition systems could be tested in real-world, unconstrained conditions, including variations in age, facial hair, and lighting. These findings helped in refining the algorithms for more practical applications, such as attendance monitoring systems in educational institutions.

Generative models, particularly Generative Adversarial Networks (GANs), have also found a niche in this area. Goodfellow et al. (2014) introduced GANs, which have shown potential in generating high-quality synthetic images. In the context of facial recognition, GANs can generate faces with varied expressions or simulate different lighting conditions, thereby enhancing the model's robustness to such variations.

The use of OpenCV has been prevalent in many academic and industrial applications for real-time computer vision tasks, including face detection and recognition. OpenCV's rich functionality and ease of integration with Python libraries like TensorFlow have made it a go-to tool for researchers and developers. OpenCV-based face recognition systems, when paired with machine learning models such as Support Vector Machines (SVM) or deep neural networks, have been able to achieve state-of-the-art performance across different datasets.

Facial recognition technology is not without controversy, especially regarding privacy concerns. Several studies highlight the ethical implications of using facial recognition for attendance tracking, stressing the importance of data privacy and informed consent. As facial recognition is increasingly used in public spaces, there are ongoing debates about its potential for misuse in surveillance and the need for transparent policies to govern its application.

III. METHODOLOGY

The methodology for the development of the "Automated Attendance Monitoring System using Facial Recognition" is a structured approach that involves several critical stages, each designed to address different aspects of the problem. This methodology blends traditional systems engineering with cutting-edge techniques in computer vision and machine learning to build an efficient, accurate, and robust facial recognition-based attendance system. The development process is organized into several key phases: system design, data collection and preprocessing, face detection, face recognition, attendance marking, system integration, and testing.

1. System Design

The design of the Automated Attendance Monitoring System was centered around the need to automate the process of attendance tracking while minimizing manual intervention and human error. The system was developed with an architecture that allows for the seamless integration of facial recognition technology into existing classroom or office environments. The key components of the system include a camera for real-time facial image capture, a backend server to process and store attendance data, and a web-based interface for displaying attendance records and reports.

The design focused on the following:

● Accuracy and Efficiency: The system must be able to identify individuals quickly and accurately under various environmental conditions such as different lighting, facial expressions, and occlusions.

● Scalability: The system must handle an increasing number of users without degradation in performance.

● Security and Privacy: The system must store and process facial data securely, in compliance with privacy laws and regulations.

The software and hardware requirements were carefully chosen to ensure that the system could operate efficiently in real time. For instance, the choice of hardware was optimized for low-cost, high-performance cameras that could provide clear images for face recognition. Additionally, the software stack was selected based on factors such as ease of integration, scalability, and compatibility with facial recognition algorithms.

2. Data Collection and Preprocessing

The first step in the system's development involved gathering data for facial recognition. A robust dataset is critical for training a face recognition model that can generalize well across various subjects and conditions. The dataset used for training the model included images of faces collected from a group of students. These images were taken in different lighting conditions, with various expressions, and from different angles to simulate real-world scenarios.

Once the dataset was collected, preprocessing techniques were applied to prepare the data for facial recognition. Preprocessing involved the following steps:

● Image Alignment and Cropping: Each image was aligned to focus on the face, ensuring that the system could detect key facial features (eyes, nose, mouth) consistently across different images.

● Face Normalization: This step involved standardizing the images to a consistent size and color space, which helps in reducing the complexity of face recognition and improving the model's performance.

● Data Augmentation: To further improve the robustness of the model, data augmentation techniques were applied. This involved artificially generating new training images by applying transformations such as rotation, scaling, and flipping, to simulate the variety of conditions under which the system might operate.

The quality of the data is crucial for the accuracy of the face recognition model, so careful attention was given to ensuring that the data was diverse and well-prepared for training.
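To make this stage concrete, a minimal preprocessing and augmentation sketch using OpenCV and NumPy is shown below. The target image size, the 10-degree rotation, and the scaling factors are illustrative assumptions, not the exact values used in the deployed system.

```python
import cv2
import numpy as np

TARGET_SIZE = (160, 160)  # assumed input size for the embedding model

def preprocess_face(image_path):
    """Load a face image, convert to RGB, resize, and scale pixel values to [0, 1]."""
    img = cv2.imread(image_path)                # BGR image read from disk
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # consistent color space
    img = cv2.resize(img, TARGET_SIZE)          # consistent spatial size
    return img.astype(np.float32) / 255.0       # normalized pixel range

def augment(img):
    """Generate simple augmented variants: horizontal flip, small rotation, and rescale."""
    h, w = img.shape[:2]
    rotation = cv2.getRotationMatrix2D((w / 2, h / 2), 10, 1.0)  # rotate by 10 degrees
    return [
        cv2.flip(img, 1),                                        # mirrored copy
        cv2.warpAffine(img, rotation, (w, h)),                   # rotated copy
        cv2.resize(cv2.resize(img, (w // 2, h // 2)), (w, h)),   # rescaled copy
    ]
```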
3. Face Detection

The next step in the methodology was to implement face detection, which is the process of identifying and locating faces within an image or video feed. Face detection serves as the first step in the overall facial recognition process, as only detected faces are then processed for identification.

For this system, the Haar Cascade Classifier from OpenCV was initially chosen for face detection due to its speed and effectiveness in real-time applications. The Haar Cascade algorithm uses a series of positive and negative images to train a classifier to detect faces by examining rectangular regions within an image. The classifier is fast and efficient, making it suitable for real-time attendance systems.

However, due to some limitations in handling occlusions, lighting, and angle variations, a more advanced deep learning-based approach was integrated into the system. The MTCNN (Multi-task Cascaded Convolutional Network) algorithm was implemented to enhance the accuracy of face detection. MTCNN works by detecting facial landmarks (eyes, nose, and mouth) and aligning the detected faces, even under various conditions. This method significantly improves the reliability of face detection under challenging conditions.

Once a face is detected in the live feed, the next step is to pass the detected face to the facial recognition module for identification.
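A hedged sketch of this two-detector setup is given below. It uses OpenCV's bundled Haar cascade files and the `mtcnn` Python package (reference [14]); the fall-back logic and the 0.90 confidence cut-off are illustrative assumptions rather than the system's exact configuration.

```python
import cv2
from mtcnn import MTCNN  # pip install mtcnn (see reference [14])

# Haar cascade bundled with OpenCV: fast, works well for frontal faces
haar = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
# MTCNN: deep-learning detector that also returns facial landmarks for alignment
mtcnn_detector = MTCNN()

def detect_faces(frame_bgr):
    """Return a list of (x, y, w, h) face boxes for one BGR video frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    boxes = haar.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(boxes) > 0:
        return [tuple(box) for box in boxes]

    # Fall back to MTCNN when the Haar cascade finds nothing
    # (tilted faces, partial occlusion, poor lighting).
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    detections = mtcnn_detector.detect_faces(rgb)
    return [tuple(d["box"]) for d in detections if d["confidence"] > 0.90]
```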
4. Face Recognition

Face recognition is the core functionality of the system, where the system identifies the individual detected in the live video stream by comparing the extracted facial features with a database of enrolled faces. The face recognition process involves two key steps: feature extraction and matching.

1. Feature Extraction: After detecting the face, the next task is to extract the unique features of the face. This is done using a pre-trained deep learning model, typically a Convolutional Neural Network (CNN) such as FaceNet or VGG-Face. These models are trained to map faces to a high-dimensional space, where the distance between two embeddings corresponds to the similarity between the faces.

2. Face Matching: Once the facial features are extracted, they are compared with those stored in the database. The system uses a distance metric (typically Euclidean distance or cosine similarity) to measure the similarity between the input face and the faces in the database. If the distance is below a certain threshold, the system identifies the person and proceeds to mark attendance. If no match is found, the system will prompt the user to try again or flag the image for manual verification.

This stage is crucial to the success of the system, as the accuracy of face recognition directly impacts the reliability of the attendance marking. To enhance recognition performance, the system incorporates model retraining using new images, and data augmentation is performed regularly to simulate new facial expressions and conditions.
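The matching step can be illustrated with the following sketch. The 0.9 Euclidean-distance threshold and the in-memory dictionary of enrolled embeddings are assumptions made for demonstration; the paper specifies only that FaceNet-style embeddings are compared against stored embeddings with a distance metric.

```python
import numpy as np

DIST_THRESHOLD = 0.9  # assumed Euclidean threshold, tuned on held-out data

def match_face(embedding, enrolled_db):
    """Return the best-matching student ID, or None if no enrolled face is close enough.

    `enrolled_db` maps a student ID to the embedding stored at enrolment time,
    e.g. {"S101": np.ndarray of shape (128,), ...}.
    """
    best_id, best_dist = None, float("inf")
    for student_id, stored in enrolled_db.items():
        dist = np.linalg.norm(embedding - stored)  # Euclidean distance in embedding space
        if dist < best_dist:
            best_id, best_dist = student_id, dist
    if best_dist < DIST_THRESHOLD:
        return best_id   # accepted match: attendance can be marked
    return None          # unknown face: retry or flag for manual verification
```

In practice the threshold is tuned on a validation set: a lower value rejects more genuine students, while a higher value increases the chance of false matches.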
5. Attendance Marking and Database Integration

Once a face has been successfully recognized, the system automatically records the individual's attendance. The attendance data is stored in a relational database like SQLite or MySQL, which is accessible to both students and administrative staff. Each attendance record includes the student's ID, timestamp, and status (present or absent).

The system ensures that the attendance records are updated in real time, and any discrepancies are flagged for review. The backend system also performs checks to ensure that attendance is only marked once per session per individual to prevent duplicate entries. Additionally, a log file is generated, capturing all attendance-related actions for auditing purposes.

The database is connected to a web-based interface that allows authorized users (faculty, administrators) to view real-time attendance updates, generate attendance reports, and review individual student records. The interface is designed to be intuitive and easy to use, making it accessible for both technical and non-technical users.
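A minimal sketch of how such records could be modelled in Django is shown below. The model names, fields, and the one-record-per-session rule are hypothetical, since the paper does not publish its schema; they merely illustrate how duplicate marking can be prevented at the database level.

```python
# attendance/models.py -- hypothetical schema for illustration only
from django.db import models

class Student(models.Model):
    roll_no = models.CharField(max_length=20, unique=True)
    name = models.CharField(max_length=100)

class AttendanceRecord(models.Model):
    STATUS_CHOICES = [("P", "Present"), ("A", "Absent")]

    student = models.ForeignKey(Student, on_delete=models.CASCADE)
    session_date = models.DateField()
    marked_at = models.DateTimeField(auto_now_add=True)
    status = models.CharField(max_length=1, choices=STATUS_CHOICES, default="P")

    class Meta:
        # One record per student per session prevents duplicate marking.
        constraints = [
            models.UniqueConstraint(
                fields=["student", "session_date"], name="one_mark_per_session"
            )
        ]

def mark_present(roll_no, session_date):
    """Idempotent marking: returns True only the first time the student is marked that day."""
    student = Student.objects.get(roll_no=roll_no)
    _, created = AttendanceRecord.objects.get_or_create(
        student=student, session_date=session_date, defaults={"status": "P"}
    )
    return created
```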
6. System Integration and Testing

The final phase of the methodology involved the integration of all system components into a cohesive solution. The face detection and recognition algorithms were integrated with the backend database and the web interface. This integration ensured that the entire process – from image capture to attendance marking and reporting – worked smoothly and efficiently.

Testing was performed at multiple stages to evaluate the system's performance under different scenarios:

● Real-Time Testing: The system was tested in a live environment with multiple users to verify its ability to process video feeds in real time and accurately identify faces.

● Accuracy Testing: The system's recognition accuracy was measured by testing it against a set of known faces and comparing the results with manual attendance records.

● Load Testing: The system was subjected to stress testing to ensure that it could handle a large number of users simultaneously without performance degradation.

The system was also tested for robustness, ensuring that it could handle environmental variations such as lighting changes, face occlusions (e.g., masks), and slight variations in facial expressions without significantly affecting recognition accuracy.
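The accuracy-testing step described above can be expressed as a simple comparison against the manual register. The dictionary-based representation of both logs below is an assumption for illustration.

```python
def recognition_accuracy(system_log, manual_log):
    """Fraction of (session, student) pairs where the system agrees with the manual register.

    Both arguments are assumed to map a (session_id, student_id) key to "P" or "A".
    """
    if not manual_log:
        return 0.0
    correct = sum(
        1 for key, status in manual_log.items()
        if system_log.get(key, "A") == status   # missing system entries count as absent
    )
    return correct / len(manual_log)

# Example: a value of 0.92 over the evaluation sessions corresponds to the 92%
# figure reported in the results section.
```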
7. Optimization and Final Deployment

Once testing was completed, the system underwent optimization to improve its speed and reduce resource consumption. This included optimizing the face detection and recognition algorithms to run more efficiently, reducing the time it takes to process each frame of video.

The system was then deployed in the target environment, where it was used for monitoring attendance. Continuous feedback from users was gathered to identify areas for improvement, and the system was periodically updated to improve performance and accommodate new requirements.

In conclusion, the methodology used to develop the "Automated Attendance Monitoring System using Facial Recognition" focused on designing a comprehensive, reliable, and efficient solution for automating attendance tracking. Through careful system design, data preprocessing, advanced facial recognition techniques, and real-time testing, the system achieves high accuracy and reliability, ensuring the smooth operation of attendance tracking with minimal human intervention.
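Putting the phases together, a simplified top-level loop might look like the sketch below. The helper functions (`detect_faces`, `embed_face`, `match_face`, `mark_present`) refer to the illustrative snippets above or are assumed to exist; the actual system additionally handles logging, error recovery, and the web interface.

```python
import cv2
from datetime import date

def run_attendance_loop(enrolled_db, camera_index=0):
    """Capture frames, detect and recognise faces, and mark attendance once per student.

    `detect_faces`, `embed_face`, `match_face`, and `mark_present` are the assumed
    helpers sketched in the preceding subsections.
    """
    cap = cv2.VideoCapture(camera_index)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            for (x, y, w, h) in detect_faces(frame):             # detection stage
                face = frame[y:y + h, x:x + w]
                embedding = embed_face(face)                     # assumed FaceNet-style encoder
                student_id = match_face(embedding, enrolled_db)  # matching stage
                if student_id is not None:
                    mark_present(student_id, date.today())       # idempotent attendance marking
            cv2.imshow("FaceTrack", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):                # press 'q' to stop the loop
                break
    finally:
        cap.release()
        cv2.destroyAllWindows()
```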
IV. RESULT AND DISCUSSION

Fig. 1. Face Detection System used for Virat Kohli.

Fig. 2. Facial Detection of Akshay Kumar.

The development and deployment of the "Automated Attendance Monitoring System using Facial Recognition" led to significant outcomes that demonstrate the system's effectiveness, reliability, and practical applicability. The system was tested across multiple real-world scenarios involving students in classroom environments under varying lighting conditions, facial orientations, and expression changes. The evaluation focused on several key performance metrics, including recognition accuracy, processing speed, system responsiveness, and user satisfaction. The results were highly promising, and the system successfully addressed the key limitations of traditional attendance systems, such as manual errors, proxy attendance, and administrative delays.

The core functionality of the system—automatic face recognition and attendance marking—achieved a recognition accuracy of 92% across different testing conditions. This was determined by comparing the system's recognition results against manually maintained attendance logs during several sessions. The recognition process was tested under both ideal and non-ideal conditions, including low lighting, partial occlusion (e.g., masks, spectacles), and varying angles of face orientation. The integration of advanced facial recognition techniques such as MTCNN for face detection and FaceNet for feature extraction significantly improved the accuracy, even when images were not perfectly aligned or centered. Moreover, the implementation of data augmentation techniques during model training allowed the system to generalize well across unseen data, thus reducing false negatives and improving reliability.

In terms of performance, the system demonstrated real-time processing capabilities, with an average face detection and recognition time of less than 2 seconds per user. This was essential in classroom settings where multiple students enter the room at once, and the system needs to process faces quickly without causing delays or bottlenecks. Efficient coding practices and lightweight, optimized models ensured that the system could be deployed on standard desktop or laptop configurations without requiring high-end GPUs or specialized hardware. This made the system cost-effective and easily scalable across multiple classrooms or departments.
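As a rough illustration of how this per-user latency can be measured, the following timing wrapper uses the assumed helper functions from the methodology sketches; it is not the instrumentation used by the authors.

```python
import time

def timed_recognition(frame, enrolled_db):
    """Measure one detection-plus-recognition pass; returns (student_id, elapsed_seconds)."""
    start = time.perf_counter()
    student_id = None
    boxes = detect_faces(frame)                           # assumed detector from the sketches above
    if boxes:
        x, y, w, h = boxes[0]
        embedding = embed_face(frame[y:y + h, x:x + w])   # assumed embedding helper
        student_id = match_face(embedding, enrolled_db)
    elapsed = time.perf_counter() - start                 # expected to stay under ~2 s per user
    return student_id, elapsed
```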
From a usability perspective, the system's web interface provided an intuitive platform for faculty and administrators to track and manage attendance records. Real-time updates, automated report generation, and secure access controls ensured that users could interact with the system effectively without needing extensive training. Feedback collected from faculty members indicated a high level of satisfaction with the ease of use and reliability of the system. Additionally, students appreciated the contactless nature of attendance marking, especially in the context of post-pandemic hygiene and safety concerns.

Another important aspect revealed during discussions was the system's capability to prevent proxy attendance, a common issue in manual systems. Since attendance is marked only after successful face verification against a trained database, it is not possible for students to mark attendance for their peers. This feature alone significantly enhances the integrity and fairness of the attendance process and is one of the most appreciated aspects of the system.

Despite these successes, the discussion also highlighted a few limitations that need to be addressed in future iterations. One such limitation was the dependence on good lighting conditions for optimal performance. Although the system performed well under most classroom lighting setups, performance dropped slightly in extremely low-light environments. Another challenge was handling large-scale attendance scenarios, such as seminar halls with hundreds of students. In such cases, the camera resolution and processing capability need to be scaled accordingly to avoid recognition delays or missed detections. Additionally, individuals with significant changes in appearance (e.g., a new hairstyle, heavy makeup, or facial hair) sometimes experienced recognition mismatches, which pointed to the need for regular database updates and model retraining.

In conclusion, the results of this project validate the feasibility and effectiveness of using facial recognition for automated attendance monitoring. The system not only reduces administrative overhead but also improves accuracy, accountability, and transparency in attendance tracking. The discussions surrounding the results reinforce the fact that while the system has achieved a high level of operational success, there is room for further enhancements. These include implementing adaptive lighting correction, improving facial feature matching algorithms, enabling mobile device compatibility, and incorporating facial recognition with additional biometric methods for multi-factor authentication. Overall, the project provides a solid foundation for future developments in smart attendance systems using AI and computer vision technologies.

V. CONCLUSION

The implementation of the "Automated Attendance Monitoring System using Facial Recognition" marks a significant step forward in the modernization and automation of routine academic administrative tasks, particularly attendance tracking. By integrating advanced computer vision techniques and machine learning algorithms into a Django-based web application, the project effectively demonstrates the practical use of artificial intelligence in solving real-world problems. The system successfully eliminates traditional challenges associated with manual attendance systems, such as human error, proxy attendance, time consumption, and the inefficiencies of paper-based tracking. Through the use of facial recognition, attendance can now be captured in real time, contactlessly, and with minimal user intervention, making it not only more efficient but also more hygienic and secure—qualities that have gained even greater relevance in the wake of global health concerns such as the COVID-19 pandemic.

The core contribution of this project lies in its ability to accurately detect and recognize individual faces under varying conditions. By leveraging libraries such as OpenCV, TensorFlow, NumPy, and others, the system is trained to adapt to changes in lighting, facial expressions, and partial obstructions. This is a notable achievement, considering the dynamic nature of classroom environments where such variances are common. Additionally, the system's ability to maintain over 90% recognition accuracy even in non-ideal situations affirms the robustness and adaptability of the developed model. Furthermore, the data augmentation and model retraining strategies employed during the development phase greatly contributed to improving the model's generalization capability, ensuring consistent performance across different batches of students and real-time scenarios.

Another key takeaway from this project is the user-centric design of the system. The web interface provides an intuitive platform for faculty members to review attendance records, generate reports, and manage student data seamlessly. It also reduces the workload of teaching and administrative staff by automating the repetitive and time-intensive task of roll calls, thereby allowing them to focus more on educational engagement. Additionally, the secure backend, coupled with features like encrypted data storage and role-based access control, ensures that the system upholds the principles of data privacy and integrity, which are essential in any institutional setup.

The project also opens up multiple avenues for future development. For instance, integrating this system with Learning Management Systems (LMS) or Human Resource Management Systems (HRMS) can expand its utility beyond attendance tracking alone. Features such as emotion detection, behavioral analysis, and real-time performance monitoring can also be added to evolve the system into a more comprehensive student or employee monitoring tool. Moreover, implementing cloud-based storage and deploying the model on edge devices such as a Raspberry Pi with a camera module can help scale the solution to environments with minimal hardware infrastructure. There is also potential to introduce multi-modal authentication by combining facial recognition with voice or fingerprint data for added security and reliability.

In conclusion, the project has met its intended goals and provided a scalable, efficient, and reliable alternative to conventional attendance systems. It showcases the power of integrating AI and computer vision into administrative domains and highlights how emerging technologies can be harnessed to address legacy inefficiencies in institutional processes. The "Automated Attendance Monitoring System using Facial Recognition" not only enhances operational efficiency but also sets the groundwork for more intelligent, data-driven academic management systems in the future. The learning outcomes of this project also extend beyond its technical implementation—encouraging teamwork, critical thinking, and innovation—which are essential skills in the ever-evolving field of computer engineering.
REFERENCES

[1] OpenCV Documentation, "Face Recognition with OpenCV and Python," 2023. [Online]. Available: https://docs.opencv.org/
[2] I. Goodfellow et al., "Generative Adversarial Nets," Advances in Neural Information Processing Systems (NeurIPS), 2014. [Online]. Available: https://papers.nips.cc/paper/2014/hash/5ca3e9b122f61f8f06494c97b1afccf3-Abstract.html
[3] K. Zhang, Z. Zhang, Z. Li, and Y. Qiao, "Joint Face Detection and Alignment Using Multi-task Cascaded Convolutional Networks," IEEE Signal Processing Letters, vol. 23, no. 10, pp. 1499–1503, 2016. [Online]. Available: https://ieeexplore.ieee.org/document/7553523
[4] F. Zhang, X. Xu, and Z. Qiao, "A Review on Deep Learning Applications in Face Recognition," Journal of Information Security Research, 2021. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S1877050920302345
[5] B. Klare et al., "Pushing the Frontiers of Unconstrained Face Detection and Recognition: IARPA Janus Benchmark A," CVPR, 2015.
[6] O. M. Parkhi, A. Vedaldi, and A. Zisserman, "Deep Face Recognition," British Machine Vision Conference (BMVC), 2015. [Online]. Available: https://www.robots.ox.ac.uk/~vgg/publications/2015/Parkhi15/
[7] F. Schroff, D. Kalenichenko, and J. Philbin, "FaceNet: A Unified Embedding for Face Recognition and Clustering," CVPR, 2015.
[8] H. Wang, Y. Wang, Z. Zhou, X. Ji, and Z. Li, "CosFace: Large Margin Cosine Loss for Deep Face Recognition," CVPR, 2018.
[9] X. Zhu, Z. Lei, X. Liu, H. Shi, and S. Z. Li, "Face Alignment Across Large Poses: A 3D Solution," CVPR, 2016.
[10] Dlib Library, "Machine Learning Toolkit." [Online]. Available: http://dlib.net/
[11] TensorFlow Documentation, "TensorFlow: An Open Source Machine Learning Framework." [Online]. Available: https://www.tensorflow.org/
[12] NumPy Documentation, "Numerical Python." [Online]. Available: https://numpy.org/
[13] Django Software Foundation, "The Web Framework for Perfectionists with Deadlines." [Online]. Available: https://www.djangoproject.com/
[14] MTCNN GitHub Repository, "Multi-task Cascaded Convolutional Networks for Face Detection." [Online]. Available: https://github.com/ipazc/mtcnn
[15] LabelImg, "Labeling Tool for Object Detection." [Online]. Available: https://github.com/tzutalin/labelImg
[16] LFW Dataset, "Labeled Faces in the Wild," University of Massachusetts Amherst. [Online]. Available: http://vis-www.cs.umass.edu/lfw/
[17] Kaggle, "Face Recognition Datasets." [Online]. Available: https://www.kaggle.com/
[18] J. Zhao, Y. Cheng, Y. Yang, and J. Cai, "Towards Pose-Invariant Face Recognition in the Wild," IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2018.
[19] T. Baltrusaitis, P. Robinson, and L. P. Morency, "OpenFace: An Open Source Facial Behavior Analysis Toolkit," IEEE Winter Conference on Applications of Computer Vision (WACV), 2016.
