
FACE RECOGNITION PROCESSING WITH OPENCV INTEGRATION USING AI AND DEEP LEARNING ALGORITHMS


Hema Nagalla1, Nalla Gnana Suma2, Grandhi Sri Kavya Sudha3, Vemula Pavana Pravallika4, Saladi Sri Venkata Anjani5, Ganga Bhavani Billa6
1,2,3,4,5 Department of Artificial Intelligence and Machine Learning, Bonam Venkata Chalamayya Engineering College, India
6 Department of Computer Science and Machine Learning, Bonam Venkata Chalamayya Engineering College, India

Abstract
Face recognition systems compare photographs of faces with a dataset, which is a collection of photos. One of the most intriguing study topics for specialists in applications involving human-computer interaction is facial recognition analysis. This paper shows how to use Python and OpenCV to create a face recognition system in practice. The system is made to recognize and detect well-known faces in a live video broadcast. We use a K-Nearest Neighbour (KNN)-based model for face recognition, utilizing the face_recognition and simple_facerec libraries, and Histogram of Oriented Gradients (HOG) for face detection. Data collection, face encoding, and the methodical application of the algorithm are all covered in detail in the paper. The system's performance measurements and possible uses in security and customized user experiences are also covered.

Keywords: Face Recognition, OpenCV, CNN, KNN, HOG, LBPH.

1. INTRODUCTION

Face recognition technology has advanced quickly to become one of the most well-known and extensively used biometric systems. It uses sophisticated algorithms to analyze each person's distinct facial traits in order to identify or validate them. Convolutional neural networks (CNNs) and deep learning models are used in contemporary face recognition systems to translate complex facial features, such as the spatial relationships between the mouth, nose, eyes, and other landmarks, into high-dimensional feature vectors. Applications for this technology can be found in a number of fields, such as security, attendance and access control, tracking, and customized client interactions. For instance, as Fig-1 shows, facial recognition enables smooth attendance verification, offering accuracy and efficiency in real-time applications.

Fig-1: Face Recognition

Traditional techniques like manual entry or physical ID cards are no longer necessary thanks to the use of mobile devices or integrated systems for the collection and analysis of facial data, which reduces errors and the possibility of fraud. Furthermore, advancements in facial recognition now address challenges such as varying lighting conditions, occlusions, and expression variations, ensuring robustness and reliability. However, its widespread adoption raises concerns about privacy, data security, and potential misuse, necessitating stringent regulations and ethical considerations for its implementation. As face recognition continues to integrate with artificial intelligence and IoT systems, it holds immense potential to redefine identity verification in the digital age.

Sudha Sharma et al. [1] study automatic face recognition on the ORL dataset, divided into three train/test configurations (60:40, 70:30, and 90:10). PCA is used to extract important information and different classifiers are tested; configurations B and C achieve 97% and 100% recognition accuracy, respectively. Future studies will examine new datasets, such as GTF and YALE, to tackle harder face detection problems and evaluate alternative methods to enhance performance. Chao Liu et al. [2] introduce HFE-Net, a novel facial expression recognition network that combines a CNN with Hybrid Feature Extraction Blocks to increase accuracy. For improved performance, it uses a Feature Fusion Device and Multi-head Self-attention and exploits multidimensional information. The model outperforms current techniques on a variety of datasets; future work will concentrate on continuous facial expression recognition for practical use. Ramyar A. Teimoor [3] notes that, due to its many commercial, security, and forensic uses, face recognition has attracted a lot of interest in biometrics, pattern recognition, and computer vision; the study categorizes face detection methods and tabulates the benefits and drawbacks of different face recognition algorithms. Ola N. Kadhim [4] uses machine learning classifiers (SoftMax and SVM) for face recognition and a deep wavelet scattering transform network for feature extraction. The SVM classifier performs better than SoftMax, with a recognition accuracy of 98.29% as opposed to SoftMax's 97.87%, and the approach is validated on the MULB face database; future research aims to combine multimodal biometric approaches with face recognition. Hong Jia et al. [5] present a novel distance measure for categorical data that takes attribute dependency and frequency probabilities into account. Although it works well in experiments, it can have trouble with noise and small clusters, which will be addressed in later studies.

Songsong Wu et al. [6] present Local Image Distance Metric Learning (LIDML) to measure image distances effectively. LIDML preserves structure and lowers processing costs by keeping images in their 2D matrix form; it works well on face and palmprint databases and provides a closed-form solution for quicker and more reliable metric learning. Kruti Goyal et al. [7] compare the benefits and drawbacks of face detection with OpenCV and MATLAB and, by analyzing different algorithms in terms of space and time, conclude that Haar cascades are the most effective face identification technique; CamShift and motion detection algorithms are quicker, but Haar cascade and CamShift methods work better. Mauricio Marengoni et al. [8] cover advanced image processing and computer vision topics, including PCA, matching strategies, machine learning, tracking, optical flow, and parallel computer vision with CUDA, illustrating these ideas with the OpenCV library for C/C++ programmers on several platforms through both theoretical and real-world examples. Kashvi Taunk et al. [9] review the K-Nearest Neighbors (KNN) algorithm, which categorizes data according to its nearest neighbors, provides excellent accuracy, and is frequently used in healthcare and stock market forecasting; extensions like SVM-KNN and weighted KNN improve its speed and performance, and ongoing study focuses on enhancing accuracy, managing small datasets, and scaling to huge datasets. Alireza Naser Sadrabadi et al. [10] use the IWO method to optimally measure the distance between non-numerical attributes in test and training data; the findings demonstrate that IWO enhances distance computations and yields more accurate answers for non-numerical cases, and other metaheuristic algorithms for attribute weighting and distance measurement should be investigated in future studies.

Youssef Hbali et al. [11] show that the Histogram of Oriented Gradients (HOG) features used in their markerless augmented reality eye recognition method prove quick and accurate for real-time tracking; thanks to its speed and ease of use, HOG is appropriate for virtual try-on systems on mobile devices, a strategy that can boost e-commerce and modify customer shopping behavior. Hafiz Ahamed et al. [12] present an automatic face identification system that combines convolutional neural networks (CNN) with HOG. Under different lighting circumstances, HOG performs better than LBP at detecting edges and corners with fewer dimensions; the system converts images to grayscale, calculates pixel gradients, and recognizes faces, and CNN training aids recognition by comparing the encoded face in the current image with previously saved ones. Rahul Chauhan et al. [13] discuss deep learning with CNNs for image identification and recognition on the MNIST and CIFAR-10 datasets on a CPU; after 50 epochs the MNIST accuracy is 99.6% and the CIFAR-10 accuracy is 80.17% (training accuracy 76.57%), and CIFAR-10 performance can be enhanced by adding more hidden layers, utilizing a GPU, and training for more epochs. The system may be used to help machine vision recognize symbols in natural language. Saad Albawi et al. [14] focus on the convolution layer, which consumes the greatest processing time, in a discussion of the major variables influencing Convolutional Neural Network (CNN) performance; the number of layers affects performance as well, although adding layers lengthens the time needed for training and testing, and CNNs are frequently utilized in machine learning applications including voice recognition, image and video recognition, and face detection.

Bo-Gun Park et al. [15] describe faces with a Face-ARG model that captures both local characteristics and geometric structure in a unique face recognition algorithm based on partial ARG matching; to identify individuals, the algorithm builds a correspondence graph between the reference and test Face-ARGs and assesses similarity. It needs just one training image per individual, is resilient, and can identify unreliable features, and performance results demonstrate exceptional robustness even in extreme situations like occlusion or facial expressions; nevertheless, the approach has issues with computational complexity and pose variations, to be resolved in subsequent research. Steve Lawrence et al. [16] present a rapid, automatic face recognition system that combines a convolutional network, an SOM network, and a local image sample representation. The system outperforms the eigenfaces technique in classification performance by offering invariance to small translation, rotation, scaling, and deformation; with five photos per individual it achieves a 3.8% error rate, providing quick classification with little preprocessing, and it shows better generalization by learning more suitable features than eigenfaces. Lixiang Li et al. [17] observe that although face recognition technology has made great strides, real-world applications still leave room for development; future advancements might include specialized cameras that improve image quality and address image filtering, reconstruction, and denoising, and 2D photos could be enhanced with 3D technology to address rotation and occlusion problems. Bhanushree K. J. et al. [18] describe a Face Recognition application in which users submit a picture for face detection and recognition; after a face is identified, it is compared to a saved database of faces, the application outputs the identified face if a match is discovered, and the face is reported as not recognized if there is no match. García Amaro et al. [19] assess the usefulness of computer vision and machine learning methods for facial recognition and detection; a small database is created using a common face identification technique, different machine learning models are trained offline, and these models are then validated on face recognition in videos with unidentified participants. Insaf Adjabi et al. [20] discuss the possibility of 3D facial recognition overcoming 2D restrictions and highlight current developments in facial recognition, especially the superiority of deep learning over conventional techniques, outlining future approaches such as multimodal biometrics and improved security measures while highlighting the need for more study on 3D databases and their obstacles.

2. METHOD

Data collection is the first step in the facial recognition system: pictures are acquired for testing and training. Face detection, which finds faces in pictures or video frames, is the next stage. After detection, the eyes, nose, and mouth are aligned for consistency using face alignment. Next comes face normalization, which adjusts for expression, posture, and lighting to produce a uniform face image. The next step is feature extraction, which records important facial traits. These characteristics are then transformed into a numerical format through the process of encoding. The closest match is then found by classifying the encoded features. Finally, the system reports the recognition's accuracy, precision, recall, and F1-score. The evaluation stage ensures the effectiveness and dependability of the system by assessing its overall performance. Accurate and effective facial recognition is ensured by this all-encompassing method. These steps are illustrated in Fig-2.
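The evaluation step above reports accuracy, precision, recall, and F1-score. As a minimal illustration (standard library only, with invented counts rather than measurements from this system), these four metrics can be computed from true/false positive and negative totals:

```python
# Toy sketch of the four evaluation metrics named in the method overview.
# The counts below are illustrative, not results from the paper's system.

def evaluation_metrics(tp, fp, fn, tn):
    """Return (accuracy, precision, recall, f1) from raw match counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)   # fraction of all decisions that were correct
    precision = tp / (tp + fp)                   # of faces labelled with a name, how many were right
    recall = tp / (tp + fn)                      # of known faces present, how many were found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two
    return accuracy, precision, recall, f1

acc, prec, rec, f1 = evaluation_metrics(tp=90, fp=5, fn=10, tn=95)
print(round(acc, 3), round(prec, 3), round(rec, 3), round(f1, 3))
```

Accuracy is the overall fraction correct, while F1 balances precision against recall, which matters when known faces appear only rarely in the stream.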

Fig-2: Proposed Method

2.1 Data Collection

 Images Dataset
The dataset includes pictures of people that are used to test and train the face recognition algorithm. It features a wide variety of faces with different lighting, perspectives, and facial expressions. The pictures are kept in the "images/" folder and arranged to make loading and encoding simple.
 Labeling
Every photograph in the dataset is correctly labeled with the name or identification of the matching person. This labeling is necessary for the model to be trained to accurately recognize and match faces.
 Organization
To guarantee effective processing, the photos are arranged methodically. This involves arranging photos to facilitate fast access and encoding, which enables efficient training and recognition.

2.2 The Face Detection Process

Use OpenCV to take a picture or video frame from the camera. To expedite the detection process, resize the frame. Since the face_recognition library works with RGB images, convert the frame from BGR to RGB color space. To find faces in the picture, use the face_recognition library. To restore the original frame size, rescale the detected face locations' coordinates. As shown in Fig-3, the system then displays the detected faces and draws rectangles around them.

Fig-3: Face Detection

Methods for facial recognition

 Haar Cascades (for identifying faces)
A cascade function is trained using a large number of both positive and negative images in the machine learning-based Haar Cascades method. It is employed to identify objects in pictures, in this case faces. To find out whether a face is present, the algorithm applies the trained classifier to each sub-window as it scans the image with a sliding window. Because Haar Cascades is quick and effective, it can be used for face detection in real time.
 Histogram of Oriented Gradients (HOG)
HOG is a feature descriptor for object detection in computer vision and image processing. After splitting the image into small, connected areas known as cells, it calculates a histogram of gradient directions or edge orientations for the pixels inside each cell. HOG works well for face detection under a variety of circumstances since it is resistant to changes in position and lighting.
 Deep Learning-based Detection
To detect and identify faces, the face_recognition library makes use of a deep learning model, more precisely a Convolutional Neural Network (CNN), that has been trained on a sizable dataset. After extracting facial traits, the model encodes them into a vector of 128 dimensions. Deep learning models handle changes in illumination, position, and occlusion with ease, offering great accuracy and robustness in face identification and recognition.

2.3 Face Alignment

Fig-4: Face Alignment

Aligning face photos using identified facial landmarks ensures that important facial features (such as the mouth, nose, and eyes) appear in the same location in each picture. This procedure is essential for making facial recognition systems more accurate. Here, important facial features such as the corners of the lips, the tip of the nose, and the eyes are highlighted, as shown in Fig-4. To align the face to a canonical position, compute a transformation (such as an affine transformation) using the landmarks that were recognized.
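To make the alignment step concrete, here is a small sketch (standard library only) of one common way to realize such a transformation: a similarity transform, a special case of the affine transform, that maps the two detected eye centres onto fixed canonical positions. The landmark coordinates and canonical positions are invented for illustration; a real system would obtain landmarks from a detector such as Dlib's.

```python
# Sketch: derive a similarity transform (rotation + uniform scale +
# translation) from two eye landmarks. All coordinates are invented.

def eye_alignment_transform(left_eye, right_eye,
                            target_left=(30.0, 40.0), target_right=(70.0, 40.0)):
    """Return a function mapping image points into the aligned face crop."""
    # Represent 2-D points as complex numbers: multiplying by a complex
    # scalar is exactly a rotation plus a uniform scaling.
    p1, p2 = complex(*left_eye), complex(*right_eye)
    t1, t2 = complex(*target_left), complex(*target_right)
    s = (t2 - t1) / (p2 - p1)                 # encodes rotation and scale

    def apply(point):
        z = t1 + s * (complex(*point) - p1)   # translate, then rotate/scale
        return (z.real, z.imag)

    return apply

# A face rolled 90 degrees: the eyes lie on a vertical line.
align = eye_alignment_transform(left_eye=(50.0, 80.0), right_eye=(50.0, 40.0))
print(align((50.0, 80.0)))   # left eye lands on the canonical (30.0, 40.0)
print(align((50.0, 40.0)))   # right eye lands on the canonical (70.0, 40.0)
```

A full affine transform (which can also shear) needs three landmark pairs, e.g. both eyes plus the nose tip; the two-point similarity version shown here is enough to correct roll and scale.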
Methods for facial Alignment

 Face Landmark Detection (e.g., Dlib)
The Dlib library detects facial landmarks using an ensemble of regression trees. The landmark positions are first estimated by the algorithm, which then iteratively refines them to better fit the face. Because of its excellent accuracy and speed, Dlib's facial landmark detector is appropriate for real-time applications.
 Affine Transformation
This type of linear mapping technique preserves planes, points, and straight lines. For face alignment, it computes a transformation matrix that brings the face to a canonical location using the landmarks that were found. Affine transformations offer an excellent compromise between ease of use and efficiency in face alignment, and they are computationally cheap.

2.4 Face Normalization

Face normalization is the technique of standardizing face photos so that important facial features are consistent and aligned in every picture. By ensuring that facial characteristics are at constant scales and positions, this phase helps increase the accuracy of face recognition systems. Key facial features, including the corners of the eyes, the tip of the nose, and the corners of the mouth, are identified; to bring the face to a canonical position, a transformation (such as an affine transformation) is computed from the recognized landmarks and applied to the facial image. To guarantee consistency in all face photos, the lighting and image size are also adjusted. This is shown in Fig-5.

Fig-5: Face Normalization

Methods for facial Normalization

 Face Landmark Detection (e.g., Dlib)
As in face alignment, Dlib detects facial landmarks using an ensemble of regression trees: the landmark positions are first estimated and then iteratively refined to better fit the face. Its accuracy and speed make it appropriate for real-time applications.
 Affine Transformation
For face normalization, the same linear mapping technique computes a transformation matrix that brings the face to a canonical location using the detected landmarks, again offering a good balance of simplicity and computational efficiency.
 Histogram Equalization (for lighting normalization)
This image processing technique modifies an image's contrast by redistributing the intensity values of its pixels. This improves the visibility of facial features and normalizes lighting conditions. By improving lighting uniformity across face photos, histogram equalization strengthens the recognition process's resistance to illumination changes.

2.5 Feature Extraction

Feature extraction in face recognition is the process of locating and encoding a face's distinguishing traits into a numerical representation, in which form faces can be recognized and compared. Key facial features, including the corners of the eyes, the tip of the nose, and the corners of the mouth, are identified here. To guarantee uniform size and placement, the face is aligned with the identified landmarks; a feature extraction approach then encodes the unique facial traits into a numerical representation (feature vector).

Methods for Feature Extraction

 Local Binary Patterns Histograms (LBPH)
LBPH is a texture-based method that transforms the image into a sequence of binary patterns. After segmenting the image into several areas, it calculates local binary pattern histograms for each area; a feature vector is then created by concatenating these histograms. LBPH is appropriate for real-time face recognition since it is easy to implement and resilient to changes in lighting.
 Histogram of Oriented Gradients (HOG)
HOG is a feature descriptor that uses gradient orientation histograms in certain areas of the picture to capture the texture and structure of the face. A feature vector is then created by combining these histograms. HOG is a common option for face recognition tasks because it effectively captures texture and edge information.
 Feature extraction based on deep learning
To learn high-level facial traits, deep learning models, such as Convolutional Neural Networks (CNNs), are trained on enormous datasets. These models create a high-dimensional feature vector by putting the image through several layers of convolution and pooling operations. Deep learning models offer high accuracy and resilience and can successfully handle changes in occlusion, lighting, and position.
 Image Loading and Conversion
Use OpenCV to load the image and then convert it to RGB color space. To find face locations in the image, use the face_recognition library.

2.6 Faces Encoding

Facial encoding, sometimes referred to as a feature vector, is the process of turning a facial image into a numerical representation. This encoding can be used for recognition and comparison since it captures the distinctive traits of the face. After loading the face image, transform it to the RGB color system; locate important facial landmarks and align the face with them to guarantee uniform size and placement; then use a feature extraction approach to encode the unique facial traits into a numerical representation (feature vector). The face_recognition library uses a pre-trained deep learning model based on convolutional neural networks (CNNs): by putting the image through several layers of convolution and pooling operations, the model extracts features and produces a high-dimensional feature vector, typically 128-dimensional. Deep learning models offer high accuracy and resilience and successfully handle changes in occlusion, lighting, and position. The feature vectors that are produced are highly discriminative, which makes accurate face recognition possible.

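The comparison of encodings described above can be illustrated with a toy sketch (standard library only). The 4-dimensional vectors here stand in for real 128-dimensional encodings, and the 0.6 tolerance mirrors the default commonly used with the face_recognition library; both are illustrative assumptions, not values measured in this work.

```python
# Toy sketch of encoding comparison: Euclidean distance between feature
# vectors, with a match declared below a tolerance. Vectors are invented
# stand-ins for real 128-dimensional face encodings.
import math

def encoding_distance(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def is_match(known, candidate, tolerance=0.6):
    """True when the candidate encoding is close enough to the known one."""
    return encoding_distance(known, candidate) <= tolerance

known_face = [0.1, 0.3, -0.2, 0.5]
same_person = [0.12, 0.28, -0.22, 0.49]    # small perturbation of the same encoding
other_person = [0.9, -0.4, 0.6, -0.1]      # clearly different encoding

print(is_match(known_face, same_person))    # True
print(is_match(known_face, other_person))   # False
```

Lowering the tolerance makes matching stricter (fewer false accepts, more false rejects); raising it does the opposite.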
2.7 Face Recognition Using k-Nearest Neighbors

A straightforward yet effective technique for classification problems such as facial recognition is k-Nearest Neighbors (k-NN). Here, a new, unseen face is classified based on how similar its feature vector is to those of known faces. The procedure for face recognition with k-NN is: store feature vectors that encode the faces of the people the system needs to identify; to recognize a new face, encode its feature vector; compute the Euclidean distances between the new face's feature vector and the stored feature vectors of known faces; determine which k stored feature vectors are the new face's closest neighbors; and classify the new face according to the majority label among those k nearest neighbors. The Euclidean distance, a measure of the straight-line distance between two points in a multidimensional space, is computed by taking the square root of the sum of squared differences between corresponding coordinates. In the context of face recognition, it is easy to calculate and provides an efficient way to quantify feature vector similarity. k-NN is an instance-based, non-parametric learning algorithm: the k training instances closest to a new instance in feature space are taken into account when classifying it, and the class of the new instance is determined by the majority label among these k neighbors. Because it is simple to use and interpretable, k-NN works well even when the decision boundary is complicated and difficult to define.
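The procedure above can be sketched with a minimal, standard-library-only implementation; the encodings and names are invented for illustration, and real inputs would be 128-dimensional face encodings:

```python
# Minimal k-NN sketch of the procedure described above: compute Euclidean
# distances from a query encoding to labelled known encodings, take the k
# nearest, and return the majority label. Data below is invented.
import math
from collections import Counter

def knn_classify(query, known, k=3):
    """known: list of (label, vector) pairs; return majority label of k nearest."""
    dist = lambda a, b: math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    nearest = sorted(known, key=lambda item: dist(query, item[1]))[:k]
    return Counter(label for label, _ in nearest).most_common(1)[0][0]

known_faces = [
    ("alice", [0.10, 0.20]), ("alice", [0.15, 0.22]), ("alice", [0.09, 0.18]),
    ("bob",   [0.80, 0.90]), ("bob",   [0.82, 0.88]),
]
print(knn_classify([0.12, 0.21], known_faces, k=3))   # alice
print(knn_classify([0.81, 0.91], known_faces, k=3))   # bob
```

An odd k avoids two-way ties; in a live system one would also reject the vote as "Unknown" when even the nearest neighbor is farther away than the match tolerance.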

3. RESULTS AND DISCUSSION

 Live Video Stream:
A live video stream is displayed once the camera is opened by the system. The video feed is processed in real time to detect and recognize faces, as shown in Fig-6.

Fig-6: Resultant output

 Face Recognition and Detection:
Rectangles are drawn around detected faces to highlight them. If a face is recognized, the person's name appears above the rectangle; unknown faces are labeled "Unknown."

 Calculating Accuracy:
The system monitors the total number of faces detected and the number of faces correctly identified. The ratio of successfully identified faces to all detected faces, presented as a percentage, is used as the recognition accuracy. The accuracy is shown on screen or in the console in real time, and the system provides continuous performance feedback by updating the accuracy for every frame analyzed, as shown in Fig-7.

Fig-7: Final Accuracy

Using matplotlib, the system also creates a graph of accuracy over time. As more frames are processed, the graph shows how the accuracy varies, and this visual representation helps assess the system's performance over time. The final accuracy is printed to the console after the video feed processing is complete, and the accuracy trend during processing is shown by the accuracy chart (Fig-8).

Fig-8: Accuracy Chart

4 CONCLUSION

The goal of the face recognition project was to provide a trustworthy method for recognizing people by their facial characteristics. As explained in the introduction, the system uses OpenCV and the face_recognition library to detect, encode, and identify faces in real time. During the implementation, a variety of image datasets were gathered, each image was precisely labeled, and face detection and identification techniques were applied. The results and discussion illustrate the efficiency of the system in accurately identifying well-known faces. Strong performance was achieved by using K-Nearest Neighbors (KNN) for classification, as seen in the accuracy metrics monitored over time. By producing a useful and precise facial recognition system, the project effectively fulfilled the goals outlined in the introduction.
5 ACKNOWLEDGEMENTS

The Bonam Venkata Chalamayya group of institutions provided support for this study. The authors express their gratitude to Mrs. B. Ganga Bhavani for her essential support and guidance during this work.

REFERENCES

[1] Sudha Sharma, Mayank Bhatt, Pratyush Sharma, "Face Recognition System Using Machine Learning Algorithm", IEEE, 2020, conference paper, DOI: 10.1109/ICCES48766.2020.9137850

[2] Dandan Song, Chao Liu, "A facial expression recognition network using hybrid feature extraction", PLOS ONE, 2025, DOI: 10.1371/journal.pone.0312359

[3] Ramyar A. Teimoor, "Face Recognition & machine learning", ResearchGate, 2018, DOI: 10.13140/RG.2.2.12616.37127

[4] Ola N. Kadhim, "Face Recognition approach via Deep and Machine Learning", Wasit Journal of Pure Sciences, 2(3):152-164, September 2023, DOI: 10.31185/wjps.194

[5] Hong Jia, Yiu-ming Cheung, "A new distance metric for unsupervised learning of categorical data", IEEE Transactions on Neural Networks and Learning Systems, Vol. 27, Issue 5, 2016, DOI: 10.1109/TNNLS.2015.2436432

[6] Songsong Wu, Jingyu Yang, "Local Image Distance Metric Learning", IEEE, 2010, DOI: 10.1109/CCPR.2010.5659195

[7] Kruti Goyal, Kartikey Agarwal, Rishi Kumar, "Face detection and tracking: Using OpenCV", IEEE, 2017, DOI: 10.1109/ICECA.2017.8203730

[8] Mauricio Marengoni, Denise Stringhini, "High Level Computer Vision Using OpenCV", IEEE, 2011, DOI: 10.1109/SIBGRAPI-T.2011.11

[9] Kashvi Taunk, Sanjukta De, Srishti Verma, Aleena Swetapadma, "A Brief Review of Nearest Neighbor Algorithm for Learning and Classification", IEEE, 2019, DOI: 10.1109/ICCS45141.2019.9065747

[10] Alireza Naser Sadrabadi, Seyed Mahmood Zanjirchi, Habib Zare Ahmad Abadi, Ahmad Hajimoradi, "An optimized K-Nearest Neighbor algorithm based on Dynamic Distance approach", IEEE, 2020, conference, DOI: 10.1109/ICSPIS51611.2020.9349582

[11] Youssef Hbali, Mohammed Sadgal, Abdelaziz El Fazziki, "Object detection based on HOG features: Faces and dual-eyes augmented reality", IEEE, 2013, DOI: 10.1109/WCCIT.2013.6618716

[12] Hafiz Ahamed, Ishraq Alam, Md. Manirul Islam, "HOG-CNN Based Real Time Face Recognition", IEEE, 2018, DOI: 10.1109/ICAEEE.2018.8642989

[13] Rahul Chauhan, Kamal Kumar Ghanshala, R. C. Joshi, "Convolutional Neural Network (CNN) for Image Detection and Recognition", IEEE, 2018, DOI: 10.1109/ICSCCC.2018.8703316

[14] Saad Albawi, Tareq Abed Mohammed, Saad Al-Zawi, "Understanding of a convolutional neural network", IEEE, 2017, DOI: 10.1109/ICEngTechnol.2017.8308186

[15] Bo-Gun Park, Kyoung-Mu Lee, Sang-Uk Lee, "Face recognition using face-ARG matching", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 27, Issue 12, 2005, DOI: 10.1109/TPAMI.2005.243

[16] Steve Lawrence, C. Lee Giles, Ah Chung Tsoi, Andrew D. Back, "Face Recognition: A Convolutional Neural-Network Approach", IEEE Transactions on Neural Networks, Vol. 8, No. 1, January 1997, DOI: 10.1109/72.554195

[17] Lixiang Li, Xiaohui Mu, Siying Li, Haipeng Peng, "A Review of Face Recognition Technology", IEEE Access, Vol. 8, 2020, DOI: 10.1109/ACCESS.2020.3011028

[18] M. B. Meenavathi, Bhanushree K. J., "Feature Based Face Recognition using Machine Learning Techniques", International Journal of Recent Technology and Engineering (IJRTE), 8(6), 2020, DOI: 10.35940/ijrte.F7497.038620

[19] E. García Amaro, M. A. Nuño-Maganda, M. Morales-Sandoval, "Evaluation of machine learning techniques for face detection and recognition", IEEE, 2012, conference, DOI: 10.1109/CONIELECOMP.2012.6189911

[20] Insaf Adjabi, Abdeldjalil Ouahabi, Amir Benzaoui, Abdelmalik Taleb-Ahmed, "Past, Present, and Future of Face Recognition: A Review", Electronics (MDPI), 2020, DOI: 10.3390/electronics9081188
BIOGRAPHIES OF AUTHORS:

Hema Nagalla is a highly skilled data analyst, set to graduate with a B.Tech degree from Bonam
Venkata Chalamayya Engineering College, Odalarevu, India in 2026. With a strong foundation in
Artificial Intelligence and Machine Learning, Hema excels in data visualization tools, possessing
advanced skills in Tableau and PowerBI. Her certifications include PowerBI from Techtip24,
Tableau from Jobaaj Learning, AI for India 2.0 from GUVI, Artificial Intelligence from Infosys, and
Young Professional from TCS ION Career Edge. Additionally, she has expertise in Python, certified
by GUVI. Hema completed an internship at Cognifyz Technologies from May 2024 to June 2024,
where she worked on PowerBI Data Analysis, Python, and Excel, and developed a dashboard on
people's savings. With her exceptional skills in Tableau and PowerBI, Hema is poised to make a
significant impact in the field of data analysis.
She can be contacted at [email protected].

Nalla Gnana Suma is a Bachelor's degree student at Bonam Venkata Chalamayya Engineering College, India, expected to graduate in 2026. She completed her intermediate
studies at Sri Chaitanya Junior College. Suma is a diligent and ambitious individual with a strong
interest in Artificial Intelligence and Machine Learning (AIML). Hailing from Amalapuram, India,
Suma possesses a strong foundation in programming languages, including C, Python, and Java. She
has also gained valuable experience through her involvement in various projects, including
Blackbucks (AIMLDS), Codsoft (UI/UX), and APSSDC (AIML).
She can be contacted at [email protected].

Grandhi Sri Kavya Sudha is a Bachelor's degree student at Bonam Venkata Chalamayya Engineering College, India, expected to graduate in 2026. She completed her
intermediate studies at Aditya Junior College. Kavya Sudha is a bright and ambitious individual with
a strong interest in Artificial Intelligence and Machine Learning (AIML), as well as Cybersecurity.
Hailing from Amalapuram, India. Kavya Sudha possesses programming skills in Python and Java,
and has gained hands-on experience through her involvement in various projects, including
Blackbucks (Cybersecurity) and Codsoft (UI/UX).
She can be contacted at [email protected].

Pavana Pravallika Vemula is a Bachelor's degree student at Bonam Venkata Chalamayya Engineering College, India, expected to graduate in 2026. She completed her
intermediate studies at Surya Junior College. Pravallika is a diligent and enthusiastic individual with
a strong passion for Artificial Intelligence and Machine Learning (AIML). Hailing from
Amalapuram, India. Pravallika possesses programming skills in Python and Java, and has gained
valuable experience through her involvement in various projects, including Blackbucks (AIMLDS),
Codsoft (UI/UX), and APSSDC (AIML).
She can be contacted at [email protected].

Saladi Sri Venkata Anjani is a Bachelor's degree student at Bonam Venkata Chalamayya Engineering College, India, expected to graduate in 2026. She completed her
intermediate studies at Tirumala Junior College. Anjani is a talented and motivated individual with a
strong passion for Artificial Intelligence and Machine Learning (AIML). She hails from
Amalapuram, India. Anjani possesses programming skills in Python and C++, and has gained hands-
on experience through her involvement in various projects, including Blackbucks (AIMLDS) and
Codsoft (UI/UX). She can be contacted at [email protected]
Mrs. Ganga Bhavani Billa is a Research Scholar at Koneru Lakshmaiah Education Foundation (KLEF), Green Fields, Vaddeswaram, and an Associate Professor at Bonam Venkata Chalamayya Engineering College, Odalarevu. She holds an M.Tech degree in Computer Science and Engineering from GIET College, Rajahmundry. Her research areas are Machine Learning, Deep Learning, and Artificial Intelligence. She holds a number of patents and industrial designs in the machine learning field based on her innovative ideas, has been awarded international patents, and has published articles in international conferences.
She can be contacted at:
Koneru Lakshmaiah Education Foundation (KLEF), Green Fields, Vaddeswaram, A.P. – 522302
Email: [email protected]
ORCID: https://orcid.org/0000-0003-1433-5832
