
INTERVIEW PROCTORING SYSTEM IN DATA SCIENCE
Prof. Diksha Bhave, SSJCOE, Dombivli(E), India, [email protected]
Pooja Dange, SSJCOE, Dombivli(E), India, [email protected]
Pranali Ghuge, SSJCOE, Dombivli(E), India, [email protected]
Duhita Narkhede, SSJCOE, Dombivli(E), India, [email protected]

Abstract: Many companies now hire online and conduct interviews remotely. However, the credibility of online interviews has become a widespread concern for organizations globally. To ensure the integrity of the interview process, candidate authentication, fair results, and detection of candidate misbehaviour, an online interview proctoring system can be helpful. The main objective of this system is to detect cheating in online interviews, whether direct or indirect. In this system, the interviewer acts as an administrator who tracks and prevents cheating attempts during the interview. The candidate is continuously monitored through a webcam, and if any suspicious behaviour is detected, the system displays a warning message. The system is developed using machine learning techniques, and deep learning algorithms are used to analyse the candidate's facial expressions and movements during the interview. The webcam records real-time video of the candidate, which can be stored as proof for the interviewer. We believe that this system can successfully identify unusual candidate behaviour and maintain the integrity of the interview process.

Index Terms - Interview proctoring, cheating detection, machine learning, deep learning, YOLO, face detection

1. INTRODUCTION

Online job interviews are becoming increasingly popular because of their convenience and efficiency. Candidates can attend interviews from anywhere in the world, saving time and effort for both candidates and employers. However, cheating is a major concern during online interviews and can compromise the fairness of the hiring process. Proctoring online interviews is therefore crucial to ensure authenticity. Automated proctoring solutions are valuable additions to online interview systems, as they use computer vision and machine learning algorithms to validate the interview's integrity. We conducted a thorough review of related literature on cheating detection in online tests and interviews. Several methods have been employed to identify cheating, but some are less accurate, while the more accurate ones suffer from high time complexity, which negatively impacts overall performance. To address these concerns, we built an interview proctoring system using machine learning detection models and propose an effective two-model combination for cheating detection: the first model uses face detection to identify individuals, while the second model classifies the detected faces as either cheating or not cheating. Although the system only processes video recordings, it can also serve as a deterrent against cheating.

2. REVIEW OF LITERATURE

1. "Cheating Detection Pipeline for Online Interviews and Exams", Ozen et al. (2021)
Advantages: An aspect-based proctoring system; it uses a HOG-based SVM classifier model, which is more accurate than the other algorithms considered.
Disadvantages: The existing system is domain-based and does not address the important aspect of authentication.
Overview: An existing system that reported an accuracy of 96.5%.

2. "A Machine Learning Approach to Classify Students' Performance in an Interview", Alnassar et al. (2021)
Advantages: The system is capable of examining a large amount of data.
Disadvantages: Handling large values in the data set is possible, but critical evaluation of the data is somewhat difficult.
Overview: Good system performance, with an accuracy of 90%.

3. "Prediction of Students' Behaviour Using Machine Learning", Dhilipan et al. (2021)
Advantages: Multiple algorithms are used for detection, such as binomial logistic regression, decision tree, entropy, and KNN classifiers.
Disadvantages: The number of attempts is not controlled.
Overview: Overall performance of the system was good, with an accuracy of 86%.

4. "Detecting Probable Cheating During Online Assessments", Chuang et al. (2017)
Advantages: Logistic regression is used in this system.
Disadvantages: There is a lack of analysis of historical data.
Overview: Overall performance of the system is good, with an accuracy of 85.6%.

5. "An Intelligent System for Online Exam Monitoring", Prathish & Bijlani (2016)
Advantages: Logistic regression is used in this system.
Disadvantages: The choice of the most suitable deep learning model depends on the specific dataset and problem, which can make model selection challenging.
Overview: Gives low accuracy compared to the other systems, at 80%.

3. OBJECTIVES

Taking an interview online can feel very different from doing it in person. For this reason, many organizations have started using online proctoring programs to maintain the integrity of online interviews. Proctoring software tools monitor candidates through their webcams and/or by taking control of functions on their computers. Candidates are not allowed to leave the interview site during the process. The software analyses each frame of the video to detect the presence of the candidate's face, count the number of faces and bodies, and identify the appearance of any electronic device. The video of the candidate is recorded as proof for the interviewer. The system mainly focuses on identifying the candidate (through face detection) and tracking their eye and body movements. Candidates are required to keep looking into the camera throughout the interview process.

4. METHODOLOGY

Data Collection and Preprocessing

Gather video feeds and audio from the interviewee's webcam. Collect a YOLO training dataset with labeled examples of faces and relevant objects/actions. Preprocess video frames (e.g., resizing, normalization) for YOLO model input.
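As an illustration, the preprocessing step can be sketched as follows using OpenCV's dnn blob utilities; the 416x416 input size and 1/255 scaling are assumptions that match a typical Darknet-style YOLO configuration and may need adjusting for other variants.

    import cv2

    def preprocess_frame(frame, input_size=(416, 416)):
        # Resize and normalize a webcam frame for YOLO input.
        # Assumes a Darknet-style YOLO network expecting square RGB input
        # scaled to [0, 1]; adjust input_size for other variants.
        blob = cv2.dnn.blobFromImage(
            frame, scalefactor=1 / 255.0, size=input_size, swapRB=True, crop=False
        )
        return blob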

Train YOLO and Real-time Detection

Train a YOLO model on the collected dataset to detect faces and prohibited items, utilizing existing or custom architectures.
Capture real-time frames from the interviewee's webcam.
Apply the trained YOLO model to detect and track relevant objects within the frames.
Track the position and orientation of the face during real-time detection.
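A minimal sketch of this capture-and-detect loop is given below, assuming a pretrained YOLOv3 model loaded through OpenCV's dnn module; the configuration and weight file names (yolov3.cfg, yolov3.weights) and the webcam index are assumptions for illustration, and the box-decoding and face-tracking logic is only indicated.

    import cv2

    # Assumed file names for a pretrained Darknet YOLOv3 model
    net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
    layer_names = net.getUnconnectedOutLayersNames()

    cap = cv2.VideoCapture(0)  # interviewee's webcam (index 0 assumed)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416),
                                     swapRB=True, crop=False)
        net.setInput(blob)
        outputs = net.forward(layer_names)  # raw detections from each output layer
        # ... decode boxes and class scores, then track the face position/orientation ...
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()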
Object Classification and Alerts/Actions

Perform classification on detected objects to determine authorization or prohibition (e.g., distinguish between
authorized notes and unauthorized materials).
Define a set of actions based on detection results, including alerting proctors.
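For example, the mapping from detections to alerts might look like the following sketch; the class names, confidence threshold, and warning messages are illustrative assumptions rather than fixed design choices.

    # Illustrative policy mapping detected classes to proctoring actions.
    PROHIBITED = {"cell phone", "book", "laptop"}

    def decide_action(detections):
        # detections: list of (class_name, confidence) pairs from the detector
        faces = [c for c, _ in detections if c == "person"]
        banned = [c for c, conf in detections if c in PROHIBITED and conf > 0.5]
        if len(faces) == 0:
            return "warn", "no face visible"
        if len(faces) > 1:
            return "warn", "multiple people detected"
        if banned:
            return "warn", "prohibited item: " + banned[0]
        return "ok", ""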

User Interface and Privacy Considerations

Develop a user-friendly interface for proctors and administrators to monitor the interview and receive real-time
alerts.
Ensure the system respects privacy laws and regulations, obtains consent, and securely stores collected data.

Testing, Calibration, and Continuous Improvement

Thoroughly test the system for accurate and reliable performance in various scenarios.
Calibrate the YOLO model and detection thresholds to minimize false positives and false negatives (a simple calibration sketch is given below).
Continuously update and refine the YOLO model for improved accuracy, adapting to changing circumstances and incorporating feedback from proctors and users to enhance the system.
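The calibration mentioned above can be sketched as a simple threshold sweep over a labelled validation set; the data format (per-frame confidence scores with binary cheating labels) and the candidate thresholds are assumptions for illustration.

    # Sweep candidate confidence thresholds and pick the one that minimizes
    # the total number of false positives and false negatives.
    def calibrate_threshold(scores, labels, candidates=(0.3, 0.4, 0.5, 0.6, 0.7)):
        best_t, best_cost = None, None
        for t in candidates:
            fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
            fn = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
            cost = fp + fn  # equal weighting; adjust if one error type matters more
            if best_cost is None or cost < best_cost:
                best_t, best_cost = t, cost
        return best_t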

5. RESEARCH ANALYSIS

The interview proctoring system uses machine learning and deep learning algorithms to detect suspicious behaviour of a candidate during an interview. The YOLO (You Only Look Once) algorithm is used for face detection, and speech recognition is used alongside it. The administrator initiates the session, and the candidate enters their login ID and password to log in to the system. Once the authentication process is complete, the camera starts capturing the candidate's face and begins recording the interview.

If the candidate displays any unusual or suspicious behaviour, a warning message is displayed on the screen. The system successfully detects the direction in which the candidate is looking, and it also detects mobile phones and multiple faces. If such behaviour is caught, the system gives a warning, and the interviewer also warns the candidate to behave properly. If the candidate continues to display unusual behaviour, or if the system detects the presence of another person, a mobile phone, or no face at all, the session ends after two warnings and the candidate is disqualified on the spot. The candidate is then unable to log in to the system again.
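A minimal sketch of this two-warning policy is shown below; the Session class and the block_login helper are illustrative assumptions, not part of the implementation described here.

    MAX_WARNINGS = 2  # the session ends after two warnings, as described above

    class Session:
        # Minimal sketch of the warning and disqualification policy.
        def __init__(self, candidate_name):
            self.candidate_name = candidate_name
            self.warnings = 0
            self.active = True

        def warn(self, reason):
            self.warnings += 1
            print("WARNING (%d/%d): %s" % (self.warnings, MAX_WARNINGS, reason))
            if self.warnings >= MAX_WARNINGS:
                self.active = False                # terminate the session
                block_login(self.candidate_name)   # prevent re-login (assumed helper)

    def block_login(candidate_name):
        # Placeholder: mark the candidate as disqualified so they cannot log in again.
        pass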

Fig. Workflow of Proposed System

There are certain rules and regulations that candidates need to follow while giving an interview. If a candidate does not follow the rules or is caught performing any suspicious behaviour, the system gives a warning along with a message. Candidates must be well behaved during the interview; otherwise, they can be disqualified for that particular reason. The candidate first has to complete the login and authentication process before he/she can start the interview. As shown in the figure above, once the candidate has logged into the system, it starts capturing video.
Fig. Flowchart of System

User Interface: The user interface is responsive, easy to use, and visually appealing. It provides simple navigation and a consistent user experience across platforms such as desktop, mobile, and smart TVs.
Candidate Registration and Authentication: Candidates can log in to the system by entering a username and password. If a candidate does not have an account, they can register in the system.
Initialize camera & capture video: Once the candidate has successfully logged in, the camera is automatically initialized and starts capturing video of the candidate.
Image extraction & extract features: OpenCV functions are used to read an image and then perform various operations such as resizing, cropping, or extracting specific regions. A CNN is used to extract features from the images when training on the dataset; Keras and TensorFlow provide easy access to such models (see the sketch after this list).
YOLO Algorithm: The YOLO (You Only Look Once) algorithm in an interview proctoring system enhances the system's ability to detect and track objects, including faces and other relevant elements, supporting face detection, anomaly detection, and real-time alerts and monitoring.
Working of system: The system starts proctoring the candidate; if a mobile phone or multiple faces are detected, the candidate receives a warning. If the candidate continues such behaviour, the session ends there and the candidate is not able to log in again immediately.
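The feature-extraction step referred to in the list above can be sketched as follows, assuming a pretrained Keras CNN (MobileNetV2 is used here purely as an example backbone) serving as a frozen feature extractor.

    import cv2
    import numpy as np
    from tensorflow.keras.applications import MobileNetV2
    from tensorflow.keras.applications.mobilenet_v2 import preprocess_input

    # Pretrained CNN with the classification head removed, used as a feature extractor
    extractor = MobileNetV2(weights="imagenet", include_top=False, pooling="avg")

    def extract_features(image_path):
        img = cv2.imread(image_path)                # read the image with OpenCV
        img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # OpenCV loads images as BGR
        img = cv2.resize(img, (224, 224))           # MobileNetV2 input size
        batch = preprocess_input(np.expand_dims(img.astype("float32"), axis=0))
        return extractor.predict(batch)[0]          # 1280-dimensional feature vector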
6. TECHNOLOGIES USED

PYTHON: Python is chosen as the programming language for several reasons in this context. Firstly, Python offers
a wide range of powerful libraries and frameworks for various tasks, including web development (Flask), computer
vision (OpenCV and Mediapipe), deep learning (TensorFlow), and database management (SQLite). Secondly,
Python's simplicity and readability make it suitable for rapid prototyping and development, allowing developers to
quickly build and iterate on complex systems. Additionally, Python's strong community support and extensive
documentation make it an ideal choice for collaborative projects and troubleshooting. Overall, Python's versatility,
ease of use, and robust ecosystem make it a preferred language for developing this real-time monitoring system.

SQL: Structured Query Language (SQL) is a programming language for storing and processing information in a
relational database. We are creating an SQLite database named "Monitored_records" and defining a table named
"candidate" within it. This table is designed to store records related to monitored candidates in a real-time monitoring
system. Specifically, it consists of four columns: "candidate_name" to store the name of the candidate being
monitored, "actions" to log the actions or warnings detected for the candidate, "time" to record the timestamp of when
the action occurred, and "warnings" to keep track of the remaining warnings or alerts for the candidate. This table
structure allows for the systematic storage and retrieval of monitoring data, enabling analysis and reporting on
candidate behaviour over time. By utilizing a relational database like SQLite, the system ensures data integrity,
efficient storage, and ease of querying, contributing to the overall effectiveness and reliability of the monitoring
solution.
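A minimal sketch of this table using Python's built-in sqlite3 module is given below; the column types and the .db file name are assumptions, since only the database, table, and column names are specified above.

    import sqlite3

    con = sqlite3.connect("Monitored_records.db")
    cur = con.cursor()
    cur.execute("""
        CREATE TABLE IF NOT EXISTS candidate (
            candidate_name TEXT,    -- name of the monitored candidate
            actions        TEXT,    -- action or warning detected
            time           TEXT,    -- timestamp of when the action occurred
            warnings       INTEGER  -- remaining warnings for the candidate
        )
    """)
    cur.execute(
        "INSERT INTO candidate (candidate_name, actions, time, warnings) "
        "VALUES (?, ?, datetime('now'), ?)",
        ("Jane Doe", "mobile phone detected", 1),  # illustrative record
    )
    con.commit()
    con.close()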

Data Science: In the provided code, data science principles are applied in several aspects. Firstly, the object detection
functionality, implemented using the YOLOv3 deep learning model, involves analysing image data to detect objects
such as people and mobile phones. This process encompasses data preprocessing, model training, and inference,
which are fundamental data science tasks. Secondly, facial landmark detection, facilitated by the Mediapipe library,
involves analysing facial features in real-time video streams, which is another application of data science in computer
vision. Additionally, the system stores monitored records in an SQLite database, allowing for data management and
analysis over time, which is a crucial component of data science. Overall, the code integrates data science
methodologies for object detection, facial analysis, and data storage, contributing to the development of a
comprehensive monitoring system.
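The facial landmark step can be sketched with the Mediapipe FaceMesh solution as follows; the parameter values and the multiple-face check are illustrative.

    import cv2
    import mediapipe as mp

    mp_face_mesh = mp.solutions.face_mesh
    cap = cv2.VideoCapture(0)

    with mp_face_mesh.FaceMesh(max_num_faces=2, refine_landmarks=True) as face_mesh:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            # Mediapipe expects RGB input, while OpenCV delivers BGR frames
            results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.multi_face_landmarks:
                # Each face yields hundreds of normalized (x, y, z) landmarks that
                # can be used to estimate head pose or gaze direction
                if len(results.multi_face_landmarks) > 1:
                    print("multiple faces detected")
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
    cap.release()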

HTML: The HTML code represents a web interface for an online test or monitoring system. It consists of several
elements designed to display real-time information and interact with the user. The main components include a header
displaying the title of the application ("Online test"), a section to show the number of warnings remaining for the
candidate being monitored, and an image element to stream the live video feed captured by the monitoring system.
The warnings remaining and any additional warnings or alerts are dynamically updated using JavaScript to provide
real-time feedback to the user. This interface is essential for users to monitor the status of the test or surveillance in
real-time, facilitating timely responses to any detected issues. Additionally, the styling applied to the HTML
elements enhances the visual presentation and user experience of the application. Overall, this HTML code serves
as the user interface for the monitoring system, enabling users to interact with and monitor the ongoing test or
surveillance activities.
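On the server side, the live feed consumed by the image element can be served by Flask routes along the lines of the following sketch; the route names, template name, and frame generator are assumptions for illustration.

    import cv2
    from flask import Flask, Response, render_template

    app = Flask(__name__)

    def generate_frames():
        # Yield JPEG-encoded webcam frames for an MJPEG stream (assumed source)
        cap = cv2.VideoCapture(0)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            ok, jpeg = cv2.imencode(".jpg", frame)
            if not ok:
                continue
            yield (b"--frame\r\n"
                   b"Content-Type: image/jpeg\r\n\r\n" + jpeg.tobytes() + b"\r\n")
        cap.release()

    @app.route("/")
    def index():
        return render_template("index.html")  # the monitoring page described above

    @app.route("/video_feed")
    def video_feed():
        # MJPEG stream consumed by the <img> element in the interface
        return Response(generate_frames(),
                        mimetype="multipart/x-mixed-replace; boundary=frame")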
CSS: Cascading Style Sheets is a style sheet language used for specifying the presentation and styling of a document
written in a mark-up language such as HTML or XML (including XML dialects such as SVG, MathML
or XHTML). We are defining styles for different HTML elements to enhance the visual presentation and user
experience of a web interface. The `body` element is styled to remove any default margins, set the background color
to a gradient from yellow to red, and ensure that the background image does not repeat and remains fixed within the
viewport. The `#header1` selector targets an element with the ID "header1" and applies styling to create a header
with a dark background, orange text color, rounded corners, and centered alignment. Additionally, a 2px black
border is applied to all `img` elements to provide a visual boundary. The `.flex` class is defined to create a flex
container, enabling flexible layout options for its child elements. Overall, these CSS styles contribute to a visually
appealing and cohesive design for the web interface, enhancing usability and aesthetics for the end user.
7. RESULT
ACKNOWLEDGEMENT
We would like to start by expressing our sincere gratitude to the Almighty, the most beneficent and the most merciful, for giving us the chance to complete the Interview Proctoring System in Data Science project. We would like to take this opportunity to express sincere thanks to the department and the university for this course, where we have had the opportunity to express our ideas and put our learning into practice. We would especially like to express our gratitude to our supervisor, Prof. Diksha Bhave, for her tremendous support and encouragement from the beginning till the completion of the project.

CONCLUSION

In conclusion, the interview proctoring system utilizes machine learning and the YOLO algorithm for face detection,
incorporating speech recognition and real-time monitoring of candidate behaviour. After secure authentication, the
system captures the candidate's face and records the interview. It actively identifies unusual behaviour, issues
warnings, and terminates the session if misconduct persists. This swift disqualification ensures the interview's
integrity, preventing disqualified candidates from future logins. The system acts as a robust tool, leveraging advanced
technologies to maintain fairness and security in the interview process.

REFERENCES

[1] Cynthia Zastudil, Magdalena, Christine Kapp, Jennifer Vaughn, Stephen MacNeil, "Generative AI in Computing Education: Perspectives of Students and Instructors", IEEE, 2021.

[2] Aysha Sultan Alkalbani, Amir Ahmad, "Cheating Detection in Online Exams Based on Captured Video Using Deep Learning", IEEE, 2021.

[3] Azmi Can Özgen, Mahiye Uluyağmur Öztürk, Umut Bayraktar, "Cheating Detection Pipeline for Online Interviews and Exams", IEEE, 2020.

[4] Razan Bawarith, Abdullah Basuhail, Anas Fattouh, Shehab Gamalel-Din, "E-exam Cheating Detection System", IEEE, 2020.

[5] Azmi Can Özgen, Mahiye Uluyağmur Öztürk, Umut Bayraktar, Selim Aksoy, "An Anti-Cheating System for Online Interviews and Exams", IEEE, 2019.

[6] Ozen et al., "Cheating Detection Pipeline for Online Interviews and Exams", IEEE, 2021.

[7] Alnassar et al., "A Machine Learning Approach to Classify Student Interview", IEEE, 2021.

[8] Dhilipan et al., "Prediction of Cheating by Students During Online Exam or Interview", IEEE, 2022.
