
International Journal of Computer Applications (0975 – 8887)

Volume 186 – No.42, September 2024

NLP and OCR based Automatic Answer Script Evaluation System

Pranav Deepak R., Rohan R., Rohith, Roopa R., PhD
Dept. of ISE, BMS College of Engineering, Bangalore, India

ABSTRACT
Evaluation of answer scripts is a tedious, laborious process in the education domain. This paper proposes a solution using two state-of-the-art technologies, Natural Language Processing (NLP) and Optical Character Recognition (OCR), to develop an Automatic Answer Script Evaluation System. The system is intended to simplify grading by automating the scoring of written responses in a consistent and accurate way. The NLP portion of the system is responsible for understanding the semantics of the textual content of answer scripts. It uses state-of-the-art language models to assess and infer the context, coherence, and entailment properties of the written answers. By using NLP to understand text, the system can check not only for correct grammar but also gauge how deeply a particular concept is understood.

Keywords
Natural Language Processing, OCR Analysis

1. INTRODUCTION
In the field of education, evaluation is a very important mechanism used to measure students' understanding of, and mastery over, various subjects. Traditionally, it has been a manual and time-consuming process that is also subject to a certain degree of subjective bias, which has motivated the push towards automated solutions. Technologies such as Natural Language Processing (NLP) and Optical Character Recognition (OCR) have now matured to the point where they can automate much of the evaluation process. This shift has led to the implementation of Automatic Answer Script Evaluation Systems (AASES), which use NLP and OCR techniques to analyze and evaluate students' responses more effectively and impartially.

NLP is a subfield of AI that enables computers to understand, interpret, and generate human language. By leveraging techniques such as machine learning, deep learning, and statistical modelling, NLP systems can process vast amounts of textual data and extract valuable insights. In the context of AASES, NLP algorithms are employed to analyze the semantic and syntactic structure of students' answers, allowing for the identification of key concepts, logical coherence, and grammatical accuracy.

Complementing NLP, OCR technology plays a crucial role in AASES by facilitating the extraction of textual information from handwritten or printed answer scripts. OCR systems utilize image processing algorithms to recognize and convert text from scanned documents into machine-readable format. This capability is particularly valuable in educational settings where answer scripts may be handwritten, ensuring that AASES can evaluate responses across a variety of formats.

The integration of NLP and OCR technologies in AASES offers several benefits over traditional manual evaluation methods. Firstly, it significantly reduces the time and effort required for grading, enabling educators to focus more on providing personalized feedback and guidance to students. Moreover, AASES can handle large volumes of answer scripts with consistency and impartiality, minimizing the impact of subjective biases inherent in human grading. Additionally, by providing instantaneous feedback, AASES promote active learning and encourage students to reflect on their responses, thereby fostering a deeper understanding of the subject matter.

Furthermore, AASES have the potential to adapt and evolve over time through continuous learning and refinement. By analyzing patterns in students' responses and feedback from educators, these systems can enhance their accuracy and effectiveness, ultimately leading to improved assessment outcomes. Moreover, AASES can be customized to accommodate different evaluation criteria, curriculum requirements, and language variations, making them versatile tools for educators across various disciplines and educational settings.

Despite the numerous advantages offered by AASES, challenges remain in their implementation and deployment. Ensuring the accuracy and reliability of NLP and OCR algorithms, especially in handling diverse languages, handwriting styles, and contextual nuances, is critical to the success of these systems. Additionally, addressing concerns related to data privacy, security, and ethical considerations is paramount to building trust and acceptance among stakeholders. Nonetheless, with ongoing advancements in NLP and OCR technologies, coupled with concerted efforts in research and development, AASES are poised to revolutionize the educational assessment landscape, offering scalable, efficient, and objective means of evaluating students' performance.
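To make the division of labour between the two components concrete, the minimal sketch below chains an OCR step to a simple similarity-based scoring step. It is illustrative only and is not taken from the paper: the use of pytesseract and scikit-learn, the file name, and the TF-IDF cosine score are all assumptions.

# Minimal OCR + NLP pipeline sketch (illustrative only; the paper does not
# prescribe these exact libraries). Assumes Tesseract is installed locally.
from PIL import Image
import pytesseract
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def extract_text(image_path: str) -> str:
    """OCR step: convert a scanned answer script into machine-readable text."""
    return pytesseract.image_to_string(Image.open(image_path))

def score_answer(student_text: str, model_answer: str, max_marks: float) -> float:
    """NLP step: grade by lexical/semantic overlap between student and model answers."""
    vectors = TfidfVectorizer(stop_words="english").fit_transform(
        [student_text, model_answer]
    )
    similarity = cosine_similarity(vectors[0], vectors[1])[0, 0]
    return round(similarity * max_marks, 2)

if __name__ == "__main__":
    # 'script_001.png' is a placeholder path, not a file used in the paper.
    answer = extract_text("script_001.png")
    print(score_answer(answer, "Photosynthesis converts light energy ...", max_marks=5))

In the full system, such a similarity score would be only one of several signals feeding the grading model described later in this paper.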


2. DETAILED SURVEY
[1] The research question under study focuses on designing an effective, accurate automatic grading system with a marginal percentage error for generally theory-based subjects, showing no disparity with the grading done by educators. The motivation behind this question is the bottleneck in the manual handling of answer scripts, which results in increased time consumption, lack of efficiency and, most importantly, bias in score assignment. The approaches used in the paper incorporate Natural Language Processing (NLP), semantic analysis, and ontology with the aim of creating an intelligent grading system. To turn the answer scripts into machine-readable format, an OCR feature is adopted; it not only identifies textual content but is also capable of dealing with other components such as tables and figures. The paper presents methods and best approaches to grading using machine learning techniques, including the application of support vector machines. Unfortunately, the datasets involved are not discussed in the paper; however, it is probable that the researchers employed a set of answer scripts for training and development and a set of grading criteria or rubrics for grading the students' papers.

[2] acknowledges the effectiveness of an automatic method of essay scoring in mitigating the issue of limited time in marking writing assignments and the subjectivity of the grading process. The procedures adopted in the paper involve the use of Natural Language Processing (NLP), sentiment analysis, and machine learning, specifically Long Short-Term Memory (LSTM) models, for grading essays written in English. The relevant features are identified using NLP algorithms, and the approach utilizes syntactic, semantic, and sentiment features of the essays to predict grades by employing LSTM models.

[3] The paper identifies the challenges and limitations of manual evaluation of subjective answers, such as bias, inconsistency, time consumption, and human resources. It aims to develop a system that can automate the evaluation process and reduce the need for human intervention. The paper presents a two-part system: a checker and an evaluator. The checker takes a question, a student's answer, an expected answer, and total marks as input, and assigns a score to the student's answer based on grammar, keywords, and similarity. The evaluator takes a sample of students' answers and finds the best combination of evaluation techniques and weights for each question. The system allows the user to choose from different methods for keyword extraction, summarization, and similarity checking, or to use the optimal combination suggested by the evaluator.

[4] The paper is organized by first presenting the background work, which is divided into research techniques covering similarity measures and machine learning techniques. The paper also reviews the pros and cons of these methods and offers recommendations for an ideal grading system: automation of answer script evaluation makes grading bias-free and coherent, so there is a need to establish a model that improves the precision and reliability of grading, because the outcome of the assessments concerns the student's future. Having reviewed the literature, the authors established that there exist two primary strategies in answer grading: similarity measures and machine learning strategies. While similarity-based measures do not require a large training set, they are not effective when open-ended responses must be mined. On the other hand, ML techniques expand the possible coverage of grading systems and perform well even with semi-open-ended questions, but an enormous labelled training set is needed for each question, which may not be convenient at all.

[5] The paper uses various methodologies for each component of the system, such as OCR, NLP, machine learning, and similarity algorithms. For image text extraction, the paper uses py-tesseract, a Python-based OCR tool that converts images into text. For summarization, the paper uses a keyword-based technique that selects the most frequent words and avoids the less frequent words to generate a summary. For text preprocessing, the paper uses NLTK, a popular framework for natural language processing, and performs tokenization, stop-word removal, lemmatization, bigram creation, and word frequency counting. For information retrieval, the paper uses a word2vec model to convert words into vectors and measure their semantic similarity. For mark scoring, the paper uses four similarity measures: cosine similarity, Jaccard similarity, bigram similarity, and synonym similarity, which compare the student's answer with the correct answer and calculate a score based on the angle, intersection, structure, and synonyms of the sentences.

[6] proposes a system that consists of the following steps: input image, preprocessing, feature extraction, text recognition, NLP techniques, data splitting, classification, mark evaluation, and performance metrics. The system uses the py-tesseract library for OCR, the mean and standard deviation for feature extraction, an artificial neural network (ANN) for classification, and the number of words and letters for mark evaluation. The paper applies methodologies such as image processing, OCR, NLP, and deep learning to implement the proposed system, and uses tools such as tkinter, matplotlib, and numpy for data handling and visualization. It reviews previous works related to OCR, NLP, and answer evaluation using machine learning, cites some of the challenges and limitations of the existing methods, and highlights the novelty and advantages of the proposed system.

[7] Preprocessing the answer scripts involves methods like tokenization, lemmatization, and word embedding, which convert the answer scripts into numerical vector form. The paper then employs deep learning techniques such as LSTM, recurrent neural networks, and dropout to learn a semantic representation of the answer scripts and assign scores to them. In the study, the D-DAS is trained and evaluated through supervised learning by providing answer scripts along with human-assessed scores as the manual dataset. The paper reviews the existing literature on AES and other short answer grading systems, summarizing their strengths and weaknesses. It also walks through various forms of LSTM models, including simple LSTM, deep LSTM, and bidirectional LSTM, as well as their use in practical natural language processing and information retrieval applications.

[8] The paper also outlines earlier works on the use of computers for evaluation, text mining, and the measurement of text similarity. For the assessment of student performance, an evaluation paradigm is described that involves a powerful and effective Natural Language Processing (NLP) algorithm. The research led to the creation of a tool that incorporates NLP analysis along with an Artificial Neural Network (ANN) to perform the calculations. A filter set for matching an answer in the examination process is developed by the faculty in the form of an answer sheet and a keyword dataset corresponding to the answer; these datasets are kept in a data storage system. The results are then compared by the ANN algorithm to identify whether they contain the correct answer from the student. The student's answer is also corrected for spelling and grammatical mistakes whenever inconsistencies are found, using the NLP algorithm. The results generated from the text mining technique are calculated as soon as the NLP and ANN techniques reach the end of their process.
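To make the four measures described in [5] (and reused in [14]) concrete, the sketch below shows one plausible NLTK-based implementation of cosine, Jaccard, bigram, and synonym similarity. The preprocessing choices and the equal-weight average are assumptions for illustration; the cited papers do not specify their exact weighting.

# Sketch of the four similarity measures mentioned in [5] and [14]
# (cosine, Jaccard, bigram, synonym). Illustrative only; the weights and
# preprocessing details are assumptions, not taken from the cited papers.
import math
from collections import Counter

import nltk
from nltk.corpus import stopwords, wordnet
from nltk.stem import WordNetLemmatizer
from nltk.tokenize import word_tokenize
from nltk.util import bigrams

for pkg in ("punkt", "punkt_tab", "stopwords", "wordnet"):
    nltk.download(pkg, quiet=True)

_lemmatizer = WordNetLemmatizer()
_stop = set(stopwords.words("english"))

def preprocess(text: str) -> list[str]:
    """Tokenize, lowercase, drop stop words and punctuation, and lemmatize."""
    return [_lemmatizer.lemmatize(t.lower()) for t in word_tokenize(text)
            if t.isalpha() and t.lower() not in _stop]

def cosine_sim(a: list[str], b: list[str]) -> float:
    """Angle-based overlap between the two bags of words."""
    ca, cb = Counter(a), Counter(b)
    dot = sum(ca[w] * cb[w] for w in ca)
    norm = math.sqrt(sum(v * v for v in ca.values())) * math.sqrt(sum(v * v for v in cb.values()))
    return dot / norm if norm else 0.0

def jaccard_sim(a: list[str], b: list[str]) -> float:
    """Intersection over union of the two word sets."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def bigram_sim(a: list[str], b: list[str]) -> float:
    """Structural overlap measured on adjacent word pairs."""
    ba, bb = set(bigrams(a)), set(bigrams(b))
    return len(ba & bb) / len(ba | bb) if ba | bb else 0.0

def synonym_sim(a: list[str], b: list[str]) -> float:
    """Fraction of student tokens matching a model-answer token or one of its WordNet synonyms."""
    expanded = set(b)
    for w in b:
        for syn in wordnet.synsets(w):
            expanded.update(l.name().lower() for l in syn.lemmas())
    return sum(1 for w in a if w in expanded) / len(a) if a else 0.0

def combined_score(student: str, model: str, max_marks: float) -> float:
    a, b = preprocess(student), preprocess(model)
    measures = [cosine_sim(a, b), jaccard_sim(a, b), bigram_sim(a, b), synonym_sim(a, b)]
    return round(sum(measures) / len(measures) * max_marks, 2)  # equal weights assumed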


[9] presents NLP techniques, such as tokenization, part-of-speech tagging, stop word removal, stemming, and semantic similarity checking, to preprocess and analyze the student answers and compare them with the standard answers. It uses Latent Semantic Analysis (LSA), an NLP technique based on a mathematical model that creates a vector representation of a document and measures the similarity between documents by calculating the distance between vectors, and Bilingual Evaluation Understudy (BLEU), an algorithm that analyzes and measures the similarity between the student answer and the standard answer based on n-gram co-occurrence matching.

[10] presents a system for online paper evaluation using NLP for handwritten answer sheets and automatic mark sheet publishing. The system consists of the following modules: registration and login, upload, OCR, tokenization, similarity check, and scoring. The system allows students to upload their scanned answer sheets and teachers to upload their answer keys. The system then converts the answer sheets into text using OCR, tokenizes the text and removes stop words, compares the text with the answer keys using WordNet and Corpus, and assigns marks based on the cosine similarity measure. The system also generates a mark sheet for each student and displays the results to the users. The paper uses the cosine formula to calculate the similarity score between the answer sheet and the answer key, and to determine the marks obtained by the student.

[11] describes the use of NLP and ML in creating a model to assess free-response answer scripts. The paper attempts to offer a solution to the general problem of how answer scripts in formative and summative assessments, general tests, and examinations are evaluated, especially during the COVID-19 pandemic and the lockdown. Accordingly, the paper presents a model for scoring descriptive answers with the help of a similarity feature calculated from answer keywords extracted from the reference solution. The paper also examines several prior systems and research studies that deal with assessing answer scripts through text extraction, similarity estimation, BLEU modifications, probabilistic semantic/text relatedness assessment, ontology, artificial neural networks, WordNet, Word2vec, WMD, cosine similarity, multinomial naïve Bayes, and term frequency-inverse document frequency. The paper validates the model on a local dataset by comparing the reference answers with student answers on computerized tests and comparing the two sets of answers manually. The paper states that the proposed model achieved an average accuracy of 80% and produced a text file that gives the score for the answers. To support their arguments, the authors also present a graphical representation of the validation carried out manually and with the proposed system.

[12] The paper proposes a system called Automatic Answer Checker (AAC), which consists of a web-based interface for uploading question papers and answer sheets, and a machine learning module for analyzing and scoring the answers. The system uses natural language processing techniques such as word tokenization, stop-word and punctuation removal, and stemming to preprocess the text and extract keywords. The system then compares the keywords in the student's answer with the keywords in the model answer and calculates a similarity score. Based on the score, the system assigns marks to the student and displays them on the web interface.

[13] describes a system for the automatic scoring of descriptive answers using machine learning. Feature extraction is another important process in the system, where features are extracted through n-grams, cosine similarity, latent semantic analysis, and string similarity. Categorization models such as artificial neural networks, support vector machines, and linear regression are employed to assign grades. The system also provides specific scores reflecting the level of the answers, along with recommendations and tips. The paper presents a literature review focusing on automated question answering in natural language and the evolution of research in this field from the initial advances in artificial intelligence to the present time. The paper categorizes the existing systems into three types: corpus-based, information extraction, and mapping. Furthermore, it provides an overview of the research limitations and future tasks in the domain, which include content analysis, semantic analysis, and feedback systems.

[14] addresses the challenge of evaluating students' performance through answer scripts. Traditional manual evaluation can be biased and is influenced by various factors such as the mood of the evaluator and the relationship between the student and evaluator. The paper proposes an automatic answer script evaluation system based on Natural Language Processing (NLP). The system takes a student's written answer as input and automatically assigns marks after the evaluation. The system considers factors such as spelling errors, grammatical errors, and various similarity measures for scoring marks, and uses NLP for handling the English language used in the answers. For summary generation from the extracted text, keyword-based summarization techniques are used. Four similarity measures (cosine, Jaccard, bigram, and synonym) are used as parameters for generating the final mark. The paper discusses the motivation behind automated answer script evaluation, which includes less time consumption, less manpower involvement, insulation from the human evaluator's psychological swings, and easy record keeping and retrieval.

[15] The paper presents a text analysis pipeline consisting of four stages: OCR, sentence boundary detection, tokenization, and part-of-speech tagging. The paper uses freely available open-source software packages for each stage and applies them to a large dataset of scanned news articles with different levels of degradation. It then compares the results of the text analysis stages on the clean and noisy versions of the same documents using the proposed evaluation paradigm, which can identify and track individual OCR errors and their cascading effects. The paper also proposes a novel evaluation paradigm based on hierarchical dynamic programming to measure and analyze the impact of OCR errors on NLP stages.
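The keyword-matching style of grading surveyed above, for example in [12], can be sketched as follows. The Porter stemmer and the proportional marking rule are illustrative assumptions rather than details taken from that paper.

# Sketch of a keyword-overlap checker in the spirit of [12]: preprocess both
# answers, extract keywords, and award marks in proportion to the keywords
# from the model answer that the student covers. Details are assumptions.
import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk.tokenize import word_tokenize

for pkg in ("punkt", "punkt_tab", "stopwords"):
    nltk.download(pkg, quiet=True)

_stemmer = PorterStemmer()
_stop = set(stopwords.words("english"))

def keywords(text: str) -> set[str]:
    """Tokenize, remove stop words and punctuation, and stem the remaining words."""
    return {_stemmer.stem(t.lower()) for t in word_tokenize(text)
            if t.isalpha() and t.lower() not in _stop}

def award_marks(student_answer: str, model_answer: str, total_marks: float) -> float:
    """Give marks in proportion to the model-answer keywords found in the student answer."""
    model_kw = keywords(model_answer)
    if not model_kw:
        return 0.0
    covered = keywords(student_answer) & model_kw
    return round(total_marks * len(covered) / len(model_kw), 2)

# Example usage with hypothetical answers.
print(award_marks(
    "Photosynthesis uses sunlight to make glucose in the chloroplast.",
    "Photosynthesis converts light energy into glucose within the chloroplast.",
    total_marks=5,
))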


3. ARCHITECTURE

Fig 1. Proposed Architecture

The suggested architecture offers a complete solution for automating the checking of answer scripts, using modern technologies for efficiency and precision while ensuring user friendliness and data security. A web-based interface that is easy to navigate for both teachers and learners takes center stage in this architectural design. Educators may upload scripts, view evaluated results, and give feedback through this hub. It is designed so that anyone can easily understand how it works, allowing users to interact with its different parts seamlessly.

Another important integration is an Optical Character Recognition (OCR) system. This component makes it possible to extract text-based information from responses, including those written by hand or containing non-textual features, thereby setting the ground for further examination. Next, the written responses are analyzed by Natural Language Processing (NLP) algorithms, which consider their semantic content and coherence. The NLP analysis investigates language subtleties, measures depth of comprehension, and checks contextual appropriateness. The system also uses sophisticated linguistic processing methods to determine the quality of student responses more accurately.

The architecture is supported by a secure database system so that the answer scripts, OCR results, NLP analyses, grades, and feedback can be stored safely, in compliance with privacy regulations and ensuring confidentiality as well as integrity. This strong backend infrastructure serves as the spine of the system, protecting sensitive data while enabling its various functions. In general terms, the proposed structure represents an all-round, advanced technological approach to streamlining assessment automation workflows while improving the educational experience of both teachers and learners.

4. METHODOLOGY
4.1 Data Collection and Preprocessing
1) Answer Script Collection: Collect a diverse set of handwritten or typed answer scripts from various educational institutions or examinations. Ensure that the dataset covers a range of subjects, difficulty levels, and writing styles.
2) Digitization: Scan the collected answer scripts to create digital images or documents that can be processed by the OCR system.
3) Ground Truth Preparation: Establish a ground truth dataset by manually grading a subset of the collected answer scripts. This ground truth will be used to train and validate the NLP algorithms.

4.2 OCR Processing
1) OCR Implementation: Implement an Optical Character Recognition (OCR) system to extract the textual content from the digitized answer scripts. Ensure that the OCR system can handle both textual and non-textual elements (e.g., diagrams, formulas) present in the answer scripts.
2) OCR Accuracy Evaluation: Assess the accuracy of the OCR system by comparing the extracted text with the ground truth data. Identify and address any issues or limitations in the OCR performance (a minimal sketch of this step follows Section 4.4).

4.3 NLP Analysis
4) Feature Extraction: Develop NLP algorithms to extract relevant features from the OCR-processed text, such as semantic content, language complexity, coherence, and contextual relevance.
5) Scoring Model Development: Design a scoring model that can effectively evaluate the quality and correctness of the written responses based on the extracted features. Incorporate techniques like text similarity, sentiment analysis, and knowledge-based scoring.
6) Model Training and Validation: Train the scoring model using the ground truth dataset. Employ cross-validation techniques to ensure the model's generalization and robustness (see the second sketch after Section 4.4).
7) Model Optimization: Continuously refine and optimize the NLP algorithms and scoring model based on the performance on the validation dataset.

4.4 Data Storage and Management
1) Database Design: Design a secure database system to store the digitized answer scripts, OCR results, NLP analyses, grades, and feedback.
2) Data Integrity and Privacy: Ensure data integrity, confidentiality, and compliance with relevant privacy regulations throughout the data storage and management processes.
3) Database Integration: Integrate the database seamlessly with the other components of the proposed system, enabling efficient data storage, retrieval, and management.
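A minimal sketch of steps 4.2(1) and 4.2(2) is given below. The choice of pytesseract as the OCR engine and of a character-level similarity ratio as the accuracy measure are assumptions, since the methodology does not fix either; the file names are hypothetical.

# Sketch of the OCR step (4.2): extract text from a digitized script and
# measure accuracy against a manually transcribed ground truth.
from difflib import SequenceMatcher

import pytesseract
from PIL import Image

def ocr_extract(image_path: str) -> str:
    """Run OCR on a scanned answer script and return the recognized text."""
    return pytesseract.image_to_string(Image.open(image_path))

def character_accuracy(recognized: str, ground_truth: str) -> float:
    """Approximate OCR accuracy as the character-level similarity ratio."""
    return SequenceMatcher(None, recognized, ground_truth).ratio()

if __name__ == "__main__":
    # 'script_001.png' and its transcription are hypothetical inputs.
    text = ocr_extract("script_001.png")
    truth = open("script_001_ground_truth.txt", encoding="utf-8").read()
    print(f"OCR accuracy: {character_accuracy(text, truth):.2%}")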

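Similarly, steps 4.3(5) and 4.3(6) could be prototyped as sketched below. TF-IDF features and ridge regression are stand-in choices, and the labelled answers are hypothetical; the methodology leaves the concrete scoring model open.

# Sketch of scoring-model training and validation (4.3): learn to predict
# human-assigned marks from OCR-processed answer text, then cross-validate.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Hypothetical ground-truth data: answer text and the marks a human grader assigned.
answers = [
    "Photosynthesis converts light energy into chemical energy stored in glucose.",
    "Plants make food using sunlight.",
    "The mitochondria is the powerhouse of the cell.",
    "Photosynthesis happens in the chloroplast and produces oxygen and glucose.",
]
human_marks = [5.0, 3.0, 0.5, 4.5]

model = make_pipeline(TfidfVectorizer(stop_words="english"), Ridge(alpha=1.0))

# k-fold cross-validation (step 6); k=2 only because the toy dataset is tiny.
scores = cross_val_score(model, answers, human_marks, cv=2, scoring="neg_mean_absolute_error")
print("Mean absolute error per fold:", [-s for s in scores])

# Fit on all labelled data and score a new, unseen answer (hypothetical).
model.fit(answers, human_marks)
print("Predicted mark:", model.predict(["Chloroplasts capture sunlight to produce glucose."])[0])

In practice the ground-truth set from Section 4.1 would replace the toy data, and the cross-validation scores would drive the optimization loop of step 4.3(7).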

Fig 2. Workflow Diagram

5. RESULTS
To orchestrate this sophisticated system, a methodology was devised to modernize the assessment process of educational institutions. A dynamic website with strong login authentication is built to upload and view answer scripts using the above steps. The site also has an intuitive interface that can be easily explored; the collection of digitized scripts can be accessed by student ID, department, semester exam, and subject.

Upon submission, the answer scripts are subjected to an OCR process that converts the handwritten or typed data into computer-readable text. This is not a simple mechanical conversion of text; it is the stage at which the key constructs from each response are built, recorded, and made available for analysis. The extracted text is carefully recorded in a secured database for further evaluation and later feedback under human oversight. The core strength of the system lies in its NLP capabilities, where algorithms trained on language read through the answers. These algorithms pick up on semantic nuances, peel back layers of complexity, and check responses for coherence and consistency along many dimensions. With this linguistic expertise, they construct a scoring framework that ensures the questions are assessed fairly and thoughtfully. Moreover, once the evaluation is generated, the response is compared, using cosine similarity, against ground truth answers in the training corpus. This analysis is the objective foundation for awarding marks in a manner that guarantees fairness and alleviates teacher bias in grading. Meanwhile, for answers that include diagrams and visual representations, a deep learning model takes center stage. This model, attuned to the intricacies of visual data, delivers a nuanced assessment based on the similarity and fidelity of the diagrams, enriching the grading process with a holistic perspective.

In essence, this amalgamation of OCR, NLP, and deep learning technologies heralds a new era in educational assessment, one characterized by precision, transparency, and adaptability. The website stands not only as a testament to technological innovation but also as a beacon of progress, ushering in a paradigm shift in the way academic achievement is perceived and evaluated. With data integrity and privacy enshrined at its core, this system embodies the ideals of trust and accountability, paving the way for a future where assessment transcends mere scrutiny, evolving into a catalyst for growth and excellence.

6. CONCLUSION
The development and implementation of an Automated Answer Script Evaluation System represent a pivotal advancement in the educational technology landscape, aiming to address the challenges associated with manual evaluation processes. The system outlined in this report integrates cutting-edge technologies such as Optical Character Recognition (OCR) and Natural Language Processing (NLP) to revolutionize the grading paradigm. The comprehensive set of functional requirements, usability enhancements, and non-functional considerations collectively shape a robust framework for an efficient, accurate, and user-friendly solution.

The system's key functionalities, including user authentication, answer script submission, OCR processing, NLP analysis, non-textual element recognition, grading interface, feedback mechanism, and data storage, collectively ensure a holistic approach to automated evaluation. By implementing role-based access control and real-time feedback mechanisms, the system not only streamlines the evaluation process but also contributes to improved educational outcomes and personalized learning paths.

The emphasis on non-functional requirements, including performance, scalability, usability, maintainability, and compatibility, underscores the commitment to delivering a solution that meets the highest standards of efficiency, reliability, and adaptability. The software requirements, centered around web hosting, NLP modules, and a secure database, along with specific hardware prerequisites, form the backbone of a technology stack designed to handle the complexities of large-scale assessment processes.


7. REFERENCES
[1] A. Rokade, B. Patil, S. Rajani, S. Revandkar, and R. Shedge, "Automated Grading System Using Natural Language Processing," in 2018 Second International Conference on Inventive Communication and Computational Technologies (ICICCT), Coimbatore, India, 2018, pp. 1123-1127, doi: 10.1109/ICICCT.2018.8473170.
[2] V.S. Sadanand, K.R. Guruvyas, P.P. Patil, J. Janardhan Acharya, and S. Gunakimath Suryakanth, "An automated essay evaluation system using natural language processing and sentiment analysis," International Journal of Electrical and Computer Engineering (IJECE), 2022.
[3] V. Kumari, P. Godbole, and Y. Sharma, "Automatic Subjective Answer Evaluation," 2023, doi: 10.5220/0011656000003411.
[4] A.K.R. Maya, J. Nazura, and B.L. Muralidhara, "Recent Trends in Answer Script Evaluation – A Literature Survey," 2023, doi: 10.2991/ahis.k.210913.014.
[5] Prof. S.P. Raut, S.D. Chaudhari, V.B. Waghole, P.U. Jadhav, and A.B. Saste, "Automatic Evaluation of Descriptive Answers Using NLP and Machine Learning," 2022, doi: 10.48175/IJARSCT-3030.
[6] S. K. et al., "Automatic Answer Evaluation Using Deep Learning Algorithms," 2022, doi: 10.31838/ecb/2023.12.s3.039.
[7] G. Ng'Ochoi, P. Sijimol, and S. Mariam Varghese, "Grading descriptive answer scripts using deep
[8] learning," International Journal of Innovative Technology and Exploring Engineering, vol. 8, pp. 991-996, 2019.
[9] V. Lakshmi and V. Ramesh, "Evaluating Student's Descriptive Answers Using Natural Language Processing and Artificial Neural Networks," ISSN: 2320-2882, 2017.
[10] N. D. Kamraj, "Survey on Techniques used for Evaluation of Exam Answer Papers," 2020.
[11] M. Sebastian, R. Kunjumon, S. Shaji, and Prof. S. R., "DigiValuate: Answer Sheet Evaluation System using Natural Language Processing," 2021.
[12] S. Mangesh, P. Maheshwari, and A. Upadhyaya, "Subjective Answer Script Evaluation using Natural Language Processing," 2022.
[13] V. Tanwar, "Machine Learning based Automatic Answer Checker Imitating Human Way of Answer Checking," International Journal of Engineering Research & Technology (IJERT), vol. 10, no. 12, December 2021.
[14] B. S. J. Kapoor, S. M. Nagpure, S. S. Kolhatkar, P. G. Chanore, M. M. Vishwakarma, and R. B. Kokate, "An analysis of automated answer evaluation systems based on machine learning," in 2020 International Conference on Inventive Computation Technologies (ICICT), Feb. 2020, pp. 439-443, doi: 10.1109/ICICT48043.2020.9112429.
[15] M. M. Rahman and F. H. Siddiqui, "NLP-based Automatic Answer Script Evaluation," 2018.
[16] D. Lopresti, "Optical character recognition errors and their effects on natural language processing," IJDAR, vol. 12, pp. 141-151, 2009, doi: 10.1007/s10032-009-0094-8.

