
Volume 6, Issue 4, April – 2021 International Journal of Innovative Science and Research Technology

ISSN No:-2456-2165

Automatic Question Paper Generation, according to Bloom's Taxonomy, by generating questions from text using Natural Language Processing

Shivali Joshi*, Parin Shah*, Sahil Shah*
*Bachelor of Technology, Department of Information Technology, K.J. Somaiya College of Engineering, Mumbai, India.

Abstract:- The ongoing research on "Natural Language Processing and its applications in the educational domain" has witnessed various approaches for question generation from paragraphs. Despite the existence of numerous techniques for the automatic generation of questions, only a few have been implemented in real classroom settings. This research paper reviews existing methods and presents an AQGS (Automatic Question Generation System) that uses Natural Language Processing libraries like NLTK and Spacy to suggest questions from a passage provided as input. The question paper is generated by randomly selecting questions for a specific level of Bloom's Taxonomy. We conclude by determining the efficacy of the AQGS using performance measures like accuracy, precision, and recall.

Keywords:- Question Generation, Bloom's Taxonomy, Natural Language Processing (NLP), Natural Language Toolkit (NLTK), Spacy, POS Tagging, Named Entity Recognizer (NER).

I. INTRODUCTION

Researchers belonging to various disciplines have started working on AQGS for educational purposes. Examinations, being one of the crucial parts of education, are conducted to test the caliber of the examinees. Examiners largely depend on themselves for making test papers. Setting a question paper with the least number of repeated questions is the most time-consuming task: going through pages to find new questions that are appropriate for the exam is cumbersome. Additionally, the capabilities of a candidate can be evaluated only with a question paper that consists of the right proportion of theoretical and application-based questions.

The proposed system aims to solve both of the above-mentioned problems. The feature of question extraction from paragraphs provides professors with ample new questions within a few seconds, saving a considerable amount of time, and AQGS gives the examiner a huge repository of questions with which to avoid question repetition in the examination. Secondly, a question paper set with respect to Bloom's Taxonomy ensures goal-based learning and can efficiently evaluate the knowledge of a student. For a specific level of Bloom's Taxonomy, the random selection of questions from the large question bank eliminates any possibility of human bias, making every test paper unpredictable. The system therefore proves beneficial for online school examinations, especially during the pandemic, for creating new questions whose answers are not directly available on the internet, thereby reducing student malpractice. The questions generated can be used by teachers to set test papers, and students can leverage them for self-evaluation to gauge their grasp of a particular topic. This automation reduces cost and labor, rules out human error, and arms the user with a fast and easy-to-use question-generating tool.

Generally, the three major components of Question Generation are input pre-processing, sentence selection, and question formation. The input text is filtered by removing unnecessary words and punctuation that do not contribute to the meaning of the sentence. The sentences or phrases from which questions can be formed are segregated from the remaining text. These are mapped to the type of question (what, where, when, etc.) that can be formulated from the selected sentence, followed by the final step of framing a grammatically sound question.

Fig. 1. Generic Approach of Question Generation

II. LITERATURE SURVEY

Automatic Multiple Choice Question Generation from Text: A Survey [1] reviews 86 articles and summarizes existing methodologies for generating MCQs from a text. After analyzing these methods, the paper presents a detailed flowchart of a question-generating system that consists of six phases: pre-processing, sentence selection, key selection, question formation, distractor generation, and post-processing. A comprehensive discussion of various

IJISRT21APR580 www.ijisrt.com 495



techniques for the implementation of each phase, along with recent trends and challenges for MCQ generation, is presented in the paper.

In Automatic Cloze Question Generation (CQG) [2], an article in English is provided as input, from which the system generates a list of cloze questions: sentences comprising one or multiple blanks. Sentence selection, keyword selection (potential blank selection), and distractor selection (selecting alternate answers for the blank) are the major components of CQG. To begin with, potential sentences are selected, followed by keyword selection on the basis of NER; finally, domain-specific distractors are generated based on the knowledge base provided to the model. Manual evaluation of the system is done for each sentence, keyword and distractor selection.

In Automatic Question Generation using Discourse Cues [3], the system can be viewed as content selection and question formation. The emphasis is on the recognition of discourse markers and discerning important discourse relations like causal, temporal, contrast, result, etc. After identification of the relevant text for framing a question, (seven) discourse connectives are specified for finding the type of Wh-question (like why, where, which and when), and syntax transformations are performed. Semantic and syntactic evaluation of the system is done.

In Semantic Based Automatic Question Generation using Artificial Immunity [4], both SRL (Semantic Role Labelling) and NER (Named Entity Recognizer) are used for the conversion of input text into a semantic pattern. An artificial immune system that uses feature extraction, learning, storage and associative retrieval classifies patterns according to question type, like who, when, where, why, and how. The input sentence is mapped into a pattern through SRL (used for feature extraction) and NER, and depending on the question type, the sentence pattern is realized. 170 sentences were mapped into 250 patterns that were used for training and testing. For evaluation, recall, precision and F-measure were used. The proposed model has a classification accuracy of over 95%, and 87% in generating new question patterns.

A Combined Approach Using Semantic Role Labelling and Word Sense Disambiguation for Question Generation and Answer Extraction [5] introduces a joint model of question formation and answer identification using Natural Language Processing. The question generation part makes use of SRL and WSD (Word Sense Disambiguation) techniques, while the answer extraction part uses NER along with SRL. Simple sentences are provided as input to the model. The question and answer pairs obtained for a set of sentences were analyzed to evaluate the accuracy of question generation and answer extraction.

In Automatic Question Generation from Given Paragraph [6], the paper presents a web application in which simple and complex Wh-questions are generated from a paragraph. The input is mapped to a set of predefined rules depending on the verb, subject, object, and prepositions that a sentence comprises. A POS tagger is used to label the part of speech of each word, a dependency parser analyses the grammatical structure of the sentence and the relations between words, and Support Vector Machine (SVM) is the algorithm used for classification. Human evaluation is done to check the semantic and syntactic accuracy of the output generated.

Similarities in Words Using Different POS Taggers [7] presents a comparison of four different POS taggers (NLTK, Freeling, NLP Tagger and Cognitive POS Tagger) to identify the proper tag for a given text. The paper analyses the results of each tagger for Wh-questions like how, what, which, where, who and why. Out of 350 wh-questions, 154 received contrasting tags from these four tools, and the results can be summarized by stating that NLTK outperforms the other taggers by labeling words with the right part of speech. We use the NLTK tagger for POS tagging, along with other NLTK algorithms like the Lancaster Stemmer and WordNet Lemmatizer that are discussed in the following sections.

III. SPACY

Spacy is one of the go-to libraries of NLP enthusiasts, built specifically to process and help us understand large volumes of text. The Spacy framework, written in Cython, is a quite fast library that supports multiple languages like English, Spanish, French, German, Dutch, Italian and Greek. It comprises various models of trained vectors, vocabularies, syntaxes, and entities, which are loaded based on requirements. For the English "core web" family, the default package is 'en_core_web_sm', where 'sm' stands for small. Spacy has three model sizes in English: small, medium, and large. As the names suggest, these models vary in size and accuracy. In the proposed system we load the large package, which is used for entity recognition, for better accuracy and precision.

>>>import spacy
>>>nlp = spacy.load("en_core_web_lg")

IV. NATURAL LANGUAGE TOOLKIT

NLTK, the Natural Language Toolkit, is the mother of all NLP libraries. It provides lexical resources, over 50 corpora, and a set of libraries for tokenization, stemming, classification, tagging, and semantic reasoning, among many others. It is a platform to develop programs that require natural language processing in the Python language. NLTK is a crucial component of the AQGS system presented in this paper.

A. Lancaster Stemmer
Stemming in Natural Language Processing refers to the process of reducing words to their stem or root word. This word stem may not be a dictionary root word; it is just a smaller or equal form of the word. For instance, 'retrieves', 'retrieval' and 'retrieved' reduce to the root 'retrieve'. Porter's Stemmer, Lovins Stemmer, Dawson


Stemmer, Xerox Stemmer and Snowball Stemmer are some of the many stemming algorithms developed to date. The proposed approach uses the Lancaster Stemmer algorithm provided by NLTK, as it is dynamic, fast and aggressive compared to the others. It uses an iterative algorithm and saves its rules externally, meaning custom rules can be added. We use the Lancaster Stemmer during question generation to stem the verb from a sentence and generate a skeleton of the question after POS tagging of the sentence from which the question is to be formed. Sometimes it transforms words into strange roots, so proper care of spelling errors in the sentences needs to be taken while using this algorithm. In the example given below, 'troubl' isn't a stem word according to the dictionary, but the stemmer reduces words similar to 'trouble' into the stem 'troubl'.

>>>Lancaster = LancasterStemmer()
>>>print("cat : ", Lancaster.stem("cat"))
>>>print("trouble : ", Lancaster.stem("trouble"))
>>>print("troubling : ", Lancaster.stem("troubling"))

Output:
cat : cat
trouble : troubl
troubling : troubl

B. WordNet Lemmatizer
Lemmatization groups together different forms of a word so they can be analyzed as a single entity, in order to identify the dictionary root word, called the 'lemma'. Lemmatization is similar to stemming to some extent. The key difference is that lemmatization aims to get rid of the inflectional endings that stemming can leave behind: the output of lemmatization has some context to it and, importantly, the word holds a meaning, unlike stemming.

WordNet is a large, free and publicly available lexical database of the English language. It can be viewed as a thesaurus where similar words are grouped into sets (synsets), each individually expressing a distinct concept. The main aim is to develop a structured semantic relationship between words. NLTK provides an interface to access this dictionary, the WordNet corpus reader. After download and installation, an instance of WordNetLemmatizer() is needed to lemmatize words, similar to the stemming example.

>>>lemmatizer = WordNetLemmatizer()
>>>print("trouble : ", lemmatizer.lemmatize("trouble"))
>>>print("rocks : ", lemmatizer.lemmatize("rocks"))
>>>print("corpora : ", lemmatizer.lemmatize("corpora"))

Output:
trouble : trouble
rocks : rock
corpora : corpus

C. Part-of-Speech Tagging
POS tagging is the technique of labeling each token (word) in a text corpus with its appropriate part of speech. The POS tagger is one of the most powerful aspects of the Natural Language Toolkit (NLTK). It first reads the sentence and then assigns a part of speech (such as noun, verb, adjective, etc.) to each token. Every part of speech is represented with a tag. For the question generation process, we focus mainly on NN, NNS, NNP, NNPS, VB, VBN, VBD, PRP and PRP$.

Example:
>>>import nltk
>>>print(nltk.pos_tag(nltk.word_tokenize("Hey, how are you doing?")))

Output:
[('Hey', 'NNP'), (',', ','), ('how', 'WRB'), ('are', 'VBP'), ('you', 'PRP'), ('doing', 'VBG'), ('?', '.')]

TABLE 1. MAJOR POS TAGS LIST
Tag  | Part of Speech
NN   | noun, singular ('chair')
NNS  | noun, plural ('chairs')
NNP  | proper noun, singular ('Jones')
NNPS | proper noun, plural ('Indians')
VB   | verb, base form ('take')
VBN  | verb, past participle ('taken')
VBD  | verb, past tense ('took')
VBP  | verb, sing. present, non-3d ('are')
VBG  | verb, gerund/present participle ('doing')
PRP  | personal pronoun ('I', 'he', 'she')
WRB  | wh-adverb ('where', 'how', 'when')

V. BLOOM'S TAXONOMY

There are three levels of human cognition: thinking, learning and understanding. Bloom's Taxonomy is a classification system to define and distinguish the levels of knowledge acquisition, and thus acts as a guide for developing assessments and questioning strategies. The system put forward in this paper uses these categories to evaluate students in a precise way. Every question is categorized, as per the 6 levels defined by the revised Bloom's Taxonomy [8], based on the example question cues and stems shown below in Table 2.


TABLE 2. BLOOM'S TAXONOMY (FROM LOWER TO HIGHER ORDER THINKING SKILLS), QUESTION CUES AND QUESTION STEMS

Category | Question Cues | Question Stems
Knowledge (factual recall, remembrance of major dates, events, etc.) | define, who, when, where, quote, name, identify, label | Who wrote…? When did…? Who said…? Where did…? Who are the…?
Comprehension (understanding, compare, interpret) | differentiate, distinguish, describe, summarize, discuss, predict, list, contrast | What is the difference between…? What is the summary of…? What is the predicted outcome of…? What is the sequence of…?
Application (visualize application in real life, solve problems using methods or theories) | demonstrate, calculate, solve, illustrate, examine, test, classify | How to solve…? What is the classification of…? How to examine…? Demonstrate the process of….
Analysis (identification, pattern recognition, analysis) | analyse, explain, classify, connect, infer, probe | What proves that…? How is this similar/different to…? What is the problem with…? Why did …precede/follow…?
Evaluation (choose, verify evidence, assess theories) | assess, rank, grade, support, recognize, conclude, select, measure, convince | How effective is…? What would you choose…? How would you rank/grade…? What does the argument support…?
Creativity (independent creative thinking, shift perspective, innovate) | design, innovate, hypothesise, conceive, craft, compose, invent | Can you imagine how…? How would you invent…? How would you respond…? What design would you make for…?
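The cue-word matching that Table 2 describes can be sketched as a simple lookup. This is an illustrative fragment only: the cue sets below are abbreviated from Table 2, and the function name and first-match policy are our own assumptions, not the system's actual implementation.

```python
# Abbreviated cue words per level, condensed from Table 2 (Knowledge -> Creativity).
BLOOM_CUES = {
    "Knowledge":     {"define", "who", "when", "where", "name", "label"},
    "Comprehension": {"describe", "summarize", "predict", "contrast"},
    "Application":   {"demonstrate", "calculate", "solve", "illustrate"},
    "Analysis":      {"analyse", "explain", "infer", "why"},
    "Evaluation":    {"assess", "rank", "grade", "conclude"},
    "Creativity":    {"design", "innovate", "compose", "invent"},
}

def bloom_level(question):
    """Return the first taxonomy level whose cue words appear in the question."""
    tokens = {w.strip("?.,").lower() for w in question.split()}
    for level, cues in BLOOM_CUES.items():
        if tokens & cues:
            return level
    return "Unclassified"

print(bloom_level("Why did he went bankrupt?"))  # -> Analysis (the 'why' cue)
print(bloom_level("Who wrote Hamlet?"))          # -> Knowledge (the 'who' cue)
```

In practice the paper also uses POS tags of the question tokens, so matching on surface cue words alone, as here, is a deliberate simplification.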

VI. PROPOSED APPROACH

The main components of the proposed system are sentence selection, question formation, answer extraction, identification of Bloom's Taxonomy, and question paper generation. The details of the system flowchart depicted in Fig. 2 are discussed in the sections below.

Fig. 2. Flowchart of Proposed system
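The core of the flow in Fig. 2 can be caricatured in a few lines of Python. This is a deliberately simplified sketch: the marker-to-wh mapping and the 'did' auxiliary rule are condensed from the tables and rules discussed in the sections that follow, and every name here is our own, not the system's code.

```python
# Condensed discourse-marker -> wh-word mapping (cf. Table 3 below).
MARKER_TO_WH = {"because": "Why", "since": "Why", "as a result": "Why"}

def split_on_marker(sentence):
    """Split a sentence into (wh-word, question part, answer part) at the first known marker."""
    for marker, wh in MARKER_TO_WH.items():
        if f" {marker} " in sentence:
            q_part, a_part = sentence.split(f" {marker} ", 1)
            return wh, q_part.strip(" ."), a_part.strip(" .")
    return None

def make_question(wh, q_part):
    """Prefix the wh-word and a 'did' auxiliary (the no-auxiliary-verb case of Section A)."""
    return f"{wh} did {q_part}?"

wh, q_part, a_part = split_on_marker("He went bankrupt because he took too many loans.")
print(make_question(wh, q_part.lower()))  # reproduces the paper's sample output
print(a_part)                             # the answer part
```

Run on the paper's own example sentence, this prints "Why did he went bankrupt?" and "he took too many loans", mirroring the worked example in Section A.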


A. Question Formation and Question-Answer Pair Generation
A piece of text consisting of one or more paragraphs needs to be given as input by the person in charge of setting the test paper. The input provided is split into small units called tokens. For tokenization, we use the NLTK Punkt sentence tokenizer, which segments the running text into meaningful sentences. Other preliminary steps of input pre-processing include lowercasing, stemming, and lemmatization; the Lancaster Stemmer algorithm is used for stemming and the WordNet Lemmatizer for lemmatization.

The selection of potential sentences is done on the basis of discourse cues. A list of seven discourse markers is hardcoded, and the sentences containing them are filtered from the rest. Once the discourse marker in a sentence is identified, it is mapped to the type of question that can be formed. After POS tagging, the sentence is split into two parts: the question-part and the answer-part.

TABLE 3. SENTENCE SELECTION USING DISCOURSE MARKERS
Discourse connective | Sense | Q-type | Target argument
because | causal | why | arg1
since | temporal, causal | why, when | arg1
when | causal + temporal, temporal, conditional | when | arg1
although | contrast, concession | yes/no | arg1
as a result | result | why | arg2
for example | instantiation | give an example where | arg1
for instance | instantiation | give an instance where | arg1

Sentences are traversed to find auxiliary verbs. If an auxiliary verb exists, we tokenize the part of the sentence that will form the question, separate the auxiliary verb from the main verb, and form a question using the auxiliary verb. The sentence string is recombined with a single change: the auxiliary verb is replaced by a wh-word at the start, and a question mark is appended at the end. The wh-word is found by traversing the specifically tagged words in the sentence, like date/time (when), names (who), etc., and the best-suited wh-word is put at the start of the sentence. In the contrary case, when the sentence does not contain any auxiliary verb, the only possibility is the use of some form of 'do'. There can be the following combinations of nouns and verbs:

NN/NNP and VBZ => Does
NNS/NNPS (plural) and VBP => Do
NN/NNP and VBN/VBD => Did
NNS/NNPS (plural) and VBN => Did

Therefore, we tokenize and find a relevant POS tag for all the words. The stem is determined from the verb by stemming. We check whether the first index of the list obtained is a noun, pronoun, singular or plural first person, etc., and an appropriate word (do, does, or did) is designated in place of the word at index 0. The whole sentence is then combined with a question mark at the end. To find the answer for a particular question, a similar approach is followed, in which the answer-part identified for a particular sentence undergoes POS tagging and sentence re-transformation if necessary. For instance, for the sentence 'He went bankrupt because he took many loans', the question will be 'why he went bankrupt?' and the answer will be the same as the original sentence; thus, no transformation is needed. However, for yes/no questions formed on the basis of the discourse marker 'although', sentence transformation is required. Both the question and answer sentences are converted into a syntax tree, which is commonly used in NLP to form sentences and compute similarity. The tree helps the program formulate a proper question sentence and, at the same time, look for grammatical mistakes. Here we use lemmatization to find the root words, and if words have some letters missing, they can be added. The mistakes can be extra words like 'in', 'at', etc. that weren't removed during question formation, or missing words needed to make the sentence grammatically correct. Below is an example of question and answer generation from a given input sentence as per the proposed approach.

Input Sentence:
>>> He went bankrupt because he took too many loans.
Tokenization:
>>> ['He', 'went', 'bankrupt', 'because', 'he', 'took', 'too', 'many', 'loans', '.']
Lowercasing, Stemming and Lemmatization:
>>> ['he', 'went', 'bankrupt', 'because', 'he', 'took', 'too', 'many', 'loan', '.']
Preprocessed Sentence:
>>> ['he', 'went', 'bankrupt', 'because', 'he', 'took', 'too', 'many', 'loan', '.']
Discourse Marker Identified:
>>> 'because'
Map the discourse marker to a Wh-type question according to Table 3:
>>> 'why'
Target arguments according to Table 3:
>>> arg1 (question part): 'he went bankrupt'
>>> arg2 (answer part): 'he took too many loans'
POS Tagging of question part:
>>> [('he', 'PRP'), ('went', 'VBD'), ('bankrupt', 'NN')]
Identification of verb based on Noun-Verb combination:
>>> no auxiliary verb case: NNP+VBD = 'did'
Question Formation:
>>> Why did he went bankrupt?
Answer POS Tagging and Formation:


>>> [('He', 'PRP'), ('went', 'VBD'), ('bankrupt', 'RB'), ('because', 'IN'), ('he', 'PRP'), ('took', 'VBD'), ('too', 'RB'), ('many', 'JJ'), ('loans', 'NNS'), ('.', '.')]
QnA Pair Generated:
>>> [['He went bankrupt because he took too many loans.', 'Why did he went bankrupt ?']]
QnA pair with Bloom's Taxonomy level identified for Question Stem (Why did…) according to Table 2:
>>> [['He went bankrupt because he took too many loans.', 'Why did he went bankrupt ?', 'Analysis']]

Finally, the question and answer pairs generated are stored in a database with details about the course and module, along with the level of Bloom's Taxonomy to which a particular question belongs. Additionally, a professor's spreadsheet or existing question bank for a particular test can be integrated into the database: the greater the number of questions, the smaller the chance of question repetition.

B. Question Paper Generation Using Bloom's Taxonomy
The system is provided with a predefined list of question cues and question stems for each category of Bloom's Taxonomy. Using the POS tags of the tokens in a question, the appropriate level of Bloom's Taxonomy is identified for it. The examiner specifies the number of questions for each category on the UI of the system, for example 5 "Knowledge"-based questions, 3 "Application"-based questions and 2 "Analysis"-based questions. Out of the question repository, 5 random questions pertaining to the "Knowledge" category, 3 random questions pertaining to the "Application" category and 2 random questions pertaining to the "Analysis" category are then chosen to generate the question paper. The random module of Python is used for the random selection of questions stored in the database; it uses the Mersenne Twister PRNG, which has a period of 2**19937-1, ensuring an unpredictable and unbiased question paper. When the PDF of the question paper is generated, appropriate answers for the selected questions are simultaneously inserted into a separate PDF file.

VII. RESULTS

We evaluate the performance of the system by providing paragraphs with a varied number of sentences as input. The questions generated were checked and compared against questions framed with human English proficiency. Following this iterative process 10 times, the attributes of a confusion matrix (TP, TN, FP, and FN) were calculated. Using these values, the performance parameters accuracy, precision, and recall were calculated and represented graphically. Accuracy can be defined as the proximity of the given values to the true value. By analyzing the results, we can say that the system works with an accuracy of 72.9%. Precision shows the closeness of different measured values to each other, and recall depicts the fraction of relevant instances out of the total number of values retrieved. Figure 3 depicts the graphical representation of the performance measures calculated (the X-axis denotes the number of sentences and the Y-axis represents accuracy, precision and recall respectively).

TABLE 4. CALCULATION OF PERFORMANCE MEASURES
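The reported measures follow the standard confusion-matrix definitions. As a small illustrative sketch (the counts below are made up for the example, not the paper's data):

```python
def accuracy(tp, tn, fp, fn):
    # Fraction of all predictions that were correct.
    return (tp + tn) / (tp + tn + fp + fn)

def precision(tp, fp):
    # Fraction of positive predictions that were actually correct.
    return tp / (tp + fp)

def recall(tp, fn):
    # Fraction of actual positives that were retrieved.
    return tp / (tp + fn)

# Illustrative counts only; the paper's per-run values are not reproduced here.
tp, tn, fp, fn = 8, 6, 3, 3
print(round(accuracy(tp, tn, fp, fn), 3))  # 0.7
print(round(precision(tp, fp), 3))         # 0.727
print(round(recall(tp, fn), 3))            # 0.727
```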

Fig. 3. Graph of Accuracy, Precision and Recall calculated for the proposed system

VIII. CONCLUSION

Many processes in the educational domain are carried out manually. The main aim of this paper is to reduce the gap between manpower and technology by automating the task of question paper generation. Various approaches and methodologies adopted by existing papers were studied and analyzed to propose an AQGS system using NLTK in the Python language. The system accepts text passages as input that are subjected to tokenization, lemmatization and stemming for pre-processing. Potential sentences are selected from these processed phrases with the help of discourse markers and undergo syntactic analysis using POS tagging and


semantic analysis using NER. Grammatically sound questions are formed using NER and a syntax tree, and are stored in the database after mapping each one to an appropriate level of Bloom's Taxonomy. The test paper is generated by random selection of questions for a specific category of the taxonomy. Future work on the system includes increasing its accuracy by enhancing question framing. Questions other than wh-questions (like true/false, MCQs, etc.) can be incorporated, and an answer evaluation module can be integrated to evaluate and score the test answers submitted by students by calculating semantic similarity with the correct answer. Out of the numerous papers on approaches for question generation, this paper focuses on the implementation of an AQGS system in Python to contribute to automated, quick, unbiased question paper generation.

ACKNOWLEDGMENT

We would like to thank our project guide Prof. Ravindra Divekar and the Project Committee for their indispensable support, suggestions and timely guidance.

REFERENCES

[1]. D. R. CH and S. K. Saha, "Automatic Multiple Choice Question Generation From Text: A Survey," IEEE Transactions on Learning Technologies, vol. 13, no. 1, pp. 14-25, Jan.-March 2020, doi: 10.1109/TLT.2018.2889100.
[2]. A. Narendra, M. Agarwal and R. Shah, "Automatic Cloze-Questions Generation," RANLP, 2013.
[3]. M. Agarwal, R. Shah and P. Mannem, "Automatic question generation using discourse cues," 2011, pp. 1-9.
[4]. I. Eldesoky, "Semantic Question Generation Using Artificial Immunity," I.J. Modern Education and Computer Science, vol. 7, pp. 1-8, 2015, doi: 10.5815/ijmecs.2015.01.01.
[5]. L. R. Pillai, V. G. and D. Gupta, "A Combined Approach Using Semantic Role Labelling and Word Sense Disambiguation for Question Generation and Answer Extraction," 2018 Second International Conference on Advances in Electronics, Computers and Communications (ICAECC), Bangalore, India, 2018, pp. 1-6, doi: 10.1109/ICAECC.2018.8479468.
[6]. Deokate Harshada G., Jogdand Prasad P., Satpute Priyanka S. and Shaikh Sameer B., "Automatic Question Generation from Given Paragraph," IJSRD - International Journal for Scientific Research & Development, vol. 7, no. 3, 2019, ISSN (online): 2321-0613.
[7]. Kalpana B. Khandale, Ajitkumar Pundage and C. Namrata Mahender, "Similarities in Words Using Different POS Taggers," IOSR Journal of Computer Engineering (IOSR-JCE), e-ISSN: 2278-0661, p-ISSN: 2278-8727, pp. 51-55.
[8]. L. W. Anderson and D. R. Krathwohl, A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom's Taxonomy of Educational Objectives. New York, NY: Longman, 2001.

