
2024 International Conference on Communication, Computer Sciences and Engineering (IC3SE)

A Comprehensive Survey of Abstractive Text Summarization Techniques

Nikesh Kumar, Rayirth Soni, Tarun Adhikari, Suresh Kumar
Department of Computer Science Engineering, Netaji Subhas University of Technology, Delhi, India
[email protected], [email protected], [email protected], [email protected]

979-8-3503-6684-6/24/$31.00 ©2024 IEEE | DOI: 10.1109/IC3SE62002.2024.10592874

Abstract: Abstractive text summarization [16] leverages advanced machine learning and deep neural networks to generate entirely new text that encapsulates the essence of the input text. It differs from extractive summarization [20], which mechanically concatenates text segments. Abstractive summarization is needed to address information overload and enhance content creation. However, it faces challenges such as maintaining consistency, capturing subtle nuances, and striking a balance between conciseness and comprehensiveness. Various approaches to abstractive summarization are presented, including the seq2seq model [17], attention mechanism [18], pointer-generator network [10], and copy mechanism. These approaches have different strengths and weaknesses depending on the application. Abstractive summarization finds practical applications in news summarization, research paper summarization, legal document summarization, customer support, and e-commerce product descriptions.

Keywords- Natural Language Processing, Abstractive Text Summarisation, Sequence-to-Sequence Model, Attention Mechanism, Deep Learning, Extractive Summarisation, Word Sense Disambiguation (WSD), Recurrent Neural Network (RNN), Encoder-Decoder, Term Frequency-Inverse Document Frequency (TF-IDF), PageRank, Latent Dirichlet Allocation (LDA), Hidden Markov Models, Pointer-Generator Networks, Transformer, ROUGE, BLEU, METEOR, BERT, Consensus-Based Image Description Evaluation, Generative Pre-trained Transformer (GPT)

I. INTRODUCTION

Abstractive Text Summarization [16] generates new and coherent summaries using advanced machine learning, unlike extractive summarization [20], which concatenates text segments. Its architecture often uses a sequence-to-sequence model [17], with the encoder comprehending the input text and the decoder generating a summary. Abstractive summarization has advanced with deep learning [19] and large text datasets. It is used in generating articles, research papers, and marketing content, and finds applications in customer service chatbots, email summarization, and social media content generation.

II. PROBLEM STATEMENT

In the domain of Abstractive Text Summarization [16], this research delves into the potential of word sense disambiguation (WSD [21]) to elevate the quality of summaries generated by Sequence-to-sequence [17] (Seq2Seq) models. The core areas of investigation focus on:

A. Investigating the Impact of WSD [21] on Factual Accuracy and Coherence

Understanding WSD [21]'s Role in Contextual Word Meaning Selection: The research aims to enhance the model's accuracy in selecting relevant word meanings based on context, thereby reducing factual errors in generated summaries.

Improving Model Accuracy in Selecting Relevant Word Meanings: Through WSD [21], the study seeks to improve the model's discernment in selecting the most contextually relevant word meanings, leading to more accurate summaries.

B. Evaluating WSD [21]'s Role in Promoting Semantic Diversity

Disambiguating Words for Enhanced Semantic Expression: The study explores how WSD [21] can foster semantic diversity within summaries by disambiguating words with multiple meanings, encouraging the exploration of a wider range of expressions and concepts.

Encouraging Exploration of Diverse Expressions and Concepts: By promoting semantic diversity, WSD [21] encourages the model to explore a broader range of expressions and concepts, leading to more varied and comprehensive summaries.

C. Analyzing Computational Trade-offs of Incorporating WSD [21]

Balancing Complexity and Efficiency in WSD [21] Integration: The research critically analyzes the computational trade-offs associated with integrating WSD [21] mechanisms into Seq2Seq models, ensuring a balance between complexity and computational efficiency for practical deployment.

Addressing Computational Overhead in Seq2Seq Models: The study focuses on addressing computational overhead in Seq2Seq models when incorporating WSD [21], aiming to maintain efficiency without compromising the quality of generated summaries.

D. Measuring Success Through Multifaceted Evaluation Metrics

ROUGE [29] Scores: Assessing Fluency and Accuracy in Summaries: The research includes ROUGE [29] scores as part of a comprehensive evaluation framework, focusing on assessing the fluency and accuracy of generated summaries.


Semantic Similarity Metrics: MoverScore, BERTScore, and Beyond: The study incorporates semantic similarity metrics such as MoverScore and BERTScore to evaluate the semantic coherence and similarity between generated summaries and reference texts.
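
As an illustration, a generated summary can be scored against a reference with the third-party bert-score package. The following minimal sketch assumes the package (pip install bert-score) and a pretrained English model are available; the example strings are placeholders, not data from this study.

# Minimal BERTScore sketch, assuming the `bert-score` package is installed.
from bert_score import score

candidates = ["The model generates a concise summary of the article."]
references = ["The system produces a short summary of the source article."]

# P, R, F1 are tensors with one entry per candidate/reference pair.
P, R, F1 = score(candidates, references, lang="en", verbose=False)
print(f"BERTScore F1: {F1.mean().item():.3f}")
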
III. MOTIVATION

Abstractive Text Summarization [16] offers superior summaries compared to extractive approaches. Abstractive summarization involves generating a concise and coherent summary that captures the main points of a source document by understanding and reformulating its content. Unlike extractive summarization [20], which simply extracts salient phrases or sentences from the original text, abstractive summarization leverages natural language processing [15] techniques to produce summaries that are more fluent, informative, and relevant to the user's query.

Abstractive Text Summarization [16] can produce more natural and coherent outputs that are not confined to mirroring the original text. By utilizing deep learning [19] models trained on large text datasets, abstractive summarization systems can learn the underlying structure and semantics of language, enabling them to generate summaries that are both informative and engaging. These summaries often exhibit a higher level of coherence and readability, making them easier for users to understand and digest.

Abstractive Text Summarization [16] also addresses vocabulary limitations by generating new words and phrases, expanding the scope of summarization. Extractive summarization [20] methods are constrained by the vocabulary present in the original text, which can limit their ability to convey complex or nuanced information. In contrast, abstractive summarization models have the capacity to generate new words and phrases that are not explicitly mentioned in the source document. This allows them to express ideas and concepts in a more concise and effective manner, enhancing the overall quality and completeness of the summary.

IV. LITERATURE SURVEY

A. Early Work on Text Summarization

In the field of text summarization, a significant leap was achieved by Sutskever et al. (2014) [7], who introduced sequence-to-sequence [17] learning, treating summarization as a translation task in which the model learns to condense a sequence of words into a shorter summary. Further advancements came from Rush et al. (2015) [11], who focused on applying neural network architectures with attention mechanisms [18] to abstractive summarization, demonstrating the effectiveness of capturing sentence relationships for generating meaningful summaries.

B. Exploring Attention Mechanisms [18]

Vaswani et al. (2017) [8] delved into attention mechanisms [18] within neural networks, specifically focusing on their application to text summarization. By integrating attention mechanisms [18] into their models, they enabled prioritization of important information during the summarization process. This approach allowed the model to dynamically allocate attention to relevant parts of the input text, leading to more accurate and informative summaries. Around the same time, See et al. (2017) [10] introduced the pointer-generator network [10], a sequence-to-sequence [17] approach that enables the model to generate new words while also copying essential phrases directly from the source text, leading to more diverse and informative summaries.

C. Advancements with Transformer [28] Models

Pilault et al. (2020) [5] investigated the use of transformer [28] models for both extractive and abstractive summarization, comparing the performance of the two approaches. Building on this exploration, Abdel-Salam & Rafea (2022) [1] introduced SqueezeBERTSumm, a novel model that harnesses the BERT [32] architecture specifically for extractive summarization [20]. This model places significant emphasis on automating the selection of the most crucial sentences, aiming to produce concise and informative summaries. By leveraging BERT's [32] capabilities, SqueezeBERTSumm offers a promising avenue for enhancing the efficiency and effectiveness of extractive summarization [20] techniques, contributing to advancements in automated text summarization systems.

D. Attention Mechanisms [18] and Abstractive Summarization

Sanjabi (2018) [3] explored the application of attention mechanisms [18] in neural networks for abstractive summarization, enabling models to focus on specific parts of the source text to produce informative summaries. Concurrently, Khan et al. (2018) [13] investigated abstractive summarization using semantic graphs to represent relationships between words and concepts within the text.

E. Recent Trends and Future Directions

Dedhia et al. (2020) [2] delved into abstractive summarization techniques, where the primary objective is to condense the meaning of the source text while potentially introducing new information in the summary. By focusing on abstractive summarization, researchers aimed to develop methods capable of generating concise yet informative summaries that capture the essence of the original text. Around the same time, Lewis et al. (2020) [9] explored retrieval-augmented generation techniques for knowledge-intensive NLP tasks, contributing to ongoing advancements in the field. Further advancements came from Liu & Healey (2023) [14], who investigated how large language models such as GPT can automatically generate summaries for large document collections, leveraging their training on massive text data.

F. User Perception and Evaluation

Monsen & Rennes (2022) [6] explored how people perceive the quality and readability of summaries generated by different methods, shedding light on user preferences and evaluation criteria in summarization tasks.

V. OBJECTIVE

In this research dissertation, we undertake a scholarly inquiry into the realm of Abstractive Text Summarization [16], a fascinating and formidable frontier within the discipline of natural language processing [15]. Our primary aim is to present a comprehensive survey of this field, encompassing pivotal techniques, prominent datasets, and evaluation methodologies. By delving into the complexities of Abstractive Text Summarization [16], we seek to elucidate its theoretical foundations and practical applications.

Furthermore, we shed light on the prevailing challenges that confront Abstractive Text Summarization [16].


These challenges range from the fundamental issue of accurately capturing the essence of a document in a concise and coherent summary to the more practical concerns of computational efficiency and scalability. By examining these challenges, we hope to identify potential avenues for future research and development.

VI. METHODOLOGY

A. Early Approaches (Pre-deep learning [19] era)

In the early days of text summarization, before the advent of deep learning [19], researchers explored various non-neural approaches to automatically generate summaries. These methods sought to capture the essence of the source text while addressing challenges such as handling complex linguistic structures, recognizing domain-specific concepts, and maintaining coherence and accuracy.

1) Rule-based Methods:
Initially, rule-based systems were employed to identify important sentences or phrases for summarization. These methods relied on handcrafted rules and heuristics based on linguistic features, such as sentence length, position within the text, and presence of specific keywords.
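
A minimal sketch of such a heuristic scorer appears below; the specific features, weights, and cue words are illustrative assumptions rather than a method drawn from any surveyed system.

# Illustrative rule-based sentence scorer; features and weights are hypothetical.
def score_sentence(sentence, position, total,
                   cue_words=("important", "significant", "conclude")):
    words = sentence.lower().split()
    # Heuristic 1: earlier sentences tend to matter more (lead bias).
    position_score = 1.0 - position / max(total, 1)
    # Heuristic 2: prefer mid-length sentences over very short or long ones.
    length_score = 1.0 if 8 <= len(words) <= 30 else 0.5
    # Heuristic 3: reward presence of cue keywords.
    cue_score = sum(w.strip(".,;") in cue_words for w in words)
    return position_score + length_score + cue_score

def summarize(sentences, k=2):
    ranked = sorted(enumerate(sentences),
                    key=lambda p: score_sentence(p[1], p[0], len(sentences)),
                    reverse=True)
    chosen = sorted(ranked[:k])            # restore document order
    return " ".join(s for _, s in chosen)
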
2) Template-based Summarization:
Another early approach involved using predefined templates to extract and rearrange information from the source text. These templates consisted of slots or placeholders that were filled with relevant information from the text.

3) Word sense disambiguation [21] (WSD):
Early systems occasionally employed basic WSD [21] techniques to resolve ambiguous terms in the text. WSD [21] aimed to determine the intended meaning of words with multiple senses, improving the coherence and accuracy of summaries.
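
One classic technique of this kind is the simplified Lesk algorithm, which selects the dictionary sense whose gloss overlaps most with the word's context. A minimal sketch using NLTK's implementation (assuming the wordnet and punkt resources have been downloaded; not a component of the surveyed systems):

# Simplified Lesk WSD via NLTK; requires nltk.download('wordnet') and
# nltk.download('punkt') beforehand.
from nltk.tokenize import word_tokenize
from nltk.wsd import lesk

sentence = "The bank approved the loan after reviewing the account."
tokens = word_tokenize(sentence)

# lesk() picks the WordNet sense whose gloss overlaps most with the context.
sense = lesk(tokens, "bank", pos="n")
print(sense, "-", sense.definition() if sense else "no sense found")
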
4) Sentence Extraction:
Instead of generating new sentences, some methods focused on selecting and extracting existing sentences from the source text that were deemed most relevant or representative of its content.

5) Keyword-based Summarization:
Keyword-based summarization relied on identifying keywords or phrases that frequently appeared in the source text. These keywords were assumed to represent the most significant concepts or topics discussed in the text. Techniques such as term frequency-inverse document frequency [24] (TF-IDF) were used to determine the importance of words or phrases. Summaries were then generated by extracting sentences containing these keywords or phrases.
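
A compact sketch of this idea, assuming scikit-learn is available; sentence splitting and the number of selected sentences are simplified for illustration.

# TF-IDF-based extractive sketch: score each sentence by the sum of its
# terms' TF-IDF weights and keep the top-scoring sentences in document order.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def tfidf_summarize(sentences, k=2):
    tfidf = TfidfVectorizer(stop_words="english")
    matrix = tfidf.fit_transform(sentences)        # one row per sentence
    scores = np.asarray(matrix.sum(axis=1)).ravel()
    top = sorted(np.argsort(scores)[-k:])          # keep document order
    return " ".join(sentences[i] for i in top)
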
6) Statistical Approaches:
Statistical machine translation (SMT) techniques were adapted for text summarization. SMT models treated summarization as a translation task, where the source text is translated into a shorter summary.

7) Graph-based Algorithms:
Graph-based ranking algorithms, such as PageRank [25], were applied to identify important sentences or phrases based on their connectivity within the text graph. The underlying assumption was that sentences with higher connectivity to other important sentences were more likely to be informative and representative of the text's content.
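
The sketch below illustrates this TextRank-style idea with a TF-IDF similarity graph ranked by PageRank [25]; it assumes networkx and scikit-learn are installed and is a simplification of published variants.

# TextRank-style sketch: rank sentences by PageRank over a similarity graph.
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def textrank_summarize(sentences, k=2):
    vectors = TfidfVectorizer().fit_transform(sentences)
    sim = cosine_similarity(vectors)      # pairwise sentence similarity
    graph = nx.from_numpy_array(sim)      # weighted similarity graph
    ranks = nx.pagerank(graph)            # connectivity-based importance
    top = sorted(sorted(ranks, key=ranks.get, reverse=True)[:k])
    return " ".join(sentences[i] for i in top)
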
8) Topic Modelling:
Topic modelling techniques, such as Latent Dirichlet Allocation [26] (LDA), were utilized to identify topics present in the text. Summaries were then generated based on representative sentences from these topics. Topic modelling allowed for the extraction of latent semantic structures and the organization of the text into coherent themes.
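
A brief sketch of this pipeline with scikit-learn's LDA implementation; the corpus and topic count are toy values chosen for illustration.

# Topic modelling with LDA [26] over sentences; toy corpus for illustration.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

sentences = [
    "The economy grew faster than expected this quarter.",
    "Stock markets rallied on the news of interest rate cuts.",
    "The team won the championship after a dramatic final.",
    "Fans celebrated the victory across the city.",
]
counts = CountVectorizer(stop_words="english").fit_transform(sentences)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

# doc_topics[i] gives sentence i's topic mixture; a summarizer can then
# pick the most representative sentence per topic.
doc_topics = lda.transform(counts)
print(doc_topics.round(2))
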

9) Clustering Methods:
Clustering methods were employed to group similar sentences or phrases together based on their semantic similarity. Sentences from each cluster were then selected to cover a diverse range of topics in the summary.

10) Hidden Markov Models [27] (HMMs):
Hidden Markov Models [27] (HMMs) were used to model the underlying structure of the text. HMMs represented the text as a sequence of states, with each state corresponding to a specific topic or concept.

B. Sequence-to-sequence [17] (Seq2Seq) Models

With the advent of deep learning [19], Seq2Seq [17] models, based on recurrent neural networks [22] (RNNs) or, more recently, transformers [28], revolutionized Abstractive Text Summarization [16]. These models learned to map variable-length input sequences to variable-length output sequences directly, enabling the generation of coherent and informative summaries.

1) Encoder-decoder [23] architectures:
Seq2Seq [17] models employed encoder-decoder [23] architectures, where the encoder processed the source text and the decoder generated the summary. However, early Seq2Seq [17] models faced challenges with generating coherent and informative summaries, especially for long documents.

2) Attention mechanism [18]:
Attention mechanisms [18] were introduced to address the limitations of early Seq2Seq [17] models. They allowed the model to focus on relevant parts of the input text dynamically while generating the summary, significantly improving the quality and fluency of the generated summaries.
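
The following sketch shows one simple form of this computation (dot-product attention) in PyTorch; dimensions are illustrative, and real systems typically use learned alignment functions such as those of Bahdanau or Luong.

# Dot-product attention over encoder states during decoding (PyTorch).
import torch
import torch.nn.functional as F

def attend(decoder_state, encoder_states):
    """decoder_state: (batch, hidden); encoder_states: (batch, src_len, hidden)."""
    # Alignment scores between the current decoder state and each source position.
    scores = torch.bmm(encoder_states, decoder_state.unsqueeze(2)).squeeze(2)
    weights = F.softmax(scores, dim=1)    # attention distribution over source
    # Context vector: attention-weighted sum of encoder states.
    context = torch.bmm(weights.unsqueeze(1), encoder_states).squeeze(1)
    return context, weights

# Example with toy dimensions:
enc = torch.randn(2, 7, 16)   # batch=2, source length=7, hidden=16
dec = torch.randn(2, 16)
context, weights = attend(dec, enc)
print(context.shape, weights.shape)   # torch.Size([2, 16]) torch.Size([2, 7])
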
C. Transformer [28]-Based Architectures

1) Transformer [28]:
The Transformer [28] model revolutionized natural language processing [15] tasks. It employs self-attention mechanisms [18] that allow it to capture long-range dependencies in the input sequence, leading to improved performance in text summarization. Transformers [28] have been widely adopted in state-of-the-art summarization models such as BERT [32], XLNet, and T5.
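
As a usage illustration, a pretrained transformer summarizer can be invoked through the Hugging Face transformers pipeline; the model name below is one common public checkpoint (downloaded at first use), not a recommendation from the surveyed work.

# Off-the-shelf transformer summarization via the `transformers` pipeline.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
article = ("Abstractive summarization systems generate new sentences that "
           "capture the key ideas of a source document rather than copying "
           "sentences verbatim, and transformer models currently dominate "
           "this task across news, legal, and scientific domains.")
result = summarizer(article, max_length=60, min_length=10, do_sample=False)
print(result[0]["summary_text"])
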

D. Hybrid Approaches

1) Hybrid extractive-abstractive methods:
These methods combine the strengths of extractive and abstractive approaches to generate summaries. They first identify salient sentences or phrases from the source text using extractive methods and then paraphrase and rephrase them using abstractive techniques.

2) Knowledge-Enhanced Summarization:
This approach leverages external knowledge sources, such as knowledge graphs or ontologies, to enrich the summarization process. Extractive methods identify relevant information from the source text, and abstractive techniques use the additional knowledge to expand and enhance the summary. Knowledge-enhanced summarization methods can generate summaries that are more accurate, comprehensive, and contextually aware. Some representative models include KG-Sum, KnowBERT, and K-BERT [32].

3) Pointer-generator network [10]:
A neural network architecture combining extractive and abstractive methods, featuring a pointer mechanism that copies words directly from the input text while retaining the ability to generate new words, enhancing flexibility and adaptability in summarization.
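
A schematic sketch of the core idea from See et al. [10]: the final output distribution mixes a vocabulary distribution with the attention distribution over source tokens, weighted by a learned generation probability p_gen. Tensor shapes here are illustrative only.

# Pointer-generator [10] output distribution: mix generation and copying.
import torch

def final_distribution(p_gen, vocab_dist, attn_weights, src_ids):
    """p_gen: (batch, 1); vocab_dist: (batch, V); attn_weights: (batch, src_len);
    src_ids: (batch, src_len) vocabulary ids of the source tokens."""
    gen_part = p_gen * vocab_dist                      # generate from vocabulary
    copy_part = torch.zeros_like(vocab_dist)
    # Scatter attention mass onto the source tokens' vocabulary ids.
    copy_part.scatter_add_(1, src_ids, (1.0 - p_gen) * attn_weights)
    return gen_part + copy_part                        # still sums to 1 per row

# Toy example: batch=1, vocabulary of 10, source of 4 tokens.
p = torch.tensor([[0.7]])
vocab = torch.softmax(torch.randn(1, 10), dim=1)
attn = torch.softmax(torch.randn(1, 4), dim=1)
src = torch.tensor([[2, 5, 5, 9]])
print(final_distribution(p, vocab, attn, src).sum())   # ~1.0
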
VII. EVALUATION METHODS

A. Recall-Oriented Understudy for Gisting Evaluation (ROUGE) [29]

ROUGE [29] is a widely used metric for evaluating abstractive summaries. It measures the overlap between the generated summary and the reference summaries in terms of n-grams. Specifically, ROUGE-1, ROUGE-2, and ROUGE-L are commonly used variants. ROUGE-1 considers unigrams, ROUGE-2 considers bigrams, and ROUGE-L considers the longest common subsequence of words. Higher ROUGE [29] scores indicate a greater degree of overlap between the generated summary and the reference summaries.
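
A self-contained sketch of ROUGE-N computed as n-gram recall against a single reference; the full metric [29] also defines precision and F-score variants and supports multiple references.

# ROUGE-N recall: fraction of reference n-grams found in the candidate.
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n(candidate, reference, n=1):
    cand, ref = ngrams(candidate.split(), n), ngrams(reference.split(), n)
    overlap = sum((cand & ref).values())   # clipped n-gram matches
    return overlap / max(sum(ref.values()), 1)

print(rouge_n("the cat sat on the mat", "the cat lay on the mat", n=2))  # 0.6
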
B. Bilingual Evaluation Understudy (BLEU) [30]

BLEU [30] assesses the similarity between the generated summary and the reference summaries at the word level. It calculates modified n-gram precision between the generated summary and the reference summaries and then combines the precision values for different n-gram orders using a geometric mean, scaled by a brevity penalty that discourages overly short outputs. Higher BLEU [30] scores indicate a higher degree of word-level similarity between the generated summary and the reference summaries.
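
A compact sketch of BLEU's core computation for a single reference; production implementations add smoothing and multi-reference support.

# BLEU sketch: geometric mean of modified n-gram precisions x brevity penalty.
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    cand_toks, ref_toks = candidate.split(), reference.split()
    log_prec = 0.0
    for n in range(1, max_n + 1):
        cand, ref = ngrams(cand_toks, n), ngrams(ref_toks, n)
        matched = sum((cand & ref).values())           # clipped counts
        total = max(sum(cand.values()), 1)
        log_prec += math.log(max(matched, 1e-9) / total) / max_n
    # Brevity penalty discourages candidates much shorter than the reference.
    bp = min(1.0, math.exp(1 - len(ref_toks) / max(len(cand_toks), 1)))
    return bp * math.exp(log_prec)

print(round(bleu("the cat sat on the mat", "the cat sat on the mat"), 3))  # 1.0
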
C. Metric for Evaluation of Translation with Explicit Ordering (METEOR) [31]

METEOR [31] is an evaluation metric that considers both word-level similarity and semantic similarity between the generated summary and the reference summaries. METEOR [31] calculates the unigram precision and recall between the generated summary and the reference summaries and then combines these values with a semantic similarity score. The semantic similarity score is computed using WordNet and paraphrasing databases.

D. Consensus-based Image Description Evaluation (CIDEr) [33]

A metric originally developed for image captioning that measures how closely a generated text matches the consensus of multiple human-written references, using TF-IDF-weighted n-gram similarity.

VIII. FUTURE DIRECTIONS

Advancements in abstractive summarization research span several critical domains. Firstly, the expansion of datasets across diverse fields, including specialized areas such as legal and medical texts [28], stands as a pivotal strategy to alleviate data sparsity issues, thereby augmenting model generalization [28]. Secondly, the exploration of innovative model architectures, exemplified by transformer-based models like BERT [32] tailored for summarization tasks, offers a promising avenue for enhancing both the quality and scalability of summarization systems [28]. Furthermore, extending the capabilities of abstractive summarization to encompass multiple documents [28] and domain-specific content [28] unveils intriguing research prospects. The integration of human feedback [28] emerges as a cornerstone for enhancing the reliability of summarization systems.

In the realm of querying RDF and OWL data sources via SPARQL [34], the adoption of advanced query optimization techniques coupled with machine learning integration holds potential for enhancing operational efficiency. Turning to security concerns within the Semantic E-Commerce Web [35], avenues for research include the exploration of novel cryptographic algorithms and blockchain integration [35]. Additionally, the convergence of machine learning and ontology-based methodologies presents an opportunity for refining semantic document indexing practices [36], while the establishment of comprehensive security frameworks remains imperative for fostering trust in the Semantic Web [37]. Further research directions encompass the enhancement of ontology-based semantic retrieval systems [38] and the exploration of security enforcement mechanisms using PKI [39]. Bayesian rough set models [40] and automated threat detection strategies for Semantic Web services [41] warrant focused attention. Moreover, the development of rule-based methodologies for link-context extraction [42] and the refinement of uncertainty analysis techniques in ontology-based knowledge representation [43] are paramount. The evaluation of semantic web-based information retrieval models [44] and the formulation of robust countermeasures against semantic web attacks [45] are deemed indispensable. Additionally, investigating cancellable biometrics [46] and advancing question-answering systems such as JOSN [47] constitute essential research endeavours. Lastly, the exploration of blockchain-based e-voting platforms [12] and the utilization of Wireshark for intrusion detection [4] within the semantic web landscape are imperative for bolstering security measures.

REFERENCES

[1] Abdel-Salam, S., & Rafea, A. (2022). Performance study on extractive text summarization using BERT models. Information, 13(2), 67.
[2] Dedhia, P. R., Pachgade, H. P., Malani, A. P., Raul, N., & Naik, M. (2020, February). Study on abstractive text summarization techniques. In 2020 International Conference on Emerging Trends in Information Technology and Engineering (ic-ETITE), 1-8. IEEE.
[3] Sanjabi, N. (2018). Abstractive text summarization with attention-based mechanism (Master's thesis, Universitat Politècnica de Catalunya).
[4] Singh, S., & Kumar, S. (2020). Capability of Wireshark as intrusion detection system. International Journal of Recent Technology and Engineering (IJRTE), 8(5), 4574-4578.
[5] Pilault, J., Li, R., Subramanian, S., & Pal, C. (2020, November). On extractive and abstractive neural document summarization with transformer language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 9308-9319.
[6] Monsen, J., & Rennes, E. (2022, June). Perceived text quality and readability in extractive and abstractive summaries. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, 305-312.
[7] Sutskever, I., Vinyals, O., & Le, Q. V. (2014). Sequence to sequence learning with neural networks. Advances in Neural Information Processing Systems, 27.
[8] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., & Polosukhin, I. (2017). Attention is all you need. Advances in Neural Information Processing Systems, 30.
[9] Lewis, P., Perez, E., Piktus, A., Petroni, F., Karpukhin, V., Goyal, N., ... & Kiela, D. (2020). Retrieval-augmented generation for knowledge-intensive NLP tasks. Advances in Neural Information Processing Systems, 33, 9459-9474.
[10] See, A., Liu, P. J., & Manning, C. D. (2017). Get to the point: Summarization with pointer-generator networks. arXiv preprint arXiv:1704.04368.
[11] Rush, A. M., Chopra, S., & Weston, J. (2015). A neural attention model for abstractive sentence summarization. arXiv preprint arXiv:1509.00685.
[12] Malhotra, M., Kumar, A., Kumar, S., & Yadav, V. (2022). Untangling e-voting platform for secure and enhanced voting using blockchain technology. In Transforming Management with AI, Big-Data, and IoT, 51-72. Cham: Springer International Publishing.
[13] Khan, A., Salim, N., Farman, H., Khan, M., Jan, B., Ahmad, A., ... & Paul, A. (2018). Abstractive text summarization based on improved semantic graph approach. International Journal of Parallel Programming, 46, 992-1016.
[14] Liu, S., & Healey, C. G. (2023). Abstractive summarization of large document collections using GPT. arXiv preprint arXiv:2310.05690.
[15] Chowdhary, K., & Chowdhary, K. R. (2020). Natural language processing. Fundamentals of Artificial Intelligence, 603-649.
[16] Kumar, N., & Kumar, S. (2024, March). Enhancing abstractive text summarisation using Seq2Seq models: A context-aware approach. In 2024 International Conference on Automation and Computation (AUTOCOM), 490-496. IEEE.
[17] Neubig, G. (2017). Neural machine translation and sequence-to-sequence models: A tutorial. arXiv preprint arXiv:1703.01619.
[18] Niu, Z., Zhong, G., & Yu, H. (2021). A review on the attention mechanism of deep learning. Neurocomputing, 452, 48-62.
[19] LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436-444.
[20] Hachey, B., & Grover, C. (2006). Extractive summarisation of legal texts. Artificial Intelligence and Law, 14, 305-345.
[21] Navigli, R. (2009). Word sense disambiguation: A survey. ACM Computing Surveys (CSUR), 41(2), 1-69.
[22] Grossberg, S. (2013). Recurrent neural networks. Scholarpedia, 8(2), 1888.
[23] Cho, K., Van Merriënboer, B., Bahdanau, D., & Bengio, Y. (2014). On the properties of neural machine translation: Encoder-decoder approaches. arXiv preprint arXiv:1409.1259.
[24] Ramos, J. (2003, December). Using TF-IDF to determine word relevance in document queries. In Proceedings of the First Instructional Conference on Machine Learning, 242(1), 29-48.
[25] Rogers, I. (2002). The Google PageRank algorithm and how it works.
[26] Blei, D. M., Ng, A. Y., & Jordan, M. I. (2003). Latent Dirichlet allocation. Journal of Machine Learning Research, 3(Jan), 993-1022.
[27] Rabiner, L., & Juang, B. (1986). An introduction to hidden Markov models. IEEE ASSP Magazine, 3(1), 4-16.
[28] Han, K., Xiao, A., Wu, E., Guo, J., Xu, C., & Wang, Y. (2021). Transformer in transformer. Advances in Neural Information Processing Systems, 34, 15908-15919.
[29] Lin, C. Y. (2004, July). ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, 74-81.
[30] Papineni, K., Roukos, S., Ward, T., & Zhu, W. J. (2002, July). BLEU: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, 311-318.
[31] Banerjee, S., & Lavie, A. (2005, June). METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, 65-72.
[32] Devlin, J., Chang, M. W., Lee, K., & Toutanova, K. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
[33] Vedantam, R., Zitnick, C. L., & Parikh, D. (2015). Consensus-based image description evaluation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 4566-4575.
[34] Kumar, N., & Kumar, S. (2013, July). Querying RDF and OWL data source using SPARQL. In 2013 Fourth International Conference on Computing, Communications and Networking Technologies (ICCCNT), 1-6. IEEE.
[35] Dwivedi, A., Dwivedi, A., Kumar, S., Pandey, S. K., & Dabra, P. (2013). A cryptographic algorithm analysis for security threats of Semantic E-Commerce Web (SECW) for electronic payment transaction system. In Advances in Computing and Information Technology: Proceedings of the Second International Conference on Advances in Computing and Information Technology (ACITY), July 13-15, 2012, Chennai, India, Volume 3, 367-379. Springer Berlin Heidelberg.
[36] Sharma, A., & Kumar, S. (2023). Machine learning and ontology-based novel semantic document indexing for information retrieval. Computers & Industrial Engineering, 176, 108940.
[37] Dwivedi, A., Kumar, S., Dwivedi, A., & Singh, M. (2011). Current security considerations for issues and challenges of trustworthy semantic web. International Journal of Advanced Networking and Applications, 3(1), 978.
[38] Sharma, A., & Kumar, S. (2023). Ontology-based semantic retrieval of documents using Word2vec model. Data & Knowledge Engineering, 144, 102110.
[39] Kumar, S., Prajapati, R. K., Singh, M., & De, A. (2010, October). Security enforcement using PKI in Semantic Web. In 2010 International Conference on Computer Information Systems and Industrial Management Applications (CISIM), 392-397. IEEE.
[40] Sharma, A., & Kumar, S. (2020). Bayesian rough set based information retrieval. Journal of Statistics and Management Systems, 23(7), 1147-1158.
[41] Kumar, M. S., Prajapati, M. R. K., Singh, M., & De, A. (2010). Realization of threats and countermeasure in Semantic Web services. International Journal of Computer Theory and Engineering, 2(6), 919.
[42] Kumar, S., Kumar, N., Singh, M., & De, A. (2013). A rule-based approach for extraction of link-context from anchor-text structure. In Intelligent Informatics: Proceedings of the International Symposium on Intelligent Informatics ISI'12, August 4-5, 2012, Chennai, India, 261-271. Springer Berlin Heidelberg.
[43] Anand, S. K., & Kumar, S. (2022). Uncertainty analysis in ontology-based knowledge representation. New Generation Computing, 40(1), 339-376.
[44] Sharma, A., & Kumar, S. (2020). Semantic web-based information retrieval models: A systematic survey. In Data Science and Analytics: 5th International Conference on Recent Developments in Science, Engineering and Technology, REDSET 2019, Gurugram, India, November 15-16, 2019, Revised Selected Papers, Part II, 204-222. Springer Singapore.
[45] Sumit, K., & Suresh, K. (2014). Semantic Web attacks and countermeasures. In 2014 International Conference on Engineering and Technology Research (ICAETR).
[46] Akhilesh, D., Suresh, K., Abhishek, D., & Manjeet, S. (2011). Cancellable biometrics for security and privacy enforcement on semantic web. International Journal of Computer Applications (IJCA), 21(8), 1-8.
[47] Garg, S., & Kumar, S. (2016, August). JOSN: JAVA oriented question-answering system combining semantic web and natural language processing techniques. In 2016 1st India International Conference on Information Processing (IICIP), 1-6. IEEE.
