Exp 5
Applying Deep Learning Models in the Field of Natural Language Processing
AIM : To apply Deep Learning models in the field of Natural Language Processing.
Theory : Natural Language Processing (NLP) is a field of artificial intelligence (AI) that focuses on the
interaction between computers and human (natural) languages. It enables computers to understand,
interpret, and generate human language in a meaningful way. Common NLP techniques include:
Tokenization: Splitting text into smaller chunks (words or sentences).
Stemming: Reducing words to their root form.
Lemmatization: Reducing words to their base or dictionary form.
Stopwords Removal: Removing common, less meaningful words.
POS Tagging: Identifying the part of speech for each word in a sentence.
Named Entity Recognition: Identifying entities like names, locations, etc.
Program :
import nltk
from nltk.tokenize import word_tokenize, sent_tokenize
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer, WordNetLemmatizer
from nltk import pos_tag
nltk.download('punkt')
nltk.download('punkt_tab')  # required by newer NLTK releases
nltk.download('stopwords')
nltk.download('wordnet')
nltk.download('averaged_perceptron_tagger')
nltk.download('averaged_perceptron_tagger_eng')  # required by newer NLTK releases
text = """
Natural Language Processing (NLP) is a subfield of Artificial
Intelligence that focuses on the interaction between computers and human
language.
It enables machines to understand, interpret, and respond to text or
speech inputs.
Applications of NLP include language translation, sentiment analysis,
chatbots, and text summarization.
For example, virtual assistants like Siri and Alexa rely on NLP to
process and respond to user queries.
The field combines computational linguistics with machine learning
techniques to create models that can analyze vast amounts of data
efficiently.
"""
print("\n=== Tokenization ===")
word_tokens = word_tokenize(text)
sentence_tokens = sent_tokenize(text)
print("Word Tokens:", word_tokens)
print("Sentence Tokens:", sentence_tokens)
print("\n=== Stopword Removal ===")
stop_words = set(stopwords.words('english'))
filtered_words = [word for word in word_tokens if word.lower() not in stop_words]
print("Filtered Words:", filtered_words)
print("\n=== Stemming ===")
stemmer = PorterStemmer()
stemmed_words = [stemmer.stem(word) for word in filtered_words]
print("Stemmed Words:", stemmed_words)
print("\n=== Lemmatization ===")
lemmatizer = WordNetLemmatizer()
lemmatized_words = [lemmatizer.lemmatize(word) for word in filtered_words]
print("Lemmatized Words:", lemmatized_words)
print("\n=== Part-of-Speech (POS) Tagging ===")
pos_tags = pos_tag(filtered_words)
print("POS Tags:", pos_tags)
RESULT: Applying Deep Learning models in the field of Natural Language Processing was executed
successfully and the output was verified.