Removing stop words with NLTK in Python
Natural language processing tasks often involve filtering out commonly occurring words that add little or no semantic value to text analysis. These words, known as stopwords, include articles, prepositions and pronouns such as "the", "and", "is" and "in". While they seem insignificant, proper stopword handling can dramatically impact the performance and accuracy of NLP applications.
Stop-words Impact
Consider the sentence: "The quick brown fox jumps over the lazy dog"
- Stopwords: "the" and "over"
- Content words: "quick", "brown", "fox", "jumps", "lazy", "dog"
Stopword removal becomes particularly important when dealing with large text corpora, where computational efficiency matters. Processing every word, including high-frequency stopwords, consumes unnecessary resources and can skew analysis results.
When to Remove Stopwords
The decision to remove stopwords depends heavily on the specific NLP task at hand:
Tasks that benefit from stopword removal:
- Text classification and sentiment analysis
- Information retrieval and search engines
- Topic modelling and clustering
- Keyword extraction
Tasks that require preserving stopwords:
- Machine translation (maintains grammatical structure)
- Text summarization (preserves sentence coherence)
- Question-answering systems (syntactic relationships matter)
- Grammar checking and parsing
Language modeling presents an interesting middle ground where the decision depends on the specific application requirements and available computational resources.
Categories of Stopwords
Understanding different types of stopwords helps in making informed decisions:
- Standard Stopwords: Common function words like articles ("a", "the"), conjunctions ("and", "but") and prepositions ("in", "on")
- Domain-Specific Stopwords: Context-dependent terms that appear frequently in specific fields like "patient" in medical texts
- Contextual Stopwords: Words with extremely high frequency in particular datasets
- Numerical Stopwords: Digits, punctuation marks and single characters
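As a rough illustration of combining these categories, the sketch below extends NLTK's standard English list with domain terms and filters out numerical noise. The domain terms here are hypothetical examples, not a standard list.
Python
import string
from nltk.corpus import stopwords

# Standard stopwords: NLTK's built-in English list
stop_set = set(stopwords.words('english'))

# Domain-specific stopwords (hypothetical terms for a medical corpus)
stop_set.update({"patient", "doctor", "hospital"})

def is_noise(token):
    # Numerical stopwords: digits, punctuation and single characters
    return token.isdigit() or token in string.punctuation or len(token) == 1

tokens = ["the", "patient", "was", "seen", "at", "9", ",", "recovering", "well"]
print([t for t in tokens if t not in stop_set and not is_noise(t)])
# ['seen', 'recovering', 'well']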
Implementation with NLTK
NLTK provides robust support for stopword removal in many languages (its stopwords corpus covers more than twenty). The implementation involves tokenization followed by filtering:
- Setup: Import NLTK modules and download required resources like stopwords and tokenizer data.
- Text preprocessing: Convert the sample sentence to lowercase and tokenize it into words.
- Stopword removal: Load English stopwords and filter them out from the token list.
- Output: Print both the original and cleaned tokens for comparison.
Python
import nltk
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
nltk.download('stopwords')
nltk.download('punkt')
nltk.download('punkt_tab')  # tokenizer data required by newer NLTK releases
# Sample text
text = "This is a sample sentence showing stopword removal."
# Get English stopwords and tokenize
stop_words = set(stopwords.words('english'))
tokens = word_tokenize(text.lower())
# Remove stopwords
filtered_tokens = [word for word in tokens if word not in stop_words]
print("Original:", tokens)
print("Filtered:", filtered_tokens)
Output:
Original: ['this', 'is', 'a', 'sample', 'sentence', 'showing', 'stopword', 'removal', '.']
Filtered: ['sample', 'sentence', 'showing', 'stopword', 'removal', '.']
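Note that the period survives the filter because NLTK's stopword list contains only words. A common follow-up, sketched here reusing the tokens and stop_words variables from the block above, is to keep alphabetic tokens only:
Python
# Keep only alphabetic, non-stopword tokens
filtered_tokens = [word for word in tokens
                   if word.isalpha() and word not in stop_words]
print("Filtered:", filtered_tokens)
# Filtered: ['sample', 'sentence', 'showing', 'stopword', 'removal']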
Other Methods for Stopword Removal
Let's look at other common methods for stopword removal:
1. Implementation using spaCy
spaCy offers a more sophisticated approach with built-in linguistic analysis:
- Import spaCy: The library used for natural language processing.
- Load model: Loads the English pipeline en_core_web_sm, which includes tokenization and stopword detection.
- Process text: Converts the sentence into a Doc object with linguistic features.
- Remove stopwords: Filters out common words using token.is_stop.
- Print output: Displays the non-stopword tokens, e.g. ['researchers', 'developing', 'advanced', 'algorithms', '.'].
Python
import spacy
nlp = spacy.load("en_core_web_sm")
doc = nlp("The researchers are developing advanced algorithms.")
# Filter stopwords using spaCy
filtered_words = [token.text for token in doc if not token.is_stop]
print("Filtered:", filtered_words)
Output:
Filtered: ['researchers', 'developing', 'advanced', 'algorithms', '.']
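spaCy's stopword behaviour can also be customised. One documented pattern is to flip a lexeme's is_stop flag in the vocabulary; a small sketch continuing from the pipeline loaded above:
Python
# Mark an extra word as a stopword by flipping its lexeme flag
nlp.vocab["algorithms"].is_stop = True

doc = nlp("The researchers are developing advanced algorithms.")
print([token.text for token in doc if not token.is_stop])
# ['researchers', 'developing', 'advanced', '.']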
2. Removing stop words with Gensim
We can use Gensim for stopword removal:
- Import function: Brings in remove_stopwords from Gensim.
- Define text: A sample sentence is used.
- Apply stopword removal: Removes common words like "the" and "a".
- Print output: Shows the original and filtered text.
Python
from gensim.parsing.preprocessing import remove_stopwords
# Another sample text
new_text = "The majestic mountains provide a breathtaking view."
# Remove stopwords using Gensim
new_filtered_text = remove_stopwords(new_text)
print("Original Text:", new_text)
print("Text after Stopword Removal:", new_filtered_text)
Output:
Original Text: The majestic mountains provide a breathtaking view.
Text after Stopword Removal: The majestic mountains provide breathtaking view.
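Notice that the capitalised "The" survives in the output above: the helper matched the tokens literally against Gensim's built-in list. That list is also exposed directly as the STOPWORDS frozenset, which allows token-level filtering; a minimal sketch reusing new_text:
Python
from gensim.parsing.preprocessing import STOPWORDS

# Gensim's stopword list is exposed as a frozenset for manual filtering
tokens = new_text.lower().split()
print([t for t in tokens if t not in STOPWORDS])
# e.g. ['majestic', 'mountains', 'provide', 'breathtaking', 'view.']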
3. Implementation with scikit-learn
We can use scikit-learn's built-in English stopword list for stopword removal:
- Import modules: Brings in ENGLISH_STOP_WORDS from sklearn and word_tokenize from nltk.
- Define text: A sample sentence is used.
- Tokenize: Splits the sentence into individual words using NLTK's word_tokenize.
- Remove stopwords: Filters out tokens found in scikit-learn's English stopword list.
- Print output: Shows both the original and the stopword-removed text.
Python
from sklearn.feature_extraction.text import ENGLISH_STOP_WORDS
from nltk.tokenize import word_tokenize
# Another sample text
new_text = "The quick brown fox jumps over the lazy dog."
# Tokenize the new text using NLTK
new_words = word_tokenize(new_text)
# Remove stopwords using scikit-learn's English stopword list
new_filtered_words = [
    word for word in new_words if word.lower() not in ENGLISH_STOP_WORDS]
# Join the filtered words to form a clean text
new_clean_text = ' '.join(new_filtered_words)
print("Original Text:", new_text)
print("Text after Stopword Removal:", new_clean_text)
Output:
Original Text: The quick brown fox jumps over the lazy dog.
Text after Stopword Removal: quick brown fox jumps lazy dog .
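In practice, the idiomatic scikit-learn route is to let the vectorizer drop stopwords itself via its stop_words parameter rather than filtering tokens by hand; a minimal sketch:
Python
from sklearn.feature_extraction.text import CountVectorizer

# Let the vectorizer drop English stopwords during vectorization
vectorizer = CountVectorizer(stop_words='english')
vectorizer.fit_transform([new_text])
print(vectorizer.get_feature_names_out())
# ['brown' 'dog' 'fox' 'jumps' 'lazy' 'quick']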
Each library ships its own stopword list, so the filtered results can differ slightly; NLTK remains a popular default for lightweight preprocessing, while spaCy suits pipelines that already perform full linguistic analysis.
Advanced Techniques and Custom Stopwords
Real-world applications often require custom stopword lists tailored to specific domains:
- Imports Counter to count word frequencies.
- Tokenizes all texts and flattens them into one word list.
- Calculates frequency of each word.
- Adds words to custom stopwords if they exceed a set frequency threshold.
- Merges custom stopwords with NLTK’s default stopword list.
Python
from collections import Counter
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize

def create_custom_stopwords(texts, threshold=0.1):
    # Count word frequencies across all texts
    all_words = []
    for text in texts:
        words = word_tokenize(text.lower())
        all_words.extend(words)
    word_freq = Counter(all_words)
    total_words = len(all_words)
    # Words whose relative frequency exceeds the threshold become stopwords
    custom_stops = {word for word, freq in word_freq.items()
                    if freq / total_words > threshold}
    return custom_stops.union(set(stopwords.words('english')))
This approach identifies domain-specific high-frequency words that may not appear in standard stopword lists but function as noise in particular contexts.
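A quick usage sketch (the corpus and threshold below are made up for illustration):
Python
docs = [
    "patient shows improvement, patient responding to treatment",
    "patient discharged, follow-up scheduled for patient",
]
custom = create_custom_stopwords(docs, threshold=0.1)
print("patient" in custom)  # True: frequent enough to be treated as noise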
Edge Cases and Limitations
Stopword removal is essential in NLP but must be handled carefully. It requires normalization (e.g., handling case and contractions) and language-specific lists for multilingual text. Removing words like "not" or certain prepositions can harm tasks such as sentiment analysis or entity recognition. Over-removal loses valuable signal, while under-removal keeps noise. Its impact varies: it helps classification but is risky in tasks that need full semantic context.
| Aspect | Details |
|---|---|
| Normalization | Handle case differences and contractions (e.g., "don't", "THE") |
| Language Specificity | Use stopword lists tailored to each language |
| Context Risk | Important words like "not" or prepositions may be needed for meaning |
| Signal vs. Noise | Too much removal loses signal; too little keeps noise |
| Task Sensitivity | Helps in classification but may hurt tasks needing deeper understanding |
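For sentiment-style tasks, a common safeguard is to drop negation terms from the stopword set before filtering; a small sketch using NLTK's list:
Python
from nltk.corpus import stopwords

stop_words = set(stopwords.words('english'))
# Keep negation terms that carry sentiment signal
stop_words -= {"not", "no", "nor"}

tokens = ["this", "movie", "was", "not", "good"]
print([t for t in tokens if t not in stop_words])
# ['movie', 'not', 'good']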
Modern deep learning approaches sometimes learn to ignore irrelevant words automatically, but traditional machine learning methods and resource-constrained applications still benefit significantly from thoughtful stopword handling.