
Removing stop words with NLTK in Python

Last Updated : 26 Jul, 2025

Natural language processing tasks often involve filtering out commonly occurring words that add little or no semantic value to text analysis. These words, known as stopwords, include articles, prepositions and pronouns such as "the", "and", "is" and "in". While they seem insignificant, proper stopword handling can dramatically impact the performance and accuracy of NLP applications.

Stop-words Impact

Consider the sentence: "The quick brown fox jumps over the lazy dog"

  • Stopwords: "the" and "over"
  • Content words: "quick", "brown", "fox", "jumps", "lazy", "dog"

Stopword removal becomes particularly important when dealing with large text corpora, where computational efficiency matters. Processing every single word, including high-frequency stopwords, consumes unnecessary resources and can skew analysis results.

When to Remove Stopwords

The decision to remove stopwords depends heavily on the specific NLP task at hand:

Tasks that benefit from stopword removal:

  • Text classification and sentiment analysis
  • Information retrieval and search engines
  • Topic modelling and clustering
  • Keyword extraction

Tasks that require preserving stopwords:

  • Machine translation (maintains grammatical structure)
  • Text summarization (preserves sentence coherence)
  • Question-answering systems (syntactic relationships matter)
  • Grammar checking and parsing

Language modeling presents an interesting middle ground where the decision depends on the specific application requirements and available computational resources.

Categories of Stopwords

Understanding different types of stopwords helps in making informed decisions:

  • Standard Stopwords: Common function words like articles ("a", "the"), conjunctions ("and", "but") and prepositions ("in", "on")
  • Domain-Specific Stopwords: Context-dependent terms that appear frequently in specific fields like "patient" in medical texts
  • Contextual Stopwords: Words with extremely high frequency in particular datasets
  • Numerical Stopwords: Digits, punctuation marks and single characters
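
These categories can be combined into a single filter. Below is a minimal sketch that assumes the NLTK stopwords corpus has already been downloaded; the domain terms ("patient", "doctor", "hospital") are purely illustrative:
Python
import string
from nltk.corpus import stopwords

# Standard stopwords (assumes nltk.download('stopwords') has been run)
standard_stops = set(stopwords.words('english'))

# Illustrative domain-specific stopwords, e.g. for clinical notes
domain_stops = {"patient", "doctor", "hospital"}

def is_numeric_stop(token):
    # Numerical stopwords: digits, punctuation marks and single characters
    return token.isdigit() or token in string.punctuation or len(token) == 1

all_stops = standard_stops | domain_stops

tokens = ["the", "patient", "reported", "3", "symptoms", ",", "severe", "pain"]
filtered = [t for t in tokens if t not in all_stops and not is_numeric_stop(t)]
print(filtered)  # ['reported', 'symptoms', 'severe', 'pain']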

Implementation with NLTK

NLTK provides robust support for stopword removal across many languages (the exact number depends on the installed corpus version). The implementation involves tokenization followed by filtering:

  • Setup: Import NLTK modules and download required resources like stopwords and tokenizer data.
  • Text preprocessing: Convert the sample sentence to lowercase and tokenize it into words.
  • Stopword removal: Load English stopwords and filter them out from the token list.
  • Output: Print both the original and cleaned tokens for comparison.
Python
import nltk
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize

nltk.download('stopwords')
nltk.download('punkt')
nltk.download('punkt_tab')  # required by word_tokenize on newer NLTK releases

# Sample text
text = "This is a sample sentence showing stopword removal."

# Get English stopwords and tokenize
stop_words = set(stopwords.words('english'))
tokens = word_tokenize(text.lower())

# Remove stopwords
filtered_tokens = [word for word in tokens if word not in stop_words]

print("Original:", tokens)
print("Filtered:", filtered_tokens)

Output:

Original: ['this', 'is', 'a', 'sample', 'sentence', 'showing', 'stopword', 'removal', '.']
Filtered: ['sample', 'sentence', 'showing', 'stopword', 'removal', '.']
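
The same corpus ships stopword lists for other languages as well; stopwords.fileids() shows which ones are available in the installed data, and any of them can be loaded by name, for example:
Python
from nltk.corpus import stopwords

# Languages covered by the installed stopwords corpus
print(stopwords.fileids())

# Load another language's list, for example German
german_stops = set(stopwords.words('german'))
print(len(german_stops), "German stopwords loaded")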

Other Methods for Stopword Removal

Let's look at a few other libraries that can remove stopwords:

1. Implementation using spaCy

spaCy offers a more sophisticated approach with built-in linguistic analysis:

  • Imports spaCy: Used for natural language processing.
  • Load model: Loads the English NLP model with tokenization and stopword detection.
  • Process text: Converts the sentence into a Doc object with linguistic features.
  • Remove stopwords: Filters out common words using token.is_stop.
  • Print output: Displays the remaining non-stopword tokens, e.g. ['researchers', 'developing', 'advanced', 'algorithms', '.'].
Python
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The researchers are developing advanced algorithms.")

# Filter stopwords using spaCy
filtered_words = [token.text for token in doc if not token.is_stop]
print("Filtered:", filtered_words)

Output:

Filtered: ['researchers', 'developing', 'advanced', 'algorithms', '.']
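
spaCy's stopword set can also be extended at runtime. A minimal sketch (the extra word "algorithms" is marked as a stopword purely for illustration):
Python
import spacy

nlp = spacy.load("en_core_web_sm")

# Register an extra, purely illustrative stopword
nlp.Defaults.stop_words.add("algorithms")
nlp.vocab["algorithms"].is_stop = True

doc = nlp("The researchers are developing advanced algorithms.")
print([token.text for token in doc if not token.is_stop])
# Expected: ['researchers', 'developing', 'advanced', '.']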

2. Removing stop words with Gensim

We can use Gensim for stopword removal:

  • Import function: Brings in remove_stopwords from Gensim.
  • Define text: A sample sentence is used.
  • Apply stopword removal: Removes common words like “the,” “a”.
  • Print output: Shows original and filtered text.
Python
from gensim.parsing.preprocessing import remove_stopwords

# Another sample text
new_text = "The majestic mountains provide a breathtaking view."

# Remove stopwords using Gensim
new_filtered_text = remove_stopwords(new_text)

print("Original Text:", new_text)
print("Text after Stopword Removal:", new_filtered_text)

Output:

Original Text: The majestic mountains provide a breathtaking view.
Text after Stopword Removal: The majestic mountains provide breathtaking view.
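
Note that the capitalised "The" survives: remove_stopwords splits the string on whitespace and compares each token, case-sensitively, against Gensim's lowercase stopword set. Lowercasing the text first removes it:
Python
from gensim.parsing.preprocessing import remove_stopwords

print(remove_stopwords("The majestic mountains provide a breathtaking view.".lower()))
# majestic mountains provide breathtaking view.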

3. Implementation with Scikit Learn

We can use Scikit Learn for stopword removal:

  • Imports scikit-learn's built-in ENGLISH_STOP_WORDS list and NLTK's tokenizer.
  • Defines a sample sentence.
  • Tokenizes the sentence into individual words using NLTK's word_tokenize.
  • Filters out tokens that appear in scikit-learn's English stopword list.
  • Prints both the original and stopword-removed versions of the text.
Python
from sklearn.feature_extraction.text import ENGLISH_STOP_WORDS
from nltk.tokenize import word_tokenize

# Another sample text
new_text = "The quick brown fox jumps over the lazy dog."

# Tokenize the new text using NLTK
new_words = word_tokenize(new_text)

# Remove stopwords using scikit-learn's English stopword list
new_filtered_words = [
    word for word in new_words if word.lower() not in ENGLISH_STOP_WORDS]

# Join the filtered words to form a clean text
new_clean_text = ' '.join(new_filtered_words)

print("Original Text:", new_text)
print("Text after Stopword Removal:", new_clean_text)

Output:

Original Text: The quick brown fox jumps over the lazy dog.
Text after Stopword Removal: quick brown fox jumps lazy dog .
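
In typical scikit-learn pipelines, stopwords are more often dropped during vectorization itself rather than in a separate pass. A brief sketch using CountVectorizer's built-in English list:
Python
from sklearn.feature_extraction.text import CountVectorizer

vectorizer = CountVectorizer(stop_words='english')
X = vectorizer.fit_transform(["The quick brown fox jumps over the lazy dog."])

# Only non-stopword terms make it into the learned vocabulary
print(vectorizer.get_feature_names_out())
# ['brown' 'dog' 'fox' 'jumps' 'lazy' 'quick']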

Each of these libraries maintains its own stopword list, so the filtered output can differ slightly between them; NLTK remains the most common choice for simple, list-based stopword removal.

Advanced Techniques and Custom Stopwords

Real-world applications often require custom stopword lists tailored to specific domains:

  • Imports Counter to count word frequencies.
  • Tokenizes all texts and flattens them into one word list.
  • Calculates frequency of each word.
  • Adds words to custom stopwords if they exceed a set frequency threshold.
  • Merges custom stopwords with NLTK’s default stopword list.
Python
from collections import Counter
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize

def create_custom_stopwords(texts, threshold=0.1):
    # Count word frequencies across all texts
    all_words = []
    for text in texts:
        words = word_tokenize(text.lower())
        all_words.extend(words)
    
    word_freq = Counter(all_words)
    total_words = len(all_words)
    
    # Words appearing more than threshold become stopwords
    custom_stops = {word for word, freq in word_freq.items() 
                   if freq / total_words > threshold}
    
    return custom_stops.union(set(stopwords.words('english')))

This approach identifies domain-specific high-frequency words that may not appear in standard stopword lists but function as noise in particular contexts.
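
A small usage sketch of the function above (the mini-corpus and the 5% threshold are made up for illustration):
Python
# Hypothetical mini-corpus where "patient" behaves like a domain stopword
texts = [
    "The patient reported mild pain.",
    "The patient was discharged after the patient recovered.",
    "Follow-up showed the patient improving steadily.",
]

custom_stops = create_custom_stopwords(texts, threshold=0.05)
print("patient" in custom_stops)  # True: "patient" exceeds 5% of all tokens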

Edge Cases and Limitations

Stopword removal is essential in NLP but must be handled carefully. It requires normalization (e.g., handling case and contractions) and language-specific lists for multilingual text. Removing words like "not" or certain prepositions can harm tasks such as sentiment analysis or entity recognition. Over-removal may lose valuable signals while under-removal can keep noise. Its impact varies—beneficial in classification but risky in tasks needing full semantic context.

  • Normalization: Handle case differences and contractions (e.g., "don't", "THE")
  • Language Specificity: Use stopword lists tailored to each language
  • Context Risk: Important words like "not" or prepositions may be needed for meaning
  • Signal vs. Noise: Too much removal loses signal; too little leaves extra noise
  • Task Sensitivity: Helps in classification but may hurt tasks needing deeper understanding
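
For sentiment-style tasks, one common mitigation (shown here only as a sketch) is to keep negation words out of the stopword set so that they survive filtering:
Python
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize

# Keep negation words that carry sentiment
negations = {"not", "no", "nor", "n't"}
stop_words = set(stopwords.words('english')) - negations

tokens = word_tokenize("The plot is not good.".lower())
print([w for w in tokens if w not in stop_words])
# ['plot', 'not', 'good', '.']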

Modern deep learning approaches sometimes learn to ignore irrelevant words automatically, but traditional machine learning methods and resource-constrained applications still benefit significantly from thoughtful stopword handling.

