Scrapy is a web scraping framework used to scrape, parse, and collect web data. Every Scrapy project contains a pipelines.py file, which handles scraped data through a series of components (known as item pipelines) that are executed sequentially.
In this article, we will learn about the methods defined for this pipelines file and walk through different examples of using them.
Setting Up Project
Let's first create a Scrapy project. Make sure that Python and pip are installed on the system, then run the commands given below one by one to create a Scrapy project similar to the one used in this article.
Step 1: First, create a virtual environment in a folder named GFGScrapy and activate it.
# To create a folder named GFGScrapy
mkdir GFGScrapy
cd GFGScrapy
# making a virtual environment there ("." creates it in the current folder)
virtualenv .
# activating it (Scripts is the environment's folder on Windows)
cd Scripts
activate
cd ..
Hence, after running all these commands, we will get the output as shown:
Creating virtual environment
Step 2: Now it's time to create a Scrapy project. For that, make sure Scrapy is installed on the system. If not, install it using the command given below.
pip install scrapy
Now, to create a Scrapy project, use the commands given below; these also create a spider.
# project name is scrapytutorial
scrapy startproject scrapytutorial
cd scrapytutorial
scrapy genspider spider_to_crawl https://quotes.toscrape.com/
The project directory then looks like the one shown in the image below. (Refer to the Scrapy documentation if you want to know more about a Scrapy project's structure and get familiar with it.)
Directory structure
Let's have a look at the spider_to_crawl.py file inside our spiders folder. This is the file where we specify the URLs the spider has to crawl, along with a method named parse() which describes what should be done with the data scraped by the spider.
This file is automatically generated by the "scrapy genspider" command used above and is named after the spider. The default generated file is given below.
spider_to_crawl.py
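For reference, the generated spider typically looks like this (a sketch; the exact contents vary slightly across Scrapy versions and with the arguments passed to genspider):
Python3
import scrapy


class SpiderToCrawlSpider(scrapy.Spider):
    name = 'spider_to_crawl'
    allowed_domains = ['quotes.toscrape.com']
    start_urls = ['https://quotes.toscrape.com/']

    def parse(self, response):
        # scraping logic goes here
        pass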
An item pipeline is a component written inside the pipelines.py file; it is used to perform operations on the scraped data sequentially. The operations we can perform on the scraped items are listed below:
- Parsing the scraped files or data.
- Storing the scraped data in databases.
- Validating and checking the data obtained.
- Converting data from one format to another, e.g., to JSON.
We will be performing some of these operations in the examples below.
Operations are performed sequentially, and the settings.py file describes the order in which they run: we can specify which operation is performed first and which next. This matters when several operations are applied to the same items, and the ordering is declared as shown in the snippet below.
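The registration lives in settings.py in the ITEM_PIPELINES dictionary, where lower numbers run earlier (the first entry matches this project; the commented-out second entry is a hypothetical extra component, shown only to illustrate ordering):
Python3
# settings.py
ITEM_PIPELINES = {
    'scrapytutorial.pipelines.ScrapytutorialPipeline': 300,
    # a hypothetical second component that would run after the first:
    # 'scrapytutorial.pipelines.SomeOtherPipeline': 400,
}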
Let's first see the inner structure of a default pipeline file. Below is the default class mentioned in that file.
Default pipelines.py file
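It typically contains a single class with a bare process_item() method (a sketch; the generated comments are trimmed and details vary by Scrapy version):
Python3
# useful for handling different item types with a single interface
from itemadapter import ItemAdapter


class ScrapytutorialPipeline:
    def process_item(self, item, spider):
        # items pass through unchanged by default
        return item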
To perform different operations on items, we declare separate components (classes in the file), each consisting of the methods used to perform its operations. By default, the pipelines file has one class named after the project. We can also create our own classes to define the operations they have to perform; if the file contains more than one class, their execution order must be specified explicitly. The structure of a component is defined below:
Each component (class) must have one default method named process_item(). This method is always called for every item passing through the component of the pipelines file.
Syntax: process_item( self, item, spider )
Parameters:
- self: a reference to the object on which the method is called.
- item: the item scraped by the spider.
- spider: the spider that scraped the item.
This method returns the item object, modified or unmodified; to reject a faulty item, it raises an exception (Scrapy's DropItem), which discards the item. It is also the natural place to call other methods of the class that modify or store the data, as in the validation sketch below.
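Here is a minimal validation sketch (the class name is hypothetical and not part of the project built below; DropItem is Scrapy's exception for discarding items):
Python3
from scrapy.exceptions import DropItem


class ValidateQuotePipeline:
    def process_item(self, item, spider):
        # discard items whose 'Quote' field is missing or empty
        if not item.get('Quote'):
            raise DropItem("Missing Quote in %s" % item)
        return item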
Additional methods: These methods are used along with process_item() to gain extra control over the items.

Method | Description
---|---
open_spider(self, spider) | Called when the spider is opened; receives the spider object. Returns nothing; it is typically used to set things up, e.g., open a file.
close_spider(self, spider) | Called when the spider is closed; receives the spider object. Typically used to finish up, e.g., modify or close a file opened earlier.
from_crawler(cls, crawler) | Class method that receives the Crawler object. It gives the pipeline access to Scrapy's core components, such as the settings, so that pipelines can extend their functionality.
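To make these concrete, below is a hedged sketch combining all three methods in one component; the FileExportPipeline class and the FILE_NAME setting are hypothetical, used only to show how from_crawler() pulls a value out of the crawler's settings:
Python3
class FileExportPipeline:
    def __init__(self, file_name):
        self.file_name = file_name

    @classmethod
    def from_crawler(cls, crawler):
        # read a (hypothetical) setting and build the pipeline from it
        return cls(file_name=crawler.settings.get('FILE_NAME', 'items.txt'))

    def open_spider(self, spider):
        # called once when the spider starts: open resources here
        self.file = open(self.file_name, 'w')

    def close_spider(self, spider):
        # called once when the spider finishes: release resources here
        self.file.close()

    def process_item(self, item, spider):
        # write each item as one line of text
        self.file.write(str(dict(item)) + '\n')
        return item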
Apart from all these methods, we can also create our own methods to perform more operations. For example, to store data we can have one component that initializes the database and creates tables in it, and another component that adds the data to the database.
Before we move ahead to the examples, an important point to note is that all the components (classes) of the pipelines.py file must be registered in settings.py, in the ITEM_PIPELINES dictionary shown earlier. This fixes the order in which the components execute and hence produces accurate results.
Creating Items to Be Passed Across Files
One more thing to note is that we must describe what our item will contain in the items.py file. Hence our items.py file contains the code given below:
Python3
# Define here the models for your scraped items
import scrapy


class ScrapytutorialItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    Quote = scrapy.Field()  # only one field, Quote
We will import this file in our spider_to_crawl.py file; in this way we create the items that are passed to the pipeline. We will mainly be using the Quotes to Scrape web page (quotes.toscrape.com), which lists quotes with their authors and tags, and we will modify and process the scraped data with item pipelines throughout the examples.
Example 1: Converting scraped data to JSON format
To convert the data to JSON format, we will use Python's json library along with its dumps() method.
The idea is that we will receive the scraped data in the pipelines.py file, then open a file named result.json (if not already present, it will be created automatically) and write all the JSON data into it.
- open_spider() will be called to open the file (result.json) when the spider starts crawling.
- close_spider() will be called to close the file when the spider is closed and scraping is over.
- process_item() will always be called (since it is the default) and will be mainly responsible for converting each item to JSON and writing it to the file, much the way web frameworks serialize backend data to JSON.
Hence the code in our pipelines.py looks like this:
Python3
from itemadapter import ItemAdapter
import json


class ScrapytutorialPipeline:
    def open_spider(self, spider):
        # called when the spider starts: open the output file
        self.file = open('result.json', 'w')

    def close_spider(self, spider):
        # called when the spider finishes: close the output file
        self.file.close()

    def process_item(self, item, spider):
        # calling dumps() to create JSON data
        line = json.dumps(dict(item)) + "\n"
        self.file.write(line)
        return item
Our spider_to_crawl.py:
Python3
import scrapy
from ..items import ScrapytutorialItem


class SpiderToCrawlSpider(scrapy.Spider):
    name = 'spider_to_crawl'
    start_urls = ['https://quotes.toscrape.com/']

    def parse(self, response):
        # creating an item to fill
        items = ScrapytutorialItem()

        # this path is based on the page's selectors
        Quotes_all = response.xpath('//div/div/div/span[1]')

        for quote in Quotes_all:  # extracting data
            items['Quote'] = quote.css('::text').extract()
            yield items
Output:
Explanation:
After running the command "scrapy crawl spider_to_crawl", the steps below take place.
- The spider crawls, which creates the result.json file. The spider scrapes the web page and collects the data in the Quotes_all variable, then sends each item from this variable one by one to the pipelines.py file.
- The pipelines.py file receives each item from the spider, converts it to JSON using the dumps() method, and writes the output to the opened file.
This is the JSON file which got created:

Example 2: Pipeline to upload data to database in SQLite3
Now we are going to present an item pipeline that scrapes the web content and stores it in a database table defined by us. For simplicity, we will use an SQLite3 database.
We will use the standard way of working with SQLite3 in Python to build a pipeline that receives the items from the spider and inserts the data into a table in the database it creates.
spider_to_crawl.py:
Python3
import scrapy
from ..items import ScrapytutorialItem


class SpiderToCrawlSpider(scrapy.Spider):
    name = 'spider_to_crawl'
    start_urls = ['https://quotes.toscrape.com/']

    def parse(self, response):
        # creating an item to fill
        items = ScrapytutorialItem()

        # this path is based on the page's selectors
        Quotes_all = response.xpath('//div/div/div/span[1]')

        # extracting data
        for quote in Quotes_all:
            items['Quote'] = quote.css('::text').extract()
            yield items
The pipeline methods below are to be written in the pipelines.py file so that the database is created:
pipelines.py file
Python3
from itemadapter import ItemAdapter
import sqlite3


class ScrapytutorialPipeline(object):

    # __init__ method to initialize the database
    # and create the connection and table
    def __init__(self):
        self.create_conn()
        self.create_table()

    # create_conn() creates (or opens) the database
    # used to store the scraped data
    def create_conn(self):
        self.conn = sqlite3.connect("mydata.db")
        self.curr = self.conn.cursor()

    # create_table() uses SQL commands to create the table
    def create_table(self):
        self.curr.execute("""DROP TABLE IF EXISTS firsttable""")
        self.curr.execute("""create table firsttable(
            Quote text
        )""")

    # store items in the database
    def process_item(self, item, spider):
        self.putitemsintable(item)
        return item

    def putitemsintable(self, item):
        self.curr.execute("""insert into firsttable values (?)""", (
            item['Quote'][0],  # extracting the quote text
        ))
        self.conn.commit()
Output:

Explanation:
After running the command "scrapy crawl spider_to_crawl", the steps below take place.
- In the spider file we wrote the code that makes our spider visit the site, extract all the data matching the given selectors, build items from it, and pass those items to the pipelines.py file for further processing.
- We also created an items object to hold the data being passed; it is defined in the items.py file in the project directory.
- When the spider crawls, it collects the data in the items object and transfers it to the pipeline; what happens next is clear from the code above and the hints in its comments: the pipelines.py file creates the database and stores all the incoming items.
- The __init__() method is the constructor, called once when the pipeline object is created; it calls the other methods that initialize the database and create the table.
- Then the process_item() method is called, which in turn calls the method named putitemsintable() that adds the data to the database. After this method executes, control returns to the spider so the next item can be processed.
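To check that the rows really landed in mydata.db, a quick stand-alone sketch can be run separately once the crawl has finished:
Python3
import sqlite3

# open the database created by the pipeline and print every stored quote
conn = sqlite3.connect("mydata.db")
for (quote,) in conn.execute("SELECT Quote FROM firsttable"):
    print(quote)
conn.close()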