How to Extract Script and CSS Files from Web Pages in Python?

Last Updated: 08 Sep, 2021

Prerequisites: Requests, BeautifulSoup, File Handling in Python

In this article, we will discuss how to extract the script and CSS files linked from a web page using Python. For this, we will collect the URLs of the CSS and JavaScript files that are attached to the source code of the website.

First, the URL of the website to be scraped is determined and a request is sent to it. After retrieving the website's content, the links of the two file types are collected; we can then save them to text files or download them into separate folders, according to our needs.

Modules Needed

bs4: Beautiful Soup (bs4) is a Python library for pulling data out of HTML and XML files. This module does not come built-in with Python.
requests: Requests allows you to send HTTP/1.1 requests extremely easily. This module also does not come built-in with Python.

Example 1: Here we are counting the number of fetched links for each respective type.

```python
# Import required libraries
import requests
from bs4 import BeautifulSoup

# Web URL
web_url = "https://www.geeksforgeeks.org/"

# Get HTML content
html = requests.get(web_url).content

# Parse HTML content
soup = BeautifulSoup(html, "html.parser")

js_files = []
cs_files = []

# Collect every <script> tag that has a 'src' attribute
for script in soup.find_all("script"):
    if script.attrs.get("src"):
        url = script.attrs.get("src")
        js_files.append(web_url + url)

# Collect every <link> tag that has an 'href' attribute
for css in soup.find_all("link"):
    if css.attrs.get("href"):
        _url = css.attrs.get("href")
        cs_files.append(web_url + _url)

print(f"Total {len(js_files)} javascript files found")
print(f"Total {len(cs_files)} CSS files found")
```

Output:

Total 7 javascript files found
Total 14 CSS files found

We can also use file handling to export the fetched links into text files.
Example 2: Writing the fetched links into text files.

```python
# Import required libraries
import requests
from bs4 import BeautifulSoup

# Web URL
web_url = "https://www.geeksforgeeks.org/"

# Get HTML content
html = requests.get(web_url).content

# Parse HTML content
soup = BeautifulSoup(html, "html.parser")

js_files = []
cs_files = []

# Collect every <script> tag that has a 'src' attribute
for script in soup.find_all("script"):
    if script.attrs.get("src"):
        url = script.attrs.get("src")
        js_files.append(web_url + url)

# Collect every <link> tag that has an 'href' attribute
for css in soup.find_all("link"):
    if css.attrs.get("href"):
        _url = css.attrs.get("href")
        cs_files.append(web_url + _url)

# Add the links to the text files
with open("javascript_files.txt", "w") as f:
    for js_file in js_files:
        print(js_file, file=f)

with open("css_files.txt", "w") as f:
    for css_file in cs_files:
        print(css_file, file=f)
```

Output: The fetched links are written, one per line, to javascript_files.txt and css_files.txt.

Author: deepanshu_rustagi
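The introduction mentions placing the two file types into separate folders, but the examples above stop at writing the link lists. A minimal sketch of that download step, using only the standard library; the folder names `js` and `css` and the `local_name` helper are my own choices, not from the original article:

```python
import os
import urllib.request
from urllib.parse import urlparse

def local_name(url):
    # Derive a filename from the URL path; fall back to a
    # placeholder when the path has no final component
    name = os.path.basename(urlparse(url).path)
    return name or "index"

def download_all(urls, folder):
    # Create the target folder if it does not exist yet
    os.makedirs(folder, exist_ok=True)
    for url in urls:
        # Fetch each file and save it under its derived name
        with urllib.request.urlopen(url) as resp:
            data = resp.read()
        with open(os.path.join(folder, local_name(url)), "wb") as f:
            f.write(data)

# With js_files and cs_files collected as in the examples above:
# download_all(js_files, "js")
# download_all(cs_files, "css")
```

Files whose URLs end in the same basename would overwrite each other here; a production version might prefix the name with a counter or a hash of the URL.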