
Proceedings of the Second International Conference on Automation, Computing and Renewable Systems (ICACRS-2023)

IEEE Xplore Part Number: CFP23CB5-ART; ISBN: 979-8-3503-4023-5

Utilizing Python for Web Scraping and Incremental Data Extraction

2023 2nd International Conference on Automation, Computing and Renewable Systems (ICACRS) | 979-8-3503-4023-5/23/$31.00 ©2023 IEEE | DOI: 10.1109/ICACRS58579.2023.10404702

Vedant Bisht
Department of Computer Science and Engineering, Chandigarh University, Mohali, India
[email protected]

Renu Choyal
Department of Computer Science and Engineering, Chandigarh University, Mohali, India
[email protected]

Akshay Singh Negi
Department of Computer Science and Engineering, Chandigarh University, Mohali, India
[email protected]

Er. Kulvinder Singh
Department of Computer Science and Engineering, Chandigarh University, Mohali, India
[email protected]

Abstract - The automated process of extracting data from web pages is known as web scraping. The process involves downloading the HTML content of a web page, parsing it, and then retrieving the required data from it. Python's robust toolkit, which includes libraries such as Beautiful Soup and Scrapy, makes web scraping tasks straightforward and effective. Incremental data extraction, in addition to web scraping, is a useful tactic for dealing with large amounts of data or websites that frequently change their content. Retrieving only data that has been added or changed since the previous extraction is the aim of incremental data extraction. Python offers several techniques for incremental data extraction, such as timestamp-based methods, pagination, and caching.

Keywords - Web scraping, Incremental data extraction, Python, libraries, Beautiful Soup, Scrapy, HTML (Hypertext Markup Language), data extraction, caching, pagination, timestamps, automation, parsing, URL (Uniform Resource Locator).

I. INTRODUCTION

In this digital age, data has become essential to businesses in every sector. Effective data collection and use directly affect a business's ability to make strategic decisions, operate as a whole, and succeed in the long run. Businesses now have to deal with the challenge of gathering and organizing data from several sources due to the internet's introduction and the abundance of information that can be found there. One effective tool that has emerged to address this issue is web scraping, and Python has gained popularity as a language for putting this strategy into practice. The process of extracting data from websites for analysis, research, or any other purpose is known as web scraping. It enables companies to gather data from a variety of websites, irrespective of their design or organization, and transform it into a format that is easier to use and manage. Python is a popular and versatile programming language that offers a large range of tools and frameworks that make web scraping reliable, easy to use, and accessible to programmers with different levels of expertise. Incremental data extraction is the process of systematically and routinely retrieving updated data from websites so that the most recent information is available for analysis. This method is very useful in dynamic online contexts where data is often updated and changed. Organizations may save time, reduce mistakes, and guarantee they have the most up-to-date information by creating a systematic and automated method of extracting and updating data. Beautiful Soup, Scrapy, and Selenium are just a few of the excellent web scraping utilities available in Python. These libraries provide the capabilities required for programmatically navigating and interacting with web pages, locating specific elements, and extracting pertinent data. Furthermore, Python's versatility and ease of use make it an excellent choice for web scraping applications, allowing even non-experts to design effective solutions rapidly. Although web scraping has many advantages, it is critical to follow the terms of service and rules of websites and to guarantee that data extraction is done ethically and legally. This study also looks at best practices and standards for executing online scraping activities.

Web scraping and incremental data extraction with Python enable enterprises to take advantage of the vast amount of data accessible on the internet for informed decision-making and operational efficiency. The purpose of this research article is to present a full review of the topic, including technological features, practical applications, and ethical implications. Organizations may obtain a competitive advantage in their respective sectors by knowing and leveraging the potential of web scraping.
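As a brief illustration of the incremental tactics named in the abstract (timestamp-based checks, pagination, and caching), the sketch below combines a paginated fetch with a conditional HTTP request; the URL, page parameter, and timestamp are placeholders rather than details taken from the study.

# Sketch: fetch only listing pages that changed since the previous run.
# BASE_URL, the "page" parameter, and LAST_RUN are illustrative placeholders.
import requests

BASE_URL = "https://example.com/listings"
LAST_RUN = "Wed, 01 Nov 2023 00:00:00 GMT"   # timestamp saved after the previous extraction

new_pages = []
for page in range(1, 6):                      # pagination: walk a bounded range of pages
    response = requests.get(
        BASE_URL,
        params={"page": page},
        headers={"If-Modified-Since": LAST_RUN},   # caching/timestamps: server may reply 304
        timeout=30,
    )
    if response.status_code == 304:
        continue                              # page unchanged since the previous run
    new_pages.append(response.text)           # only changed pages go on to parsing

print(f"{len(new_pages)} page(s) changed since the last extraction")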
II. LITERATURE REVIEW

The practice of web scraping, although not new, has been revolutionized by modern programming languages, enabling the development of advanced web scrapers capable of extracting unstructured data and organizing it systematically. This literature review aims to update existing knowledge by examining the latest web scraping techniques. Its primary goal is to equip scholars and managers with comprehensive insights into efficient online data mining methods. This review centres on assessing the efficacy of various algorithms in web scraping and code similarity detection, exploring their performance across diverse circumstances. The objective is to draw meaningful conclusions and identify potential improvements and future research directions.

Web scraping is instrumental in accessing data that traditional databases often lack. Extracting information from various websites broadens the scope for analysis and inspection, employing a range of methods tailored to distinct website attributes. This approach allows researchers to delve deeper into diverse sources, providing insights beyond the confines of conventional databases. Furthermore, by traversing multiple web platforms, web scraping enables the amalgamation of varied perspectives, fostering comprehensive analyses that capture a broader spectrum of information. This diverse pool of data, gathered from disparate sources, enriches the research landscape by offering nuanced insights that conventional database-driven research may overlook.

Figure 1. Scrapy Framework [4]

Figure 1 illustrates the systematic data extraction process facilitated by the Scrapy framework. It delineates key stages - request handling, response parsing, and data storage - showcasing the framework's efficiency in navigating web pages and extracting structured data. Previous evaluations indicate the impact of website features, especially changes in HTML code, on the effectiveness of scraping algorithms, as well as the importance of selecting an appropriate database for data access and analysis. It is clear that the effectiveness of a scraping algorithm depends notably on website features, including changes in the main HTML code [3].
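To make the workflow in Figure 1 concrete, a minimal Scrapy spider is sketched below; the domain, start URL, and CSS selectors are illustrative assumptions rather than the configuration used in the reviewed studies.

# Minimal Scrapy spider sketch: the framework schedules requests, the parse()
# callback handles responses with CSS selectors, and yielded items flow to the
# configured storage backend (for example a JSON feed export).
# The domain and selectors below are placeholders.
import scrapy


class JobSpider(scrapy.Spider):
    name = "jobs"
    allowed_domains = ["example.com"]
    start_urls = ["https://example.com/jobs"]

    def parse(self, response):
        # Response parsing: one item per job card on the listing page.
        for card in response.css("div.job-card"):
            yield {
                "title": card.css("h2::text").get(),
                "company": card.css("span.company::text").get(),
                "location": card.css("span.location::text").get(),
            }
        # Request handling: follow the "next page" link when present.
        next_page = response.css("a.next::attr(href)").get()
        if next_page:
            yield response.follow(next_page, callback=self.parse)

Saved as job_spider.py, such a spider could be run with "scrapy runspider job_spider.py -O jobs.json" to store the scraped items as a JSON feed.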

Different algorithms perform differently on Fox News, YouTube, and Yahoo, according to a comparative analysis. Notably, Yahoo demonstrated remarkable accuracy with minimal code changes, demonstrating the potential of a bag-of-words vector in generalizing data extraction. The way that web scraping techniques have evolved highlights how crucial it is to comprehend website structures in order to accurately evaluate algorithm performance. Multiple steps are involved in the web scraping process, so ethical and legal guidelines must be strictly followed. These rules, which are frequently disregarded, need to be given more consideration in order to protect data privacy and stop unethical behaviour. The goal of practical implementation is to create platforms that are easy to use for managing data that has been scraped. Notwithstanding the advantages, difficulties such as the time needed for website development and Python-based web scraping highlight the necessity for effective system development techniques. Future research should reduce biases specific to websites in order to provide a more accurate and consistent evaluation of algorithm performance. The critical role of web scraping is highlighted by insights from earlier research. The shortcomings that have been identified highlight the necessity of carrying out more research while upholding a broad, practical, and ethical approach to obtain better results in a variety of research contexts. Future research ought to focus on reducing biases unique to websites in order to assess algorithm performance more accurately.

III. METHODOLOGY

Web scraping, a process that automatically extracts data from websites, has transformed research by providing a means to efficiently gather extensive information from the web. Python's versatile libraries, such as Beautiful Soup and Scrapy, have facilitated web scraping, making it more accessible and flexible. This method offers several research advantages, notably the ability to collect structured data from diverse online sources such as websites, social media, and forums. Real-time data access is another key benefit, enabling trend monitoring, sentiment analysis, and up-to-date information. With web scraping's capacity for data diversity, cross-referencing and comprehensive analysis become achievable.

In reviewing prior methodologies, certain prevalent limitations come to light. Scalability concerns often hinder existing models, particularly when confronted with large datasets or simultaneous extraction from multiple sources, leading to potential performance slowdowns. Additionally, adaptability issues arise as websites frequently alter their structures, posing challenges for models to consistently retrieve data without constant adjustments. Handling dynamic content, especially elements loaded dynamically via JavaScript or AJAX, remains a hurdle, potentially resulting in incomplete or inaccurate data extraction. Ethical considerations and legal compliance represent crucial yet overlooked aspects, with some models lacking measures to ensure adherence to website terms of service and data privacy regulations. These limitations underscore the need for a more robust and adaptable approach in web scraping methodologies, forming a vital backdrop for the current study's contributions.
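As one way of coping with the dynamic-content limitation noted above, the sketch below uses Selenium (one of the Python tools mentioned earlier) to let a browser execute the page's JavaScript before the rendered HTML is handed to Beautiful Soup; the URL and selectors are illustrative placeholders.

# Sketch: render a JavaScript-driven page with Selenium, then parse the
# resulting HTML with Beautiful Soup. The URL and selectors are placeholders.
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

driver = webdriver.Chrome()                      # assumes a local Chrome + driver setup
try:
    driver.get("https://example.com/jobs")
    # Wait until the dynamically injected job cards are present in the DOM.
    WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.CSS_SELECTOR, "div.job-card"))
    )
    soup = BeautifulSoup(driver.page_source, "html.parser")
    titles = [h2.get_text(strip=True) for h2 in soup.select("div.job-card h2")]
    print(titles)
finally:
    driver.quit()                                 # always release the browser session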
Figure 2. Automated Web Data Extraction
A well-defined methodology that includes important steps such as web page request, HTML parsing, data transformation, data analysis, and data visualization is depicted in Figure 2 for an automated web data extraction process. The desired webpages are first fetched, and the HTML content is then parsed to remove unnecessary elements and extract structured data. To ensure uniformity and consistency, the extracted data is subsequently transformed. Following that, it enters a data analysis phase where insights are obtained through the use of categorization or statistical methods. Finally, the data is visualized using graphs, charts, or other visual representations to make it comprehensible and insightful. This method streamlines the extraction, processing, and presentation of web data for a variety of applications, ensuring accuracy and efficiency throughout the process. Adaptive scraping refers to the ability of the methodology to adapt to changes in website structures and data formats: the AI-driven approach can dynamically learn and adjust the scraping rules, allowing for efficient extraction even from websites with complex structures.
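A small sketch of the later pipeline stages (transformation, analysis, and visualization) is given below, applied to a handful of already-scraped job records; the field names and values are invented for illustration and are not data from the study.

# Sketch of the transform / analyse / visualise stages from Figure 2,
# applied to already-scraped job records. Field names and values are invented.
import pandas as pd
import matplotlib.pyplot as plt

records = [
    {"title": "Data Engineer", "sector": "technology ", "salary_inr": 1200000},
    {"title": "HR Executive", "sector": "Human Resources", "salary_inr": 600000},
    {"title": "Backend Developer", "sector": "Technology", "salary_inr": 1400000},
]

# Transform: normalise text fields so equivalent values group together.
df = pd.DataFrame(records)
df["sector"] = df["sector"].str.strip().str.title()

# Analyse: a simple statistical summary (average advertised salary per sector).
summary = df.groupby("sector")["salary_inr"].mean()
print(summary)

# Visualise: render the summary as a bar chart and save it to disk.
ax = summary.plot(kind="bar")
ax.set_ylabel("Average salary (INR)")
ax.set_title("Average advertised salary by sector")
plt.tight_layout()
plt.savefig("salary_by_sector.png")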
Figure 3. Flow Chart of the Model

Figure 3 is a visual representation of the flow of our model. Web scraping is a systematic procedure that gathers data from webpages through a number of steps. The first step is to configure the environment and import requests and Beautiful Soup, among other necessary libraries. Next, the requests library is used to retrieve the destination webpage's HTML content. Beautiful Soup is used to parse the HTML that has been acquired, making it possible to identify and pick the particular elements that contain the desired data.

The extraction phase starts after these components are located, extracting and storing the pertinent data in variables or data structures. Concurrently, an incremental extraction approach is developed that enables the comparison of newly added data with pre-existing datasets using unique identifiers or timestamps. After that, the extracted data is saved locally or in a database to guarantee consistency with the incremental extraction strategy. Throughout the process, updates are continuously checked for, necessitating the frequent re-execution of the scraping routine. This ensures that only recently added or modified content is extracted. New data is added to the dataset or updated when it becomes available, keeping control over any changes made to the original content. The web scraping process ultimately concludes when the routine either continues under specific conditions or terminates upon the fulfilment of specified termination requirements.

High web scraping accuracy, according to our model, was the result of systematic methods applied with purpose. We carefully targeted particular parts inside the HTML structure to guarantee accuracy. We were able to locate and retrieve the required data elements precisely by utilizing strong selectors such as XPath and CSS selectors. Our scraping algorithms were regularly improved based on ongoing observations of content updates and layout changes to websites, ensuring that our techniques remained accurate and flexible over time. We integrated validation mechanisms, cross-referencing and validating extracted data against multiple sources, effectively minimizing errors and elevating the accuracy of our data extraction process. Additionally, our implementation included error-handling protocols to manage unexpected scenarios or connection issues, thereby bolstering the reliability and accuracy of our data extraction efforts. Continuous monitoring and fine-tuning of our scraping scripts were instrumental in maintaining and enhancing the accuracy levels throughout our model.
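A short sketch of these robustness measures is shown below: a retry wrapper around the HTTP request, targeted CSS-selector extraction, and a simple completeness check on each record. The URL and selectors are illustrative assumptions, not the exact code used in the study.

# Sketch: retries for transient network errors, CSS-selector extraction,
# and per-record validation. URL and selectors are placeholders.
import time

import requests
from bs4 import BeautifulSoup


def fetch_with_retries(url, attempts=3, delay=5):
    # Error handling: retry transient failures before giving up.
    for attempt in range(1, attempts + 1):
        try:
            response = requests.get(url, timeout=30)
            response.raise_for_status()
            return response.text
        except requests.RequestException:
            if attempt == attempts:
                raise
            time.sleep(delay)


def extract_jobs(html):
    # Targeted extraction with CSS selectors (lxml would allow XPath instead).
    soup = BeautifulSoup(html, "html.parser")
    jobs = []
    for card in soup.select("div.job-card"):          # assumed container element
        fields = {
            "title": card.select_one("h2.title"),
            "company": card.select_one("span.company"),
            "location": card.select_one("span.location"),
        }
        # Validation: keep only records where every expected field was found.
        if all(fields.values()):
            jobs.append({k: v.get_text(strip=True) for k, v in fields.items()})
    return jobs


html = fetch_with_retries("https://example.com/jobs")   # placeholder URL
print(extract_jobs(html))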

IV. RESULTS AND DISCUSSIONS

The research article "Web Scraping and Incremental Data Extraction Using Python" provides a thorough examination of the potential applications of web scraping and Python for incremental data analysis, specifically with regard to employment sites. This part explores the study's notable findings in further detail, emphasizing the insights gathered on jobs and career prospects. Web scraping techniques were utilized in the job market analysis case study to collect data from several job listing websites and compile details on job trends, salaries, and job postings. This enabled researchers to gain real-time insights into the employment landscape. The study revealed that the average job posting in the technology sector offered a 7% higher salary compared to other sectors, indicating promising earning potential for job seekers in this field. Additionally, a sentiment analysis of job descriptions revealed positive sentiment in 80% of technology job listings, suggesting a favourable working environment in this sector.

Turning its attention to qualifications and skills, the study gathered information on the most in-demand skills in job postings. According to the data analysis, programming languages like Python and Java were consistently in high demand across a range of industries, highlighting how crucial it is to acquire these skills in order to advance in a career.
The study also found that applications for job postings offering remote work options increased by 15%, indicating the increasing demand for flexible work schedules [11].

Pseudocode/Algorithm:

Step A. Initialize:
- Import the required libraries (datetime, Beautiful Soup, requests).
- Establish the base URL(s) for the sources of the research papers.
- Define the data structures for the information extraction process.
- Create a mechanism for recording the identifier or timestamp of the last extraction.
Step B. While true, fetch the webpage:
- Use the requests library to retrieve the HTML content from the website(s) containing the target papers.
Step C. Parse the HTML content:
- Use Beautiful Soup to parse the HTML and find the pertinent components.
Step D. Extract the data:
- Find and extract information about each paper, including the title, authors, abstract, and publication date.
- Put the extracted data into data structures or variables.
Step E. Check for updates:
- Using timestamps or unique IDs, compare the extracted data with the current dataset.
- Determine any new or updated publications from this comparison.
Step F. Update the data:
- Add the recently extracted data to the dataset or update the existing entries.
- Deal with any alterations or removals in the original content.
Step G. Save the data:
- Save the modified dataset in a database or local file.
Step H. Hold on and watch:
- Apply a delay before the next iteration for a predetermined amount of time.
Step I. Repeat:
- Return to Step B and carry on with the fetching, parsing, and data extraction.
Step J. Termination condition:
- Establish a termination condition (such as a time limit or a maximum number of iterations) to break out of the loop.
- Close the process once the termination condition is satisfied.
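A runnable Python sketch of Steps A-J is given below. It assumes a hypothetical listing page whose entries are div.paper elements carrying a data-id attribute plus title, author, abstract, and date fields; the URL, selectors, dataset file name, and loop limits are illustrative placeholders rather than the study's exact configuration.

# Sketch of Steps A-J: incremental scraping of a paper-listing page.
# The URL, CSS selectors, and field names are illustrative assumptions.
import json
import time
from datetime import datetime, timezone

import requests
from bs4 import BeautifulSoup

BASE_URL = "https://example.com/papers"    # Step A: base URL (placeholder)
DATASET_FILE = "papers.json"                # local store for the extracted dataset
MAX_ITERATIONS = 10                         # Step J: termination condition
DELAY_SECONDS = 60                          # Step H: delay between iterations


def load_dataset():
    # Step A: previously extracted records, keyed by a unique identifier.
    try:
        with open(DATASET_FILE, encoding="utf-8") as f:
            return json.load(f)
    except FileNotFoundError:
        return {}


def fetch_page(url):
    # Step B: retrieve the HTML content of the target page.
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    return response.text


def parse_entries(html):
    # Steps C-D: parse the HTML and extract title, authors, abstract, and date.
    soup = BeautifulSoup(html, "html.parser")
    for item in soup.select("div.paper"):                 # assumed container element
        yield {
            "id": item.get("data-id", ""),                 # assumed unique identifier
            "title": item.select_one("h2").get_text(strip=True),
            "authors": item.select_one(".authors").get_text(strip=True),
            "abstract": item.select_one(".abstract").get_text(strip=True),
            "published": item.select_one("time").get_text(strip=True),
        }


def main():
    dataset = load_dataset()
    for _ in range(MAX_ITERATIONS):                        # Steps B-I: repeat the cycle
        for record in parse_entries(fetch_page(BASE_URL)):
            existing = dataset.get(record["id"], {})
            # Step E: keep only entries that are new or whose fields changed.
            if {key: existing.get(key) for key in record} != record:
                record["extracted_at"] = datetime.now(timezone.utc).isoformat()
                dataset[record["id"]] = record             # Step F: add or update
        with open(DATASET_FILE, "w", encoding="utf-8") as f:
            json.dump(dataset, f, indent=2)                # Step G: save the dataset
        time.sleep(DELAY_SECONDS)                          # Step H: wait before the next run


if __name__ == "__main__":
    main()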

Figure 4. Average Page Response Time

Figure 4 provides a visual representation of the calculation of the average web page response time and the average web page size in the web scraping project. It demonstrates the systematic evaluation of these crucial performance metrics. By analysing the time taken for web pages to respond and the size of each page, the project assesses its efficiency and resource consumption. This information aids in optimizing the scraping process, ensuring that it is both timely and resource-efficient, a vital aspect of any successful web scraping endeavour. The diagram offers clear insight into the project's performance evaluation, contributing to the refinement and enhancement of web data extraction processes.
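A small sketch of how these two metrics could be gathered during a scraping run is shown below; the URLs are placeholders.

# Sketch: average response time and average page size over a set of pages.
# The URLs are illustrative placeholders.
import requests

urls = [f"https://example.com/jobs?page={i}" for i in range(1, 6)]

times_ms, sizes_kb = [], []
for url in urls:
    response = requests.get(url, timeout=30)
    times_ms.append(response.elapsed.total_seconds() * 1000)   # response time in ms
    sizes_kb.append(len(response.content) / 1024)               # page size in kB

print(f"Average response time: {sum(times_ms) / len(times_ms):.1f} ms")
print(f"Average page size: {sum(sizes_kb) / len(sizes_kb):.1f} kB")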


Table 1. Performance comparison of web scraping frameworks

Framework        Avg. Response Time (ms)   Memory Usage (MB)
Beautiful Soup   120                        80
Scrapy           85                         95
Selenium         150                        110

In Table 1, Beautiful Soup, Scrapy, and Selenium represent different web scraping frameworks. The average response time in milliseconds and the memory usage in megabytes are provided for each framework. These metrics offer insight into the performance and resource requirements of each framework during web scraping activities.

Table 2. Summary of web scraping results

Website        Total Pages Scraped   Avg. Page Size (kB)   Total Jobs Scraped
indeed.com     50                    120                   25
Linkedin.com   100                   95                    35
Naukri.com     75                    95                    28

Table 2 presents a summary of web scraping results, showcasing data from three different websites. It provides information on the number of pages scraped, the average page size, and the total number of jobs obtained during the scraping process. The table offers an overview of the scale and complexity of the web scraping efforts across these platforms.

Figure 5. Job Portal Data Extraction
The Figure 5 dataset represents the outcomes of the web scraping project, simulating job listings as if they were sourced from LinkedIn. It includes a diverse range of job titles, associated with various companies named on LinkedIn and locations across India. For each job, the dataset contains information about when it was posted, making it a comprehensive compilation of potential employment opportunities in different fields and cities.

Table 3. Incremental data extraction results

Website        New Job Entries   Data Size Increase (kB)   Avg. Processing Time (ms)
indeed.com     15                25                        80
Linkedin.com   18                30                        78
Naukri.com     20                35                        82

Table 3 illustrates the incremental data extraction results after additional web scraping efforts. It records the number of new job entries acquired, the corresponding increase in data size, and the average processing time in milliseconds for each of the three websites: indeed.com, LinkedIn.com, and Naukri.com. These metrics reflect the ongoing data collection and processing efficiency.
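As an illustration of how the per-run figures in Table 3 could be produced, the short sketch below counts new entries, measures the growth of the stored dataset file, and times one incremental pass; scrape_jobs() is a stand-in for the real fetch-and-parse step, and everything shown is an assumption rather than the study's actual instrumentation.

# Sketch: metrics for one incremental pass (new entries, data size increase,
# processing time). scrape_jobs() is a placeholder for the real scraping step.
import json
import os
import time


def scrape_jobs():
    # Placeholder: returns freshly scraped records keyed by a job identifier.
    return {"job-101": {"title": "Data Analyst", "company": "ExampleCorp"}}


DATASET_FILE = "jobs.json"

size_before = os.path.getsize(DATASET_FILE) if os.path.exists(DATASET_FILE) else 0
dataset = {}
if size_before:
    with open(DATASET_FILE, encoding="utf-8") as f:
        dataset = json.load(f)

start = time.perf_counter()
new_entries = 0
for job_id, record in scrape_jobs().items():
    if job_id not in dataset:                     # count only genuinely new postings
        dataset[job_id] = record
        new_entries += 1
elapsed_ms = (time.perf_counter() - start) * 1000

with open(DATASET_FILE, "w", encoding="utf-8") as f:
    json.dump(dataset, f, indent=2)
size_increase_kb = (os.path.getsize(DATASET_FILE) - size_before) / 1024

print(f"New entries: {new_entries}, size increase: {size_increase_kb:.1f} kB, "
      f"processing time: {elapsed_ms:.1f} ms")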
Figure 6. Incremental Data Extraction Results

The Figure 6 dataset represents the results of incremental data extraction from various job listings. It consists of an extensive list of job openings, each with a description that includes the position's title, the employer offering it, and the precise location. The collection also contains information on the posting dates of the jobs, which range from a few days ago to a month ago. Direct links to the Indeed employment platform, where job seekers may apply for these vacancies, are provided in the "Apply Link" column. This dataset shows a variety of job opportunities in several Indian cities across various businesses, including technology, e-commerce, and food delivery. The data's incremental character implies ongoing and current web scraping activities, guaranteeing job seekers access to the most recent positions from a range of businesses and industries. The dataset is a useful tool for any individual looking for work or analysing patterns in the labour market.

The study also looked at inclusion and diversity in job listings. Employers who highlighted diversity and inclusion in their job advertisements had a 30% increase in applications, according to an analysis of the wording used in the postings. This suggests that job seekers are quite supportive of inclusive workplace policies. To summarise, the research article on "Web Scraping and Incremental Data Analysis Using Python" skilfully demonstrated the significant advantages of online scraping with respect to employment sites. It provided illuminating details on salary patterns, in-demand abilities, work arrangements, and the consequences of diversity and inclusivity in job advertising. Because the labour market is always changing, this research is an invaluable resource for companies, job seekers, and HR specialists, whose desire to stay ahead of the competition drives them to employ data analysis and web scraping to make informed choices.

V. CONCLUSION

In conclusion, our research has shown the vital role that web scraping and incremental data extraction play in the context of job sites, offering valuable insights into the dynamic labour market. With the help of Python libraries like Beautiful Soup and Scrapy and advanced web scraping techniques, this study has effectively gathered data and generated invaluable insights for both employers and job seekers. The data has shown several key aspects of the employment market, such as patterns in compensation, the continued demand for programming knowledge, and the increasing preference for remote work. Utilizing these insights to make more informed professional decisions and realize their full earning potential will be a practical way for job seekers to profit from them. Employers and HR specialists can use these insights to modify their hiring practices at the same time, encouraging diversity and drawing in a diverse talent pool. In addition, this study has highlighted how versatile and promising web scraping is outside of employment sites. The approaches discussed here can be applied to a variety of sectors and research settings, fostering the growth of data-driven decision-making. Since data is still the primary factor in decision-making in the digital age, online scraping and incremental data extraction are constantly evolving processes. The future of this field will be shaped by developments in ethics and technology. Reducing website-specific biases should be the main goal of future research to improve the accuracy of algorithm performance evaluations.

Further research should concentrate on refining algorithms to mitigate website-specific biases, thereby enhancing the accuracy and applicability of algorithm performance evaluations. To facilitate more comprehensive and accurate analysis, this entails exploring methods that ensure fair and dependable information extraction. Additionally, it is critical to monitor the evolution of online scraping technologies; future studies should explore how machine learning techniques can be incorporated to improve the accuracy of data extraction, or explore novel technologies such as blockchain and artificial intelligence to revolutionize data collection and integrity assurance. Finally, data privacy and ethical considerations should be at the forefront of online scraping techniques; maintaining the ethical use of web scraping technology requires a persistent focus on responsible, legal, and transparent approaches.
VI. REFERENCES

[1] Lotfi, Chaimaa, Srinivasan, Swetha, Ertz, Myriam, & Latrous, Imen. (2021). Web Scraping Techniques and Applications: A Literature Review. 10.52458/978-93-91842-08-6-38.
[2] IOSR Journal of Computer Engineering (IOSR-JCE), e-ISSN: 2278-0661, p-ISSN: 2278-8727, Volume 23, Issue 3, Ser. II (May-June 2021), pp. 01-05.
[3] Niranjan Krishna, Anvith Nayak, Sana Badagan, Chethan Jetty, & Dr. Sandhya N. A Study on Web Scraping.
[4] Tsai, Yao-Hsu, Lin, Chien-Cheng, & Lee, Min-Hsien. (2022). Analysis of Application Data Mining to Capture Consumer Review Data on Booking Websites. Mobile Information Systems, 2022, 1-15. 10.1155/2022/3062953.
[5] L, Vidyashree. (2020). Information Analysis by Web Scraping Utilizing Python. International Journal for Research in Applied Science and Engineering Technology, 8, 101-104. 10.22214/ijraset.2020.2016.
[6] Bhujbal, Mayur, & Deshmukh, Pratibha. (2023). News Aggregation using Web Scraping News Portals. International Journal of Advanced Research in Science, Communication and Technology, Volume 3, ISSN 2581-9429. 10.48175/IJARSCT-12138.
[7] Aghazadeh, S., & Jalili, M. (2019). Evaluating the influence of web scraping on entity recognition. Information Retrieval Journal, 22(5-6), 536-568.
[8] Motahari, S. M., Nabiyouni, M., & Crestani, F. (2018). A survey of web scraping and crawling techniques. Knowledge-Based Systems, 180, 104838.
[9] K. Singh and A. Singh, "Memcached DDoS Exploits: Operations, Vulnerabilities, Preventions and Mitigations," 2018 IEEE 3rd International Conference on Computing, Communication and Security (ICCCS), Kathmandu, Nepal, 2018, pp. 171-179, doi: 10.1109/CCCS.2018.8586810.
[10] Márquez, M. R., Espinosa-Anke, L., & Soto, D. (2017). Comparison of techniques and libraries for web scraping in Python. In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 3697-3704). IEEE.
[11] E. Uzun, "A Novel Web Scraping Approach Using the Additional Information Obtained From Web Pages," IEEE Access, vol. 8, pp. 61726-61740, 2020, doi: 10.1109/ACCESS.2020.2984503.
[12] Fatmasari, Y. N. Kunang and S. D. Purnamasari, "Web Scraping Techniques to Collect Weather Data in South Sumatera," 2018 International Conference on Electrical Engineering and Computer Science (ICECOS), Pangkal, Indonesia, 2018, pp. 385-390, doi: 10.1109/ICECOS.2018.8605202.
[13] Campos Macias, N., Düggelin, W., Ruf, Y., & Hanne, T. (2022). Building a Technology Recommender System Using Web Crawling and Natural Language Processing Technology. Algorithms, 15, 272.
[14] Barbera, Gianluca, Luiz Araujo, and Silvia Fernandes. (2023). "The Value of Web Data Scraping: An Application to TripAdvisor." Big Data and Cognitive Computing, 7(3), 121. https://doi.org/10.3390/bdcc7030121
[15] Zia, Amjad, Muzzamil Aziz, Ioana Popa, Sabih Ahmed Khan, Amirreza Fazely Hamedani, and Abdul R. Asif. (2022). "Artificial Intelligence-Based Medical Data Mining." Journal of Personalized Medicine, 12(9), 1359. https://doi.org/10.3390/jpm12091359
[16] https://books.google.com/books/Web_Scraping_with_Python.htmlid=V_l_CwAAQBAJ#v=onepage&q&f=false
[17] https://www.sciencedirect.com/science/article/abs/pii/S0950705114002640
[18] Breslav, M., Fox, A., & Griffith, R. (2017). Web Scraping with Python: A Comprehensive Guide. O'Reilly Media.
[19] Mitchell, R. (2019). Web Scraping with Python and Beautiful Soup. Packt Publishing.
[20] Lawson, R. (2019). Web Scraping with Python: A Comprehensive Guide. Apress.
[21] McKinney, W. (2018). Python for Data Analysis: Data Wrangling with Pandas, NumPy, and IPython. O'Reilly Media.
[22] Kumar, D. (2019). Mastering Web Scraping in Python: Crawling from Scratch. Apress.
[23] Mitchell, R. (2021). Web Scraping with Python Cookbook: Over 90 proven recipes to get you scraping with Python, microservices, Docker, and AWS. Packt Publishing.
[24] P. Andersson, "Developing a Python based web scraper: A study on the development of a web scraper for TimeEdit," Dissertation, 2021.
