
A

Project Report on
“WEB SCRAPING”

Supervisor: Mr. Dileep Kumar Agarwal (HOD)

Submitted by: Anshu Jangir
21ESGCS011

SOBHASARIA GROUP OF INSTITUTIONS, SIKAR


DEPARTMENT OF COMPUTER SCIENCE ENGINEERING
SESSION: 2024-2025

CERTIFICATE
This is to certify that the Major Project titled "Web Scraping using Python" has
been successfully presented by Anshu Jangir, bearing University Roll No.
21ESGCS011, in partial fulfillment of the requirements for the degree of
Bachelor of Engineering in Computer Science and Engineering of Bikaner Technical
University, Bikaner, during the academic year 2024-2025. The Major Project report
has been approved as it satisfies the academic requirements in respect of the
Major Project work for the said degree.

Mr. Dileep Agarwal


(HOD)
(Department of Computer Science & Engineering)

Sobhasaria Group of Institutions, Sikar

Place: Sikar
Date: 28/04/2025
DECLARATION
I, Anshu Jangir, a student of VIII Semester B.Tech. in Computer Science and
Engineering, Sobhasaria Group of Institutions, Sikar, hereby declare that the
project entitled

"Web Scraping using Python" has been carried out by me in partial
fulfillment of the requirements for the VIII Semester degree of Bachelor of
Engineering in Computer Science and Engineering of Bikaner Technical
University, Bikaner, during the academic year 2024-2025.

Anshu Jangir
B.Tech. (Computer Science & Engineering)
Roll No.: 21ESGCS011

Counter Signed By

Supervisor
Mr. Dileep Agarwal
HOD
(Department of Computer Science & Engineering)
Sobhasaria Group of Institutions, Sikar

ACKNOWLEDGEMENT

I wish to express my deep sense of gratitude towards my guide, Mr. Dileep Kumar
Agarwal (HOD, Computer Science & Engineering), Sobhasaria Group of Institutions,
Sikar, for his guidance and encouraging support, which were invaluable for the
completion of this work.

Words are inadequate in offering my thanks to Dr. L. Solanki (Principal),
Sobhasaria Group of Institutions, Sikar (Raj.), for their encouragement and cooperation
in carrying out the project "Web Scraping using Python".

I take immense pleasure in thanking all the faculty members, staff members and
colleagues for their valuable assistance in the Project.


Name : Anshu Jangir


Roll No.: 21ESGCS011

ABSTRACT

Web Scraping

Web scraping is an automated technique used to extract large amounts of data from
websites efficiently. It involves parsing HTML content, retrieving specific information, and
storing it in a structured format for further analysis. Web scraping is widely applied in fields
such as data science, market research, competitive analysis, and sentiment analysis.

Various tools and libraries, such as BeautifulSoup, Scrapy, and Selenium, facilitate web
scraping by enabling users to navigate, extract, and store data from dynamic and static web
pages. However, web scraping presents challenges such as anti-scraping mechanisms, legal
restrictions, and ethical concerns related to data privacy and website terms of service.
This paper explores the fundamentals of web scraping, its applications, popular tools,
challenges, and best practices to ensure ethical and legal compliance. The study aims to
provide insights into how web scraping can be effectively utilized for data-driven decision-
making while adhering to ethical guidelines.

TABLE OF CONTENTS

FIRST PAGE
CERTIFICATE
DECLARATION
ACKNOWLEDGEMENT
ABSTRACT
LIST OF FIGURES
INDEX

CHAPTER        TOPIC

Chapter I      INTRODUCTION TO WEB SCRAPING
               1.1 What is Web Scraping?
               1.2 Importance and Applications
Chapter II     HOW WEB SCRAPING WORKS
               2.1 Basic Workflow
               2.2 Web Page Structure (HTML, CSS)
Chapter III    TOOLS AND LIBRARIES
               3.1 BeautifulSoup
               3.2 Scrapy
               3.3 Selenium
               3.4 Other Popular Tools
Chapter IV     TECHNIQUES IN WEB SCRAPING
               4.1 Static vs Dynamic Scraping
               4.2 HTML Scraping
Chapter V      CHALLENGES IN WEB SCRAPING
               5.1 IP Blocking
               5.2 Changing Website Structure
Chapter VI     FUTURE OF WEB SCRAPING
               6.1 Trends and Innovation
               6.2 Web Scraping vs Web APIs
Chapter VII    CONCLUSION
Chapter VIII   REFERENCES

CHAPTER 1

INTRODUCTION TO WEB SCRAPING

1.1 What is Web Scraping?


Web scraping is an automated method used to extract large amounts of data from websites
quickly and efficiently. It enables users to collect information that is publicly available online
and use it for various purposes such as research, business analysis, and application
development.
Imagine you want a list of all smartphones available on Flipkart, along with their prices and
ratings.

Manually, you would:

 Open each product page.


 Copy the name, price, rating.
 Paste it into Excel.

But with web scraping, a Python script (like with BeautifulSoup or Scrapy) could:
 Automatically open each page.
 Pull the required information.
 Save hundreds of product records into a file in minutes.

Real-World Analogy:
 Manual Browsing = Walking door-to-door to collect information.
 Web Scraping = Sending a drone to fly over the entire neighborhood and instantly
collect all the information!

Key Points:

 Web scraping reads the webpage’s content (HTML code).


 It identifies useful information (like product prices, news headlines, contact details).
 It extracts that information.
 It saves the extracted data in a usable form.
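
For example, the whole read, identify, extract, and save cycle can be sketched in a few lines of Python. The URL and the "product-name" class below are illustrative assumptions, not a real site:

import csv
import requests
from bs4 import BeautifulSoup

# Hypothetical page and class names, used only to illustrate the idea
html = requests.get("https://example.com/products").text            # read the page's HTML
soup = BeautifulSoup(html, "html.parser")
names = [h.text.strip() for h in soup.find_all("h2", class_="product-name")]   # identify and extract

with open("products.csv", "w", newline="") as f:                    # save in a usable form
    csv.writer(f).writerows([[name] for name in names])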

1.2 Importance and Applications

Web scraping is essential in fields like data science, digital marketing, competitive analysis,
and machine learning. It allows organizations to gather insights, monitor prices, collect
reviews, track trends, and even build datasets for AI training.
Big Data Access:

Most valuable data (prices, reviews, stock info, weather updates, etc.) is available online, and
scraping helps to collect it easily.

Business Intelligence:

Companies monitor competitors' prices and offers by scraping their websites regularly.

Market Research:

Brands scrape customer feedback from sites like Amazon or TripAdvisor to analyze trends.
Academic Research:
Universities scrape online articles, journals, and forums to study public behavior or opinions

CHAPTER 2

HOW WEB SCRAPING WORKS

2.1 Basic Workflow

The basic steps in web scraping include:


 Sending a request to the web page's server. The scraper essentially asks the server: "Hey server, can you please give me the web page?" This is similar to how your browser (Chrome, Firefox) requests a page when you type a URL.
 Receiving the HTML content.
 Parsing the HTML to extract the required data.
 Storing the extracted data in a structured format like CSV, Excel, or a database.
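
As a small sketch of the first two steps (the URL below is a placeholder), the requests library can send the request and hand back the HTML response:

import requests

# Step 1: send an HTTP GET request to the page's server (placeholder URL)
response = requests.get("https://example.com")

# Step 2: receive the response; a status code of 200 means the server returned the page
print(response.status_code)
print(response.text[:200])   # first 200 characters of the HTML content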

2.2 Web Page Structure (HTML, CSS, JavaScript)

Web pages are primarily built using HTML and styled with CSS. JavaScript adds
interactivity. Understanding the Document Object Model (DOM) of a page is crucial for
locating the right elements to scrape.

 Send a Request to the website.


 Receive the Response (usually HTML).
 Parse the HTML to locate needed data.
 Extract the Data (using code or tools).
 Store the Data in a desired format (CSV, JSON, database).

[ Start Scraper ]

[ Send HTTP Request ]

[ Receive HTML Response ]

[ Parse HTML (DOM Tree) ]

[ Locate Data Elements ]

[ Extract Required Data ]

[ Store in File/Database ]

[ End Scraper ]

1. Basic Components of a Web Page

A typical web page consists of:


Component | Description
HTML | The backbone of the web page, providing the structure.
CSS (Cascading Style Sheets) | Defines the appearance (colors, fonts, layout) of the page.
JavaScript | Adds interactivity (dynamic content, buttons, animations).
Images/Videos | Multimedia content embedded into the page.
Links | Hyperlinks to other web pages or resources.

<!DOCTYPE html>
<html>
<head>
<title>Web Page Title</title>
<link rel="stylesheet" href="styles.css">
<script src="script.js"></script>
</head>
<body>
<header>
<h1>Welcome to My Website</h1>
</header>

<nav>
<!-- Navigation links -->
</nav>

<main>
<!-- Main content of the page -->
<section>
<h2>Section Title</h2>
<p>This is a paragraph of text.</p>
</section>
</main>

<footer>
<!-- Footer content -->
<p>Copyright © 2025</p>
</footer>
</body>
</html>

3. Important HTML Elements for Scraping

When scraping, you often target these elements:


Tag Purpose
<div> Generic container for content and layout
<p> Paragraph text
<h1>, <h2>, <h3> Headings, subheadings
<ul>, <ol>, <li> Lists
<a> Hyperlinks
<img> Images
<table>, <tr>, <td> Tabular data
<span> Inline text container
<form>, <input>, <button> Forms for user input
Scrapers often target these elements using libraries like BeautifulSoup or Scrapy.

4. Attributes in HTML

HTML tags often include attributes that give more information about elements.
Common attributes important for web scraping:
Attribute Purpose
id Unique identifier for an element.
class Groups multiple elements together for styling or scripting.
href URL linked to an <a> tag.
src Source URL of an image (<img>) or video.
alt Alternate text for images.
name Used in form elements.
data-* Custom data attributes often used in JavaScript frameworks.

5. Dynamic Web Pages

Some modern web pages don't have all the content in the initial HTML.
Instead, they load data dynamically using JavaScript (AJAX or API calls).
To handle this:

 You may need to use tools like Selenium, Playwright, or Puppeteer to render
JavaScript content.
 Alternatively, intercept XHR requests (network requests made by JavaScript) to
directly get JSON data.
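
A minimal sketch of the second approach, assuming a hypothetical JSON endpoint discovered in the browser's Network tab (the URL and field names are placeholders):

import requests

# Hypothetical JSON endpoint that the page's JavaScript calls for its data
url = "https://example.com/api/products?page=1"

response = requests.get(url, headers={"User-Agent": "Mozilla/5.0"})
data = response.json()   # the data arrives already structured, so no HTML parsing is needed

for item in data.get("products", []):   # assumed response layout
    print(item.get("name"), item.get("price"))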

HTML
├── HEAD
│ ├── TITLE
│ └── LINK
└── BODY
├── HEADER
│ └── H1
├── NAV
├── MAIN
│ ├── SECTION
│ │ ├── H2
│ │ └── P
└── FOOTER
└── P
<div class="product">
<h2 class="product-name">Smartphone X</h2>
<span class="product-price">$999</span>
</div>

<div class="product">
<h2 class="product-name">Laptop Pro</h2>
<span class="product-price">$1999</span>
</div>
from bs4 import BeautifulSoup

html_doc = """(HTML content above)"""

soup = BeautifulSoup(html_doc, 'html.parser')

products = soup.find_all('div', class_='product')

for product in products:
    name = product.find('h2', class_='product-name').text
    price = product.find('span', class_='product-price').text
    print(f"Product: {name}, Price: {price}")

Product: Smartphone X, Price: $999


Product: Laptop Pro, Price: $1999

CHAPTER 3

TOOLS AND LIBRARIES FOR WEB SCRAPING

3.1 BeautifulSoup

A Python library that makes it easy to navigate and search through HTML and XML
documents.
✅ Description:

 A simple, powerful Python library for parsing HTML and XML documents.
 It's used to search, navigate, and modify HTML trees easily.

✅ Key Features:

 Easy to learn for beginners.


 Handles broken HTML well.
 Integrates with requests to download web pages.

✅ Common Functions:
 find(): Find first occurrence.
 find_all(): Find all matching elements.
 .text: Extract text content
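
For instance, using a small inline HTML snippet, these functions work as follows:

from bs4 import BeautifulSoup

html = "<div><h1>Title</h1><p class='item'>First</p><p class='item'>Second</p></div>"
soup = BeautifulSoup(html, "html.parser")

print(soup.find("h1").text)                     # find(): first occurrence -> "Title"
for p in soup.find_all("p", class_="item"):     # find_all(): every matching element
    print(p.text)                               # .text: extract the text content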

✅ When to Use:

 Static pages with simple HTML structure.

✅ Pros:

 Lightweight and fast.


 Flexible.

✅ Cons:

 Not ideal for large-scale, complex crawling.

3.2 Scrapy

An open-source framework for web scraping that offers a powerful and fast way to build
scraping applications.

✅ Description:

 A full-blown scraping and crawling framework.


 Can handle complex projects involving multiple pages, APIs, or websites.

✅ Key Features:

 High-speed scraping.
 Built-in support for exporting data (CSV, JSON).
 Auto-following links (crawling).
 Middleware support (for proxies, user-agents).

✅ How Scrapy Works:

 You create a Spider (a class that tells Scrapy how to follow links and extract data).
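
A minimal Spider sketch is shown below; the start URL and CSS selectors are assumptions for illustration, not a real target:

import scrapy

class ProductSpider(scrapy.Spider):
    name = "products"
    start_urls = ["https://example.com/products"]   # placeholder start page

    def parse(self, response):
        # Extract fields from each product block (selectors are illustrative)
        for product in response.css("div.product"):
            yield {
                "name": product.css("h2.product-name::text").get(),
                "price": product.css("span.product-price::text").get(),
            }

        # Follow the "next page" link, if one exists (crawling)
        next_page = response.css("a.next::attr(href)").get()
        if next_page:
            yield response.follow(next_page, self.parse)

Saved as products_spider.py, such a spider could be run with "scrapy runspider products_spider.py -o products.json", which also demonstrates Scrapy's built-in export support.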
✅ When to Use:

 Big scraping projects, hundreds/thousands of pages.

✅ Pros:

 Super-fast.
 Built for scale.

✅ Cons:

 Steeper learning curve for beginners.

3.3 Selenium

Primarily used for web testing, Selenium can also scrape dynamic content that is rendered by
JavaScript.

✅ Description:
 Originally designed for automating browser testing.
 Now widely used for scraping dynamic, JavaScript-heavy websites.
✅ Key Features:
 Opens a real browser (Chrome, Firefox).
 Can click buttons, fill forms, scroll pages.
 Captures dynamic content (pop-ups, AJAX elements).

✅ When to Use:
 Websites with dynamic content (e.g., Infinite scrolling, SPAs).

✅ Pros:
 Full control over the browser.
 Can mimic human behavior.
✅ Cons:
 Slower compared to BeautifulSoup or Scrapy.

 Needs installation of WebDriver (e.g., chromedriver.exe)


3.4 lxml

✅ Description:

Fast library for parsing and manipulating XML and HTML.

✅ Key Features:

 Super fast.
 Supports XPath for complex queries.

✅ When to Use:

 When speed is a major concern.


 When needing complex data extraction via XPath.

✅ Pros:
 Extremely fast.
 Powerful queries.
✅ Cons:
 Slightly complex syntax for beginners.

3.5 Octoparse (No-Code Tool)


✅ Description:

 Visual web scraping tool. No coding needed.


 Point, click, extract data visually.

✅ Key Features:

 Handles login forms, infinite scrolls.


 Cloud-based option available.
 Export to Excel/Database easily.

✅ When to Use:

 For non-programmers.
 Fast setup for smaller projects.

✅ Pros:
 Easy drag-and-drop interface.
 Free and Paid versions.

✅ Cons:

 Less flexible for highly customized needs.

3.6 ParseHub (No-Code Tool)

✅ Description:

 Cloud-based visual scraping tool.


 Extracts data from dynamic websites.

✅ Key Features:

 Supports JavaScript rendering.


 Schedule scraping tasks.

✅ When to Use:

 Scraping modern websites without coding.

Other tools include Requests (for making HTTP requests), Puppeteer (for headless browser
automation), and Cheerio (for Node.js scraping).

CHAPTER 4

TECHNIQUES IN WEB SCRAPING

4.1 Static vs Dynamic Scraping

 Static Scraping: Collecting data from pages where content is loaded directly in the HTML.

 Dynamic Scraping: Scraping pages where content is generated dynamically using JavaScript.

1. Static Websites

Definition:
A static website is one that does not change its content dynamically (i.e., no JavaScript is
used to load or manipulate content). The content is served directly as HTML.
Technique Used:

 HTML Parsing: Extracting content directly from the HTML source code.
How It Works:

 The scraper sends an HTTP request to the server.


 The server responds with an HTML file.
 The scraper reads the file using tools like BeautifulSoup or lxml to extract required
data (e.g., product name, price, or any other text).

2. Dynamic Websites

Definition:

A dynamic website uses JavaScript to load or modify content. This type of website may
not serve all its data in the HTML response. Instead, the data may be loaded
asynchronously through APIs or JavaScript rendering.

Technique Used:

 Browser Automation using tools like Selenium or Puppeteer.


How It Works:
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

# Set Chrome options to run headless (no visible browser window)
options = Options()
options.add_argument("--headless")

# Launch headless browser
driver = webdriver.Chrome(options=options)
driver.get('https://example.com')

# Extract title after JS has rendered
title = driver.find_element("tag name", "h1").text
print(title)

driver.quit()

Infinite Scroll Scraping


Definition:
Infinite Scroll is a modern technique used on websites where more content is dynamically
loaded as the user scrolls down the page, without needing to click to the next page.
Challenge:
Scraping websites with infinite scroll can be tricky because the content is loaded via
JavaScript, and you need to simulate scrolling and wait for new data to load dynamically.

1. How Infinite Scroll Works


Infinite scroll typically loads content by sending AJAX requests to a server when the user
scrolls to the bottom of the page. The server responds with the next batch of data, which is
appended to the existing content.
You need to simulate scrolling (by triggering AJAX calls) and wait for the page to load new
data.

2. Techniques to Handle Infinite Scroll


To scrape an infinite scroll page, you have two primary options:
1. Simulate Scrolling with Selenium:
By using Selenium, you can simulate scrolling behavior, which will trigger the
dynamic loading of new content.
2. Direct AJAX Requests:
By inspecting the network requests in the browser and finding the URL that the
website uses to load additional content, you can directly request the AJAX data and
extract the content.

from selenium import webdriver
import time

# Initialize the Chrome WebDriver
driver = webdriver.Chrome()

# Open the page
driver.get('https://example.com/infinite-scroll')

# Scroll to the bottom of the page repeatedly until no new content loads
last_height = driver.execute_script("return document.body.scrollHeight")
while True:
    # Scroll down
    driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
    time.sleep(2)  # Wait for new content to load

    # Get the new height of the page
    new_height = driver.execute_script("return document.body.scrollHeight")

    if new_height == last_height:
        break  # Exit if no new content is loaded
    last_height = new_height

# Now you can scrape the content (e.g., products) from the page
content = driver.page_source
# Process content as needed using BeautifulSoup or similar

driver.quit()
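
The second option listed above, requesting the AJAX endpoint directly, avoids the browser entirely. A minimal sketch, assuming the browser's Network tab revealed a paginated JSON URL (the endpoint and field names are placeholders):

import requests

page = 1
all_items = []

while True:
    # Hypothetical endpoint the page calls each time the user scrolls
    response = requests.get(f"https://example.com/api/items?page={page}")
    batch = response.json().get("items", [])
    if not batch:          # an empty batch means there is no more content
        break
    all_items.extend(batch)
    page += 1

print(f"Collected {len(all_items)} items")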

CHAPTER 5

CHALLENGES IN WEB SCRAPING

5.1 IP Blocking

Repeated scraping can lead to IP bans. Solutions include using proxies or VPNs.

IP Blocking in Web Scraping

1. Anti-Scraping Measures
Websites employ various techniques to block or limit scraping activities. These
measures are designed to detect bots and prevent overloading of their servers.
Common Anti-Scraping Measures:
1. IP Blocking:
Websites may detect scraping activity and block the IP address that’s making too
many requests.
o Solution: Rotate IPs using proxies. Tools like ProxyMesh, ScraperAPI,
and Bright Data (formerly Luminati) offer proxy services to avoid IP
blocking.
2. CAPTCHAs:
Many websites implement CAPTCHA tests (e.g., Google’s reCAPTCHA) to
ensure that a real human is interacting with the site.
o Solution: Use CAPTCHA-solving services like 2Captcha or Anti-Captcha.
Alternatively, you can automate CAPTCHA-solving with Selenium.
3. Rate Limiting:
Websites often limit the number of requests a user can make in a certain period.
If you exceed this limit, the website may block further requests.
o Solution: Implement delays between requests to simulate human
browsing behavior and avoid triggering rate-limiting systems.
4. JavaScript Rendering:
Many websites use JavaScript to load content dynamically. Web scrapers that
don’t execute JavaScript can fail to retrieve the full page content.
o Solution: Use tools like Selenium or Puppeteer to handle JavaScript
rendering.
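
As a small sketch of the rate-limiting countermeasure from point 3, a scraper can send a browser-like User-Agent header and pause between requests (the URLs below are placeholders):

import time
import requests

headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"}   # browser-like header
urls = [f"https://example.com/page/{i}" for i in range(1, 6)]           # placeholder URLs

for url in urls:
    response = requests.get(url, headers=headers)
    print(url, response.status_code)
    time.sleep(2)   # pause between requests to imitate human browsing speed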

IP blocking is one of the most common anti-scraping measures that websites use to prevent
web scraping activities. When a website detects unusual traffic patterns or requests coming
from a specific IP address (such as too many requests in a short period), it may block that IP
to protect the site’s resources and prevent automated scraping.
Let's dive into what IP blocking is, how websites detect it, and the methods you can use to
avoid or bypass IP blocking.
How Do Websites Detect Scraping via IP Address?
Websites can detect scraping through various methods that analyze incoming traffic. Here are
some of the main ways they can detect and block scraping attempts:

1. Rate Limiting and Traffic Patterns


Websites can monitor the rate of requests from a particular IP address. If requests are
sent at a very high frequency, it may indicate scraping activity. Legitimate users
typically make requests at a much slower rate, whereas bots can send hundreds or
thousands of requests per minute.
2. Request Headers
Bots might send requests with certain HTTP headers that are different from those of
real browsers. For example, scrapers may not send User-Agent or Referer headers
that a regular browser would.

3. CAPTCHAs and Other Challenges


Some websites implement CAPTCHAs or other challenges when they detect
suspicious activity. If your scraper starts seeing CAPTCHAs frequently, it’s a sign that
your IP is being flagged.
4. Session and Cookie Tracking
Many websites use cookies and session tracking to identify users. A bot might not
store or properly handle cookies, which can lead to detection.
5. Geolocation and Bot Signatures
Some websites use geolocation techniques to detect requests from regions where they
don't expect traffic. They may also compare the IP address to known proxy or VPN
IPs to identify scraping attempts.
6. Content Abnormalities and Referrals
When scraping, the request might lack certain referring pages or have irregular
interactions (like loading all data at once without browsing). This is different from a
typical user experience and can trigger red flags.

How to Avoid or Bypass IP Blocking


If you are scraping a website and face IP blocking, there are several techniques you
can use to bypass or avoid the blocks.
1. Use Proxies
Proxies are intermediary servers that relay requests between your scraper and the
website, masking your real IP address. By rotating proxies, you can avoid triggering
blocks by using different IP addresses for each request.
 Rotating Proxies: Use a pool of proxies to switch between them for each request or
after a specific number of requests. This way, no single IP address sends too many
requests, which reduces the risk of being blocked.
 Residential Proxies: These are IP addresses assigned to real devices, making them
harder for websites to detect as proxies. Services like Luminati and Residential
Proxies provide such IPs.
 Data Center Proxies: These are typically cheaper but easier for websites to detect.
They may be flagged if the site is actively looking for proxies.
Example of using rotating proxies (Python with requests and ProxyMesh):
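A minimal sketch follows; the proxy gateways below are placeholders that a provider such as ProxyMesh would replace with real addresses and credentials:

import itertools
import requests

# Placeholder proxy gateways; a real pool would come from your proxy provider
proxy_pool = [
    "http://proxy1.example.com:8080",
    "http://proxy2.example.com:8080",
    "http://proxy3.example.com:8080",
]
proxy_cycle = itertools.cycle(proxy_pool)

urls = [f"https://example.com/page/{i}" for i in range(1, 4)]   # placeholder target pages

for url in urls:
    proxy = next(proxy_cycle)   # use a different proxy for each request
    response = requests.get(url, proxies={"http": proxy, "https": proxy}, timeout=10)
    print(url, "via", proxy, "->", response.status_code)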
2. Handling Dynamic Content (JavaScript)
Modern websites often rely heavily on JavaScript to dynamically load content. This makes it
challenging to extract data from static HTML, as the data you need might not even be
present in the initial HTML response.
How to Deal with Dynamic Content:
 Selenium/Puppeteer: Use these tools to automate a real browser to render the page
and allow JavaScript to execute.
 Network Requests: Inspect the network traffic in your browser's Developer Tools to
identify the API endpoints that the page calls to load data. You can then scrape data
directly from these APIs instead of scraping the entire HTML.
from selenium import webdriver

# Initialize Selenium WebDriver
driver = webdriver.Chrome()

# Open the dynamic webpage
driver.get('https://example.com/dynamic-content')

# Wait for JavaScript to render the content
driver.implicitly_wait(10)

# Now scrape the page
content = driver.page_source
# Process the page as you would with BeautifulSoup

driver.quit()
3. Dealing with Complex Website Structures
Some websites are complex and have intricate HTML structures, which makes it
difficult to extract data using regular expressions or simple parsing tools.
Challenges Include:
 Nested HTML Elements: Data might be buried deep within nested tags, which makes
it harder to locate.
 Inconsistent Tag Attributes: Sometimes, elements might have no consistent classes,
ids, or attributes, making it hard to identify data patterns.
 Multiple Formats of Data: The data might be spread across various elements like
tables, images, lists, and even JavaScript variables.
How to Overcome:
 Use tools like BeautifulSoup or lxml with XPath queries or CSS selectors to precisely
locate the data you need.
 Inspecting the structure: Analyze the DOM structure of the page carefully to
pinpoint patterns and find consistent ways to extract data.
 Use Regular Expressions: You can use Python’s re library to extract specific patterns
when the data is unstructured.

from lxml import html
import requests

url = 'https://example.com'
response = requests.get(url)
tree = html.fromstring(response.text)

# Extract data using XPath
titles = tree.xpath('//h2[@class="product-title"]/text()')
for title in titles:
    print(title)

4. Legal and Ethical Concerns


Legal Issues:
Scraping websites can sometimes lead to legal issues if you violate the site's terms of
service (ToS) or copyright laws. Many websites explicitly prohibit scraping in their ToS, and
scraping large amounts of data without permission can be considered data theft.
 Solution: Always read the website’s robots.txt file to check their scraping rules. Some
websites explicitly state that scraping is not allowed.
 Use the API whenever possible if the website offers one, as APIs are often provided
with terms for legal use.
 Respect the Terms of Service: If a website disallows scraping, avoid scraping their
content unless you have explicit permission.
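
As a quick aid for the robots.txt check mentioned above, Python's built-in robotparser can test whether a path is allowed before scraping it (the URL is a placeholder):

from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")   # placeholder site
rp.read()

# Check whether our crawler may fetch a given path before scraping it
print(rp.can_fetch("MyScraperBot", "https://example.com/products"))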
Ethical Concerns:
 Server Load: Scraping a website at a high frequency may cause server overload,
impacting the user experience of regular visitors.
 Data Privacy: Be mindful of scraping personal or sensitive information, as this could
raise privacy concerns.
How to Mitigate:
 Be respectful of the website's resources by scraping at a low frequency and
implementing delays between requests.
 Use API endpoints (if available) to access data without scraping the entire page.
 Always follow ethical guidelines and respect data privacy.

5. Handling Data Quality and Consistency


When scraping data, especially from diverse sources, you may face the challenge of ensuring the quality and consistency of the data.

Challenges:
 Missing Data: Some pages might have incomplete data or missing elements.
 Data Duplication: Multiple pages might contain the same data, leading to duplicates in your dataset.
 Inconsistent Data Formats: Different pages might format data inconsistently (e.g., dates, prices, addresses, etc.).

Solutions:
 Data Cleaning: Implement data cleaning techniques, such as:
   o Handling missing values (e.g., using default values or removing rows with missing data).
   o De-duplicating data by identifying and removing duplicates based on unique identifiers.
   o Standardizing formats (e.g., converting all dates to the same format).


import pandas as pd

# Load scraped data


data = pd.read_csv('scraped_data.csv')

# Remove duplicates
data = data.drop_duplicates()

# Fill missing values


data['price'].fillna(0, inplace=True)

# Standardize date format


data['date'] = pd.to_datetime(data['date'])
# Save cleaned data
data.to_csv('cleaned_data.csv', index=False)

6. Handling Large Volumes of Data


Web scraping might involve large volumes of data, especially when scraping multiple pages or websites. This can introduce several challenges:

 Memory Management: Storing large datasets in memory can cause your system to run out of resources.
 Performance Issues: Scraping large datasets can slow down the scraper, causing timeouts and failures.

Solutions:
 Save Incrementally: Instead of storing data in memory, save the data after scraping each page (e.g., in a CSV file or database).
 Use Databases: For large-scale scraping, consider saving data in a database (e.g., SQLite, PostgreSQL) to manage large datasets efficiently.
 Parallel Scraping: Use techniques like multithreading or multiprocessing to scrape multiple pages simultaneously (see the sketch after the database example below).


import sqlite3

# Connect to SQLite database


conn = sqlite3.connect('scraped_data.db')
c = conn.cursor()
# Create a table for the scraped data
c.execute('''CREATE TABLE IF NOT EXISTS products (name TEXT, price
TEXT)''')

# Insert data
data = [('Product1', '$10'), ('Product2', '$15')]
c.executemany('INSERT INTO products VALUES (?, ?)', data)

# Commit and close


conn.commit()
conn.close()
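
And a brief sketch of the parallel scraping point: a thread pool fetches several pages concurrently instead of one after another (the URLs are placeholders):

import requests
from concurrent.futures import ThreadPoolExecutor

urls = [f"https://example.com/page/{i}" for i in range(1, 11)]   # placeholder pages

def fetch(url):
    # Download one page and return its size in characters
    return url, len(requests.get(url).text)

# Fetch up to 5 pages at the same time
with ThreadPoolExecutor(max_workers=5) as pool:
    for url, size in pool.map(fetch, urls):
        print(url, size)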

5.2 Changing Website Structures

Scraping scripts may break if the website layout changes, requiring regular maintenance.
Changing Website Structure: A Challenge in Web Scraping
Web scraping involves extracting data from a website’s HTML structure. However, websites
are not static; they often undergo changes in their design, layout, or content. This can have a
significant impact on your scraping logic, as changes to the website’s structure may break
your scraping code or result in incorrect or missing data.
Understanding how website structure changes and how to handle these changes is an
important skill for anyone involved in web scraping.

1. Why Do Website Structures Change?

Website structures change for several reasons, including:

 Redesigns or UI/UX Updates: Websites frequently undergo visual and structural


redesigns to enhance the user experience, add new features, or improve
responsiveness.
 Backend Changes: Websites may update their backend infrastructure, which could
affect how data is displayed or structured in the HTML.

 Content Reorganization: A website might reorganize its content into new sections,
categories, or formats.

 Performance Optimization: Websites may change the structure to improve


performance, which could include lazy loading, AJAX content loading, or changing
the layout to a more dynamic design.

 Security Measures: Websites may change their structure to thwart scraping attempts,
such as adding new dynamic elements or CAPTCHA challenges

Summary of Web Scraping Challenges:

1. Anti-Scraping Measures: Websites block scrapers with IP blocking, CAPTCHAs,


and rate-limiting.

2. Dynamic Content: JavaScript and AJAX-driven content require tools like Selenium
for rendering.

3. Complex Website Structures: Scraping sites with intricate HTML or inconsistent


structures requires advanced parsing strategies.

4. Legal and Ethical Concerns: Always review terms of service and respect ethical
guidelines.

5. Data Quality and Consistency: Clean and standardize the scraped data.

6. Handling Large Volumes: Manage data efficiently using databases and parallel
scraping techniques.

CHAPTER 6

FUTURE OF WEB SCRAPING


Web scraping has evolved rapidly over the past few years and continues to be an essential
tool for data extraction across industries. With advancements in technology, increased legal
and ethical scrutiny, and the rise of new challenges, the future of web scraping looks both
promising and complex. Below, we explore the key trends, challenges, and opportunities that
will shape the future of web scraping.

6.1 Trends and Innovations in Web Scraping


Web scraping has become an indispensable tool in the data-driven world of today. From
market research and sentiment analysis to price monitoring and academic research, the
applications of web scraping are vast. As the web continues to evolve, so do the tools and
technologies used to extract data from websites. Let's explore some of the emerging trends
and innovations that are shaping the future of web scraping.

1. Use of Artificial Intelligence and Machine Learning

a. AI-Powered Scraping

One of the most significant innovations in web scraping is the integration of artificial
intelligence (AI) and machine learning (ML) techniques. Traditional scraping methods are
rule-based and depend heavily on the structure of the web page. However, AI and ML are
making scrapers more intelligent by enabling them to adapt to changes in a website's
structure and layout.

 Automated Adaptation: AI algorithms can help scrapers automatically adjust to


changing website structures, layouts, and design. For example, if a website changes its
class names or reorganizes its content, an AI-powered scraper can analyze the new
structure and modify its scraping logic accordingly.

 Predictive Scraping: AI can also be used to predict which elements of a page are
likely to contain the data of interest. This predictive approach reduces the need for
hard-coded rules and improves the scraper's adaptability to new or changing websites.

 Natural Language Processing (NLP): NLP allows web scrapers to process and
understand unstructured text data, such as news articles, reviews, and social media
posts. This capability opens up new possibilities for scraping textual content and
extracting valuable insights, like sentiment analysis or trend tracking.
2. Headless Browsers and Real-Time Rendering

a. Headless Browser Scraping


As websites become more interactive and dynamic, traditional methods of scraping static
HTML no longer suffice. The shift towards JavaScript-heavy websites has driven the use of
headless browsers for scraping. These browsers do not have a graphical user interface (GUI)
and run in the background, enabling faster and more efficient scraping of dynamic content.
 Tools like Puppeteer, Selenium, and Playwright allow scrapers to simulate real user
behavior by rendering the JavaScript and capturing dynamic content. This is
especially useful for websites that load content using AJAX, lazy loading, or
WebSockets.

 Faster Crawling and Interaction: Headless browsers enable scraping of websites


that rely heavily on client-side rendering (JavaScript). For instance, scrapers can
interact with web pages by clicking buttons, filling out forms, or navigating through a
multi-step process to extract data that is loaded in stages.

3. Cloud Scraping and Distributed Systems


a. Cloud-Based Web Scraping

Web scraping at scale requires robust infrastructure to handle high volumes of requests and
data processing. This has led to the rise of cloud-based scraping solutions, where users can
run their scrapers on cloud platforms like Amazon Web Services (AWS), Google Cloud
Platform (GCP), and Microsoft Azure.

 Scalability and Cost Efficiency: Cloud-based scraping allows businesses to scale


their scraping operations as needed. Instead of running scrapers on local machines,
which can be limited by hardware resources, users can leverage cloud computing
power to handle larger datasets and more complex scraping tasks.
 Data Redundancy and Fault Tolerance: Cloud platforms offer features like data
redundancy, which ensures that scraped data is safely stored across multiple servers.
These platforms also offer fault-tolerant systems, ensuring minimal downtime in case
of server failures.
b. Distributed Web Scraping

For large-scale scraping tasks that require processing vast amounts of data across multiple
websites, distributed web scraping can be used. This involves spreading the scraping
workload across multiple machines or nodes to optimize processing time and improve
reliability.
 Parallel Scraping: Distributed scraping frameworks allow multiple instances of
scrapers to run simultaneously, speeding up the data collection process. For instance,
if you're scraping information from thousands of product pages across various e-
commerce websites, distributed scraping enables each machine to handle a subset of
the scraping tasks in parallel.

 Managing Load and IP Rotation: Distributed scraping also helps manage the load
on a website’s server. By distributing requests across multiple IPs, the scraper can
reduce the risk of getting blocked by the website. This technique, combined with
proxy rotation (using residential proxies or VPNs), ensures that scraping occurs
without detection or interruption.

4. Proxy Networks and IP Rotation

a. Enhanced Proxy Networks


As websites continue to implement anti-scraping technologies, one of the most effective
ways to bypass restrictions is through proxy networks. Proxy servers mask the scraper’s IP
address, making it appear as though requests are coming from different locations.
 Residential Proxies: These proxies use real IP addresses assigned to residential
devices, which are harder to detect than data center proxies. They are becoming
more popular for scraping tasks that require high levels of anonymity.

 Rotating Proxies: Scrapers can now automatically rotate through a pool of IP


addresses to avoid detection and blocking. Proxies can be rotated every time a new
request is made, ensuring that each request appears to come from a unique IP
address.

 Smart Proxy Services: New proxy services are emerging that automatically rotate
proxies based on real-time analysis of anti-scraping defenses. These services can
adjust proxy rotation based on how a website is reacting to scraping activity, ensuring
that the scraper remains undetected.
5. Real-Time Data Extraction and Streaming

a. Real-Time Web Scraping

Real-time web scraping is gaining popularity as businesses increasingly need up-to-the-minute data for decision-making. This trend is driven by the need for real-time information in fields like financial analysis, market research, and social media monitoring.

 Webhooks and Streaming: Some platforms are adopting webhooks or real-time


APIs to allow instant updates when new data becomes available on websites. This
innovation allows scraping tools to instantly capture and process the latest content
without polling the website repeatedly.

 Real-Time Price Monitoring: E-commerce businesses and marketplaces increasingly


rely on real-time data feeds to track competitors’ prices, product availability, and
promotions. Real-time scraping tools can fetch updated prices and inventory data as
soon as it changes, allowing businesses to adjust their strategies accordingly.

6. Web Scraping as a Service (SaaS)

a. Turnkey Web Scraping Solutions

The rise of web scraping as a service (SaaS) platforms has democratized the use of web
scraping, enabling non-technical users to access powerful scraping tools with minimal setup.
These services provide pre-built scrapers that can be customized for specific needs without
writing any code.
 No-Code Platforms: Many SaaS providers are offering no-code scraping solutions
that enable users to point-and-click to configure scraping tasks. These platforms
typically provide an intuitive user interface (UI) for selecting which data elements to
extract, where to store the data, and how often to run the scraper.
 Scalable SaaS Solutions: Some web scraping SaaS platforms are tailored for large-
scale operations and can handle multiple users, manage proxy rotation, and store large
amounts of data in a cloud-based infrastructure. These platforms can serve a variety of
industries, including finance, e-commerce, and travel.

7. Ethical Scraping and Compliance with Regulations


a. Ethical Web Scraping Practices

As data privacy concerns grow and companies take a more proactive stance on website
security, ethical considerations will become increasingly important. Scraping practices that
respect user privacy and legal boundaries will be in demand.

 Compliance with GDPR: With the implementation of General Data Protection


Regulation (GDPR) in the EU and similar regulations around the world, web
scrapers must be mindful of the data they are collecting. Scrapers should avoid
collecting personal data unless explicitly allowed and should implement data
anonymization where appropriate.

 Respecting Terms of Service: Many websites prohibit scraping in their terms of


service. While not always enforceable, it’s crucial for scrapers to be aware of these
terms and consider ethical implications before scraping a website. Future innovations
in scraping tools may involve better integration with robots.txt files, giving scrapers a
way to comply with website owner guidelines automatically.

8. AI-Powered Content Extraction

a. Advanced Data Extraction

AI and machine learning will continue to innovate content extraction, making it easier to
extract data from diverse formats like images, PDFs, and videos. By leveraging image
recognition or optical character recognition (OCR), AI can help scrape data that is
embedded in non-text formats.
 Image Recognition: Tools that combine AI with image recognition technologies will
allow scrapers to analyze images and extract useful information, such as product
images, graphs, or text embedded in screenshots.
 Video and Audio Scraping: Future web scraping tools may incorporate the ability to
extract information from multimedia content such as video and audio. This will be
especially useful for scraping social media platforms like YouTube, where a lot of
valuable content is in video format.

9. Integration with Big Data and Analytics Platforms


a. Seamless Integration with BI Tools
As more businesses embrace data-driven decision-making, the need for integrating scraped
data into business intelligence (BI) tools and data analytics platforms will increase. Future
web scraping solutions will offer direct integrations with platforms like Tableau, Power BI,
and Google Data Studio to provide seamless data pipelines.
 Automated Data Processing: Data collected through scraping can be automatically
processed and fed into data lakes or BI dashboards for real-time analysis, making it
easier for businesses to monitor key metrics and track market trends.

6.2 Web Scraping vs Web APIs

Web APIs are becoming more common, offering a more structured and legal way to access data.

In the digital world where data is the new oil, accessing and utilizing information from the
web has become critical for businesses, researchers, and developers. Two major methods of
extracting data from websites are Web Scraping and Web APIs. Although both techniques
aim to gather data, they operate in fundamentally different ways and serve different purposes.
This section provides a deep dive into Web Scraping and Web APIs, comparing them on
various parameters including functionality, ease of use, reliability, scalability, legality, and
more.

1. What is Web Scraping?


Web Scraping is a technique where a program automatically extracts data from the front end
of websites. Scrapers mimic human browsing to collect information displayed publicly on a
web page, parse the HTML code, and retrieve the required content.
 Example: A web scraper might visit an e-commerce website, extract the names,
prices, and ratings of products, and save them into a database.
 Common Tools: BeautifulSoup, Scrapy, Selenium, Puppeteer.
Key Characteristics:
 Collects data by navigating web pages and parsing HTML structure.
 May require handling JavaScript-rendered content using headless browsers.
 Susceptible to changes in website design or structure.

2. What is a Web API?


Web APIs (Application Programming Interfaces) are structured interfaces
provided by websites or services that allow programs to access their data in a
systematic, secure, and often authorized manner. APIs are meant to be
consumed by other programs, providing data in formats like JSON or XML
without rendering a website.
 Example: Twitter offers APIs that developers can use to retrieve tweets,
user profiles, and trending topics directly without visiting the Twitter
website.
 Common Tools: Postman, cURL, programming libraries like requests in
Python.
Key Characteristics:
 Provides structured data directly from the server.
 Requires authentication (such as API keys, OAuth).
 More stable and reliable than scraping.

3. Advantages and Disadvantages

Advantages of Web Scraping


 Access to any data visible on the website, even if no API exists.

 Complete flexibility to choose what data to scrape.

 Useful when data is not officially available via API.

Disadvantages of Web Scraping


 Fragile and error-prone if website structure changes.

 Can be legally risky if it violates Terms of Service.

 More resource-intensive (needs headless browsers, captcha

solving, etc.).
Advantages of Web APIs
 Reliable and stable source of structured data.

 Legal and often supported by the data provider.

 Easier to integrate into applications.

 Faster and more efficient.

Disadvantages of Web APIs


 Limited to what the API provider offers.

 Often comes with usage restrictions (rate limits, quotas).

 May involve cost for higher usage tiers.


4. Key Differences Between Web Scraping and Web APIs

Feature | Web Scraping | Web APIs
Source | Extracts data from web pages (HTML). | Fetches data from a server endpoint (JSON/XML).
Ease of Use | Can be complex if the website structure is dynamic. | Usually easier, especially with good documentation.
Reliability | Low; prone to break if the site layout changes. | High; APIs are maintained with versioning and updates.
Rate Limits | Websites may block excessive requests. | APIs have defined rate limits per user or app.
Authentication | Usually not required for public pages. | Often requires API keys or OAuth tokens.
Legal and Ethical Concerns | Scraping may violate Terms of Service. | API usage is typically within legal bounds if used according to terms.
Speed | Slower due to page loading and parsing. | Faster, as it directly fetches structured data.
Control Over Data | High; can scrape any visible data. | Limited to data endpoints exposed by the API.

5. When to Use Web Scraping?

You might prefer Web Scraping when:

 No official API is available.


 You need very specific data not exposed by the site's API.
 You are working on projects like price comparison tools, news
aggregators, or competitor analysis where public information is
needed.

6. When to Use Web APIs?

You should use Web APIs when:


 The website offers an API for the required data.

 You need a stable and legal data source.

 You want to build scalable and maintainable applications.


 The project requires real-time data retrieval without parsing
HTML.

7. Real-World Examples
 Web Scraping Example: Scraping real estate listings from a

property website without an official API, including property


images, prices, and descriptions.
 Web API Example: Using the OpenWeatherMap API to get real-

time weather forecasts in a mobile app.

8. Hybrid Approach: Combining Web Scraping and APIs


Sometimes, the best solution is a hybrid one:
 Use APIs where available for efficient data retrieval.

 Fall back on scraping for additional data points not covered by

the API.

This approach maximizes data coverage while minimizing the


technical and legal risks associated with scraping entire websites.

9. Legal and Ethical Considerations


 Always check a website’s Terms of Service before scraping.

 If scraping is against the site's policy, prefer requesting data

through official APIs.


 Respect robots.txt files and ethical scraping practices.

 Consider using rate limiting, IP rotation, and user-agent

headers to minimize the impact on target servers.


CHAPTER 7

CONCLUSION

Web scraping is a powerful tool for data collection and analysis.


When done ethically and responsibly, it can open new possibilities in
business, research, and technology development.
Both web scraping and web APIs are powerful methods for data
extraction, each with its own advantages and limitations.
Understanding when and how to use them is crucial for building
reliable and efficient data-driven applications. In general:
 Prefer APIs for structured, reliable, and legal access to data.

 Use scraping when APIs are unavailable or insufficient for your

needs.
Choosing the right approach depends on your specific project goals,
technical constraints, and ethical considerations.
# Current weather for a fixed city using the wttr.in JSON API
import requests

def current_location_temp():
    # Location (city name)
    city = "jaipur"
    url = f"https://wttr.in/{city}?format=j1"

    # Get weather data
    response = requests.get(url)

    if response.status_code == 200:
        data = response.json()

        current_temp = data["current_condition"][0]["temp_C"]
        weather_desc = data["current_condition"][0]["weatherDesc"][0]["value"]
        humidity = data["current_condition"][0]["humidity"]
        feels_like = data["current_condition"][0]["FeelsLikeC"]

        print(f"Weather in {city.capitalize()}:")
        print(f"Temperature: {current_temp}°C")
        print(f"Feels Like: {feels_like}°C")
        print(f"Condition: {weather_desc}")
        print(f"Humidity: {humidity}%")
    else:
        print("Failed to fetch weather data.")
import requests

city = "jaipur"
url = f"https://wttr.in/{city}?format=j1"

response = requests.get(url)

if response.status_code == 200:
    data = response.json()

    print(f"3-Day Weather Forecast for {city.capitalize()}:\n")

    weather_data = data['weather']  # List of 3 days

    for day in weather_data:
        date = day['date']
        maxtemp = day['maxtempC']
        mintemp = day['mintempC']
        condition = day['hourly'][4]['weatherDesc'][0]['value']  # Around noon

        print(f"📅 {date}")
        print(f"   Max: {maxtemp}°C | Min: {mintemp}°C")
        print(f"   Condition: {condition}\n")
else:
    print("Could not fetch weather data.")

3-Day Weather Forecast for Jaipur:

📅 2025-04-20
Max: 40°C | Min: 26°C
Condition: Sunny

📅 2025-04-21
Max: 39°C | Min: 25°C
Condition: Sunny

📅 2025-04-22
Max: 40°C | Min: 25°C
Condition: Sunny

# A command-line weather app for any city
import requests

def get_weather(city):
    url = f"https://wttr.in/{city}?format=j1"
    try:
        response = requests.get(url)
        if response.status_code == 200:
            data = response.json()

            current = data["current_condition"][0]
            temp = current["temp_C"]
            feels_like = current["FeelsLikeC"]
            desc = current["weatherDesc"][0]["value"]
            humidity = current["humidity"]
            wind = current["windspeedKmph"]

            print(f"\n📍 Weather in {city.capitalize()}")
            print(f"   Temperature : {temp}°C")
            print(f"🤗 Feels Like  : {feels_like}°C")
            print(f"   Condition   : {desc}")
            print(f"💧 Humidity    : {humidity}%")
            print(f"💨 Wind Speed  : {wind} km/h\n")
        else:
            print("Failed to retrieve data.")
    except Exception as e:
        print("Something went wrong:", e)

# Entry point
if __name__ == "__main__":
    print("Command-Line Weather App")
    city = input("Enter city name: ")
    get_weather(city)

Command-Line Weather App


Enter city name: udaipur

📍 Weather in Udaipur
Temperature : 38°C
🤗 Feels Like : 36°C
Condition : Sunny
💧 Humidity : 15%
💨 Wind Speed : 10 km/h

# pip install python-telegram-bot requests

from telegram import Update
from telegram.ext import ApplicationBuilder, CommandHandler, MessageHandler, filters, ContextTypes
import requests

BOT_TOKEN = "YOUR_BOT_TOKEN"  # Paste your token here

# Weather fetch function
def get_weather(city):
    url = f"https://wttr.in/{city}?format=j1"
    try:
        response = requests.get(url)
        if response.status_code == 200:
            data = response.json()
            current = data["current_condition"][0]
            temp = current["temp_C"]
            feels_like = current["FeelsLikeC"]
            desc = current["weatherDesc"][0]["value"]
            humidity = current["humidity"]
            wind = current["windspeedKmph"]
            return (f"📍 Weather in {city.capitalize()}:\n"
                    f"   Temp: {temp}°C\n"
                    f"🤗 Feels Like: {feels_like}°C\n"
                    f"   Condition: {desc}\n"
                    f"💧 Humidity: {humidity}%\n"
                    f"💨 Wind: {wind} km/h")
        else:
            return "Failed to get weather info."
    except Exception as e:
        return f"Error: {e}"

# Start command
async def start(update: Update, context: ContextTypes.DEFAULT_TYPE):
    await update.message.reply_text("👋 Hello! Send me a city name and I'll give you the current weather.")

# Message handler
async def handle_message(update: Update, context: ContextTypes.DEFAULT_TYPE):
    city = update.message.text.strip()
    weather = get_weather(city)
    await update.message.reply_text(weather)

# Main bot function
def main():
    app = ApplicationBuilder().token(BOT_TOKEN).build()

    app.add_handler(CommandHandler("start", start))
    app.add_handler(MessageHandler(filters.TEXT & ~filters.COMMAND, handle_message))

    print("Bot is running...")
    app.run_polling()  # blocking call; manages its own event loop

# Run
if __name__ == '__main__':
    main()
import requests
import tkinter as tk
from tkinter import messagebox

def get_weather():
    city = city_entry.get().strip()
    if not city:
        messagebox.showwarning("Input Error", "Please enter a city name.")
        return

    url = f"https://wttr.in/{city}?format=j1"

    try:
        response = requests.get(url)
        response.raise_for_status()
        data = response.json()
        current = data["current_condition"][0]

        temp = int(current["temp_C"])
        feels_like = current["FeelsLikeC"]
        condition = current["weatherDesc"][0]["value"]
        humidity = current["humidity"]

        # Update GUI
        result_label.config(text=(
            f"Temperature: {temp}°C\n"
            f"Feels Like: {feels_like}°C\n"
            f"Condition: {condition}\n"
            f"Humidity: {humidity}%"
        ))

        # Alerts
        if temp > 35:
            messagebox.showinfo("Weather Alert", "🔥 It's too hot outside!")
        elif temp < 10:
            messagebox.showinfo("Weather Alert", "❄️ It's too cold outside!")
        if "rain" in condition.lower():
            messagebox.showinfo("Weather Alert", "It's rainy. Don't forget your umbrella!")

    except requests.exceptions.RequestException as e:
        messagebox.showerror("Network Error", f"Could not fetch weather data.\n{e}")
    except (KeyError, IndexError):
        messagebox.showerror("Data Error", "Unexpected data format received.")

# GUI setup
root = tk.Tk()
root.title("Weather Checker")
root.geometry("300x250")
root.resizable(False, False)

tk.Label(root, text="Enter City:", font=("Arial", 12)).pack(pady=10)

city_entry = tk.Entry(root, font=("Arial", 12))
city_entry.pack(pady=5)

tk.Button(root, text="Get Weather", command=get_weather, font=("Arial", 12)).pack(pady=10)

result_label = tk.Label(root, text="", font=("Arial", 11), justify="left")
result_label.pack(pady=10)

root.mainloop()

import requests
import tkinter as tk
from tkinter import messagebox
from datetime import datetime

def get_weather():
    city = city_entry.get().strip()
    if not city:
        messagebox.showwarning("Input Error", "Please enter a city name.")
        return

    url = f"https://wttr.in/{city}?format=j1"

    try:
        response = requests.get(url)
        response.raise_for_status()
        data = response.json()
        current = data["current_condition"][0]

        temp = int(current["temp_C"])
        feels_like = current["FeelsLikeC"]
        condition = current["weatherDesc"][0]["value"]
        humidity = current["humidity"]

        # Display in GUI
        result = (
            f"Temperature: {temp}°C\n"
            f"Feels Like: {feels_like}°C\n"
            f"Condition: {condition}\n"
            f"Humidity: {humidity}%"
        )
        result_label.config(text=result)

        # Alerts
        if temp > 35:
            messagebox.showinfo("Weather Alert", "🔥 It's too hot outside!")
        elif temp < 10:
            messagebox.showinfo("Weather Alert", "❄️ It's too cold outside!")
        if "rain" in condition.lower():
            messagebox.showinfo("Weather Alert", "It's rainy. Don't forget your umbrella!")

        # Save to log
        log_entry = (
            f"{datetime.now().strftime('%Y-%m-%d %H:%M:%S')} | "
            f"City: {city.title()} | Temp: {temp}°C | Feels Like: {feels_like}°C | "
            f"Condition: {condition} | Humidity: {humidity}%\n"
        )

        with open("weather_log.txt", "a") as log_file:
            log_file.write(log_entry)

    except requests.exceptions.RequestException as e:
        messagebox.showerror("Network Error", f"Could not fetch weather data.\n{e}")
    except (KeyError, IndexError):
        messagebox.showerror("Data Error", "Unexpected data format received.")

# GUI setup
root = tk.Tk()
root.title("Weather Checker")
root.geometry("300x260")
root.resizable(False, False)

tk.Label(root, text="Enter City:", font=("Arial", 12)).pack(pady=10)

city_entry = tk.Entry(root, font=("Arial", 12))
city_entry.pack(pady=5)

tk.Button(root, text="Get Weather", command=get_weather, font=("Arial", 12)).pack(pady=10)

result_label = tk.Label(root, text="", font=("Arial", 11), justify="left")
result_label.pack(pady=10)

root.mainloop()

import requests
import tkinter as tk
from tkinter import messagebox, scrolledtext
from datetime import datetime

def get_weather():
    city = city_entry.get().strip()
    if not city:
        messagebox.showwarning("Input Error", "Please enter a city name.")
        return

    url = f"https://wttr.in/{city}?format=j1"

    try:
        response = requests.get(url)
        response.raise_for_status()
        data = response.json()
        current = data["current_condition"][0]

        temp = int(current["temp_C"])
        feels_like = current["FeelsLikeC"]
        condition = current["weatherDesc"][0]["value"]
        humidity = current["humidity"]

        # Display in GUI
        result = (
            f"Temperature: {temp}°C\n"
            f"Feels Like: {feels_like}°C\n"
            f"Condition: {condition}\n"
            f"Humidity: {humidity}%"
        )
        result_label.config(text=result)

        # Alerts
        if temp > 35:
            messagebox.showinfo("Weather Alert", "🔥 It's too hot outside!")
        elif temp < 10:
            messagebox.showinfo("Weather Alert", "❄️ It's too cold outside!")
        if "rain" in condition.lower():
            messagebox.showinfo("Weather Alert", "It's rainy. Don't forget your umbrella!")

        # Save to log
        log_entry = (
            f"{datetime.now().strftime('%Y-%m-%d %H:%M:%S')} | "
            f"City: {city.title()} | Temp: {temp}°C | Feels Like: {feels_like}°C | "
            f"Condition: {condition} | Humidity: {humidity}%\n"
        )
        with open("weather_log.txt", "a") as log_file:
            log_file.write(log_entry)

    except requests.exceptions.RequestException as e:
        messagebox.showerror("Network Error", f"Could not fetch weather data.\n{e}")
    except (KeyError, IndexError):
        messagebox.showerror("Data Error", "Unexpected data format received.")

def view_history():
    try:
        with open("weather_log.txt", "r") as log_file:
            log_data = log_file.read()
    except FileNotFoundError:
        log_data = "No weather history found."

    history_window = tk.Toplevel(root)
    history_window.title("Weather History")
    history_window.geometry("500x400")

    text_area = scrolledtext.ScrolledText(history_window, wrap=tk.WORD, font=("Arial", 10))
    text_area.pack(fill=tk.BOTH, expand=True)
    text_area.insert(tk.END, log_data)
    text_area.config(state=tk.DISABLED)  # make read-only

# GUI setup
root = tk.Tk()
root.title("Weather Checker")
root.geometry("300x300")
root.resizable(False, False)

tk.Label(root, text="Enter City:", font=("Arial", 12)).pack(pady=10)

city_entry = tk.Entry(root, font=("Arial", 12))
city_entry.pack(pady=5)

tk.Button(root, text="Get Weather", command=get_weather, font=("Arial", 12)).pack(pady=10)
tk.Button(root, text="View History", command=view_history, font=("Arial", 12)).pack(pady=5)

result_label = tk.Label(root, text="", font=("Arial", 11), justify="left")
result_label.pack(pady=10)

root.mainloop()
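
The listing that follows is the raw HTML returned when the weather.com forecast page for Jaipur is fetched. Pages like this can be parsed with BeautifulSoup; the sketch below is only an illustration and assumes the response has been saved locally as weather_page.html (a hypothetical file name). It extracts the page title and the description meta tag, both of which appear in the dump:

# Minimal BeautifulSoup sketch for the kind of page shown below.
# "weather_page.html" is a hypothetical local copy of the scraped HTML.
from bs4 import BeautifulSoup

with open("weather_page.html", "r", encoding="utf-8") as f:
    soup = BeautifulSoup(f, "html.parser")

# The <title> tag and the description <meta> tag both appear in the page head
title = soup.title.string if soup.title else ""
meta = soup.find("meta", attrs={"name": "description"})
description = meta["content"] if meta else ""

print("Page title:", title)
print("Description:", description)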

<!DOCTYPE html>

<html dir="ltr" lang="en-IN"><head>


<meta charset="utf-8" data-react-helmet="true"/><meta content="width=device-width,
initial-scale=1, viewport-fit=cover" data-react-helmet="true" name="viewport"/><meta
content="max-image-preview:large" data-react-helmet="true" name="robots"/><meta
content="index, follow" data-react-helmet="true" name="robots"/><meta content="origin"
data-react-helmet="true" name="referrer"/><meta content="Today’s and tonight’s Jaipur,
Rajasthanweather forecast, weather conditions and Doppler radar from The Weather Channel
and Weather.com" data-react-helmet="true" name="description"/><meta content="#ffffff"
data-react-helmet="true" name="msapplication-TileColor"/><meta content="/daybreak-
today/assets/ms-icon-144x144.d353af.png" data-react-helmet="true" name="msapplication-
TileImage"/><meta content="#ffffff" data-react-helmet="true" name="theme-color"/><meta
content="app-id=295646461" data-react-helmet="true" name="apple-itunes-app"/><meta
content="Weather forecast and conditions for Jaipur, Rajasthan - The Weather Channel |
weather.com" data-react-helmet="true" property="og:title"/><meta content="https://s.w-
x.co/240x180_twc_default.png" data-react-helmet="true" property="og:image"/><meta
content="https://s.w-x.co/240x180_twc_default.png" data-react-helmet="true"
property="og:image:url"/><meta content="https://s.w-x.co/240x180_twc_default.png" data-
react-helmet="true" property="og:image:secure_url"/><meta content="The Weather
Channel" data-react-helmet="true" property="og:site_name"/><meta content="article" data-
react-helmet="true" property="og:type"/><meta content="en_IN" data-react-helmet="true"
property="og:locale"/><meta content="Today’s and tonight’s Jaipur, Rajasthanweather
forecast, weather conditions and Doppler radar from The Weather Channel and Weather.com"
data-react-helmet="true" property="og:description"/><meta content="https://weather.com/en-
IN/weather/today/l/Purani+Basti+Rajasthan?
canonicalCityId=2be075d3c0a558b26a9370e99b365681" data-react-helmet="true"
property="og:url"/>
<title data-react-helmet="true">Weather forecast and conditions for Jaipur, Rajasthan - The
Weather Channel | weather.com</title>
<link as="script" href="https://weather.com/api/v1/script/dprSdkScript.js"
rel="preload"/><link as="script" href="https://securepubads.g.doubleclick.net/tag/js/gpt.js"
rel="preload"/><link as="script"
href="https://hq2zc2zefnwmg3wcm.ay.delivery/floorPrice/HQ2zc2ZefnWMg3wCM/js/
floorPrice/linreg.min.js" rel="preload"/><link as="script"
href="https://hq2zc2zefnwmg3wcm.ay.delivery/client-v2.js" rel="preload"/>
<link as="script" href="https://micro.rubiconproject.com/prebid/dynamic/10738.js"
rel="preload"/>

<link as="script"
href="//cdn.confiant-integrations.net/sM1wMdWIAB1LeJwC9QvIgGUpPQ0/
gpt_and_prebid/config.js" rel="preload"/><link as="script" href="//c.amazon-
adsystem.com/aax2/apstag.js" rel="preload"/><link data-react-helmet="true"
href="https://weather.com/en-IN/weather/today/l/Purani+Basti+Rajasthan?
canonicalCityId=2be075d3c0a558b26a9370e99b365681" rel="canonical"/><link data-react-
helmet="true" href="/daybreak-today/assets/android-icon-192x192.59ccbc.png" rel="icon"
sizes="192x192"/><link data-react-helmet="true" href="/daybreak-today/assets/apple-icon-
114x114.1354cc.png" rel="apple-touch-icon" sizes="114x114"/><link data-react-
helmet="true" href="/daybreak-today/assets/apple-icon-120x120.ba15c8.png" rel="apple-
touch-icon" sizes="120x120"/><link data-react-helmet="true"
href="/daybreak-today/assets/apple-icon-144x144.d353af.png" rel="apple-touch-icon"
sizes="144x144"/><link data-react-helmet="true" href="/daybreak-today/assets/apple-icon-
152x152.3eb488.png" rel="apple-touch-icon" sizes="152x152"/><link data-react-
helmet="true" href="/daybreak-today/assets/apple-icon-180x180.f0f315.png" rel="apple-
touch-icon" sizes="180x180"/><link data-react-helmet="true"
href="/daybreak-today/assets/apple-icon-57x57.57039f.png" rel="apple-touch-icon"
sizes="57x57"/><link data-react-helmet="true" href="/daybreak-today/assets/apple-icon-
60x60.97d314.png" rel="apple-touch-icon" sizes="60x60"/><link data-react-helmet="true"
href="/daybreak-today/assets/apple-icon-72x72.77210f.png" rel="apple-touch-icon"
sizes="72x72"/><link data-react-helmet="true" href="/daybreak-today/assets/apple-icon-
76x76.625d0f.png" rel="apple-touch-icon" sizes="76x76"/><link data-react-helmet="true"
href="/daybreak-today/assets/favicon-16x16.ff392a.png" rel="icon" sizes="16x16"/><link
data-react-helmet="true" href="/daybreak-today/assets/favicon-32x32.c1752b.png"
rel="icon" sizes="32x32"/><link data-react-helmet="true"
href="/daybreak-today/assets/favicon-96x96.e407ed.png" rel="icon" sizes="96x96"/><link
data-react-helmet="true" href="/manifest.json" rel="manifest"/><link as="script"
href="https://s.w-x.co/helios/twc/1.43.1/helios.js" rel="preload"/>

<link as="font" crossorigin="anonymous"


href="/daybreak-today/assets/../assets/IBMPlexSans-Regular-Latin1.woff2" rel="preload"
type="font/woff2"/><link as="font" crossorigin="anonymous"
href="/daybreak-today/assets/../assets/IBMPlexSans-SemiBold-Latin1.woff2" rel="preload"
type="font/woff2"/><link as="font" crossorigin="anonymous"
href="/daybreak-today/assets/../assets/IBMPlexSerif-Medium-Latin1.woff2" rel="preload"
type="font/woff2"/><link as="script"
href="/daybreak-today/assets/main.633909ec686c48233746.js" rel="preload"/><link
as="script" href="/daybreak-today/assets/6202.lodash.f1a20d447532cc558fb2.js"
rel="preload"/><link as="script" href="https://weather.com/api/v1/script/containerQuery.js"
rel="preload"/>
<script>/* minified New Relic browser-monitoring agent script bundled with the page */</script>
this.importAggregator(e):this.deregisterDrain()}}var Je=i(993),$e=i(3785),Qe=i(9414);

class et extends v{static featureName=Je.TZ;constructor(e,t=!0){super(e,Je.TZ,t);const


r=this.ee;(0,Qe.J)(r,h.gm.console,"log",{level:"info"}),(0,Qe.J)(r,h.gm.console,"error",
{level:"error"}),(0,Qe.J)(r,h.gm.console,"warn",{level:"warn"}),(0,Qe.J)
(r,h.gm.console,"info",{level:"info"}),(0,Qe.J)(r,h.gm.console,"debug",{level:"debug"}),
(0,Qe.J)(r,h.gm.console,"trace",{level:"trace"}),this.ee.on("wrap-logger-end",(function([e])
{const{level:t,customAttributes:n}=this;(0,$e.R)(r,e,n,t)})),this.importAggregator(e)}}new
class extends o{constructor(t)

{super(),h.gm?(this.features={},(0,w.bQ)(this.agentIdentifier,this),this.desiredFeatures=new
Set(t.features||
[]),this.desiredFeatures.add(y),this.runSoftNavOverSpa=[...this.desiredFeatures].some((e=>e.
featureName===s.K7.softNav)),(0,u.j)(this,t,t.loaderType||"agent"),this.run()):(0,e.R)(21)}get
config()
{return{info:this.info,init:this.init,loader_config:this.loader_config,runtime:this.runtime}}run
(){try{const t=function(e){const t={};return c.forEach((r=>{t[r]=function(e,t){return!
0===(0,a.gD)(t,"".concat(e,".enabled"))}(r,e)})),t}
(this.agentIdentifier),r=[...this.desiredFeatures];r.sort(((e,t)=>s.P3[e.featureName]-
s.P3[t.featureName])),r.forEach((r=>{if(!t[r.featureName]&&r.featureName!
==s.K7.pageViewEvent)return;if(this.runSoftNavOverSpa&&r.featureName===s.K7.spa)ret
urn;

if(!this.runSoftNavOverSpa&&r.featureName===s.K7.softNav)return;const n=function(e)
{switch(e){case s.K7.ajax:return[s.K7.jserrors];case
s.K7.sessionTrace:return[s.K7.ajax,s.K7.pageViewEvent];case
s.K7.sessionReplay:return[s.K7.sessionTrace];case
s.K7.pageViewTiming:return[s.K7.pageViewEvent];

default:return[]}}(r.featureName).filter((e=>!(e in this.features)));n.length>0&&(0,e.R)(36,
{targetFeature:r.featureName,missingDependencies:n}),this.features[r.featureName]=new
r(this)}))}catch(t){(0,e.R)(22,t);for(const e in this.features)this.features[e].abortHandler?.();

const r=(0,w.Zm)();return delete r.initializedAgents[this.agentIdentifier]?.api,delete


r.initializedAgents[this.agentIdentifier]?.features,delete
this.sharedAggregator,r.ee.get(this.agentIdentifier).abort(),!1}}}({features:[de,y,_,class
extends v{static featureName=we;constructor(e,t=!0){if(super(e,we,t),!(0,g.V)
(this.agentIdentifier))return void this.deregisterDrain();const r=this.ee;

let n;pe(r),this.eventsEE=(0,V.u)(r),this.eventsEE.on(Re,(function(e,t){this.bstStart=(0,A.t)
()})),this.eventsEE.on(xe,(function(e,t){(0,x.p)("bst",[e[0],t,this.bstStart,(0,A.t)()],void
0,s.K7.sessionTrace,r)})),r.on(Ee+be,(function(e){this.time=(0,A.t)(),

this.startPath=location.pathname+location.hash})),r.on(Ee+ye,(function(e){(0,x.p)("bstHist",
[location.pathname+location.hash,this.startPath,this.time],void
0,s.K7.sessionTrace,r)}));try{n=new PerformanceObserver((e=>{const t=e.getEntries();
(0,x.p)(me,[t],void 0,s.K7.sessionTrace,r)})),n.observe({type:ve,buffered:!0})}catch(e)
{}this.importAggregator(e,{resourceObserver:n})}},Ae,O,D,Xe,et,Ne,class extends v{static
featureName=Ue;constructor(e,t=!0){if(super(e,Ue,t),!
h.RI)return;try{this.removeOnAbort=new AbortController}catch(e){}let r,n=0;const
i=this.ee.get("tracer"),o=function(e){const t=function(e){return(e||B.ee).get("jsonp")}(e);if(!
h.RI||Se[t.debugId])return t;Se[t.debugId]=!0;

var r=(0,G.YM)(t),n=/[?&](?:callback|cb)=([^&#]+)/,i=/(.*)\.([^.]+)/,o=/^(\w+)(\.|$)(.*)
$/;function s(e,t){if(!e)return t;const r=e.match(o),n=r[1];return s(r[3],t[n])}return
r.inPlace(Node.prototype,Ie,"dom-"),t.on("dom-start",(function(e){!function(e)
{if(e&&"string"==typeof
e.nodeName&&"script"===e.nodeName.toLowerCase()&&"function"==typeof
e.addEventListener){var o,a=(o=e.src.match(n))?o[1]:null;if(a){var c=function(e){var
t=e.match(i);return t&&t.length>=3?{key:t[2],parent:s(t[1],window)}:
{key:e,parent:window}}(a);if("function"==typeof c.parent[c.key]){var
u={};r.inPlace(c.parent,[c.key],"cb-",u),e.addEventListener("load",d,(0,E.jT)(!
1)),e.addEventListener("error",l,(0,E.jT)(!1)),t.emit("new-jsonp",[e.src],u)}}}function d()
{t.emit("jsonp-end",[],u),e.removeEventListener("load",d,(0,E.jT)(!
1)),e.removeEventListener("error",l,(0,E.jT)(!1))}function l(){t.emit("jsonp-error",
[],u),t.emit("jsonp-end",[],u),e.removeEventListener("load",d,(0,E.jT)(!
1)),e.removeEventListener("error",l,(0,E.jT)(!1))}}(e[0])})),t}(this.ee),s=function(e){const
t=function(e){return(e||B.ee).get("promise")}(e);if(Pe[t.debugId])return t;Pe[t.debugId]=!
0;var r=t.context,n=(0,G.YM)(t),i=h.gm.Promise;return i&&function(){function e(r){var
o=t.context(),s=n(r,"executor-",o,null,!1);const a=Reflect.construct(i,[s],e);return
t.context(a).getCtx=function(){return o},a}h.gm.Promise=e,Object.defineProperty(e,"name",
{value:"Promise"}),e.toString=function(){return i.toString()},Object.setPrototypeOf(e,i),
["all","race"].forEach((function(r){const n=i[r];e[r]=function(e)

{let i=!1;

[...e||[]].forEach((e=>{this.resolve(e).then(s("all"===r),s(!1))}));const
o=n.apply(this,arguments);return o;function s(e){return function(){t.emit("propagate",[null,!
i],o,!1,!1),i=i||!e}}}})),["resolve","reject"].forEach((function(r){const n=i[r];e[r]=function(e)
{const r=n.apply(this,arguments);return e!==r&&t.emit("propagate",[e,!0],r,!1,!
1),r}})),e.prototype=i.prototype;const o=i.prototype.then;i.prototype.then=function(...e){var
i=this,s=r(i);s.promise=i,e[0]=n(e[0],"cb-",s,null,!1),e[1]=n(e[1],"cb-",s,null,!1);const
a=o.apply(this,e);return s.nextPromise=a,t.emit("propagate",[i,!0],a,!1,!
1),a},i.prototype.then[G.Jt]=o,t.on("executor-start",(function(e)
{e[0]=n(e[0],"resolve-",this,null,!1),e[1]=n(e[1],"resolve-",this,null,!1)})),t.on("executor-err",
(function(e,t,r){e[1](r)})),t.on("cb-end",(function(e,r,n){t.emit("propagate",[n,!
0],this.nextPromise,!1,!1)})),t.on("propagate",(function(e,r,n)

{this.getCtx&&!r||(this.getCtx=function(){if(e instanceof Promise)var r=t.context(e);return


r&&r.getCtx?r.getCtx():this})}))}(),t}(this.ee),a=function(e){const t=function(e){return(e||
B.ee).get("timer")}(e);if(je[t.debugId]++)return t;je[t.debugId]=1;var r=(0,G.YM)(t);return
r.inPlace(h.gm,Ke.slice(0,2),Ce+"-"),r.inPlace(h.gm,Ke.slice(2,3),ke+"-"),r.inPlace(h.gm,Ke.s
lice(3),Le+"-"),t.on(ke+De,(function(e,t,n){e[0]=r(e[0],"fn-",null,n)})),t.on(Ce+De,
(function(e,t,n){this.method=n,this.timerDuration=isNaN(e[1])?
0:+e[1],e[0]=r(e[0],"fn-",this,n)})),t}
(this.ee),c=q(this.ee),u=this.ee.get("events"),d=ee(this.ee),l=pe(this.ee),f=function(e){const
t=function(e){return(e||B.ee).get("mutation")}(e);if(!h.RI||He[t.debugId])return
t;He[t.debugId]=!0;var r=(0,G.YM)(t),n=h.gm.MutationObserver;return
n&&(window.MutationObserver=function(e){return this instanceof n?new
n(r(e,"fn-")):n.apply(this,arguments)},MutationObserver.prototype=n.prototype),t}
(this.ee);function p(e,t){l.emit("newURL",[""+window.location,t])}function g(){n+
+,r=window.location.hash,this[qe]=(0,A.t)()}function m(){n--,window.location.hash!
==r&&p(0,!0);var e=(0,A.t)();this[Fe]=~~this[Fe]+e-this[qe],this[Ze]=e}function v(e,t)
{e.on(t,(function(){this[t]=(0,A.t)
()}))}this.ee.on(qe,g),s.on(ze,g),o.on(ze,g),this.ee.on(Ze,m),s.on(Ge,m),o.on(Ge,m),

this.ee.on("fn-err",((...t)=>{t[2]?.__newrelic?.[e.agentIdentifier]||(0,x.p)("function-err",
[...t],void 0,this.featureName,this.ee)})),this.ee.buffer([qe,Ze,"xhr-
resolved"],this.featureName),u.buffer([qe],this.featureName),a.buffer(["setTimeout"+Ve,"clea
rTimeout"+Me,qe],this.featureName),c.buffer([qe,"new-xhr","send-
xhr"+Me],this.featureName),d.buffer([We+Me,We+"-
done",We+Be+Me,We+Be+Ve],this.featureName),l.buffer(["newURL"],this.featureName),f.b
uffer([qe],this.featureName),s.buffer(["propagate",ze,Ge,"executor-
err","resolve"+Me],this.featureName),i.buffer

([qe,"no-"+qe],this.featureName),o.buffer(["new-jsonp","cb-start","jsonp-error","jsonp-
end"],this.featureName),v(d,We+Me),v(d,We+"-done"),v(o,"new-jsonp"),v(o,"jsonp-
end"),v(o,"cb-start"),l.on("pushState-end",p),l.on("replaceState-
end",p),window.addEventListener("hashchange",p,(0,E.jT)(!
0,this.removeOnAbort?.signal)),window.addEventListener("load",p,(0,E.jT)(!
0,this.removeOnAbort?.signal)),window.addEventListener("popstate",(function(){p(0,n>1)}),
(0,E.jT)(!
0,this.removeOnAbort?.signal)),this.abortHandler=this.#n,this.importAggregator(e)}#n()
{this.removeOnAbort?.abort(),this.abortHandler=void
0}}],loaderType:"spa"})})()})()</script> <script>
window.NREUM.setCustomAttribute('meta.deviceClass', 'desktop');
window.NREUM.setCustomAttribute('meta.locale', 'en-IN');
window.NREUM.setCustomAttribute('meta.privacy', 'exempt');
window.NREUM.setCustomAttribute('meta.connectionSpeed', '4g');
window.NREUM.setCustomAttribute('meta.subsTier', 'none');
window.NREUM.setCustomAttribute('meta.partner', '');
window.NREUM.setCustomAttribute('meta.siteMode', 'normal');
window.NREUM.setCustomAttribute('meta.noAds', 'false');
window.NREUM.setCustomAttribute('meta.seqAdLoad', 'undefined');
window.NREUM.setCustomAttribute('meta.pcId', '87f056d0-4270-4d64-a31e-
c70921a68956');
window.NREUM.setCustomAttribute('meta.gitCommit', '84c4d6291b');
window.NREUM.setCustomAttribute('meta.page', 'today');
window.NREUM.addPageAction("clientSideStart");
10. References

 BeautifulSoup Documentation: https://www.crummy.com/software/BeautifulSoup/

 Scrapy Official Site: https://scrapy.org/
