
The Internet
The internet is defined as a global network of linked
computers, servers, phones, and smart appliances that
communicate with one another using the Transmission
Control Protocol/Internet Protocol (TCP/IP) standard to
enable the fast exchange of information and files, along
with other types of services.
It is a system architecture that has revolutionized mass
communication, mass media, and commerce by allowing
various computer networks around the world to
interconnect. Sometimes referred to as a “network of
networks,” the Internet emerged in the United States in
the 1970s but did not become visible to the general
public until the early 1990s. As of October 2023, there
were 5.3 billion internet users worldwide, which
amounted to 65.7 percent of the global population. That
number is growing, largely due to the prevalence of
“smart” technology and the "Internet of Things," in which
everyday computer-like devices connect to the Internet
and to one another.
The Internet provides a capability so powerful and
general that it can be used for almost any purpose
that depends on information, and it is accessible
by every individual who connects to one of its
constituent networks. It supports human
communication via social media, electronic mail (e-
mail), “chat rooms,” newsgroups, and audio and
video transmission and allows people to work
collaboratively at many different locations. It
supports access to digital information by many
applications, including the World Wide Web. The
Internet has proved to be a spawning ground for a
large and growing number of “e-businesses”
(including subsidiaries of traditional “brick-and-
mortar” companies) that carry out most of their
sales and services over the Internet.
A Brief History of the Internet
On October 29, 1969, the Advanced Research Projects
Agency (ARPA) launched the first iteration of the
internet, known as ARPANET, connecting four major
computers at the University of Utah, UCSB, UCLA,
and the Stanford Research Institute.
When this network of computers was connected,
universities were able to access files and transmit
information from one organization to another, as
well as internally.
As researchers developed the system, they continued
to connect computers from other universities,
including MIT, Harvard, and Carnegie Mellon.
Eventually, ARPANET came to be known as the “internet.”
WHO USED THE INTERNET IN
THIS STAGE?
In its earliest days, the internet
was only used by computer
experts, scientists, engineers, and
librarians who had to learn a
complicated system in order to use
it, but as the technology improved
and consumers adapted, it became
an essential tool for people around
the globe.
HOW AND WHEN DID THE
FUNCTIONALITY OF THE INTERNET
CHANGE?
The 1970s were a serious time of transition
for the internet. Email was introduced in
1972, libraries across the country were
linked, and, above all, information exchange
became more seamless thanks to the Transmission
Control Protocol/Internet Protocol
(TCP/IP) architecture.
The invention of these protocols helped to
standardize how information was sent and
received over the network, making delivery
more consistent, regardless of where or how
you access the internet.
When Did the Internet Become
User-Friendly?
Then in 1986, the National Science Foundation took the
development of the internet to the next echelon by
funding NSFNET, a network of supercomputers
throughout the country.

These supercomputers laid the groundwork for personal
computing, bridging the gap between computers being
used exclusively for academic purposes and computers
used to perform daily tasks.

In 1991, the University of Minnesota developed the first
user-friendly internet interface, making it easier to
access campus files and information. The University of
Nevada at Reno continued to develop this usable
interface, introducing searchable functions and indexing.
When Did Consumers Begin
Using the Internet?
As the internet’s development continued to evolve and shift
focus, the National Science Foundation discontinued its
sponsorship of the internet’s backbone (NSFNET) in May of
1995.

This change lifted all commercial use limitations on the
internet and, ultimately, allowed the internet to diversify
and grow rapidly. Shortly after, AOL, CompuServe, and
Prodigy joined Delphi to offer commercial internet service
to consumers.

The debut of WiFi and Windows 98 in the late nineties
marked the tech industry’s commitment to developing the
commercial element of the internet. This next step gave
companies like Microsoft access to a new audience:
consumers (like yourself).
What Does Internet Usage Look
Like Today?
Flash-forward to today. More than five billion
people now use the internet, many of them on
a daily basis, to help them get from Point A to
Point B, catch up with loved ones, collaborate
at work, or learn more about important
questions like how the internet works.
As technology changes and the internet
weaves its way into just about every aspect of
our lives, even more people are expected to
use it. In 2030, researchers project there will
be 7.5 billion internet users and 500 billion
devices connected to the internet.
How does the internet
work?
The internet is a worldwide computer network that
transmits a variety of data and media across
interconnected devices. It works by using a packet
routing network that follows the Internet Protocol (IP) and
the Transmission Control Protocol (TCP).

TCP and IP work together to ensure that data
transmission across the internet is consistent and
reliable, no matter which device you’re using or where
you’re using it.

When data is transferred over the internet, it’s delivered
in messages and packets. Data sent over the internet is
called a message, but before messages are sent, they’re
broken up into smaller parts called packets.
These messages and packets travel from one
source to the next using the Internet Protocol (IP)
and the Transmission Control Protocol (TCP). IP is
a system of rules that governs how information is
sent from one computer to another computer
over an internet connection.
Using a numerical address (the IP address), the IP
system receives further instructions on how the
data should be transferred.
TCP works with IP to ensure that the transfer of
data is dependable and reliable. It helps to make
sure that no packets are lost, that packets are
reassembled in the proper sequence, and that no
delay negatively affects the data quality.
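The “numerical address” mentioned above can be explored with Python's standard ipaddress module. This is a minimal sketch; the two addresses are documentation-range examples (the same ones used later in this document), not real hosts:

```python
import ipaddress

# Parse a numerical IPv4 address and an IPv6 address (documentation-range examples).
v4 = ipaddress.ip_address("203.0.113.4")
v6 = ipaddress.ip_address("2001:db8:2e::7334")

print(v4.version)   # 4
print(v6.version)   # 6

# Under the hood, an IPv4 address is just a 32-bit integer that routers
# use to decide where the data should be transferred.
print(int(v4))
```

The integer form is what actually travels in each packet's header; the dotted notation is only a human-friendly rendering of it.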
How Data Travels the Internet
Data and information are transferred around the world through wired
or wireless transmission media. In the
Philippines, the transmission media that make up the internet
backbone allow information or data exchanges between networks at
several locations across the country, such as La Union in the northern
part, and Batangas, Cavite, and Davao down south. The high-speed
equipment in these sites functions similarly to a highway interchange:
data is transferred from one network to another until it reaches its
final destination.

Much of the internet runs on the ordinary public telephone network.
However, there is a big difference between how a telephone call
works and how the internet carries data. For example, friend A gives
friend B a telephone call;
the telephone network then opens a direct connection (also known as
a circuit) between friend A’s and friend B’s homes.
In this scenario, a direct line can be pictured running along lines of
cable, from friend A’s telephone to friend B’s telephone. As long as
the two friends are on the telephone, that connection or circuit stays
open between the two telephones. This method of linking two
telephones is known as circuit switching.
Circuit switching is one of the most common schemes utilized
to build a communications network, as in the case of
ordinary telephone calls. Circuit switching, however, is
inefficient: if you stay connected with your friend or
relative over the phone all the time, the circuit remains
connected and is, therefore, blocking other people from using it.
A traditional dial-up connection to the net, in which a
computer dials a telephone number to reach the internet
service provider, uses circuit switching. Its inefficiency
shows in the fact that browsing the internet and using the
telephone at the same time are not possible.
As time goes by, technologies improve and develop as
well. Most data now moves over the internet in an entirely
different way, called packet switching. This is a mode of
transmission in which the message is broken into smaller
parts (called packets) which are sent independently, and
then reassembled at the ultimate destination. Suppose an
email from the Philippines is sent to someone in South Korea,
Singapore, Thailand, China, Italy, the U.S., or another
country. Instead of opening a dedicated circuit between the
two countries and sending the email in one go, the email is
broken into packets, and each packet is assigned its ultimate
destination. The packets travel via different routes, and when
they reach their final destination, they are reassembled to
make the email message whole and complete.

Compared to circuit switching, packet switching, therefore, is
much more efficient. A permanent connection is not
necessary between the two places communicating, which
avoids blocking an entire chunk of the network each time a
message is sent.
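The packet-switching idea described above — break a message into numbered packets, send them independently, reassemble at the destination — can be sketched in a few lines of Python. This is a toy illustration, not a real protocol implementation; the to_packets and reassemble helper names are hypothetical:

```python
import random

def to_packets(message: str, size: int = 8):
    # Break the message into numbered packets: (sequence number, payload).
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(packets):
    # Packets may arrive in any order; sorting by sequence number rebuilds
    # the original message, as TCP does at the ultimate destination.
    return "".join(payload for _, payload in sorted(packets))

msg = "Hello from the Philippines!"
packets = to_packets(msg)
random.shuffle(packets)          # simulate packets taking different routes
assert reassemble(packets) == msg
```

The sequence number carried with each packet is what lets the receiver rebuild the message even when routes of different lengths deliver the packets out of order.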
Comparison between Circuit
Switching and Packet Switching
Circuit Switching
- Pros: It offers a dedicated transmission channel that is reserved until it is disconnected.
- Cons: Dedicated channels can cause delays, because a channel is unavailable until one side disconnects.
- Key features: It uses a dedicated physical link between the sending and receiving devices, and it offers the capability of storing messages temporarily to reduce network congestion.

Packet Switching
- Pros: Packets can be routed around network congestion, and packet switching makes efficient use of network bandwidth.
- Cons: Packets can get lost while taking alternative routes to the destination.
- Key features: Messages are divided into packets that contain source and destination information. The two types of packet switching are datagram and virtual circuit. Datagram packets are sent independently and can take different paths throughout the network; a virtual circuit uses a logical connection between the source and the destination device.
Data Can Take Many Paths
This network of networks is a little more interesting and complex than it
might seem. With all these networks connected together, there isn't just
a single path data takes. Because networks are connected to multiple
other networks, there's a whole web of connections stretching out
around the globe. This means that those packets (small pieces of data
sent between devices) can take multiple paths to get where they're
going.

In other words, even if a network between you and a website goes down,
there's usually another path the data can take. The routers along the
path use something called the Border Gateway Protocol, or BGP, to
communicate information about whether a network is down and the
optimal path for data to take.

Creating this interconnected network (or internet) isn't just as simple as
plugging each network into a nearby one, one by one. Networks are
connected in many different ways along many different paths, and the
software running on these routers (so named because they route traffic
along the network) is always working to find the optimal paths for data
to take.
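Real routers exchange reachability information with BGP, which is far more complex than anything shown here; but the core idea — data finds another path when a link goes down — can be illustrated with a toy breadth-first search over a made-up network (all node names below are hypothetical):

```python
from collections import deque

def shortest_path(links, start, goal):
    # Breadth-first search: returns one shortest hop-count path, or None.
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in links.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

# A toy "network of networks" with two independent routes to the site.
links = {
    "home":       ["isp-a", "isp-b"],
    "isp-a":      ["home", "backbone-1"],
    "isp-b":      ["home", "backbone-2"],
    "backbone-1": ["isp-a", "site"],
    "backbone-2": ["isp-b", "site"],
    "site":       ["backbone-1", "backbone-2"],
}
print(shortest_path(links, "home", "site"))

# Simulate backbone-1 going down: traffic still reaches the site via isp-b.
links["isp-a"].remove("backbone-1")
links["backbone-1"] = []
print(shortest_path(links, "home", "site"))
```

Routing software does essentially this continuously, recomputing viable paths as routers report links up or down.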
You can actually see the path your packets take
to a destination address by using the traceroute
command, which tells routers along the path
the packet travels to report back.

For example, in one trace we ran, the route to
howtogeek.com from a Comcast internet
connection in Eugene, Oregon looked like this:
the packets traveled to our router, through
Comcast's network north to Seattle, before
being routed onto a Tata Communications
(as6453.net) backbone network through
Chicago, New York, and Newark, finally making
their way to a Linode data center in Newark,
New Jersey, where the website is hosted.
We speak of packets "traveling", but of course, they're
just pieces of data. A router contacts another router and
communicates the data in the packet. The next router
uses the information in the packet to figure out where
it's going and transmits the data to the next router
along the path. The packet is just a signal on the
wire.
What Computers Do on the Internet
Computers do different jobs on the internet.
Some computers work like electronic filing
cabinets that store information and send it when
clients request it. These are called servers.

A server is a computer that is designed to
process any request for data and deliver data
to other client computers over a local network
or the internet. A client is a computer or device
that gets information from a server. Any
computer running the appropriate special
software can function as a server, and servers
have different roles to play.
A server that holds user accounts, computer accounts,
organizational units, and application services is called an
Active Directory Domain Services (AD DS) server. A Dynamic
Host Configuration Protocol (DHCP) server automatically
assigns IPv4 and IPv6 addresses to devices on the network,
while a domain name system (DNS) server maps human-readable
names to those IP addresses. A machine that holds and
manages documents is known as a file server, while one that
hosts users’ mail services and web services is referred to as a
mail server or web server.
A device that connects printers to client computers through
the network is called a print server. It accepts print jobs from
the computers, queues these jobs, and sends them to the
appropriate printers.

Besides clients and servers, the internet is made up of
hardware devices designed to receive, analyse, and send
incoming packets to another network. Such a device is called
a router. With several computer devices both at home and in
school, you probably have a router that connects all of your
devices to the internet. The router can be compared to a
simple mailbox placed at the corner of a street: it represents
your network's single point of entry to the wider internet.
The World Wide Web
The terms Internet and World Wide Web are often used without much
distinction. However, the two terms do not mean the same thing. The
Internet is a global system of computer networks interconnected
through telecommunications and optical networking. In contrast, the
World Wide Web is a global collection of documents and other
resources, linked by hyperlinks and URIs. Web resources are accessed
using HTTP or HTTPS, which are application-level Internet protocols that
use the Internet's transport protocols.

Viewing a web page on the World Wide Web normally begins either by
typing the URL of the page into a web browser or by following a
hyperlink to that page or resource. The web browser then initiates a
series of background communication messages to fetch and display the
requested page. In the 1990s, using a browser to view web pages—and
to move from one web page to another through hyperlinks—came to be
known as 'browsing,' 'web surfing' (after channel surfing), or 'navigating
the Web'. Early studies of this new behavior investigated user patterns
in using web browsers. One study, for example, found five user
patterns: exploratory surfing, window surfing, evolved surfing, bounded
navigation and targeted navigation.
The following example demonstrates the functioning of
a web browser when accessing a page at the URL
http://example.org/home.html. The browser resolves
the server name of the URL (example.org) into an
Internet Protocol address using the globally distributed
Domain Name System (DNS). This lookup returns an IP
address such as 203.0.113.4 or 2001:db8:2e::7334. The
browser then requests the resource by sending an HTTP
request across the Internet to the computer at that
address. It requests service from a specific TCP port
number that is well known for the HTTP service so that
the receiving host can distinguish an HTTP request from
other network protocols it may be servicing. HTTP
normally uses port number 80 and for HTTPS it normally
uses port number 443. The content of the HTTP request
can be as simple as two lines of text:
GET /home.html HTTP/1.1
Host: example.org
The computer receiving the HTTP request delivers it to web server software listening for
requests on port 80. If the web server can fulfil the request, it sends an HTTP response back
to the browser indicating success:

HTTP/1.1 200 OK
Content-Type: text/html; charset=UTF-8

followed by the content of the requested page. Hypertext Markup Language (HTML) for a
basic web page might look like this:

<html>
<head>
<title>Example.org – The World Wide Web</title>
</head>
<body>
<p>The World Wide Web, abbreviated as WWW and commonly known ...</p>
</body>
</html>

The web browser parses the HTML and interprets the markup (<title>, <p> for paragraph,
and such) that surrounds the words to format the text on the screen. Many web pages use
HTML to reference the URLs of other resources such as images, other embedded media,
scripts that affect page behaviour, and Cascading Style Sheets that affect page layout. The
browser makes additional HTTP requests to the web server for these other Internet media
types. As it receives their content from the web server, the browser progressively renders
the page onto the screen as specified by its HTML and these additional resources.
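The exchange above can be pulled apart programmatically. Here is a minimal Python sketch that splits a raw HTTP response into its status line, headers, and body; the raw_response text is a shortened stand-in for the real page:

```python
raw_response = (
    "HTTP/1.1 200 OK\r\n"
    "Content-Type: text/html; charset=UTF-8\r\n"
    "\r\n"
    "<html><head><title>Example.org</title></head>"
    "<body><p>The World Wide Web ...</p></body></html>"
)

# A blank line (\r\n\r\n) separates the headers from the content.
head, _, body = raw_response.partition("\r\n\r\n")
status_line, *header_lines = head.split("\r\n")

# The status line carries the protocol version, numeric code, and reason phrase.
version, status_code, reason = status_line.split(" ", 2)
headers = dict(line.split(": ", 1) for line in header_lines)

print(status_code)                 # "200"
print(headers["Content-Type"])     # "text/html; charset=UTF-8"
print(body[:6])                    # "<html>"
```

A browser performs essentially this parse on every response before handing the body to its HTML renderer.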
HTML
Hypertext Markup Language (HTML) is the standard markup language for creating web
pages and web applications. With Cascading Style Sheets (CSS) and JavaScript, it forms
a triad of cornerstone technologies for the World Wide Web.

Web browsers receive HTML documents from a web server or from local storage and
render the documents into multimedia web pages. HTML describes the structure of a
web page semantically and originally included cues for the appearance of the
document.

HTML elements are the building blocks of HTML pages. With HTML constructs, images
and other objects such as interactive forms may be embedded into the rendered page.
HTML provides a means to create structured documents by denoting structural
semantics for text such as headings, paragraphs, lists, links, quotes and other items.
HTML elements are delineated by tags, written using angle brackets. Tags such as <img
/> and <input /> directly introduce content into the page. Other tags such as <p>
surround and provide information about document text and may include other tags as
sub-elements. Browsers do not display the HTML tags, but use them to interpret the
content of the page.

HTML can embed programs written in a scripting language such as JavaScript, which
affects the behavior and content of web pages. Inclusion of CSS defines the look and
layout of content. The World Wide Web Consortium (W3C), maintainer of both the HTML
and the CSS standards, has encouraged the use of CSS over explicit presentational
HTML since 1997.
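The browser behaviour just described — interpret the tags, don't display them — can be imitated with Python's standard html.parser module. This small sketch extracts just the <title> text; TitleExtractor is an illustrative name, not a library class:

```python
from html.parser import HTMLParser

class TitleExtractor(HTMLParser):
    """Collects the text inside the <title> element, the way a browser
    uses tags to interpret (rather than display) page content."""
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self.in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.title += data

page = "<html><head><title>Example.org</title></head><body><p>Hi</p></body></html>"
parser = TitleExtractor()
parser.feed(page)
print(parser.title)   # "Example.org"
```

Note that the tags themselves never reach the output: they only steer how the text between them is treated, exactly as in a browser.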
Most web pages contain hyperlinks to other related pages and perhaps to
downloadable files, source documents, definitions and other web resources.
In the underlying HTML, a hyperlink looks like this: <a
href="http://example.org/home.html">Example.org Homepage</a>.

Such a collection of useful, related resources, interconnected via hypertext
links, is dubbed a web of information. Publication on the Internet created
what Tim Berners-Lee first called the WorldWideWeb (in its original
CamelCase, which was subsequently discarded) in November 1990.

The hyperlink structure of the web is described by the webgraph: the nodes
of the web graph correspond to the web pages (or URLs), and the directed
edges between them to the hyperlinks. Over time, many web resources pointed to
by hyperlinks disappear, relocate, or are replaced with different content.
This makes hyperlinks obsolete, a phenomenon referred to in some circles
as link rot, and the hyperlinks affected by it are often called "dead" links.
The ephemeral nature of the Web has prompted many efforts to archive
websites. The Internet Archive, active since 1996, is the best known of such
efforts.
Graphic representation of a minute fraction of the WWW,
demonstrating hyperlinks
WWW prefix
Many hostnames used for the World Wide Web begin with www because of the long-standing
practice of naming Internet hosts according to the services they provide. The hostname of a web
server is often www, in the same way that it may be ftp for an FTP server, and news or nntp for a
Usenet news server. These hostnames appear as Domain Name System (DNS) or subdomain
names, as in www.example.com. The use of www is not required by any technical or policy standard,
and many web sites do not use it; the first web server was nxoc01.cern.ch. According to Paolo
Palazzi, who worked at CERN along with Tim Berners-Lee, the popular use of www as a subdomain
was accidental; the World Wide Web project page was intended to be published at www.cern.ch
while info.cern.ch was intended to be the CERN home page. However, the DNS records were never
switched, and the practice of prepending www to an institution's website domain name was
subsequently copied. Many established websites still use the prefix, or
they employ other subdomain names such as www2, secure, or en for special purposes. Many such
web servers are set up so that both the main domain name (e.g., example.com) and the www
subdomain (e.g., www.example.com) refer to the same site; others require one form or the other, or
they may map to different web sites. A subdomain name is useful for load balancing
incoming web traffic by creating a CNAME record that points to a cluster of web servers; since
only a subdomain can be used in a CNAME, the same result cannot be achieved
by using the bare domain root.

When a user submits an incomplete domain name to a web browser in its address bar input field,
some web browsers automatically try adding the prefix "www" to the beginning of it and possibly
".com", ".org", and ".net" at the end, depending on what might be missing. For example, entering
"microsoft" may be transformed to http://www.microsoft.com/ and "openoffice" to
http://www.openoffice.org. This feature started appearing in early versions of Firefox in early
2003, when it still had the working title 'Firebird', following an earlier practice in browsers such
as Lynx. Microsoft was reportedly granted a US patent for the same idea in 2008,
but only for mobile devices.
Scheme Specifiers
The scheme specifiers http:// and https:// at
the start of a web URI refer to Hypertext
Transfer Protocol or HTTP Secure, respectively.
They specify the communication protocol to
use for the request and response. The HTTP
protocol is fundamental to the operation of
the World Wide Web, and the added
encryption layer in HTTPS is essential when
browsers send or retrieve confidential data,
such as passwords or banking information.
Web browsers usually automatically prepend
http:// to user-entered URIs, if omitted.
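The browser habit of prepending http:// when the scheme is omitted can be sketched with Python's urllib.parse; the normalize helper is a hypothetical name for illustration:

```python
from urllib.parse import urlsplit

def normalize(uri: str) -> str:
    # Mimic a browser prepending "http://" when the scheme is omitted.
    if "://" not in uri:
        uri = "http://" + uri
    return uri

# The scheme specifier tells the browser which protocol to use.
parts = urlsplit(normalize("example.org/home.html"))
print(parts.scheme)   # "http"
print(parts.netloc)   # "example.org"
print(parts.path)     # "/home.html"
```

A URI that already carries https:// would pass through unchanged, so the encrypted scheme the user typed is never silently downgraded by this sketch.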
Pages
A web page (also written as webpage) is a document that is suitable for the World
Wide Web and web browsers. A web browser displays a web page on a monitor or
mobile device.

The term web page usually refers to what is visible, but may also refer to the
contents of the computer file itself, which is usually a text file containing
hypertext written in HTML or a comparable markup language. Typical web pages
provide hypertext for browsing to other web pages via hyperlinks, often referred
to as links. Web browsers will frequently have to access multiple web resource
elements, such as reading style sheets, scripts, and images, while presenting
each web page.

On a network, a web browser can retrieve a web page from a remote web server.
The web server may restrict access to a private network such as a corporate
intranet. The web browser uses the Hypertext Transfer Protocol (HTTP) to make
such requests to the web server.

A static web page is delivered exactly as stored, as web content in the web
server's file system. In contrast, a dynamic web page is generated by a web
application, usually driven by server-side software. Dynamic web pages are used
when each user may require completely different information, for example, bank
websites, web email etc.
Static page
A static web page (sometimes
called a flat page/stationary
page) is a web page that is
delivered to the user exactly as
stored, in contrast to dynamic
web pages which are generated
by a web application.

Consequently, a static web page
displays the same information
for all users, from all contexts,
subject to modern capabilities of
a web server to negotiate
content-type or language of the
document where such versions
are available and the server is
configured to do so.
Dynamic pages
A server-side dynamic web page is a web page whose construction is controlled
by an application server processing server-side scripts. In server-side scripting,
parameters determine how the assembly of every new web page proceeds,
including the setting up of more client-side processing.

A client-side dynamic web page processes the web page using JavaScript
running in the browser. JavaScript programs can interact with the document via
the Document Object Model (DOM) to query page state and alter it, so the same
client-side techniques can dynamically update or change the page.

A dynamic web page can be reloaded by the user or by a computer program to
change some variable content. The updated information could come from the
server, or from changes made to that page's DOM. This may or may not
truncate the browsing history or create a saved version to go back to, but a
dynamic web page update using Ajax technologies will neither create a page to
go back to nor truncate the web browsing history forward of the displayed page.
Using Ajax technologies, the end user gets one dynamic page managed as a
single page in the web browser, while the actual web content rendered on that
page can vary. The Ajax engine in the browser requests parts of the page's
DOM from an application server on its client's behalf.
Dynamic HTML, or DHTML, is the umbrella term for technologies and methods used to create
web pages that are not static web pages, though it has fallen out of common use since the
popularization of AJAX, a term which is now itself rarely used. Client-side scripting, server-
side scripting, or a combination of these make for the dynamic web experience in a browser.

JavaScript is a scripting language that was initially developed in 1995 by Brendan Eich, then
of Netscape, for use within web pages. The standardised version is ECMAScript. To make
web pages more interactive, some web applications also use JavaScript techniques such as
Ajax (asynchronous JavaScript and XML). Client-side script is delivered with the page that
can make additional HTTP requests to the server, either in response to user actions such as
mouse movements or clicks, or based on elapsed time. The server's responses are used to
modify the current page rather than creating a new page with each response, so the server
needs only to provide limited, incremental information. Multiple Ajax requests can be
handled at the same time, and users can interact with the page while data is retrieved. Web
pages may also regularly poll the server to check whether new information is available.
Website
A website is a collection of related web resources including web pages, multimedia
content, typically identified with a common domain name, and published on at least one
web server. Notable examples are wikipedia.org, google.com, and amazon.com.

A website may be accessible via a public Internet Protocol (IP) network, such as the
Internet, or a private local area network (LAN), by referencing a uniform resource locator
(URL) that identifies the site.

Websites can have many functions and can be used in various fashions; a website can be
a personal website, a corporate website for a company, a government website, an
organization website, etc. Websites are typically dedicated to a particular topic or
purpose, ranging from entertainment and social networking to providing news and
education. All publicly accessible websites collectively constitute the World Wide Web,
while private websites, such as a company's website for its employees, are typically a
part of an intranet.

Web pages, which are the building blocks of websites, are documents, typically
composed in plain text interspersed with formatting instructions of Hypertext Markup
Language (HTML, XHTML). They may incorporate elements from other websites with
suitable markup anchors. Web pages are accessed and transported with the Hypertext
Transfer Protocol (HTTP), which may optionally employ encryption (HTTP Secure, HTTPS)
to provide security and privacy for the user. The user's application, often a web browser,
renders the page content according to its HTML markup instructions onto a display
terminal.
Hyperlinking between web pages
conveys to the reader the site
structure and guides the navigation
of the site, which often starts with a
home page containing a directory of
the site web content. Some websites
require user registration or
subscription to access content.
Examples of subscription websites
include many business sites, news
websites, academic journal websites,
gaming websites, file-sharing
websites, message boards, web-
based email, social networking
websites, websites providing real-
time price quotations for different
types of markets, as well as sites
providing various other services. End
users can access websites on a
range of devices, including desktop
and laptop computers, tablet
computers, smartphones and smart
TVs.
Browser
A web browser (commonly referred to as a browser) is a
software user agent for accessing information on the World
Wide Web. To connect to a website's server and display its
pages, a user needs to have a web browser program. This is
the program that the user runs to download, format, and
display a web page on the user's computer.

In addition to allowing users to find, display, and move
between web pages, a web browser will usually have
features like keeping bookmarks, recording history,
managing cookies (see below), and home pages, and may
have facilities for recording passwords for logging into web
sites.

The most popular browsers are Chrome, Firefox, Safari,
Internet Explorer, and Edge.
Server
A Web server is server software, or hardware dedicated to running said
software, that can satisfy World Wide Web client requests. A web server
can, in general, contain one or more websites. A web server processes
incoming network requests over HTTP and several other related protocols.

The primary function of a web server is to store, process and deliver web
pages to clients. The communication between client and server takes place
using the Hypertext Transfer Protocol (HTTP). Pages delivered are most
frequently HTML documents, which may include images, style sheets and
scripts in addition to the text content.

A user agent, commonly a web browser or web crawler, initiates communication by making a request for a specific resource using HTTP, and the server responds with the content of that resource or with an error message if it is unable to do so. The resource is typically a real file on the server's secondary storage, but this is not necessarily the case and depends on how the web server is implemented.

While the primary function is to serve content, a full implementation of HTTP also includes ways of receiving content from clients. This feature is used for submitting web forms, including uploading of files.
Many generic web servers also support server-side scripting
using Active Server Pages (ASP), PHP (Hypertext
Preprocessor), or other scripting languages. This means that
the behavior of the web server can be scripted in separate
files, while the actual server software remains unchanged.
Usually, this function is used to generate HTML documents
dynamically ("on-the-fly") as opposed to returning static
documents. The former is primarily used for retrieving or
modifying information from databases. The latter is typically
much faster and more easily cached but cannot deliver
dynamic content.
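
The request/response cycle and "on-the-fly" generation described above can be sketched with Python's standard-library `http.server`. The handler below generates a small HTML document for every request instead of reading a static file from disk; the hostname, port choice, and page content are all invented for the demonstration, and a production server would look quite different.

```python
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from urllib.request import urlopen

class Handler(BaseHTTPRequestHandler):
    """Generates a small HTML document on the fly for every GET request,
    instead of returning a static file from disk."""
    def do_GET(self):
        body = f"<html><body><h1>You asked for {self.path}</h1></body></html>"
        data = body.encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

    def log_message(self, *args):  # keep the demo quiet
        pass

# Port 0 asks the OS for any free port; the server runs in a background thread.
server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Play the part of the user agent: request a resource and read the response.
url = f"http://127.0.0.1:{server.server_address[1]}/hello"
with urlopen(url) as resp:
    page = resp.read().decode("utf-8")
print(page)
server.shutdown()
```

Note how the client never sees whether the page came from a file or was generated on the fly; that distinction is entirely inside the server.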

Web servers can also frequently be found embedded in


devices such as printers, routers, webcams and serving only
a local network. The web server may then be used as a part
of a system for monitoring or administering the device in
question. This usually means that no additional software has
to be installed on the client computer since only a web
browser is required (which now is included with most
operating systems).
Cookie
An HTTP cookie (also called web cookie, Internet cookie, browser cookie, or simply cookie) is a small
piece of data sent from a website and stored on the user's computer by the user's web browser while
the user is browsing. Cookies were designed to be a reliable mechanism for websites to remember
stateful information (such as items added in the shopping cart in an online store) or to record the
user's browsing activity (including clicking particular buttons, logging in, or recording which pages
were visited in the past). They can also be used to remember arbitrary pieces of information that the
user previously entered into form fields such as names, addresses, passwords, and credit card
numbers.
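
The mechanics can be sketched with Python's standard-library `http.cookies` module: the server side serializes a cookie into a `Set-Cookie` header value, and the browser side parses that header and later sends the value back. The cookie name and value here are invented for the example.

```python
from http.cookies import SimpleCookie

# Server side: describe one cookie as it would appear in a Set-Cookie header.
server_cookie = SimpleCookie()
server_cookie["session_id"] = "abc123"
server_cookie["session_id"]["path"] = "/"
server_cookie["session_id"]["httponly"] = True  # hide it from page scripts
header = server_cookie["session_id"].OutputString()
print(header)  # e.g. session_id=abc123; HttpOnly; Path=/

# Browser side: store the cookie and read its value back for later requests.
browser_jar = SimpleCookie()
browser_jar.load(header)
print(browser_jar["session_id"].value)  # abc123
```

On every subsequent request to the same site, the browser echoes the stored value back in a `Cookie` header, which is how the server recognizes the returning user.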
Cookies perform essential functions in the modern web. Perhaps most importantly, authentication
cookies are the most common method used by web servers to know whether the user is logged in or
not, and which account they are logged in with. Without such a mechanism, the site would not know
whether to send a page containing sensitive information or require the user to authenticate
themselves by logging in. The security of an authentication cookie generally depends on the security
of the issuing website and the user's web browser, and on whether the cookie data is encrypted.
Security vulnerabilities may allow a cookie's data to be read by a hacker, used to gain access to user
data, or used to gain access (with the user's credentials) to the website to which the cookie belongs
(see cross-site scripting and cross-site request forgery for examples).
Tracking cookies, and especially third-party tracking cookies, are commonly used as ways to compile
long-term records of individuals' browsing histories – a potential privacy concern that prompted
European and U.S. lawmakers to take action in 2011. European law requires that all websites targeting
European Union member states gain "informed consent" from users before storing non-essential
cookies on their device.
Google Project Zero researcher Jann Horn describes ways cookies can be read by intermediaries, like
Wi-Fi hotspot providers. He recommends using the browser in incognito mode in such circumstances.
Search Engine
A web search engine or Internet search engine is a
software system that is designed to carry out web search
(Internet search), which means to search the World Wide
Web in a systematic way for particular information
specified in a web search query. The search results are
generally presented in a line of results, often referred to
as search engine results pages (SERPs). The information
may be a mix of web pages, images, videos, infographics,
articles, research papers, and other types of files. Some
search engines also mine data available in databases or
open directories. Unlike web directories, which are
maintained only by human editors, search engines also
maintain real-time information by running an algorithm on
a web crawler. Internet content that is not capable of
being searched by a web search engine is generally
described as the deep web.
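
At the heart of a search engine is an inverted index built from crawled pages: each word maps to the set of pages that contain it, so a query can be answered by intersecting sets rather than rescanning the web. The sketch below illustrates the idea over a handful of in-memory pages with made-up URLs standing in for a real crawl.

```python
from collections import defaultdict

# A handful of "crawled" pages standing in for the web (hypothetical URLs).
pages = {
    "https://example.com/python":  "Python is a programming language",
    "https://example.com/spiders": "A web crawler is also called a spider",
    "https://example.com/snakes":  "The python is a snake, not a language",
}

# Build the inverted index: each word maps to the pages containing it.
index = defaultdict(set)
for url, text in pages.items():
    for word in text.lower().split():
        index[word.strip(",.")].add(url)

def search(query):
    """Return pages containing every word of the query (AND semantics)."""
    words = query.lower().split()
    results = set(index.get(words[0], set()))
    for w in words[1:]:
        results &= index.get(w, set())
    return sorted(results)

print(search("python language"))
# ['https://example.com/python', 'https://example.com/snakes']
```

Real engines add much more on top of this, notably ranking the matching pages by relevance, but the index-then-intersect core is the same.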
Deep Web
The deep web, invisible web, or hidden web are parts of the World Wide Web whose contents are not
indexed by standard web search engines. The opposite term to the deep web is the surface web,
which is accessible to anyone using the Internet. Computer scientist Michael K. Bergman is credited
with coining the term deep web in 2001 as a search indexing term.

The content of the deep web is hidden behind HTTP forms and includes many very common uses such as webmail, online banking, and services that users must pay for and that are protected by a paywall, such as video on demand and some online magazines and newspapers.

The content of the deep web can be located and accessed by a direct URL or IP address, and may
require a password or other security access past the public website page.
Caching
A web cache is a server computer located either on the public
Internet or within an enterprise that stores recently accessed
web pages to improve response time for users when the same
content is requested within a certain time after the original
request. Most web browsers also implement a browser cache by
writing recently obtained data to a local data storage device.
HTTP requests by a browser may ask only for data that has
changed since the last access. Web pages and resources may
contain expiration information to control caching to secure
sensitive data, such as in online banking, or to facilitate
frequently updated sites, such as news media. Even sites with
highly dynamic content may permit basic resources to be
refreshed only occasionally. Web site designers find it
worthwhile to collate resources such as CSS data and JavaScript
into a few site-wide files so that they can be cached efficiently.
Enterprise firewalls often cache Web resources requested by one
user for the benefit of many users. Some search engines store
cached content of frequently accessed websites.
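
The expiration behaviour described above can be sketched as a tiny time-to-live cache: an entry is served without refetching until its lifetime elapses. The class and the stand-in `fetch` function below are invented for illustration; real web caches honour HTTP headers such as `Cache-Control` and `Expires` rather than a single fixed TTL.

```python
import time

class TTLCache:
    """A minimal expiring cache: entries are served until their
    time-to-live elapses, loosely mimicking web-cache expiry."""
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}  # url -> (content, time stored)

    def get(self, url, fetch):
        """Return cached content for url, calling fetch(url) only on a
        miss or after the entry has expired."""
        entry = self._store.get(url)
        if entry is not None and time.monotonic() - entry[1] < self.ttl:
            return entry[0]                    # cache hit: no fetch
        content = fetch(url)                   # miss or expired: refetch
        self._store[url] = (content, time.monotonic())
        return content

fetches = []
def fetch(url):                                # stands in for a real HTTP GET
    fetches.append(url)
    return f"<html>content of {url}</html>"

cache = TTLCache(ttl_seconds=60)
cache.get("https://example.com/", fetch)       # first request hits the "network"
cache.get("https://example.com/", fetch)       # second is served from the cache
print(len(fetches))  # 1
```

The second request never reaches the origin server, which is exactly the response-time and bandwidth saving that browser and enterprise caches provide.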
ACTIVITY
1. Make a list of your top 10 favorite websites
and identify the reasons for choosing them.
2. Search online for an application that has both
a mobile and a desktop/web version.
Compare and contrast the versions in terms
of content, layout, and design.
3. List at least 5 examples of search engines.
Deadline: October 8, 2024
Thank You!