WT-Unit 1.1

The document provides information on web technology, including the history and development of the Internet, World Wide Web, and web browsers. It discusses how the Internet and TCP/IP protocols were developed in the 1960s-1970s from the ARPANET project. Tim Berners-Lee later invented the World Wide Web in 1989-1990 to share information globally. Early web browsers included Viola in 1992 and Mosaic in 1993, followed by Netscape Navigator and Internet Explorer bringing the web mainstream. Protocols like IP, ICMP, and TCP were developed to enable communication over the internet.


Web Technology

Unit 1

Notes
World Wide Web:- The World Wide Web (abbreviated WWW or the Web) is
an information space where documents and other web resources are identified
by Uniform Resource Locators (URLs), interlinked by hypertext links, and can be
accessed via the Internet.

Internet:- The Internet is the global system of interconnected computer
networks that use the Internet protocol suite (TCP/IP) to link devices worldwide.
It is a network of networks that consists of private, public, academic, business,
and government networks of local to global scope, linked by a broad array of
electronic, wireless, and optical networking technologies.

The Internet carries an extensive range of information resources and services,
such as the inter-linked hypertext documents and applications of the World
Wide Web (WWW), telephony, and file sharing.
History of internet
The Internet had its roots in the 1960s as a project of the United States
Department of Defense to create a decentralized network.
This project, ARPANET (Advanced Research Projects Agency Network), was
created by the Pentagon's Advanced Research Projects Agency and
established in 1969 to provide a secure and survivable communications
network for organizations engaged in defense-related research.

To make the network more global, a new, sophisticated standard protocol
was needed. Researchers developed IP (Internet Protocol) technology, which
defined how electronic messages were packaged, addressed, and sent over
the network. The resulting standard, demonstrated in 1977, was called
TCP/IP (Transmission Control Protocol/Internet Protocol). TCP/IP allowed users
to link various branches of other complex networks directly to the ARPANET,
which soon came to be called the Internet.
Researchers and academics in other fields began to make use of the network, and
eventually the National Science Foundation (NSF), which had created a similar and
parallel network, called NSFNet, took over much of the TCP/IP technology from
ARPANET and established a distributed network of networks capable of handling far
greater traffic. In 1985, NSF began a program to establish Internet access across the
United States.

They created a backbone called the NSFNET and opened their doors to all
educational facilities, academic researchers, government agencies, and international
research organizations. By the 1990's the Internet experienced explosive growth. It is
estimated that the number of computers connected to the Internet was doubling
every year.

Businesses rapidly realized that, by making effective use of the Internet, they could
tune their operations and offer new and better services to their customers, so they
started spending vast amounts of money to develop and enhance the Internet.
This generated intense competition among the communications carriers
and hardware and software suppliers to meet this demand. The result is that
bandwidth (i.e., the information-carrying capacity of communications lines) on
the Internet has increased tremendously and costs have dropped. It is widely
believed that the Internet has played a significant role in the economic success
of many businesses.
History of World Wide Web
The World Wide Web (WWW) allows computer users to locate and view
multimedia-based documents (i.e., documents with text, graphics, animations, audio
and/or video) on almost any subject.

In 1980, Tim Berners-Lee of CERN (the European Laboratory for Particle Physics)
was working on a project known as 'Enquire'. Enquire was a simple database of the
people and software projects at the laboratory. It was during this
project that he experimented with hypertext.

Hypertext is text, displayed on a device, that contains hyperlinks to other text. The
Enquire system used hyperlinks on each page of the database, with each page
referencing other relevant pages within the system.

Berners-Lee was a physicist who needed to share information with other physicists
around the world, and he found that there was no quick and easy way of doing so.
With this in mind, in 1989 he put together a proposal for a database of
documents containing links to other documents.
This would have been the perfect solution for Tim and his colleagues, but it turned
out nobody was interested and nobody took any notice - except for one person.
Tim's boss liked his idea and encouraged him to implement it in their next project.

This new system went by a few different names, such as TIM (The Information Mine),
which was turned down because it echoed Tim's own name. After a few suggestions,
one name stuck: the World Wide Web. By December 1990 Tim had
joined forces with another physicist, Robert Cailliau, who rewrote Tim's original
proposal. Their vision was to combine hypertext with the Internet to create web
pages, but no one at the time could appreciate how successful the idea would become.

Despite the little interest, Berners-Lee continued to develop three major components for
the web: HTTP, HTML, and the world's first web browser. Funnily enough, this browser
was also called "WorldWideWeb", and it doubled as an editor.
Shortly afterwards other browsers were released, each bringing differences and
improvements. Let's take a look at some of these browsers.

• Line Mode Browser - February 1992. Also created by Berners-Lee, this was
the first browser to support multiple platforms.

• ViolaWWW Browser - March 1992. This is widely suggested to be the
world's first popular browser. It brought with it a stylesheet and scripting
language, long before JavaScript and CSS.

• Mosaic Browser - January 1993. Developed at the University of Illinois,
Mosaic was very highly rated and popular at the time of its launch.

• Cello Browser - June 8th, 1993. This was the first browser available
for Windows.

• Netscape Navigator 1.1 - March 1995. This was the first browser to
introduce tables to HTML.

• Opera 1.0 - April 1995. This was originally a research project for a
Norwegian telephone company. The browser is still available today and is
currently at version 12.

• Internet Explorer 1.0 - August 1995. Microsoft decided to get in on
the act when its Windows 95 operating system was released; this was the
browser that ran exclusively on it.
Protocols Governing Web
In networking, a protocol is a set of rules for formatting and processing data. Network
protocols are like a common language for computers. The computers within a
network may use vastly different software and hardware; however, the use of
protocols enables them to communicate with each other regardless.

1. Internet Protocol (IP)

The Internet Protocol (IP) is a network-layer (Layer 3) protocol that contains addressing
information and some control information that enables packets to be routed. IP is
documented in RFC 791 and is the primary network-layer protocol in the Internet protocol
suite. Along with the Transmission Control Protocol (TCP), IP represents the heart of the
Internet protocols. IP has two primary responsibilities: providing connectionless, best-effort
delivery of datagrams through an internetwork; and providing fragmentation and reassembly
of datagrams to support data links with different maximum-transmission unit (MTU) sizes.
Each host on a TCP/IP network is assigned a unique 32-bit logical address that is
divided into two main parts: the network number and the host number.
The network number identifies a network and must be assigned by the Internet
Network Information Center (InterNIC) if the network is to be part of the Internet. An
Internet Service Provider (ISP) can obtain blocks of network addresses from the
InterNIC and can itself assign address space as necessary. The host number identifies a
host on a network and is assigned by the local network administrator.
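The network-number/host-number split can be illustrated with Python's standard ipaddress module; the address and /24 prefix below are made-up illustration values:

```python
import ipaddress

# A hypothetical host address on a /24 network: the first 24 bits are
# the network number, the remaining 8 bits are the host number.
iface = ipaddress.ip_interface("192.168.10.37/24")

network = iface.network
host_bits = 32 - network.prefixlen

print(network)                # -> 192.168.10.0/24  (the network number)
print(iface.ip)               # -> 192.168.10.37    (this host's address)
print(host_bits)              # -> 8
print(network.num_addresses)  # -> 256
```

With 8 host bits, this network can number 2^8 = 256 addresses (two of which are conventionally reserved for the network and broadcast addresses).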

2. Internet Control Message Protocol (ICMP)


The Internet Control Message Protocol (ICMP) is a network-layer Internet protocol
that provides message packets to report errors and other information regarding IP
packet processing back to the source. ICMP is documented in RFC 792. ICMPs
generate several kinds of useful messages, including Destination Unreachable, Echo
Request and Reply, Redirect, Time Exceeded, and Router Advertisement and Router
Solicitation. If an ICMP message cannot be delivered, no second one is generated,
to avoid an endless flood of ICMP messages. When an ICMP destination-unreachable
message is sent by a router, it means that the router is unable to deliver
the packet to its final destination. The router then discards the original packet.
An ICMP echo-request message, which is generated by the ping command, is sent by any host to test
node reachability across an internetwork. The ICMP echo-reply message indicates that the node can
be successfully reached.

An ICMP Redirect message is sent by the router to the source host to stimulate more efficient routing.
The router still forwards the original packet to the destination. ICMP redirects allow host routing tables
to remain small because it is necessary to know the address of only one router, even if that router
does not provide the best path.
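As a sketch of what the ping command builds, the following constructs an ICMP echo-request message (type 8) with the standard Internet checksum. Actually transmitting it would require a raw socket and administrator privileges, so only the packet bytes are built here; the identifier and sequence values are arbitrary:

```python
import struct

def inet_checksum(data: bytes) -> int:
    """RFC 1071 Internet checksum: one's-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"                       # pad to an even length
    total = sum(struct.unpack(f"!{len(data) // 2}H", data))
    total = (total >> 16) + (total & 0xFFFF)  # fold the carry bits
    total += total >> 16
    return ~total & 0xFFFF

def build_echo_request(ident: int, seq: int, payload: bytes) -> bytes:
    # Type 8 (Echo Request), code 0; the checksum covers the whole message.
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)
    csum = inet_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload

pkt = build_echo_request(ident=0x1234, seq=1, payload=b"ping")
# A well-formed ICMP message checksums to zero when re-verified:
print(inet_checksum(pkt))  # -> 0
```

A host receiving this would answer with an echo-reply (type 0) carrying the same identifier, sequence number, and payload.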

3. Transmission Control Protocol (TCP)


The TCP provides reliable transmission of data in an IP environment. TCP corresponds to the transport
layer (Layer 4) of the OSI reference model. Among the services TCP provides are stream data transfer,
reliability, efficient flow control, full-duplex operation, and multiplexing. With stream data transfer,
TCP delivers an unstructured stream of bytes identified by sequence numbers. This service benefits
applications because they do not have to chop data into blocks before handing it off to TCP. Instead,
TCP groups bytes into segments and passes them to IP for delivery. TCP offers reliability by providing
connection-oriented, end-to-end reliable packet delivery through an internetwork. A time-out
mechanism allows devices to detect lost packets and request retransmission. TCP offers efficient flow
control, which means that, when sending acknowledgments back to the source, the receiving TCP
process indicates the highest sequence number it can receive without overflowing its internal buffers.
Full-duplex operation means that TCP processes can both send and receive at the same time. Finally,
TCP’s multiplexing means that numerous simultaneous upper-layer conversations can be multiplexed
over a single connection.
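A minimal sketch of these services in practice, using Python sockets on the loopback interface (the echoed text is arbitrary): the client writes its data in two pieces, and TCP preserves byte order but not the write boundaries, so the receiver loops until end-of-stream:

```python
import socket
import threading

# Minimal connection-oriented echo service.
def echo_server(server_sock: socket.socket) -> None:
    conn, _addr = server_sock.accept()      # wait for one connection
    with conn:
        while True:
            data = conn.recv(1024)
            if not data:                    # empty read = peer closed its side
                break
            conn.sendall(data)              # echo the byte stream back

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))               # port 0: let the OS pick a free port
server.listen(1)
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(server.getsockname())
client.sendall(b"hello, ")                  # two separate writes...
client.sendall(b"tcp")
client.shutdown(socket.SHUT_WR)             # done sending; can still read

received = b""
while True:                                 # ...arrive as one ordered byte
    chunk = client.recv(1024)               # stream with no message boundaries
    if not chunk:
        break
    received += chunk
client.close()
print(received)  # -> b'hello, tcp'
```

Note that the application never chops its data into protocol blocks; segmentation, acknowledgment, and retransmission all happen inside TCP.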
4. The User Datagram Protocol (UDP)
UDP is a connectionless transport-layer protocol (Layer 4) that belongs to the Internet protocol
family. UDP is basically an interface between IP and upper-layer processes. UDP protocol ports
distinguish multiple applications running on a single device from one another. Unlike the TCP, UDP
adds no reliability, flow-control, or error-recovery functions to IP. Because of UDP’s simplicity, UDP
headers contain fewer bytes and consume less network overhead than TCP. UDP is useful in situations
where the reliability mechanisms of TCP are not necessary, such as in cases where a higher-layer
protocol might provide error and flow control. The UDP packet format contains four
fields: source and destination ports, length, and checksum.
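Those four header fields fit in eight bytes, which can be packed directly; the port numbers and payload below are arbitrary illustration values (a checksum of zero means "not computed", which IPv4 permits):

```python
import struct

# The complete UDP header is just these four 16-bit fields.
def build_udp_header(src_port: int, dst_port: int, payload: bytes) -> bytes:
    length = 8 + len(payload)   # header (8 bytes) plus data
    checksum = 0                # 0 = "no checksum computed" (allowed over IPv4)
    return struct.pack("!HHHH", src_port, dst_port, length, checksum)

# Arbitrary example: a 5-byte payload from port 5353 to port 53.
header = build_udp_header(5353, 53, b"query")
src, dst, length, csum = struct.unpack("!HHHH", header)
print(src, dst, length, csum)  # -> 5353 53 13 0
```

The small fixed header is exactly why UDP consumes less network overhead than TCP, whose header is at least 20 bytes.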

5. Simple Mail Transfer Protocol (SMTP)


SMTP is an Internet standard for electronic mail (email) transmission. Although electronic mail
servers and other mail transfer agents use SMTP to send and receive mail messages, user-level client
mail applications typically use SMTP only for sending messages to a mail server for relaying. For
retrieving messages, client applications usually use either IMAP or POP3. SMTP communication
between mail servers uses TCP port 25. Mail clients on the other hand, often submit the outgoing
emails to a mail server on port 587. Despite being deprecated, the nonstandard port 465 is
sometimes still permitted by mail providers for this purpose. SMTP connections secured by TLS are
known as SMTPS. Although proprietary systems (such as Microsoft Exchange and IBM Notes) and
webmail systems (such as Outlook.com, Gmail and Yahoo! Mail) use their own non-standard protocols
to access mailbox accounts on their own mail servers, all use SMTP when sending or receiving email from outside
their own systems.
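A sketch of client-side submission with Python's smtplib: the addresses, server name, and credentials are all hypothetical, so the actual network exchange is shown but left commented out:

```python
import smtplib
from email.message import EmailMessage

# Build a message; every address and host name here is hypothetical.
msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"
msg["Subject"] = "Hello over SMTP"
msg.set_content("Sent via the submission port.")

# Typical client submission: port 587 with STARTTLS. This needs a real
# mail server, so it is shown but not executed in this sketch:
# with smtplib.SMTP("mail.example.com", 587) as s:
#     s.starttls()
#     s.login("alice", "app-password")
#     s.send_message(msg)

print(msg["Subject"])  # -> Hello over SMTP
```

Retrieving the delivered message on Bob's side would use IMAP or POP3 rather than SMTP, as described above.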
6. File Transfer Protocol (FTP)
The File Transfer Protocol (FTP) is a standard network protocol used for the transfer of
computer files between a client and server on a computer network. FTP is built on a
client-server model architecture and uses separate control and data connections
between the client and the server. FTP users may authenticate themselves with a clear-
text sign-in protocol, normally in the form of a username and password, but can
connect anonymously if the server is configured to allow it.
For secure transmission that protects the username and password, and encrypts the
content, FTP is often secured with SSL/TLS (FTPS). FTP may run in active or passive
mode, which determines how the data connection is established. In both cases, the
client creates a TCP control connection from a random, usually an unprivileged, port N
to the FTP server command port 21.
In active mode, the client starts listening for incoming data connections from the
server on port M. It sends the FTP command PORT M to inform the server on which
port it is listening. The server then initiates a data channel to the client from its port
20, the FTP server data port. In situations where the client is behind a firewall and
unable to accept incoming TCP connections, passive mode may be used. In this mode,
the client uses the control connection to send a PASV command to the server and then
receives a server IP address and server port number from the server, which the client
then uses to open a data connection from an arbitrary client port to the server IP
address and server port number received.
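The PASV reply encodes the server's data-connection endpoint as six comma-separated numbers: four for the IP address and two for the port, with the port split into a high and a low byte. A small parser, applied to a made-up example reply:

```python
import re

# Parse "227 Entering Passive Mode (h1,h2,h3,h4,p1,p2)" into (ip, port),
# where the data port is p1 * 256 + p2.
def parse_pasv(reply: str) -> tuple[str, int]:
    nums = re.search(r"\((\d+),(\d+),(\d+),(\d+),(\d+),(\d+)\)", reply)
    if nums is None:
        raise ValueError("not a PASV reply")
    h1, h2, h3, h4, p1, p2 = map(int, nums.groups())
    return f"{h1}.{h2}.{h3}.{h4}", p1 * 256 + p2

# Example reply (address and port values are invented for illustration):
ip, port = parse_pasv("227 Entering Passive Mode (192,168,1,9,197,143)")
print(ip, port)  # -> 192.168.1.9 50575
```

The client would then open its data connection to that address and port, which works even when the client's firewall blocks incoming connections.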
Hypertext Transfer Protocol (HTTP)

The Hypertext Transfer Protocol (HTTP) is an application protocol for distributed,
collaborative, and hypermedia information systems. HTTP is the foundation of data
communication for the World Wide Web.

Hypertext is structured text that uses logical links (hyperlinks) between nodes containing
text. HTTP is the protocol to exchange or transfer hypertext.

Development of HTTP was initiated by Tim Berners-Lee at CERN in 1989. Standards
development of HTTP was coordinated by the Internet Engineering Task Force (IETF) and the
World Wide Web Consortium (W3C), culminating in the publication of a series of Requests
for Comments (RFCs).

HTTP functions as a request–response protocol in the client–server computing model.

A web browser, for example, may be the client and an application running on a computer
hosting a website may be the server.
HTTP is designed to permit intermediate network elements to improve or enable
communications between clients and servers.

High-traffic websites often benefit from web cache servers that deliver content
on behalf of upstream servers to improve response time.

Web browsers cache previously accessed web resources and reuse them when
possible to reduce network traffic.

HTTP proxy servers at private network boundaries can facilitate communication
for clients without a globally routable address, by relaying messages with
external servers.
HTTP is an application layer protocol designed within the framework of the
Internet protocol suite.

Its definition presumes an underlying and reliable transport layer protocol, and
Transmission Control Protocol (TCP) is commonly used.

However, HTTP can be adapted to use unreliable protocols such as the User
Datagram Protocol (UDP), for example in HTTPU and the Simple Service Discovery
Protocol (SSDP).
The client submits an HTTP request message to the server. The server, which provides
resources such as HTML files and other content, or performs other functions on behalf
of the client, returns a response message to the client.

The response contains completion status information about the request and may also
contain requested content in its message body.

A web browser is an example of a user agent (UA). Other types of user agent include
the indexing software used by search providers (web crawlers), voice browsers,
mobile apps, and other software that accesses, consumes, or displays web content.
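The request-response cycle can be sketched end to end with Python's standard library, serving a one-page site on the loopback interface (the handler content is purely illustrative):

```python
import http.client
import http.server
import threading

# A tiny server so the request-response exchange can run end to end.
class Hello(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<h1>hello</h1>"
        self.send_response(200)                        # status line
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)                         # message body

    def log_message(self, *args):                      # keep the demo quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Hello)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client (user agent) submits a request message and reads back the
# response: completion status plus the requested content in the body.
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/")
resp = conn.getresponse()
body = resp.read()
print(resp.status, resp.reason)  # -> 200 OK
print(body)                      # -> b'<h1>hello</h1>'
server.shutdown()
```

Under the hood this runs over a TCP connection, matching HTTP's assumption of a reliable transport.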
Telnet
Telnet is a user command and an underlying TCP/IP protocol for accessing
remote computers.

Through Telnet, an administrator or another user can access someone else's
computer remotely.

On the Web, HTTP and FTP protocols allow you to request specific files from
remote computers, but not to actually be logged on as a user of that computer.

With Telnet, you log on as a regular user with whatever privileges you may have
been granted to the specific application and data on that computer.
A Telnet command request looks like this (the computer name is made up):

telnet the.libraryat.whatis.edu

The result of this request would be an invitation to log on with a userid and a
prompt for a password. If accepted, you would be logged on like any user
who used this computer every day.

Telnet is most likely to be used by program developers and anyone who has a
need to use specific applications or data located at a particular host
computer.
Writing Web Projects
Developing a web project is a crucial activity, and web project development differs from
traditional software projects. The phases of writing a web project are:

1. Write a project mission statement: State specifically what the project intends to
do. This focuses on the following three tasks
- Identify project’s objectives
- Identify users
- Determine the scope of the project
A mission statement describes the solution to the problem. It answers the following three
questions
- What are we going to do?
- For whom are we doing it?
- How do we go about it?
(2) Identify objectives:-
Objectives are the results the project should achieve.
Each objective should be
- Specific
- Measurable
- Attainable
- Realistic
- Time-limited

(3) Identify target users:-
Identifying target users requires market research.
Focus group:-
a group of people who represent your target users and give feedback on the web site.
Internet audience:-
The web site is accessed directly by customers, and its content changes accordingly.
The type of user depends on the content of the web site, so research is needed to identify the site's users.

4. Determine the scope:

The scope is fixed through supporting documents and the client's approval. The scope of the
project is related to time and cost: of scope, time, and cost, if one changes, it affects the other two.
Writing the scope document
Scope document:-
- restates the mission and objectives of the web project, along with the
features you will build to meet the project goal.
- should be easy to follow and must include a sign-off page, i.e. an
agreement on the features of the web site.
- may include visual sketches of the interface, data models of the
web site, or hardware and software specifications.

For scope, determine the key elements of the web site and the cost of each key element.

Break the components down and analyze each task for time, cost, and
completion, so that the budget and schedule will not overrun.
(5) The Budget
- a well-defined project with a price.
- the better your project scope, the more accurate your budget.
- refine the budget as the project proceeds.
- specify the price of each task so that no alteration is needed later.
- specify any assumptions.
- make provision for hidden costs such as team meetings, phone calls, e-mail, etc.

(6) Tools
- Specify tools needed to develop the project.

(7) More Preliminary Planning Issues

- Related to the infrastructure of the organization.
- Setting up internet access on the client site.
- All legal and non-disclosure agreements (NDAs) for sign-off (i.e. agreement).
- Develop an email convention for your subject lines.
- Set up an email account for the project.
Finding the people you need:
- In a large organization, in-house people may be used for web
development, or people may be hired with the skills the web site requires.
Two important members of a web team
Creative lead:-
develops the visual design of the web site.
Technical lead:- responsible for setting up your web site's
network infrastructure and hiring the right people to build the web
site.
Connecting to Internet
It’s possible to connect to the internet via a range of devices these days — though desktop
and laptop computers, mobile phones and tablets are the most common.

However, everyday items such as watches, even central heating systems and refrigerators,
are now capable of using the internet.

In order for any device to actually get online, though, you need to sign up for a specialized
service for accessing the internet.

These internet access services are generally of two types: internet fixed to a specific
location and provided by internet service providers, or mobile internet that can be used
out and about, which are provided by mobile phone networks.

People use one or the other (or both) types of internet access — fixed or mobile —
depending on the device they’re using, their immediate environment and budgets.
Connecting to the internet requires two key ingredients:

1. A device capable of connecting to the internet.


2. Access to an internet service that will allow that device to get connected.

Basically, there are many types of both of the above things.

In other words, it's possible to connect to the internet on an ever-increasing
range of devices. Plus, there are also quite a few different types of services that
allow these devices to get online.
Devices that can connect to the internet :

The most common devices people use today to get online, include:
• Desktop computers
• Laptop computers
• Mobile phones
• Tablets
• E-readers

However, the range of devices capable of connecting to the internet is ever-expanding
and shifting our understanding of what "being online" means.
There are two key types of service that can provide you with internet access.
They are:

• Fixed internet
• Mobile internet

Fixed internet

As the name suggests, this is an internet connection that is fixed to a specific
location (such as a home, office or shop) — meaning that the internet
connection is unique to that property, and as such you can only access it when
you're physically situated there.
Today, the three most common types of fixed internet connection are:

ADSL broadband
ADSL (Asymmetric Digital Subscriber Line) is a technology for transmitting digital
information at a high bandwidth on existing phone lines to homes and businesses.
Unlike regular dialup phone service, ADSL provides a continuously available, "always on"
connection.

Cable broadband

Instead of using a phone line as ADSL does, cable broadband establishes an internet
connection via a specialised cable that shares the same line as your TV service. Cable
broadband generally offers higher speeds than ADSL connections (average download
speeds of around 50.5 Mbps), but as a cable broadband connection is often shared with many
other users, speeds can suffer from time to time due to congestion during peak times.
Fibre broadband
The most recently rolled out form of internet connection in the UK (and therefore still not that widely available) is fibre
broadband. Fibre broadband claims to offer more consistent and reliable speeds than cable and ADSL (average download
speeds of around 59.4 Mbps) — allowing multiple devices to perform high-capacity tasks simultaneously, without any
slowdowns or breakages in the connection, making it an attractive proposition for busy family homes or office environments.

Mobile internet
Mobile internet is a way of getting online anywhere without relying on a fixed-location connection — as the name suggests,
by using your mobile device.
Mobile phone operators provide access to this alternative method of internet usage. When you sign up to a mobile phone
operator’s services — either on a contract or pay-as-you-go basis — you can include access to a certain amount of data
(measured in megabytes), allowing you to use your mobile device to connect to the internet within that capped usage limit.
Mobile internet is currently offered at two different speeds and capability levels:

3G mobile internet: has been around for many years and typically offers basic access and download speeds that allow users
to complete basic tasks such as load a web page or access an email. 3G mobile internet is gradually being replaced by 4G
services.

4G mobile internet: is the more recently available level of mobile internet, offering much higher speeds than 3G. In
fact, due to excellent connection and download speeds, 4G might eventually replace fixed internet connections in more rural
parts of the country that may struggle to get access to quicker connections.

5G mobile internet: is the proposed next telecommunications standard beyond the current 4G advanced standards.
Introduction to client-server computing
The client/server model brings a logical perspective to distributed
cooperative processing, in which a server handles and processes all client requests.
It can also be viewed as a revolutionary milestone for the data processing
industry.

“Client/server computing is the most effective source for the tools that
empower employees with authority and responsibility.”

Workstation power, workgroup empowerment, preservation of existing
investments, remote network management, and market-driven business are the
forces creating the need for client/server computing.
Client/server computing has progressed through the computer industry,
leaving no area or corner untouched.

Development of client/server applications often requires hybrid skills,
including database design, transaction processing, communications,
and graphical user interface design and development. Advanced
applications require expertise in distributed objects and component
infrastructures.

The most commonly found client/server strategy today is the PC LAN implementation
optimized for group/batch usage. This has given rise to
many new distributed enterprises, as it eliminates host-centric computing.
What Is A Client/Server?

Client
A client is a program on a machine that sends a
request for a resource on a server.

Server
A server is a program on a machine that receives
the request and sends the response back to the client.

Client/Server computing provides an environment
that enhances business procedures by
appropriately synchronizing the application
processing between the client and the server.
Often clients and servers communicate over a computer network on separate
hardware, but both client and server may reside in the same system. A server
host runs one or more server programs, which share their resources with
clients.

A client usually does not share any of its resources, but it requests content or
service from a server. Clients, therefore, initiate communication sessions with
servers, which await incoming requests. Examples of computer applications
that use the client-server model are email, network printing, and the World
Wide Web.
