Chapter One

The Fundamentals
Overview of the Internet
 The Internet is a network of networks (i.e., when two or more devices are connected to share data, resources, or messages, we call it a network).
 The Internet is a collection of computers connected by network cables or satellite links.
 The Internet is a physical collection of routers and circuits, seen as a set of shared resources.
 The Internet is a networking infrastructure that provides services to distributed applications.
 Most of these computing devices are traditional desktop PCs, Unix-based workstations, and so-called servers that store and transmit information such as Web (WWW) pages and e-mail messages.
 Increasingly, non-traditional computing devices such as Web TVs, mobile computers,
pagers, and toasters are being connected to the Internet.
 In Internet jargon, all of these devices are called hosts or end systems.
 End systems, as well as most other ‘pieces’ of the Internet, run protocols that control the
sending and receiving of information within the Internet.
 TCP (Transmission Control Protocol) and IP (Internet Protocol) are two of the most
important protocols in the Internet.
 End systems are connected together by communication links.
 No company owns the Internet (i.e., there is no central administration or control); it is a cooperative effort governed by a system of standards and rules.
 Internet Service Providers (ISPs) provide Internet connectivity. These corporations dedicate computers to act as servers, that is, they make information (such as Web pages or e-mail) available to users of the Internet.
Origin of the Internet
In 1969, the U.S. Department of Defense (DoD) started a network called ARPANET (Advanced Research Projects Agency Network) with three computers in California and one in Utah.

1|Prepared by: Firomsa K.


Internet-Based Services
E-mail: allows a user to:
 Send a message to just one user or to a group of users,
 Read, print, forward, answer (reply to), or delete a message,
 Attach large documents.
Usenet/Newsgroups:
 A worldwide computer network that encompasses thousands of discussion groups.
 Allows the exchange of news, comments, and questions on a specific topic/issue.
Telnet (remote login):
 Is a way that enables a user to log on to a remote computer and interactively access its resources.
 Uses a special protocol called the Network Terminal Protocol.
 A protocol is the standard set of rules that governs all communication over a computer network.
File Transfer Protocol (FTP):
 Used to download files from FTP sites.

World Wide Web (www)


 Is the leading information retrieval service of the Internet
 Gives users access to a vast array of documents that are connected to each other by means
of hypertext or hypermedia links—i.e., hyperlinks (electronic connections that link
related pieces of information in order to allow a user easy access to them)
 Hypertext allows the user to select a word or phrase from text and thereby access other
documents that contain additional information pertaining to that word or phrase.
 Hypermedia documents feature links to images, sounds, animations, and movies.
 A hypertext document with its corresponding text and hyperlinks is written in HyperText
Markup Language (HTML) and is assigned an online address called a Uniform Resource
Locator (URL).
 Is the most popular application or service of the internet.
 It consists of a worldwide collection of electronic documents.



 An electronic document on the Web is called a Web page, which can contain text,
graphics, animation, audio, and video.
 A web page is a simple document (HTML document) displayable by a browser.
 A web page can embed a variety of different types of resources such as:
 Style information: controls a page's look-and-feel.
 Scripts: add interactivity to the page.
 Media: images, sounds, and videos.

 Web pages could be static (fixed) or dynamic (changing).


 A Web site is a collection of related Web pages stored on the same web server.
 A Web Server is a computer that delivers requested Web pages to your computer. The
same Web server can store multiple Web sites.
World Wide Web is combination of four ideas:
 Hypertext
 Resource identifiers
 Client-server architecture
 Markup language
Hypertext, that is the ability, in a computer environment, to move from one part of a document
to another or from one document to another through internal connections among these
documents (called "hyperlinks");
Resource identifiers, that is, the ability, on a computer network, to locate a particular resource (computer, document, or other resource) on the network through a unique identifier, such as a Uniform Resource Locator (URL);
The client-server model, of computing, in which client software or a client computer makes
requests of server software or a server computer that provides the client with resources or
services, such as data or files;
Markup language, in which characters or codes embedded in text indicate to a computer how to
print or display the text, e.g. as in italics or bold type or font.
• The rule-making body of the Web is called the W3C (World Wide Web Consortium).
• The W3C is a non-profit organization that makes Web standards.
– The best-known Web standards are: HTML, CSS, XML, and XHTML.



Origin of the WWW
 The World Wide Web and its associated HyperText Transfer Protocol (HTTP) grew out of work done at the European Laboratory for Particle Physics (CERN) in 1990.
 Tim Berners-Lee developed HTTP as a network protocol for distributing documents and wrote the first web browser.
 HTTP is a simple request/response protocol in which the web browser asks for a document and the web server returns the document as an HTML data stream preceded by a few descriptive headers.
Static & Dynamic web page
Static (fixed) web page
 Is a web page that is written entirely using HTML.
 Each web page is a separate document and there are no databases or external files that are
drawn upon.
 The only way to edit this type of website is to go into each page and edit the HTML.
 Difficult to manage and maintain as the number of pages grows.
 Hard to modify and move, and difficult to host up-to-date information.
 Visitors to a static Web page all see the same (unchanged) content.
 Served as a pre-built document.
 Not sufficient when content must change per visitor or over time.
Dynamic (changing) web page
 Is written using more complex code — such as PHP or ASP — and has a greater degree
of functionality.
 Can potentially be updated without any change to an HTML document.
 Each page of a dynamic website is generated from information stored in a database or
external file.
 Visitors to a dynamic Web page see the changed content.
 Visitors can customize some or all of the viewed content such as weather for a region, or
ticket availability for flights.
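The difference between a pre-built document and a generated one can be sketched in a few lines. The function below is purely illustrative (the user name, the weather value, and the function name are invented for the example): instead of serving a fixed HTML file, a dynamic site builds the HTML at request time from data, so each visitor or each visit can see different content.

```python
from datetime import datetime

def render_page(username, region_weather):
    # The HTML is assembled when the request arrives, not stored on
    # disk in advance; the inputs would normally come from a database
    # or the visitor's request.
    return (
        "<html><body>"
        f"<h1>Hello, {username}</h1>"
        f"<p>Weather in your region: {region_weather}</p>"
        f"<p>Generated at {datetime.now():%H:%M:%S}</p>"
        "</body></html>"
    )

page = render_page("Abebe", "sunny")
print(page)
```

Calling the same function with different arguments, or at a different time, yields a different page, which is exactly what a static HTML file cannot do.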



Why build web pages dynamically?
 The web page is based on data sent by the client.
Users can submit two kinds of data: explicit (i.e., HTML form data) and implicit (i.e., HTTP request headers). Either kind of input can be used to build the output page.
 The web page is derived from data that change frequently.
The web page uses information from corporate databases or other server-side sources.
Drawbacks of Dynamic Sites
 Cost more to develop, because they require more complex coding.
 You will need to obtain web hosting that supports databases and dynamic languages.
 Need skilled personnel.
SN  Static web page                                          Dynamic web page
1.  Pages remain the same until someone changes them         Content of pages differs for different visitors.
    manually.
2.  Simple in terms of complexity.                           Complicated.
3.  Information is changed rarely.                           Information changes frequently.
4.  Takes less time to load than a dynamic web page.         Takes more time to load.
5.  No database is used.                                     A database is used.
6.  Written in languages such as HTML, JavaScript,           Written in languages such as CGI, AJAX, ASP,
    CSS, etc.                                                ASP.NET, etc.
7.  Does not contain any application program.                Contains application programs for different services.
8.  Requires less work and cost to design.                   Requires comparatively more work and cost to design.



Client-server Architecture
A network architecture in which each computer or process on the network is either a client or a
server. In client server architecture, clients (remote processors) request and receive service from a
centralized server (host computer). Client computers provide an interface to allow a computer user
to request services of the server and to display the results the server returns. Clients are often
situated at workstations or on personal computers, while servers are located elsewhere on the
network, usually on more powerful machines. This computing model is especially effective when
clients and the server each have distinct tasks that they routinely perform.
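The request/response cycle described above can be sketched with plain sockets. This is a minimal illustration, not a real web server: the "service" (upper-casing an echoed message) and the addresses are invented for the example, but the roles match the model, a server waiting for requests and a client that sends a request and displays the result.

```python
import socket
import threading

def serve_one(srv):
    # Server role: wait for a client request, perform the service,
    # return the result.
    conn, _ = srv.accept()
    request = conn.recv(1024).decode()
    conn.sendall(("echo: " + request).upper().encode())
    conn.close()

srv = socket.socket()
srv.bind(("127.0.0.1", 0))          # let the OS pick a free local port
srv.listen(1)
port = srv.getsockname()[1]
threading.Thread(target=serve_one, args=(srv,), daemon=True).start()

# Client role: connect to the server, submit the request, show the reply.
cli = socket.socket()
cli.connect(("127.0.0.1", port))
cli.sendall(b"hello")
reply = cli.recv(1024).decode()
cli.close()
print(reply)
```

Note how the two sides have distinct tasks: the client only formats requests and displays results, while the server does the actual processing, which is the division of labor the client-server model is built on.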
SN  Client                                                   Server
1.  Refers to any aspect of the design process that          Refers to the programs and scripts that work on the
    appears in, or relates directly to, the browser.         server behind the scenes to make web pages
                                                             dynamic and interactive.
2.  Presents an interface to the user.                       A database from which a client requests information.
3.  Gathers information from the user and submits it         Responsible for data storage and management.
    to a server.
4.  The client relies on the services of the server.         The server authorizes the client's requests.
5.  The configuration of client systems is simple.           The configuration of the server is more complex and
                                                             sophisticated.
6.  The efficiency of the client is limited.                 The performance of servers is high, and they are
                                                             highly efficient.
7.  Client systems can be switched off without any           Switching off servers may be disastrous for client
    fear.                                                    systems that continuously request their services.
8.  There can be single-user logins.                         Servers support multiple user logins and process
                                                             requests simultaneously.
9.  Examples of clients are smartphones, desktops,           Examples of servers are web servers, file servers,
    laptops, etc.                                            database servers, etc.
Table 1: Clients vs Servers



Types of client-server Architecture:
Client-server architecture is a distributed system architecture in which the workloads of clients and servers are separated. There are three tiers of client-server architecture.
1. One-tier architecture
In this type of client-server environment the user interface, business logic, and data logic are present in the same system. This kind of client-server service is the cheapest, but it is difficult to handle because of data inconsistency that allows repetition of work.

Fig 1: One-tier client-server architecture


2. Two-tier architecture
In this type of client-server environment, the user interface is stored on the client machine and the databases are stored on the server. Database logic and business logic are stored at either the client or the server, but must remain in one place. Two-tier architecture is useful where a client talks directly to a server; there is no intervening server. The user interface is placed in the user's desktop environment and the DBMS services are usually placed on a server. Information processing is split between the user system interface environment and the database management server environment.

Figure 2: Two-tier architecture



3. Three-tier architecture
In this kind of client-server environment an additional middleware layer is used: the client request goes to the server through that middle layer, and the server's response is first accepted by the middleware and then passed to the client. This architecture overcomes the drawbacks of the two-tier architecture and gives the best performance. It is costly but easy to handle. The middleware stores all the business logic and data-access logic. If there are multiple business-logic and data-logic layers, it is called an n-tier architecture. The purposes of middleware include database staging, queuing, application execution, scheduling, etc. Middleware can be a file server, message server, application server, transaction-processing monitor, etc. It improves flexibility and gives the best performance.

Figure 3: Three-tier architecture

How Does the Web Work?


The stream of events that occurs with every web page that appears on screen:
1. You request a web page by either typing its URL (e.g., http://amazon.com) directly in the browser or by clicking a link on a page. The URL contains all the information needed to target a specific document on a specific web server on the Internet.
2. Your browser sends an HTTP request to the server named in the URL, asking for a specific file. If the URL specifies a directory (not a file), it is the same as requesting the default file in that directory.
3. The server looks for the requested file and sends one of the following HTTP responses:
a) If the page cannot be found, the server returns an error message. The message typically says "404 Not Found".
b) If the document is found, the server retrieves the requested file and returns it to the browser.



4. The browser parses the HTML document. If the page contains images (indicated by the HTML img tag), the browser contacts the server again to request each image file specified in the markup.
5. The browser inserts each image into the document flow where indicated by the img tag. Then the assembled web page is displayed for your viewing pleasure.

Fig 4: How the web works
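Step 4 above, the browser scanning the received HTML for img tags and then requesting each image file, can be sketched in a few lines. The HTML snippet and the naive regular expression here are illustrative only (real browsers use a full HTML parser, not a regex):

```python
import re

# HTML the server returned in step 3 (an invented example document).
html = '<html><body><img src="logo.png"><p>Hi</p><img src="photo.jpg"></body></html>'

# Find every file named by an img tag; each one triggers another
# HTTP request back to the server in step 4.
image_files = re.findall(r'<img\s+src="([^"]+)"', html)
print(image_files)
```

Each entry in the resulting list corresponds to one additional request/response round trip before the assembled page can be displayed.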

Browser and Web Server


• Browser
– A program that retrieves information from the Web.
– Its purpose is to retrieve and display information from a Web server by using
HTTP protocol.
– It allows any user to access a server easily

There are different web browsers available and in use today, and they all come with a variety of features. Some of them are:
 NCSA Mosaic: the first real HTML browser (1993)
 Microsoft Internet Explorer: one of the most commonly found browsers
 Opera: marketed as "the fastest browser on Earth"
 Lynx: a text-based web client
 Netscape, Safari, Google Chrome, etc.



Web Server
 A program that provides documents to clients.
E.g.
 CERN httpd (the first web server)
 Apache (the most widely used web server software)
 IIS (Internet Information Server)
 Netscape Web server

Hypertext Transfer Protocol


HTTP is an application-layer protocol for fetching and transmitting hypermedia resources, such as HTML documents. It is the set of rules for transferring files such as text, images, sound, video, and other multimedia files over the web. HTTP runs on top of the TCP/IP suite of protocols, which forms the foundation of the Internet. Through HTTP, resources are exchanged between client devices and servers over the Internet. HTTP follows a classical client-server model, with a client opening a connection to make a request and then waiting until it receives a response. Whenever the user tries to get information from the server, the browser sends the request to the server using HTTP, and HTTP tells the server that the user is looking for the HTML. HTTP is a stateless protocol, meaning that the server does not keep any data (state) between two requests.
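The request/response exchange described above is just structured text. The sketch below builds an HTTP/1.1 request by hand and parses a canned response (the host, path, and response body are invented for the example) to show the shape of the messages: a request line plus headers going one way, and a status line, headers, a blank line, and the document coming back.

```python
# The browser's request: method, path, protocol version, then headers,
# then a blank line marking the end of the headers.
host, path = "www.example.com", "/index.html"
request = (
    f"GET {path} HTTP/1.1\r\n"
    f"Host: {host}\r\n"
    "\r\n"
)

# The server's response: status line, descriptive headers, blank line,
# then the HTML data stream itself.
response = (
    "HTTP/1.1 200 OK\r\n"
    "Content-Type: text/html\r\n"
    "\r\n"
    "<html><body>Hello</body></html>"
)

status_line, rest = response.split("\r\n", 1)
headers_text, body = rest.split("\r\n\r\n", 1)
print(status_line)   # success or an error code such as 404
print(body)          # the document the browser will parse and render
```

Because every message is self-contained like this, the server needs no memory of previous exchanges, which is what "stateless" means in practice.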

HTTPS protocol is an extension of HTTP. The “S” in the abbreviation comes from the word Secure
and it is powered by Transport Layer Security (TLS) [the successor to Secure Sockets Layer
(SSL)], the standard security technology that establishes an encrypted connection between a web
server and a browser. Without HTTPS, any data you enter into the site (such as your username/password, credit card or bank details, any other form-submission data, etc.) is sent as plaintext and is therefore susceptible to interception or eavesdropping. For this reason, you should always check that a site is using HTTPS before you enter any information.
Evolution of HTTP
HTTP functions as a request–response protocol in the client–server computing model. The Internet
Engineering Task Force (IETF) develops HTTP standards. HTTP has four versions: HTTP/0.9,
HTTP/1.0, HTTP/1.1, and HTTP/2.0.

1. HTTP/0.9: The One-line Protocol
 Initial version of HTTP: a simple client-server, request-response, telnet-friendly protocol
 Request nature: single-line (method + path for requested document)
 Methods supported: GET only
 Response type: hypertext only
 Connection nature: terminated immediately after the response
 No HTTP headers (cannot transfer other content type files), No status/error codes, No URLs,
No versioning
2. HTTP/1.0: Building extensibility
 Browser-friendly protocol
 Provided header fields including rich metadata about both request and response (HTTP
version number, status code, content type)
 Response: not limited to hypertext (Content-Type header provided ability to transmit files
other than plain HTML files — e.g. scripts, stylesheets, and media)
 Methods supported: GET , HEAD , POST
 Connection nature: terminated immediately after the response
3. HTTP/1.1: The standardized protocol
 This is the HTTP version currently in common use.
 Introduced critical performance optimizations and feature enhancements
 Methods supported: GET , HEAD , POST , PUT , DELETE , TRACE , OPTIONS
 Connection nature: long-lived
Generally, HTTP includes ways to:

o Ask for a document by name


o Determine who the user is
o Decide how to handle outdated resources
o Indicate the results of a request
o And other useful functions

HTTP Methods
Two commonly used methods for a request-response between a client and server are: GET and
POST.

 GET - Requests data from a specified resource.
 POST - Submits data to be processed to a specified resource.

The GET Method

– Note that query strings (name/value pairs) are sent in the URL of a GET request:
/test/demo_form.asp?name1=value1&name2=value2

GET requests

– can be cached
– remain in the browser history
– can be bookmarked
– should never be used when dealing with sensitive data
– have length restrictions
– should be used only to retrieve data
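The query string shown above can be built from name/value pairs with the standard library. A brief sketch: because the encoded pairs become part of the URL itself, GET data ends up in the browser history and bookmarks, which is why the method is unsuitable for sensitive data.

```python
from urllib.parse import urlencode

# Name/value pairs as they might come from an HTML form.
pairs = {"name1": "value1", "name2": "value2"}

# urlencode produces the query-string form, which is appended to the
# URL after a "?" in a GET request.
query = urlencode(pairs)
url = "/test/demo_form.asp?" + query
print(url)
```

Anything placed in `pairs` becomes visible in the resulting URL, illustrating both the length restriction and the sensitivity concern listed above.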

The POST Method

– Note that query strings (name/value pairs) are sent in the HTTP message body of a POST request:
– POST /test/demo_form.asp HTTP/1.1
Host: w3schools.com

POST requests:

– are never cached


– do not remain in the browser history
– cannot be bookmarked
– have no restrictions on data length
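To make the contrast with GET concrete, the sketch below assembles the POST message from the section above by hand (the header set is a minimal illustrative subset): the same name/value pairs now travel in the message body, after the headers, rather than in the URL.

```python
from urllib.parse import urlencode

# The same form data as in the GET example, but carried in the body.
body = urlencode({"name1": "value1", "name2": "value2"})

request = (
    "POST /test/demo_form.asp HTTP/1.1\r\n"
    "Host: w3schools.com\r\n"
    "Content-Type: application/x-www-form-urlencoded\r\n"
    f"Content-Length: {len(body)}\r\n"
    "\r\n"          # blank line separates headers from the body
    + body
)
print(request)
```

Since the data sits in the body rather than the URL, it is not recorded in the browser history or bookmarks, and its length is not constrained by URL limits.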

Other HTTP Request Methods

 PUT: a request for the server to store the data in the request body as the new contents of the specified URI.
 DELETE: a request for the server to delete the resource named in the URI.

 OPTIONS: a request for information about the request methods the server supports.
 TRACE: a request for the web server to echo the HTTP request and its headers.
 CONNECT: converts the request connection to a transparent TCP/IP channel.

Other web protocols


Protocols are sets of rules for message formats and procedures that allow machines and application programs to exchange information. The TCP/IP protocol suite is so named for two of its most important protocols: the Transmission Control Protocol (TCP) and the Internet Protocol (IP).
TCP/IP
 Is a set of protocols developed to allow cooperating computers to share resources across a
network
 A highly standardized protocol used widely on the Internet
 The main design goal of TCP/IP was to build an interconnection of networks, referred to
as an inter-network, or internet, that provided universal communication services over
heterogeneous physical networks.
 The set of TCP/IP protocols are partitioned into 4-layers:
 Network interface layer
 Internet layer
 Transport layer
 Application layer

Layer protocol

1. Application layer protocols- provide accurate and efficient data delivery. Typical protocols
in application layer:
o FTP (File Transfer Protocol) - for file transfer
o Telnet (Teletype Network) - provides remote login service. It allows a user on one
machine to log into another machine on the network.
o SMTP (Simple Mail Transfer Protocol) - for mail transfer
o HTTP (Hypertext Transfer Protocol) - for communication between web browsers and
web servers
o LPD (Line Printer Daemon) - designed for printer sharing
2. Transport Layer protocols - define the rules of dividing a large piece of data into segments
and reassemble segments into the original piece. Typical protocols in transport layer are:
o TCP (Transmission Control Protocol) - Provides further functions such as reordering and data resending. It takes large blocks of information from an application and breaks them into segments. It numbers and sequences each segment so that the destination's TCP protocol can put the segments back into the order the application intended. After these segments are sent, TCP (on the transmitting host) waits for an acknowledgment from the receiving end's TCP over the virtual-circuit session, retransmitting any segments that aren't acknowledged. Before a transmitting host starts to send segments down the model, the sender's TCP protocol contacts the destination's TCP protocol to establish a connection. What is created is known as a virtual circuit. This type of communication is called connection-oriented.
o UDP (User Datagram Protocol) - Does not provide functions such as reordering and data resending. UDP does not sequence the segments and does not care in which order they arrive at the destination. It simply sends the segments off and forgets about them. It does not follow through, check up on them, or even allow for an acknowledgment of safe arrival; this amounts to complete abandonment. Because of this, it is referred to as an unreliable protocol. This does not mean that UDP is ineffective, only that it doesn't handle issues of reliability. Further, UDP doesn't create a virtual circuit, nor does it contact the destination before delivering information to it. Because of this, it's also considered a connectionless protocol. Generally, use TCP for reliability and UDP for faster transfer.

TCP                      UDP
Sequenced                No sequencing
Reliable                 Unreliable
Connection-oriented      Connectionless
Virtual circuit          Low overhead

3. Internet layer protocols - define the rules of how to find the routes for a packet to the
destination. It only gives best effort delivery. Packets can be delayed, corrupted, lost,
duplicated, out-of-order. Typical protocols in internet layer are:
o IP (Internet Protocol)- Provide packet delivery
o ARP (Address Resolution Protocol) - Define the procedures of network address /
MAC address translation
o ICMP (Internet Control Message Protocol) - Define the procedures of error
message transfer
4. Network Interface layer - Formats IP datagrams at the Network layer into packets that
specific network technologies can understand and transmit. Responsible for sending and
receiving TCP/IP packets on the network medium (physical/Data Link)

Web content validation


Web content refers to the textual, aural, or visual content published on a website. Content means any creative element, for example, text, applications, images, archived e-mail messages, data, e-services, audio and video files, and so on. Web content is the key behind traffic generation for websites. Creating engaging content and organizing it into various categories for easy navigation is most important for a successful website. In addition, it is important to optimize the web content for search engines so that it responds to the keywords used for searching. Validation should always be implemented on both the server and the client. Server-side validation is needed to sanitize user inputs so that information is safe. Client-side validation provides a better user experience by offering immediate feedback that helps users correct issues right away.

1. Server-Side validation- The form information is sent to the server and validated. If the
validation fails, the response is sent back to the client, the page containing the form is

refreshed and feedback shown. After correcting errors, the user resends the form to the
server for validation.

Benefits of using server side validation

 Secure validation that cannot easily be bypassed by malicious users.


 Easy to provide persistent feedback. For example, a form is submitted to the server and the user navigates to another page before validation is complete. Server-side validation can allow us to:
o Display a feedback message on the page when the user returns.
o Send an e-mail to the user with validation feedback.

Drawbacks of using server side validation

 Users must fill in all the information and submit it to the server before they get feedback.
 The server's response may delay feedback while the information is being validated.
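A server-side check is ultimately just code that runs where the user cannot tamper with it. The function below is a hypothetical example (the field name, length limits, and messages are invented, not from the text): it validates and sanitizes a form's username field before the value is trusted, which is the kind of check a malicious user cannot bypass by disabling browser scripts.

```python
import re

def validate_username(raw):
    # Runs on the server after the form is submitted; returns
    # (ok, feedback message) so the page can be refreshed with feedback.
    value = raw.strip()
    if not value:
        return False, "Username is required."
    if not re.fullmatch(r"[A-Za-z0-9_]{3,20}", value):
        return False, "Use 3-20 letters, digits, or underscores."
    return True, ""

print(validate_username("kaleb_12"))
print(validate_username("<script>"))
```

Rejecting anything outside a strict whitelist, as the second call shows, is also a basic sanitization step against inputs that could carry markup or script.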

2. Client-side validation: The form information is validated in the browser. If validation fails, feedback is shown on the webpage containing the form. After the user corrects the errors, the browser validates the form again. The information is sent to the server only once validation succeeds.

Benefits of using client side validation

 Feedback is provided in near real-time.


 User input can be validated as the user types; fixes can be immediate.

Drawbacks of using client side validation

 Insecure validation. It is easy to turn off browser scripts and bypass the validation.
 Not valuable when access to the server is needed to validate user's input. For
example: checking if a username is already in use requires access to the server.
 Cannot provide feedback that persists on the page if the user navigates away.

Website evaluation

There are different criteria used for evaluating a website for its acceptance. The following are some of them.

 Strategy- Good website design is backed by strategy. Even the most attractive, user-friendly website is not successful if it is not achieving what your company needs.
 Usability- Usability is all about the practical considerations of what goes into good
website design, such as speed, user-friendliness, security, technical details like
sitemaps, etc.
 Style- Beauty may be relative, but that does not mean there are not clear aesthetic
principles to guide your website design. The best designs will align with their
brands, create positive impressions for visitors, be clean, and complement the
content they are communicating.

 Content- The two main considerations regarding content are readability and usefulness. Readability is important because if visitors cannot make out the content, whether because it is too small, in a pale color, or in an unreadable font, there is no way for the message to get across. Usefulness is just as important, however, because if the content does not matter to the reader, there is a good chance of losing him or her anyway.
 Search Optimization- There are many ways that the design of your website impacts search-engine optimization.

