BCA Part 2 Paper 10
E-CONTENT
1.1 Introduction
Each of the past three centuries has been dominated by a single technology. People used to do a great deal of paperwork in organizations because they lacked advanced systems to help them in their day-to-day work. The 18th century was the time of the great mechanical systems accompanying the Industrial Revolution. The computer industry has made spectacular progress in a short time. During the first two decades of their existence, computer systems were highly centralized, usually within a single large room. A medium-sized company or organization would have one or two computers, while large institutions had a few dozen. The idea that within 20 years powerful computers smaller than postage stamps would be mass-produced was pure science fiction.
The merging of computers and communications has had a profound influence on the way computer systems are organized. The old model of a single computer serving all of an organization's computational needs has been replaced by one in which a large number of separate but interconnected computers do the job. These systems are called computer networks.
A network is a group of two or more computer systems sharing services and interacting in some manner. This interaction is accomplished through a shared communication link, with the shared component being data. Put simply, a network is a collection of machines that have been linked both physically and through software components to facilitate communication and the sharing of information.
A physical pathway, known as the transmission medium, connects the systems, and a set of rules determines how they communicate. These rules are known as protocols. A network protocol is software installed on a machine that determines the agreed-upon set of rules for two or more machines to communicate with each other. One common metaphor used to describe different protocols is to compare them to human languages.
Think of a group of people in the same room who know nothing about each other. In order for them to communicate, this group must determine what language to speak, how to handle identifying each other, whether to make general announcements or have private conversations, and so on. Machines with different protocols installed cannot communicate with each other.
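The language metaphor above can be made concrete with a small sketch. In the hypothetical Python example below, two machines that share the same message format communicate successfully, while a sender speaking a different "language" cannot be understood; all message formats and names are invented for illustration.

```python
# A toy line-based protocol: messages look like 'HELLO:<name>'.
# (These formats are hypothetical, not a real network protocol.)

def encode_greet(name):
    """Protocol A sender: build a message in the agreed-upon format."""
    return f"HELLO:{name}"

def decode_greet(message):
    """Protocol A receiver: only understands the 'HELLO:' format."""
    if not message.startswith("HELLO:"):
        raise ValueError("unknown protocol: cannot parse message")
    return message.split(":", 1)[1]

# Both sides speak Protocol A: communication succeeds.
print(decode_greet(encode_greet("Host1")))   # -> Host1

# A sender using a different format (a different "language") fails:
try:
    decode_greet("GREETINGS Host1")
except ValueError as err:
    print(err)
```

The point is not the specific format but the agreement: both machines must implement the same rules before any data can be exchanged.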
Networks are widely used by companies and at a personal level as well. A network for a company should provide high reliability, cost efficiency, and resource sharing.
The most distinguishing characteristic of a general computer network is that data can enter or leave at any point and can be processed at any workstation. For example, a printer can be controlled from any computer on the network. This is an introductory unit in which you will learn about the basic concepts of Computer Networks, different types of networks and their applications, network topology, network protocols, the OSI Reference Model and the TCP/IP Reference Model. We shall also examine some popular computer networks such as the Novell network, ARPANET, the Internet, and ATM networks. We conclude the unit with a brief summary followed by an exercise and some suggested readings for the students.
1.2 Objectives
A computer network is a system in which multiple computers are connected to each other to share information and resources.
• Two computers are said to be interconnected if they exchange information. The connection between the separate computers can be made via copper wire, fibre optics, microwaves or a communication satellite.
• A printer, computer, or any machine that is capable of communicating on the network is referred to as a device or node.
• We can also say that a computer network is an interconnection of various computers that share software, hardware, and data through a communication medium between them. The computers connected in a network share files, folders, applications and resources like scanners, web-cams, printers, etc.
Networks are used mainly for:
1. Resource sharing.
2. Providing high reliability.
3. Saving money.
4. Providing a powerful communication medium.
1. Resource sharing
• It makes all programs, equipment and data available to anyone on the network irrespective of the physical location of the resource and the user.
• Figure 2 (a) and (b) show a printer being shared and different kinds of information being shared.
Figure 2: (a) Sharing of Printer (b) Sharing of Software
2. High reliability due to alternative sources of data:
• It provides high reliability by having alternative sources of data. For example, all files could be replicated on more than one machine, so if one of them is unavailable due to hardware failure or any other reason, the other copies can be used.
• The aspect of high reliability is very important for military, banking, air traffic control, nuclear reactor safety and many other applications where continuous operation is a must even if there are hardware or software failures.
3. Money saving:
• Computer networking is an important financial aspect for organizations because it saves
money.
• Organizations can use separate personal computers, one per user, instead of using mainframe computers, which are expensive.
• The organizations can use the workgroup model (peer to peer) in which all the PCs are
networked together and each one can have the access to the other for communicating or sharing
purpose.
• If the organization wants security for its operations, it can go in for the domain model, in which there is a server and clients. All the clients can communicate and access data through the server.
• The whole arrangement is called the client-server model.
1. Access to remote information:
Access to remote information involves interaction between a person and a remote database. It comes in many forms, such as:
(i) Home shopping, paying telephone and electricity bills, e-banking, the on-line share market, etc.
(ii) On-line, personalized newspapers, and digital libraries consisting of books, magazines, scientific journals, etc.
(iii) The World Wide Web, which contains information about the arts, business, cooking, government, health, history, hobbies, recreation, science, sports, etc.
2. Person to person communication:
Person to person communication includes:
(i) Electronic-mail (e-mail)
(ii) Real time e-mail i.e. video conferencing allows remote users to communicate with no delay
by seeing and hearing each other. Video-conferencing is being used for remote school, getting
medical opinion from distant specialists etc.
(iii) Worldwide newsgroups, in which one person posts a message and all other subscribers to the newsgroup can read it or give their feedback.
3. Interactive entertainment:
Interactive entertainment includes:
(i) Multiperson real-time simulation games.
(ii) Video on demand.
(iii) Participation in live TV programmes like quizzes, contests, discussions, etc.
In short, the ability to merge information, communication and entertainment will surely give
rise to a massive new industry based on computer networking.
1.4 Network Goals and Motivations
Before designing a computer network we should see that the designed network fulfils the basic
goals. One of the main goals of a computer network is to enable its users to share resources, to
provide low cost facilities and easy addition of new processing services. The computer network
thus, creates a global environment for its users and computers.
Some of the basic goals that a Computer network should satisfy are:
The main goal of networking is "Resource sharing": to make all programs, data and equipment available to anyone on the network without regard to the physical location of the resource and the user.
A second goal is to provide high reliability by having alternative sources of supply.
For example, all files could be replicated on two or three machines, so if one of them
is unavailable, the other copies could be available.
Another goal is saving money. Small computers have a much better price/performance ratio than larger ones. Mainframes are roughly a factor of ten faster than the fastest single-chip microprocessors, but they cost a thousand times more. This imbalance has caused many system designers to build systems consisting of powerful personal computers, one per user, with data kept on one or more shared file server machines.
This goal leads to networks with many computers located in the same building. Such
a network is called a LAN (local area network).
Another closely related goal is to increase the system's performance as the workload
increases by just adding more processors. With central mainframes, when the system
is full, it must be replaced by a larger one, usually at great expense and with even
greater disruption to the users.
Computer networks provide a powerful communication medium. A file that was
updated or modified on a network can be seen by the other users on the network
immediately.
Standards and protocols should be supported to allow many types of equipment from different vendors to share the network (interoperability).
Broadcast
Broadcast networks have a single communication channel that is shared by all the machines on the network. In this type of network, short messages sent by any machine are received by all the machines on the network. Each packet contains an address field, which specifies for whom the packet is intended. All the machines, upon receiving a packet, check the address field; if the packet is intended for them, they process it, and if not, the packet is simply ignored.
Using Broadcast networks, we can generally address a packet to all destinations (machines)
by using a special code in the address field. Such packets are received and processed by all
machines on the network. This mode of operation is known as “Broadcasting”. Some
Broadcast networks also support transmission to a subset of machines and this is known as
“Multicasting”. One possible way to achieve multicasting is to reserve one bit to indicate multicasting and use the remaining (n-1) address bits for the group number. Each machine can subscribe to any or all of the groups.
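The reserved-bit scheme described above can be sketched in a few lines of Python. The 8-bit address width and the choice of the highest bit as the multicast flag are assumptions made purely for illustration.

```python
# Sketch of an n-bit address field where one reserved bit marks a
# multicast packet and the remaining (n-1) bits carry the group number.
# (The 8-bit width and bit position are assumed for illustration.)

N_BITS = 8
MULTICAST_BIT = 1 << (N_BITS - 1)   # highest bit reserved for multicast

def make_multicast_address(group):
    """Build an address that targets every subscriber of `group`."""
    assert 0 <= group < MULTICAST_BIT
    return MULTICAST_BIT | group

def is_multicast(address):
    """True if the reserved multicast bit is set."""
    return bool(address & MULTICAST_BIT)

def group_of(address):
    """Extract the (n-1)-bit group number from a multicast address."""
    return address & (MULTICAST_BIT - 1)

addr = make_multicast_address(5)
print(is_multicast(addr), group_of(addr))   # -> True 5
```

An ordinary unicast address simply leaves the reserved bit clear, so receivers can tell the two cases apart with a single bit test.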
Broadcast networks are easily configured for geographically localised networks. Broadcast
networks may be Static or dynamic, depending on how the channel is allocated.
In static allocation, time is divided into discrete intervals and, using a round-robin method, each machine is allowed to broadcast only when its time slot comes up. This method is inefficient because the channel capacity is wasted when a machine has nothing to broadcast during its allocated slot.
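The inefficiency of static round-robin allocation can be seen in a small simulation. The machines, queues, and number of slots below are made up for illustration: machine "B" has nothing to send, so every one of its slots is wasted.

```python
# Sketch of static (round-robin) channel allocation: machine i may
# transmit only in its own slot, so slots are wasted whenever that
# machine has nothing queued. (Machines and queues are hypothetical.)

machines = ["A", "B", "C"]
# Queued messages per machine; "B" is idle, illustrating wasted capacity.
queues = {"A": ["a1", "a2"], "B": [], "C": ["c1"]}

wasted = 0
for slot in range(6):                       # six consecutive time slots
    owner = machines[slot % len(machines)]  # round robin over machines
    if queues[owner]:
        msg = queues[owner].pop(0)
        print(f"slot {slot}: {owner} sends {msg}")
    else:
        wasted += 1
        print(f"slot {slot}: {owner} idle (slot wasted)")

print("wasted slots:", wasted)   # -> wasted slots: 3
```

Dynamic allocation avoids exactly this waste by granting the channel only to machines that actually have something to transmit.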
Dynamic allocation may be centralised or decentralised. In the centralised allocation method, there is a single entity, for example, a bus arbitration unit, which determines who goes next; this is achieved by using some internal algorithm. In the decentralised channel allocation method, there is no central entity; here, each machine decides for itself whether or not to transmit.
BUS Topology
Bus topology is a network type in which every computer and network device is connected to a single cable. The bus topology connects workstations using a single cable. Each workstation
is connected to the next workstation in a point-to-point fashion. All workstations connect to
the same cable. Figure 5 shows computers connected using Bus Topology.
In this type of topology, if one workstation becomes faulty, all workstations may be affected, as all workstations share the same cable for the sending and receiving of information. The cabling
cost of bus systems is the least of all the different topologies. Each end of the cable is
terminated using a special terminator.
The common implementation of this topology is Ethernet. Here, a message transmitted by one workstation is heard by all the other workstations.
Figure 5: Bus Topology
Advantages of Bus topology:
1. It is cost effective.
2. The cable required is the least compared to other network topologies.
3. It is used in small networks.
4. It is easy to understand.
5. It is easy to expand by joining two cables together.
STAR Topology
In this type of topology all the computers are connected to a single hub through a cable. Star
topology uses a central hub through which, all components are connected. In a Star topology,
the central hub is the host computer, and at the end of each connection is a terminal as shown
in Figure 6.
Nodes communicate across the network by passing data through the hub. A star network uses
a significant amount of cable as each terminal is wired back to the central hub, even if two
terminals are side by side but several hundred meters away from the host. The central hub
makes all routing decisions, and all other workstations can be simple.
An advantage of the star topology is that failure in one of the terminals does not affect any other terminal; however, failure of the central hub affects all terminals.
This type of topology is frequently used to connect terminals to a large time-sharing host
computer.
RING Topology
1. A number of repeaters are used for a Ring topology with a large number of nodes, because if someone wants to send some data to the last node in a ring topology with 100 nodes, the data will have to pass through 99 nodes to reach the 100th node. Hence, to prevent data loss, repeaters are used in the network.
2. The transmission is unidirectional, but it can be made bidirectional by having 2
connections between each Network Node, it is called Dual Ring Topology.
3. In Dual Ring Topology, two ring networks are formed, and data flow is in opposite
direction in them. Also, if one ring fails, the second ring can act as a backup, to keep the
network up.
4. Data is transferred in a sequential manner that is bit by bit. Data transmitted, has to pass
through each node of the network, till the destination node.
Advantages of Ring topology:
1. The network is not affected by high traffic or by adding more nodes, as only the nodes holding tokens can transmit data.
2. Cheap to install and expand.
TREE Topology
It has a root node, and all other nodes are connected to it, forming a hierarchy. It is also called hierarchical topology. It should have at least three levels in the hierarchy.
Tree topology is a LAN topology in which only one route exists between any two nodes on
the network. The pattern of connection resembles a tree in which all branches spring from
one root. Figure 8 shows computers connected using Tree Topology. Tree topology is a hybrid topology; it is similar to the star topology, but the nodes are connected to a secondary hub, which in turn is connected to the central hub. In this topology, groups of star-configured networks are connected to a linear bus backbone.
Disadvantages of Tree topology:
1. Heavily cabled.
2. Costly.
3. If more nodes are added, maintenance is difficult.
4. If the central hub fails, the network fails.
MESH Topology
In a mesh topology, each node has a point-to-point connection to other nodes or devices. All the network nodes are connected to each other. Devices are connected with many redundant interconnections
between network nodes. In a well-connected topology, every node has a connection to every
other node in the network. The cable requirements are high, but there are redundant paths
built in. Failure in one of the computers does not cause the network to break down, as they
have alternative paths to other computers.
Mesh topologies are used for critical connections of host computers (typically telephone exchanges). Alternate paths allow each computer to balance its load with other computer systems in the network by using more than one of the available connection paths. A fully
connected mesh network therefore has n (n-1)/2 physical channels to link n devices. To
accommodate these, every device on the network must have (n-1) input/output ports.
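The n(n-1)/2 and (n-1) figures quoted above can be checked directly with a short calculation:

```python
# Fully connected mesh: n(n-1)/2 physical links in total, and each of
# the n devices needs (n-1) input/output ports.

def mesh_links(n):
    """Number of point-to-point channels in a full mesh of n devices."""
    return n * (n - 1) // 2

def ports_per_device(n):
    """I/O ports each device needs to reach every other device."""
    return n - 1

for n in (4, 5, 10):
    print(f"{n} devices: {mesh_links(n)} links, "
          f"{ports_per_device(n)} ports each")
# -> 4 devices: 6 links, 3 ports each
# -> 5 devices: 10 links, 4 ports each
# -> 10 devices: 45 links, 9 ports each
```

The quadratic growth in links is exactly why full mesh is reserved for small, critical cores rather than large networks.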
There are two techniques to transmit data over a Mesh topology:
1. Routing
2. Flooding
Routing
In routing, the nodes have routing logic that works as per the network requirements. For example, the routing logic may direct the data to its destination using the shortest path, or it may hold information about broken links and avoid those nodes. We can even have routing logic that re-configures the network around failed nodes.
Flooding
In flooding, the same data is transmitted to all the network nodes, hence no routing logic is required. The network is robust, and it is very unlikely that data will be lost, but flooding places unwanted load on the network.
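Flooding can be sketched in a few lines: every node forwards a newly seen packet to all of its neighbours, and a per-node "already seen" check stops copies from circulating forever. The five-node graph below is made up for illustration.

```python
# Sketch of flooding over an arbitrary mesh: no routing logic, every
# node relays to all neighbours, duplicates are dropped.
# (The example graph is hypothetical.)

from collections import deque

graph = {            # adjacency list: node -> neighbours
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}

def flood(source):
    """Return every node the packet reaches, in first-seen order."""
    seen = {source}
    order = [source]
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for neighbour in graph[node]:
            if neighbour not in seen:   # duplicate copies are dropped
                seen.add(neighbour)
                order.append(neighbour)
                queue.append(neighbour)
    return order

print(flood("A"))   # the packet reaches every connected node
```

Note the robustness: as long as any path to a node survives, the packet reaches it, which is why flooding tolerates broken links at the cost of extra traffic.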
1. Partial Mesh Topology: In this topology, some of the systems are connected in the same fashion as in a mesh topology, but some devices are connected to only two or three other devices.
2. Full Mesh Topology: Each and every node or device is connected to every other.
Features of Mesh topology:
1. Fully connected.
2. Robust.
3. Not flexible.
Cellular Topology
Cellular topology divides the area being serviced into cells. In wireless media each point transmits in a certain geographical area called a cell; each cell represents a portion of the total network area. Figure 7 shows computers using Cellular Topology. Devices that are present
within the cell, communicate through a central hub. Hubs in different cells are interconnected
and hubs are responsible for routing data across the network. They provide a complete
network infrastructure. Cellular topology is applicable only in case of wireless media that
does not require cable connection.
Computer network applications are network software applications that utilize the Internet or other network hardware infrastructure to perform useful functions, for example, file transfers within a network. They help us to transfer data from one point to another within the network.
These are applications created to be used in networks. Such applications have a separate and distinct user interface that users must learn, for instance:
1. Email programs
They allow users to type messages at their local nodes and then send them to someone on the network. It is a fast and easy way of transferring mail from one computer to another.
Examples of electronic mail programs (clients) are:
Outlook Express
Eudora
Windows Mail
Yahoo
Gmail
2. File transfer protocol (FTP)
This application facilitates transfer of files from one computer to another e.g. from a client to
a server. There are two common processes involved in FTP:
Downloading: This is the process of obtaining files from a server to a workstation or a client (for example, when you download programs and music from a server).
Uploading: This is the transfer of files from a workstation to a server (for instance, when you attach documents and upload them to a server; a good example is when you upload photos to Facebook).
Examples of FTP programs are:
FTP in Unix
FTP in Linux
FTP in Windows
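A download can also be scripted with Python's standard ftplib module. The sketch below is only an outline of the client side: the server name, credentials, and file names are placeholders, not a real service.

```python
# Minimal FTP download sketch using Python's standard ftplib.
# (Host, user, password, and file names below are placeholders.)

from ftplib import FTP

def download(host, user, password, remote_name, local_name):
    """Fetch one file from an FTP server onto the local workstation."""
    with FTP(host) as ftp:
        ftp.login(user=user, passwd=password)
        with open(local_name, "wb") as fh:
            # RETR initiates a server-to-client (download) transfer.
            ftp.retrbinary(f"RETR {remote_name}", fh.write)

# Example call (requires a reachable FTP server):
# download("ftp.example.com", "student", "secret", "notes.txt", "notes.txt")
```

An upload would use the mirror-image command, `ftp.storbinary("STOR …", fh)`, moving the file from the workstation to the server.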
File Transfer Protocol Process
Data are not directly transferred from layer-n on one computer to layer-n on another
computer. Rather, each layer passes data and control information to the layer directly below
until the lowest layer is reached. Below layer-1 (the bottom layer), is the physical medium
(the hardware) through which the actual transaction takes place. In Figure 12 logical
communication is shown by a broken-line arrow and physical communication by a solid-line
arrow.
Between every pair of adjacent layers is an interface. The interface is a specification that
determines how the data should be passed between the layers. It defines what primitive
operations and services the lower layer should offer to the upper layer. One of the most
important considerations when designing a network is to design clean-cut interfaces between
the layers. To create such an interface between the layers would require each layer to perform
a specific collection of well-understood functions. A clean-cut interface makes it easier to
replace the implementation of one layer with another implementation, because all that is required of the new implementation is that it offers exactly the same set of services to its neighbouring layer above as the old implementation did.
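The "clean-cut interface" idea can be sketched with two interchangeable layer implementations. The class and method names below are illustrative, not taken from any real protocol stack: the layer above relies only on the agreed interface (`send`), so either lower layer can be swapped in without changing it.

```python
# Sketch: two implementations of a lower layer offering the same
# service to the layer above. (Names are hypothetical.)

class CopperPhysicalLayer:
    def send(self, bits):
        return f"copper<{bits}>"

class FibrePhysicalLayer:
    def send(self, bits):
        return f"fibre<{bits}>"

class DataLinkLayer:
    """Depends only on the interface `send`, not the implementation."""
    def __init__(self, lower):
        self.lower = lower

    def send_frame(self, payload):
        # Frame the payload, then hand it to whatever lower layer we got.
        return self.lower.send(f"[frame:{payload}]")

# Replacing the lower layer does not require changing the upper layer:
print(DataLinkLayer(CopperPhysicalLayer()).send_frame("hello"))
print(DataLinkLayer(FibrePhysicalLayer()).send_frame("hello"))
```

This is precisely the benefit the text describes: as long as the new implementation offers the same services across the interface, its neighbour above never notices the change.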
OSI Reference Model
The OSI reference model is the primary model for network communications. The early 1980s
saw tremendous increases in the number and sizes of networks. As companies realized that
they could save money and gain productivity by using networking technology, they added
networks and expanded existing networks as rapidly as new network technologies and
products were introduced.
By the mid-1980s, companies began to experience difficulties from all the expansions they
had made. It became more difficult for networks using different specifications and
implementations to communicate with each other. The companies realized that they needed to
move away from proprietary networking systems, those systems that are privately developed,
owned, and controlled.
To address the problem of networks being incompatible and unable to communicate with
each other, the ISO researched different network schemes. As a result of this research, the
ISO created a model that would help vendors create networks that would be compatible
with, and operate with, other networks.
The OSI reference model, released in 1984, was the descriptive scheme that the ISO
created. It provided vendors with a set of standards that ensured greater compatibility and
interoperability between the various types of network technologies produced by companies
around the world. Although other models exist, most network vendors today relate their
products to the OSI reference model, especially when they want to educate customers on
the use of their products. The OSI model is considered the best tool available for teaching
people about sending and receiving data on a network.
The OSI reference model has seven layers, as shown in Figure 13, each illustrating a
particular network function. This separation of networking functions is called layering. The
OSI reference model defines the network functions that occur at each layer. More
importantly, the OSI reference model facilitates an understanding of how information
travels throughout a network. In addition, the OSI reference model describes how data
travels from application programs (for example, spreadsheets), through a network medium,
to an application program located in another computer, even if the sender and receiver are
connected using different network media.
Dividing the network into these seven layers provides these advantages:
■ Reduces complexity: It breaks network communication into smaller, simpler parts.
■ Standardizes interfaces: It standardizes network components to allow multiple vendor
development and support.
■ Facilitates modular engineering: It allows different types of network hardware and
software to communicate with each other.
■ Ensures interoperable technology: It prevents changes in one layer from affecting the
other layers, allowing for quicker development.
■ Accelerates evolution: It provides for effective updates and improvements to individual
components without affecting other components or having to rewrite the
entire protocol.
■ Simplifies teaching and learning: It breaks network communication into smaller
components to make learning easier.
The following are the seven layers of the Open System Interconnection (OSI) reference
model:
• Layer 7 — Application layer
• Layer 6 — Presentation layer
• Layer 5 — Session layer
• Layer 4 — Transport layer
• Layer 3 — Network layer
• Layer 2 — Data Link layer
• Layer 1 — Physical layer
The network layer establishes the route between the sending and receiving stations. The unit of data at the network layer is called a packet. It provides network routing as well as flow- and congestion-control functions across the computer-network interface.
It makes a decision as to where to route the packet based on information and calculations
from other routers, or according to static entries in the routing table.
It examines network addresses in the data instead of physical addresses seen in the Data Link
layer.
The Network layer establishes, maintains, and terminates logical and/or physical connections.
The network layer is responsible for translating logical addresses, or names, into physical
addresses.
The data link layer groups the bits that we see on the Physical layer into Frames. It is
primarily responsible for error-free delivery of data on a hop. The Data link layer is split into
two sub-layers i.e., the Logical Link Control (LLC) and Media Access Control (MAC).
The Data-Link layer handles the physical transfer, framing (the assembly of data into a single
unit or block), flow control and error-control functions (and retransmission in the event of an
error) over a single transmission link; it is responsible for getting the data packaged and onto
the network cable. The data link layer provides the network layer (layer 3) with reliable information-transfer capabilities.
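The framing and error-control duties just described can be sketched as follows. A real data link layer would use a CRC; the one-byte checksum below is a deliberate simplification to keep the sketch short.

```python
# Sketch of data link framing with error detection on a single hop:
# sender appends a checksum, receiver recomputes it to detect
# corruption. (A byte-sum stands in for a real CRC here.)

def checksum(data: bytes) -> int:
    return sum(data) % 256

def make_frame(payload: bytes) -> bytes:
    """Sender: frame = payload followed by a one-byte checksum."""
    return payload + bytes([checksum(payload)])

def receive_frame(frame: bytes) -> bytes:
    """Receiver: verify the checksum, else request retransmission."""
    payload, received = frame[:-1], frame[-1]
    if checksum(payload) != received:
        raise ValueError("frame corrupted: request retransmission")
    return payload

frame = make_frame(b"hello")
print(receive_frame(frame))                 # intact frame -> b'hello'

# Flip one bit in transit and the receiver detects the error:
corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]
try:
    receive_frame(corrupted)
except ValueError as err:
    print(err)
```

Detecting the error and triggering retransmission on that hop is what lets the data link layer offer the network layer a reliable transfer service.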
The main network device found at the data link layer is a bridge. This device works at a
higher layer than the repeater and therefore is a more complex device. It has some
understanding of the data it receives and can make a decision based on the frames it receives
as to whether it needs to let the information pass, or can remove the information from the
network. This means that the amount of traffic on the medium can be reduced and therefore,
the usable bandwidth can be increased.
The data units on this layer are called bits. This layer defines the mechanical and electrical
definition of the network medium (cable) and network hardware. This includes how data is
impressed onto the cable and retrieved from it.
The physical layer is responsible for passing bits onto and receiving them from the connecting
medium. This layer gives the data-link layer (layer 2) its ability to transport a stream of serial
data bits between two communicating systems; it conveys the bits that move along the cable.
It is responsible for ensuring that the raw bits get from one place to another, no matter what
shape they are in, and deals with the mechanical and electrical characteristics of the cable.
This layer has no understanding of the meaning of the bits, but deals with the electrical and
mechanical characteristics of the signals and signalling methods.
The main network device found at the Physical layer is a repeater. The purpose of a repeater (as the name suggests) is simply to receive the digital signal, reform it, and retransmit the signal. This has the effect of increasing the maximum length of a network, which would not otherwise be possible due to signal deterioration. The repeater simply regenerates a cleaner digital signal, so it doesn't have to understand anything about the information it is transmitting, and processing on the repeater is non-existent.
An example of the Physical layer is RS-232.
Information being transferred from a software application in one computer system to a software application in another must pass through each of the OSI layers. Each layer communicates
with three other OSI layers i.e., the layer directly above it, the layer directly below it, and its
peer layer in other networked systems. If, for example, in Figure 14, a software application
in Host A System has information to transmit to a software application in Host B, the
application program in Host A will pass its information to the application layer (Layer 7) of
Host A. The application layer then passes the information to the presentation layer (Layer 6);
the presentation layer reformats the data, if required, so that B can understand it. The formatted data is passed to the session layer (Layer 5), which in turn requests connection establishment between the session layers of A and B; it then passes the data to the transport layer.
The transport layer breaks the data into smaller units called segments and sends them to the
Network layer. The Network layer selects the route for transmission and, if required, breaks the data packets further. These data packets are then sent to the Data link layer, which is
responsible for encapsulating the data packets into data frames. The Data link layer also adds
source and destination addresses with error checks to each frame, for the hop.
The data frames are finally transmitted to the physical layer. In the physical layer, the data is
in the form of a stream of bits and this is placed on the physical network medium and is sent
across the medium to Host B.
B receives the bits at its physical layer and passes them on to the Data link layer, which
verifies that no error has occurred. The Network layer ensures that the route selected for
transmission is reliable, and passes the data to the Transport layer. The function of the
Transport layer is to reassemble the data packets into the file being transferred and then pass it on to the session layer. The session layer confirms that the transfer is complete and, if so,
the session is terminated.
The data is then passed to the Presentation layer, which may or may not reformat it to suit the
environment of B, and sends it to the Application layer. Finally, the Application layer of Host
B passes the information to the recipient Application program to complete the
communication process.
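The journey just described can be sketched as encapsulation on the way down Host A's stack and decapsulation on the way up Host B's: each layer adds its own header, and the peer layer on the other side removes it. The text headers below are illustrative only and do not resemble real protocol headers.

```python
# Sketch of OSI encapsulation/decapsulation: each layer wraps the data
# in a header on the sending side; the peer layer strips it on the
# receiving side. (Headers are plain text purely for illustration.)

LAYERS = ["application", "presentation", "session",
          "transport", "network", "data-link", "physical"]

def send(data):
    """Host A: wrap the data in one header per layer, top to bottom."""
    for layer in LAYERS:
        data = f"{layer}|{data}"
    return data                       # what travels on the medium

def receive(data):
    """Host B: strip the headers in the reverse order, bottom to top."""
    for layer in reversed(LAYERS):
        header, data = data.split("|", 1)
        assert header == layer        # each layer talks to its peer layer
    return data

on_the_wire = send("spreadsheet cell")
print(on_the_wire)                    # physical header is outermost
print(receive(on_the_wire))           # -> spreadsheet cell
```

Notice that the outermost header belongs to the physical layer, which is added last by the sender and removed first by the receiver; this mirror ordering is the essence of peer-to-peer layer communication.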
Interaction between different layers of OSI model
A given layer in the OSI layers generally communicates with three other OSI layers: the layer
directly above it, the layer directly below it, and its peer layer in another networked computer
system. The data link layer in System A, for example, communicates with the network layer of
System A, the physical layer of System A, and the data link layer in System B.
The TCP/IP suite is a layered model similar to the OSI reference model. Its name is actually a
combination of two individual protocols, Transmission Control Protocol (TCP) and Internet
Protocol (IP). It is divided into layers, each of which performs specific functions in the data
communication process.
Both the OSI model and the TCP/IP stack were developed by different organizations at
approximately the same time as a means to organize and communicate the components that
guide the transmission of data.
The TCP/IP protocol stack has four layers. Note that although some of the layers in the
TCP/IP protocol stack have the same names as layers in the OSI reference model, the layers
have different functions in each model, as is described in the following list:
■ Application layer: The top layer of the protocol stack is the application layer. It refers to
the programs that initiate communication in the first place. TCP/IP includes several
application layer protocols for mail, file transfer, remote access, authentication and name
resolution. These protocols are embodied in programs that operate at the top layer just as any
custom-made or packaged client/server application would.
There are many Application Layer protocols and new protocols are always being developed.
The most widely known Application Layer protocols are those used for the exchange of user
information, some of them are:
• The HyperText Transfer Protocol (HTTP) is used to transfer files that make up the Web
pages of the World Wide Web.
• The File Transfer Protocol (FTP) is used for interactive file transfer.
• The Simple Mail Transfer Protocol (SMTP) is used for the transfer of mail messages and
attachments.
• Telnet is a terminal emulation protocol used for remote login to network hosts.
Other Application Layer protocols that help in the management of TCP/IP networks are:
• The Domain Name System (DNS), which is used to resolve a host name to an IP address.
• The Simple Network Management Protocol (SNMP) which is used between network
management consoles and network devices (routers, bridges, and intelligent hubs) to
collect and exchange network management information.
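The DNS role mentioned above is visible from any program through the operating system's resolver. The sketch below uses Python's standard socket module; `www.example.com` is just an example host name, and a real lookup requires network access.

```python
# Minimal sketch of name resolution: ask the resolver (and ultimately
# DNS) to map a host name to an IPv4 address.

import socket

def resolve(host_name):
    """Return the IPv4 address string for a host name."""
    return socket.gethostbyname(host_name)

print(resolve("localhost"))               # resolved locally, e.g. 127.0.0.1
# print(resolve("www.example.com"))       # needs network access to a DNS server
```

Application Layer protocols like HTTP and SMTP depend on this step: the user supplies a name, and DNS supplies the address the lower layers actually use.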
■ Transport layer: The Transport Layer is the third layer of the four-layer TCP/IP model. The
position of the Transport layer is between Application layer and Internet layer. The purpose
of Transport layer is to permit devices on the source and destination hosts to carry on a
conversation. Transport layer defines the level of service and status of the connection used
when transporting data.
The main protocols included at Transport layer are TCP (Transmission Control
Protocol) and UDP (User Datagram Protocol).
The Transport Layer encompasses the responsibilities of the OSI Transport Layer and some
of the responsibilities of the OSI Session Layer.
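The difference in service level is visible in code. The toy sketch below sends a single UDP datagram to ourselves over the loopback interface: note that there is no connection setup and the protocol itself guarantees nothing about delivery (the loopback path simply happens to be reliable). The helper name is illustrative.

```python
import socket

def udp_loopback(message: bytes) -> bytes:
    """Send one UDP datagram to ourselves over loopback and receive it."""
    # The "server" side: bind to an ephemeral port chosen by the OS.
    server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    server.bind(("127.0.0.1", 0))

    # The "client" side: no connect, no handshake; just send the datagram.
    client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    client.sendto(message, server.getsockname())

    data, _addr = server.recvfrom(1024)
    client.close()
    server.close()
    return data

print(udp_loopback(b"hello"))
```

A TCP version of the same exchange would require listen/accept/connect steps first, which is exactly the extra connection-management work TCP performs on the application's behalf.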
■ Internet layer: The Internet layer is the second layer of the four-layer TCP/IP model,
positioned between the Network Access layer and the Transport layer. The Internet layer
packs data into packets known as IP datagrams, which contain source and destination
address (logical address, or IP address) information that is used to forward the datagrams
between hosts and across networks. The Internet layer is also responsible for routing IP
datagrams.
A packet-switched network depends on a connectionless internetwork layer; in TCP/IP this
is the Internet layer. Its job is to allow hosts to insert packets into any network and have
them travel independently to the destination. At the destination, packets may arrive in a
different order than they were sent; it is the job of the higher layers to rearrange them
before delivering them to the network applications operating at the Application layer.
The main protocols included at Internet layer are IP (Internet Protocol), ICMP (Internet
Control Message Protocol), ARP (Address Resolution Protocol) and IGMP (Internet Group
Management Protocol).
The Internet Protocol (IP) is a routable protocol responsible for IP addressing and
the fragmentation and reassembly of packets.
The Address Resolution Protocol (ARP) is responsible for the resolution of the
Internet Layer address to the Network Interface Layer address, such as a hardware
address.
The Internet Control Message Protocol (ICMP) is responsible for providing
diagnostic functions and reporting errors or conditions regarding the delivery of IP
packets.
The Internet Group Management Protocol (IGMP) is responsible for the
management of IP multicast groups.
The Internet Layer is analogous to the Network layer of the OSI model.
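The forwarding decision at the Internet layer hinges on the network part of the IP address. The sketch below (an illustrative helper built on Python's standard ipaddress module) checks whether two hosts fall in the same network for a given prefix length, which is essentially how a host decides between delivering a datagram locally and handing it to a router.

```python
import ipaddress

def same_network(ip_a: str, ip_b: str, prefix: int) -> bool:
    """Return True if both addresses fall in the same IP network.

    A host compares the network part of the destination address with
    its own to decide whether a router is needed to reach it.
    """
    # strict=False lets us pass a host address and derive its network.
    net_a = ipaddress.ip_network(f"{ip_a}/{prefix}", strict=False)
    return ipaddress.ip_address(ip_b) in net_a

print(same_network("192.168.1.10", "192.168.1.200", 24))  # same /24
print(same_network("192.168.1.10", "192.168.2.5", 24))    # different /24
```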
■ Network access layer: The name of this layer is broad and somewhat confusing. It is
also called the host-to-network layer. It includes the LAN and WAN protocols and all
the details in the OSI physical and data link layers.
Network Access Layer is the first layer of the four layer TCP/IP model. Network Access
Layer defines details of how data is physically sent through the network, including how bits
are electrically or optically signaled by hardware devices that interface directly with a network
medium, such as coaxial cable, optical fiber, or twisted pair copper wire.
The protocols included in Network Access Layer are Ethernet, Token Ring, FDDI, X.25,
Frame Relay etc.
The most popular LAN architecture among those listed above is Ethernet. When Ethernet
operates on a shared medium, it uses an access method called CSMA/CD (Carrier Sense
Multiple Access/Collision Detection). An access method determines how a host may place
data on the medium.
In the CSMA/CD access method, every host has equal access to the medium and can place
data on the wire when the wire is free of network traffic. When a host wants to transmit, it
first checks the wire to find out whether another host is already using the medium. If there
is traffic on the medium, the host waits; if there is none, it places its data on the medium.
But if two hosts transmit at the same instant, their signals collide, destroying the data.
Destroyed data must be retransmitted: after a collision, each host waits for a small, random
interval of time and then retransmits.
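The random wait after a collision can be sketched as truncated binary exponential backoff, the scheme classic shared-media Ethernet uses. The helper name is illustrative; the 51.2 µs default is the slot time of 10 Mbps Ethernet, stated here as an assumption for the example.

```python
import random

def csma_cd_backoff(collisions: int, slot_time_us: float = 51.2) -> float:
    """Return a random backoff delay (in microseconds) after a collision.

    After the n-th consecutive collision, a station waits k slot times,
    with k drawn uniformly from 0 .. 2**min(n, 10) - 1, so the average
    wait grows as the medium gets more congested.
    """
    k = random.randrange(2 ** min(collisions, 10))
    return k * slot_time_us

# After the first collision a station waits either 0 or 1 slot time.
print(csma_cd_backoff(1))
```

Because each host picks its delay independently at random, the chance that the same two hosts collide again shrinks with every retry.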
Following are some major differences between the OSI reference model and the TCP/IP
reference model, with a diagrammatic comparison below.
1. OSI is a generic, protocol-independent standard, acting as a communication gateway
between the network and the end user.
TCP/IP is based on the standard protocols around which the Internet developed. It is a
communication protocol suite that allows connection of hosts over a network.
2. In the OSI model, the transport layer guarantees the delivery of packets.
In the TCP/IP model, the transport layer does not guarantee delivery of packets. Still, the
TCP/IP model is considered more reliable.
3. The OSI model has a separate Presentation layer and Session layer.
TCP/IP does not have a separate Presentation layer or Session layer.
4. OSI is a reference model around which networks are built; generally it is used as a
guidance tool.
The TCP/IP model is, in a way, an implementation of the OSI model.
5. The Network layer of the OSI model provides both connection-oriented and
connectionless service.
The Internet layer in the TCP/IP model provides only connectionless service.
6. The OSI model has the problem of fitting protocols into the model.
The TCP/IP model was built around its protocols, so they fit the model well; however, the
model does not describe any other protocol stack.
7. Protocols are hidden in the OSI model and are easily replaced as the technology changes.
In TCP/IP, replacing a protocol is not easy.
8. The OSI model defines services, interfaces and protocols very clearly and makes a clear
distinction between them. It is protocol independent.
In TCP/IP, services, interfaces and protocols are not clearly separated. It is also protocol
dependent.
9. The OSI model has 7 layers.
The TCP/IP model has 4 layers.
Diagrammatic Comparison between OSI Reference Model and TCP/IP Reference Model
1) Two tier architectures A two-tier architecture is one in which a client talks directly to a
server, with no intervening middleware. It is typically used in small environments (fewer
than 50 users).
In two-tier client/server architectures, the user interface is placed in the user's desktop
environment, and the database management system services usually run on a server, a more
powerful machine that provides services to the many clients. Information processing is split
between the user system interface environment and the database management server
environment.
2) Three tier architectures The three-tier architecture was introduced to overcome the
drawbacks of the two-tier architecture. In the three-tier architecture, middleware is used
between the user system interface client environment and the database management server
environment.
This middleware is implemented in a variety of ways, such as transaction processing
monitors, message servers or application servers. The middleware performs the functions of
queuing, application execution and database staging. In addition, the middleware adds
scheduling and prioritization for work in progress.
The three-tier client/server architecture is used to improve performance for a large number
of users, and it also improves flexibility when compared to the two-tier approach.
The drawback of three-tier architectures is that the development environment is more
difficult to use than that of two-tier applications.
The widespread use of the term 3-tier architecture also denotes the following architectures:
• Application sharing between a client, middleware and enterprise server
• Application sharing between a client, application server and enterprise database server.
i) Three tier with message server In this architecture, messages are processed and prioritized
asynchronously. Messages have headers that include priority information, an address and an
identification number. The message server links to the relational DBMS and other data
sources. Messaging systems are an alternative for wireless infrastructures.
ii) Three tier with an application server This architecture allows the main body of an
application to run on a shared host rather than in the user system interface client environment.
The application server hosts business logic, computations and a data retrieval engine. In this
architecture applications are more scalable, and installation costs are lower on a single server
than maintaining each application on every desktop client.
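The division of labour among the tiers can be sketched with plain functions standing in for the three environments. All names and data below are invented for illustration and do not represent any particular product's API.

```python
# Data tier: a dictionary stands in for the database management server.
DATABASE = {"alice": 120, "bob": 80}

def data_tier(user: str) -> int:
    """Fetch raw data; only this tier touches the 'database'."""
    return DATABASE[user]

def middle_tier(user: str) -> str:
    """Middleware/application server: business logic lives here."""
    balance = data_tier(user)
    return f"{user}: {balance} credits"

def client_tier(user: str) -> str:
    """Client: handles presentation only, never the database directly."""
    return middle_tier(user)

print(client_tier("alice"))
```

Because the client calls only the middle tier, the data tier can be replaced or scaled without touching any client code, which is the flexibility the three-tier approach is valued for.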
Characteristics of Client/Server Architecture
1) Combination of a client or front-end portion that interacts with the user, and a server
or back-end portion that interacts with the shared resource. The client process
contains solution-specific logic and provides the interface between the user and the rest of the
application system. The server process acts as a software engine that manages shared
resources such as databases, printers, modems, or high-powered processors.
2) The front-end task and back-end task have fundamentally different requirements for
computing resources such as processor speeds, memory, disk speeds and capacities, and
input/output devices.
3) The environment is typically heterogeneous and multivendor. The hardware platform
and operating system of client and server are not usually the same. Client and server processes
communicate through a well-defined set of standard application program interfaces (APIs) and
RPCs.
4) An important characteristic of client-server systems is scalability. They can be scaled
horizontally or vertically. Horizontal scaling means adding or removing client workstations
with only a slight performance impact. Vertical scaling means migrating to a larger and faster
server machine or multiservers.
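A minimal front-end/back-end pair can be sketched with Python's standard socket module: a tiny TCP "server" manages a shared resource (here just an echo service) while a "client" makes a request. Everything runs over the loopback interface, and the function names are illustrative.

```python
import socket
import threading

def run_echo_demo(message: bytes) -> bytes:
    """One round trip between a toy back end (echo server) and a front end."""
    # Back end: listen on an ephemeral loopback port chosen by the OS.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)

    def serve_once() -> None:
        conn, _ = srv.accept()
        conn.sendall(conn.recv(1024))   # echo the request straight back
        conn.close()

    t = threading.Thread(target=serve_once)
    t.start()

    # Front end: connect, send the request, read the reply.
    cli = socket.create_connection(srv.getsockname())
    cli.sendall(message)
    reply = cli.recv(1024)
    cli.close()

    t.join()
    srv.close()
    return reply

print(run_echo_demo(b"ping"))
```

Horizontal scaling in this picture simply means more clients calling the same server; vertical scaling means moving the server function to a faster machine.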
Peer-to-Peer Architecture
A type of network in which each workstation has equal capabilities and responsibilities is
called a peer-to-peer network. Figure 18 shows the arrangement of computers in a peer-to-peer
environment. Here each workstation acts as both a client and a server. There is no central
repository for information and no central server to maintain. Data and resources are
distributed throughout the network, and each user is responsible for sharing the data and
resources connected to their system. This differs from client/server architectures, in which
some computers are dedicated to serving the others. Peer-to-peer networks are generally
simpler and less expensive, but they usually do not offer the same performance under heavy
loads. A peer-to-peer network is also known as a distributed network.
Computers are extensively used in almost every field nowadays, and there are different types
of networks: public networks, research networks, co-operative networks, and commercial or
corporate networks. These networks can be distinguished on the basis of their history,
administration, facilities offered, technical design and users. Examples of some popular
networks are Novell NetWare, ARPANET, the Internet and ATM networks.
Novell Netware
Novell NetWare is the most popular network system in the PC world. It contains the
protocols that are necessary to allow communication between different types of PCs and
devices. There are several versions of NetWare. The earlier NetWare 286 version 2.X was
written to run on 286 machines, and NetWare 386 versions 3.X were written to run on 386
and 486 machines. The most recent version, NetWare 4.X, can run on almost any type of
machine.
Novell networks are based on the client/server model, in which at least one computer
functions as a network file server, which runs all of the NetWare protocols and maintains the
network's shared data on one or more disk drives. File servers generally allow users on other
PCs to access application software or data files, i.e., they provide services to other network
computers called clients.
Dedicated File Servers: A dedicated file server runs only NetWare and does not run any
other software, such as Windows applications. Dedicated file servers are mostly used in large
networks because, in a large network, one extra client is less significant and a dedicated
server can handle a larger number of requests more efficiently. In large networks security is
one of the major concerns, and a clear distinction between client and server hardware
provides greater security.
Non-dedicated File Server: A non-dedicated file server can run both applications and
NetWare. It is useful in small networks because it allows the server to also act as a client,
thus increasing the number of clients in the network by one.
There are many other servers within a Novell NetWare such as, Print server, Message server,
Database server etc.
ARPANET
ARPANET was the network that became the basis for the Internet. Based on a concept first
published in 1967, ARPANET was developed under the direction of the U.S. Advanced
Research Projects Agency (ARPA). In 1969, the idea became a modest reality with the
interconnection of four university computers. The initial purpose was to communicate with
and share computer resources among mainly scientific users at the connected institutions.
ARPANET took advantage of the new idea of sending information in small units
called packets that could be routed on different paths and reconstructed at their destination.
The development of the TCP/IP protocols in the 1970s made it possible to expand the size of
the network, which now had become a network of networks, in an orderly way.
In the 1980s, ARPANET was handed over to a separate new military network, the Defense
Data Network, and to NSFNet, a network of scientific and academic computers funded by the
National Science Foundation. In 1995, NSFNet in turn began a phased withdrawal, turning
the backbone of the Internet (called vBNS) over to a consortium of commercial backbone
providers (PSINet, UUNET, ANS/AOL, Sprint, MCI, and AGIS-Net99).
Because ARPA's name was changed to Defense Advanced Research Projects Agency
(DARPA) in 1971, ARPANET is sometimes referred to as DARPANET. (DARPA was
changed back to ARPA in 1993 and back to DARPA again in 1996.)
Internet
The Internet began in 1969 as ARPANET, a U.S. Department of Defense Advanced Research
Projects Agency (ARPA) project intended to provide immediate communication within the
Department in case of war.
Computers were then installed at U.S. universities with defense related projects. As scholars
began to go online, this network changed from military use to scientific use. As ARPAnet
grew, administration of the system became distributed to a number of organizations,
including the National Science Foundation (NSF). This shift of responsibility began the
transformation of the science oriented ARPAnet into the commercially minded and funded
Internet used by millions today.
The Internet acts as a pipeline to transport electronic messages from one network to another
network. At the heart of most networks is a server, a fast computer with large amounts of
memory and storage space. The server controls the communication of information between
the devices attached to a network, such as computers, printers, or other servers.
An Internet Service Provider (ISP) allows the user access to the Internet through their server.
Many teachers use a connection through a local university as their ISP because it is free.
Other ISPs, such as America Online, telephone companies, or cable companies provide
Internet access for their members.
You can connect to the Internet through telephone lines, cable modems, cellphones and other
mobile devices.
The rapid growth of Internet may also be due to several important factors:
1) Easy-to-use software - graphical browsers
2) Improved telecommunications connections
3) Rapid spread of automatic data processing, including electronic mail, bank transfers, etc.
4) The Information Superhighway projects.
The Internet Society maintains a list of Internet service providers providing connections all
over the world. There is one “universal” aspect of all computers connected to the Internet,
i.e., they all run the TCP/IP family of protocols.
The Internet Protocol (IP) provides the logical 32-bit address, which uniquely identifies an
individual computer connected to the Internet, while the Transmission Control Protocol (TCP)
is a connection-oriented protocol, which takes care of the delivery and ordering of packets.
TCP also provides the port numbers for individual services within a computer.
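That an IPv4 address is simply a 32-bit number is easy to demonstrate. The sketch below (illustrative helper names, standard library only) converts between dotted-decimal notation and the underlying integer.

```python
import socket
import struct

def ip_to_int(dotted: str) -> int:
    """Convert a dotted-decimal IPv4 address to its 32-bit integer value."""
    # inet_aton packs the address into 4 bytes; "!I" reads them big-endian.
    return struct.unpack("!I", socket.inet_aton(dotted))[0]

def int_to_ip(value: int) -> str:
    """Convert a 32-bit integer back to dotted-decimal notation."""
    return socket.inet_ntoa(struct.pack("!I", value))

# 10.0.0.1 is 10*2**24 + 1 = 167772161
print(ip_to_int("10.0.0.1"))
```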
The major information services provided by the Internet are (with the protocol in
parentheses): electronic mail (SMTP), remote file copying (FTP), remote login, terminal
connections (TELNET), menu-based file access (GOPHER), wide area information servers
(WAIS, Z39.50), the World Wide Web (HTTP), and the Packet Internet Groper (PING).
There are three major ways to connect your computer to the Internet:
• dial up modem access to a computer connected to Internet,
• dial-up networking, and
• leased lines (usually from a local telephone company).
Leased lines
A leased line, also known as a dedicated line, connects two locations for private voice and/or
data telecommunication service. A leased line is not a dedicated cable; a leased line is
actually a reserved circuit between two points.
Leased lines can span short or long distances. They maintain a single open circuit at all times,
as opposed to traditional telephone services that reuse the same lines for many different
conversations through a process called "switching."
Leased lines are most commonly rented by businesses to connect branch offices of the
organization. Leased lines guarantee bandwidth for network traffic between locations.
ATM Network
ATM (Asynchronous Transfer Mode) networks offer several classes of service:
Available Bit Rate: Provides a guaranteed minimum capacity but data can be bursted
to higher capacities when network traffic is minimal.
Constant Bit Rate: Specifies a fixed bit rate so that data is sent in a steady stream.
This is analogous to a leased line.
Unspecified Bit Rate: Doesn’t guarantee any throughput level and is used for
applications such as file transfers that can tolerate delays.
Variable Bit Rate (VBR): Provides a specified throughput, but data is not sent evenly.
This makes it a popular choice for voice and videoconferencing.
Advantages of ATM
Disadvantages of ATM
Types of LAN
There are basically two types of Local Area Networks, namely ARCnet and Ethernet.
A MAN can be created as a single network, such as a cable TV network covering an entire
city, or as a group of several Local Area Networks (LANs). In this way resources can be
shared from LAN to LAN and from computer to computer. MANs are usually owned by large
organizations to interconnect their various branches across a city.
MAN is based on the IEEE 802.6 standard, known as DQDB (Distributed Queue Dual Bus).
DQDB uses two unidirectional cables (buses), and all the computers are connected to these
two buses. Each bus has a specialized device that initiates the transmission activity, called
the head end. Data that is to be sent to a computer on the right-hand side of the sender is
transmitted on the upper bus; data that is to be sent to the left-hand side is transmitted on
the lower bus.
The two most important components of MANs are security and standardization. Security is
important because information is being shared between dissimilar systems. Standardization is
necessary to ensure reliable data communication.
A MAN usually interconnects a number of local area networks using a high-capacity backbone
technology, such as fiber-optical links, and provides up-link services to wide area networks
and the Internet.
The Metropolitan Area Networks (MAN) protocols are mostly at the data link level (layer 2 in
the OSI model), which are defined by IEEE, ITU-T, etc.
An enterprise class WLAN employs a large number of individual access points to broadcast
the signal to a wide area. The access points have more features than home or small office
WLAN equipment, such as better security, authentication, remote management, and tools to
help integrate with existing networks. These access points have a larger coverage area than
home or small office equipment, and are designed to work together to cover a much larger area.
This equipment can adhere to the 802.11a, b, g, or n standard, or to security-refining standards,
such as 802.1x and WPA2.
For WLANs that connect to the Internet, Wireless Application Protocol (WAP) technology
allows Web content to be more easily downloaded to a WLAN and rendered on wireless clients
like cell phones and PDAs.
1.12 Advantages and disadvantages of Networks
With computers linked together through a network, computer networking has become an
essential means of sharing information. It is a practice widely used in the modern world, as
it provides a multitude of benefits to individuals and businesses alike. However, it does not
come without drawbacks. Here are the advantages and disadvantages of computer
networking:
List of Advantages of Computer Networking
1.13 Summary
In this unit, we have learnt about the basic concepts of networking. We have seen the
different types of networks and the differences between them. Computer networks, LAN,
MAN and WAN, have also been discussed, classified by the geographical distance covered
and by the various ways of interconnecting computers in a network (network topology),
such as star, bus, ring, tree, mesh and cellular topologies.
We have seen the immense benefits that the computer networks provide in the form of
excellent sharing of computational resources, computational load, increased level of
reliability, economy and efficient person-to-person communication. Here we have briefly
explained some of the network protocols which define a common set of rules and signals that
computers on the network use to communicate with each other.
We have discussed standard network architectures for meaningful communication between
end systems. The two most widely used reference models, i.e., the OSI reference model and
the TCP/IP reference model, have also been discussed. We have learnt about some of the
popular networks such as Novell NetWare, ARPANET, the Internet and ATM networks.
Suggested Readings: