
Basic Characteristics of Computer Networks
Computer networks are a system of interconnected computers and other devices that
allow for the sharing of information and resources. They can range in size from a
few connected devices in a small office to millions of devices spread out across the
globe. Here are some of the basic characteristics of computer networks:

1. Connectivity: The primary purpose of a computer network is to enable devices to
communicate with each other. Connectivity is established through a variety of wired
and wireless technologies, such as Ethernet cables, Wi-Fi, and Bluetooth.

2. Scalability: Computer networks must be scalable to accommodate growth and
changing needs. As the number of devices and users on the network increases, the
network must be able to handle the additional traffic and data.

3. Security: Computer networks are vulnerable to security threats such as hacking,
viruses, and data breaches. Security measures such as firewalls, encryption, and user
authentication are essential to protect network resources and data.

4. Reliability: Computer networks must be reliable to ensure that data and
resources are always available when needed. Redundancy and backup systems can
help to ensure that the network remains operational in the event of a failure.

5. Performance: The performance of a computer network is determined by factors
such as bandwidth, latency, and throughput. These factors affect the speed and
responsiveness of the network and can impact the user experience.

6. Standards and protocols: Computer networks rely on standards and protocols to
ensure that devices can communicate with each other. Standards such as TCP/IP
and Ethernet, and protocols such as HTTP and SMTP, are used to ensure
interoperability between devices and networks.
7. Management and administration: Computer networks require ongoing
management and administration to ensure that they continue to function properly.
This includes tasks such as monitoring network performance, configuring devices,
and troubleshooting issues.

Computer networks are essential for enabling communication and resource sharing
between devices and users. They must be scalable, secure, reliable, and performant,
and rely on standards and protocols to ensure interoperability. Effective
management and administration are also critical to ensuring the ongoing operation
and maintenance of the network.

Performance of a Network
The performance of a network refers to the quality of service it delivers as
perceived by the user. There are different ways to measure the performance of a
network, depending upon the nature and design of the network. Network performance
depends both on the quality of the service the network provides and on the
quantity of data it can carry.

Parameters for Measuring Network Performance

 Bandwidth
 Latency (Delay)
 Bandwidth – Delay Product
 Throughput

BANDWIDTH

One of the most important factors in a website’s performance is the amount of
bandwidth allocated to the network. Bandwidth determines how rapidly the web
server is able to upload the requested information. While there are different
factors to consider with respect to a site’s performance, bandwidth is often the
limiting factor.
Bandwidth is defined as the amount of data or information that can be
transmitted in a fixed amount of time. The term can be used in two different
contexts with two different units of measurement. For digital devices, bandwidth
is measured in bits per second (bps) or bytes per second. For analog devices,
bandwidth is measured in cycles per second, or Hertz (Hz).

Bandwidth is only one component of what an individual perceives as the speed of a
network. People frequently confuse bandwidth with internet speed because
Internet Service Providers (ISPs) tend to claim that they have a fast “40Mbps
connection” in their advertising campaigns. True internet speed is actually the
amount of data you receive every second, and that has a lot to do with latency
too. “Bandwidth” means “capacity” and “speed” means “transfer rate”.

More bandwidth does not mean more speed. Consider a case where we double the
width of the tap pipe, but the water flows at the same rate as it did when the
pipe was half as wide: there is no improvement in speed. When we talk about WAN
links, we mostly mean bandwidth, but when we talk about a LAN, we mostly mean
speed. This is because over a WAN we are usually constrained by expensive cable
bandwidth, whereas over a LAN we are constrained by the hardware and interface
data transfer rates (or speed).

 Bandwidth in Hertz: It is the range of frequencies contained in a composite
signal or the range of frequencies a channel can pass. For example, the
bandwidth of a subscriber telephone line is 4 kHz.
 Bandwidth in Bits per Second: It refers to the number of bits per second that
a channel, a link, or a network can transmit. For example, the bandwidth of a
Fast Ethernet network is a maximum of 100 Mbps, which means that the network
can send 100 Mbps of data (see the short sketch after this list).
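
As a rough illustration of bandwidth as capacity, the Python sketch below computes how
many bits (and bytes) a 100 Mbps Fast Ethernet link could carry in a given time window
under ideal conditions; the 60-second window is an arbitrary assumption for the example.

# Minimal sketch: bandwidth as capacity (ideal conditions, no overhead assumed).

def data_carried(bandwidth_bps: float, seconds: float) -> float:
    """Return the number of bits a link of the given bandwidth can carry in `seconds`."""
    return bandwidth_bps * seconds

fast_ethernet_bps = 100_000_000               # 100 Mbps Fast Ethernet, as in the text
bits = data_carried(fast_ethernet_bps, 60)    # assumed 60-second window
print(f"{bits:,.0f} bits, i.e. {bits / 8 / 1_000_000:,.0f} megabytes, in 60 s")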

Note: There exists an explicit relationship between the bandwidth in hertz and the
bandwidth in bits per second. An increase in bandwidth in hertz means an increase
in bandwidth in bits per second. The relationship depends upon whether we have
baseband transmission or transmission with modulation.
LATENCY

In a network, during the process of data communication, latency (also known as
delay) is defined as the total time taken for a complete message to arrive at the
destination, starting from the time when the first bit of the message is sent out
from the source and ending with the time when the last bit of the message is
delivered at the destination. Network connections where small delays occur are
called low-latency networks, and network connections which suffer from long delays
are known as high-latency networks.

High latency creates bottlenecks in network communication. It prevents the data
from taking full advantage of the network pipe and effectively decreases the
usable bandwidth of the network. The effect of latency on a network’s bandwidth
can be temporary or persistent depending on the source of the delays. Latency is
also referred to as the ping rate and is measured in milliseconds (ms).

 In simpler terms, latency may be defined as the time required to successfully
send a packet across a network.
 It can be measured in several ways, such as round-trip or one-way.
 It may be affected by any component in the chain used to carry the data, such
as workstations, WAN links, routers, LANs, and servers, and for large networks it
is ultimately limited by the speed of light.

Latency = Propagation Time + Transmission Time + Queuing Time + Processing Delay

Propagation time refers to the time it takes for a signal or data packet to travel from
the source to the destination over a transmission medium. It depends on the physical
distance between the source and destination and the speed at which the signal can
propagate through the medium.

The propagation time Tp can be calculated using the formula:

Tp = D / v

Where:

 Tp is the propagation time.
 D is the distance between the source and destination.
 v is the speed of propagation of the signal in the transmission medium.

The speed of propagation depends on the medium through which the signal travels.
For example:

 In copper cables, the speed of propagation is typically around 2/3 the speed
of light (approximately 200,000 kilometers per second).
 In fiber-optic cables, the speed of propagation is closer to the speed of light in
vacuum (approximately 300,000 kilometers per second).

Let's illustrate with an example: Suppose you have a network link that spans a
distance of 1000 kilometers and uses fiber-optic cables, where the speed of
propagation is approximately 300,000 kilometers per second.

Tp = 1000 km / 300,000 km/s

Tp = 1/300 seconds

So, the propagation time in this example would be approximately 3.33 milliseconds.
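
The worked example above translates directly into code. The short Python sketch below
(illustrative only) evaluates Tp = D / v for the 1000 km fiber link, using the
propagation speeds quoted above.

# Minimal sketch: propagation time Tp = D / v.

def propagation_time(distance_km: float, speed_km_per_s: float) -> float:
    """Return the propagation time in seconds for a given distance and medium speed."""
    return distance_km / speed_km_per_s

FIBER_SPEED_KM_S = 300_000    # roughly the speed of light, as quoted in the text
COPPER_SPEED_KM_S = 200_000   # roughly 2/3 the speed of light, as quoted in the text

tp = propagation_time(1000, FIBER_SPEED_KM_S)
print(f"Propagation time over fiber: {tp * 1000:.2f} ms")   # about 3.33 ms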

Transmission Time

Transmission time is based on how long it takes to send the signal down the
transmission line. It includes costs such as the time for the electromagnetic
signal to traverse the line and the training signals that are usually put at the
front of a packet by the sender, which help the receiver synchronize its clock.
The transmission time of a message depends on the size of the message and the
bandwidth of the channel.

Transmission time, often denoted as Tt, refers to the time it takes to transmit a
data packet from the sender to the receiver over a network link. It is influenced
by factors such as the size of the data packet, the bandwidth of the link, and
potential overhead.

The formula to calculate transmission time is:

Tt = L / R

Where:

 Tt is the transmission time.
 L is the size of the data packet (in bits).
 R is the data transmission rate or bandwidth of the link (in bits per second).

For example, if you have a data packet of size 10,000 bits and a network link with a
bandwidth of 1 Mbps (1,000,000 bits per second), the transmission time would be:

Tt = 10,000 bits / 1,000,000 bits/s

Tt = 0.01 seconds

So, the transmission time in this example would be 0.01 seconds or 10 milliseconds.

It's important to note that this formula assumes ideal conditions without considering
factors such as latency, protocol overhead, or contention for network resources,
which can affect the actual transmission time in real-world scenarios.
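
As a quick cross-check of the numbers above, this Python sketch (illustrative, ideal
conditions only, ignoring the caveats just mentioned) evaluates Tt = L / R for the
10,000-bit packet on a 1 Mbps link.

# Minimal sketch: transmission time Tt = L / R (ideal link, no overhead assumed).

def transmission_time(packet_bits: int, bandwidth_bps: float) -> float:
    """Return the time in seconds to push packet_bits onto a link of the given bandwidth."""
    return packet_bits / bandwidth_bps

tt = transmission_time(10_000, 1_000_000)         # 10,000-bit packet on a 1 Mbps link
print(f"Transmission time: {tt * 1000:.0f} ms")   # 10 ms, as in the worked example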

Note: When the message is short and the bandwidth is high, the dominant factor is the
propagation time rather than the transmission time, which can then be ignored.

Queuing Time

Queuing time is the time a packet has to wait in a router before it can be sent.
Quite frequently the wire is busy, so we are not able to transmit a packet
immediately. The queuing time is not a fixed factor; it changes with the load on
the network. In such cases, the packet sits waiting, ready to go, in a queue. These
delays are determined mainly by the amount of traffic on the system: the more
traffic there is, the more likely a packet is to be stuck in the queue, sitting in
memory, waiting.

Processing Delay

Processing delay is the delay based on how long it takes the router to figure out
where to send the packet. As soon as the router finds it out, it will queue the packet
for transmission. These costs are predominantly based on the complexity of the
protocol. The router must decipher enough of the packet to make sense of which
queue to put the packet in. Typically the lower-level layers of the stack have simpler
protocols. If a router does not know which physical port to send the packet to, it
will send it to all the ports, queuing the packet in many queues immediately. By
contrast, at a higher level, such as with the IP protocol, the processing may
include making an ARP request to find the physical address of the destination
before queuing the packet for transmission. This situation is also considered a
processing delay.
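
Putting the four components together, the sketch below adds them up according to the
latency formula given earlier. It is purely illustrative: the queuing and processing
values are assumed numbers, since in practice they vary with traffic load and protocol
complexity.

# Minimal sketch: Latency = Propagation + Transmission + Queuing + Processing.

def total_latency(propagation_s: float, transmission_s: float,
                  queuing_s: float, processing_s: float) -> float:
    """Sum the four delay components (all in seconds)."""
    return propagation_s + transmission_s + queuing_s + processing_s

latency = total_latency(
    propagation_s=1 / 300,   # 1000 km of fiber, from the earlier example
    transmission_s=0.01,     # 10,000 bits at 1 Mbps, from the earlier example
    queuing_s=0.002,         # assumed: varies with traffic load
    processing_s=0.001,      # assumed: varies with protocol complexity
)
print(f"Total latency: {latency * 1000:.2f} ms")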

BANDWIDTH – DELAY PRODUCT

Bandwidth and Delay are two performance measurements of a link. However, what
is significant in data communications is the product of the two, the bandwidth-delay
product. Let us take two hypothetical cases as examples.

Case 1: Assume a link has a bandwidth of 1 bps and a delay of 5 s. Let us find the
bandwidth-delay product in this case. The product 1 x 5 is the maximum number of
bits that can fill the link: there can be at most 5 bits on the link at any time.

Case 2: Now assume a link has a bandwidth of 3 bps and the same 5 s delay. In this
case there can be a maximum of 3 x 5 = 15 bits on the line, because at each second
there are 3 bits on the line and the duration of each bit is 0.33 s.

For both examples, the product of bandwidth and delay is the number of bits that
can fill the link. This measure is important when we have to send data in bursts
and wait for the acknowledgment of each burst before sending the next one. To use
the maximum capability of the link, we need to make the size of our burst twice
the product of bandwidth and delay; in other words, we need to fill up the
full-duplex channel. The sender should send a burst of (2 x bandwidth x delay)
bits and then wait for the receiver’s acknowledgement for part of the burst before
sending the next burst. The quantity 2 x bandwidth x delay is the number of bits
that can be in transit at any time.
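
The two cases above can be reproduced with the short sketch below (illustrative only),
which computes the bandwidth-delay product and the burst size of 2 x bandwidth x delay
suggested for filling the full-duplex channel.

# Minimal sketch: bandwidth-delay product = number of bits that can fill the link.

def bandwidth_delay_product(bandwidth_bps: float, delay_s: float) -> float:
    """Return the number of bits that can be on the link at once."""
    return bandwidth_bps * delay_s

for bandwidth in (1, 3):                              # Case 1 and Case 2 (bps)
    bdp = bandwidth_delay_product(bandwidth, 5)       # 5-second delay in both cases
    print(f"{bandwidth} bps link: {bdp:.0f} bits fill the link, "
          f"full-duplex burst size = {2 * bdp:.0f} bits")
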
THROUGHPUT

Throughput is the number of messages successfully delivered per unit time. It is
controlled by the available bandwidth, the available signal-to-noise ratio, and
hardware limitations. The maximum throughput of a network may therefore be higher
than the actual throughput achieved in everyday use. The terms ‘throughput’ and
‘bandwidth’ are often thought of as the same, yet they are different: bandwidth is
the potential measurement of a link, whereas throughput is an actual measurement
of how fast we can send data.

Throughput is measured by tallying the amount of data transferred between
multiple locations during a specific period of time, usually expressed in bits per
second (bps); related units include bytes per second (Bps), kilobytes per second
(KBps), megabytes per second (MBps), and gigabytes per second (GBps). Throughput
may be affected by numerous factors, such as limitations of the underlying analog
physical medium, the available processing power of the system components, and
end-user behavior. When the various protocol overheads are taken into account, the
useful rate of transferred data can be significantly lower than the maximum
achievable throughput.

Consider a highway that has the capacity to move, say, 200 vehicles at a time, but
at a given moment only 150 vehicles are moving through it due to congestion on the
road. Here the capacity is 200 vehicles per unit time, while the throughput is 150
vehicles per unit time.

Example:

Input: A network with a bandwidth of 10 Mbps can pass only an average of 12,000 frames
per minute, where each frame carries an average of 10,000 bits. What will be the
throughput for this network?

Output: We can calculate the throughput as

Throughput = (12,000 x 10,000) / 60 = 2 Mbps

The throughput is nearly equal to one-fifth of the bandwidth in this case.
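
The same calculation can be written as a minimal Python sketch; the frame count, frame
size, and 60-second window are taken directly from the example above.

# Minimal sketch: throughput = bits successfully delivered per unit time.

def throughput_bps(frames: int, bits_per_frame: int, seconds: float) -> float:
    """Return the achieved throughput in bits per second."""
    return frames * bits_per_frame / seconds

tput = throughput_bps(frames=12_000, bits_per_frame=10_000, seconds=60)
print(f"Throughput: {tput / 1_000_000:.0f} Mbps on a 10 Mbps link")   # 2 Mbps
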
Difference between Bandwidth and Throughput

Bandwidth:
It is defined as the potential amount of data that can be transferred in a specific
period of time. It is the data-carrying capacity of the network or transmission
medium.

Throughput:
It is the measure of the amount of data actually transmitted during a specified time
period via a network, interface, or channel. It is also called the effective data
rate or payload rate.

Difference between Bandwidth and Throughput:

1. Basic: Bandwidth is the data-carrying capacity of a channel; throughput is a
practical measure of the amount of data actually transmitted through the channel.
2. Measured in: Bandwidth is measured in bits per second; throughput is the average
rate actually achieved, also measured in bits transferred per second (bps), and it
depends on the bandwidth.
3. Concerned with: Bandwidth is concerned with the transfer of data by some means;
throughput is concerned with the communication between two entities.
4. Relevance to layer: Bandwidth is a physical layer property; throughput applies at
any of the layers in the OSI model.
5. Dependence: Bandwidth does not depend on latency; throughput depends on latency.
6. Definition: Bandwidth refers to the maximum amount of data that can be passed from
one point to another; throughput is the actual measurement of the data being moved
through the medium at any particular time.
7. Effect: Bandwidth is not affected by physical obstructions because it is to some
extent a theoretical unit; throughput can easily be affected by changes in
interference, traffic in the network, network devices, transmission errors, and a
host of other factors.
8. Real-world example (water tap): Bandwidth is the rate at which water comes out of
the tap; throughput is the total amount of water that actually comes out.

Mesh Networks
A mesh network is a type of networking topology where each node (device) in the
network is connected to every other node, forming a mesh-like structure. In a mesh
network, data can take multiple paths to reach its destination, enhancing
redundancy and reliability. Mesh networks can be wired or wireless, although
wireless mesh networks are more common due to their flexibility and ease of
deployment.

Here are some key characteristics and advantages of mesh networks:

1. Self-Healing: Mesh networks are self-healing, meaning that if one node fails
or is disrupted, data can still find an alternative path to reach its destination.
This redundancy enhances reliability and fault tolerance.
2. Scalability: Mesh networks are highly scalable since new nodes can be added
to the network without significantly affecting its overall performance. Each
new node extends the coverage area and improves network resilience.
3. Flexibility: Mesh networks are flexible and adaptable to various
environments. They can be easily deployed in challenging terrains or areas
where traditional wired infrastructure is impractical or cost-prohibitive.
4. Dynamic Routing: Mesh networks employ dynamic routing algorithms that
determine the most efficient path for data transmission based on network
conditions such as traffic load, signal strength, and node availability. This
dynamic routing ensures optimal performance and load balancing.
5. Redundancy: Mesh networks offer inherent redundancy due to their multi-
path architecture. Even if one or more nodes fail or become unreachable, data
can still be routed through alternative paths, minimizing the risk of network
downtime.
6. Community Networks: Mesh networks are often used in community
networks, where users collectively contribute their resources (such as internet
connectivity) to create a shared network infrastructure. This approach enables
communities to establish affordable and decentralized communication
networks.

Mesh networks find applications in various scenarios, including:

 Urban and rural broadband access
 Disaster recovery and emergency communication
 Smart home automation and IoT (Internet of Things)
 Industrial automation and monitoring
 Military and defense communications
Overall, mesh networks offer a robust and resilient networking solution that can
adapt to diverse environments and provide reliable connectivity, making them
increasingly popular in both commercial and community-driven deployments.

Switches and buses are both components used in computer networking and
communication systems, but they serve different purposes and operate in different
ways.

1. Switch:
o A switch is a networking device that operates at the data link layer
(Layer 2) of the OSI model.
o It is used to connect multiple devices within a local area network
(LAN) and forward data packets between them.
o Switches use MAC addresses to identify devices on the network and
make forwarding decisions based on these addresses.
o They operate in full-duplex mode, meaning that data can be
transmitted and received simultaneously on each port.
o Switches are known for their ability to provide dedicated bandwidth to
each connected device, thereby reducing network congestion and
improving performance.
o They are commonly used in Ethernet networks to create efficient and
scalable LAN environments.
2. Bus:
o A bus is a communication pathway that connects various components
within a computer system, such as the CPU, memory, and peripheral
devices.
o It typically consists of a set of electrical conductors (wires or traces) on
a printed circuit board (PCB) or integrated into the motherboard.
o Buses can be classified based on their purpose and the components
they connect, such as the address bus, data bus, and control bus.
o The data bus is responsible for transferring data between the CPU and
memory or between the CPU and peripheral devices.
o Unlike switches, buses are primarily internal to a computer system and
are not used for interconnecting multiple devices or networks.
o Buses operate in various modes, including parallel and serial,
depending on the architecture and requirements of the system.
o In modern computer architectures, buses have evolved to support
higher data transfer rates and accommodate advancements in
technology, such as multi-core processors and high-speed peripherals.

In short, switches are networking devices used to connect multiple devices within a LAN and
forward data packets between them, while buses are communication pathways
within a computer system used to transfer data between components such as the
CPU, memory, and peripherals. While they serve different purposes, both switches
and buses play critical roles in enabling communication and data transfer in
computer networks and systems.
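
To make the MAC-address-based forwarding decision concrete, here is a minimal, purely
illustrative Python sketch of a learning switch and its MAC address table. The frame
fields and port numbers are assumptions made up for the example, not a real switch
implementation.

# Illustrative sketch of MAC-address-based forwarding in a learning switch.
# Frame fields and port numbers are assumed example values.

class LearningSwitch:
    def __init__(self, num_ports: int):
        self.num_ports = num_ports
        self.mac_table = {}                # MAC address -> port it was last seen on

    def handle_frame(self, src_mac: str, dst_mac: str, in_port: int) -> list[int]:
        """Learn the source MAC, then return the port(s) the frame is forwarded to."""
        self.mac_table[src_mac] = in_port              # learn where the sender lives
        if dst_mac in self.mac_table:                  # known destination: one port
            return [self.mac_table[dst_mac]]
        # Unknown destination: flood out every port except the one the frame came in on.
        return [p for p in range(self.num_ports) if p != in_port]

switch = LearningSwitch(num_ports=4)
print(switch.handle_frame("aa:aa:aa:aa:aa:01", "aa:aa:aa:aa:aa:02", in_port=0))  # flood: [1, 2, 3]
print(switch.handle_frame("aa:aa:aa:aa:aa:02", "aa:aa:aa:aa:aa:01", in_port=2))  # known: [0]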

3. Router:
o A router is a networking device that operates at the network layer
(Layer 3) of the OSI model.
o It is used to connect multiple networks together and route data packets
between them based on their IP addresses.
o Routers maintain routing tables that contain information about
network destinations and the best paths to reach them (a minimal lookup sketch
follows this list).
o Unlike switches, routers can connect networks with different network
addresses and protocols, such as connecting a LAN to the internet.
o Routers provide functions such as packet forwarding, network address
translation (NAT), and firewall capabilities for network security.
o They are essential for directing traffic between networks and ensuring
that data packets reach their intended destinations across complex
networks.
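
As a minimal illustration of how a router picks a path by destination IP address, the
Python sketch below uses the standard ipaddress module to choose the longest matching
prefix from a routing table. The prefixes, next hops, and destination addresses are
made-up example values, not a real router configuration.

# Illustrative sketch: routing-table lookup by longest prefix match.
import ipaddress

routing_table = {
    ipaddress.ip_network("10.0.0.0/8"):  "192.168.1.1",
    ipaddress.ip_network("10.1.0.0/16"): "192.168.1.2",
    ipaddress.ip_network("0.0.0.0/0"):   "192.168.1.254",   # default route
}

def next_hop(destination: str) -> str:
    """Return the next hop for the longest prefix that contains the destination IP."""
    addr = ipaddress.ip_address(destination)
    matches = [net for net in routing_table if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)       # longest prefix wins
    return routing_table[best]

print(next_hop("10.1.2.3"))   # matches 10.1.0.0/16 -> 192.168.1.2
print(next_hop("8.8.8.8"))    # only the default route matches -> 192.168.1.254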

In summary, switches are used to connect devices within a LAN and forward data
frames based on MAC addresses, while routers are used to connect multiple
networks together and route data packets between them based on IP addresses. Both
switches and routers are critical components of modern networking infrastructure,
and they work together to enable communication and data transfer across networks
of varying sizes and complexities.
