Assignment 02

Chapter 01

Circuit Switching
Circuit switching is a method used in telecommunication networks to establish a dedicated path
between two communicating devices. This dedicated path, or circuit, behaves like a physical
connection and guarantees a fixed bandwidth for the entire duration of the communication
session.

Think of it like renting a private line for your phone call. Once the connection is set up, you have
exclusive use of that line until you hang up. This ensures consistent data flow without any
interruptions or delays caused by sharing bandwidth with other users.

Here are some key points about circuit switching:

 Connection-oriented: A connection is established between sender and receiver before
data transmission begins.
 Dedicated path: A dedicated communication channel is allocated for the duration of the
session.
 Guaranteed bandwidth: The entire bandwidth of the allocated circuit is available for the
communication.
 Constant rate and delay: Data flows at a predictable, constant rate with a fixed, minimal
delay.

A common example of circuit switching is the traditional telephone network. When you make a
call, the network sets up a dedicated circuit between your phone and the recipient's phone. This
circuit remains active until you end the call.

Circuit switching offers several advantages:

 Guaranteed quality: Provides a reliable and predictable connection for real-time
communication like voice calls.
 Low latency: Minimal delays in data transmission ensure smooth and uninterrupted
communication.

However, there are also some drawbacks to consider:

 Inefficiency: Resources (bandwidth) are allocated even during periods of silence or
inactivity in a connection.
 Scalability issues: Setting up dedicated circuits for a large number of concurrent
connections can be resource-intensive for the network.

With the rise of data networks and the internet, packet switching has become a more prevalent
method. Packet switching breaks down data into smaller packets that are routed through the
network more dynamically, making efficient use of bandwidth.

Quantitative Comparison of Packet Switching and Circuit Switching


Packet switching and circuit switching differ significantly in how they utilize bandwidth,
leading to quantitative advantages and disadvantages for each approach. Here's a
breakdown:
Bandwidth Utilization:
 Circuit Switching:
o Advantage: Guaranteed bandwidth for the entire connection. This is ideal
for real-time applications like voice calls where consistent data flow is
crucial.
o Disadvantage: Inefficient when data transfer is bursty (with periods of
inactivity). Even during silent gaps, the allocated bandwidth remains
reserved for that specific connection, potentially leaving other users
waiting.
 Packet Switching:
o Advantage: Efficiently utilizes bandwidth by sharing it dynamically among
multiple users. Packets from different connections can be interleaved on
the same link, maximizing resource use.
o Disadvantage: No guaranteed bandwidth. If many users are transmitting
simultaneously, packets can experience delays or congestion, impacting
performance.
Here's how to quantify this difference:
Imagine a scenario with:
 Link capacity: B Mbps (Total bandwidth available)
 User bandwidth requirement: b Mbps (Bandwidth each user needs when
transmitting)
Circuit Switching:
 Maximum number of users supported (Ncs):
o Ncs = B / b (Limited by the total link capacity)
Packet Switching:
 Users can potentially transmit at a bursty rate (active for a fraction of time, p). We
can define:
o Nps: Number of users supported by packet switching
o Average user bandwidth utilization: actual_rate (average bandwidth used
by a user)
Packet switching can support a larger number of users (Nps) on the same shared link,
as long as the aggregate average demand (Nps × actual_rate) stays below the link
capacity (B).
Here's a simplified example (assuming negligible overhead for packets):
 If users are active only 10% of the time (p = 0.1), then the average user
bandwidth on a shared link becomes:
o actual_rate = b * p
Therefore, the number of users supported by packet switching (Nps) can be:
 Nps ≈ B / (b * p)
This shows packet switching can potentially support significantly more users (Nps)
compared to circuit switching (Ncs) when data transfer is bursty (low p).
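This comparison is easy to reproduce numerically. The following Python sketch uses
illustrative values (all assumed here: a 10 Mbps link, users needing 1 Mbps when active,
10% activity) to compute both limits, plus a binomial estimate of how often the
packet-switched link would be momentarily oversubscribed:

from math import comb

B = 10.0   # link capacity in Mbps (assumed example value)
b = 1.0    # bandwidth each user needs while transmitting, in Mbps (assumed)
p = 0.1    # fraction of time each user is actually active (assumed)

# Circuit switching: every user reserves b Mbps for the whole session.
n_cs = round(B / b)                    # Ncs = B / b

# Packet switching: capacity only has to cover the *average* demand.
n_ps = round(B / (b * p))              # Nps ~= B / (b * p)

# With n_ps users sharing the link, estimate how often more than B / b of
# them are active at the same instant (binomial model, independent users).
k_max = round(B / b)                   # simultaneous active users the link can carry
p_overload = sum(comb(n_ps, k) * p**k * (1 - p)**(n_ps - k)
                 for k in range(k_max + 1, n_ps + 1))

print(f"Circuit switching supports {n_cs} users")
print(f"Packet switching supports ~{n_ps} users")
print(f"P(oversubscribed) with {n_ps} users: {p_overload:.3f}")

With these particular numbers the shared link would momentarily exceed its capacity
roughly 40% of the time, a reminder that B / (b * p) is an upper bound; practical designs
admit fewer users to keep the overload probability small.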
Real-world considerations:
 Network traffic analysis helps determine average user activity (p) to estimate the
number of users efficiently supported by packet switching.
 Packet switching introduces additional overhead for managing and routing
packets, which can slightly reduce the effective bandwidth compared to a
dedicated circuit.
In conclusion, packet switching offers better bandwidth utilization for bursty data traffic,
leading to supporting more users. However, circuit switching guarantees bandwidth and
provides lower latency, making it preferable for real-time applications with constant data
flow. The choice between the two depends on the specific needs of the communication
scenario.

Car-Caravan Analogy
The car-caravan analogy is a classic way to visualize two different sources of delay on
a network link: transmission delay and propagation delay.
 Cars and caravans: Each car represents a bit, and a caravan of cars travelling
together represents a packet.
 Toll booths: A toll booth represents a router (or a link's transmitter). The booth
services cars one at a time, just as a link transmits a packet one bit at a time.
The time needed to push the entire caravan through the booth corresponds to
the transmission delay.
 The highway: Once serviced, cars drive between booths at a fixed speed. The
driving time corresponds to the propagation delay, set by the distance and the
propagation speed of the medium.
The analogy highlights two important points:
 The two delays are independent: a small packet on a slow link (short caravan,
slow booth) can still face a long propagation delay (long highway), and vice
versa.
 In a store-and-forward network, the entire caravan must be serviced before it
can proceed, just as a router must receive a complete packet before forwarding
it.
Overall, the car-caravan analogy separates the time needed to put a packet onto
a link (transmission delay) from the time the signal takes to cross the link
(propagation delay), two quantities that are easy to confuse.
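A quick calculation makes the analogy concrete. The following Python sketch uses the
numbers usually attached to this analogy (assumed here): a 10-car caravan, a toll booth
that services one car every 12 seconds, booths 100 km apart, and cars travelling at
100 km/h.

# Car-caravan analogy: transmission vs. propagation delay.
# All numbers are illustrative assumptions.

cars_per_caravan = 10        # "bits" per "packet"
service_time_per_car = 12    # seconds per car at the toll booth
booth_spacing_km = 100       # distance between toll booths ("link length")
car_speed_kmh = 100          # "propagation speed" of the cars

# Transmission delay: time for the booth to service the whole caravan.
transmission_delay_s = cars_per_caravan * service_time_per_car   # 120 s

# Propagation delay: time for a car to drive between booths.
propagation_delay_s = booth_spacing_km / car_speed_kmh * 3600    # 3600 s

print(f"Transmission delay: {transmission_delay_s} s (2 minutes)")
print(f"Propagation delay:  {propagation_delay_s:.0f} s (1 hour)")

The result mirrors a large packet on a long link: "transmitting" the caravan at the booth
takes 2 minutes, while "propagating" it down the highway takes a full hour.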

One-hop Transmission Delay


One-hop delay refers to the time it takes for data to travel across a single network
connection, like a link between two routers or a device and a router. It's essentially the
time it takes for a single packet to traverse that specific link.
Here's a breakdown of the factors affecting one-hop delay:
 Transmission speed (bandwidth): Higher bandwidth allows faster data
transmission, reducing one-hop delay. Imagine a wider highway; more data can
flow through it at once, leading to quicker travel time.
 Propagation speed: This is the physical limitation of how fast signals travel
through the medium (like fiber optic cables or wireless airwaves). It's a constant
value for a specific medium and cannot be improved.
 Packet processing time: Routers and other network devices take some time to
process incoming data packets, checking addresses and routing them
appropriately. This processing time adds to the one-hop delay.
 Distance: Longer physical distances between devices increase the one-hop
delay through the propagation-speed limit. On short local links this term is
negligible, but on long-haul links (for example, intercontinental fiber) propagation
delay can dominate the other factors.
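These factors combine into the standard per-hop delay formula, d_hop = d_proc +
d_trans + d_prop, where d_trans = L / R (packet size over link rate) and d_prop = d / s
(link length over propagation speed); queuing delay, covered in a later section, adds a
fourth term. The following Python sketch works through the formula with illustrative
values (all assumed):

# One-hop delay decomposition: d_hop = d_proc + d_trans + d_prop.
# (Queuing delay, discussed separately below, would add a fourth term.)
# All input values are illustrative assumptions.

packet_size_bits = 12_000          # L: a 1500-byte packet
link_rate_bps = 100e6              # R: 100 Mbps link
link_length_m = 50_000             # d: 50 km of fiber
prop_speed_mps = 2e8               # s: roughly 2/3 the speed of light in fiber
processing_delay_s = 20e-6         # router lookup/checksum time (assumed)

transmission_delay_s = packet_size_bits / link_rate_bps   # L / R
propagation_delay_s = link_length_m / prop_speed_mps      # d / s

one_hop_delay_s = processing_delay_s + transmission_delay_s + propagation_delay_s
print(f"d_trans = {transmission_delay_s*1e6:.0f} us, "
      f"d_prop = {propagation_delay_s*1e6:.0f} us, "
      f"d_hop = {one_hop_delay_s*1e6:.0f} us")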
Measuring one-hop delay:
There are various tools and techniques to measure one-hop delay, including:
 Ping: This common network utility sends test packets to a specific device and
measures the round-trip time (the time for a packet to go and come back). Half of
the round-trip time approximates the one-way delay; when the target is a single
hop away (and processing at the far end is negligible), this is the one-hop delay.
 Traceroute: This tool helps visualize the path taken by packets to reach a
destination. By analyzing the time taken at each hop, you can estimate the one-
hop delay for individual links.
One-hop delay is a crucial metric for assessing network performance, especially
for real-time applications like online gaming or video conferencing. Lower one-
hop delays translate to faster data transfer and reduced latency, leading to a
smoother user experience.

Queuing Delay
Queuing delay (also spelled queueing delay) refers to the amount of time a data
packet (or any request for service) spends waiting in a queue before it can be
processed. It is a major contributor to overall network latency and can significantly
impact performance.
Imagine a line of people waiting to be served at a counter. The queuing delay is the time
each person spends waiting in line before reaching the counter. Similarly, in a network:
 Packets arrive: Data packets requesting transmission enter a buffer (temporary
storage) at a router or other network device.
 Queuing: If the device is busy processing other packets, the arriving packet joins
a queue and waits for its turn.
 Processing: Once the device becomes available, it processes the packet at the
head of the queue and transmits it.
Here are the key factors affecting queuing delay:
 Traffic intensity: Higher traffic volume (more packets arriving) means longer
queues and increased waiting time. Think of more people joining the line, leading
to longer wait times for everyone.
 Service rate: The speed at which the device processes packets. Faster
processing reduces queuing time, just like having more cashiers at a store
shortens waiting lines.
 Queue size: Limited buffer space can lead to packet overflow and loss if the
queue fills up completely. Imagine a line reaching its maximum capacity, and
people have to wait outside, potentially giving up.
Queuing delay can be difficult to calculate precisely as it depends on real-time traffic
conditions. However, queuing theory provides mathematical models to estimate
average queuing delays based on traffic patterns and service rates.
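As a concrete illustration, the following Python sketch computes the traffic intensity
(arrival rate divided by service rate) and the average wait predicted by the M/M/1
model, one of the simplest queuing-theory models; all input values are assumed:

# Queuing delay estimate using the M/M/1 model (one simple textbook model).
# All input values are illustrative assumptions.

link_rate_bps = 10e6          # R: 10 Mbps outgoing link
packet_size_bits = 8_000      # L: average packet size (1000 bytes)
arrival_rate_pps = 1_000      # lambda: average packet arrivals per second

service_rate_pps = link_rate_bps / packet_size_bits   # mu = R / L = 1250 pkt/s
rho = arrival_rate_pps / service_rate_pps             # traffic intensity

if rho < 1:
    # M/M/1 mean time spent waiting in the queue (excluding service itself).
    wq_s = rho / (service_rate_pps - arrival_rate_pps)
    print(f"Traffic intensity: {rho:.2f}")
    print(f"Average queuing delay: {wq_s*1e3:.2f} ms")
else:
    print("rho >= 1: the queue grows without bound (delay -> infinity)")

With these numbers the traffic intensity is 0.8 and the average queuing delay is 3.2 ms;
as rho approaches 1, the delay rises sharply, matching the intuition about long lines.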
Impact of queuing delay:
 Increased latency: Longer queuing delays contribute to higher overall network
latency, impacting applications like video conferencing or online gaming where
responsiveness is crucial.
 Packet loss: If the queue overflows due to excessive traffic, packets can be
dropped, leading to data loss and requiring retransmission.
Techniques to manage queuing delay:
 Traffic shaping: Controlling the incoming traffic rate to prevent overloading
devices and queues.
 Prioritization: Prioritizing certain types of packets (like real-time data) to ensure
they experience shorter queuing delays.
 Queue management algorithms: Network devices use various algorithms to
manage queues efficiently, such as first-in-first-out (FIFO) or priority queuing.
By understanding queuing delay and its impact, network engineers can implement
strategies to optimize network performance and ensure a smooth user experience.
End-to-End Delay
End-to-end delay, also known as one-way delay (OWD), refers to the total time it takes
for a data packet to travel from its source (origin) to its destination (target) across a
network. It essentially measures the elapsed time between a packet being sent by the
source application and arriving at the destination application.
Here's a breakdown of the components contributing to end-to-end delay:
1. Transmission Delay: The time it takes to transmit the entire packet onto the
network from the source. This depends on the packet size and the transmission
speed (bandwidth) of the outgoing link. Think of it as the time it takes to load a
truck full of boxes (packet) onto a highway (network).
2. Propagation Delay: The physical time it takes for the signal carrying the packet
to travel across the network medium (cables, airwaves) between devices. This
speed is limited by the laws of physics and depends on the medium itself (e.g.,
speed of light in fiber optics). Imagine the travel time of the truck itself on the
highway.
3. Processing Delay: The time spent by routers and other network devices
processing the packet headers, checking addresses, and determining the
appropriate route for forwarding. This includes tasks like looking up routing tables
and performing security checks. Think of the time it takes for the truck driver to
check directions and pay tolls at checkpoints along the highway.
4. Queuing Delay: The time a packet spends waiting in a queue at a router or other
network device before it can be processed and transmitted further. This can
happen if the device is busy handling other traffic. Imagine the truck waiting in
line at a busy intersection or toll booth.
End-to-end delay is the sum of all these individual delays, accumulated over every hop
along the path:
End-to-End Delay = Transmission Delay + Propagation Delay + Processing
Delay + Queuing Delay
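For a path of N similar hops this sum becomes N × (d_proc + d_trans + d_prop +
d_queue). The following Python sketch works the formula with illustrative values (all
assumed):

# End-to-end delay over N similar hops:
#   d_e2e = N * (d_proc + d_trans + d_prop + d_queue)
# All input values are illustrative assumptions.

n_hops = 5
packet_size_bits = 12_000       # 1500-byte packet
link_rate_bps = 100e6           # 100 Mbps links
hop_length_m = 200_000          # 200 km per hop
prop_speed_mps = 2e8            # signal speed in fiber
processing_delay_s = 20e-6      # per-router processing (assumed)
queuing_delay_s = 500e-6        # average per-hop queuing (assumed)

per_hop_s = (processing_delay_s
             + packet_size_bits / link_rate_bps   # transmission
             + hop_length_m / prop_speed_mps      # propagation
             + queuing_delay_s)

print(f"End-to-end delay over {n_hops} hops: {n_hops * per_hop_s * 1e3:.2f} ms")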
Understanding end-to-end delay is crucial for evaluating network performance,
especially for applications sensitive to latency (delay) such as:
 Real-time communication: Video conferencing, online gaming, and live
streaming require low end-to-end delay for smooth interaction and
responsiveness.
 Voice over IP (VoIP): Delays can cause choppy audio and disrupt
conversations.
 Cloud applications: Fast response times are essential for a seamless user
experience.
Techniques to reduce end-to-end delay:
 Upgrading network infrastructure: Increasing bandwidth (transmission speed)
and using faster network technologies can reduce transmission and propagation
delays.
 Optimizing network paths: Choosing efficient routes with fewer hops (devices
to traverse) can minimize processing delays.
 Prioritizing traffic: Techniques like Quality of Service (QoS) can prioritize real-
time traffic over non-critical data, reducing queuing delays for time-sensitive
applications.
 Content Delivery Networks (CDNs): Replicating content across geographically
distributed servers can bring content closer to users, reducing propagation delay.
By minimizing each of these contributing factors, network engineers can strive to
achieve lower end-to-end delay and deliver a more responsive network experience.

End-to-End Throughput
End-to-end throughput refers to the rate of successful data delivery between a source
and a destination across a network, measured over a specific period. It essentially
quantifies the amount of usable data that reaches the receiving end per unit of time.
Here's a breakdown of how it relates to other network performance metrics:
 End-to-end delay: This focuses on how long it takes for individual data packets
to travel from source to destination.
 Throughput: This emphasizes the total amount of data successfully delivered
over a certain timeframe.
Factors affecting end-to-end throughput:
 Bandwidth: The maximum data transfer rate of the network path between
source and destination, set in practice by the slowest (bottleneck) link; see the
sketch after this list. Think of it as the width of a pipe; a wider pipe allows for
more water (data) to flow through at once.
 Packet loss: If data packets are dropped due to errors or congestion, it reduces
the overall amount of data delivered, impacting throughput. Imagine leaks in the
pipe causing water loss.
 Network congestion: When multiple devices compete for bandwidth on a busy
network, it can lead to slower data transfer rates and reduced throughput. Think
of a congested highway with slow traffic movement.
 Protocol overhead: Network protocols add headers and trailers to data packets,
which contribute to overall data size. While essential for routing and
management, excessive overhead can slightly reduce effective throughput.
Imagine the extra weight of packaging materials on the goods being transported.
 Processing delays: While not directly impacting throughput, queuing and
processing delays at network devices can indirectly affect it by slowing down the
overall data flow. Think of delays at checkpoints along the route hindering the
overall delivery speed.
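A useful rule of thumb, following from the bandwidth point above, is that end-to-end
throughput is capped by the slowest (bottleneck) link on the path: throughput ≈
min(R1, R2, ..., Rn). The following Python sketch illustrates this with assumed link rates:

# End-to-end throughput is bounded by the bottleneck (slowest) link:
#   throughput ~= min(R1, R2, ..., Rn)
# Link rates and file size below are illustrative assumptions.

link_rates_bps = [100e6, 40e6, 1000e6]   # three links along the path

throughput_bps = min(link_rates_bps)      # the bottleneck link wins

file_size_bits = 4 * 8e9                  # a 4 GB file
transfer_time_s = file_size_bits / throughput_bps

print(f"Bottleneck throughput: {throughput_bps/1e6:.0f} Mbps")
print(f"Time to move a 4 GB file: {transfer_time_s:.0f} s")

Packet loss, congestion, and protocol overhead then push the achieved rate below this
ceiling, which is what the factors above describe.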
Optimizing end-to-end throughput:
 Bandwidth upgrades: Increasing bandwidth on the bottleneck links in the
network path can significantly improve throughput.
 Congestion control mechanisms: Techniques like TCP congestion control can
help manage data flow and prevent network overload, reducing packet loss and
maintaining efficient throughput.
 Data compression: Reducing the size of data by compression can minimize the
impact of protocol overhead and potentially improve throughput, especially for
text-based data.
 Error correction techniques: Implementing error correction mechanisms can
help ensure data integrity and minimize packet loss, leading to higher throughput.
 Traffic shaping and prioritization: Prioritizing critical data traffic and shaping
overall traffic flow can optimize network resource utilization and improve
throughput for important applications.
By considering these factors and implementing optimization strategies, network
engineers can strive to achieve higher end-to-end throughput, ensuring efficient data
transfer across the network.

The IP Stack and Protocol Layering


The Internet Protocol (IP) stack, also known as the TCP/IP model, is a fundamental
concept in understanding network communication. It defines a layered approach to data
transmission, breaking down complex tasks into manageable steps. This modular
design allows different protocols to handle specific functions, promoting flexibility and
interoperability between diverse network devices and applications.
The Layers of the IP Stack:
The IP stack consists of four main layers, each with its own responsibilities (a toy
encapsulation sketch follows the list):
1. Application Layer (Layer 4): This layer interacts directly with user applications
like web browsers, email clients, or online games. It provides services specific to
the application and is responsible for formatting data for network transmission
according to the chosen application-level protocol (e.g., HTTP, FTP, SMTP).
Think of it as the department that packages goods (data) for specific purposes
(applications).
2. Transport Layer (Layer 3): This layer handles reliable data delivery between
applications on different devices. It establishes connections (like TCP) or
connectionless communication (like UDP), manages packet flow control, and
ensures data integrity by checking for errors during transmission. Imagine it as
the shipping company ensuring the packages are delivered to the correct
address and arrive in good condition.
 TCP (Transmission Control Protocol): Provides a reliable, connection-oriented
service, guaranteeing in-order delivery of data packets and error correction.
 UDP (User Datagram Protocol): Offers a connectionless, best-effort service
suitable for real-time applications where speed is more critical than guaranteed
delivery (e.g., online gaming).
3. Network Layer (Layer 2): This layer is responsible for routing data packets
across the network. It determines the most efficient path for packets to reach
their destination by using logical addresses (IP addresses) and routing protocols.
Imagine it as the logistics department figuring out the best route for the packages
to be shipped.
 IP (Internet Protocol): The core protocol of this layer, defining IP addresses for
devices and enabling internetworking.
4. Network Access Layer (Layer 1): This layer deals with the physical
transmission of data packets over the network medium (cables, wireless). It
interacts with the network interface card (NIC) to convert digital data into
electrical signals or radio waves suitable for the specific medium. Think of it as
the packaging department putting the goods on trucks or airplanes for physical
transportation.
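Layering is easiest to see as encapsulation: each layer wraps the data handed down
from the layer above with its own header. The following Python sketch is a toy
illustration only; the header fields are simplified stand-ins, not real protocol wire
formats:

# Toy illustration of protocol layering / encapsulation.
# Header formats are simplified stand-ins, not real TCP/IP wire formats.

def app_layer(message: str) -> bytes:
    # Application layer: format the user's data (e.g., an HTTP-like request).
    return f"GET /index.html\r\n\r\n{message}".encode()

def transport_layer(payload: bytes, src_port: int, dst_port: int) -> bytes:
    # Transport layer: add ports so the right application receives the data.
    return f"TCP|src={src_port}|dst={dst_port}|".encode() + payload

def network_layer(segment: bytes, src_ip: str, dst_ip: str) -> bytes:
    # Network layer: add IP addresses used to route across networks.
    return f"IP|src={src_ip}|dst={dst_ip}|".encode() + segment

def link_layer(datagram: bytes, src_mac: str, dst_mac: str) -> bytes:
    # Network access layer: add frame addressing for the local link.
    return f"ETH|src={src_mac}|dst={dst_mac}|".encode() + datagram

frame = link_layer(
    network_layer(
        transport_layer(app_layer("hello"), src_port=51000, dst_port=80),
        src_ip="192.0.2.10", dst_ip="198.51.100.7"),
    src_mac="aa:bb:cc:00:00:01", dst_mac="aa:bb:cc:00:00:02")

print(frame.decode())  # each layer's header wraps everything above it

At the receiving end the process runs in reverse: each layer strips its own header and
hands the remaining payload up to the layer above.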
Benefits of Layered Approach:
 Modular Design: Each layer performs its specific function independently,
promoting modularity and easier development and management of protocols.
 Interoperability: Standardized protocols at each layer allow different devices
and applications to communicate seamlessly as long as they speak the same
"language" within that layer.
 Scalability: New protocols and applications can be easily integrated into the
existing framework by adding functionalities at specific layers without affecting
the overall structure.
The IP stack serves as a foundational model for internet communication, providing a
structured and efficient way for data to travel across networks. By understanding the
functionalities of each layer and the protocols that operate within them, you gain a
deeper insight into how information flows through the complex web of the internet.
