Computer Network Assignment
Submitted to:
Submitted by Group 1:
Malaika (2125165031)
Switch:
A switch is a networking device that connects devices within a network and forwards incoming data to the port that leads to its destination.
Switching:
Switching is the process of forwarding incoming data packets from one device to another in a network.
In large networks there can be multiple paths from sender to receiver, and the switching technique decides the best route for the data transmission.
When data arrives on a port it is called ingress, and when data leaves a port it is called egress.
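As a rough illustration of this forwarding idea, the Python sketch below models a very simplified switch that learns which port a source address arrived on (ingress) and then chooses the outgoing port or ports (egress). The class name, addresses, and port numbers are made-up examples, not a real switch implementation.

class SimpleSwitch:
    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.mac_table = {}  # learned mapping: source address -> ingress port

    def receive(self, ingress_port, src_mac, dst_mac):
        # Learn which port the sender is reachable on.
        self.mac_table[src_mac] = ingress_port
        # Known destination: forward out that single egress port.
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]
        # Unknown destination: flood out every port except the ingress port.
        return [p for p in range(self.num_ports) if p != ingress_port]

switch = SimpleSwitch(num_ports=4)
print(switch.receive(0, "AA", "BB"))  # BB unknown -> flood to ports 1, 2, 3
print(switch.receive(2, "BB", "AA"))  # AA was learned on port 0 -> [0]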
Types of Switching
There are three types of switching methods:
Circuit Switching
Message Switching
Packet Switching
Circuit Switching:
In circuit switching, a dedicated path is set up between the sender and the receiver before any data is transferred.
Advantages:
It decreases the delay the user experiences before and during a call.
Packets are always delivered in the correct order.
Disadvantages:
The dedicated channel stays reserved for the whole call even when no data is being sent, which can waste bandwidth.
Message Switching:
Advantages:
Reliability:
Message switching is a reliable form of communication, as it ensures the delivery of messages even if some of them are delayed.
Cost effectiveness:
Message switching is more cost-effective than many other forms of communication because it allows multiple messages to share the same communication lines.
Flexibility:
Message switching allows for flexible communication as it enables the transmission of
messages of varying lengths and formats.
Disadvantages:
The message switches must be equipped with sufficient storage to hold each message until it has been forwarded.
Long delays can occur because of the store-and-forward mechanism used by message switching. A rough estimate of this delay is sketched below.
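To see why the store-and-forward delay matters, here is a rough back-of-the-envelope calculation in Python. The message size, link rate, and hop count are assumed example values, and propagation and queuing delays are ignored.

def message_switching_delay(message_bits, link_bps, hops):
    # Each of the 'hops' links must retransmit the entire message,
    # because every node stores the whole message before forwarding it.
    transmission_time = message_bits / link_bps
    return hops * transmission_time

delay = message_switching_delay(message_bits=8_000_000,  # 1 MB message
                                link_bps=10_000_000,     # 10 Mbps links
                                hops=4)
print(f"approximate delay: {delay:.2f} s")  # 4 hops * 0.8 s = 3.20 s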
Packet Switching:
In packet switching, the message is divided into small packets that are routed through the network independently and reassembled at the destination.
Advantages:
Network bandwidth is used efficiently because many flows share the same links, and packets can be rerouted if a link fails.
What is congestion?
Congestion is a state of a network in which its nodes and links are overloaded with traffic. It usually makes the network slow for end users and can lead to delays, packet loss, and other performance problems.
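As a toy illustration of how congestion arises, the sketch below models a router whose packets arrive faster than its outgoing link can serve them, so the buffer overflows and packets are dropped. All the numbers are arbitrary example values.

QUEUE_LIMIT = 5      # router buffer size (packets)
ARRIVAL_RATE = 8     # packets arriving per time slot
SERVICE_RATE = 5     # packets the outgoing link can send per time slot

queue = 0
dropped = 0
for slot in range(10):
    queue += ARRIVAL_RATE               # traffic comes in
    queue -= min(queue, SERVICE_RATE)   # the link drains what it can
    if queue > QUEUE_LIMIT:
        dropped += queue - QUEUE_LIMIT  # buffer overflow -> packet loss
        queue = QUEUE_LIMIT

print(f"packets dropped due to congestion: {dropped}")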
Effects of Congestion
Increased delays
Packet loss
Slower network performance for end users
Congestion Control
Congestion control is the process of managing traffic flow in a network to prevent congestion from occurring. There are two main types of congestion control: open loop and closed loop.
Open Loop Congestion Control
In open loop congestion control, policies are applied to prevent congestion before it happens. Here, congestion control is handled by either the source or the destination.
Retransmission policy:
This policy governs how lost or corrupted packets are retransmitted. If the sender believes that a sent packet has been lost or corrupted, it retransmits the packet; done carelessly, this retransmission can itself increase the congestion in the network, so retransmission timers should be designed with congestion in mind, as in the sketch below.
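One common way to keep retransmissions from making congestion worse is to wait longer between successive retries. The Python sketch below is a generic, simplified illustration of such a policy using exponential backoff; send_packet() is a hypothetical stand-in for a real send-and-wait-for-acknowledgement step, and the loss rate and timeout values are assumptions.

import random
import time

def send_packet(data):
    # Pretend roughly 30% of packets are lost or their ACK never arrives.
    return random.random() > 0.3

def send_with_retransmission(data, max_retries=5, base_timeout=0.1):
    timeout = base_timeout
    for attempt in range(max_retries + 1):
        if send_packet(data):
            return True          # acknowledged, no further retransmission needed
        time.sleep(timeout)      # back off before retransmitting
        timeout *= 2             # exponential backoff eases pressure on the network
    return False                 # give up after max_retries retransmissions

print(send_with_retransmission(b"hello"))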
Admission control policy:
This mechanism controls the amount of traffic that is allowed to enter the network.
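As one possible illustration of admission control, the sketch below uses a token bucket: a packet is admitted only if a token is available, which caps the rate at which traffic may enter the network. The rate and bucket size are assumed example values, not taken from any particular protocol.

import time

class TokenBucket:
    def __init__(self, rate_per_sec, capacity):
        self.rate = rate_per_sec      # tokens added per second
        self.capacity = capacity      # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def admit(self):
        now = time.monotonic()
        # Refill tokens based on how much time has passed.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True    # packet admitted into the network
        return False       # packet rejected (or queued) to prevent congestion

bucket = TokenBucket(rate_per_sec=100, capacity=10)
admitted = sum(bucket.admit() for _ in range(50))
print(f"admitted {admitted} of 50 packets in a sudden burst")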
Resource reservation: This mechanism reserves resources for specific flows or applications.
Load balancing:
This mechanism distributes traffic across multiple links or paths to avoid overloading any one link
or path.
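A minimal sketch of this idea, assuming three hypothetical links and simple round-robin selection:

from itertools import cycle

# Spread traffic across several links in round-robin order so that
# no single link is overloaded. Link names are made-up placeholders.
links = cycle(["link-A", "link-B", "link-C"])

def pick_link():
    return next(links)

for packet_id in range(6):
    print(f"packet {packet_id} -> {pick_link()}")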
Routing:
This mechanism can be used to direct traffic away from congested areas.
Closed Loop Congestion Control
This type of congestion control is reactive: it responds to congestion after it has already occurred.
Backpressure:
Backpressure is a technique in which a congested node stops receiving packets from its upstream node. This may cause the upstream node or nodes to become congested in turn and to reject data from the nodes above them.
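A simplified sketch of backpressure, using a bounded queue to stand in for the congested node's buffer (the queue size and packet names are made-up):

import queue

downstream = queue.Queue(maxsize=3)   # congested node's limited buffer

def upstream_send(packet):
    try:
        downstream.put(packet, block=False)
        print(f"forwarded {packet}")
        return True
    except queue.Full:
        # Downstream is congested: refuse the packet so the pressure
        # propagates back toward the nodes above (and finally the source).
        print(f"backpressure: refusing {packet}")
        return False

for i in range(5):
    upstream_send(f"pkt-{i}")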
Choke Packet Technique:
A choke packet is a packet sent by a node to the source to inform it of congestion. Whenever resource utilization exceeds a threshold value set by the administrator, the router sends a choke packet directly to the source, giving it feedback to reduce its traffic.
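The sketch below illustrates the choke packet idea in simplified form: a router compares its utilization against an administrator-set threshold and, when the threshold is exceeded, sends feedback that makes the source cut its sending rate. The threshold, rates, and the halving factor are assumptions for illustration.

THRESHOLD = 0.8   # utilization level set by the administrator

class Source:
    def __init__(self):
        self.rate = 100.0            # packets per second

    def handle_choke_packet(self):
        self.rate *= 0.5             # feedback: reduce traffic by half
        print(f"source slowed down to {self.rate} pps")

class Router:
    def __init__(self, source):
        self.source = source

    def monitor(self, utilization):
        if utilization > THRESHOLD:
            # Send feedback directly to the source, not hop by hop.
            self.source.handle_choke_packet()

src = Source()
router = Router(src)
for u in [0.4, 0.7, 0.9, 0.95]:      # rising load on the router
    router.monitor(u)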
Causes of Congestion:
Congestion often occurs during peak usage times or due to sudden spikes in data transfer. Common causes include:
Limited bandwidth
Network bottlenecks
Packet collisions
Examples of congestion:
1) Slow Internet speeds:
If there is too much traffic on your Internet connection, you may experience slow Internet speeds. This is especially common during peak times of day, such as when everyone is coming home from work or school and using the Internet.
2) Dropped connections:
If a network is congested, it may drop packets. This can lead to dropped connections or lost data.
3) Website outages:
If a website is experiencing high traffic, it may go down completely because the website's servers cannot handle the load.
4) Inefficient routing:
When parts of the network are congested, traffic may be forced onto longer or less efficient routes, which further degrades performance.