Module 4: End-to-End Protocols and Congestion Control (20MCA13)

Syllabus: End-to-End Protocols and Congestion Control: Simple Demultiplexer (UDP), Reliable Byte Stream (TCP), Queuing Disciplines, TCP Congestion Control, Congestion-Avoidance Mechanisms.

The previous three chapters described various technologies that can be used to connect together a collection of computers, ranging from simple Ethernets and wireless networks to global-scale internetworks. The next problem is to turn this host-to-host packet delivery service into a process-to-process communication channel. This is the role played by the transport level of the network architecture, which, because it supports communication between application programs running in end nodes, is sometimes called the end-to-end protocol.

Two forces shape the end-to-end protocol. From above, the application-level processes that use its services have certain requirements. The following list itemizes some of the common properties that a transport protocol can be expected to provide:

• Guarantees message delivery
• Delivers messages in the same order they are sent
• Delivers at most one copy of each message
• Supports arbitrarily large messages
• Supports synchronization between the sender and the receiver
• Allows the receiver to apply flow control to the sender
• Supports multiple application processes on each host

From below, the underlying network upon which the transport protocol operates has certain limitations in the level of service it can provide.
Some of the more typical limitations of the network are that it may

• Drop messages
• Reorder messages
• Deliver duplicate copies of a given message
• Limit messages to some finite size
• Deliver messages after an arbitrarily long delay

For updates visit: https://sites.google.com/view/satishkvr/

This chapter looks at the algorithms in the context of four representative services: a simple asynchronous demultiplexing service, a reliable byte-stream service, a request/reply service, and a service for real-time applications. In the case of the demultiplexing and byte-stream services, we use the Internet's User Datagram Protocol (UDP) and Transmission Control Protocol (TCP), respectively, to illustrate how these services are provided in practice. In the case of a request/reply service, we discuss the role it plays in a Remote Procedure Call (RPC) service and what features that entails. Finally, real-time applications make particular demands on the transport protocol, such as the need to carry timing information that allows audio or video samples to be played back at the appropriate point in time. We look at the requirements placed by applications on such a protocol and at the most widely used example, the Real-Time Transport Protocol (RTP).

4.1 SIMPLE DEMULTIPLEXER (UDP)

The simplest possible transport protocol is one that extends the host-to-host delivery service of the underlying network into a process-to-process communication service. There are likely to be many processes running on any given host, so the protocol needs to add a level of demultiplexing, thereby allowing multiple application processes on each host to share the network. Aside from this requirement, the transport protocol adds no other functionality to the best-effort service provided by the underlying network.
The Internet's User Datagram Protocol is an example of such a transport protocol. The only interesting issue in such a protocol is the form of the address used to identify the target process. A common approach, and the one used by UDP, is for processes to indirectly identify each other using an abstract locator, usually called a port. The basic idea is for a source process to send a message to a port and for the destination process to receive the message from a port.

The header for an end-to-end protocol that implements this demultiplexing function typically contains an identifier (port) for both the sender (source) and the receiver (destination) of the message. For example, the UDP header is given in Figure 4.1. Notice that the UDP port field is only 16 bits long. This means that there are up to 64K possible ports, clearly not enough to identify all the processes on all the hosts in the Internet. Fortunately, ports are not interpreted across the entire Internet, but only on a single host. That is, a process is really identified by a port on some particular host: a (port, host) pair. In fact, this pair constitutes the demultiplexing key for the UDP protocol.

Rama Satish KV, RNSIT, Bengaluru.

The next issue is how a process learns the port for the process to which it wants to send a message. Typically, a client process initiates a message exchange with a server process. Once a client has contacted a server, the server knows the client's port (from the SrcPrt field contained in the message header) and can reply to it. In the Internet, for example, the Domain Name Server (DNS) receives messages at well-known port 53 on each host, the mail service listens for messages at well-known port 25, the Unix talk program accepts messages at well-known port 517, and so on.

When a message arrives, the protocol (e.g., UDP) appends the message to the end of the queue for the destination port.
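As a concrete illustration (not part of the original notes), the port-based demultiplexing described above can be seen directly with Python's standard socket API. The message contents here are made up for the example; the OS delivers each datagram to the socket bound to the destination port, and `recvfrom` returns the sender's (host, port) pair so the receiver can reply:

```python
import socket

# Two "server" processes on the same host, each bound to its own UDP port.
recv_a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_a.bind(("127.0.0.1", 0))   # port 0: let the OS assign a free port
recv_b.bind(("127.0.0.1", 0))

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"to A", recv_a.getsockname())  # (host, port) is the demux key
sender.sendto(b"to B", recv_b.getsockname())

msg_a, src = recv_a.recvfrom(1024)  # src is the sender's (host, port) pair,
msg_b, _ = recv_b.recvfrom(1024)    # which a server would use to reply
```

Each message reaches the socket bound to the port it was addressed to, even though all three sockets live on the same host.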
Should the queue be full, the message is discarded. There is no flow-control mechanism in UDP to tell the sender to slow down. When an application process wants to receive a message, one is removed from the front of the queue. If the queue is empty, the process blocks until a message becomes available.

Finally, although UDP does not implement flow control or reliable/ordered delivery, it does provide one more function aside from demultiplexing messages to some application process: it also ensures the correctness of the message by the use of a checksum.

4.2 RELIABLE BYTE STREAM (TCP)

A more sophisticated transport protocol is one that offers a reliable, connection-oriented, byte-stream service. Such a service has proven useful to a wide assortment of applications because it frees the application from having to worry about missing or reordered data. The Internet's Transmission Control Protocol is probably the most widely used protocol of this type.

TCP guarantees the reliable, in-order delivery of a stream of bytes. It is a full-duplex protocol, meaning that each TCP connection supports a pair of byte streams, one flowing in each direction. It also includes a flow-control mechanism for each of these byte streams that allows the receiver to limit how much data the sender can transmit at a given time. Finally, like UDP, TCP supports a demultiplexing mechanism that allows multiple application programs on any given host to simultaneously carry on a conversation with their peers. In addition to the above features, TCP also implements a highly tuned congestion-control mechanism.

At the heart of TCP is the sliding window algorithm. TCP supports logical connections between processes that are running on any two computers in the Internet.
This means that TCP needs an explicit connection establishment phase during which the two sides of the connection agree to exchange data with each other. TCP also has an explicit connection teardown phase. One of the things that happens during connection establishment is that the two parties establish some shared state to enable the sliding window algorithm to begin. Connection teardown is needed so each host knows it is OK to free this state.

4.2.1 Differences between TCP and UDP

• Reliability: TCP is reliable, as it guarantees delivery of data to the destination. In UDP, the delivery of data to the destination cannot be guaranteed.
• Connection: TCP is a connection-oriented protocol. Connection orientation means that the communicating devices should establish a connection before transmitting data and should close the connection after transmitting the data. UDP is connectionless; there is no overhead for opening, maintaining, and terminating a connection, which makes UDP efficient for broadcast and multicast types of network transmission.
• Retransmission and flow control: TCP provides segment retransmission and flow control through windowing. UDP has no windowing or retransmission.
• Sequencing: Sequencing of data is a feature of TCP; packets arrive in order at the receiver. There is no sequencing of data in UDP; if ordering is required, it has to be managed by the application layer.
• Acknowledgments: TCP acknowledges received segments; UDP sends no acknowledgments.
• Error checking: TCP provides extensive error-checking mechanisms, because it provides flow control and acknowledgment of data. UDP has only a basic error-checking mechanism using checksums.
• Speed: TCP is comparatively slower than UDP; UDP is faster and simpler.
• Lost packets: Retransmission of lost packets is possible in TCP, but there is no retransmission of lost packets in UDP.
• Applications: TCP is used by HTTP, HTTPS, FTP, SMTP, and Telnet. UDP is used by DNS, DHCP, TFTP, SNMP, RIP, and VoIP.

4.2.2 TCP Header

TCP is the primary transport protocol used to provide reliable, full-duplex connections. The most common use of TCP is to exchange TCP data encapsulated in an IP datagram. Although IP is implemented on both hosts and routers, TCP is typically implemented on hosts only, to provide reliable end-to-end data delivery. The unit of transfer between the TCP software on two machines is called a TCP segment. Segments are exchanged to establish connections, transfer data, send acknowledgements, advertise window sizes, and close connections.

Each TCP segment, encapsulated in an IP datagram, has a TCP header that is 20 bytes long unless options are present. [Figure: a TCP segment consists of the TCP header plus optional TCP data, carried inside an IP datagram after the IP header.] Let's look at the standard header format in more detail. We will come back to the practical use of this format with a simple case study, after covering the process of connecting and disconnecting two TCP applications as well as the data flow-control mechanisms in TCP.

Source and Destination Port Number: Identify the sending and receiving applications. Along with the source and destination IP addresses in the IP header, they identify the connection.

Sequence Number: The sequence number of the first data byte in this segment. If the SYN bit is set, the sequence number is the initial sequence number, and the first data byte is initial sequence number + 1.

Acknowledgement Number: If the ACK bit is set, this field contains the value of the next sequence number the sender of the segment is expecting to receive. Once a connection is established, this is always sent.

Header Length (HLEN): The number of 32-bit words in the TCP header.
This indicates where the data begins. The length of the TCP header is always a multiple of 32 bits.

Flags: There are six flags in the TCP header. One or more can be turned on at the same time.

URG: The URGENT POINTER field contains valid data.
ACK: The acknowledgement number is valid.
PSH: The receiver should pass this data to the application as soon as possible.
RST: Reset the connection.
SYN: Synchronize sequence numbers to initiate a connection.
FIN: The sender is finished sending data.

Window: This is the number of bytes, starting with the one specified by the acknowledgment number field, that the receiver is willing to accept. This is a 16-bit field, limiting the window to 65,535 bytes.

Checksum: This covers both the header and the data. It is calculated by prepending a pseudo-header to the TCP segment; this consists of three 32-bit words which contain the source and destination IP addresses, a byte set to 0, a byte set to 6 (the protocol number for TCP in an IP datagram header), and the segment length. The 16-bit one's complement sum of the pseudo-header plus segment (i.e., the whole is treated as a sequence of 16-bit words) is calculated, and the 16-bit one's complement of this sum is stored in the checksum field. This is a mandatory field that must be calculated and stored by the sender, and then verified by the receiver.

Urgent Pointer: The urgent pointer is valid only if the URG flag is set. This pointer is a positive offset that must be added to the sequence number field of the segment to yield the sequence number of the last byte of urgent data. TCP's urgent mode is a way for the sender to transmit emergency data to the other end. This feature is rarely used.
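The one's complement checksum computation described above can be sketched in Python. This helper is illustrative (it follows the RFC 1071 style of computation over the 16-bit words, without building the pseudo-header), not code from the notes:

```python
def internet_checksum(data: bytes) -> int:
    """16-bit one's complement of the one's complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"                        # pad odd-length data with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]  # big-endian 16-bit word
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return ~total & 0xFFFF
```

A useful property for the receiver: summing the data together with its stored checksum yields a checksum of zero, which is how verification works in practice.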
4.2.3 Connection Establishment and Termination

Connection establishment

To establish a connection, TCP uses a three-way handshake. Before a client attempts to connect with a server, the server must first bind to and listen at a port to open it up for connections; this is called a passive open. Once the passive open is established, a client may initiate an active open. To establish a connection, the three-way (or 3-step) handshake occurs:

SYN: The active open is performed by the client sending a SYN to the server. The client sets the segment's sequence number to a random value A.

SYN-ACK: In response, the server replies with a SYN-ACK. The acknowledgment number is set to one more than the received sequence number (A + 1), and the sequence number that the server chooses for the packet is another random number, B.

ACK: Finally, the client sends an ACK back to the server. The sequence number is set to the received acknowledgement value, i.e., A + 1, and the acknowledgement number is set to one more than the received sequence number, i.e., B + 1.

[Figure: timeline of the three-way handshake between the active participant (client) and the passive participant (server).]

At this point, both the client and server have received an acknowledgment of the connection. Steps 1 and 2 establish the connection parameter (sequence number) for one direction, and it is acknowledged. Steps 2 and 3 establish the connection parameter (sequence number) for the other direction, and it is acknowledged. With these, full-duplex communication is established.

Connection termination

The connection termination phase uses a four-way handshake, with each side of the connection terminating independently. When an endpoint wishes to stop its half of the connection, it transmits a FIN packet, which the other end acknowledges with an ACK. Therefore, a typical tear-down requires a pair of FIN and ACK segments from each TCP endpoint.
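The sequence-number arithmetic of the three steps above can be sketched as a toy simulation (not a real TCP implementation; segments are modeled as simple tuples, and sequence numbers wrap modulo 2^32 as in TCP):

```python
import random

# Toy model of the three-way handshake's sequence-number arithmetic.
# A segment is modeled as (flags, sequence_number, acknowledgement_number).
A = random.randrange(2**32)                        # client's initial sequence number
B = random.randrange(2**32)                        # server's initial sequence number

syn     = ("SYN", A, None)                         # step 1: client -> server
syn_ack = ("SYN-ACK", B, (syn[1] + 1) % 2**32)     # step 2: server acks A + 1
ack     = ("ACK", syn_ack[2], (syn_ack[1] + 1) % 2**32)  # step 3: seq A+1, ack B+1
```

Steps 1 and 2 fix the client-to-server sequence number; steps 2 and 3 fix the server-to-client one, which is why two random initial values and two acknowledgments are needed.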
After both FIN/ACK exchanges are concluded, the side which sent the first FIN before receiving one waits for a timeout before finally closing the connection, during which time the local port is unavailable for new connections; this prevents confusion due to delayed packets being delivered during subsequent connections.

[Figure: TCP connection termination states. The initiator moves from ESTABLISHED through FIN_WAIT_1, FIN_WAIT_2, and TIME_WAIT to CLOSED; the receiver moves through CLOSE_WAIT and LAST_ACK to CLOSED.]

A connection can be "half-open", in which case one side has terminated its end but the other has not. The side that has terminated can no longer send any data into the connection, but the other side can. The terminating side should continue reading the data until the other side terminates as well.

It is also possible to terminate the connection by a 3-way handshake, when host A sends a FIN and host B replies with a FIN & ACK (merely combining two steps into one) and host A replies with an ACK. This is perhaps the most common method.

It is possible for both hosts to send FINs simultaneously; then both just have to ACK. This could possibly be considered a 2-way handshake, since the FIN/ACK sequence is done in parallel for both directions.

4.3 QUEUING DISCIPLINES

We now turn to a problem that spans the entire protocol stack: how to effectively and fairly allocate resources among a collection of competing users. The resources being shared include the bandwidth of the links and the buffers on the routers or switches where packets are queued awaiting transmission. Packets contend at a router for the use of a link, with each contending packet placed in a queue waiting its turn to be transmitted over the link. When too many packets are contending for the same link, the queue overflows and packets have to be dropped. When such drops become common events, the network is said to be congested.
Most networks provide a congestion-control mechanism to deal with just such a situation.

Regardless of how simple or how sophisticated the rest of the resource allocation mechanism is, each router must implement some queuing discipline that governs how packets are buffered while waiting to be transmitted. The queuing algorithm can be thought of as allocating both bandwidth (which packets get transmitted) and buffer space (which packets get discarded). This section introduces two common queuing algorithms, first-in, first-out (FIFO) and fair queuing (FQ), and identifies several variations that have been proposed.

4.3.1 FIFO

The idea of FIFO queuing, also called first-come, first-served (FCFS) queuing, is simple: the first packet that arrives at a router is the first packet to be transmitted. This is illustrated in Figure 6.5(a), which shows a FIFO with "slots" to hold up to eight packets. Given that the amount of buffer space at each router is finite, if a packet arrives and the queue (buffer space) is full, then the router discards that packet, as shown in Figure 6.5(b). This is done without regard to which flow the packet belongs to or how important the packet is. This is sometimes called tail drop, since packets that arrive at the tail end of the FIFO are dropped.

Note that tail drop and FIFO are two separable ideas. FIFO is a scheduling discipline: it determines the order in which packets are transmitted. Tail drop is a drop policy: it determines which packets get dropped. Because FIFO and tail drop are the simplest instances of scheduling discipline and drop policy, respectively, they are sometimes viewed as a bundle, the vanilla queuing implementation. Unfortunately, the bundle is often referred to simply as FIFO queuing, when it should more precisely be called FIFO with tail drop.
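The FIFO-with-tail-drop bundle just described can be sketched in a few lines of Python. This is an illustrative model, not router code; the eight-slot capacity mirrors the figure:

```python
from collections import deque

class FifoTailDrop:
    """FIFO scheduling with a tail-drop policy: at most `capacity` buffered
    packets; a packet arriving at a full queue is simply discarded."""
    def __init__(self, capacity=8):          # eight "slots", as in the figure
        self.capacity = capacity
        self.queue = deque()
        self.dropped = 0

    def enqueue(self, packet):
        if len(self.queue) >= self.capacity:
            self.dropped += 1                # tail drop: discard the arrival
        else:
            self.queue.append(packet)

    def dequeue(self):
        """Transmit the packet at the head of the queue (first in, first out)."""
        return self.queue.popleft() if self.queue else None

q = FifoTailDrop(capacity=8)
for n in range(10):                          # 10 arrivals, only 8 slots
    q.enqueue(n)
first = q.dequeue()                          # first arrival is transmitted first
```

Note that the drop decision ignores which flow a packet belongs to, which is exactly the weakness fair queuing addresses later in this section.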
A simple variation on basic FIFO queuing is priority queuing. The idea is to mark each packet with a priority; the mark could be carried, for example, in the IP header. The routers then implement multiple FIFO queues, one for each priority class. The router always transmits packets out of the highest-priority queue if that queue is nonempty before moving on to the next priority queue. Within each priority, packets are still managed in a FIFO manner. This idea is a small departure from the best-effort delivery model, but it does not go so far as to make guarantees to any particular priority class. It just allows high-priority packets to cut to the front of the line.

The problem with priority queuing, of course, is that the high-priority queue can starve out all the other queues; that is, as long as there is at least one high-priority packet in the high-priority queue, lower-priority queues do not get served. One situation in which priority queuing is used in the Internet is to protect the most important packets, typically the routing updates that are necessary to stabilize the routing tables after a topology change. Often there is a special queue for such packets, which can be identified by the Differentiated Services Code Point (formerly the TOS field) in the IP header.

4.3.2 Fair Queuing

The main problem with FIFO queuing is that it does not discriminate between different traffic sources; it does not separate packets according to the flow to which they belong.
This is a problem at two different levels. At one level, it is not clear that any congestion-control algorithm implemented entirely at the source will be able to adequately control congestion with so little help from the routers; we will suspend judgment on this point for now. At another level, because the entire congestion-control mechanism is implemented at the sources and FIFO queuing does not provide a means to police how well the sources adhere to this mechanism, it is possible for an ill-behaved source (flow) to capture an arbitrarily large fraction of the network capacity.

Considering the Internet again, it is certainly possible for a given application not to use TCP and, as a consequence, to bypass its end-to-end congestion-control mechanism. (Applications such as Internet telephony do this today.) Such an application is able to flood the Internet's routers with its own packets, thereby causing other applications' packets to be discarded.

Fair queuing (FQ) is an algorithm that has been proposed to address this problem. The idea of FQ is to maintain a separate queue for each flow currently being handled by the router. The router then services these queues in a sort of round-robin, as illustrated in Figure 4.6.

[Figure 4.6: Round-robin service of four flows at a router.]

When a flow sends packets too quickly, its queue fills up. When a queue reaches a particular length, additional packets belonging to that flow's queue are discarded. In this way, a given source cannot arbitrarily increase its share of the network's capacity at the expense of other flows.

4.4 CONGESTION AND REASONS FOR CONGESTION

• Too many packets present in (a part of) the network causes packet delay and loss that degrades performance. This situation is called congestion.
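The per-flow queues and round-robin service described above can be sketched as follows. This is a deliberately simplified model (one packet per flow per round, a fixed per-flow queue limit); real fair queuing also accounts for variable packet sizes by computing per-packet finish times:

```python
from collections import deque

class FairQueuing:
    """Simplified fair queuing: one FIFO per flow, served round-robin."""
    def __init__(self, per_flow_limit=4):
        self.flows = {}                      # flow id -> that flow's own FIFO
        self.limit = per_flow_limit

    def enqueue(self, flow, packet):
        q = self.flows.setdefault(flow, deque())
        if len(q) < self.limit:              # a too-fast flow overflows only
            q.append(packet)                 # its own queue; others are safe

    def service_round(self):
        """One round-robin pass: transmit at most one packet from each flow."""
        sent = []
        for flow, q in self.flows.items():
            if q:
                sent.append((flow, q.popleft()))
        return sent

fq = FairQueuing()
for i in range(6):                 # flow1 is aggressive: 6 packets, limit 4
    fq.enqueue("flow1", f"p{i}")
fq.enqueue("flow2", "q0")          # flow2 sends a single packet
first_round = fq.service_round()   # both flows still get one turn each
```

Even though flow1 sent six packets, flow2's single packet is served in the first round, and flow1's excess arrivals were dropped from its own queue only.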
+ The network and transport layers share the responsibility for handling congestion.
+ The most effective way to control congestion is to reduce the load that the transport layer is placing on the network.
+ When streams of packets begin arriving on three or four input lines and all need the same output line, a queue will build up.
+ If there is insufficient memory to hold all of them, packets will be lost. Adding more memory may help, but only up to a point.
+ Low-bandwidth links or routers that process packets more slowly than the line rate can also become congested.
+ In this case, the situation can be improved by directing some of the traffic away from the bottleneck to other parts of the network.

4.4.1 Approaches to Congestion Control

+ Increase the resources or decrease the load.
+ The most basic way to avoid congestion is to build a network that is well matched to the traffic that it carries.
+ These solutions are usually applied on different time scales to either prevent congestion or react to it once it has occurred.
+ In a virtual-circuit network, new connections can be refused if they would cause the network to become congested. This is called admission control.
+ Two difficulties with this approach are:
  + How to identify the onset of congestion.
  + How to inform the source that it needs to slow down.
+ Solutions:
  + Routers can monitor the average load, queueing delay, or packet loss.
  + To tackle the second issue, routers must participate in a feedback loop with the sources.

Traffic-aware routing

+ Consider the network of Fig. 5-23, which is divided into two parts, East and West, connected by two links, CF and EI.
+ Suppose that most of the traffic between East and West is using link CF, and, as a result, this link is heavily loaded with long delays.
+ Including queuing delay in the weight used for the shortest-path calculation will make EI more attractive. After the new routing tables have been installed, most of the East-West traffic will now go over EI, loading this link.
+ Consequently, in the next update, CF will appear to be the shortest path. As a result, the routing tables may oscillate wildly, leading to erratic routing and many potential problems.

Traffic Throttling

+ Bandwidth throttling is the intentional slowing of Internet service by an Internet service provider. It is a reactive measure employed in communication networks in an apparent attempt to regulate network traffic and minimize bandwidth congestion.
+ When congestion is imminent, the network must tell the senders to throttle back their transmissions and slow down.
+ Throttling is an approach that can be used in both datagram networks and virtual-circuit networks.

Choke Packets

+ A choke packet is a specialized packet that is used for flow control along a network.
+ A router detects congestion by measuring the percentage of buffers in use, line utilization, and average queue lengths.
+ When it detects congestion, it sends choke packets across the network to all the data sources associated with the congestion.
+ The sources respond by reducing the amount of data they are sending.

Explicit Congestion Notification

+ Instead of generating additional packets to warn of congestion, a router can tag any packet it forwards (by setting a bit in the packet's header) to signal that it is experiencing congestion.
+ When the network delivers the packet, the destination can note that there is congestion and inform the sender when it sends a reply packet.
+ The sender can then throttle its transmissions as before. This design is called ECN (Explicit Congestion Notification) and is used in the Internet.
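The ECN feedback loop described above can be sketched as a toy simulation. The field names (`ecn_ce`, `echo_congestion`) and the rate-halving response are illustrative assumptions for the example, not the actual IP/TCP encoding:

```python
def forward(packet, congested):
    """A router marks the packet instead of dropping it when congested."""
    if congested:
        packet["ecn_ce"] = True              # "Congestion Experienced" mark
    return packet

def receive_and_reply(packet):
    """The destination echoes the congestion signal back to the sender."""
    return {"echo_congestion": packet.get("ecn_ce", False)}

rate = 10                                    # sender's current rate (arbitrary units)
reply = receive_and_reply(forward({"data": "x"}, congested=True))
if reply["echo_congestion"]:
    rate //= 2                               # sender throttles on the echoed signal
```

The key point is that no extra packets are generated: the congestion signal rides on a data packet one way and on a reply packet the other way.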
[Figure 5.28: Explicit congestion notification. A congested router marks a packet; the destination returns a congestion signal to the sender.]

Load Shedding

+ Load shedding is a fancy way of saying that when routers are being inundated by packets that they cannot handle, they just throw them away. The term comes from the world of electrical power generation.
+ It refers to the action of reducing the load on something, especially the interruption of an electricity supply to avoid excessive load on the generating plant.

Random Early Detection

+ If routers drop packets early, before the situation has become hopeless, there is time for the source to take action before it is too late. A popular algorithm for doing this is called RED (Random Early Detection).
+ To determine when to start discarding, routers maintain a running average of their queue lengths. When the average queue length on some link exceeds a threshold, the link is said to be congested and a small fraction of the packets are dropped at random.
+ RED routers improve performance compared to routers that drop packets only when their buffers are full, though they may require tuning to work well.

4.5 QUALITY OF SERVICE

+ An easy solution to provide good quality of service is to build a network with enough capacity for whatever traffic will be thrown at it. The name for this solution is overprovisioning.
+ Quality-of-service mechanisms let a network with less capacity meet application requirements just as well, at a lower cost.
+ A stream of packets from a source to a destination is called a flow.
+ The needs of each flow can be characterized by four primary parameters: bandwidth, delay, jitter, and loss.
+ Together, these determine the QoS (Quality of Service) the flow requires.
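The RED drop decision can be sketched as follows. This is a minimal model of the idea only: a running (exponentially weighted) average of the queue length, and a small random drop probability once that average crosses a threshold. The parameter values are illustrative, not tuned RED settings:

```python
import random

class RedQueue:
    """Sketch of RED's drop decision on packet arrival."""
    def __init__(self, threshold=5.0, weight=0.2, drop_prob=0.1):
        self.avg = 0.0                  # running average of the queue length
        self.queue = []
        self.threshold = threshold      # average length at which drops begin
        self.weight = weight            # EWMA weight for the new sample
        self.drop_prob = drop_prob      # fraction dropped once congested

    def arrive(self, packet):
        # Update the running average with the instantaneous queue length.
        self.avg = (1 - self.weight) * self.avg + self.weight * len(self.queue)
        if self.avg > self.threshold and random.random() < self.drop_prob:
            return False                # early, probabilistic drop
        self.queue.append(packet)
        return True
```

Because the drop is based on the smoothed average rather than the instantaneous length, brief bursts are absorbed while sustained buildup triggers early drops, giving sources time to slow down.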
+ File transfer applications, including email and video, are not delay sensitive.
+ On the other hand, playing audio or video files from a server does not require low delay. The variation (i.e., standard deviation) in the delay or packet arrival times is called jitter.
+ Interactive applications, such as Web browsing and remote login, are delay sensitive.
+ Audio and video applications can tolerate some lost packets without retransmission, because people do not notice short pauses or occasional skipped frames.

QoS can be achieved with the following mechanisms:
+ Traffic Shaping
+ Packet Scheduling
+ Admission Control
+ Integrated Services
+ Differentiated Services

Traffic Shaping

+ Traffic shaping is a technique for regulating the average rate and burstiness of a flow of data that enters the network.
+ When a flow is set up, the user and the network (i.e., the customer and the provider) agree on a certain traffic pattern (i.e., shape) for that flow. Sometimes this agreement is called an SLA (Service Level Agreement).

Leaky and Token Buckets

+ The leaky bucket algorithm is used to control rate in a network. It is implemented as a single-server queue with constant service time.
+ If the bucket (buffer) overflows, then packets are discarded.
+ In this algorithm the input rate can vary, but the output rate remains constant.
+ No matter the rate at which water enters the bucket, the outflow is at a constant rate, R, when there is any water in the bucket and zero when the bucket is empty.
+ Also, once the bucket is full to capacity B, any additional water entering it spills over the sides and is lost.
+ If a packet arrives when the bucket is full, the packet must either be queued until enough water leaks out to hold it or be discarded.
+ This technique was proposed by Turner (1986) and is called the leaky bucket algorithm.

Token Buckets

+ The token bucket algorithm, compared to the leaky bucket algorithm, allows the output rate to vary depending on the size of the burst. In this algorithm the bucket holds tokens; to transmit a packet, the host must capture and destroy one token.
+ Tokens are generated at the rate of one token every ΔT seconds. Idle hosts can capture and save up tokens (up to the maximum size of the bucket) in order to send larger bursts later.
+ A different but equivalent formulation is to imagine the network interface as a bucket that is being filled, as shown in Fig. 5-28(c). The tap is running at rate R, and the bucket has a capacity of B, as before.
+ Now, to send a packet we must be able to take water (or tokens, as the contents are commonly called) out of the bucket, rather than putting water into the bucket.
+ No more than a fixed number of tokens, B, can accumulate in the bucket, and if the bucket is empty we must wait until more tokens arrive before we can send another packet.
+ This algorithm is called the token bucket algorithm.

Packet Scheduling

+ Algorithms that allocate router resources among the packets of a flow and between competing flows are called packet scheduling algorithms.
+ Three different kinds of resources can potentially be reserved for different flows: bandwidth, buffer space, and CPU cycles.
+ If each flow requires 1 Mbps and the outgoing line has a capacity of 2 Mbps, trying to direct three such flows through that line is not going to work.
+ While modern routers are able to process most packets quickly, some kinds of packets require greater CPU processing.
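The token bucket described above can be sketched in a few lines. This is an illustrative model, not shaping code from the notes: tokens accrue at `rate` per second up to capacity B, the bucket starts full so an idle host can burst, and a packet may be sent only if a token is available:

```python
class TokenBucket:
    """Token bucket sketch: tokens accrue at `rate`/s up to `capacity` (B)."""
    def __init__(self, rate, capacity):
        self.rate = rate              # tokens added per second
        self.capacity = capacity      # B: maximum accumulated burst
        self.tokens = capacity        # start full: an idle host may burst
        self.last = 0.0               # time of the previous check

    def allow(self, size, now):
        """May a packet costing `size` tokens be sent at time `now`?"""
        # Accrue tokens for the time elapsed since the last check, capped at B.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size       # capture and destroy the tokens
            return True
        return False                  # bucket empty: wait for more tokens

tb = TokenBucket(rate=1.0, capacity=3)   # 1 token/s, burst of up to 3 packets
burst = [tb.allow(1, t) for t in (0.0, 0.1, 0.2, 0.3, 0.4)]
```

The first three closely spaced packets pass (the saved-up burst), after which sends are paced by the token generation rate, which is exactly how the token bucket differs from the constant-rate leaky bucket.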
+ Packet scheduling algorithms allocate bandwidth and other router resources by determining which of the buffered packets to send on the output line next.
+ Each router buffers packets in a queue for each output line until they can be sent, and they are sent in the same order that they arrived. This algorithm is known as FIFO (First-In First-Out), or equivalently FCFS (First-Come First-Served).
+ FIFO routers usually drop newly arriving packets when the queue is full. Since the newly arrived packet would have been placed at the end of the queue, this behavior is called tail drop.
+ FIFO scheduling is simple to implement, but it is not suited to providing good quality of service, because when there are multiple flows one flow can easily affect the performance of the other flows.
+ One of the first alternatives was the fair queueing algorithm devised by Nagle (1987).
+ The essence of this algorithm is that routers have separate queues, one for each flow, for a given output line. When the line becomes idle, the router scans the queues round-robin. Sending more packets will not improve this rate.
Figure: Round-robin fair queueing — per-flow input queues scanned round-robin onto the output line.
Admission Control
+ QoS guarantees are established through the process of admission control.
+ The user offers a flow with an accompanying QoS requirement to the network. The network then decides whether to accept or reject the flow, based on its capacity and the commitments it has made to other flows.
+ Any routers on the path without reservations might become congested, and a single congested router can break the QoS guarantee.
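The round-robin scan over per-flow queues can be sketched as below. This is a toy model of Nagle's fair queueing (queue contents are just labels; the function name is ours), showing why a flooding flow cannot starve the others:

```python
from collections import deque

def fair_queue_round_robin(flows):
    """Scan per-flow queues round-robin, emitting one packet from each
    non-empty queue per pass, so no single flow monopolizes the line.
    `flows` is a list of deques, one queue per flow."""
    schedule = []
    while any(flows):               # loop until every queue is drained
        for q in flows:
            if q:
                schedule.append(q.popleft())
    return schedule

# Flow A floods the router while flows B and C send only a little each:
a = deque(["A1", "A2", "A3", "A4"])
b = deque(["B1"])
c = deque(["C1", "C2"])
order = fair_queue_round_robin([a, b, c])
# The flows are interleaved, unlike FIFO, where A's packets would go first.
```

Under FIFO the same arrivals could be sent as A1 A2 A3 A4 B1 C1 C2; round-robin instead gives each active flow one packet per pass.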
+ QoS guarantees for new flows may still be accommodated by choosing a different route for the flow that has excess capacity; this is called QoS routing.
+ It is also possible to split the traffic for each destination over multiple paths, to more easily find excess capacity.
+ A simple method is for routers to choose equal-cost paths and to divide the traffic equally, or in proportion to the capacity of the outgoing links.
+ Given a path, the decision to accept or reject a flow is not a simple matter of comparing the resources (bandwidth, buffers, cycles) requested by the flow with the router's excess capacity in those three dimensions.
Integrated Services
+ The IETF put a lot of effort into devising an architecture for streaming multimedia. This work resulted in over two dozen RFCs, starting with RFCs 2205-2212. The generic name for this work is integrated services.
+ It was aimed at both unicast and multicast applications.
+ An example of the former is a single user streaming a video clip from a news site. An example of the latter is a collection of digital television stations broadcasting their programs as streams of IP packets to many receivers at various locations.
The Resource Reservation Protocol (RSVP)
+ The main part of the integrated services architecture that is visible to the users of the network is RSVP. This protocol is used for making the reservations; other protocols are used for sending the data.
+ RSVP allows multiple senders to transmit to multiple groups of receivers, permits individual receivers to switch channels freely, and optimizes bandwidth use while at the same time eliminating congestion.
+ The protocol uses multicast routing with spanning trees.
+ Each group is assigned a group address.
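The proportional split over equal-cost paths mentioned above amounts to simple arithmetic. A minimal sketch (the function name is ours, and real routers split per-flow rather than per-fraction to avoid reordering):

```python
def split_traffic(load_mbps, link_capacities_mbps):
    """Divide an offered load across equal-cost paths in proportion
    to each outgoing link's capacity."""
    total = sum(link_capacities_mbps)
    return [load_mbps * c / total for c in link_capacities_mbps]

# 6 Mbps of traffic across three equal-cost paths of 1, 2, and 3 Mbps:
shares = split_traffic(6, [1, 2, 3])
# Each path carries load proportional to its capacity: 1, 2, and 3 Mbps.
```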
+ To send to a group, a sender puts the group's address in its packets. The standard multicast routing algorithm then builds a spanning tree covering all group members. The routing algorithm is not part of RSVP.
Figure: senders and receivers connected by a multicast spanning tree.
+ To get better reception and eliminate congestion, any of the receivers in a group can send a reservation message up the tree to the sender.
+ The message is propagated using the reverse path forwarding algorithm discussed earlier.
+ At each hop, the router notes the reservation and reserves the necessary bandwidth.
+ By the time the message gets back to the source, bandwidth has been reserved all the way from the sender to the receiver making the reservation request, along the spanning tree.
+ If insufficient bandwidth is available, the router reports back failure.
Figure 5-35: (a) Host 3 requests a channel to host 1. (b) Host 3 then requests a second channel, to host 2. (c) Host 5 requests a channel to host 1.
+ Once it has been established, packets can flow from 1 to 3 without congestion.
+ A second path is reserved, as illustrated in Fig. 5-35(b). Note that two separate channels are needed from host 3 to router E, because two independent streams are being transmitted.
+ First, dedicated bandwidth is reserved as far as router H. However, this router sees that it already has a feed from host 1, so if the necessary bandwidth has already been reserved, it does not have to reserve any more.
+ Note that hosts 3 and 5 might have asked for different amounts of bandwidth (e.g., if host 3 is playing on a small screen and only wants the low-resolution information), so the capacity reserved must be large enough to satisfy the greediest receiver.
Differentiated Services
+ Differentiated services can be offered by a set of routers forming an administrative domain (e.g., an ISP or a telco).
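The hop-by-hop reservation logic — each router only reserves the extra bandwidth beyond what the group already has, so shared links end up sized for the greediest receiver — can be sketched as below. This is a toy model, not RSVP message processing; the router names echo the figure, and the function name and data layout are ours:

```python
def reserve_along_path(path, request_mbps, reservations, capacity):
    """Propagate a reservation hop by hop toward the sender. At each
    router, reserve only the amount beyond what this group already has;
    report failure if any link lacks sufficient bandwidth."""
    for router in path:
        already = reservations.get(router, 0)
        extra = max(0, request_mbps - already)
        if already + extra > capacity[router]:
            return False            # insufficient bandwidth: failure
        reservations[router] = already + extra
    return True

capacity = {"E": 10, "H": 10, "K": 10, "A": 10}
reservations = {}
# A low-res receiver (2 Mbps) and a high-res receiver (5 Mbps) share
# router H on their paths back to the sender:
ok_lowres = reserve_along_path(["E", "H", "A"], 2, reservations, capacity)
ok_hires = reserve_along_path(["K", "H", "A"], 5, reservations, capacity)
# Router H ends up reserving 5 Mbps — enough for the greediest receiver —
# rather than 2 + 5 = 7 Mbps.
```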
+ The administration defines a set of service classes with corresponding forwarding rules.
+ Two types of differentiated services are defined:
  > Expedited Forwarding
  > Assured Forwarding
Expedited Forwarding
+ The idea behind expedited forwarding is very simple.
+ Two classes of service are available: regular and expedited.
+ The vast majority of the traffic is expected to be regular, but a limited fraction of the packets are expedited.
+ The expedited packets should be able to transit the network as though no other packets were present.
+ In this way they get low-loss, low-delay, and low-jitter service.
Figure: expedited packets experience a traffic-free network.
Assured Forwarding
+ A somewhat more elaborate scheme for managing the service classes is called assured forwarding.
+ The top three classes might be called gold, silver, and bronze.
+ It defines three discard classes for packets that are experiencing congestion: low, medium, and high.
Figure 5-37: A possible implementation of assured forwarding — packets from the source are sorted into four priority classes, then into twelve priority/drop classes, and marked accordingly.
4.5 Tunneling
Handling the general case of making two different networks interwork is exceedingly difficult. However, there is a common special case that is manageable even for different network protocols. This case is where the source and destination hosts are on the same type of network, but there is a different network in between. As an example, think of an international bank with an IPv6 network in Paris, an IPv6 network in London, and connectivity between the offices via the IPv4 Internet. This situation is shown in Fig. 5-40.
Figure 5-40: Tunneling a packet from Paris to London.
How do you pass an 8000-byte packet through a network whose maximum size is 1500 bytes?
If packets on a connection-oriented network transit a connectionless network, they may arrive in a different order than they were sent. That is something the sender likely did not expect. A large packet might be broken up, sent in pieces, and then joined back together.
Figure 5-41: Tunneling a car from France to England.
Consider a person driving her car from Paris to London. Within France, the car moves under its own power, but when it hits the English Channel, it is loaded onto a high-speed train and transported to England through the Chunnel (cars are not permitted to drive through the Chunnel). Effectively, the car is being carried as freight, as depicted in Fig. 5-41. At the far end, the car is let loose on the English roads and once again continues to move under its own power. Tunneling of packets through a foreign network works the same way.
Tunneling is widely used to connect isolated hosts and networks using other networks. The network that results is called an overlay, since it has effectively been overlaid on the base network.
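The car-on-a-train analogy maps directly onto encapsulation: the border router wraps the IPv6 packet in an IPv4 header, and the far end unwraps it. The sketch below models packets as plain dictionaries rather than real wire formats, purely to show the wrap/unwrap symmetry; all names and addresses are illustrative:

```python
def encapsulate(ipv6_packet, v4_src, v4_dst):
    """Wrap an IPv6 packet in an IPv4 'header' so it can cross an
    IPv4-only network — the car loaded onto the train as freight."""
    return {"v4_src": v4_src, "v4_dst": v4_dst, "payload": ipv6_packet}

def decapsulate(ipv4_packet):
    """At the far end of the tunnel, strip the IPv4 wrapper so the
    inner IPv6 packet continues under its own power."""
    return ipv4_packet["payload"]

# The Paris border router wraps the packet; the London router unwraps it:
inner = {"v6_src": "2001:db8::1", "v6_dst": "2001:db8::2", "data": "hello"}
tunneled = encapsulate(inner, v4_src="192.0.2.1", v4_dst="198.51.100.1")
assert decapsulate(tunneled) == inner   # the inner packet is untouched
```

The IPv4 network only ever sees the outer addresses of the two tunnel endpoints; the IPv6 packet rides through unchanged, which is what makes the result an overlay on the base network.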