COMPUTER
NETWORKING
노현용
NETWORK LAYER: DATA PLANE
Focus on the data plane to understand the principles behind network-layer service.
[Figure: the five-layer protocol stack (application, transport, network, link, physical) at hosts H1 and H2; routers R1 and R2 implement only the network, link, and physical layers.]
Network layer
Forwarding
move packets from a router's input link to the appropriate router output link
Routing
determine the route taken by packets from source to destination
routing algorithms
NETWORK LAYER: TWO CONTROL-PLANE APPROACHES
Traditional routing algorithms: implemented in routers
Software-defined networking (SDN): implemented in (remote) servers
Example network-layer services: guaranteed delivery, security services
Input port
Terminates the physical link arriving at the router
Performs the link-layer functions needed to interoperate with the link layer at the other end of the link
Key function is the lookup function
Consults the forwarding table to determine the output port to which the arriving packet is sent via the switching fabric
Four Elements of a Router
input ports, switching fabric, output ports, routing processor
Switching fabric (hardware)
Connects the input ports to the output ports
Transfers packets from an input port to an output port within the router
Output port (hardware)
Stores packets received from the switch fabric
After performing link-layer and physical-layer functions, forwards the data onto the output link
Routing processor (software)
Performs control-plane functions
Runs routing protocols, maintains link-state information, and computes the forwarding table
Input port functions
Physical layer: bit-level reception
Link layer: e.g., Ethernet
Decentralized switching
Destination-based forwarding: forward based only on the destination IP address
Generalized forwarding: forward based on any set of header field values (see the sketch below)
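To make the contrast concrete, here is a minimal Python sketch of generalized (match-plus-action) forwarding; the rule format, field names, and table contents are illustrative assumptions, not a real flow-table API such as OpenFlow's.

```python
from ipaddress import ip_address, ip_network

# Illustrative flow table: each rule matches on a destination prefix plus any
# other header fields, and carries an action (the entries are made up).
flow_table = [
    (ip_network("192.168.1.0/24"), {"proto": "tcp", "dport": 80}, ("forward", 2)),
    (ip_network("0.0.0.0/0"),      {"proto": "udp"},              ("drop", None)),
]

def generalized_forward(dst_addr, headers):
    """Return the action of the first rule whose prefix and header fields all match."""
    for prefix, fields, action in flow_table:
        if ip_address(dst_addr) in prefix and all(headers.get(k) == v for k, v in fields.items()):
            return action
    return ("forward", "default")   # fall back to destination-based behaviour

print(generalized_forward("192.168.1.7", {"proto": "tcp", "dport": 80}))  # ('forward', 2)
print(generalized_forward("10.0.0.1",    {"proto": "udp", "dport": 53}))  # ('drop', None)
```

Destination-based forwarding is the special case in which every rule matches only on the destination prefix.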
Destination-based Forwarding
Longest prefix matching
When a destination address matches more than one forwarding-table entry, the router uses the entry with the longest matching prefix (the longest-prefix-matching rule).
Examples (32-bit destination addresses):
11001000 00010111 00010110 10100001 matches a single entry and is forwarded accordingly.
11001000 00010111 00011000 10101010 matches two entries (1 and 2); the longest-prefix-matching rule selects the more specific entry 1.
Longest prefix matching
We'll see why longest prefix matching is used shortly, when we study addressing.
Longest prefix matching is often performed using ternary content addressable memories (TCAMs).
Content addressable: present an address to the TCAM and retrieve the matching entry in one clock cycle, regardless of table size.
Cisco Catalyst: ~1M routing table entries in TCAM
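As a software complement to the TCAM view, here is a small Python sketch of the longest-prefix-matching rule itself, using a linear scan; the prefixes and port numbers are made-up examples, and a line-rate router would use a TCAM or a trie rather than this loop.

```python
from ipaddress import ip_address, ip_network

# Hypothetical forwarding table: (prefix, output port).
forwarding_table = [
    (ip_network("200.23.16.0/21"), 0),
    (ip_network("200.23.24.0/24"), 1),
    (ip_network("200.23.24.0/21"), 2),
    (ip_network("0.0.0.0/0"),      3),   # default route
]

def lookup(dst_addr):
    """Return the output port of the longest prefix that contains dst_addr."""
    best_prefix, best_port = None, None
    for prefix, port in forwarding_table:
        if ip_address(dst_addr) in prefix:
            if best_prefix is None or prefix.prefixlen > best_prefix.prefixlen:
                best_prefix, best_port = prefix, port
    return best_port

print(lookup("200.23.16.1"))   # only the /21 (and the default) match -> port 0
print(lookup("200.23.24.10"))  # /24 and /21 both match; /24 is longer -> port 1
```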
Switching
The router's core function is moving packets from its input links to its output links through the switching fabric.
Switching rate: the rate at which packets can be transferred from inputs to outputs, usually measured as a multiple of the line rate.
Switching via memory
A packet is copied from the input port into system memory and then from memory to the output port, so each packet crosses the shared system bus twice.
If at most B packets per second can be written into or read from memory, the overall forwarding throughput must be less than B/2, and two packets cannot be forwarded at the same time, even if they are destined for different output ports, because only one memory read/write can take place at a time over the shared system bus.
Switching via a bus
Datagrams move from input port memory to output port memory via a shared bus.
Bus contention: switching speed is limited by the bus bandwidth.
32 Gbps bus, Cisco 5600
Output port processing
Output port processing takes packets stored in the output port's memory and transmits them over the output link. This includes selecting and dequeuing packets for transmission, and performing the necessary link-layer and physical-layer transmission functions.
Input port queuing
Queuing occurs at the input port when the switch fabric is not fast enough to forward all arriving packets without delay
- Queuing delay and packet loss due to input buffer overflow
HOL blocking (head-of-the-line blocking)
- A datagram at the front of the queue prevents the packets behind it from being forwarded, even when their output ports are free (see the sketch below)
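A toy Python illustration of HOL blocking, assuming FIFO input queues and at most one packet delivered to each output port per time slot; the queue contents are invented for the example.

```python
from collections import deque

# Each packet is represented only by its destination output port.
input_queues = [
    deque([1, 1]),   # input port 0: both packets want output 1
    deque([1, 2]),   # input port 1: head wants output 1, the next packet wants output 2
]

def one_time_slot(queues):
    """Only queue heads may move, and each output port accepts at most one packet per slot."""
    claimed, delivered = set(), []
    for i, q in enumerate(queues):
        if q and q[0] not in claimed:
            claimed.add(q[0])
            delivered.append((i, q.popleft()))
    return delivered

print(one_time_slot(input_queues))
# [(0, 1)] -- input 1 loses the contention for output 1, so the packet behind its
# head (destined to the idle output 2) is also stuck: head-of-line blocking.
```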
Output port queuing
Buffering (queuing) is required when packets arrive at the output port from the switching fabric faster than they can be transmitted on the output link (the output link speed)
When the network is congested, the buffer fills up and packet loss occurs
A scheduling discipline is required to choose which of the datagrams stored in the buffer to transmit
e.g., priority-based scheduling
How much buffering?
Classic rule of thumb: buffering equal to a typical RTT (say, 250 msec) times the link capacity C
Amount of buffer needed = RTT * C
With a large number N of independent TCP flows traversing the link, the recommended buffering is RTT * C / sqrt(N)
Excessive buffering increases queuing delay
-> Keep the link sufficiently full, but no fuller (see the worked example below)
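A short worked example of the two buffer-sizing rules of thumb above; the capacity, RTT, and flow count are illustrative numbers.

```python
from math import sqrt

C   = 10e9    # link capacity: 10 Gbps
RTT = 0.250   # typical round-trip time: 250 ms
N   = 10_000  # number of independent TCP flows sharing the link

classic = RTT * C               # older rule of thumb: RTT * C
shared  = RTT * C / sqrt(N)     # with many flows: RTT * C / sqrt(N)

print(f"RTT * C           = {classic / 1e9:.2f} Gbit of buffering")   # 2.50 Gbit
print(f"RTT * C / sqrt(N) = {shared / 1e6:.0f} Mbit of buffering")    # 25 Mbit
```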
PACKET SCHEDULING
Priority scheduling
Packets arriving at the output link queue are classified into priority classes upon arrival
Packets in the highest non-empty priority class are transmitted first
Within the same priority class: FCFS
Non-preemptive priority queuing: once a packet's transmission has begun it is not interrupted, so even if a higher-priority packet 4 arrives while packet 2 is being transmitted, packet 4 waits until packet 2's transmission completes
Round robin: cyclically scan the class queues, transmitting one packet from each non-empty class in turn (a scheduling sketch follows below)
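A minimal Python sketch of the two disciplines named above, priority scheduling (served one whole packet at a time, FCFS within a class) and round robin; the class queues and packet names are invented.

```python
from collections import deque

def priority_schedule(queues):
    """Always take the next packet from the highest-priority non-empty class
    (queues[0] is the highest priority); within a class, packets leave FCFS."""
    order = []
    while any(queues):
        for q in queues:
            if q:
                order.append(q.popleft())
                break                      # re-check from the top after every packet
    return order

def round_robin_schedule(queues):
    """Cyclically scan the class queues, sending one packet from each non-empty class."""
    order = []
    while any(queues):
        for q in queues:
            if q:
                order.append(q.popleft())
    return order

print(priority_schedule([deque(["H1", "H2"]), deque(["L1", "L2"])]))
# ['H1', 'H2', 'L1', 'L2']
print(round_robin_schedule([deque(["A1", "A2"]), deque(["B1", "B2"])]))
# ['A1', 'B1', 'A2', 'B2']
```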
Q&A
노현용