
UNIT-3

Medium Access Sub Layer


Channel allocation is the process of dividing a single channel among multiple users so that each can carry out its own tasks. The number of users may vary each time the allocation takes place. If there are N users and the channel is divided into N equal-sized subchannels, each user is assigned one portion. If the number of users is small and does not vary over time, Frequency Division Multiplexing can be used, as it is a simple and efficient channel bandwidth allocation technique.
The channel allocation problem can be solved by two schemes: Static Channel Allocation in LANs and MANs, and Dynamic Channel Allocation.
Static Channel Allocation in LANs and MANs:
This is the classical or traditional approach of allocating a single channel among multiple competing users, using Frequency Division Multiplexing (FDM). If there are N users, the frequency band is divided into N equal-sized portions, each user being assigned one portion. Since each user has a private frequency band, there is no interference between users.
• However, it is not suitable when there is a large number of users with variable bandwidth requirements; dividing the channel into a fixed number of chunks is inefficient.
The mean delay for a single channel, and for the same channel divided into N FDM subchannels, is:

T = 1/(U*C - L)

T(FDM) = 1/(U*(C/N) - L/N) = N/(U*C - L) = N*T

where

T = mean time delay,
C = capacity of the channel (bps),
L = arrival rate of frames (frames/s),
1/U = mean bits per frame,
N = number of subchannels,
T(FDM) = mean time delay with FDM.
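The delay formulas above can be checked numerically. The capacity, frame size, and arrival rate below are illustrative figures chosen for this sketch, not values from the text:

```python
def mean_delay(C, bits_per_frame, L):
    """Mean delay T = 1/(U*C - L), where U = 1/bits_per_frame (frames/bit)."""
    U = 1.0 / bits_per_frame
    return 1.0 / (U * C - L)

def mean_delay_fdm(C, bits_per_frame, L, N):
    """Mean delay when the channel is split into N equal FDM subchannels."""
    U = 1.0 / bits_per_frame
    return 1.0 / (U * (C / N) - L / N)

# Assumed example figures:
C = 100e6               # channel capacity: 100 Mbps
bits_per_frame = 10_000 # mean frame size (1/U)
L = 5_000               # arrival rate: 5000 frames/s
N = 10                  # number of subchannels

T = mean_delay(C, bits_per_frame, L)
T_fdm = mean_delay_fdm(C, bits_per_frame, L, N)
# T_fdm comes out exactly N times T: static splitting makes delay N times worse.
```

This makes the point of the formula concrete: chopping the channel into N static pieces multiplies the mean delay by N.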


Dynamic Channel Allocation:
In the dynamic channel allocation scheme, frequency bands are not permanently assigned to the users. Instead, channels are allotted to users dynamically, as needed, from a central pool. The allocation takes a number of parameters into account so that transmission interference is minimized.
This allocation scheme optimises bandwidth usage and results in faster transmissions.
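The "central pool" idea can be sketched as a small data structure. The channel numbers, station names, and lowest-free-channel policy are assumptions made for illustration; real schemes also weigh interference parameters:

```python
class ChannelPool:
    """Minimal sketch of dynamic channel allocation from a central pool."""

    def __init__(self, channels):
        self.free = set(channels)   # channels not currently assigned
        self.in_use = {}            # user -> channel

    def allocate(self, user):
        """Hand out a free channel, or None if the pool is exhausted."""
        if user in self.in_use:
            return self.in_use[user]
        if not self.free:
            return None             # caller must wait for a release
        ch = min(self.free)         # simplest policy: lowest-numbered free channel
        self.free.remove(ch)
        self.in_use[user] = ch
        return ch

    def release(self, user):
        """Return the user's channel to the pool."""
        ch = self.in_use.pop(user, None)
        if ch is not None:
            self.free.add(ch)

pool = ChannelPool(range(4))
a = pool.allocate("station-A")   # gets channel 0
b = pool.allocate("station-B")   # gets channel 1
pool.release("station-A")        # channel 0 returns to the pool
c = pool.allocate("station-C")   # reuses channel 0
```

Unlike static FDM, a released band is immediately available to any other station.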
MULTIPLE ACCESS PROTOCOLS

• When nodes or stations are connected and use a common link, called a multipoint or broadcast link, we need a multiple-access protocol to coordinate access to the link.
• Many protocols have been devised to handle access to a shared link. All of these protocols belong to a sublayer in the data-link layer called media access control (MAC).

In random access methods, each station can transmit whenever it desires. These methods have two features:

• First, there is no scheduled time for a station to transmit. Transmission is random among the stations. That is why these methods are called random access.
• Second, no rules specify which station should send next. Stations compete with one another to access the medium. That is why these methods are also called contention methods.
ALOHA

• ALOHA, the earliest random access method, was developed at the University of Hawaii in the early 1970s. It was designed for a radio (wireless) LAN, but it can be used on any shared medium.
• There are obvious potential collisions in this arrangement. The medium is shared between the stations. When a station sends data, another station may attempt to do so at the same time. The data from the two stations collide and become garbled.
Pure ALOHA

The original ALOHA protocol is called pure ALOHA. This is a simple but elegant protocol. The idea is that each station sends a frame whenever it has a frame to send (multiple access). However, since there is only one channel to share, there is the possibility of collision between frames from different stations.
• Consider four stations (an unrealistic assumption) that contend with one another for access to the shared channel.
• The figure shows that each station sends two frames.
• Some of these frames collide because multiple frames are in contention for the shared channel.
• The period during which frames collide with one another is called the collision duration.
• Therefore, in the figure shown, only two frames survive: one frame from station 1 and one frame from station 3.
• From pure ALOHA we conclude that when the time-out period passes, each station should wait a random amount of time before resending its frame. The randomness helps avoid further collisions. We call this time the backoff time TB.
Vulnerable time: the time during which no other station should transmit if a collision is to be avoided. For pure ALOHA:

Vulnerable time = 2 x Tfr

where Tfr is the frame transmission time.

Backoff time: if the medium is busy, the station should wait for a certain period of time called the backoff time.

(Figures: vulnerable time of pure ALOHA; vulnerable time of slotted ALOHA.)
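The vulnerable-time relations can be computed directly. The 200 kbps bandwidth and 200-bit frame size are assumed example figures:

```python
def frame_time(frame_bits, bandwidth_bps):
    """Tfr: the time needed to transmit one frame."""
    return frame_bits / bandwidth_bps

# Assumed example: a 200 kbps channel carrying 200-bit frames.
Tfr = frame_time(200, 200_000)     # 0.001 s = 1 ms

vulnerable_pure = 2 * Tfr          # pure ALOHA:    2 x Tfr = 2 ms
vulnerable_slotted = Tfr           # slotted ALOHA: 1 x Tfr = 1 ms
```

Halving the vulnerable time is exactly what motivates slotted ALOHA in the next section.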
Slotted ALOHA

• Pure ALOHA has a vulnerable time of 2 x Tfr. This is so because there is no rule that defines when a station can send: a station may send soon after another station has started, or just before another station has finished.
• Slotted ALOHA was invented to improve the efficiency of pure ALOHA. In slotted ALOHA we divide the time into slots of Tfr seconds and force each station to send only at the beginning of a time slot. The vulnerable time in slotted ALOHA is therefore Tfr.

Throughput: it can be proven that the average number of successful transmissions for slotted ALOHA is S = G x e^(-G). The maximum throughput Smax is 0.368, when G = 1.

Space/time model of the collision in CSMA
Vulnerable time in CSMA
Persistence Methods

What should a station do if the channel is busy? What should a station do if the channel is idle? Three methods have been devised to answer these questions:

• 1-persistent method
• Nonpersistent method
• p-persistent method
1-Persistent: The 1-persistent method is simple and straightforward. In this method, after the station finds the line idle, it sends its frame immediately (with probability 1). This method has the highest chance of collision because two or more stations may find the line idle and send their frames immediately.
Nonpersistent: In the nonpersistent method, a station that has a frame to send senses the line. If the line is idle, it sends immediately. If the line is not idle, it waits a random amount of time and then senses the line again. The nonpersistent approach reduces the chance of collision.
p-Persistent: The p-persistent method is used if the channel has time slots with a slot duration equal to or greater than the maximum propagation time.

If the station finds the line idle, it follows these steps:

• 1. With probability p, the station sends its frame.
• 2. With probability q = 1 - p, the station waits for the beginning of the next time slot and checks the line again.
• a. If the line is idle, it goes to step 1.
• b. If the line is busy, it acts as though a collision has occurred and uses the backoff procedure.
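The per-slot decision of the p-persistent method can be sketched as follows. The function name and the simplified "send or wait" outcome are assumptions for illustration; a full MAC would also run the backoff procedure on collisions:

```python
import random

def p_persistent_step(line_idle, p, rng=random.random):
    """One p-persistent decision.

    On an idle line, send with probability p, otherwise defer to the
    next time slot and sense again (probability q = 1 - p).
    On a busy line, always keep waiting and sensing.
    """
    if not line_idle:
        return "wait"
    return "send" if rng() < p else "wait"

# Over many idle slots, roughly a fraction p of decisions are sends.
rng = random.Random(42)
decisions = [p_persistent_step(True, 0.3, rng.random) for _ in range(10_000)]
send_fraction = decisions.count("send") / len(decisions)
```

Lowering p spreads transmissions across slots, which is why p-persistence reduces collisions relative to the 1-persistent method.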
Behavior of three persistence methods
Flow diagram of persistence methods
Collision of the first bits in CSMA/CD
Collision and abortion in CSMA/CD
Energy level during transmission, idealness
or collision
CSMA with Collision Avoidance (CSMA/CA)
Ethernet
Ethernet is the standard way to connect computers on a network over a wired connection. It provides a simple interface for connecting multiple devices, such as computers, routers, and switches. With a single router and a few Ethernet cables, you can create a LAN, which allows all connected devices to communicate with each other.
Ethernet is the technology most commonly used in wired local area networks (LANs). A LAN is a network of computers and other electronic devices that covers a small area such as a room, office, or building. Ethernet is a network protocol that controls how data is transmitted over a LAN. Technically it is referred to as the IEEE 802.3 protocol. The protocol has evolved and improved over time, and can now transfer data at gigabit-per-second speeds.
Difference between Internet and Ethernet

• The Internet is the massive global system that connects computer networks around the world. Millions of private, public, academic, business, and government networks worldwide connect with each other over the Internet.
• Both Ethernet and the Internet are types of networks used to connect computers. However, their scope and range differ. Ethernet is a local area network (LAN), which connects computers in a single location; there are hundreds of thousands of Ethernet networks. The Internet, on the other hand, is a massive wide area network (WAN) that far-away computers can connect to in order to access information. There is only one Internet; it can be described as a network of networks.
History of Ethernet

• In 1973, Metcalfe (at the Xerox Palo Alto Research Center, PARC, in California) wrote a memo describing the Ethernet network system he had invented for interconnecting advanced computer workstations.
• Metcalfe's Ethernet networking system was inspired by the Aloha network.
• The Aloha protocol was very simple: an Aloha station could send whenever it liked, but this led to collisions.
• He developed a new system, CSMA/CD ("listen before talk").
• Metcalfe also developed a more sophisticated backoff algorithm which, in combination with the CSMA/CD protocol, allowed the Ethernet system to function better.
• Metcalfe's first experimental network was called the Alto Aloha Network. In 1973, Metcalfe changed the name to "Ethernet" to make it clear that the system could support any computer.
802.3 MAC frame format
MAC Sublayer
MAC Sublayer: In standard Ethernet, the MAC sublayer governs the operation of the access method. It also frames data received from the upper layer and passes them to the physical layer.

FRAME FORMAT

• The Ethernet frame contains seven fields: preamble, SFD, DA, SA, length or type of protocol data unit (PDU), upper-layer data, and the CRC. Ethernet does not provide any mechanism for acknowledging received frames, making it what is known as an unreliable medium. Acknowledgement must be implemented at the higher layers. The format of the MAC frame is shown in the figure.
• PREAMBLE: The first field of the 802.3 frame contains 7 bytes (56 bits) of alternating 0s and 1s that alert the receiving system to the coming frame and enable it to synchronize its input timing.
• START FRAME DELIMITER (SFD): The second field (1 byte: 10101011) signals the beginning of the frame. The SFD warns the station that this is the last chance for synchronization. The last 2 bits are 11 and alert the receiver that the next field is the destination address.
• DESTINATION ADDRESS (DA): The DA field is 6 bytes and contains the physical address of the destination station that is to receive the packet.
• SOURCE ADDRESS (SA): The SA field is also 6 bytes and contains the physical address of the sender of the packet.
• LENGTH/TYPE: This field is defined as a type field or length field. The original Ethernet used it as a type field, to define the upper-layer protocol using the MAC frame.
• DATA: This field carries data encapsulated from the upper-layer protocols. It is a minimum of 46 and a maximum of 1500 bytes.
• CRC: The last field contains error detection information, in this case a CRC-32. If the CRC the receiver calculates does not match the one carried in the frame, the frame is discarded.
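The field layout above can be exercised with a small frame builder. This is a sketch only: the preamble and SFD are omitted (NIC hardware adds them), the addresses are made-up examples, and byte-ordering details of the real FCS are simplified, though `zlib.crc32` does use the same CRC-32 polynomial the text describes:

```python
import struct
import zlib

def build_frame(dst, src, ethertype, payload):
    """Assemble DA (6 B) + SA (6 B) + length/type (2 B) + data (46-1500 B) + CRC (4 B)."""
    if len(payload) < 46:
        payload = payload + b"\x00" * (46 - len(payload))  # pad to the 46-byte minimum
    if len(payload) > 1500:
        raise ValueError("payload exceeds the 1500-byte maximum")
    header = dst + src + struct.pack("!H", ethertype)
    crc = zlib.crc32(header + payload)                     # error-detection field
    return header + payload + struct.pack("<I", crc)

dst = bytes.fromhex("ffffffffffff")   # broadcast address (example)
src = bytes.fromhex("020000000001")   # locally administered address (example)
frame = build_frame(dst, src, 0x0800, b"hello")
# Minimum frame: 6 + 6 + 2 + 46 + 4 = 64 bytes
```

A receiver would recompute the CRC over everything except the last 4 bytes and compare, discarding the frame on mismatch.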
Ethernet Evolution

Mbps - megabits per second. Mb refers to the download and upload speed: when you're connected to the Internet, your connection speed is displayed in megabits per second.

Figure 13.8 Categories of Standard Ethernet

A standard Ethernet network can transmit data at a rate of up to 10 megabits per second (10 Mbps).
10Base5: Thick Ethernet
 The name 10Base5 is derived from several characteristics of the physical medium. The 10 refers to its transmission speed of 10 Mbit/s. "Base" is short for baseband signaling, and the 5 stands for the maximum segment length of 500 meters (1,600 ft).
 It was the first Ethernet specification, using a bus topology with an external transceiver connected via a tap to a thick coaxial cable.
10Base2: Thin Ethernet
 The second implementation is called 10Base2, Thin Ethernet, or Cheapernet.
 The cable is thinner and more flexible.
 The transceiver is part of the NIC, which is installed inside the station.
 This implementation is more cost-effective than 10Base5 because thin coaxial cable is less expensive than thick coaxial cable, and the tee connections are much cheaper than taps.
10Base-T: Twisted-Pair Ethernet
 The third implementation is called 10Base-T or Twisted-Pair Ethernet.
 It uses a star topology: the stations are connected via two pairs of twisted cable (one for sending and one for receiving) between the station and the hub.
 The maximum length of the twisted cable is defined as 100 m, to minimize the effect of attenuation in the cable.
10Base-F: Fiber Ethernet
 Although there are several types of optical-fiber 10-Mbps Ethernet, the most common is called 10Base-F.
 10Base-F uses a star topology to connect stations to a hub.
 The stations are connected to the hub using two fiber-optic cables.
Fast Ethernet implementations

Fast Ethernet was designed to compete with LAN protocols such as FDDI (Fiber Distributed Data Interface) and Fibre Channel. IEEE created Fast Ethernet under the name 802.3u. Fast Ethernet is backward-compatible with standard Ethernet, but it can transmit data 10 times faster, at a rate of 100 Mbps. Its goals were to:

 Upgrade the data rate to 100 Mbps.
 Make it compatible with standard Ethernet.
 Keep the same 48-bit address.
 Keep the same frame format.
 Keep the same minimum and maximum frame lengths.
Autonegotiation is a new feature added to Fast Ethernet. It was designed for the following purposes:

 To allow incompatible devices to connect to one another.
 To allow devices to have multiple capabilities.
 To allow a station to check a hub's capabilities.
Physical Layer

To be able to handle a 100-Mbps data rate, several changes need to be made at the physical layer.
• Topology
Fast Ethernet is designed to connect two or more stations:
two stations  point-to-point connection
three or more stations  star topology with a hub or a switch at the center.
• Encoding
Manchester encoding would need a 200-Mbaud bandwidth for a data rate of 100 Mbps, making it unsuitable; each Fast Ethernet implementation therefore uses its own, more bandwidth-efficient encoding scheme.
100Base-TX
• 100Base-TX uses two pairs of twisted-pair cable (either Category 5 UTP or STP).
• The MLT-3 (multiline transmission, three-level) scheme is used because it has good bandwidth performance.
• 4B/5B block coding is used to provide bit synchronization by preventing the occurrence of long sequences of 0s and 1s.
100Base-FX
• 100Base-FX uses two pairs of fiber-optic cables.
• 4B/5B block coding is likewise used to provide bit synchronization.
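The synchronization property of 4B/5B can be demonstrated directly. The table below is the standard 4B/5B data-code table (as used by 100Base-TX and FDDI); the encoder function itself is an illustrative sketch:

```python
# Standard 4B/5B data codes: each 4-bit nibble maps to a 5-bit code group
# chosen so that no code begins with more than one 0 or ends with more
# than two 0s, bounding runs of zeros in the transmitted stream.
FOUR_B_FIVE_B = {
    0x0: "11110", 0x1: "01001", 0x2: "10100", 0x3: "10101",
    0x4: "01010", 0x5: "01011", 0x6: "01110", 0x7: "01111",
    0x8: "10010", 0x9: "10011", 0xA: "10110", 0xB: "10111",
    0xC: "11010", 0xD: "11011", 0xE: "11100", 0xF: "11101",
}

def encode_4b5b(data):
    """Encode bytes into a 4B/5B bit string (high nibble first)."""
    out = []
    for byte in data:
        out.append(FOUR_B_FIVE_B[byte >> 4])
        out.append(FOUR_B_FIVE_B[byte & 0x0F])
    return "".join(out)

bits = encode_4b5b(b"\x00\x00")  # even all-zero data...
longest_zero_run = max(len(run) for run in bits.split("1"))
# ...produces frequent transitions, so the receiver never loses bit sync.
```

Across any pair of adjacent code groups, at most three 0s can ever appear in a row, which is exactly the long-run prevention described above.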
Figure 13.23 Gigabit Ethernet implementations

In computer networking, Gigabit Ethernet is a term describing various technologies for transmitting Ethernet frames at a rate of a gigabit per second (1,000,000,000 bits per second), as defined by the IEEE 802.3-2008 standard. Fast Ethernet increased the speed from 10 to 100 megabits per second (Mbit/s); Gigabit Ethernet was the next iteration, increasing the speed to 1000 Mbit/s. The initial standard for Gigabit Ethernet was produced by the IEEE in June 1998 as IEEE 802.3z, and required optical fiber. 802.3z is commonly referred to as 1000BASE-X, where -X refers to either -CX, -SX, -LX, or (non-standard) -ZX.

The goals of Gigabit Ethernet were to:
 Upgrade the data rate to 1 Gbps.
 Make it compatible with standard or Fast Ethernet.
 Use the same address and frame format.
 Keep the same minimum and maximum frame lengths.
 Support autonegotiation as defined in Fast Ethernet.
 1000BASE-SX
1000BASE-SX is a fiber-optic Gigabit Ethernet standard for operation over multi-mode fiber using a 770 to 860 nanometer, near-infrared (NIR) light wavelength. The standard specifies a distance capability between 220 meters and 550 meters.

 1000BASE-CX
1000BASE-CX is an early Gigabit Ethernet standard over shielded balanced copper cable (maximum 25 meters), using DE-9 or 8P8C connectors.
 1000BASE-T
1000BASE-T (also known as IEEE 802.3ab) is a standard for Gigabit Ethernet over copper wiring. Each 1000BASE-T network segment can be a maximum length of 100 meters (330 feet), and must use Category 5 cable or better (including Cat 5e and Cat 6).

 1000BASE-LX
1000BASE-LX is a fiber-optic Gigabit Ethernet standard specified in IEEE 802.3 Clause 38 which uses a long-wavelength laser (1,270-1,355 nm). 1000BASE-LX is specified to work over a distance of up to 5 km.
Gigabit Physical Layer
• The two-wire implementations use an NRZ scheme, but NRZ does not self-synchronize properly. To synchronize bits, particularly at this high data rate, 8B/10B block encoding is used.
• 8B/10B block encoding prevents long sequences of 0s or 1s in the stream.
• In the four-wire implementation it is not possible to have 2 wires for input and 2 for output, because each wire would need to carry 500 Mbps, which exceeds the capacity of Category 5 UTP. As a solution, 4D-PAM5 encoding is used to reduce the bandwidth.
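The bandwidth argument for 4D-PAM5 reduces to simple arithmetic; the figures below are the standard 1000BASE-T parameters (four pairs, 125 Mbaud symbol rate), shown here as a back-of-the-envelope sketch:

```python
# Why 1000BASE-T uses 4D-PAM5 instead of two-way pair splitting.
data_rate = 1_000                 # Mbps total
pairs = 4                         # all four wire pairs used simultaneously
per_pair = data_rate / pairs      # 250 Mbps must travel on each pair

symbol_rate = 125                 # Mbaud per pair (same clock rate as Fast Ethernet)
bits_per_symbol = per_pair / symbol_rate   # => 2 data bits per symbol

# PAM5's five signal levels carry about 2.32 bits/symbol (log2(5)),
# enough for 2 data bits plus redundancy, while the symbol rate stays
# at 125 Mbaud -- within what Category 5 UTP can handle.
```

In contrast, dedicating 2 pairs per direction would require 500 Mbps per pair, which is what the text notes exceeds Category 5 capacity.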


SUMMARY FOR GIGABIT ETHERNET IMPLEMENTATION

Name           | Medium                                                                              | Specified distance
1000BASE-CX    | Shielded balanced copper cable                                                      | 25 meters
1000BASE-KX    | Copper backplane                                                                    | 1 meter
1000BASE-SX    | Multi-mode fiber                                                                    | 220 to 550 meters, dependent on fiber diameter and bandwidth
1000BASE-LX    | Multi-mode fiber                                                                    | 550 meters
1000BASE-LX    | Single-mode fiber                                                                   | 5 km
1000BASE-LX10  | Single-mode fiber using 1,310 nm wavelength                                         | 10 km
1000BASE-EX    | Single-mode fiber at 1,310 nm wavelength                                            | ~40 km
1000BASE-ZX    | Single-mode fiber at 1,550 nm wavelength                                            | ~70 km
1000BASE-BX10  | Single-mode fiber, over single-strand fiber; 1,490 nm downstream, 1,310 nm upstream | 10 km
1000BASE-T     | Twisted-pair cabling (Cat-5, Cat-5e, Cat-6, Cat-7)                                  | 100 meters
1000BASE-TX    | Twisted-pair cabling (Cat-6, Cat-7)                                                 | 100 meters
IEEE STANDARDS:
• In 1985, the Computer Society of the IEEE started a project, called Project 802, to set standards to enable intercommunication among equipment from a variety of manufacturers. Project 802 does not seek to replace any part of the OSI or the Internet model. Instead, it is a way of specifying functions of the physical layer and the data link layer of major LAN protocols.

• The IEEE has subdivided the data link layer into two sublayers: logical link control (LLC) and media access control (MAC). IEEE has also created several physical layer standards for different LAN protocols.
IEEE 802
IEEE 802 refers to a family of IEEE standards dealing with local area networks and
metropolitan area networks. More specifically, the IEEE 802 standards are restricted to
networks carrying variable-size packets. (By contrast, in cell relay networks data is
transmitted in short, uniformly sized units called cells. Isochronous networks, where data
is transmitted as a steady stream of octets, or groups of octets, at regular time intervals,
are also out of the scope of this standard.)
 The services and protocols specified in IEEE 802 map to the lower two layers (Data Link
and Physical) of the seven-layer OSI networking reference model. In fact, IEEE 802 splits
the OSI Data Link Layer into two sub-layers named Logical Link Control (LLC) and Media
Access Control (MAC), so that the layers can be listed like this:
•Data link layer
• LLC Sublayer
• MAC Sublayer
•Physical layer
Working groups:

Name          | Description                                              | Note
IEEE 802.1    | Bridging (networking) and Network Management             |
IEEE 802.2    | LLC                                                      | inactive
IEEE 802.3    | Ethernet                                                 |
IEEE 802.4    | Token bus                                                | disbanded
IEEE 802.5    | Defines the MAC layer for a Token Ring                   | inactive
IEEE 802.6    | MANs (DQDB)                                              | disbanded
IEEE 802.7    | Broadband LAN using Coaxial Cable                        | disbanded
IEEE 802.8    | Fiber Optic TAG                                          | disbanded
IEEE 802.9    | Integrated Services LAN (ISLAN or isoEthernet)           | disbanded
IEEE 802.10   | Interoperable LAN Security                               | disbanded
IEEE 802.11   | Wireless LAN (WLAN) & Mesh (Wi-Fi certification)         |
IEEE 802.12   | 100BaseVG                                                | disbanded
IEEE 802.13   | Unused                                                   | Reserved for Fast Ethernet development
IEEE 802.14   | Cable modems                                             | disbanded
IEEE 802.15   | Wireless PAN                                             |
IEEE 802.15.1 | Bluetooth certification                                  |
IEEE 802.15.2 | IEEE 802.15 and IEEE 802.11 coexistence                  |
IEEE 802.15.3 | High-Rate wireless PAN (e.g., UWB, etc.)                 |
IEEE 802.15.4 | Low-Rate wireless PAN (e.g., ZigBee, WirelessHART, MiWi, etc.) |
IEEE 802.15.5 | Mesh networking for WPAN                                 |
IEEE 802.15.6 | Body area network                                        |
IEEE 802.16   | Broadband Wireless Access (WiMAX certification)          |
IEEE 802.16.1 | Local Multipoint Distribution Service                    |
IEEE 802.17   | Resilient packet ring                                    |
IEEE 802.18   | Radio Regulatory TAG                                     |
IEEE 802.19   | Coexistence TAG                                          |
IEEE 802.20   | Mobile Broadband Wireless Access                         |
IEEE 802.21   | Media Independent Handoff                                |
IEEE 802.22   | Wireless Regional Area Network                           |
IEEE 802.23   | Emergency Services Working Group                         |
IEEE 802.24   | Smart Grid TAG                                           | New (November 2012)
IEEE 802.25   | Omni-Range Area Network                                  | Not yet ratified
IEEE 802.3
 IEEE 802.3 is a working group and a collection of IEEE
standards produced by the working group defining the
physical layer and data link layer's media access control (MAC)
of wired Ethernet. This is generally a local area network
technology with some wide area network applications.
 Physical connections are made between nodes and/or
infrastructure devices (hubs, switches, routers) by various
types of copper or fiber cable.

 802.3 is a technology that supports the IEEE 802.1 network


architecture.

 802.3 also defines LAN access method using CSMA/CD.


Token bus network (802.4)

 Token bus is a network implementing the token ring protocol over a "virtual ring" on a coaxial cable. A token is passed around the network nodes and only the node possessing the token may transmit. If a node doesn't have anything to send, the token is passed on to the next node on the virtual ring. Each node must know the address of its neighbor in the ring, so a special protocol is needed to notify the other nodes of connections to, and disconnections from, the ring.
 Token bus was standardized by IEEE standard 802.4. It is mainly used for industrial applications. The main difference from token ring is that the endpoints of the bus do not meet to form a physical ring.
 Due to difficulties handling device failures and adding new stations to a network, token bus gained a reputation for being unreliable and difficult to upgrade.
 To guarantee packet delay and transmission in the Token bus protocol, a modified Token bus was proposed for Manufacturing Automation Systems and flexible manufacturing systems (FMS).
 A means for carrying Internet Protocol over token bus was developed.
 The IEEE 802.4 Working Group is disbanded and the standard has been withdrawn by the IEEE.
Token Ring/IEEE 802.5
 The Token Ring network was originally developed by IBM in the 1970s. It is still IBM's primary local-area network (LAN) technology. The related IEEE 802.5 specification is almost identical to, and completely compatible with, IBM's Token Ring network.
 Token Ring and IEEE 802.5 networks are basically compatible, although the specifications differ in minor ways. IBM's Token Ring network specifies a star, with all end stations attached to a device called a multistation access unit (MSAU). In contrast, IEEE 802.5 does not specify a topology, although virtually all IEEE 802.5 implementations are based on a star.
Physical Connections
IBM Token Ring network stations are directly connected to MSAUs, which can be wired together to form one large ring (see Figure: MSAUs Can Be Wired Together to Form One Large Ring in an IBM Token Ring Network). Patch cables connect MSAUs to adjacent MSAUs, while lobe cables connect MSAUs to stations.
Token Ring Operation
Token Ring and IEEE 802.5 are the two principal examples of token-passing networks (FDDI is the other). Token-passing networks move a small frame, called a token, around the network. Possession of the token grants the right to transmit. If a node receiving the token has no information to send, it passes the token to the next end station. Each station can hold the token for a maximum period of time.
If a station possessing the token does have information to transmit, it seizes the token, alters 1 bit of the token (which turns the token into a start-of-frame sequence), appends the information that it wants to transmit, and sends this information to the next station on the ring. Therefore, collisions cannot occur in Token Ring networks. If early token release is supported, a new token can be released when frame transmission is complete.
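The token-passing discipline above can be sketched as a tiny simulation. The station names, queues, and one-frame-per-token-hold policy are illustrative assumptions (real Token Ring bounds holding by time, not frame count):

```python
from collections import deque

def token_ring_round(stations):
    """One full rotation of the token around the ring.

    The token visits stations in ring order; a station holding the
    token may transmit one queued frame, otherwise it passes the
    token straight on.
    """
    transmitted = []
    for name, queue in stations:          # token arrives at this station
        if queue:                         # information to send?
            frame = queue.popleft()       # seize the token, send one frame
            transmitted.append((name, frame))
        # Only the token holder ever transmits, so collisions cannot occur.
    return transmitted

ring = [("A", deque(["a1", "a2"])), ("B", deque()), ("C", deque(["c1"]))]
sent = token_ring_round(ring)   # A and C transmit; B just passes the token
```

Contrast this with ALOHA and CSMA: access is deterministic, so throughput does not collapse under heavy load.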
Frame Format
Token Ring and IEEE 802.5 support two basic frame
types: tokens and data/command frames.
Figure: IEEE 802.5 and Token Ring Specify Tokens
and Data/Command Frames.
IEEE 802.6
 IEEE 802.6 is a standard governed by the ANSI for
Metropolitan Area Networks (MAN). It is an
improvement of an older standard (also created by ANSI)
which used the Fiber distributed data interface (FDDI)
network structure. The FDDI-based standard failed due to
its expensive implementation and lack of compatibility with
current LAN standards.

 The IEEE 802.6 standard uses the Distributed


Queue Dual Bus (DQDB) network form. This form
supports 150 Mbit/s transfer rates. It consists of two
unconnected unidirectional buses. DQDB is rated for a
maximum of 160 km before significant signal degradation
over fiber optic cable with an optical wavelength of 1310
nm.

 This standard has also failed, mostly for the same reasons that the FDDI standard failed. Most MANs now use Synchronous Optical Network (SONET) or Asynchronous Transfer Mode (ATM) network designs.
IEEE 802.11:

 In 1990, the IEEE 802 committee formed a new working group, IEEE 802.11, specifically devoted to wireless LANs, with a charter to develop a MAC protocol and physical medium specification.
 The demand for WLANs, at different frequencies and data rates, has exploded.
 Keeping pace with demand, the IEEE 802.11 working group has issued an ever-expanding list of standards.
Within the IEEE 802.11 Working Group , the following
IEEE Standards Association Standard and Amendments exist:
 IEEE 802.11-1997: The WLAN standard was originally 1 Mbit/s and 2 Mbit/s, 2.4 GHz RF
and infrared (IR) standard (1997), all the others listed below are Amendments to this
standard, except for Recommended Practices 802.11F and 802.11T.
 IEEE 802.11a: 54 Mbit/s, 5 GHz standard (1999, shipping products in 2001)
 IEEE 802.11b: Enhancements to 802.11 to support 5.5 Mbit/s and 11 Mbit/s (1999)
 IEEE 802.11c: Bridge operation procedures; included in the IEEE 802.1D standard (2001)
 IEEE 802.11d: International (country-to-country) roaming extensions (2001)
 IEEE 802.11F: Inter-Access Point Protocol (2003) Withdrawn February 2006

 IEEE 802.11g: 54 Mbit/s, 2.4 GHz standard (backwards compatible with b) (2003)
 IEEE 802.11h: Spectrum Managed 802.11a (5 GHz) for European compatibility (2004)
 IEEE 802.11i: Enhanced security (2004)
 IEEE 802.11j: Extensions for Japan (2004)
 IEEE 802.11-2007: A new release of the standard that includes amendments a, b, d, e, g, h,
i, and j. (July 2007)
 IEEE 802.11k: Radio resource measurement enhancements (2008)
