
Advance Computer Network
May 2012

1. 2G Network

2G (or 2-G) is short for second-generation wireless telephone technology. Second-generation 2G cellular telecom networks were commercially launched on the GSM standard in Finland by Radiolinja (now part of Elisa Oyj) in 1991.[1] Three primary benefits of 2G networks over their predecessors were that phone conversations were digitally encrypted; 2G systems were significantly more efficient on the spectrum, allowing for far greater mobile phone penetration levels; and 2G introduced data services for mobile, starting with SMS text messages. After 2G was launched, the previous mobile telephone systems were retrospectively dubbed 1G. While radio signals on 1G networks are analog, radio signals on 2G networks are digital. Both systems use digital signaling to connect the radio towers (which listen to the handsets) to the rest of the telephone system. 2G has been superseded by newer technologies such as 2.5G, 2.75G, 3G, and 4G; however, 2G networks are still used in many parts of the world.

1.1. Technology

2G technologies can be divided into TDMA-based and CDMA-based standards depending on the type of multiplexing used. The main 2G standards are:

GSM (TDMA-based), originally from Europe but used in almost all countries on all six inhabited continents. Today it accounts for over 80% of all subscribers around the world. Over 60 GSM operators are also using CDMA2000 in the 450 MHz frequency band (CDMA450).[2]

IS-95, a.k.a. cdmaOne (CDMA-based, commonly referred to as simply CDMA in the US), used in the Americas and parts of Asia. Today it accounts for about 17% of all subscribers globally. Over a dozen CDMA operators have migrated to GSM, including operators in Mexico, India, Australia and South Korea.

PDC (TDMA-based), used exclusively in Japan.

iDEN (TDMA-based), a proprietary network used by Nextel in the United States and Telus Mobility in Canada.

IS-136, a.k.a. D-AMPS (TDMA-based, commonly referred to as simply 'TDMA' in the US), was once prevalent in the Americas, but most operators have migrated to GSM.

2G services are frequently referred to as Personal Communications Service, or PCS, in the United States.

1.2. Capacity

Using digital signals between the handsets and the towers increases system capacity in two key ways:

Digital voice data can be compressed and multiplexed much more effectively than analog voice encodings through the use of various codecs, allowing more calls to be transmitted in the same amount of radio bandwidth.

The digital systems were designed to emit less radio power from the handsets. This meant that cells had to be smaller, so more cells had to be placed in the same amount of space. This was possible because cell towers and related equipment had become less expensive.
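The capacity gain from digital compression can be made concrete with some back-of-the-envelope arithmetic. The figures below are illustrative assumptions, not from the text: an analog AMPS call occupies a 30 kHz FM channel, while GSM packs 8 TDMA voice slots into one 200 kHz carrier.

```python
# Illustrative arithmetic (assumed figures): one analog AMPS call occupies a
# 30 kHz FM channel; GSM fits 8 TDMA voice slots into one 200 kHz carrier.
AMPS_CHANNEL_KHZ = 30
GSM_CARRIER_KHZ = 200
GSM_SLOTS_PER_CARRIER = 8

def calls_per_mhz_analog():
    """Analog calls that fit in 1 MHz of spectrum."""
    return 1000 / AMPS_CHANNEL_KHZ

def calls_per_mhz_gsm():
    """Digital (GSM full-rate) calls that fit in 1 MHz of spectrum."""
    return (1000 / GSM_CARRIER_KHZ) * GSM_SLOTS_PER_CARRIER

print(round(calls_per_mhz_analog(), 1))  # ~33.3 analog calls per MHz
print(round(calls_per_mhz_gsm(), 1))     # 40.0 digital calls per MHz
```

Even before accounting for half-rate codecs and tighter frequency reuse, the digital carrier packs more calls into the same spectrum.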

1.3. Disadvantages

In less populous areas, the weaker digital signal transmitted by a cellular phone may not be sufficient to reach a cell tower. This tends to be a particular problem on 2G systems deployed on higher frequencies, but is mostly not a problem on 2G systems deployed on lower frequencies. National regulations differ greatly among countries, dictating where 2G can be deployed.

Analog has a smooth decay curve, while digital has a jagged, steppy one. This can be both an advantage and a disadvantage. Under good conditions, digital will sound better. Under slightly worse conditions, analog will experience static, while digital has occasional dropouts. As conditions worsen, though, digital will start to fail completely, dropping calls or becoming unintelligible, while analog slowly gets worse, generally holding a call longer and allowing at least some of the audio to be understood.

1.4. Advantages

Digital calls tend to be free of static and background noise, although the lossy compression they use reduces their quality, meaning that the range of sound they convey is reduced. Talking on a digital cell phone, a caller hears less of the tonality of someone's voice, but hears it more clearly.

1.5. Evolution

2G networks were built mainly for voice services and slow data transmission. Some later enhancements, defined in IMT-2000 specification documents, are considered by the general public to be 2.5G or 2.75G services because they are several times slower than present-day 3G service.

1.5.1. 2.5G (GPRS)

2.5G ("second and a half generation") is used to describe 2G systems that have implemented a packet-switched domain in addition to the circuit-switched domain. It does not necessarily provide faster services, because bundling of timeslots is also used for circuit-switched data services (HSCSD). The first major step in the evolution of GSM networks to 3G occurred with the introduction of General Packet Radio Service (GPRS). CDMA2000 networks similarly evolved through the introduction of 1xRTT. The combination of these capabilities came to be known as 2.5G. GPRS could provide data rates from 56 kbit/s up to 115 kbit/s.
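To put the quoted rates in perspective, a rough transfer-time calculation helps. The GPRS figures (56 and 115 kbit/s) come from the text; the EDGE figure below is an assumed typical peak, included only for contrast.

```python
# Rough transfer-time comparison. GPRS rates are from the text; the EDGE rate
# is an assumed typical peak (an illustration, not a quoted figure).
RATES_KBITPS = {
    "GPRS (low)": 56,
    "GPRS (high)": 115,
    "EDGE (assumed peak)": 236.8,
}

def transfer_seconds(size_megabytes, rate_kbitps):
    """Seconds to move size_megabytes at rate_kbitps, ignoring protocol overhead."""
    bits = size_megabytes * 8 * 1_000_000
    return bits / (rate_kbitps * 1000)

for name, rate in RATES_KBITPS.items():
    print(f"1 MB over {name}: {transfer_seconds(1, rate):.0f} s")
```

At the low GPRS rate, one megabyte takes well over two minutes, which is why 2.5G was suited to WAP pages and MMS rather than bulk downloads.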
It can be used for services such as Wireless Application Protocol (WAP) access, Multimedia Messaging Service (MMS), and Internet communication services such as email and World Wide Web access. GPRS data transfer is typically charged per megabyte of traffic transferred, while data communication via traditional circuit switching is billed per minute of connection time, regardless of whether the user is actually utilizing the capacity or is idle. 1xRTT supports bi-directional (uplink and downlink) peak data rates up to 153.6 kbit/s, delivering an average user data throughput of 80-100 kbit/s in commercial networks.[3] It can also be used for WAP, SMS and MMS services, as well as Internet access.

1.5.2. 2.75G (EDGE)

GPRS networks evolved to EDGE networks with the introduction of 8PSK encoding. Enhanced Data rates for GSM Evolution (EDGE), Enhanced GPRS (EGPRS), or IMT Single Carrier (IMT-SC) is a backward-compatible digital mobile phone technology that allows improved data transmission rates as an extension on top of standard GSM. EDGE was deployed on GSM networks beginning in 2003, initially by Cingular (now AT&T) in the United States. EDGE is standardized by 3GPP as part of the GSM family, and it is an upgrade that provides a potential three-fold increase in the capacity of GSM/GPRS networks.

1.6. How 2G Technology Works

Advancement in mobile phone technology has been marked by generations (G). Analog phones belong to the first generation (1G); digital phones followed, marking the second generation (2G). Second-generation mobile phones changed the concept of mobile telephony by introducing higher data transfer rates, increased frequency bands and wireless connectivity. There are three different types of technologies in the second generation: FDMA (Frequency Division Multiple Access), TDMA (Time Division Multiple Access) and CDMA (Code Division Multiple Access). All share the common feature of multiple access, meaning that many users are able to share the same pool of cells; the first part of each name indicates how that sharing is arranged. Because different technologies are used in 2G mobiles, there are different types of phones according to the technology incorporated in them. Let us look at the 2G technologies used in mobiles and how they work.

1.6.1. How 2G (FDMA) Works

Frequency Division Multiple Access (FDMA) lets calls use different frequencies by splitting the band into small channels; each call uses a different frequency. The principle is the same as in radio, where different channels broadcast on separate frequencies, so every radio station is assigned a different frequency within the available band. FDMA is best suited to analog transmission; it can also support digital transmission, but with poorer service.

1.6.2. How 2G (TDMA) Works

Several technologies in different countries of the world fall under the second-generation TDMA standard. These technologies are:

GSM (Global System for Mobile Communication), used nearly worldwide.

iDEN (Integrated Digital Enhanced Network), introduced by Motorola and used in the US and Canada.

IS-136 (Interim Standard 136), also known as D-AMPS (Digital Advanced Mobile Phone System), prevalent in South and North America.

PDC (Personal Digital Cellular), used in Japan.
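The difference between the three multiple-access schemes can be sketched as a toy assignment of users to shared-medium resources. This is a conceptual illustration only (the names and values are invented), not a model of a real air interface.

```python
# Toy sketch: the three 2G multiple-access schemes separate users along
# different dimensions of the shared medium. All names are illustrative.
def fdma_assign(users):
    """FDMA: each user gets its own narrow frequency channel."""
    return {u: f"freq-channel-{i}" for i, u in enumerate(users)}

def tdma_assign(users, slots_per_frame=3):
    """TDMA: users share one channel, taking turns in repeating time slots."""
    return {u: f"time-slot-{i % slots_per_frame}" for i, u in enumerate(users)}

def cdma_assign(users):
    """CDMA: users share time and frequency, separated by unique spreading codes."""
    return {u: f"code-{i}" for i, u in enumerate(users)}

users = ["alice", "bob", "carol"]
print(fdma_assign(users))
print(tdma_assign(users))
print(cdma_assign(users))
```

The common thread is multiple access: every scheme lets several users share one medium, differing only in whether the split is by frequency, time, or code.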

TDMA uses a narrow band, 30 kHz wide and 6.7 milliseconds long, divided into three time slots. A codec (compression/decompression algorithm) compresses the digital information so that it uses less space, leaving room for other users. Dividing this narrow band into three time slots increases the capacity of the frequency band. TDMA underlies both the IS-54 and IS-136 standards. GSM (TDMA-based) is a distinct standard and provides the basis for iDEN and PCS. Being an international standard, it covers many countries of the world; only the SIM needs to be changed to get connected, with no need to buy a new phone. GSM operates in two different bands:

The 900/1800 MHz band covers Europe and Asia.
The 850/1900 MHz band covers the United States.

The first band is widely supported, while the second is limited to the United States. A phone supporting the first band is the better choice for extensive travelling.
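The dual-band point above can be captured in a small compatibility check. The band figures come from the text; real deployments vary by operator, so treat this as a sketch.

```python
# Which GSM bands (MHz) each region uses, per the figures quoted above.
# Real-world deployments vary; this is an illustrative sketch.
GSM_BANDS_MHZ = {
    "Europe": (900, 1800),
    "Asia": (900, 1800),
    "United States": (850, 1900),
}

def phone_works_in(phone_bands, region):
    """True if the phone supports at least one band used in the region."""
    return any(b in GSM_BANDS_MHZ[region] for b in phone_bands)

print(phone_works_in({900, 1800}, "Europe"))         # True
print(phone_works_in({900, 1800}, "United States"))  # False
```

A 900/1800 MHz phone roams across Europe and Asia but fails in the United States, which is why quad-band handsets later became the norm for travellers.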

1.7. Architecture

2. 3G Network
Mobile telephones, the first generation of which was introduced in the mid-1980s, have been constantly evolving since their inception. Today, over two billion mobile phones are in use, and around 80% of the world's population is within reach of a mobile phone signal. Mobile phones have traditionally been used for voice communications, but today they can serve as the platform for a variety of communication outputs, including data and video. 3G is the third generation of mobile phone technology standards. The typical services associated with 3G include wireless voice telephony and broadband wireless data, all in a mobile environment. However, with the capability for high-speed wireless data transfer, 3G has enhanced or made possible a myriad of additional applications such as mobile video, secure mobile e-commerce, location-based services, mobile gaming and audio on demand. For example, using 2.5G (or a slightly better version of second-generation wireless), a three-minute song takes between six and nine minutes to download; using 3G, it can download in 11 to 90 seconds. There are currently almost 100 million 3G wireless subscribers worldwide. The US, with over 200 million mobile subscribers, crossed the 10% mark for 3G penetration for the first time in 2006, while Japan stayed in the lead with over 50% of its subscribers using 3G phones. As 3G adoption accelerates, 3G carriers, handset manufacturers, infrastructure equipment makers, semiconductor OEMs, and 3G application providers stand to gain. Wireless Internet Service Providers (WISPs), carriers without the wherewithal or financial resources to upgrade their networks, and companies that provide services which become standard under 3G (e.g., email access) will be in a position to lose.
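The song-download comparison above can be sanity-checked with simple arithmetic. The file size is an assumption (the text gives none): a three-minute song of roughly 3 MB.

```python
# Sanity check of the download-time comparison, assuming a 3 MB song
# (an assumed size; the text does not specify one).
SONG_MB = 3.0

def download_seconds(rate_kbitps):
    """Seconds to download the song at rate_kbitps, ignoring overhead."""
    return SONG_MB * 8 * 1_000_000 / (rate_kbitps * 1000)

print(round(download_seconds(56)))    # ~429 s (~7 min) at a 2.5G GPRS rate
print(round(download_seconds(2000)))  # 12 s at a 2 Mbit/s 3G rate
```

Roughly seven minutes at a GPRS rate and about twelve seconds at 2 Mbit/s, consistent with the six-to-nine-minute and 11-to-90-second ranges quoted above.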
While the 3G market may definitely be gaining traction, the industry is rapidly approaching a crossroads, where the needs of different market segments can vary substantially, and the potential rewards (and losses) for the different technology vendors and mobile communications operators could be substantial. 3G, short for third generation, is a term used to represent the third generation of mobile telecommunications technology (also called Tri-Band 3G). This is a set of standards used for mobile devices and mobile telecommunication services and networks that comply with the International Mobile Telecommunications-2000 (IMT-2000) specifications of the International Telecommunication Union.[1] 3G finds application in wireless voice telephony, mobile Internet access, fixed wireless Internet access, video calls and mobile TV. Several telecommunications companies market wireless mobile Internet services as 3G, indicating that the advertised service is provided over a 3G wireless network. Services advertised as 3G are required to meet IMT-2000 technical standards, including

standards for reliability and speed (data transfer rates). To meet the IMT-2000 standards, a system is required to provide peak data rates of at least 200 kbit/s (about 0.2 Mbit/s). However, many services advertised as 3G provide higher speed than the minimum technical requirements for a 3G service. Recent 3G releases, often denoted 3.5G and 3.75G, also provide mobile broadband access of several Mbit/s to smartphones and mobile modems in laptop computers. The following standards are typically branded 3G:

The UMTS system, first offered in 2001, standardized by 3GPP, used primarily in Europe, Japan, China (however with a different radio interface) and other regions predominated by GSM 2G system infrastructure. The cell phones are typically UMTS and GSM hybrids. Several radio interfaces are offered, sharing the same infrastructure:

o The original and most widespread radio interface is called W-CDMA.
o The TD-SCDMA radio interface was commercialised in 2009 and is only offered in China.
o The latest UMTS release, HSPA+, can provide peak data rates up to 56 Mbit/s in the downlink in theory (28 Mbit/s in existing services) and 22 Mbit/s in the uplink.

The CDMA2000 system, first offered in 2002, standardized by 3GPP2, used especially in North America and South Korea, sharing infrastructure with the IS-95 2G standard. The cell phones are typically CDMA2000 and IS-95 hybrids. The latest release, EV-DO Rev B, offers peak rates of 14.7 Mbit/s downstream.
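The IMT-2000 floor mentioned above (a peak rate of at least 200 kbit/s) can be expressed as a small classifier over the quoted figures. A sketch only; the quoted rates are peaks, not typical throughput.

```python
# The IMT-2000 qualification floor: peak data rates of at least 200 kbit/s.
IMT2000_MIN_KBITPS = 200

def is_3g_rate(peak_kbitps):
    """True if a peak rate meets the IMT-2000 minimum for 3G branding."""
    return peak_kbitps >= IMT2000_MIN_KBITPS

# Peak rates quoted in this document (kbit/s).
PEAK_RATES_KBITPS = {
    "GPRS": 115,                    # 2.5G: below the floor
    "HSPA+ downlink": 56_000,
    "EV-DO Rev B downlink": 14_700,
}
for name, rate in PEAK_RATES_KBITPS.items():
    print(name, "meets IMT-2000 floor:", is_3g_rate(rate))
```

Note that many services advertised as 3G comfortably exceed this minimum, as the text observes.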

The above systems and radio interfaces are based on spread-spectrum radio transmission technology. While the GSM EDGE standard ("2.9G"), DECT cordless phones and Mobile WiMAX standards formally also fulfil the IMT-2000 requirements and are approved as 3G standards by the ITU, these are typically not branded 3G, and are based on completely different technologies. A new generation of cellular standards has appeared approximately every ten years since 1G systems were introduced in 1981/1982. Each generation is characterized by new frequency bands, higher data rates and non-backwards-compatible transmission technology. The first release of the 3GPP Long Term Evolution (LTE) standard does not completely fulfil the ITU 4G requirements, called IMT-Advanced. First-release LTE is not backwards compatible with 3G; it is a pre-4G or 3.9G technology, though sometimes branded 4G by service providers. Its evolution, LTE Advanced, is a 4G technology. WiMAX is another technology verging on, or marketed as, 4G.

2.1. Features

2.1.1. Data Rates

The ITU has not provided a clear definition of the data rate users can expect from 3G equipment or providers. Thus users sold 3G service may not be able to point to a standard and say that the rates it specifies are not being met. While stating in commentary that "it is expected that IMT-2000 will provide higher transmission rates: a minimum data rate of 2 Mbit/s for stationary or walking users, and 384 kbit/s in a moving vehicle,"[20] the ITU does not actually clearly specify minimum or average rates or which modes of the interfaces qualify as 3G, so various rates are sold as 3G intended to meet customers' expectations of broadband data.

2.1.2. Security

3G networks offer greater security than their 2G predecessors. By allowing the UE (User Equipment) to authenticate the network it is attaching to, the user can be sure the network is the intended one and not an impersonator.
3G networks use the KASUMI block cipher instead of the older A5/1 stream cipher. However, a number of serious weaknesses in the KASUMI cipher have been identified.[21] In addition to the 3G network infrastructure security, end-to-end security is offered when application frameworks such as IMS are accessed, although this is not strictly a 3G property.

2.1.3. Applications of 3G

The bandwidth and location information available to 3G devices give rise to applications not previously available to mobile phone users. Some of these applications are:

Mobile TV
Video on demand
Video conferencing
Telemedicine
Location-based services
Global Positioning System (GPS)

2.2. 3G Architecture

The UMTS network architecture uses the WCDMA air interface together with an evolution of the GSM core network. It consists of three domains that interact with each other: the Core Network (CN), the UMTS Terrestrial Radio Access Network (UTRAN), and the User Equipment (UE), also called the Mobile Station (MS).

The Core Network is divided into circuit-switched and packet-switched domains. Circuit-switched elements include the Mobile services Switching Centre (MSC), the interface that handles circuit-switched data for the MS; the Gateway MSC (GMSC), the gateway between UMTS and external circuit-switched networks such as the PSTN; and the Visitor Location Register (VLR). Packet-switched elements are the Serving GPRS Support Node (SGSN), which performs a role similar to the MSC but for packet-switched services, and the Gateway GPRS Support Node (GGSN), the gateway that connects UMTS to external packet-switched networks. Some other network elements, such as the HLR and AuC, are shared by both domains. The CN architecture can change when a new service or feature is introduced. Data transfer within the core network is supported by the GGSN (gateway GPRS support node) and SGSN (serving GPRS support node). The GGSN additionally handles mobility arrangements and connects to various network elements through standard interfaces; it is the physical interface to external packet data networks (e.g., the Internet). The SGSN handles packet delivery to and from mobile terminals; each SGSN delivers packets to the terminals in its service area. The GGSN and SGSN can send data at speeds up to 2 Mbit/s.

UTRAN consists of one or more Radio Network Systems (RNS), each comprising a radio network controller, called the Radio Network Controller (RNC), multiple Node Bs (UMTS base stations) and the User Equipment. UTRAN is connected to the Core Network (CN) via the Iu interface and uses the Iub interface to control the Node Bs, while the Iur interface connects RNCs to each other to manage soft handover between them. The RNC controls the radio resources of several Node Bs; its function is similar to that of the BSC in GSM. The RNC also plays an important role in controlling UTRAN radio resources, such as power control (PC) and handover control (HC), part of which resides at the RNC. The base station in UMTS is called a Node B. A Node B plays the same role as a base station (BS) in GSM: it is the unit for radio transmission and reception for a cell. The Node B implements the air interface processing (WCDMA), including channel coding, interleaving, rate adaptation and spreading. The Node B also supports softer handover and power control. The combination of an RNC and its Node Bs is called a Radio Network Subsystem (RNS), linked by the Iub interface. Unlike its GSM equivalent, the Abis interface, the Iub interface is an open standard, so each Node B and RNC may be made by different manufacturers. Whereas in GSM there is no connection between BSCs, in UMTS the opposite holds: each RNC is connected to the other RNCs through the Iur interface, and UTRAN is connected to the core network via the Iu interface. The User Equipment (UE) follows the same principles as the GSM Mobile Station (MS) and has a user identity module similar to the GSM SIM. The UE consists of two parts, the Mobile Equipment (ME) and the UMTS Subscriber Identity Module (USIM), connected by the Cu interface. The ME is the device for radio transmission, while the USIM is a card that contains the user's identity and personal information. The UE connects to the network over the Uu interface, which is the WCDMA air interface.
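The element-and-interface structure described above (UE, Node B, RNC, Core Network, linked by Uu, Iub, Iur and Iu) can be summarised in a small lookup table. This is a mnemonic sketch, not a standard API.

```python
# The UMTS reference points described above, as an undirected lookup table:
# UE -Uu- Node B -Iub- RNC -Iu- Core Network, with Iur linking RNC pairs
# for soft handover. Element names are shorthand, not 3GPP identifiers.
INTERFACES = {
    ("UE", "NodeB"): "Uu",
    ("NodeB", "RNC"): "Iub",
    ("RNC", "RNC"): "Iur",
    ("RNC", "CoreNetwork"): "Iu",
}

def interface_between(a, b):
    """Return the interface name linking two elements, or None if none exists."""
    return INTERFACES.get((a, b)) or INTERFACES.get((b, a))

print(interface_between("UE", "NodeB"))         # Uu
print(interface_between("RNC", "CoreNetwork"))  # Iu
```

Note there is no direct UE-to-RNC interface: all user traffic passes through a Node B, mirroring the MS-BTS-BSC chain in GSM.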

3. Medium Access Protocols


Multiple network nodes often share the same medium. For example, several computers might connect to a wireless access point or plug into an Ethernet hub. We need a protocol to decide which one can access the medium when more than one has information to send at the same time: a medium access control (MAC) protocol. Some MAC protocols are:

CSMA/CA, Carrier Sense Multiple Access/Collision Avoidance: Listen to see if the channel is in use. If it is, back off for a time and retry later. ("Carrier sense" implies that a node can tell when another device is using the communication medium, like a telephone busy signal.)

CSMA/CD, Carrier Sense Multiple Access/Collision Detection: If a collision is detected after transmitting a frame, back off for a time and retransmit it. (Collisions are detected when a node receives a garbled frame.)

Polling: The network has a master node and two or more slaves. The master node queries each slave in turn to see whether it has data to transmit. If it does, it transmits the data; if not, the master moves on to the next slave node.

Token ring: A token (a pattern of bits) is passed from one node to the next. A node can only transmit when it has the token. This is similar to polling, but the nodes are equal peers; there is no master node.

RTS/CTS, Request to Send/Clear to Send:

A node, say A, that wants to transmit first sends a short request-to-send (RTS) frame to the base station. After the base station gives A permission to transmit (clear to send, CTS), it tells B to hold off for a short time.

4. Unicast and Multicast Routing Protocols

A metric is the cost assigned for passage of a packet through a network. A router consults its routing table to determine the best path for a packet. An autonomous system (AS) is a group of networks and routers under the authority of a single administration.

RIP and OSPF are popular interior routing protocols used to update routing tables in an AS. RIP is based on distance vector routing, in which each router shares, at regular intervals, its knowledge about the entire AS with its neighbours. A RIP routing table entry consists of a destination network address, the hop count to that destination, and the IP address of the next router.

OSPF divides an AS into areas, defined as collections of networks, hosts, and routers. OSPF is based on link state routing, in which each router sends the state of its neighborhood to every other router in the area. A packet is sent only if there is a change in the neighborhood. OSPF defines four types of links (networks): point-to-point, transient, stub, and virtual. Five types of link state advertisements (LSAs) disperse information in OSPF: router link, network link, summary link to network, summary link to AS boundary router, and external link. A router compiles all the information from the LSAs it receives into a link state database. This database is common to all routers in an area. An LSA is a multifield entry in a link state update packet.

BGP is an inter-autonomous-system routing protocol used to update routing tables. BGP is based on a routing method called path vector routing. In this method, the ASs through which a packet must pass are explicitly listed. There are four types of BGP messages: open, update, keep-alive, and notification.

The Internet Group Management Protocol (IGMP) helps multicast routers create and update a list of loyal members related to a router interface. The three IGMP message types are the query message, the membership report, and the leave report. A host or router can have membership in a group. A host maintains a list of processes that have membership in a group. A router maintains a list of group IDs that shows group membership for each interface.

Multicasting applications include distributed databases, information dissemination, teleconferencing, and distance learning. For efficient multicasting we use a shortest-path spanning tree to represent the communication path. In a source-based tree approach to multicast routing, the source-group combination determines the tree. In a group-based tree approach to multicast routing, the group alone determines the tree.

DVMRP is a multicast routing protocol that uses distance vector routing to create a source-based tree. In reverse path forwarding (RPF), the router forwards only the packets that have traveled the shortest path from the source to the router. Reverse path broadcasting (RPB) creates a shortest-path broadcast tree from the source to each destination. It guarantees that each destination receives one and only one copy of the packet. Reverse path multicasting (RPM) adds pruning and grafting to RPB to create a multicast shortest-path tree that supports dynamic membership changes. MOSPF is a multicast protocol that uses multicast link state routing to create a source-based least-cost tree.
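The distance-vector update at the heart of RIP can be sketched in a few lines. This is a toy model under stated simplifications: tables map destinations to (hop count, next router) pairs, a neighbour's advertisement adds one hop, and RIP's 15-hop limit marks unreachable routes. Real RIP also handles timeouts, split horizon and triggered updates.

```python
# Toy RIP-style distance-vector update: merge a neighbour's advertised table,
# counting one extra hop through that neighbour, and keep only improvements.
def rip_update(table, neighbour, advertised, max_hops=15):
    """table maps destination -> (hop_count, next_router); advertised maps
    destination -> hop_count as received from `neighbour`."""
    updated = dict(table)
    for dest, hops in advertised.items():
        via = hops + 1  # one extra hop to reach the neighbour itself
        if via <= max_hops and via < updated.get(dest, (float("inf"), None))[0]:
            updated[dest] = (via, neighbour)
    return updated

mine = {"net-A": (1, "direct")}
print(rip_update(mine, "router-B", {"net-A": 3, "net-C": 1}))
# net-A keeps its shorter existing route; net-C is learned via router-B.
```

Each router repeats this merge for every periodic advertisement it receives, which is exactly the "share your knowledge with your neighbours at regular intervals" behaviour described above.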

The Core-Based Tree (CBT) protocol is a multicast routing protocol that uses a core as the root of the tree. PIM-DM is a source-based routing protocol that uses RPF and pruning and grafting strategies to handle multicasting. PIM-SM is a group-shared routing protocol that is similar to CBT and uses a rendezvous point as the source of the tree. For multicasting between two noncontiguous multicast routers, we use a multicast backbone (MBONE) to enable tunneling.

5. TCP/IP over ATM

The protocol for classical IP over ATM (sometimes abbreviated as CLIP/ATM) is a well-established standard spelled out in RFC 1577 and subsequent documents. Windows 2000 provides a full implementation of this standard. The IP over ATM approach provides several attractive advantages over ELAN solutions. The most obvious advantages are its ability to support QoS interfaces, its lower overhead (as it requires no MAC header), and its lack of a frame size limit. All of these features are discussed in the following sections.

5.1. IP over ATM Architecture

IP over ATM is a group of components that do not necessarily reside in one place, and, in this case, the services are not usually on an ATM switch. In some cases, switch vendors provide some IP over ATM support, but not always. (For the purposes of this discussion, it is assumed the IP over ATM server services reside on a Windows 2000 server.) The core components required for IP over ATM are roughly the same as those required for LANE, as both approaches require the mapping of a connectionless medium to a connection-oriented medium, and vice versa. In IP over ATM, these services are provided by an IP ATMARP server for each IP subnet. This server maintains a database of IP and ATM addresses, and provides configuration and broadcast services, as described in the following section.

5.2. IP over ATM Components

IP over ATM is a very small layer between the ATM protocol and the TCP/IP protocol. As with LANE, the client emulates standard IP to the TCP/IP protocol at its top edge while simultaneously issuing native ATM commands to the ATM protocol layers underneath. IP over ATM is often preferred to LANE because it is faster. One key reason for this performance advantage is that IP over ATM adds almost no additional header information to packets as they are handed down the stack. Once it has established a connection, the IP over ATM client can generally transfer data without modification.

As with LANE, IP over ATM is handled by two main components: the IP over ATM server and the IP over ATM client. The IP over ATM server is composed of an ATMARP server and the Multicast Address Resolution Service (MARS). The ATMARP server provides services to map network layer IP unicast addresses to ATM addresses, while MARS provides similar services for broadcast and multicast addresses. Both services maintain IP address databases just as LANE services do. The IP over ATM server can reside on more than one computer, but the ATMARP and MARS databases cannot be distributed. You can have one IP over ATM server handle ATMARP traffic, and one handle MARS. If, however, you divided the ATMARP server between servers, it would effectively create two different IP networks. All IP over ATM clients in the same logical IP subnet (LIS) need to be configured to use the same ATMARP server. Traditional

routing methods are used to route between logical IP subnets, even if they are on the same physical network. Windows 2000 includes fully integrated ATMARP and MARS servers. These services are described in more detail in the following sections.

5.3. IP over ATM Operation

IP over ATM faces the same problems, and relies on the same basic tools and fixes, as LANE. In particular, it faces the issues of address resolution and broadcasting. In normal ATM, SVC connections are established by sending a connection request containing the ATM address of the destination endpoint to the ATM switch. Before an IP endpoint can create an SVC in this manner, the endpoint must resolve the IP address of the destination to an ATM address. Normally, when an Ethernet host needs to resolve an IP address to an Ethernet MAC address, it uses an ARP broadcast query frame. As explained earlier, hardware broadcasting is not done in ATM. Instead, the Address Resolution Protocol (ARP) of the ATMARP server resolves IP addresses to ATM addresses. An ATM endpoint wishing to resolve an IP address sends an ATMARP request to the ATMARP server for its LIS. The ATMARP request contains the sender's ATM address and IP address and the requested IP address. If the ATMARP server knows the requested IP address, it sends back an ATMARP response containing the requested ATM address. If the requested IP address is not found, the ATMARP server sends back a negative ATMARP reply, unlike the procedure in an ELAN, which would send an unresolved address to the LANE BUS. This behavior allows an ARP requestor to distinguish between an unknown address and a nonfunctioning ATMARP server. The end result is a three-way mapping from the IP address to an ATM address to a VPI/VCI pair. The IP address and ATM address are required to create a VC. The IP address and VPI/VCI are then required to send the subsequent cells containing data across the VC. An ATM endpoint creates SVCs to other ATM endpoints within its LIS. For an ATM endpoint to resolve an arbitrary IP address, it must be configured with the ATM address of the ATMARP server in its LIS. Upon startup, an ATM endpoint establishes a VC with the ATMARP server using ATM signaling.
As soon as the VC is opened with the server, the server sends the ATM endpoint an InATMARP request. When the ATM endpoint sends the response, the ATMARP server has the ATM and IP addresses of the new ATM endpoint. In this way, the ATMARP server builds its table of ATM-to-IP address mappings.

5.4. IP over ATM Client Initialization

In Windows 2000, IP over ATM does not require the use of an Inverse ARP. Instead, the client goes directly to the server to register itself on the network. Since the process is automatic, no human intervention is required to initialize the client. Depending on whether the client address is mapped to a static address or a dynamic address, the procedure varies between the following two approaches.

5.5. With a Static IP Address

The following example details each step in establishing an IP over ATM connection for a single IP over ATM client with a static IP address. First, the client initializes and gets an ATM address from the ATM switch. The client then connects to the ATM ARP/MARS server and

joins the broadcast group. The client's IP-to-ATM address mapping is also added to the ATMARP server database. The client is now ready to contact other hosts and begin data transfer.

5.6. With DHCP

Establishing an IP over ATM connection for a single IP over ATM client using Dynamic Host Configuration Protocol (DHCP) is similar but not identical. First the client initializes and gets an ATM address from the ATM switch. Then the client connects to the ATM ARP/MARS server and joins the broadcast group. The client connects to the multicast server (MCS) and sends a DHCP request. The MCS broadcasts the DHCP request to all members of the broadcast group. When the DHCP server receives the request, it sends a DHCP reply to the MCS. The MCS then broadcasts the reply to the broadcast group. The client receives the DHCP reply and then registers its IP and ATM addresses with the ATM ARP/MARS server. The client is now ready to contact other hosts and begin data transfer. For more information about DHCP, see "Dynamic Host Configuration Protocol" in the TCP/IP Core Networking Guide.
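The relay sequence above can be modeled as a toy simulation: because ATM has no hardware broadcast, the multicast server (MCS) relays the client's DHCP messages to every member of the broadcast group. All class names, message strings, and the lease value here are illustrative assumptions; a real MCS operates over ATM point-to-multipoint VCs:

```python
# Toy model of the DHCP-over-ATM exchange: the MCS stands in for the
# missing hardware broadcast by relaying messages to all group members.
class MCS:
    def __init__(self):
        self.group = []                      # members of the broadcast group

    def join(self, member):
        self.group.append(member)

    def broadcast(self, message, sender):
        """Relay a message to every member except the sender; collect replies."""
        replies = []
        for member in self.group:
            if member is not sender:
                reply = member.receive(message)
                if reply is not None:
                    replies.append(reply)
        return replies

class DHCPServer:
    def receive(self, message):
        if message == "DHCPDISCOVER":
            return "DHCPOFFER 10.0.5.42"     # hypothetical offered lease
        return None

class Client:
    def receive(self, message):
        return None                          # clients ignore others' requests

mcs = MCS()
client, server = Client(), DHCPServer()
mcs.join(client)
mcs.join(server)
replies = mcs.broadcast("DHCPDISCOVER", sender=client)
```

The DHCP reply would then travel back through the MCS to the group, after which the client registers its IP and ATM addresses with the ATM ARP/MARS server as described above.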

6. Internet Architecture
6.1. Introduction

The Internet system consists of a number of interconnected packet networks supporting communication among host computers using the Internet protocols. These protocols include the Internet Protocol (IP), the Internet Control Message Protocol (ICMP), the Internet Group Management Protocol (IGMP), and a variety of transport and application protocols that depend upon them. As was described in Section [1.2], the Internet Engineering Steering Group periodically releases an Official Protocols memo listing all the Internet protocols.

All Internet protocols use IP as the basic data transport mechanism. IP is a datagram, or connectionless, internetwork service and includes provision for addressing, type-of-service specification, fragmentation and reassembly, and security. ICMP and IGMP are considered integral parts of IP, although they are architecturally layered upon IP. ICMP provides error reporting, flow control, first-hop router redirection, and other maintenance and control functions. IGMP provides the mechanisms by which hosts and routers can join and leave IP multicast groups. Reliable data delivery is provided in the Internet protocol suite by Transport Layer protocols such as the Transmission Control Protocol (TCP), which provides end-to-end retransmission, resequencing, and connection control. Transport Layer connectionless service is provided by the User Datagram Protocol (UDP).

The Internet is by definition a meta-network, a constantly changing collection of thousands of individual networks intercommunicating with a common protocol. The Internet's architecture is described in its name, a short form of the compound word "inter-networking". This architecture is based on the very specification of the standard TCP/IP protocol, designed to connect any two networks which may be very different in internal hardware, software, and technical design.
Once two networks are interconnected, communication with TCP/IP is enabled end-to-end, so that any node on the Internet has the near magical ability to communicate with any other, no matter where they are. This openness of design has enabled the Internet architecture to grow to a global scale.

In practice, the Internet technical architecture looks a bit like a multi-dimensional river system, with small tributaries feeding medium-sized streams feeding large rivers. For example, an individual's access to the Internet is often from home over a modem to a local Internet service provider who connects to a regional network connected to a national network. At the office, a desktop computer might be connected to a local area network with a company connection to a corporate intranet connected to several national Internet service providers. In general, small local Internet service providers connect to medium-sized regional networks, which connect to large national networks, which then connect to very large bandwidth networks on the Internet backbone. Most Internet service providers have several redundant network cross-connections to other providers in order to ensure continuous availability.

The companies running the Internet backbone operate very high bandwidth networks relied on by governments, corporations, large organizations, and other Internet service providers. Their technical infrastructure often includes global connections through underwater cables and satellite links to enable communication between countries and continents. As always, a larger scale introduces new phenomena: the number of packets flowing through the switches on the backbone is so large that it exhibits the kind of complex non-linear patterns usually found in natural, analog systems like the flow of water or the development of the rings of Saturn.

Each communication packet goes up the hierarchy of Internet networks as far as necessary to get to its destination network, where local routing takes over to deliver it to the addressee. In the same way, each level in the hierarchy pays the next level for the bandwidth it uses, and then the large backbone companies settle up with each other. Large Internet service providers price bandwidth by several methods, such as a fixed rate for constant availability of a certain number of megabits per second, or by a variety of use-based methods that amount to a cost per gigabyte.
Due to economies of scale and efficiencies in management, bandwidth cost drops dramatically at the higher levels of the architecture.

6.2. Architectural Assumptions

The current Internet architecture is based on a set of assumptions about the communication system. The assumptions most relevant to routers are as follows:

The Internet is a network of networks. Each host is directly connected to some particular network(s); its connection to the Internet is only conceptual. Two hosts on the same network communicate with each other using the same set of protocols that they would use to communicate with hosts on distant networks.

Routers do not keep connection state information. To improve the robustness of the communication system, routers are designed to be stateless, forwarding each IP packet independently of other packets. As a result, redundant paths can be exploited to provide robust service in spite of failures of intervening routers and networks. All state information required for end-to-end flow control and reliability is implemented in the hosts, in the transport layer or in application programs. All connection control information is thus co-located with the end points of the communication, so it will be lost only if an end point fails. Routers control message flow only indirectly, by dropping packets or increasing network delay. Note that future protocol developments may well end up putting some more state into routers. This is especially likely for multicast routing, resource reservation, and flow based forwarding.
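Stateless, per-packet forwarding can be sketched as a longest-prefix match against a routing table: nothing survives between packets, so any packet can take any surviving path. The table entries below are made-up examples, not a real configuration:

```python
# Minimal stateless forwarding: each packet is matched independently
# against the table; the most specific (longest) prefix wins.
import ipaddress

ROUTES = [
    (ipaddress.ip_network("10.0.0.0/8"), "eth1"),
    (ipaddress.ip_network("10.1.0.0/16"), "eth2"),
    (ipaddress.ip_network("0.0.0.0/0"), "eth0"),   # default route
]

def forward(dst_ip):
    """Pick the outgoing interface by longest-prefix match. No per-flow
    or per-connection state is consulted or stored."""
    addr = ipaddress.ip_address(dst_ip)
    matches = [(net.prefixlen, iface) for net, iface in ROUTES if addr in net]
    return max(matches)[1]   # tuple comparison: longest prefix first
```

Because each call is independent, a failed router elsewhere only changes which routes exist, not any connection state that would have to be recovered.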


Routing complexity should be in the routers. Routing is a complex and difficult problem, and ought to be performed by the routers, not the hosts. An important objective is to insulate host software from changes caused by the inevitable evolution of the Internet routing architecture.

The system must tolerate wide network variation. A basic objective of the Internet design is to tolerate a wide range of network characteristics - e.g., bandwidth, delay, packet loss, packet reordering, and maximum packet size. Another objective is robustness against failure of individual networks, routers, and hosts, using whatever bandwidth is still available. Finally, the goal is full open system interconnection: an Internet router must be able to interoperate robustly and effectively with any other router or Internet host, across diverse Internet paths.

Sometimes implementors have designed for less ambitious goals. For example, the LAN environment is typically much more benign than the Internet as a whole; LANs have low packet loss and delay and do not reorder packets. Some vendors have fielded implementations that are adequate for a simple LAN environment but work badly for general interoperation. The vendor justifies such a product as being economical within the restricted LAN market. However, isolated LANs seldom stay isolated for long. They are soon connected to each other, to organization-wide internets, and eventually to the global Internet system. In the end, neither the customer nor the vendor is served by incomplete or substandard routers. The requirements in this document are designed for a full-function router. It is intended that fully compliant routers will be usable in almost any part of the Internet.

7. Domain Name Service


Domain Name System (DNS) is the default name resolution service used in a Microsoft Windows Server 2003 network. DNS is part of the Windows Server 2003 TCP/IP protocol suite, and all TCP/IP network connections are, by default, configured with the IP address of at least one DNS server in order to perform name resolution on the network. Windows Server 2003 components that require name resolution will attempt to use this DNS server before attempting to use the previous default Windows name resolution service, Windows Internet Name Service (WINS).

Typically, Windows Server 2003 DNS is deployed in support of Active Directory directory service. In this environment, DNS namespaces mirror the Active Directory forests and domains used by an organization. Network hosts and services are configured with DNS names so that they can be located in the network, and they are also configured with DNS servers that resolve the names of Active Directory domain controllers. Windows Server 2003 DNS is also commonly deployed as a non-Active Directory, or standard, Domain Name System solution, for example, for hosting the Internet presence of an organization.

7.1. DNS Architecture

DNS architecture is a hierarchical distributed database and an associated set of protocols that define:

A mechanism for querying and updating the database.
A mechanism for replicating the information in the database among servers.
A schema of the database.

DNS originated in the early days of the Internet, when the Internet was a small network established by the United States Department of Defense for research purposes. The host names of the computers in this network were managed through the use of a single HOSTS file located on a centrally administered server. Each site that needed to resolve host names on the network downloaded this file. As the number of hosts on the Internet grew, the traffic generated by the update process increased, as did the size of the HOSTS file. The need for a new system offering features such as scalability, decentralized administration, and support for various data types became more and more obvious. The Domain Name System, introduced in 1984, became this new system.

With DNS, the host names reside in a database that can be distributed among multiple servers, decreasing the load on any one server and providing the ability to administer this naming system on a per-partition basis. DNS supports hierarchical names and allows registration of various data types in addition to the host name to IP address mapping used in HOSTS files. Because the DNS database is distributed, its potential size is unlimited and performance is not degraded when more servers are added.

The original DNS was based on Request for Comments (RFC) 882 (Domain Names: Concepts and Facilities) and RFC 883 (Domain Names: Implementation and Specification), which were superseded by RFC 1034 (Domain Names: Concepts and Facilities) and RFC 1035 (Domain Names: Implementation and Specification). Additional RFCs that describe DNS security, implementation, and administrative issues later augmented the original design specifications. The Berkeley Internet Name Domain (BIND) implementation of DNS was originally developed for the 4.3 BSD UNIX operating system. The Microsoft implementation of DNS became a part of the operating system in Microsoft Windows NT Server 4.0.
The Windows NT 4.0 DNS server, like most DNS implementations, has its roots in RFCs 1034 and 1035. The RFCs used in the Microsoft Windows 2000 and Windows Server 2003 operating systems are 1034, 1035, 1886, 1996, 1995, 2136, 2308, and 2052.

7.2. DNS Domain Names

The Domain Name System is implemented as a hierarchical and distributed database containing various types of data, including host names and domain names. The names in a DNS database form a hierarchical tree structure called the domain namespace. Domain names consist of individual labels separated by dots, for example: mydomain.microsoft.com. A fully qualified domain name (FQDN) uniquely identifies the host's position within the DNS hierarchical tree by specifying a list of names separated by dots in the path from the referenced host to the root. The next figure shows an example of a DNS tree with a host called mydomain within the microsoft.com domain. The FQDN for the host would be mydomain.microsoft.com.

7.3. Understanding the DNS Domain Namespace

The DNS domain namespace, as shown in the following figure, is based on the concept of a tree of named domains. Each level of the tree can represent either a branch or a leaf of the tree. A branch is a level where more than one name is used to identify a collection of named resources. A leaf represents a single name used once at that level to indicate a specific resource.
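The branch-versus-leaf distinction can be sketched with a nested-dictionary tree. The namespace contents below are illustrative assumptions based on the examples used in this section:

```python
# The domain namespace as a tree: a node with children is a branch
# (a collection of named resources); a node without children is a leaf
# (one specific resource).
namespace = {
    "com": {
        "microsoft": {
            "example": {},   # leaf: a single named resource
            "dev": {},       # leaf
        },
    },
}

def is_leaf(tree, labels):
    """Walk from the root down the given labels; a leaf has no children."""
    node = tree
    for label in labels:
        node = node[label]
    return len(node) == 0
```

Here "microsoft" is a branch (it collects example and dev), while "example" is a leaf naming one specific resource.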

7.4. DNS Domain Name Hierarchy

The previous figure shows how Microsoft is assigned authority by the Internet root servers for its own part of the DNS domain namespace tree on the Internet. DNS clients and servers use queries as the fundamental method of resolving names in the tree to specific types of resource information. This information is provided by DNS servers in query responses to DNS clients, who then extract the information and pass it to a requesting program for resolving the queried name. Keep in mind that, in the process of resolving a name, DNS servers often function as DNS clients, querying other servers in order to fully resolve a queried name.

7.5. DNS and Internet Domains

The Internet Domain Name System is managed by a Name Registration Authority on the Internet, responsible for maintaining the top-level domains that are assigned by organization and by country/region. The two-letter country/region codes follow ISO Standard 3166. Some of the many existing abbreviations reserved for use by organizations, as well as the two-letter abbreviations used for countries/regions, are shown in the following table:

7.6. Some DNS Top-level Domain Names (TLDs)

DNS Domain Name    Type of Organization
com                Commercial organizations
edu                Educational institutions
org                Non-profit organizations
net                Networks (the backbone of the Internet)
gov                Non-military government organizations
mil                Military government organizations
arpa               Reverse DNS
xx                 Two-letter country code (i.e. us, au, ca, fr)

7.7. Updating the DNS Database

Since the resource records in the zone files are subject to change, they must be updated. The implementation of DNS in Windows 2000 and Windows Server 2003 supports both static and dynamic updates of the DNS database. The details of dynamic update are discussed later in this document.

7.8. DNS Architecture Diagrams

The following diagrams illustrate how the DNS Client and Server services work and provide additional information regarding name resolution, update, and administration operations. The first diagram illustrates the DNS Client service architecture in its name resolution and update operations. In this diagram, name resolution architecture is demonstrated using a Web browser and Microsoft Outlook, and updates are represented by the DHCP client.

7.9. DNS Client Service Architecture

The following diagram illustrates the DNS Server service architecture with its administration tools and the Windows Management Instrumentation (WMI) interface.

7.10. DNS Server Service Architecture

7.11. DNS Protocol

The DNS protocol consists of different types of DNS messages that are processed according to the information in their message fields. This section discusses the different types of DNS messages and the different fields in each message type. In this section, the following DNS message topics are discussed:

Message types
DNS query message format
DNS query message header
DNS query question entries
DNS resource records
Name query message
Name query response
Reverse name query message
DNS update message format
DNS update message flags
Dynamic update response message

7.12. Domain Names

The domain name is used with the client computer name to form the fully qualified domain name (FQDN), also known as the full computer name. In general, the DNS domain name is the remainder of the FQDN that is not used as the unique host name for the computer. For example, if the FQDN, or full computer name, is wkstn1.example.microsoft.com, the domain name is the example.microsoft.com portion of this name.

DNS domain names have two variations: a DNS name and a NetBIOS name. The full computer name (a fully qualified DNS name) is used during querying and location of named resources on your network. For earlier-version clients, the NetBIOS name is used to locate various types of NetBIOS services that are shared on your network. An example that shows the need for both NetBIOS and DNS names is the Net Logon service. In Windows Server 2003 DNS, the Net Logon service on a domain controller registers its service (SRV) resource records on a DNS server. For Windows NT Server 4.0 and earlier versions, domain controllers register a DomainName entry in Windows Internet Name Service (WINS) to perform the same registration and to advertise their availability for providing authentication service to the network.

When a client computer is started on the network, it uses the DNS resolver to query a DNS server for SRV records for its configured domain name. This query is used to locate domain controllers and provide logon authentication for accessing network resources. A client or a domain controller on the network optionally uses the NetBIOS resolver service to query WINS servers, attempting to locate DomainName [1C] entries to complete the logon process. Your DNS domain names should follow the same standards and recommended practices that apply to DNS computer naming described in the previous section.
In general, acceptable naming conventions for domain names include the use of the letters A through Z, the numerals 0 through 9, and the hyphen (-). The period (.) in a domain name is always used to separate the discrete parts of a domain name, commonly known as labels. Each label corresponds to an additional level defined in the DNS namespace tree. For most computers, the primary DNS suffix configured for the computer can be the same as its Active Directory domain name, although the two values can also be different.
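The naming convention above can be checked with a short validator. This is a hedged sketch of only the rules stated here; real DNS naming rules are stricter (label length limits, restrictions on leading/trailing hyphens), and those checks are omitted:

```python
import re

# One label: letters, digits and hyphens only, per the convention above.
LABEL = re.compile(r"^[A-Za-z0-9-]+$")

def valid_domain_name(name):
    """Split on the period separators and check every label."""
    labels = name.rstrip(".").split(".")   # a trailing dot marks a fully qualified name
    return all(LABEL.match(label) for label in labels)
```

For example, wkstn1.example.microsoft.com passes, while a name containing an underscore does not.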

7.13. Host Names

Computers using the underlying TCP/IP protocol of a Windows-based network use an IP address, a 32-bit numeric value (in the case of IPv4) or a 128-bit numeric value (in the case of IPv6), to identify the network connections of network hosts. However, network users prefer to use memorable, alphanumeric names. To support this need, network resources in a Windows-based network are identified by both alphanumeric names and IP addresses. DNS and WINS are two name resolution mechanisms that enable the use of alphanumeric names and convert these names into their respective IP addresses.

NetBIOS vs. DNS Computer Names

In networks running Windows NT 4.0 and earlier, users typically locate and access a computer on the network using a NetBIOS (Network Basic Input Output System) name. In Windows 2000, Windows XP, and Windows Server 2003 operating systems, users locate and access a computer using DNS. In this implementation of DNS, a computer is identified by its full computer name, which is a DNS fully qualified domain name (FQDN).

7.14. DNS Servers List

For DNS clients to operate effectively, a prioritized list of DNS name servers must be configured for each computer to use when processing queries and resolving DNS names. In most cases, the client computer contacts and uses its preferred DNS server, which is the first DNS server on its locally configured list. Listed alternate DNS servers are contacted and used when the preferred server is not available. For this reason, it is important that the preferred DNS server be appropriate for continuous client use under normal conditions.

For computers running Windows XP, the DNS server list is used by clients only to resolve DNS names. When clients send dynamic updates, such as when they change their DNS domain name or a configured IP address, they might contact these servers or other DNS servers as needed to update their DNS resource records. By default, the DNS client on Windows XP does not attempt dynamic update over a Remote Access Service (RAS) or virtual private network (VPN) connection. By default, the Windows XP and Windows Server 2003 DNS Client service does not attempt dynamic update of top-level domain (TLD) zones. Any zone named with a single-label name is considered a TLD zone, for example, com, edu, blank, my-company.

When DNS clients are configured dynamically using a DHCP server, it is possible to have a larger list of provided DNS servers. To effectively share the load when multiple DNS servers are provided in a DHCP options-specified list, you can configure a separate DHCP scope that rotates the listed order of DNS and WINS servers provided to clients.

7.15. DNS Suffix Search List

For DNS clients, you can configure a DNS domain suffix search list that extends or revises their DNS search capabilities. By adding additional suffixes to the list, you can search for short, unqualified computer names in more than one specified DNS domain. Then, if a DNS query fails, the DNS Client service can use this list to append other name suffix endings to your original name and repeat DNS queries to the DNS server for these alternate FQDNs.

For computers and servers, the following default DNS search behavior is predetermined and used when completing and resolving short, unqualified names. When the suffix search list is empty or unspecified, the primary DNS suffix of the computer is appended to short unqualified names, and a DNS query is used to resolve the resultant
FQDN. If this query fails, the computer can try additional queries for alternate FQDNs by appending any connection-specific DNS suffix configured for network connections. If no connection-specific suffixes are configured, or queries for these resultant connection-specific FQDNs fail, then the client can begin to retry queries based on systematic reduction of the primary suffix (also known as devolution). For example, if the primary suffix were example.microsoft.com, the devolution process would be able to retry queries for the short name by searching for it in the microsoft.com and com domains.

When the suffix search list is not empty and has at least one DNS suffix specified, attempts to qualify and resolve short DNS names are limited to searching only those FQDNs made available by the specified suffix list. If queries for all FQDNs that are formed as a result of appending and trying each suffix in the list are not resolved, the query process fails, producing a name-not-found result. If the domain suffix list is used, clients continue to send additional alternate queries based on different DNS domain names when a query is not answered or resolved. Once a name is resolved using an entry in the suffix list, unused list entries are not tried. For this reason, it is most efficient to order the list with the most commonly used domain suffixes first. Domain name suffix searches are used only when a DNS name entry is not fully qualified. To fully qualify a DNS name, a trailing period (.) is entered.

7.16. DNS Server Service

The DNS Server service is the component that provides the server implementation of DNS. The settings discussed in this section include:

Disabling the use of recursion.
Round robin use of resource records.
Subnet prioritization.
Advanced parameters.

DNS-related Files

The following files relate to using and configuring DNS servers and clients.

File: Boot
Description: BIND boot configuration file. This file is not created by the DNS console. However, as an optional configuration for the DNS Server service, it can be copied from another DNS server running the Berkeley Internet Name Domain (BIND) server implementation of DNS. To use this file with the DNS Server service, you need to click From file in Server properties. On BIND servers, this file is often called the named.boot file.

File: Cache.dns
Description: Used to preload resource records into the DNS server names cache. DNS servers use this file to help locate root servers on either your network or the Internet. By default, this file contains DNS resource records that prime the local cache of the server with the addresses of authoritative root servers for the Internet. If you are setting up a DNS server to resolve Internet DNS names, the information in this file is required unless you enable the use of another DNS server as a forwarder to resolve these names. Traffic to the Internet root servers is heavy, but because host names are not usually resolved at this level, the load can be reasonably handled. Instead, the root hints file provides referral information that can be useful during DNS name resolution to redirect a query to other servers that are authoritative for names located beneath the root. For DNS servers operating privately on your internal network, the DNS console can learn and replace the contents of this file with internal root servers on your network, provided they are reachable through the network when you are setting up and configuring new DNS servers. The file can be updated using the DNS console from the Root Hints tab located under the applicable server properties. This file preloads the server names cache when the server is started.

File: Root.dns
Description: Root zone file. This file can appear at a DNS server if it is configured as a root server for your network.

File: zone_name.dns
Description: Used when a standard zone (either primary or secondary) is added and configured for the server. Files of this type are not created or used for primary type zones that are directory-integrated, which are stored in the Active Directory database. These files can be found in the systemroot\System32\Dns folder on the server computer.

DNS Domain Names and Subdomain Names

In this example, a new subdomain, example.microsoft.com, is delegated away from the microsoft.com zone and managed in its own zone. However, the microsoft.com zone needs to contain a few resource records to provide the delegation information that references the DNS servers that are authoritative for the delegated example.microsoft.com subdomain. If the microsoft.com zone does not use delegation for a subdomain, any data for the subdomain remains part of the microsoft.com zone. For example, the subdomain dev.microsoft.com is not delegated away but is managed by the microsoft.com zone.
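Deciding which zone is authoritative for a name amounts to a longest-suffix match over the zones a server knows about. This illustrative sketch uses the zones from the example above and assumes the queried name falls under microsoft.com:

```python
# Zones known to the server: example.microsoft.com is delegated into its
# own zone, while dev.microsoft.com is not and stays in microsoft.com.
ZONES = ["microsoft.com", "example.microsoft.com"]

def authoritative_zone(name):
    """Return the most specific (longest) zone whose suffix matches the name."""
    matching = [z for z in ZONES if name == z or name.endswith("." + z)]
    return max(matching, key=len)
```

A host under the delegated subdomain matches the child zone, while a host under the non-delegated dev subdomain still belongs to the parent microsoft.com zone.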

DNS Notify

Windows-based DNS servers support DNS Notify, an update to the original DNS protocol specification that permits a means of initiating notification to secondary servers when zone changes occur (RFC 1996). DNS Notify implements a push mechanism for notifying a select set of secondary servers for a zone when the zone is updated. Servers that are notified can then initiate a zone transfer as described above, to pull zone changes from their master servers and update their local replicas of the zone. For secondaries to be notified by the DNS server acting as their configured source for a zone, each secondary server must first have its IP address in the notify list of the source server. When using the DNS console, this list is maintained in the Notify dialog box, which is accessible from the Zone Transfer tab located in zone Properties. In addition to notifying the listed servers, the DNS console permits you to use the contents of the notify list as a means to restrict or limit zone transfer access to only those secondary servers specified in the list. This can help prevent an undesired attempt by an unknown or unapproved DNS server to pull, or request, zone updates.

DNS Processes and Interactions

DNS processes and interactions involve the communications between DNS clients and DNS servers during the resolution of DNS queries and dynamic update, and between DNS servers during name resolution and zone administration. Secondary processes and interactions depend on the support for technologies such as Unicode and WINS. For information about TCP/IP DNS messages, see DNS Protocol in this document.

How DNS Queries Work

When a DNS client needs to look up a name used in a program, it queries DNS servers to resolve the name. Each query message the client sends contains three pieces of information, specifying a question for the server to answer:

1. A specified DNS domain name, stated as a fully qualified domain name.
2. A specified query type, which can either specify a resource record by type or a specialized type of query operation.
3. A specified class for the DNS domain name. For Windows DNS servers, this should always be specified as the Internet (IN) class.

For example, the name specified could be the FQDN for a computer, such as hosta.example.microsoft.com., and the query type specified to look for an address (A) resource record by that name. Think of a DNS query as a client asking a server a two-part question, such as "Do you have any A resource records for a computer named hosta.example.microsoft.com.?" When the client receives an answer from the server, it reads and interprets the answered A resource record, learning the IP address for the computer it asked for by name.

DNS queries resolve in a number of different ways. A client can sometimes answer a query locally using cached information obtained from a previous query. The DNS server can use its own cache of resource record information to answer a query. A DNS server can also query or contact other DNS servers on behalf of the requesting client to fully resolve the name, then send an answer back to the client. This process is known as recursion.
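The three-part question (name, type, class) maps directly onto the DNS wire format defined in RFC 1035. The following sketch hand-builds the header and question section of a query for an A record in the IN class; the transaction ID is arbitrary, and answer parsing is omitted:

```python
import struct

def build_query(name, qtype=1, qclass=1):
    """Build a DNS query message: 12-byte header plus one question.
    qtype 1 = A record, qclass 1 = Internet (IN)."""
    header = struct.pack(">HHHHHH",
                         0x1234,   # transaction ID (arbitrary)
                         0x0100,   # flags: standard query, recursion desired
                         1,        # QDCOUNT: one question
                         0, 0, 0)  # no answer/authority/additional records
    qname = b""
    for label in name.rstrip(".").split("."):
        qname += bytes([len(label)]) + label.encode("ascii")
    qname += b"\x00"               # zero-length root label terminates the name
    return header + qname + struct.pack(">HH", qtype, qclass)

query = build_query("hosta.example.microsoft.com")
```

Each label is length-prefixed on the wire, which is why the dots of the printable name never appear in the encoded question.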

In addition, the client itself can attempt to contact additional DNS servers to resolve a name. When a client does so, it uses separate and additional queries based on referral answers from servers. This process is known as iteration. In general, the DNS query process occurs in two parts:

A name query begins at a client computer and is passed to a resolver, the DNS Client service, for resolution.
When the query cannot be resolved locally, DNS servers can be queried as needed to resolve the name.
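The two parts above can be sketched as a resolver that consults its local cache first and falls back to configured servers. The cache contents and server data are hypothetical illustrations:

```python
# Part 1: local resolution from cached data; part 2: query DNS servers.
CACHE = {"hosta.example.microsoft.com": "10.0.0.5"}
SERVERS = {"dns1": {"hostb.example.microsoft.com": "10.0.0.6"}}

def resolve(name):
    if name in CACHE:                     # part 1: resolved locally
        return CACHE[name]
    for records in SERVERS.values():      # part 2: ask configured servers
        if name in records:
            CACHE[name] = records[name]   # cache the answer for next time
            return records[name]
    return None                           # name not found
```

Caching the server's answer is what lets a later query for the same name complete in part 1 without any network traffic.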

Both of these processes are explained in more detail in the following sections.

Network Ports Used By DNS

During DNS resolution, DNS messages are sent from DNS clients to DNS servers or between DNS servers. Messages are sent over UDP, and DNS servers bind to UDP port 53. When the message length exceeds the default message size for a User Datagram Protocol (UDP) datagram (512 octets), the first response to the message is sent with as much data as the UDP datagram will allow, and then the DNS server sets a flag indicating a truncated response. The message sender can then choose to reissue the request to the DNS server using TCP (over TCP port 53). The benefit of this approach is that it takes advantage of the performance of UDP but also has a backup failover solution for longer queries. In general, all DNS queries are sent from a high-numbered source port (above 1023) to destination port 53, and responses are sent from source port 53 to a high-numbered destination port. The following table lists the UDP and TCP ports used for different DNS message types.

8. Network Security Monitoring

To provide permanent network situational awareness, we need to acquire detailed traffic statistics. Such statistics can be complete packet traces, flow statistics, or volume statistics. To handle high-speed traffic efficiently, a trade-off between computational feasibility and the provided level of information must be chosen. Full packet traces, traditionally used by traffic analyzers, provide the most detailed information. On the other hand, their scalability and the feasibility of permanent traffic observation and storage in high-speed campus networks are an issue, including high operational costs. Flow-based statistics provide information from IP headers. They do not include any payload information, but from the IP point of view we still know who communicates with whom, at which time, and so on. Such an approach can reduce the amount of data to process and store by a factor of up to 1000.
Volume statistics are often easy to obtain in the form of SNMP data. They provide a less detailed view of the network than flow statistics or full packet traces and do not allow advanced traffic analysis.

Figure 2: Traffic monitoring system deployment in a campus network.

Advance Computer Network

We use flow data because of their scalability and ability to provide a sufficient amount of information. Using flow statistics allows us to work even with encrypted traffic. NetFlow, initially available in Cisco routers, is now used in various flow-enabled appliances (routers, probes). Flow-based monitoring allows us to permanently observe anything from a small end-user network up to large NREN (National Research and Education Network) backbone links.

8.1. NetFlow Generators

In general, flows are a set of packets which share a common property. The most important such properties are the flow's endpoints. The simplest type of flow is a 5-tuple, with all its packets having the same source and destination IP addresses, port numbers and protocol. Flows are unidirectional and all their packets travel in the same direction. A flow begins when its first packet is observed. A flow ends when no new traffic is observed for the existing flow (inactive timeout) or when the connection terminates (e.g. a TCP connection is closed). An active timeout is the time period after which data about an ongoing flow are exported. Statistics on IP traffic flows provide information about who communicates with whom, when, for how long, using what protocol and service, and also how much data was transferred. To acquire NetFlow statistics, routers or dedicated probes can be used [ZAD10]. Currently, not all routers support flow generation. Enabling flow generation can consume up to 30-40% of the router's performance, with possible impacts on network behavior. On the other hand, dedicated flow probes observe the traffic in a passive manner, so network functionality is not affected. The FlowMon probe is the preferred one due to its implemented features, which include support for NetFlow v5/v9 and the IPFIX standard, packet/flow sampling, active/inactive timeouts, flow filtering, data anonymization, etc. The probe firmware and software can be modified to add support for other advanced features.
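To make the flow model above concrete, here is a minimal sketch of a flow cache keyed by the 5-tuple, with an inactive timeout that exports idle flows. This is hypothetical illustration code, not code from any FlowMon probe; the field names and timeout values are made up:

```python
from collections import namedtuple

FlowKey = namedtuple("FlowKey", "src_ip dst_ip src_port dst_port proto")

class FlowCache:
    """Minimal unidirectional flow cache with an inactive timeout."""

    def __init__(self, inactive_timeout=30):
        self.inactive_timeout = inactive_timeout
        # FlowKey -> {"packets", "bytes", "first_seen", "last_seen"}
        self.flows = {}

    def observe(self, key, length, now):
        flow = self.flows.get(key)
        if flow is None:
            # The first observed packet starts a new flow record.
            flow = self.flows[key] = {"packets": 0, "bytes": 0,
                                      "first_seen": now, "last_seen": now}
        flow["packets"] += 1
        flow["bytes"] += length
        flow["last_seen"] = now

    def expire(self, now):
        """Export and drop flows idle longer than the inactive timeout."""
        exported = {k: v for k, v in self.flows.items()
                    if now - v["last_seen"] >= self.inactive_timeout}
        for k in exported:
            del self.flows[k]
        return exported

cache = FlowCache(inactive_timeout=30)
key = FlowKey("10.0.0.1", "10.0.0.2", 51000, 80, 6)
cache.observe(key, 1500, now=0)
cache.observe(key, 400, now=10)
# After 40 s of inactivity the flow is exported (2 packets, 1900 bytes).
print(cache.expire(now=50))
```

A real exporter would also apply the active timeout (periodically exporting long-lived flows) and react to TCP FIN/RST flags, which are omitted here for brevity.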
Hardware-accelerated probes support line-rate traffic processing without packet loss, while standard probes are based on commodity hardware with lower performance. The FlowMon probe was developed by the Liberouter project as part of the JRA2 activity of the GÉANT2 project. The FlowMon appliances are now manufactured by INVEA-TECH, a university start-up company.

To provide input for probes, TAP (Test Access Port) devices or SPAN (Switched Port Analyzer) ports can be used. TAP devices are non-obtrusive and are not detectable on the network. They send a copy (1:1) of all network packets to a probe. In case of failure, the TAP has a built-in fail-over mode: the observed line will not be interrupted and will stay operational independently of any potential probe failure. This approach enables us to deploy monitoring devices in environments with high reliability requirements. SPAN (port mirroring) functionality must be enabled on the router/switch side to forward network traffic to the monitoring device. It is not necessary to introduce additional hardware into the network infrastructure, but we need to reconfigure the router/switch and take into account some SPAN port limits. A detailed comparison between using TAP devices and SPAN ports is given in [ZHA07].

9. Firewall Architecture
Below I present a basic overview of firewall architecture. I created this as a reference document in case the LAN guys are barking network stuff at me on a project I'm working on. The untrusted network in the diagrams can be interpreted as the Internet, and the trusted network can be read as the intranet or the company's LAN. In the past, process environments were fully closed networks in which process control systems communicated with each other directly. Nowadays, however, we more often see several office and process automation networks which have to exchange data. People have to be aware of the fact that this brings along certain risks. Over the years the office and process domains have become more and more interconnected, without people paying enough attention to process control security. The office and process control environments, for example, are often not sufficiently separated. In those situations it is plausible that, without too many difficulties, an intruder, virus or worm can harm the integrity of the production environment. These days the use of laptops within businesses is widely common, and picking up viruses and other malware as a result of the possibility to surf the Internet is therefore a realistic risk.

9.1. Packet Filtering Router

The packet filtering router screens the packets exchanged between the untrusted and trusted networks. Basic routers cannot do much more than route traffic and only support basic rulesets. Basic rules are, for example, identifying subsets of IP addresses that are allowed to communicate between the untrusted and trusted networks. More elaborate rulesets allow for filtering based on the detailed content of the IP packets. In the latter case the router is sometimes called a firewall.

9.2. Bastion Host or Screened Host

Incoming traffic from the untrusted network is forwarded to the bastion host server or firewall, which determines whether or not the messages are forwarded to the trusted network. Outgoing communication can follow the reverse route or can go directly from the trusted to the untrusted network, bypassing the bastion host.

9.3. Dual-Homed Gateway


The gateway has two network interfaces and sits between the trusted and untrusted networks. Direct forwarding between the network interfaces is blocked, to force the traffic to go through an application or proxy running on the gateway. The application connects to the two networks.

9.4. Demilitarized Zone (DMZ) or Screened Subnet

Here we have a set of two routers creating an additional network between the trusted and untrusted networks, called the DMZ. Traffic destined from the trusted network to the untrusted network is routed directly through the two routers, or goes to the firewall and is forwarded to the untrusted network from there. Traffic from the untrusted network to the trusted network is sent first to the firewall in the DMZ and is then forwarded to the trusted network. Publicly accessible servers (for example web servers) are located in the DMZ, and traffic is routed from the untrusted network directly to those servers. The major difference between this architecture and the bastion host is that the bastion host would have had to combine the functionality run on the public servers with the traffic forwarding functionality.

9.5. Firewall Appliance

This is a piece of hardware with multiple network interfaces that allows it to connect to multiple subnets, i.e. the gateway functionality of the dual-homed gateway, and do intelligent packet filtering.

9.6. Proxy

If the traffic from the trusted network to the untrusted network is forced to go through one location in the DMZ, it is possible to introduce some additional functionality, such as:

- hiding the IP addresses of clients in the trusted network from the untrusted network (anonymizing requests)
- caching requests from multiple clients in the trusted network to the same source in the untrusted network

If the proxy does this in such a way that the exchanged messages are not modified, we say it is a tunneling proxy, sometimes called a gateway. Some proxies are transparent to the clients in the trusted network. This has the advantage that the clients are unaware of the proxy and do not require any configuration. For anonymization this is not possible, because the proxy needs to modify the messages exchanged between the trusted and untrusted networks.
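The caching behavior described above can be sketched in a few lines. This is a toy illustration only (the `fetch` callback and the example URL are made up, and no real networking is done): repeated requests for the same resource hit a shared cache instead of the origin server in the untrusted network.

```python
# Toy caching-proxy core (illustrative only): requests from multiple
# trusted-network clients to the same untrusted-network resource are
# answered from a shared cache after the first fetch.
class CachingProxy:
    def __init__(self, fetch):
        self.fetch = fetch          # callback that contacts the origin server
        self.cache = {}             # url -> response body
        self.hits = self.misses = 0

    def get(self, url):
        if url in self.cache:
            self.hits += 1          # served locally, origin not contacted
        else:
            self.misses += 1
            self.cache[url] = self.fetch(url)
        return self.cache[url]

origin_calls = []
proxy = CachingProxy(lambda url: origin_calls.append(url) or f"body of {url}")
proxy.get("http://example.com/")    # miss: origin is contacted
proxy.get("http://example.com/")    # hit: served from the cache
print(len(origin_calls))            # 1
```

A production proxy would additionally honor cache-control headers and expire stale entries, which is omitted here.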


9.7. Reverse Proxy

Here a proxy acts on traffic coming from the untrusted network destined for the publicly accessible servers in the DMZ. The reverse proxy can provide the following additional functionality, transparently for the clients in the untrusted network:

- load balancing the incoming requests transparently over multiple servers
- terminating secure communication, or decrypting encrypted messages, to take this burden away from the publicly accessible servers behind it
- caching frequently requested information
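The load-balancing item in the list above can be sketched as a simple round-robin scheduler. This is an illustrative sketch, not a real reverse proxy; the backend server names are made up:

```python
import itertools

# Sketch of the load-balancing step of a reverse proxy: incoming
# requests are spread round-robin over a pool of backend servers.
class RoundRobinBalancer:
    def __init__(self, backends):
        self._cycle = itertools.cycle(backends)

    def pick(self):
        """Return the backend that should serve the next request."""
        return next(self._cycle)

lb = RoundRobinBalancer(["web1:8080", "web2:8080", "web3:8080"])
print([lb.pick() for _ in range(5)])
# ['web1:8080', 'web2:8080', 'web3:8080', 'web1:8080', 'web2:8080']
```

Real load balancers usually add health checks and weighting, so a failed backend is taken out of the rotation; round-robin is just the simplest policy.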

9.8. A solid firewall structure

Most companies use firewall solutions in their office environment to protect themselves against cyber incidents as much as possible. In the process control environment this is less common. This is actually quite strange, because a cyber-related incident in which one part of the process control system is affected can harm the integrity of the whole plant. It is even possible that the plant shuts down completely. What the PCS Competence Center of Egemin Automation therefore suggests is that a solid firewall structure be set up in the process control environment. This structure allows the realization of a controlled separation between the office and process control networks. This extra layer of security will only allow necessary communication, for example between the ERP system of the office environment and the MES system of the process environment.

9.9. Advantages

A solid firewall structure within your process environment will provide you with increased integrity of your process control environment:

- a controlled separation between the office and process environments;
- control over communication;
- incidents staying within their own domain;
- clarity over authority of ownership;
- a procedural set-up in which extensions or adjustments take up only a minimal amount of working hours.

