
A Controllable Faulty Router

Daniel Poltawski
[email protected]
BSc (Hons) Computer Science
Lancaster University
March 18, 2005

I certify that the material contained in this dissertation is my own work and does not contain unreferenced or unacknowledged material. I also warrant that the above statement applies to the implementation of the project and all associated documentation. Regarding the electronically submitted version of this work, I consent to this being stored electronically and copied for assessment purposes, including the Department's use of plagiarism detection systems in order to check the integrity of assessed work. I agree to my dissertation being placed in the public domain, with my name explicitly included as the author of the work.

Date:

Signed:

Abstract

As new technology is developed and hardware gets cheaper, computer networks become more and more widespread and more networked applications are developed. Networking brings a whole new area of complexity into application design: coping with adverse network conditions is a fundamental problem. This project involves the development of a Linux-based router which is faulty in a controllable way, for use in testing how networking applications and protocols behave in a less than ideal environment.

Contents

1 Introduction
  1.1 The Need for a Controllably Faulty Environment

2 Background
  2.1 Computer Networks
    2.1.1 The OSI Model
    2.1.2 IP
    2.1.3 TCP
    2.1.4 UDP
    2.1.5 ICMP
  2.2 Faulty Environments
    2.2.1 CLEO
  2.3 Related Programs
    2.3.1 Iperf
    2.3.2 LANforge
  2.4 Emulation Environments
    2.4.1 Dummynet
    2.4.2 ONE - Ohio Network Emulator
    2.4.3 Honeyd
    2.4.4 NISTNet
  2.5 GNU/Linux
    2.5.1 What is Linux?
    2.5.2 Why choose Linux?

3 Linux Architecture
  3.1 The Linux Kernel and Modules
  3.2 Kernel Space vs. Userspace
  3.3 Networking Overview
    3.3.1 Packet Transition Through the Linux Kernel
    3.3.2 sk_buff
  3.4 Netfilter
    3.4.1 What is Netfilter?
    3.4.2 Netfilter Hook Points
    3.4.3 Netfilter Return Codes
    3.4.4 Using Netfilter Hooks

4 Design
  4.1 Overall Structure
  4.2 Storing Rules
    4.2.1 Packet Matching
    4.2.2 Fault Selection
  4.3 Altering Packet Flow
    4.3.1 Dropping
    4.3.2 Delaying
    4.3.3 Reordering
  4.4 Controlling Rules

5 Implementation
  5.1 Implementation Environment
  5.2 Useful Kernel Functions
    5.2.1 printk()
    5.2.2 kmalloc() and kfree()
    5.2.3 Double-linked list
    5.2.4 Kernel Timers
    5.2.5 Spin locks
  5.3 The Controllable Faulty Routing Module
    5.3.1 Overview
    5.3.2 Creating Module and Registering with Netfilter
    5.3.3 Storing and Examining Rules
    5.3.4 Dropping Packets
    5.3.5 Delaying Packets
    5.3.6 Reordering Packets
  5.4 Control Communication
    5.4.1 Method of Communication
    5.4.2 Enabling Socket Communications in the Module
    5.4.3 Userspace Control Program

6 Testing and Evaluation
  6.1 The Testing Environment
    6.1.1 Routing of the Test Network
    6.1.2 Controlling the System
  6.2 Testing Tools
    6.2.1 Ping
    6.2.2 tcpdump
    6.2.3 SmokePing
    6.2.4 curl
  6.3 Dropping
    6.3.1 Flood Ping Under Normal Conditions
    6.3.2 Flood Ping With Delay Conditions
    6.3.3 Probability vs. Sample Size
  6.4 Delaying
    6.4.1 A Demonstration of Multiple Periods of Delay
    6.4.2 Testing Delay Periods
    6.4.3 Probability with varying degrees of delay
  6.5 Reordering
  6.6 Packet Selection
  6.7 Rule Performance
  6.8 Evaluation Issues

7 Conclusion
  7.1 Fulfilment of aims
    7.1.1 Research Aims
    7.1.2 Implementation Aims
  7.2 Deficiencies and How They Should Be Addressed
    7.2.1 Inefficient Rule Storage
    7.2.2 Rules Overstepping Each Other
    7.2.3 Multiple Faults in Rules
    7.2.4 Improved Accuracy of Delay Times
    7.2.5 Improved Reordering Support
  7.3 Future Work
    7.3.1 Easier Control
    7.3.2 IPv6 Support
    7.3.3 Support More Rule Parameters
  7.4 Lessons Learnt
  7.5 Final Overview

8 Acknowledgements

A 30 Packet Pool Reordering Tcpdump

B Project Proposal

List of Figures

1 RTT of a Core CLEO Node
2 RTT of Misbehaving DSL Connected CLEO Node
3 Netfilter Hook Structure
4 Overall Design
5 Reordering Basics
6 Updated Netfilter Hook Structure
7 Hook Function Transition
8 Overview of the Queue Handler
9 Test Network Diagram
10 Effects of Dropping on a 10,000 packet flood ping
11 Effects of Dropping on a 100 packet flood ping
12 Average round trip delay on a 100 Packet Flood Ping
13 Gateway Incremental Delay
14 Increasing Delay with 75% Probability
15 Effects of 30 packet pool reordering on ICMP probe
16 maggie ICMP probe
17 maggie HTTP probe

List of Tables

1 Matching Packets By Destination and Protocol
2 Matching Packets By All Headers
3 Matching All Packets of a Single Protocol
4 Effects of dropping on a 10,000 packet flood ping
5 Result of Delay on Round Trip Times of a 100 Packet Flood Ping
6 Result of Increasing Numbers of Rules on RTT

Working documents available at: http://www.lancs.ac.uk/~poltawsk/fyp/

1 Introduction

Computer networks are one of the most rapidly growing technologies in computing today. The Internet brought on an explosion in networking technologies, with usage growing rapidly since its conception and only recently beginning to slow. Ofcom, the UK telecommunications industry watchdog, reported that 56% of UK homes were connected to the Internet in October 2004, a good indicator of just how far the penetration of computer networks stretches. Although most of the western world is heavily connected by computer networks, it is worth noting that as little as 12.7% of the world's total population has direct access to the Internet [1]; there are very remote and hostile environments yet to be connected, and there are still key challenges in creating network connectivity over long distances, in hostile environments, at affordable prices. Small networks throughout homes and businesses have become more widespread, which in turn has allowed the price of technology such as Ethernet to fall to a level at which fast networks can be set up in a variety of different locations with great ease. This widespread penetration has meant that many new technologies have had significant resources placed into their development, in parallel with the increase in popularity. Most telecommunication links in the world are connected together with networking equipment, and as networking technology increases in complexity there is a very real need to be able to test applications on a large variety of different infrastructures, environments and situations. As well as small local area networks, multi-continental wide area networks are used for a wide variety of telecommunication needs, connecting large organisations together.

1.1 The Need for a Controllably Faulty Environment

Ofcom reported in October 2004 that in the UK alone there were over 5 million broadband subscribers [2]. As high speed networks reach the home, the market for applications which take advantage of the increased bandwidth grows, continuing the constant drive to develop the most efficient networks possible and to push for the fastest transfer of data possible. More and more applications are developed to take advantage of the high speed networks which are widely available today, and developers try to push their applications to utilise the resources available to them; today many games consoles can be connected to the Internet to allow games players around the world to compete against each other.

Services such as Microsoft's Xbox Live [3] allow game players to compete with each other whilst chatting using a voice communication system, running in millions of people's homes with a wide variety of different setups. As well as pushing the limits in terms of network utilisation, computer networks are spreading further and wider: next generation mobile phone systems are facilitating data communications in a much wider area with no guarantees at all as to the environmental conditions, whilst aiming to allow video and voice to be transmitted and received, as well as more generic high speed data communications, on the move.

Unfortunately, developing applications which communicate over networks means developing for a relative unknown. It takes away the control which exists within a single computer; the clinical consistency with which operations take place is lost. Nothing is guaranteed on a packet switched network: a message sent out onto the wire or into the airwaves may never arrive, it may arrive incredibly slowly, or it may arrive but contain invalid information. Assumptions can never be made about a network; a network link which was working previously may not necessarily be working the next time data is sent out onto it. Developers of networking applications cannot make many assumptions about their environment, particularly if they are trying to reach a wide range of customers. Fast data links can quickly become congested with an influx of traffic, and with platforms such as mobile telecommunications the environmental conditions may rapidly change, so software must be able to cope with adverse conditions in some way or another.

In order to test networking applications, developers have a number of different options: a developer can either test their product in a real live situation or alternatively create a simulation of the environment. With more and more complicated networks combined with ambitious networking applications, trying to put together a live simulation of a faulty environment becomes exceptionally complicated, expensive and difficult to accomplish. For example, a radio-based mobile system has any number of different environmental factors which can cause problems; trying to carry out live testing in all the different environmental conditions which might affect transmission on radio waves would be impossible. It would be incredibly difficult to cover even a brief subset of conditions, as natural condition changes are not really reproducible. Simulations of environments can offer significantly cheaper costs and allow extensive testing to be implemented easily; they can also be controlled easily, and different environmental conditions can be reproduced. The ability to reproduce situations is a vital quality when fixing problems which arise due to specific adverse conditions. A common problem with simulations is that they are not reflective of the real situation, and usually do not test the actual application in its natural environment.

The goal of this project is to create an intermediate between simulation and a live environment: to create real world problematic conditions while not requiring the real world environment. It is designed to sit in a network like a real piece of equipment would, and to introduce network problems transparently, in a controllable way.

2 Background

2.1 Computer Networks

It is important to define what is meant by a network; in this project the concept of a computer network is what will be continually referred to. Computer networks are defined as any number of computers connected together by a communication link, be that a physical piece of cable, a radio link or some other medium which allows multiple computers to talk to each other. The computers are often referred to as nodes connected via a network link. The term computers is not a particularly good choice, as it may well refer to many specialised pieces of equipment. In reality, nodes connected to a network could comprise sophisticated pieces of networking hardware, mobile phones, games consoles and other pieces of domestic equipment. In telecommunications networks the nodes connected together could well all be specialist hardware which works in the background without end users ever having any idea of it, despite using the infrastructure daily. Computer networks are often categorised by size using some general terms, described below:

A LAN or Local Area Network is probably the most prominent kind of network, covering a relatively small geographical area, in situations such as a home, office, or a small set of buildings close together.

A WAN or Wide Area Network covers a much larger geographical area, and is not limited by size in the same way as a LAN. Generally, WANs are used to connect LANs together. The Internet is essentially a WAN.

A MAN or Metropolitan Area Network is an intermediate between a LAN and a WAN, connecting together areas a few kilometres apart, but not quite as far ranging as spreading across countries and continents.

Although such terms are often used to describe networks, the convenience of describing networks with such limiting terms is largely lost to the ambiguity they introduce, and the terms can be interchanged to describe a specific network scenario in many cases.

2.1.1 The OSI Model

Computer networks are described in a precise manner using the Open Systems Interconnection (OSI) Model, a somewhat abstract description of how nodes on a network interconnect. It is a useful model as it allows the complexity of network communication to be split up into specialised layers.

Although the boundaries between the layers of the OSI model may well be more concrete than they need to be, it is important to be able to distinguish between tasks in this way. The OSI model can be described as follows:

1. Physical Layer: The physical layer is concerned with the underlying principles of how data physically gets from one location to another, be that by electrical charge travelling down a wire or radio waves travelling wirelessly between nodes.

2. Data Link Layer: The data link layer is concerned with getting data into a form which is ready to transmit through the physical layer; it defines the form in which data will be transmitted, that is to say, the largest message which can be sent at once is defined here. It introduces basic addressing between directly connected nodes, and allows the detection and correction of errors from the physical layer to a limited extent.

3. Network Layer: The network layer introduces the ability to transfer data beyond the length permitted by specific hardware, allowing the transfer of messages of variable length. It packages up data in a manner suitable for the data link layer. The network layer allows routing, and hence addressing beyond the limits of the data link layer (beyond physically connected nodes). The network layer is primarily concerned with routing and is the layer at which this project will be built.

4. Transport Layer: The transport layer is primarily concerned with bridging the gap between application and network; it makes the applications' job of sending data to each other simpler. The application does not have to be concerned with individual packets; the transport layer (together with the session/presentation layers) is concerned with this.

5. Session Layer: The session layer is an extension of the transport layer in most protocol stacks. Its purpose is keeping track of a network connection between two nodes.

6. Presentation Layer: The presentation layer is concerned with the format of the data communication between nodes. A common presentational format needs to be established at this level to allow different systems to communicate.

7. Application Layer: The application layer is where specific applications carry out their networking operations; they will use all the above layers to carry out their task in one way or another.

Although the OSI model is useful for separating out the different functions of a networking system, in reality the layers are rarely as clearly defined as the model suggests. Usually a couple of layers will be grouped together: the low-level media layers 1 and 2 will be similar for Ethernet or wireless Ethernet (for example), and layers 5-7 will be more focused on a specific application. This project will be focused on simulating problems at layers 1-3 and testing how the parts of networking systems above these levels cope. The OSI model is good for getting an overall overview of how networking systems work, as each of the higher levels depends upon the levels below it. The higher levels in the OSI model have methods to deal with problems which occur lower down in the protocol stack, primarily because it would simply be infeasible to actually use networks if every small problem in a transmission caused the whole communication to fail. The OSI model is very useful in its ability to allow failures at different levels of the model to be dealt with by each level individually, and so has the advantage of reducing complexity by masking many potential problems with network transmissions between layers. If there were not a structured model which allows different faults to be dealt with at different levels of the network stack, application designers would also have to be experts on every single network fault which could occur; obviously a highly undesirable way of going about handling faults. It is because of the abstraction provided by network stacks that useful network applications are able to be created. As this project aims to introduce faults into the network layer, it will give some sort of test of the strengths of the OSI model and the ability of higher-level layers to deal with and cope with the problems which are introduced.

2.1.2 IP

The Internet Protocol (IP) has been an important tool in allowing heterogeneous and scalable inter-networks [4] to be created. It was developed for the primary purpose of connecting networks to networks, and is defined by RFC 791 [6]. IP runs on each node in the network and provides the infrastructure and addressing scheme that allow different nodes to communicate with each other over many different types of network links. It is IP's functionality and flexibility which has allowed it to run on top of many different low-level physical layer transport mechanisms; indeed, it has been proposed (somewhat comically) that IP could run on top of an avian carrier mechanism [7].


IP allows any host to send packets to another host on the network. In the current IP system (IPv4), each node is referred to using an address known as the IP address; this is a 4-byte address, usually represented in dotted quad notation, where four numbers are separated by dots, e.g. 148.88.8.4. These addresses are assigned to hosts worldwide by the Internet Assigned Numbers Authority (IANA) [5], and were originally supposed to have a hierarchical or classful structure which would allow routing to take place intelligently. This has become less and less practical, with many addresses wasted, as the hierarchical nature did not reflect the organisational nature of networks, and it was a terribly inefficient way of allocating addresses to organisations. As the original idea behind the addressing scheme was to allow efficient routing between hosts on a network, an alternative solution had to be found; this came in the form of Classless Inter-Domain Routing (CIDR) [8]. The classing scheme of network addresses is one of the many problems with the IPv4 system, and slowly networks around the world are adopting the new IPv6 [9], which alleviates many of the problems with addressing and routing in IPv4.

It is worth noting that private address space has been allocated in RFC 1918 [10]; this means that private networks can be set up to use addresses which only route within a private network, and not out to the wider Internet, which also allows many different private networks to use the same addresses without conflicting with each other. Using private addresses allows small internal networks to be set up easily, without requiring the allocation from Internet authorities which is needed to receive a globally unique IP.

In order for networks to be administered in a convenient way, subnets are created using something called a subnet mask, which makes it easy for a host to see if the address of another host is within its given subnet (and local broadcast network). If the address is in the local Ethernet network, the host will use ARP [11] (Address Resolution Protocol) to find the hardware address of the host, and will then be able to send the packet with the hardware address in the frame header. If a network address is outside the local subnet, the host will forward the packet to the default gateway of the network, which will then make a suitable routing decision about how to send the packet on. The use of subnet masks allows IP networks of any size to be segmented and set up in a logical way.
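The subnet test described above is simply a bitwise AND of each address with the mask. The short userspace sketch below is not part of the project's code; the addresses and mask are made-up illustrative values. It shows the comparison a host performs before deciding between local ARP delivery and the default gateway.

    /* Minimal sketch (not from the project itself): deciding whether a
     * destination is on the local subnet by applying the subnet mask.
     * Addresses and mask below are illustrative values only. */
    #include <stdio.h>
    #include <arpa/inet.h>

    int main(void)
    {
        struct in_addr local, dest, mask;

        inet_aton("148.88.8.4",    &local);   /* this host           */
        inet_aton("148.88.9.200",  &dest);    /* destination host    */
        inet_aton("255.255.252.0", &mask);    /* subnet mask (/22)   */

        /* Same subnet if both addresses agree on all masked bits. */
        if ((local.s_addr & mask.s_addr) == (dest.s_addr & mask.s_addr))
            printf("destination is local: deliver via ARP\n");
        else
            printf("destination is remote: send to default gateway\n");

        return 0;
    }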

2.1.3 TCP

TCP, or Transmission Control Protocol [12], is a transport-layer protocol which is used with IP to provide a more reliable connection between hosts. TCP provides the reliable stream which is not provided by IP. TCP sits between IP and applications and provides the reliability which is needed to effectively transmit information of any size. TCP offers reliability by providing what is known as a byte-stream; that is to say, it provides a channel to transmit unlimited information between hosts.

It allows an application to effectively say "I want to send this information to this host", and to transmit that information without worrying about the lower level issues of putting it into a form which can be transmitted over the network. TCP tries to recover from data which is lost, damaged, duplicated or reordered. TCP also provides flow control, which enables transmission to carry on at a rate of flow which is acceptable for a network and aims to stop a single connection from overloading a network; this allows many connections to cooperate in harmony. TCP allows multiplexing between hosts, meaning that different applications can communicate on the same machine, each binding to what is known as a port, allowing the different applications to operate concurrently. A final feature which TCP provides is connection tracking: it takes care of when connections are established and terminated, and frees the applications running above TCP from having to perform this process. The majority of applications which run on networks use TCP, as reliable network delivery is a key feature for most applications; HTTP for web pages and SMTP/POP3 for email are key applications dominating a large amount of network traffic.

TCP Mechanics

TCP is an interesting protocol to examine in terms of how connections are established, maintained and terminated. As has already been noted, TCP provides mechanisms for congestion control and connection tracking. TCP provides ordered delivery and robustness through its design, and in order for the controllable router to test the limits of TCP, it is important to know how the protocol is designed to achieve these goals of reliability.

TCP establishes connections using what is known as a handshake, a term representing that both ends of a connection agree on its establishment. TCP uses a three-way handshake, in which both sides agree on the parameters involved in the opening of the TCP connection. Symbolically, the side wishing to open a connection will ask the other side if it can connect; the other side will then acknowledge that request and ask the sender if it can connect back; the original requester will then respond with an acknowledgement. Both sides then know that they want to talk and have accepted the establishment of the connection. More technically, the client will send a SYN flag to the server, the server will respond with a SYN/ACK flag and the client will respond to that with an ACK flag (SYN stands for synchronise, ACK for acknowledge).

To provide reliable communication during a TCP connection, sequence numbers are used in conjunction with acknowledgements, so each side of the TCP connection knows that the transfer is taking place successfully. When a client is sending data to a server, each packet will be sent, and when the server receives a packet it will send an acknowledgement containing the sequence number of the packet it received; this means the client will know which packets the server has received.

TCP uses the sliding window algorithm to enable it to send multiple packets without requiring an immediate acknowledgement for each, and to allow the receiver to accept out-of-order packets without causing too much disruption. Sliding window is based on the principle of a buffer known as the window. The window slides along when an acknowledgement is received for the first packet in the window. When the window fills up on the receiving side, the receiver will not be able to accept any more data and hence will not acknowledge incoming packets beyond the scope of its window. The sliding window algorithm means that the sender will not send more data than the receiver can accept; as the sender will not see acknowledgements for dropped packets, it will retransmit until it does receive an acknowledgement. TCP uses the acknowledgements for the data it sends to gauge the network conditions between the sender and receiver, and RFC 2001 [13] documents some congestion avoidance algorithms which have been developed to enable TCP to deal with network congestion more intelligently.

Terminating a TCP connection uses another handshake, known as a four-way handshake: the client will send an ACK/FIN (FIN stands for finish) to the server, and the server will send an ACK back. The server will then repeat the process, sending an ACK/FIN to the client, and the client will send an ACK back. This process seems excessive but is required because a TCP connection is a two-way connection; both sides need to acknowledge the closing down of the connection in both directions.
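Because the operating system's TCP implementation performs the handshaking, sequencing, acknowledgement and teardown described above, an application sees only a byte-stream. The hedged userspace sketch below (the address 10.0.0.1 and port 8000 are placeholders, not values from the project) shows how little of that machinery is visible to the programmer: connect() triggers the three-way handshake and close() the four-way teardown.

    /* Sketch of a minimal TCP client: handshaking, sequencing and
     * retransmission all happen inside the kernel's TCP implementation.
     * The address and port are placeholder values. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        struct sockaddr_in srv;
        const char msg[] = "hello over a reliable byte-stream\n";
        int fd = socket(AF_INET, SOCK_STREAM, 0);     /* TCP socket */

        memset(&srv, 0, sizeof(srv));
        srv.sin_family = AF_INET;
        srv.sin_port   = htons(8000);
        inet_aton("10.0.0.1", &srv.sin_addr);

        /* connect() performs the SYN, SYN/ACK, ACK exchange. */
        if (connect(fd, (struct sockaddr *)&srv, sizeof(srv)) < 0) {
            perror("connect");
            return 1;
        }
        write(fd, msg, sizeof(msg) - 1);  /* segmented, acknowledged, resent as needed */
        close(fd);                        /* FIN/ACK exchange in both directions */
        return 0;
    }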

2.1.4 UDP

UDP, or User Datagram Protocol [14], is another transport-layer protocol which is widely used as a simple gateway between the application layer and the network layer. UDP provides only very basic extensions over the IP level: it gives data checksumming and multiplexing, by binding to ports like TCP. While UDP can seem very basic and archaic in comparison to TCP, it still provides a very useful function. UDP does not guarantee reliable delivery or connection tracking, and this is its strong point. In many situations the overhead of guaranteeing a reliable connection can cause more problems than benefits. Many real-time applications which require vast quantities of traffic flow would suffer more from resending packets after problems in transmission. As has been discussed, TCP guarantees ordered delivery: if a single packet is lost it needs to be retransmitted, and TCP waits to receive the packet in order. The time spent waiting for the single lost packet would cause an unacceptable delay for time critical applications, where the application can cope without one packet and recover straight away.

2.1.5 ICMP

ICMP, or the Internet Control Message Protocol [15], is actually an extension of IP and hence is said to be a network layer protocol. It uses the basics of IP for addressing and can often be mistaken for a higher level protocol, but it is required to be implemented as part of the Internet Protocol. IP does not guarantee communication in any form, as this project intends to demonstrate; ICMP is hence required to allow the diagnostic and maintenance tasks which are needed when dealing with IP packets. ICMP is used for error messages and can be thought of as a maintenance system; it is used to report back errors which can occur in transmission, such as when a router has run out of buffer space or a host is unreachable.

ICMP is heavily used by network diagnostic tools such as ping and traceroute. Ping is used to see if nodes on a network are available, or up. Ping sends what is known as an ICMP Echo Request, a message asking a node to send a reply known as an Echo Reply; the sender of the Echo Request can then determine that a node is reachable by receiving a successful reply. Traceroute works by exploiting the Time To Live (TTL) header of an IP packet. Time To Live is a field which stops a packet from traversing a network forever; it is a number decremented at each router the packet passes through. When the Time To Live value reaches 0, a router will drop the packet and send an ICMP Time Exceeded message back to the sender. Traceroute works out the route a packet will take by deliberately using small TTL values: when a router sees the TTL reach 0 it sends a Time Exceeded message back, and traceroute takes the ICMP response and can see where the message has come from. By increasing the TTL value by one and retransmitting, the packet is passed one hop further before a Time Exceeded message comes back, and traceroute builds up the path of a packet by continually increasing the TTL value until the destination is reached.
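As a hedged illustration of the traceroute idea just described (not code from the project), the sketch below sends UDP probes with an increasing IP TTL so that each router along the path reveals itself with an ICMP Time Exceeded reply. The destination and port are placeholder values, and the raw socket needed to actually collect the ICMP replies is omitted.

    /* Sketch of the traceroute mechanism: probes with increasing TTL.
     * Destination address and port are placeholders; listening for the
     * ICMP Time Exceeded replies (a raw socket) is not shown. */
    #include <string.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    void probe_path(const char *dest_ip)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);   /* UDP probe packets */
        struct sockaddr_in dst;
        int ttl;

        memset(&dst, 0, sizeof(dst));
        dst.sin_family = AF_INET;
        dst.sin_port   = htons(33434);             /* traditional traceroute port */
        inet_aton(dest_ip, &dst.sin_addr);

        for (ttl = 1; ttl <= 30; ttl++) {
            /* Each probe expires one hop further along the path. */
            setsockopt(fd, IPPROTO_IP, IP_TTL, &ttl, sizeof(ttl));
            sendto(fd, "probe", 5, 0, (struct sockaddr *)&dst, sizeof(dst));
            /* The router whose decrement takes the TTL to 0 sends back
             * an ICMP Time Exceeded message, identifying that hop. */
        }
        close(fd);
    }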

2.2 Faulty Environments

2.2.1 CLEO

CLEO (Cumbria & Lancashire Education Online) [20] is a regional broadband consortium which provides network connectivity to a variety of different educational institutions throughout Cumbria and Lancashire via a variety of novel techniques.


The CLEO network covers an area of nearly 10,000 km², connecting over 1200 different sites using a variety of different networking technologies [21]. The backbone network is comprised of traditional optical fibres and microwave links, which tend to be fairly reliable. Sites on the network are then connected to the backbone using a variety of different technologies for the last mile between the site and the backbone, to work around infrastructure problems. The last link between the backbone and the actual site is formed using unlicensed 2.4 GHz microwave links as well as telecoms services such as DSL over traditional copper networks. CLEO faces many challenges in providing a widespread reliable network: with such a large number of hosts, the failure rate of equipment and infrastructure means that there is almost always a faulty node on the network which needs dealing with, be that a wireless link which has been blown off its line of sight by harsh weather conditions, a malfunctioning piece of hardware, or a telecoms link provided by an external organisation which is causing problems.

Figure 1: RTT of a Core CLEO Node

A complex network monitoring system has been set up to enable the networking teams to identify problems on the network when they occur. Figure 1 shows the round trip time (the time it takes for a packet to get to and from a node) to a core node within the CLEO network. Figure 1 shows the normal function of a core node: occasional blips are seen in the round trip time throughout the day, but the maximum value of the RTT stays at a comparatively small 6 ms; these are normal variations in the network. In contrast to the normal behaviour of the core node, figure 2 shows the RTT of a node which is connected to the core node seen in figure 1 via a DSL link. The RTT is fluctuating significantly and is of very poor quality (up to 170 ms RTT), totally losing all connectivity at some stages of the day. Comparing figure 1 with figure 2 would allow the networking team to quickly establish that the fault with the connectivity of the DSL node does not lie in problems further up the network. After diagnosing where the problem lies with this node, the networking team will then investigate the problem to find out what needs to be done to restore normal connectivity; in the case of this particular node, a fault was reported to BT as the provider of the poorly functioning DSL line.


Figure 2: RTT of Misbehaving DSL Connected CLEO Node

2.3 Related Programs

There are a number of previous implementations which allow network testing under undesirable network conditions. These implementations come in a number of forms for different areas of testing. Tools exist to allow specific network traffic patterns to be generated to cause congestion and adverse effects on actual network equipment, rather than a simulation of what would happen.

2.3.1 Iperf

Iperf [22] is one example of a tool which can be used to test the limits of a network's capacity by trying to force traffic down the network between hosts. Iperf can be used to test the throughput and bandwidth which can be put through a network. Iperf also provides the facility to alter parameters of TCP and UDP, such as the TCP window size and time to live; it can then measure the effect of these changes on the connection, and can therefore be used to tune connections between two points on a network. With a UDP connection Iperf will measure jitter and lost UDP datagrams as well as throughput, so it could usefully be run across the controllable faulty router and used to measure the effects of the problem environments which this project intends to introduce into a network simulation. Iperf is a powerful tool: as well as simple IPv4 TCP/UDP point-to-point connections, it can also handle IPv6, multicast and multiple connections to attempt to simulate more useful real world situations. The performance of video applications is highly dependent on good network conditions, and multicast support for video applications is a development aimed at increasing the effective streaming of video on a network.

Testing multicast video against adverse conditions is very useful in order to improve the stability of streaming video codecs. Iperf is a software solution to a task which some specialist companies also address in the form of dedicated hardware and software products.

2.3.2 LANforge

An example of an application where the focus is on testing existing equipment (rather than emulating behaviour) is LANforge [23], a wide area network traffic simulator which allows the injection of certain conditions into the WAN, such as dropped, duplicated and reordered packets. LANforge can generate all kinds of packets for use over many different transport protocols, again being used to test equipment to its very limits. LANforge is a sophisticated package which is sold with dedicated hardware to accomplish enterprise-level network performance testing. Companies such as Agilent [24] also produce expensive hardware dedicated to the task of traffic generation and analysis.

2.4 Emulation Environments

There are also a number of tools available to emulate faulty network conditions in a similar way to the proposed system: a device which can act like a faulty link, or a link with inherent unreliability such as a (particularly lossy) radio link.

2.4.1 Dummynet

Dummynet [25, 26] is a tool for FreeBSD which can perform a number of tasks in simulating a faulty network link. With Dummynet it is possible to cause delays in packet flow and packet loss; it can also be used to throttle bandwidth, and has been used in the past to simulate bandwidth limitations. Dummynet can be used to simulate multi-path effects, an inherent problem with radio signals reaching a destination via different paths (which can cause packets to be received at different times). Dummynet can be implemented using a standard PC with two network interface cards without much complexity. It is configurable, so that it can be precisely defined which packets are affected by delay. However, Dummynet only offers random packet loss, without any finer control, and it has little support for packet reordering.


2.4.2 ONE - Ohio Network Emulator

ONE [27, 28] is a tool which was created to emulate an entire WAN on a single computer. It can emulate constraints such as bandwidth limitations and queue delays, and also provides the facility to simulate the delays introduced by satellite communication at different stages of a satellite's orbit (NASA are involved with the project). ONE runs on Sun Solaris and can be configured with different queueing/speed characteristics; however, rules are applied to all traffic coming into a given interface, and it cannot selectively carry out different tasks for different sorts of traffic.

2.4.3 Honeyd

Honeyd [29] takes a completely different approach to network emulation, actually creating a large network of virtual hosts; it emulates different operating systems and the services which run on different hosts, such as web and mail servers. Honeyd can be used to test the effects of multiple hosts running different operating systems on a large network. Honeyd can also add characteristics to the network paths between virtual hosts, providing latency, loss and bandwidth restriction. Honeyd can be integrated with actual network links to create very sophisticated network emulation environments, with each individual part of a virtual network defined. Unfortunately, Honeyd does not provide support for reordering of packets within a network path.

2.4.4 NISTNet

NISTNet [30, 31] is a kernel module for Linux and a substantial tool for emulating a faulty network link; it can be used to delay, drop, duplicate, limit and reorder packets based on source and destination, protocol, or a combination of these. NISTNet runs on a cheap Linux-based router, so it can easily be deployed on inexpensive PC hardware. NISTNet provides a command-line interface to control how it affects packet flow, and it also has a GUI which can control the behaviour. NISTNet cannot alter packets in random distributions; while its behaviour is controllable, the situation may occur where a very unreliable (and unpredictable) link needs to be simulated.


2.5 GNU/Linux

2.5.1 What is Linux?

Linux is an operating system originally developed by Linus Torvalds [17] in the early 1990s as a hobby project whilst at the University of Helsinki in Finland. Linux was released under the GNU GPL licence, which allows unlimited distribution of Linux but requires that all copies are released under the same licence and, importantly, are accompanied by the source code. The GNU GPL licence has allowed volunteers around the world to contribute to Linux, improving it all the time.

Unlike the way many operating systems are described, the name Linux refers strictly to the Linux kernel, the core of the operating system. The kernel is the program which does the basics of an operating system, dealing with the computer's hardware and managing resources and networking. The kernel does not provide an interface to these functions directly for a user; programs run on top of the kernel to provide the operating system environment that is known to users. Originally Linux was designed to run only on x86-based hardware and was not intended to be portable; however, through its development by many volunteers, Linux has been ported to many platforms, from standard PCs, to ARM-processor based PDAs, to games consoles.

Linux was essentially designed to be a UNIX clone, following what is known as the UNIX philosophy. The UNIX philosophy is an approach to operating system design which favours the creation of many small contained programs which work together to create a powerful and flexible system. The small command line utilities can be chained together to easily create scripts which accomplish complex tasks without the need for the wheel to be constantly reinvented. This modular structure is seen as very beneficial and key to the success of UNIX-like systems.

Often the term GNU/Linux will be used to describe Linux systems; this term derives from the belief of many that the Linux kernel could not exist as it is today without the GNU programs. The GNU programs which live above the Linux kernel are seen as essential in the creation of a usable Linux system, and so GNU is prefixed to acknowledge the significant part it plays in the system users know as Linux. Richard Stallman explains the GNU contribution to Linux in more detail in Linux and the GNU Project [33].

Linux has matured into an operating system used widely around the world, particularly in the servers which power the web and back end operations of many corporations. Large companies in the computer industry are supporting Linux today, with companies such as IBM [18] and Hewlett Packard [19] moving to support Linux, and it is widely known that major parts of the infrastructure of the Internet are built on Linux, with companies such as Google [37] utilising Linux to serve their pages.


Today there are a number of different organisations creating Linux distributions; these can be community supported projects such as Gentoo [35] or Debian [36], but commercial companies such as Red Hat [32] or SuSE [34] (owned by Novell) also produce distributions. Each distribution is basically a package of software with the Linux kernel at its core, and each distribution has a different flavour in terms of the features it provides. Some distributions focus on being easy to use, others focus primarily on security, and they often provide support and facilities to aid their aims.

2.5.2 Why choose Linux?

Linux is an incredible success story for the open source community, having grown from the hobby project of a university undergraduate to an operating system used widely around the world by millions (although often indirectly). It has considerable advantages over other platforms for a project such as this, due to its open nature, its support community and its availability on many different hardware platforms. A Linux router can be built from a cheap low-end machine with standard, mass produced parts. Doing low-level network programming on closed-source platforms would probably only be possible if the vendor of the operating system allowed deep integration into its low-level parts, and in most cases this functionality is not available. As well as providing the functionality, Linux allows all areas of the code which powers the operating system to be examined. Access to this code enables someone to learn how the operating system works simply by seeing how all its parts actually come together; if a developer does not understand how a function operates, he or she can look and see how it was implemented, and even if the function has been poorly documented they can at least try to understand the functionality from its actual implementation. The sheer number of developers who have attempted to modify and expand Linux for their own tasks brings considerable gains in the sharing of information through websites and discussion channels such as online forums and mailing lists; problems encountered and fixed by others are often a helpful way to get past common barriers.


3 Linux Architecture

3.1 The Linux Kernel and Modules

The Linux kernel is a monolithic kernel; this means that one single process runs the computer, and this one process takes care of all of the operating system functions. The monolithic model was adopted by most early operating systems, and was thought to be outdated with the advent of more modular microkernel designs; it is believed by some that small dedicated modules can increase security and be focused on their specific task. While the Linux kernel is monolithic, the issue is slightly clouded by the fact that there is support for loadable modules which can extend the functionality of the kernel, and this project aims to produce a module in this manner. The dynamic loading and removal of modules allows for hardware drivers and for extensions to the functionality of core tasks, such as the network routing tasks which this project will attempt to deal with. Modules are loaded into the main kernel space and interact with kernel functions without the kernel needing to be constantly rebuilt.

Kernel modules are loaded into the kernel using the tool insmod (INStall MODule); this tool will link the module into the running kernel, allocate the kernel memory resources required to hold the module, copy the module into kernel memory and then call the module's initialisation function. To discover which modules are loaded into the kernel, lsmod (LiSt MODules) can be used. Removing modules is carried out by the similarly named rmmod (ReMove MODule), which will ensure the kernel module is not in use; the cleanup function of the module is called (to allow the module to release the resources it has allocated), the module is then marked for removal and unlinked from the kernel, and finally the memory containing the module is released. As well as manual loading of kernel modules, tools exist for the kernel to dynamically load modules which are of use to it (such as device drivers for specific I/O hardware which is in use). The management of dynamic modules typically uses the standard functions discussed above, but without direct user interaction.
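As a concrete illustration of the initialisation and cleanup functions mentioned above, the fragment below is a minimal module skeleton written against the 2.6-era module API in use at the time; it is illustrative only and is not the project's module. insmod causes the init function to run, and rmmod the exit function.

    /* Minimal kernel module skeleton (illustrative only, not the
     * project's module), using the 2.6-era module API. */
    #include <linux/module.h>
    #include <linux/kernel.h>
    #include <linux/init.h>

    static int __init skeleton_init(void)
    {
        printk(KERN_INFO "skeleton: loaded\n");   /* runs when insmod links us in */
        return 0;                                 /* non-zero would abort loading */
    }

    static void __exit skeleton_exit(void)
    {
        printk(KERN_INFO "skeleton: unloaded\n"); /* runs when rmmod removes us */
    }

    module_init(skeleton_init);
    module_exit(skeleton_exit);
    MODULE_LICENSE("GPL");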

3.2 Kernel Space vs. Userspace

Coding in the kernel is dangerous by its very nature. Memory is shared with all of the critical functions which handle the normal operation of a computer: the input/output operations writing to files, the handling of key presses and in fact all operations between hardware and software, plus the management of software. If memory is allocated incorrectly in kernel space then it could dangerously overwrite another critical function of the kernel (the memory allocation function, for example) and cause the kernel to crash.

A kernel panic is where the kernel detects that something has gone seriously wrong and cannot continue to function; when a kernel panic occurs, primitive debugging information is printed and all operations are then stopped. After a kernel panic a machine will need to be rebooted (and hopefully will not have the offending part of the kernel which caused the panic loaded on reboot), as it is effectively dead and cannot do anything. It is not just memory allocation which can cause problems in kernel code: if a deadlock occurs and cannot be escaped then the computer will be made unusable, and an infinite loop may well stop the computer from carrying out any other operations at all.

However, most programs which run on a computer and are familiar to users do not run in the kernel; they are said to run in userspace. Userspace is a far more attractive environment to run programs in: the kernel protects memory allocation and will not allow two different processes to use the same memory. If a program in userspace has an error it will not cause the whole computer to crash or stop crucial parts of the operating system from functioning. In userspace, memory is virtual and can be swapped from RAM to disk space if it is not being used.

If userspace provides a relatively safe environment, then why would anyone ever want to code at the kernel level? Userspace provides a barrier from the system and is restricted by a lower CPU privilege level (kernel code runs at a higher privilege than userspace code) and by memory limitations on what can be accessed; it is also basically impossible for userspace processes to have direct access to hardware, as this is again something which is handled by the kernel. The reason that there is a differentiation between kernel space and userspace is for instances where low-level operations need to be completed, such as dealing with interrupts and interfacing with hardware, which provide the platform on which userspace processes can run. The kernel is focused on providing a stable and powerful userspace platform for processes to run on.
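The memory-allocation dangers described above are why kernel allocations are always checked. The fragment below is a small hedged illustration, not taken from the project code, of allocating and freeing kernel memory with kmalloc() and kfree(); the struct example_rule shown is a hypothetical rule record invented purely for the example.

    /* Illustrative only: careful allocation in kernel space.  An unchecked
     * failure here could mean dereferencing a NULL pointer inside the
     * kernel, bringing the whole machine down rather than one process. */
    #include <linux/types.h>
    #include <linux/string.h>
    #include <linux/slab.h>

    struct example_rule {          /* hypothetical rule record */
        u32 saddr;                 /* source address to match   */
        u32 daddr;                 /* destination address       */
        int drop_prob;             /* probability of dropping   */
    };

    static struct example_rule *make_rule(void)
    {
        struct example_rule *r;

        /* GFP_ATOMIC: safe in contexts that must not sleep (such as
         * packet handling); GFP_KERNEL may sleep and is used elsewhere. */
        r = kmalloc(sizeof(*r), GFP_ATOMIC);
        if (!r)
            return NULL;           /* caller must cope with allocation failure */

        memset(r, 0, sizeof(*r));
        return r;
    }

    static void free_rule(struct example_rule *r)
    {
        kfree(r);                  /* free exactly once; kfree(NULL) is safe */
    }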

3.3 Networking Overview

This project is focused on altering the networking behaviour of Linux. There are helpful tools and functions present in Linux which make the task of altering the behaviour of the Linux networking code simpler, but it is important to understand where this fits in with the kernel as a whole; this brief explanation of how a packet is handled is summarised from Harald Welte's explanation [38].


3.3.1 Packet Transition Through the Linux Kernel

When a network card receives a frame which has been determined to be addressed to that host, the driver deals with the frame, raising a hardware interrupt which allows the driver to take the packet off the network card and allocate it into RAM, where it is queued for further processing by the system. A software interrupt is then raised to deal with this packet. The difference between a hardware interrupt and a software interrupt is one of priority: hardware interrupts are raised directly by hardware, and fast, efficient routines are used to deal with them, pushing any complex work to software interrupts. Hardware interrupts are never stopped and do not context switch; an interrupt which comes in will either be queued or dropped. This differentiation means that hardware can be dealt with at speed and rarely be interrupted in its processing, while software-heavy tasks take place at a lower priority. Software interrupts can also be processed by multiple CPUs, further reducing the load of processing a software interrupt.

Once a software interrupt has been received calling the packet handling functions, the network layer protocol of the packet is determined and the packet is sent off to an appropriate function to be handled. An IPv4 packet will be passed to the appropriate packet handler, which will perform sanity tests, ensuring the packet is for the correct host, is of the correct length and that the checksum adds up correctly. If the sanity tests fail, the packet is dropped. If not, the packet continues processing, with now-useless data-link layer information removed from the headers. After the initial packet processing has been completed, the packet is passed to the netfilter framework at the first hook; the netfilter hooks are discussed in more detail in the next section. After completing transition of the first netfilter hook, the packet is passed to the first packet routing function, which determines where to process the packet next: this can mean delivering it to the local host, where it will be queued up to be dealt with by the kernel and possibly by local applications, or the packet may be forwarded on to another host, where further sanity checks will be performed and the packet processed to be sent out onto the network again; alternatively, further multicast routing or error handling functions may be called.

3.3.2 sk_buff

The Linux kernel stores packets throughout their transition through the kernel in a structure called sk_buff. struct sk_buff is defined in /usr/src/linux/include/linux/skbuff.h, along with some functions to handle memory allocation of sk_buffs [39]. The sk_buff is an incredibly complex structure and is mostly filled in by the driver for the network card which receives the packet. The sk_buff provides relatively simple access to the network headers.

The sk_buff for an IP packet will contain a struct iphdr (defined in /usr/src/linux/include/linux/ip.h) which provides access to the source and destination addresses. The sk_buff will also contain protocol header structs which can be used to examine protocol specific headers, such as struct tcphdr, udphdr and icmphdr (defined in tcp.h, udp.h and icmp.h in the same directory), which are of interest to this project. As well as header information, the sk_buff structure contains many fields relevant only to the kernel's handling of the packet, as well as fields such as nfmark, which allows the netfilter framework to add a mark to the packet; this can be used for purposes such as distinguishing packets from each other without expensive header examination operations. A function may first examine an incoming packet, decide what it wishes to do with it and then mark it using nfmark; another function examining packets then only has to look for the appropriate nfmark and need not perform lookups on each header again.
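The hedged sketch below (not the project's code) shows how the headers just mentioned are reached from an sk_buff inside the kernel's packet handling path, for instance in a netfilter hook as introduced in the next section. The field layout assumed is that of the 2.6-era kernels this dissertation targets; later kernels provide ip_hdr() and tcp_hdr() helpers instead.

    /* Sketch: pulling the IP and TCP headers out of an sk_buff, assuming
     * the 2.6-era field layout (skb->nh.iph); illustrative only. */
    #include <linux/skbuff.h>
    #include <linux/ip.h>
    #include <linux/tcp.h>
    #include <linux/in.h>

    static int is_tcp_to_port(struct sk_buff *skb, unsigned short port)
    {
        struct iphdr *iph = skb->nh.iph;          /* network layer header */
        struct tcphdr *tcph;

        if (iph->protocol != IPPROTO_TCP)
            return 0;

        /* The transport header starts iph->ihl 32-bit words into the packet. */
        tcph = (struct tcphdr *)((__u32 *)iph + iph->ihl);

        return ntohs(tcph->dest) == port;         /* match on destination port */
    }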

3.4 Netfilter

3.4.1 What is Netfilter?

Netfilter [40] is a fundamental part of the Linux kernel which is used in processing packets as they traverse the kernel. The Netfilter framework is based on hooks: points at which the kernel passes a packet to Netfilter for processing before the packet carries on with the next part of its journey through the protocol stack. Netfilter allows packets to be processed and filtered at each of these hooks. A part of the kernel can register a function on any of the hooks; when a packet passes through the hook, Netfilter passes the packet to the registered function. The function can then instruct Netfilter to do a variety of things with the packet, such as drop it or forget about it, and it can also alter the packet at this point. The netfilter hacking HOWTO [41] provides a practical guide to using the Netfilter framework.

3.4.2 Netfilter Hook Points

Netfilter fits into the Linux kernel at well-chosen points within a packet's transition, and provides a relatively simple interface through which operations on packets can be carried out without a complete (and error-prone) reworking of the kernel's networking code. The hooks are defined in the kernel source

at /usr/src/linux/include/linux/netfilter_ipv4.h. A brief explanation of each hook and where it fits into the journey of a packet follows:

1. NF_IP_PRE_ROUTING - The first hook a function can register with. It is called directly after the kernel's sanity checks but before any routing decision is made; every valid packet entering this machine passes this hook.

2. NF_IP_LOCAL_IN - If the packet is destined for the current machine (and is not to be forwarded on to another host), it passes through this hook just before being handed over for further processing and passed to userspace.

3. NF_IP_FORWARD - Packets pass through this hook if they are not destined for the current host and are being forwarded on to another host. After this hook the packet's headers are rewritten so that it can be sent on to the next hop.

4. NF_IP_LOCAL_OUT - Packets generated in userspace and about to be sent out onto the network pass through this hook just after they are received from userspace.

5. NF_IP_POST_ROUTING - All packets leaving this host, whatever their origin, pass through this hook before being sent out; as the name indicates, this is after all routing decisions have been made by the kernel.

Figure 3: Netfilter Hook Structure. Packets arriving from the network enter at NF_IP_PRE_ROUTING and are routed either to NF_IP_LOCAL_IN (and up to userspace) or to NF_IP_FORWARD; locally generated packets enter at NF_IP_LOCAL_OUT; all outgoing packets leave through NF_IP_POST_ROUTING.

It should be noted where routing takes place in a packet's traversal of the hooks. At NF_IP_PRE_ROUTING no routing has taken place at all; at this point the kernel has only checked that the packet is actually supposed to be picked up (it is free of checksum errors and has a valid destination). After NF_IP_PRE_ROUTING the kernel makes its first routing decision: whether the packet should be delivered to the local host (and so pass through NF_IP_LOCAL_IN) or forwarded on to another host (and so pass through NF_IP_FORWARD and later NF_IP_POST_ROUTING). If a packet is destined for the current host, it is a relatively simple matter of passing it up to userspace (unless it is something which needs to be dealt with inside the kernel). If a packet is destined for another machine, it is sent on to be routed further after NF_IP_FORWARD: the kernel routing tables are consulted and the packet's headers are rewritten so that it can carry on its journey to its final destination.

3.4.3 Netfilter Return Codes

When a function hooks onto a Netfilter hook it can pass Netfilter a number of different return codes to instruct it how to handle the packet once the function has finished with it. There are five return codes, defined in /usr/src/linux/include/linux/netfilter.h; their functionality is outlined below:

NF_DROP - The packet should be dropped as soon as possible. Netfilter discards the packet, freeing its resources and removing all references to it, and it does not continue to traverse the stack.

NF_ACCEPT - Keep the packet and allow it to flow through the network stack as normal. Note that the function may still have altered the packet in some way, so this does not automatically mean that nothing has changed.

NF_STOLEN - When a packet is stolen, Netfilter drops its references to the packet and traversal through the protocol stack does not continue, but the packet is not freed; its resources remain allocated, and it is up to the function which returned NF_STOLEN to release them.

NF_QUEUE - Queue the packet for less time-critical processing, often in userspace. NF_QUEUE allows the packet to be conceptually put on hold for a while, letting other packets come in and continue traversal as normal. After a packet has been queued it can be dropped, or pushed back into the Netfilter architecture.

NF_REPEAT - Call the hook function again. This should be used with care to avoid a packet constantly looping around the same function.

3.4.4 Using Netfilter Hooks

In order to access any of the Netfilter hooks, a function has to tell Netfilter that it wishes to take over processing of packets passing through that hook. Registration is done with the functions nf_register_hook() and nf_unregister_hook(), each of which takes a pointer to an nf_hook_ops structure specifying the point at which the function should take over processing. The nf_hook_ops structure is filled with a reference to the actual function which will process the packet; this must match the prototype specified by nf_hookfn and return one of the Netfilter return codes examined in 3.4.3. As well as the function reference, the nf_hook_ops structure contains the hook (from those described in Section 3.4.2) and the protocol family the function wishes to be registered with.

When NF_QUEUE is returned from a hook function, the packet is sent to be queued; this takes the packet out of the normal flow through the hooks and passes it instead to a queue handler. In the queued state, processing of the packet can block, as it is no longer in the context of the software interrupt raised when the packet was received. Queue handlers are managed much like hook functions, using nf_register_queue_handler() and nf_unregister_queue_handler() to tell Netfilter to pass queued packets to a given handler. Once the queue handler has done whatever it wishes, it can re-inject the packet into the Netfilter architecture using nf_reinject(), and the packet then carries on through the network stack from where it left off. Alternatively, nf_reinject() accepts the Netfilter verdicts described in Section 3.4.3, so a reinjected packet can be treated as though it had just come out of a hook function at that point.
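As an illustration of this registration pattern, the sketch below registers a do-nothing function on NF_IP_PRE_ROUTING. It is not code from this project: the names are invented, and the prototypes follow the 2.4/2.6-era kernels discussed in this report, so they differ on later kernels.

#include <linux/module.h>
#include <linux/netfilter.h>
#include <linux/netfilter_ipv4.h>
#include <linux/skbuff.h>

/* Hook function: called for every packet reaching NF_IP_PRE_ROUTING. */
static unsigned int example_hook(unsigned int hooknum,
                                 struct sk_buff **pskb,
                                 const struct net_device *in,
                                 const struct net_device *out,
                                 int (*okfn)(struct sk_buff *))
{
        /* A real hook would examine (*pskb) here and possibly alter it. */
        return NF_ACCEPT;   /* let the packet continue as normal */
}

static struct nf_hook_ops example_ops = {
        .hook     = example_hook,
        .pf       = PF_INET,               /* IPv4 packets */
        .hooknum  = NF_IP_PRE_ROUTING,
        .priority = NF_IP_PRI_FIRST,
};

static int __init example_init(void)
{
        return nf_register_hook(&example_ops);
}

static void __exit example_exit(void)
{
        nf_unregister_hook(&example_ops);
}

module_init(example_init);
module_exit(example_exit);
MODULE_LICENSE("GPL");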


4 Design

The design chapter of this report gives a general overview of how the system would work regardless of the platform upon which it is built. Due to the nature of the project the design is quite general: many details cannot be finalised without implementation-specific knowledge of the target platform. The general design should define the basic elements required to complete the task.

4.1 Overall Structure

Figure 4: Overall Design. Packets received pass through rule selection and then, depending on the matched rule, through the dropping, delaying or reordering subsystems before being sent out; a separate rule alteration function maintains the set of rules.

The controllable faulty router needs to facilitate the alteration of packet flow: it should be possible for packets to be dropped, reordered and delayed. The amount of traffic affected by this behaviour should be controllable, both in terms of which traffic is affected and how much it is affected. This is a non-trivial task: as packets flow through a router, decisions need to be made about how to handle each specific packet. Figure 4 shows how the different parts of the system fit together. To enable the different functions of the controllable faulty router, a number of elements are used in harmony to decide how to proceed with a packet. As a packet comes into the system its headers are compared against a set of rules (the Rule Selection stage in Figure 4); if a rule is matched, the packet is passed along to one of a few different subsystems, each of which emulates a specific condition on the packet passing through the system (the Dropping, Delaying and Reordering parts of Figure 4).

4.2 Storing Rules

4.2.1 Packet Matching

Source IP      Destination IP   Protocol
*              148.88.8.1       ICMP
Table 1: Matching Packets By Destination and Protocol

Source IP      Destination IP   Protocol
148.88.8.1     148.88.8.8       ICMP
Table 2: Matching Packets By All Headers

Source IP      Destination IP   Protocol
*              *                UDP
Table 3: Matching All Packets of a Single Protocol

A fundamental part of the router implementation is that it should allow controllable treatment of packets: it should be possible to specify packets by source IP, destination IP and protocol (TCP, UDP or ICMP only). Rules should be able to match on all of these parameters, or on a subset of one or two of them, to allow flexibility in the rule structure. Some scenarios are defined in the examples shown in Tables 1 to 3, where * is used as a wildcard which matches everything. The first example, Table 1, affects all ICMP packets going to the destination 148.88.8.1, regardless of where the traffic comes from. Table 2 is a very specific example affecting ICMP traffic going from the source 148.88.8.1 to the destination 148.88.8.8. Table 3 shows a scenario which affects all UDP traffic passing through the faulty router. All three of these rules could be configured to do different things to the traffic passing through, which could be used to emulate, for instance, different delays between different parts of a network, or a specific fault between two destinations.

4.2.2 Fault Selection

Rather than a binary on/off style of faulty behaviour, which is not the natural behaviour of most faulty conditions, a probability system will be used. For each faulty behavioural characteristic, a probability will be specified as a chance out of 100 that a packet will be affected by it. A very lossy environment could therefore be defined with a packet dropping rule specified with a probability of 90, meaning that on average 90% of packets would be lost. The system still allows a binary on/off style of rule by specifying 100 or 0 as the probability. The probability model also allows mixed behaviour: by specifying multiple non-zero probabilities there is a chance that any one of the different faulty behaviours is applied to a packet the router receives, giving a mixed variety of characteristics in the environment. A particularly harsh environment may have large delays and losses concurrently; with the different probabilities set appropriately, the router can demonstrate this kind of behaviour.

4.3 Altering Packet Flow

Once a packet has been matched by a rule it is passed to an appropriate handler, along with parameters specifying anything that handler might need to know about its operation on that specific packet. The different handlers always perform their function on a packet they are given; whether a packet is selected in the first place is decided earlier, in the rule checking.

4.3.1 Dropping

When a packet is dropped it is removed from the packet flow immediately and does not continue out of the system.

4.3.2 Delaying

When packets are specified to be delayed, they are taken in by the handler and stored for a specific amount of time before being pushed back into the packet stream. The time to delay a packet is defined on a per-packet basis. It is worth noting that by the very nature of delaying packets we may inherently reorder some of them: a packet may come in and be delayed while other packets which are not specified to be delayed carry on through the router, altering the ordering seen at the router.


As delayed packets are stored temporarily until the specified period of delay has passed, consideration needs to be given to the number of packets which can be delayed. Stored packets require memory, and if more packets come into the system than can be held in memory, problems will occur. This memory problem could manifest itself quite quickly if long delay periods are specified and a lot of data is transmitted (theoretically 12.5 MB/s can be transferred on a 100 Mb/s network card).

4.3.3 Reordering
Figure 5: Reordering Basics. Packets entering the reorder pool (for example 1, 2, 3) are held by the reorder function and released in an altered order (for example 3, 2, 1).

Reordering packets is a relatively complex problem: in order to reorder packets there needs to be a flow of packets whose order can be changed with respect to each other. If only one packet comes into a router it cannot be reordered until another packet comes in for it to swap places with. The fact that more packets must be received before any reordering can happen means that, inherently, a packet to be reordered must be delayed (while it waits for further packets to change order with). The reordering function of the router is therefore designed around a pool of packets to be reordered: as a packet comes in for reordering it is added to the pool and swapped around with the other packets being reordered. Once the order has been altered the packets are sent back out of the reordering function in their new order; this is shown in basic form in Figure 5.

There are a number of problems which need to be considered in reordering packets. As noted, there is the problem of having no incoming packets to swap with: when do we stop waiting for packets to be put into the reordering pool? The amount of reordering is also critical to how reordering performs: if a large quantity of reordering is to be performed, a long delay is required while a big enough pool of packets builds up. With big pools the packet flow becomes very bursty, as the router waits for a large pool of packets to arrive, holds back the release of any packets until the pool fills, and then sends the whole chain of packets out at once. With bigger pools there is also an increased risk of waiting for packets which will never be received. In order to keep the effects of the reordering function on packet flow acceptable, the size of the reorder pool will be configurable, as will a maximum delay. Specifying a maximum delay period alleviates the problem of waiting for packets which will never arrive: after the defined maximum delay, the pool of reordered packets is sent out whether or not the full quantity of reordering has been reached. This compromise between consistent reordering and ensuring that packet flow continues after a period prevents a black hole from being created (where packets never leave unless the packet flow is sufficient). As with delaying packets, memory is an issue: packets need to be stored while in the pool, so limits need to be placed on the size of the reordering pool to prevent more packets being stored than memory can sensibly accommodate.

4.4 Controlling Rules

The routing function of the project needs to run constantly, and there must be a facility to alter and add rules while the system is running; it would be impractical and unacceptably restrictive to have to stop and start the routing functionality in order to alter rules, add delays and so on. The project is designed to integrate the faulty aspects of the router seamlessly into the traditional routing functions, and the simulation of problems in different areas may be introduced at a particular point in time to see the effect this has on a scenario. If connectivity has to be lost to introduce a fault (by stopping and starting the system), any benefit from examining the effects of the change is lost, as characteristics which would not occur in the real-life scenario are being introduced. A separate part of the system, not directly tied to the functionality which alters packets, will be used to pass messages to the routing system so that rules can be controlled. The rule functions will also provide a way to alter the reordering limiting factors, namely the maximum delay period and the reorder size; this is shown as the rule alteration function in Figure 4.


5 Implementation

5.1 Implementation Environment

Faulty operation needs to be integrated with the core networking functionality of a router. Linux provides an open system with the extensible Netfilter framework, which makes it a very good platform on which to build the controllable faulty router. Netfilter provides a structure which a module can use to integrate into the routing functionality quite easily; the well-structured nature of the architecture hides a great deal of complexity and allows the delay, dropping and reordering functions to be created without having to work on parts of the networking code which do not directly impact the project.

While there are Netfilter tools available for handling packets from userspace, the nature of the controllably faulty router means these would not be appropriate here. As packets come into the router they need to be delayed, rearranged and reinjected at relatively high speed, and the router must examine each packet it receives; sending every packet to userspace to be examined would add unnecessary delay to all traffic passing through the system and make the router behave in a way which would not be present in a real-life scenario. Timing is also critical to the functionality of the router, and kernel-based timing mechanisms give the responsive execution of reinjection functions needed for the router to perform in a consistent and accurate manner. A kernel module will therefore be created to extend Netfilter and provide the faulty behaviour. The module will handle each packet received from the network, decide whether any rule specifies that the normal flow of that packet should be altered, and either alter the packet flow or allow it to continue as normal. As the module handles every single packet received by the Linux router, it is critical that it performs efficiently (which is, of course, why it is implemented as a kernel module). Ideally there should be a negligible difference between a packet which passes through the controllable faulty router without a fault specified for it and a packet passing through a router without the controllable faulty function at all.

The C programming language is used to create the kernel module. It is really the only choice, as the module needs to interact with the Linux kernel, which is written in C (and a little assembly). C is also considerably more efficient than many higher-level languages: while some languages have attractive abstractions, these translate into inefficiencies in the generated object code. In developing critical parts of the operating system the low-level nature of C is advantageous, as the programmer can be entirely sure of what is going on underneath.

5.2 Useful Kernel Functions

When creating a kernel module for Linux, many standard functions which are traditionally available to userspace C programs are not available. The traditional C library does not exist in the kernel, so predefined functions such as printf() cannot simply be used. In implementing the controllable faulty router, a number of kernel functions are used throughout the project to build the working system and to aid debugging.

5.2.1 printk()

printk() (declared in /usr/src/linux/include/linux/kernel.h) is generally used for debugging and reporting errors from kernel functions, and is the best way of getting messages back to the user. printk() sends messages to the console (the terminal connected to the computer); messages sent to the console are logged, and can be accessed using the dmesg tool as well as being stored in a system log file. printk() is very similar to printf() from the standard C library and takes parameters in the same fashion:

printk(KERN_INFO "printing integer x=%d", x);

Here KERN_INFO is one of the log-level constants defined in include/linux/kernel.h (others include KERN_ALERT and KERN_DEBUG) which specify the priority of the message and how it should be logged.

5.2.2 kmalloc() and kfree()

kmalloc() and kfree() (declared in /usr/src/linux/include/linux/slab.h) are used to allocate and free kernel memory; again they are similar to their traditional C library equivalents:

ptr = kmalloc(size, GFP_KERNEL);
kfree(ptr);

kmalloc() takes an additional parameter, the priority of the memory allocation. This can be GFP_KERNEL, which may sleep (and so cannot be used in interrupt context), or GFP_ATOMIC, which never sleeps (and so is used in interrupt context) but will simply fail if no memory is immediately available.

5.2.3 Double-linked list

The Linux kernel provides a doubly-linked list implementation (in /usr/src/linux/include/linux/list.h): a struct list_head can be embedded in a structure, and the functions list_add(), list_add_tail(), list_del() and INIT_LIST_HEAD() can then be used to perform the various operations on the list. Using a previously well-tested implementation of the linked list helps to reduce errors, as well as keeping the structure familiar from other parts of the kernel.
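As an illustration only (the structure and function names below are hypothetical, not the project's own), a list of rule-like records could be maintained as follows:

#include <linux/list.h>
#include <linux/types.h>
#include <linux/slab.h>

struct example_rule {
        struct list_head list;     /* links this rule into the rule list */
        u32 saddr, daddr;          /* source/destination IPv4 addresses */
        int protocol;
};

static LIST_HEAD(example_rules);   /* head of the (initially empty) list */

static void example_add_rule(u32 saddr, u32 daddr, int protocol)
{
        struct example_rule *r = kmalloc(sizeof(*r), GFP_KERNEL);

        if (!r)
                return;
        r->saddr = saddr;
        r->daddr = daddr;
        r->protocol = protocol;
        list_add_tail(&r->list, &example_rules);   /* append to the list */
}

static void example_free_rules(void)
{
        struct example_rule *r, *tmp;

        /* _safe variant because entries are deleted while iterating */
        list_for_each_entry_safe(r, tmp, &example_rules, list) {
                list_del(&r->list);
                kfree(r);
        }
}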

5.2.4 Kernel Timers

Kernel timers are an incredibly useful facility provided by the Linux kernel (in /usr/src/linux/include/linux/timer.h). A timer is set with a time in the future; when this time is reached the timer is woken and the function associated with it is run. Timers are essentially used to schedule the execution of a function in the future, and the simple mechanism they provide for basic scheduling is exceptionally useful. There is no limit to the number of timers which can be created. A timer is created with a struct timer_list; this structure contains pointers for it to be inserted into the linked list of timers, as well as an expiry value. The expiry value is a time in jiffies; a jiffy is 1/100th of a second on the kernel used here. The current time in jiffies can be read from the variable called (simply enough) jiffies, so if the expiry of a timer is set to jiffies + 100, the function registered to that timer will be executed in exactly one second. The timer_list structure also contains the timer function to be called when the timer expires, and the argument which should be passed to that function. A timer_list struct is initialised with init_timer() and can then be filled with appropriate values. Once filled, the add_timer() function adds the structure to an ordered linked list of timers. This list is checked approximately 100 times a second and the appropriate timer functions are run. If a timer needs to be removed from the list, del_timer() can be used (this happens automatically when a timer expires and runs normally).
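A minimal sketch of this timer interface follows (illustrative names, and the 2.4/2.6-era struct timer_list fields assumed throughout this chapter):

#include <linux/kernel.h>
#include <linux/timer.h>
#include <linux/sched.h>

static struct timer_list example_timer;

/* Runs in software-interrupt context roughly one second after add_timer(). */
static void example_timer_fn(unsigned long data)
{
        printk(KERN_INFO "timer fired, data=%lu\n", data);
}

static void example_start_timer(void)
{
        init_timer(&example_timer);
        example_timer.function = example_timer_fn;
        example_timer.data     = 42;               /* argument passed to the function */
        example_timer.expires  = jiffies + 100;    /* 100 jiffies = 1 s with HZ=100 */
        add_timer(&example_timer);
}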

5.2.5 Spin locks

Many things within the kernel can run concurrently, and interrupts add a whole extra layer of complexity, since it is not known when an interrupt will be generated. As the network handling code has to deal with interrupts, it needs to be safe from concurrent access in order to avoid race conditions. A spin lock is used to protect a shared variable and enforce mutual exclusion: if a variable is locked by one function and another function tries to access it, the second function spins in a loop until the function holding the lock releases it. The kernel provides a spin lock interface (in /usr/src/linux/include/asm/spinlock.h). The locking variable is of type spinlock_t; it can be initialised in an unlocked state with spin_lock_init(), and locked and unlocked with spin_lock() and spin_unlock(). In interrupt context the variants spin_lock_bh() and spin_unlock_bh() are used ("bh" stands for bottom halves); bottom halves are parts of code of which only one instance runs at a time, no two bottom halves running simultaneously, which is particularly relevant to kernel timers (see 5.2.4).
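For example (an illustrative fragment, not the project's rule-list code), a shared counter touched from both timer callbacks and ordinary module code could be protected like this:

#include <linux/spinlock.h>

static spinlock_t example_lock = SPIN_LOCK_UNLOCKED;   /* statically initialised, unlocked */
static int example_shared_counter;

/* Called from bottom-half context (for example a timer function). */
static void example_from_timer(void)
{
        spin_lock(&example_lock);
        example_shared_counter++;
        spin_unlock(&example_lock);
}

/* Called from normal (process) context, e.g. while handling a socket command. */
static void example_from_process_context(void)
{
        spin_lock_bh(&example_lock);        /* also blocks bottom halves on this CPU */
        example_shared_counter--;
        spin_unlock_bh(&example_lock);
}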

5.3 The Controllable Faulty Routing Module

5.3.1 Overview

Figure 6: Updated Netfilter Hook Structure. As Figure 3, but with the CFR_ROUTER module attached at the NF_IP_PRE_ROUTING hook.

As discussed in 3.4.2, the Netfilter hooks provide access to a packet at different points in its transition through the Linux kernel. This project is concerned with causing faulty behaviour for all incoming packets which have rules associated with them. Using the NF_IP_PRE_ROUTING hook, packets can be examined as they come into the kernel, checked against the rules, and have the appropriate rule function applied; this is shown in Figure 6. Once the hook function is registered with Netfilter, every packet received at NF_IP_PRE_ROUTING is passed to our hook function. The hook function examines each sk_buff (see Section 3.3.2) passed to it from Netfilter. Each time the hook function is called, the headers from the sk_buff are checked against each stored rule to see whether any match the packet which has been received. If a rule is matched, the fault selection is run against the rule to see whether any of its faults are to be applied to the incoming packet. If the packet is found to require dropping, the hook function returns NF_DROP and the packet is dropped immediately. If the packet is to be delayed or reordered, it is marked appropriately and NF_QUEUE is returned. If, after checking against the rules, no faulty behaviour applies to the packet, the hook function returns NF_ACCEPT and the packet continues as normal. This is shown in Figure 7. Marking each packet involves using the nfmark field of the sk_buff and setting it to either a delay time or -1 to indicate reordering.
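A simplified sketch of how such a hook function might look is shown below. It is not the project's actual source: rule matching and fault selection are reduced to a hypothetical helper (cfr_lookup_fault(), with made-up CFR_* fault codes), and the 2.4/2.6-era hook prototype, the skb->nh.iph header pointer and the nfmark field are assumed.

#include <linux/netfilter.h>
#include <linux/netfilter_ipv4.h>
#include <linux/skbuff.h>
#include <linux/ip.h>

#define CFR_FAULT_NONE    0
#define CFR_FAULT_DROP    1
#define CFR_FAULT_DELAY   2
#define CFR_FAULT_REORDER 3

/* Stub standing in for the real rule lookup and probability test:
 * the real module walks its rule list here (see Section 5.3.3). */
static int cfr_lookup_fault(struct iphdr *iph, unsigned long *delay)
{
        *delay = 0;
        return CFR_FAULT_NONE;
}

static unsigned int cfr_hook(unsigned int hooknum,
                             struct sk_buff **pskb,
                             const struct net_device *in,
                             const struct net_device *out,
                             int (*okfn)(struct sk_buff *))
{
        struct sk_buff *skb = *pskb;
        unsigned long delay = 0;

        switch (cfr_lookup_fault(skb->nh.iph, &delay)) {
        case CFR_FAULT_DROP:
                return NF_DROP;              /* discarded by netfilter */
        case CFR_FAULT_DELAY:
                skb->nfmark = delay;         /* delay period for the queue handler */
                return NF_QUEUE;
        case CFR_FAULT_REORDER:
                skb->nfmark = -1;            /* -1 marks "reorder" */
                return NF_QUEUE;
        default:
                return NF_ACCEPT;            /* no rule matched: carry on as normal */
        }
}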
Figure 7: Hook Function Transition. At NF_IP_PRE_ROUTING the packet is examined; a drop fault returns NF_DROP, delay and reorder faults return NF_QUEUE, and otherwise NF_ACCEPT is returned.

Since the delaying and reordering functions return NF_QUEUE, the module needs to register to deal with queued packets. Delayed and reordered packets are queued so that they can be taken out of interrupt context and need not be processed immediately. When NF_QUEUE is returned from the hook function, Netfilter passes the packet to our queue handler, taking it out of its transition through the kernel network stack. The queue handling function determines which packets are marked to be delayed and which are marked to be reordered, and carries out those functions. Once the packets have been delayed or reordered they are reinjected into Netfilter to continue their transition through the kernel from the NF_IP_PRE_ROUTING hook.

5.3.2 Creating Module and Registering with Netfilter

In order to create the module which receives packets from the Netfilter architecture, a kernel module had to be created. The kernel module must implement the module_init() and module_exit() functions to tell the kernel loading tools which parts of the module are to run when it is loaded into or removed from the kernel [42]. To hook onto NF_IP_PRE_ROUTING, nf_register_hook() is called in the module initialisation function with an nf_hook_ops struct which references the module's hook function and the hook it is interested in; nf_register_hook() then adds the hook function to the list of functions interested in packets passing through that point. As well as registering the hook function when the kernel module is loaded, it is important that nf_unregister_hook() is called in module_exit(), to ensure Netfilter does not try to pass packets to a function which no longer exists in the kernel. Similarly, to register a Netfilter queue handler, the module_init() function uses nf_register_queue_handler() to tell Netfilter to pass queued packets to the module's queue handling function; as with the hook, the queue handler is unregistered in module_exit() using nf_unregister_queue_handler(). Finally, to receive commands from userspace a socket interface is used: Netfilter provides a function which allows a socket option handler to be registered, and this is the last part of the module registered on initialisation. The function nf_register_sockopt() takes an nf_sockopt_ops struct which registers a socket handling function from the module. As with everything else registered with the kernel, this is unregistered in module_exit().
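The skeleton below illustrates this registration pattern. It is a hedged sketch rather than the module's real source: the handler bodies are stubs, the names are invented, and the queue-handler prototypes follow the 2.6-era interface this project targets (they changed in later kernels); the socket-option registration is done the same way and is sketched separately in Section 5.4.2.

#include <linux/module.h>
#include <linux/netfilter.h>
#include <linux/netfilter_ipv4.h>
#include <linux/skbuff.h>

/* Hook function stub: the real one applies the rules (see Section 5.3.1). */
static unsigned int cfr_hook_stub(unsigned int hooknum, struct sk_buff **pskb,
                                  const struct net_device *in,
                                  const struct net_device *out,
                                  int (*okfn)(struct sk_buff *))
{
        return NF_ACCEPT;
}

/* Queue handler stub: the real one delays or reorders before reinjecting. */
static int cfr_queue_stub(struct sk_buff *skb, struct nf_info *info, void *data)
{
        nf_reinject(skb, info, NF_ACCEPT);
        return 0;
}

static struct nf_hook_ops cfr_ops = {
        .hook     = cfr_hook_stub,
        .pf       = PF_INET,
        .hooknum  = NF_IP_PRE_ROUTING,
        .priority = NF_IP_PRI_FIRST,
};

static int __init cfr_init(void)
{
        nf_register_hook(&cfr_ops);
        nf_register_queue_handler(PF_INET, cfr_queue_stub, NULL);
        /* nf_register_sockopt() is called here in the same way (see 5.4.2). */
        return 0;
}

static void __exit cfr_exit(void)
{
        nf_unregister_queue_handler(PF_INET);
        nf_unregister_hook(&cfr_ops);
}

module_init(cfr_init);
module_exit(cfr_exit);
MODULE_LICENSE("GPL");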

5.3.3 Storing and Examining Rules

Rules are stored in a simple structure which contains a list_head struct for the linked list implementation, an integer specifying the protocol the rule is interested in, and 32-bit source and destination IP addresses. Each rule has an integer parameter specifying the probability of the fault occurring, and an additional parameter specifying the length of any delay. The rules are stored in a linked list (using the kernel doubly-linked list described in 5.2.3) and protected with mutual exclusion using kernel spinlocks (5.2.5), to prevent a rule being removed while it is in use.
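For illustration, a rule record and the probability test described later in this section might look roughly like this (hypothetical field and function names; only get_random_bytes() is the kernel function the text refers to):

#include <linux/list.h>
#include <linux/types.h>
#include <linux/random.h>

struct cfr_rule_example {
        struct list_head list;      /* kernel doubly-linked list hook (5.2.3) */
        u32 saddr;                  /* source IP, 0 = wildcard */
        u32 daddr;                  /* destination IP, 0 = wildcard */
        int protocol;               /* TCP/UDP/ICMP, 0 = wildcard */
        int drop_prob;              /* chance out of 100 that a packet is dropped */
        int delay_prob;             /* chance out of 100 that a packet is delayed */
        int delay_jiffies;          /* how long to delay a matched packet */
        int reorder_prob;           /* chance out of 100 that a packet is reordered */
};

/* Returns non-zero if a fault with the given probability (0..100) should
 * fire: draw a random number between 1 and 100 and compare it with the
 * probability, mirroring the three steps described below. */
static int cfr_fault_fires(int probability)
{
        unsigned char byte;
        int roll;

        get_random_bytes(&byte, 1);
        roll = (byte % 100) + 1;     /* roughly uniform over 1..100 */
        return probability > roll;
}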


The linked list implementation of rules is an inefficient way of storing and examining them, with O(n) lookup: every rule must be checked until a matching one is found (and often no matching rule will be found). The primitive linked list structure was used primarily to keep the implementation simple; with a large number of rules, the overhead of examining each rule for every incoming packet would be unacceptable. A further constraint of this design is that rules can overlap, and at present there is no checking to ensure that one rule does not prevent another from ever taking effect; this could produce unexpected behaviour when one rule runs before another, and is something which should be addressed in a later version of the router.

When a packet is received by the module hook, the sk_buff is passed to a rule processor, which looks through each rule and compares it with the sk_buff to determine whether they match. Each rule specifies which packets it matches in terms of protocol and source and destination address; a value of 0 for any of these parameters indicates that the rule matches all packets (and is not interested in that field). For non-zero values the rule processor examines the IP header of the packet and compares the fields; if all match, the rule is executed. As explained previously, the faults in a rule are specified by the probability of them occurring, and the process of running a rule uses this probability in a simple fashion to decide whether it executes:

1. A random number between 1 and 100 is generated using the kernel's get_random_bytes() function.

2. The random number is compared to the probability of the fault occurring.

3. If the probability is greater than the random number the fault runs; if not, it does not.

This primitive mechanism makes the behaviour of the faults a little more lifelike without requiring much overhead. If a delay is not executed (for example), then the next fault (i.e. reorder/block) is checked; this gives some variation in behaviour from a rule with differing probabilities for its faults. However, the faults are checked in a fixed order, so the first fault is always considered while the last may often miss out. This is an issue which would need to be reworked to produce a more lifelike environment.

5.3.4 Dropping Packets

After successful rule selection, dropping a packet is a trivial task which simply requires the NF_DROP return code to be returned from the hook function. Netfilter then releases the packet's resources and prevents its continued traversal.


5.3.5 Delaying Packets

A packet marked for delay by the hook function has the delay period specified in the rule written into the nfmark field of its sk_buff. The hook function then returns NF_QUEUE to tell Netfilter to send the packet to the queue handler, where the real work of delaying the packet takes place. Upon receiving packets, the module queue handler determines whether each packet is destined to be delayed or reordered; delayed packets are passed on to the delay function, as shown in Figure 8.

Figure 8: Overview of the Queue Handler. Packets arriving via NF_QUEUE are split on their nfmark: packets marked -1 go to the reorder function and its reorder pool, while packets with a positive mark go to the delay function, which uses the kernel timer_list and a list of delayed packets; both paths end in nf_reinject.

The delay function allocates a timer_skb struct. The timer_skb contains a list_head for use in a linked list, a pointer to the sk_buff holding the packet, and a struct timer_list, the timer data structure outlined in 5.2.4. The timer_skb also contains the struct nf_info passed to the queue handler, which records the hook the sk_buff was received from and the information Netfilter uses to keep track of the packet. After allocating a timer_skb, the delay function initialises a timer and sets it to expire at the current time plus the delay period specified (in the nfmark field of the sk_buff). The timer also holds a pointer to a function which

re-injects the packet when it is called, and a pointer to the timer_skb itself is set as the argument passed to that function. The queue handler side of the module processes a delayed packet as follows:

1. The module queue handler receives a queued packet and identifies that it is destined to be delayed; the incoming sk_buff is passed on to the delay function.

2. The delay function takes the sk_buff and builds up the timer_skb structure, filling it with the incoming sk_buff and nf_info and setting the timer to point to the wake function and to expire at the current time plus the delay taken from the nfmark parameter.

3. The timer_skb is added to a linked list of delayed packets; this list is used to keep track of which packets are currently delayed.

4. The timer is added to the list of kernel timers.

5. After the delay has passed the timer is woken and the timer_skb is passed to the re-injection function.

6. The re-injection function removes the timer_skb from the linked list of delayed packets and reinjects the sk_buff back into NF_IP_PRE_ROUTING, allowing the packet to continue its traversal through the network stack.

The delay process thus revolves around allocating a kernel timer for each packet scheduled to be delayed; the timers schedule the appropriate delays and cause the re-injection functions to be executed at the appropriately delayed time.
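A sketch of steps 2-6 above is shown below; it is an approximation under the same assumptions as earlier fragments (invented names, 2.6-era timer and nf_reinject() interfaces, locking of the delayed-packet list omitted for brevity), not the module's actual delay function.

#include <linux/list.h>
#include <linux/timer.h>
#include <linux/sched.h>
#include <linux/skbuff.h>
#include <linux/netfilter.h>
#include <linux/slab.h>

/* One of these is allocated for every packet being delayed. */
struct timer_skb_example {
        struct list_head list;       /* entry in the list of delayed packets */
        struct sk_buff *skb;         /* the delayed packet itself */
        struct nf_info *info;        /* netfilter bookkeeping, needed for reinjection */
        struct timer_list timer;     /* fires when the delay has elapsed */
};

static LIST_HEAD(delayed_packets);

/* Timer callback: put the packet back into NF_IP_PRE_ROUTING traversal. */
static void delay_expired(unsigned long data)
{
        struct timer_skb_example *t = (struct timer_skb_example *)data;

        list_del(&t->list);
        nf_reinject(t->skb, t->info, NF_ACCEPT);
        kfree(t);
}

/* Called from the queue handler for a packet whose nfmark holds the delay. */
static void delay_packet(struct sk_buff *skb, struct nf_info *info)
{
        struct timer_skb_example *t = kmalloc(sizeof(*t), GFP_ATOMIC);

        if (!t) {
                nf_reinject(skb, info, NF_ACCEPT);   /* no memory: just let it go */
                return;
        }
        t->skb = skb;
        t->info = info;
        list_add_tail(&t->list, &delayed_packets);

        init_timer(&t->timer);
        t->timer.function = delay_expired;
        t->timer.data     = (unsigned long)t;
        t->timer.expires  = jiffies + skb->nfmark;   /* delay stored in jiffies */
        add_timer(&t->timer);
}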

5.3.6 Reordering Packets

Similar to the behaviour of delaying, once a packet has been identified for reordering by the rule functions in the initial NF_IP_PRE_ROUTING hook, the nfmark value of its sk_buff is set to -1 (to distinguish it from packets queued for delaying), NF_QUEUE is returned and the packet is sent to the queue handler by Netfilter. The basic principle of the reordering function is to keep a buffer of packets to be reordered: packets are put into the buffer in non-sequential order and are primitively reordered as they are received. When the buffer is filled to capacity (specified by the amount of reordering), the contents of the buffer are reinjected into Netfilter in their reordered sequence. There is also a timer associated with the reorder buffer, set to execute a function which sends the buffer out if it expires; this ensures packets are not lost waiting in the buffer for new packets to arrive to reorder with.


When a packet received from the queue handler is determined to be one which needs reordering, it is passed to the reordering function, which allocates a reorder_skb structure containing linked list head/tail pointers, a pointer to the sk_buff in question and a pointer to the nf_info from the queued packet for re-injection. Once the incoming packet has been put into a reorder_skb it is added to the buffer of reordered packets; if it is the first packet in the queue, the expiry timer is started (this will call a function to reinject the queue if it expires). A simple check of the length of the queue determines whether the packet is added to the front or the back of the buffer. This reordering is primitive, but allows a simple demonstration of wide-ranging reordering, with larger reorder buffer sizes creating a wider gap between incoming packets. After adding the packet to the reorder buffer the size of the buffer is checked; if it has reached the currently configured reorder size, the buffer is sent out, otherwise it awaits more packets (or is sent out when the delay timer expires).
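One plausible reading of that front-or-back decision, as a hedged sketch with invented names:

#include <linux/list.h>

static LIST_HEAD(reorder_pool);
static int reorder_pool_len;

/* Alternate between front and back insertion so that consecutive packets
 * leave the pool in a shuffled order, as described above. */
static void reorder_insert(struct list_head *entry)
{
        if (reorder_pool_len % 2)
                list_add(entry, &reorder_pool);        /* push to the front */
        else
                list_add_tail(entry, &reorder_pool);   /* append to the back */
        reorder_pool_len++;
}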

5.4 Control Communication

5.4.1 Method of Communication

The kernel module which alters packet flow is loaded into the kernel and exists there until it is unloaded, examining packets which pass through the hook function and applying the reorder/drop/delay functions to a packet if a rule exists which specifies it. So how is the kernel module informed of the rules to apply? One primitive way of communicating options to a kernel module is to pass it command-line arguments upon loading, using the predefined MODULE_PARM() mechanism [43]. This is not a viable solution here, as it only allows parameters to be passed when the module is loaded: to alter any rules, the controllable faulty router module would have to be unloaded from the kernel and loaded again with the new options. As well as the standard packet handling functions, the Netfilter framework provides a very useful socket interface to enable userspace-to-kernel communication. Sockets are a convenient and well-supported mechanism for communicating with the module. They are most often used for network communication, where they provide a convenient abstraction allowing two processes to communicate; in this part of the project, however, sockets are used at a more raw, low level for communicating with the kernel module, rather than for the packet-based communication which would take place over an IP-based socket.

5.4.2 Enabling Socket Communications in the Module

In order to enable socket communications, the Netfilter function nf_register_sockopt() is used to register a struct nf_sockopt_ops. The nf_sockopt_ops structure contains the parameters used to set up the socket option; userspace programs (with administrative privileges) can then communicate via a socket using the same parameters. The nf_sockopt_ops structure takes a pointer to a function which deals with receiving data when the option is set, as well as a function to send data out when a user asks for it. For the purposes of setting rules, data only flows one way, from userspace to the module; any feedback is given in the form of printk messages. When the option is set from userspace, the set function is called with the socket information and a pointer to the data pushed through the socket. A shared structure, struct cfr_module_comms, is used for communication between userspace and the kernel module; it is a basic structure which enables the use of different functions to set rules, change parameters and display the current rules. When a cfr_module_comms structure is received from userspace, its validity is examined, and if it is an acceptable communication the function it requests is carried out. When the module receives a request, it is good practice to check that the user process which generated it has the CAP_NET_ADMIN privilege, as only users with this privilege should be able to alter the routing functionality of the computer. The privilege can be checked using the capable() function (declared in /usr/src/linux/include/linux/sched.h), which returns 1 if the calling process has the given privilege and 0 if not.
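The fragment below sketches such a registration. It is illustrative only: the option number and structure contents are invented, and the prototypes follow the 2.6-era interface (without the __user annotations of later kernels).

#include <linux/netfilter.h>
#include <linux/types.h>
#include <linux/sched.h>
#include <linux/capability.h>
#include <linux/errno.h>
#include <asm/uaccess.h>

#define CFR_SO_SET_RULE 96          /* arbitrary example option number */

struct cfr_module_comms_example {   /* stand-in for the shared structure */
        int command;
        u32 saddr, daddr;
        int protocol, probability, delay;
};

static int cfr_set_ctl(struct sock *sk, int cmd, void *user, unsigned int len)
{
        struct cfr_module_comms_example msg;

        if (!capable(CAP_NET_ADMIN))
                return -EPERM;                       /* administrative privilege only */
        if (len != sizeof(msg) || copy_from_user(&msg, user, sizeof(msg)))
                return -EINVAL;
        /* ...validate msg and update the rule list here... */
        return 0;
}

static struct nf_sockopt_ops cfr_sockopt_ops = {
        .pf         = PF_INET,
        .set_optmin = CFR_SO_SET_RULE,
        .set_optmax = CFR_SO_SET_RULE + 1,
        .set        = cfr_set_ctl,
};

/* In module_init(): nf_register_sockopt(&cfr_sockopt_ops);
 * In module_exit(): nf_unregister_sockopt(&cfr_sockopt_ops); */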

5.4.3 Userspace Control Program

Once the socket option is set up in the kernel module, a userspace program can communicate with the module by creating a socket using the standard C functions for BSD-style sockets, socket() and setsockopt(). The socket between the userspace program and the kernel module uses the SOCK_RAW type; this method of communication is only available to the superuser (root), as it can be used to control network internals. Information about both functions can be found in the manual pages of a Linux system and is also outlined in the BSD Interprocess Communication Tutorial [44]. The userspace control program is very simple in functionality: it receives commands from the user, generates a struct cfr_module_comms, sends this to the module and exits. The control program does not receive information back from the module; information on the outcome is provided as console feedback from the module (which is logged and can be accessed using the dmesg tool and the system log).
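A minimal userspace sketch of this pattern, sharing the invented option number and structure of the hypothetical kernel-side fragment in 5.4.2 (error handling trimmed):

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>

#define CFR_SO_SET_RULE 96               /* must match the kernel module */

struct cfr_module_comms_example {
        int command;
        unsigned int saddr, daddr;
        int protocol, probability, delay;
};

int main(void)
{
        struct cfr_module_comms_example msg;
        int fd = socket(AF_INET, SOCK_RAW, IPPROTO_RAW);   /* needs root */

        if (fd < 0) {
                perror("socket");
                return 1;
        }
        memset(&msg, 0, sizeof(msg));
        msg.command = 1;                                    /* e.g. "add drop rule" */
        msg.saddr = inet_addr("10.0.9.1");
        msg.daddr = inet_addr("192.168.0.1");
        msg.probability = 50;

        if (setsockopt(fd, IPPROTO_IP, CFR_SO_SET_RULE, &msg, sizeof(msg)) < 0)
                perror("setsockopt");
        close(fd);
        return 0;
}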


6 Testing and Evaluation

In order to ensure that the system which has been developed correctly provides the functionality designed, a testing setup has been created. This chapter examines the individual functions of the system and investigates how they perform.

6.1 The Testing Environment

Figure 9: Test Network Diagram. The Internet is reached through the NAT gateway [gateway] (10.34.64.249 on eth0, 192.168.0.1 on eth1), which connects via a switch to [homer] (192.168.0.6), [maggie] (192.168.0.2) and the controllable faulty router [devbox] (192.168.0.4 on eth0, 10.0.9.2 on eth1); behind devbox sits [bart] (10.0.9.1).

The test network comprises 5 hosts spanning 3 separate subnets. The external network, to which the NAT gateway provides access, is a private network with access to resources on the wider Internet; it has the IP range 10.34.64.0/21. The internal subnet, to which all machines apart from Bart are connected, has the IP range 192.168.0.0/24. Finally, our test external network, which sits behind the controllable faulty router, has the IP range 10.0.9.0/24. To communicate between the 192.168.0.0 subnet and the 10.0.9.0 subnet, traffic must pass through the controllable faulty router, and it is traffic routed along this path which is of interest.

6.1.1 Routing of the Test Network

The test network was set up with machines of several different varieties, and their configurations have small differences. Each host on the 192.168.0.0 subnet is configured to use 192.168.0.1 as its default gateway; the gateway is set up to route traffic destined for external access via the 10.34.64 subnet, and to route traffic destined for the 10.0.9 subnet via devbox (192.168.0.4). The test network contains three different Linux machines: gateway running Slackware, Bart running Gentoo and devbox running Debian. Each of these machines has slightly different configuration files for the initial setup of its network interfaces, although they all use the standard routing table alteration tool, route. The other machines on the network (Homer and Maggie) run Mac OS X, which also uses the standard route command, with slightly different syntax. Routes were set up on Linux as follows (more detailed information can be found by running man route):

/sbin/route add -net ip_range netmask subnet_mask gw gateway_ip

The routing tables for the different hosts are outlined here:

gateway[192.168.0.1/10.34.64.249] routing table:
Kernel IP routing table
Destination   Gateway       Genmask          Flags  Metric  Ref  Use  Iface
192.168.0.0   *             255.255.255.0    U      0       0    0    eth1
10.0.9.0      192.168.0.4   255.255.255.0    UG     0       0    0    eth1
10.34.64.0    *             255.255.248.0    U      0       0    0    eth0
loopback      *             255.0.0.0        U      0       0    0    lo
default       10.34.64.1    0.0.0.0          UG     0       0    0    eth0

devbox[192.168.0.4/10.0.9.2] routing table:
Kernel IP routing table
Destination   Gateway       Genmask          Flags  Metric  Ref  Use  Iface
192.168.0.0   *             255.255.255.0    U      0       0    0    eth0
10.0.9.0      *             255.255.255.0    U      0       0    0    eth1
default       192.168.0.1   0.0.0.0          UG     0       0    0    eth0

bart[10.0.9.1] routing table:
Kernel IP routing table
Destination   Gateway       Genmask          Flags  Metric  Ref  Use  Iface
10.0.9.0      *             255.255.255.0    U      0       0    0    eth0
loopback      localhost     255.0.0.0        UG     0       0    0    lo
default       10.0.9.2      0.0.0.0          UG     0       0    0    eth0

homer[192.168.0.6] routing table:
Internet:
Destination   Gateway       Flags   Refs   Use     Netif  Expire
default       192.168.0.1   UGSc    13     7       en0
127.0.0.1     127.0.0.1     UH      19     74696   lo0
192.168.0     link#4        UCS     3      0       en0

As well as setting up the routes correctly on devbox, IP forwarding must be enabled to allow the machine to forward packets between subnets. This is done with the command:

echo 1 > /proc/sys/net/ipv4/ip_forward

Once forwarding has been enabled, packets are forwarded freely between the 10.0.9 subnet and the 192.168 subnet, and the network configuration is complete.

6.1.2 Controlling the System

To enable the controllable faulty router, the kernel module which has been developed must be compiled and loaded into the kernel. Root privileges are required to load and unload the module, as well as to control the functionality of the router. The make command compiles the source code and generates the object file cf_router.o, which can then be loaded into the kernel with:

/sbin/insmod cf_router.o

To check that the module has loaded, the dmesg command can be used; its output should contain messages like this:

CFR: Netfilter hooked (0)
CFR: Socket Handler loaded (0)
CFR: Netfilter queue hooked (0)

Occasionally the queue handler is busy. In that case the module cannot register with the queue handler and the delay and reorder functions of the router will not work correctly; the last (or nearly last) line of dmesg will contain a message saying:

CFR: IMPORTANT - Couldnt register queue handler

The module then needs to be unloaded and loaded again, which can be done with:

/sbin/rmmod cf_router
/sbin/insmod cf_router.o

The queue handler will usually register correctly on a second try, as by then it has finished dealing with what it needed to. Once the module has been loaded successfully, rules can be set up. The communication program - interact - takes arguments to set specific parameters of the controllable faulty router; these are outlined below. In the rule parameters, - can be used to match all values of source IP, destination IP or protocol.

Set reorder configuration. This sets the timeout of a reorder queue and how big the reordering pool is. The timeout is in 10ms increments, so a timeout of 1s would be specified as 100.

interact reorder_config reorder_size reorder_timeout

Dropping takes the rule headers and a probability of the drop occurring.

interact src_ip dest_ip TCP|UDP|ICMP|- drop probability

Reorder takes the rule headers and the probability of the reordering occurring. The reorder specifics are set globally with the reorder_config option (see above).

interact src_ip dest_ip TCP|UDP|ICMP|- reorder probability

Delay takes the rule headers, the probability of the rule running and also the delay period. As with the reorder timeout, the delay time is specified in 10ms increments.

interact src_ip dest_ip TCP|UDP|ICMP|- delay probability delay_time

Remove Rule removes a rule and simply takes the headers of the rule to be deleted.

interact src_ip dest_ip TCP|UDP|ICMP|- delete

Print All tells the module to print all of the current rules out to the console.

interact printall

To remove the controllable faulty router module completely, the rmmod command is used:

/sbin/rmmod cf_router

Diagnostic information reporting the status of the module's removal is printed to the console, like this:

CFR: Freed 3 rules
CFR: Freed up 24 reordered stored packets
CFR: Freed up 12 delayed stored packets
CFR: Controllably Faulty Router Unloaded

After removing the module, the computer is restored to normal operation.

6.2 Testing Tools

The nature of this project means it performs fundamental low-level operations on the networking between hosts. Although the faults introduced may be visible to users, in terms of a detrimental effect on network applications, such vague observations are of little use in testing the effectiveness of the system developed. To test and evaluate the system a number of tools have been used, some of them heavily throughout development, in order to examine how the developed system was performing.

6.2.1 Ping

Ping provides a very effective way to examine how packets pass through a network. It provides useful statistics on transmitted packets and can be used for quick diagnostics of how a link is performing. The ICMP echo response should be supported by all connected nodes running IP, which means ping can be used to carry out diagnostics against any host without additional configuration. Ping only uses ICMP datagrams, so it is not useful for finding the effects on the behaviour of other transport protocols; it also uses small datagrams by default, and so is less useful for stress-testing the network. Nevertheless, ping is undoubtedly the most convenient tool for examining the effects of network conditions, which is not surprising, as that is what it was primarily designed for. A flood ping differs from a conventional ping by attempting to send packets at a rate of 100 per second, or as fast as replies come back (whichever is quicker). A flood ping requires super-user (administrative) privileges, as it is not something which should be used in normal operation: it can have adverse effects on the network.

6.2.2 tcpdump

Tcpdump [45] is a tool used to capture information in raw form as network traffic passes through a network interface. Tcpdump prints the headers of packets passing through an interface and can be used with a variety of filters and options to carry out detailed, specialised logging of network traffic. Its name is slightly misleading, as it is not limited to TCP traffic: it captures all network traffic passing through an interface. Tcpdump is very useful because it provides access to what is actually happening on the network, and is one of the few ways to examine the actual behaviour of the network rather than what a particular application reports is happening.

6.2.3 SmokePing

SmokePing [46] is a useful tool developed to measure the latency of networks to different hosts and to create graphs of network conditions. SmokePing is essentially a graphing tool which runs particular probes at regular intervals to assess network conditions. The default probe is a simple ICMP echo ping (hence the name), but a number of different probes are available to test the performance of different types of traffic.

6.2.4 curl

curl [47] is a widely used tool which transfers data from a standard URI over many different protocols (such as HTTP, FTP and GOPHER), providing easy, transparent access to network transfers. As HTTP is one of the major network applications using TCP, it is useful to monitor how HTTP transfers are affected. A curl probe exists for SmokePing, so it can be integrated to examine the effects on TCP transfers.

6.3 Dropping

When a packet is specified to be dropped, it is simply discarded and will not be seen again. This is reasonably clear behaviour to spot on a reliable network where packets would otherwise not be dropped. To examine packet dropping, the link between the test machine Bart and gateway is used (see Figure 9); traffic between these two hosts has to pass through the controllable faulty router, and in normal operation packet loss on this link is more or less non-existent.

6.3.1 Flood Ping Under Normal Conditions

First the link is tested under normal operation, without any rules applied to devbox, by sending a flood ping of 10,000 packets:


root@bart root # ping -f -c 10000 192.168.0.1
PING 192.168.0.1 (192.168.0.1) 56(84) bytes of data.
--- 192.168.0.1 ping statistics ---
10000 packets transmitted, 10000 received, 0% packet loss, time 8984ms
rtt min/avg/max/mdev = 0.843/0.882/20.946/0.294 ms

Under normal conditions the network link is of very good quality: the 10,000 packets are transmitted in just under 9 seconds without a single packet being dropped.

6.3.2 Flood Ping With Drop Conditions

Rules can now be applied to the controllable faulty router to alter the flow of packets between the two hosts, with varying probabilities of dropping applied to the packet flow. A drop probability of 50% is used to drop ICMP traffic between the hosts:

./interact 10.0.9.1 192.168.0.1 ICMP drop 50

The same flood ping command is run as before, which produces the following result:

--- 192.168.0.1 ping statistics ---
10000 packets transmitted, 5070 received, 49% packet loss, time 89134ms
rtt min/avg/max/mdev = 0.976/1.349/20.941/0.885 ms

The packets are being intercepted and 49% of them were dropped, within 1% of the configured probability and the expected behaviour of the drop function. The process of adding rules and flood pinging was repeated with the drop probability varied in increments of 10; the results are documented in Table 4. It can be seen quite clearly that the percentage loss matches the equivalent drop probability specified. This is further demonstrated in Figure 10, which plots the packets received against the probability of packets being dropped; the behaviour is clearly uniform and correct, with packets lost directly proportional to the probability of packets being dropped. The actual packet loss differs from the probability by 1% on several occasions; this is an unavoidable side effect of the random nature of rule execution, as it is impossible for the router to know in advance exactly how many packets will be transferred, and so impossible to always ensure that the proportion of packets affected is exactly that specified.


Drop Probability   Received Packets   Loss (%)   Time (ms)
0                  10000              0          8984
10                 8974               10         26578
20                 8049               19         41649
30                 7051               29         58100
40                 5934               40         76108
50                 5070               49         89134
60                 4013               59         106519
70                 3056               69         121226
80                 2079               79         136468
90                 1002               89         155080
100                0                  100        173276

Table 4: Effects of dropping on a 10,000 packet flood ping

6.3.3 Probability vs. Sample Size

The graph in Figure 10 shows that packet selection keeps very closely in proportion with the configured probability. With a smaller sample of packets, however, the behaviour varies more. Figure 11 shows a repeat of the loss test with a 100 packet flood; the smaller sample makes the deviations from the probability more noticeable. The graph still shows that the probability of a packet being dropped causes roughly that proportion of packets to be dropped, so specifying the probability of packets being affected remains a useful and effective mechanism for designing controllable behaviour.

6.4 Delaying

Delaying packets requires that they are slowed down from their normal speed through the network. This can be measured in a number of different ways, but ping provides a convenient way to test it and is used here.

6.4.1 A Demonstration of Multiple Periods of Delay

6.4.1 A Demonstration of Multiple Periods of Delay

Here a simple test demonstrates delaying. As with the drop tests, the test is carried out from Bart to gateway, and devbox has rules applied to cause delays between the two machines.

dan@bart dan $ ping -i 5 192.168.0.1
PING 192.168.0.1 (192.168.0.1) 56(84) bytes of data.
64 bytes from 192.168.0.1: icmp_seq=1 ttl=63 time=0.945 ms


Figure 10: Effects of dropping on a 10,000 packet flood ping (packets received plotted against the probability of drop)

64 bytes from 192.168.0.1: icmp_seq=2 ttl=63 time=7.79 ms
64 bytes from 192.168.0.1: icmp_seq=3 ttl=63 time=99.3 ms
64 bytes from 192.168.0.1: icmp_seq=4 ttl=63 time=1000 ms
64 bytes from 192.168.0.1: icmp_seq=6 ttl=63 time=0.872 ms
64 bytes from 192.168.0.1: icmp_seq=7 ttl=63 time=0.859 ms
64 bytes from 192.168.0.1: icmp_seq=5 ttl=63 time=10003 ms
64 bytes from 192.168.0.1: icmp_seq=8 ttl=63 time=0.875 ms

The first packet seen back from the ping is unaffected, as no rules have yet been added. After the first packet a rule is added to delay packets to 192.168.0.1 by 10ms, and the next packet has a round trip time of 7.79ms, just below the defined 10ms. The delay rule is then altered to 100ms, and the next packet takes 99.3ms. The rule is altered again to a delay of 1000ms (1s), and the next packet returns in almost exactly 1000ms. Finally the delay rule is altered to 10000ms (10s), and the rule is then removed. The final delay being added and then removed results in packets being reordered, as can be spotted from the icmp_seq numbers: the packet with the 10000ms delay was sent out and delayed, the delay rule was then removed, and the following ping packets were sent out with no delay applied, so they returned before the earlier packet. This actually happened twice, as can be seen from the sequence numbers. This rather involved example demonstrates the relative complexities which can arise when delaying packets.


Figure 11: Effects of dropping on a 100 packet flood ping (packets received plotted against the probability of drop)

6.4.2 Testing Delay Periods

To test whether the delay function performs as it should, flood ping was used to send 100 packets from Bart to gateway with varying amounts of delay added to a rule affecting all ICMP traffic passing between the two hosts. The results are shown in Table 5, and the average round trip times are plotted against the configured delay in Figure 12.

Added Delay (ms)   Min RTT (ms)   Average RTT (ms)   Max RTT (ms)
0                  0.86           0.92               1.91
1000               997.94         1003.74            1007.99
2000               1998.6         1998.9             2000.86
3000               2999.68        3001.67            3003.49
4000               3992.98        4000.69            4003.97
5000               4997.06        5001.96            5004.02
6000               5994.04        6001.17            6006.56
7000               6994.76        6997.57            7006.01
8000               7992.32        8001.04            8002.71
9000               9005.75        9011.74            9013.3
10000              10033.99       10041.54           10052.81

Table 5: Result of delay on round trip times of a 100 packet flood ping

Figure 12: Average round trip delay on a 100 packet flood ping (actual delay plotted against expected delay)

The delay which actually results from the delay rule tracks the added delay reasonably accurately: as can be seen in Figure 12, the actual delay period is consistent with the delay period added by the rules, and the range of values between the minimum and maximum round trip times seen during the flood ping is within 1% of the specified delay in all cases. The fact that the delay period is consistently accurate to the rule demonstrates that the delay function works consistently well at delaying packets for the specified amount of time.

6.4.3 Probability with Varying Degrees of Delay

Smokeping (see 6.2.3) was set up to carry out 10 pings to gateway (from Bart, where smokeping runs) every 30 seconds, measuring the latency of the probes and graphing the results. In normal periods of operation this produces a uniform graph showing a relatively consistent delay with occasional variations. Devbox was set up to add delays to all ICMP packets flowing between Bart and gateway, starting at a delay value of 10 (100ms) and adding 10 to the delay value every minute until it reached 100 (1 second). Figure 13 shows the result of smokeping running while the delay function was delaying packets by an extra 100ms each minute: the round trip time can be seen jumping by 100ms every minute in a uniform manner, which is the expected behaviour.

Figure 13: Gateway incremental delay

Smokeping adds useful extra information to its graphs when different probes have significantly different round trip times: the range of round trip times at which pings were received is shown as grey shading. This is particularly useful for spotting when network performance is fluctuating significantly, rather than just seeing consistently good or bad round trip times. In the rule structure for delayed packets, the probability of the specified packets having the rule applied to them can be specified, just as was examined for the drop rule. To test that this probability mechanism was functioning correctly with delay rules, the incrementing delay rule was set up once again, but on this run the probability of the rule being applied was set to 75%. There were also 50 ping probes sent every 30 seconds, as opposed to the previous 10, to increase the sample size for the reasons examined in Section 6.3.3. Figure 14 shows the result of a 75% probability of the rule occurring whilst the increasing values of delay are applied to the ping probes. The grey shading shows the full range of round trip times received from the ping probes; the grey area covers the range from the bottom of the RTT axis up to the same point reached by the lines in Figure 13. This makes sense, as the test rule allows 25% of packets to pass unaffected by the delay. By covering the whole area under the added delay, Figure 14 demonstrates that some packets are not being delayed by the rule and are carrying on as normal, in stark contrast to Figure 13, where all packets are affected by the delay and there is no grey area indicating a wider range of round trip times.


Figure 14: Increasing Delay with 75% Probability

6.5 Reordering

Reordering packets is an operation which is complicated both to demonstrate and to analyse, as it requires per-packet examination of how individual packets are treated. The reordering setup will also only work if enough packets arrive to be reordered before the reorder timeout occurs; otherwise it will appear as though packets are simply delayed (they wait in the reorder pool and are then sent out when nothing further appears). To demonstrate reordering at its most basic level, a rule was put in place to cause all packets flowing from the controllable faulty router to be reordered, with a reorder pool size of 10 and a timeout value of 1000ms. This is the result of a ping of 20 packets:

root@bart root # ping -i 0.1 -c 20 192.168.0.1
PING 192.168.0.1 (192.168.0.1) 56(84) bytes of data.
64 bytes from 192.168.0.1: icmp_seq=1 ttl=63 time=44.6 ms
64 bytes from 192.168.0.1: icmp_seq=9 ttl=63 time=187 ms
64 bytes from 192.168.0.1: icmp_seq=7 ttl=63 time=409 ms
64 bytes from 192.168.0.1: icmp_seq=5 ttl=63 time=620 ms
64 bytes from 192.168.0.1: icmp_seq=3 ttl=63 time=843 ms
64 bytes from 192.168.0.1: icmp_seq=2 ttl=63 time=955 ms
64 bytes from 192.168.0.1: icmp_seq=4 ttl=63 time=733 ms
64 bytes from 192.168.0.1: icmp_seq=6 ttl=63 time=512 ms
64 bytes from 192.168.0.1: icmp_seq=8 ttl=63 time=301 ms


64 bytes from 192.168.0.1: icmp_seq=10 ttl=63 time=79.9 ms
64 bytes from 192.168.0.1: icmp_seq=18 ttl=63 time=218 ms
64 bytes from 192.168.0.1: icmp_seq=16 ttl=63 time=440 ms
64 bytes from 192.168.0.1: icmp_seq=14 ttl=63 time=652 ms
64 bytes from 192.168.0.1: icmp_seq=12 ttl=63 time=874 ms
64 bytes from 192.168.0.1: icmp_seq=11 ttl=63 time=986 ms
64 bytes from 192.168.0.1: icmp_seq=13 ttl=63 time=764 ms
64 bytes from 192.168.0.1: icmp_seq=15 ttl=63 time=543 ms
64 bytes from 192.168.0.1: icmp_seq=17 ttl=63 time=332 ms
64 bytes from 192.168.0.1: icmp_seq=19 ttl=63 time=111 ms
64 bytes from 192.168.0.1: icmp_seq=20 ttl=63 time=999 ms

--- 192.168.0.1 ping statistics ---
20 packets transmitted, 20 received, 0% packet loss, time 2065ms
rtt min/avg/max/mdev = 44.606/530.598/999.519/307.286 ms

Looking at the ICMP sequence numbers (icmp_seq), it is quite clear that the packets have in fact been reordered; it is also clear that significant delay has been added to the round trip time of the packets. These round trip time delays are an unavoidable side effect of reordering: the faster packets are sent out and received by the reordering function, the sooner they can be sent back out onto the network. Appendix A contains the output of a tcpdump of reordering when rules were set up to create a pool of 30 packets and a timeout of 1000ms. From this output it can be seen how the function collects enough packets to allow reordering and then pushes each packet out in one burst in reordered format. Figure 15 demonstrates the effect this has on the ping probe, causing widely variable round trip times, since which packets are reordered and how long they wait before re-injection both vary.

Figure 15: Effects of 30 packet pool reordering on ICMP probe
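Conceptually, a reorder pool of this kind can be pictured as a fixed-size buffer that is flushed in a permuted order once it fills (or once its timeout fires). The fragment below is only an illustrative sketch of that idea, with invented names and a simple reverse-order flush; it is not the project's actual reordering code, which produces a different permutation.

#include <linux/skbuff.h>

#define POOL_SIZE 10

struct reorder_pool {
        struct sk_buff *pkts[POOL_SIZE];
        int             count;
};

/* Buffer a packet; once the pool is full, re-inject the whole pool in
 * reverse arrival order.  A timeout (not shown) would flush a partially
 * filled pool in the same way. */
static void pool_add(struct reorder_pool *pool, struct sk_buff *skb,
                     int (*reinject)(struct sk_buff *))
{
        pool->pkts[pool->count++] = skb;

        if (pool->count == POOL_SIZE) {
                int i;

                for (i = pool->count - 1; i >= 0; i--)
                        reinject(pool->pkts[i]);
                pool->count = 0;
        }
}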


6.6 Packet Selection

So far the individual rule functions have been shown to perform correctly on their own, and rules have been applied, but without regard for the protocol selection properties of rules.

Figure 16: maggie ICMP probe

Figure 17: maggie HTTP probe

In order to test the functioning of rules for different protocols, a rule was first set up to delay TCP traffic to maggie; the rule was then removed and an additional rule was added to delay ICMP traffic to maggie. Whilst the rules were being altered, smokeping was running a standard ping probe and also a curl (see 6.2.4) probe, sending 10 HTTP requests every 30 seconds for a 300k image hosted on a web server on maggie. Figure 16 shows the ping probe, which contrasts directly with the latency seen in Figure 17, from the same host at the same time.


6.7 Rule Performance

It was noted in the design that the way rules are stored is inefficient and would need to be reworked. It was also noted that one aim of the system is that packets which are not specified to have a fault should not have their performance affected, in order to give the most life-like scenario possible. To evaluate the effect of rules on the performance of the system, increasing numbers of rules were added and the effect on the round trip time of packets was measured.

Number of Rules   Average RTT (ms)
0                 0.304
250               0.498
500               0.606
750               0.794
1000              0.946

Table 6: Result of increasing numbers of rules on RTT

Table 6 shows the average round trip time from a sample of 100,000 packets sent through the controllable faulty router with various numbers of rules added. It is quite clear that significant overhead is added to a packet's transit time when large numbers of rules are present, with 1000 rules adding roughly two thirds to the total round trip time of a packet. While it is unrealistic to expect a scenario in which 1000 different rules are added to the router, the overhead is an important consideration for a section of code which runs on every single packet.
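Dividing the added round trip time by the number of rules gives a rough per-rule cost: (0.946 - 0.304) ms over 1000 rules is about 0.64 microseconds per rule, and the 250, 500 and 750 rule runs give similar figures of roughly 0.6 to 0.8 microseconds per rule. This near-constant per-rule cost is consistent with the O(n) traversal of the unordered rule list noted in Section 5.3.3.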

6.8 Evaluation Issues

Over the course of development of the system various tools were used, and on occasion unusual results were reported which did not accurately reflect what was thought to be happening on the network. Usually, after investigating further by examining the actual traffic with tcpdump (see 6.2.2), it was found that the tool in question was reporting differently because of its own limitations. Some of the peculiarities discovered are described below.

OSX Ping vs Linux Ping: The version of ping used in Mac OSX is much less well suited to observing the functionality of this project than the version which Linux uses. If a count is specified, the Linux ping implementation stops sending ICMP echo requests after that many have been sent; it then waits for responses for a default timeout or, with the -W option, a configurable amount of time. The Linux version of ping also adapts: if it receives a packet and twice the round trip time of that packet is more than the default timeout, it will wait for responses for that longer period. OSX ping is far less configurable: it keeps transmitting ICMP packets until it has received the specified count back, and if packets are delayed past the timeout, it counts them as lost. This behaviour means OSX ping is not as useful for testing adverse conditions as Linux ping.

OSX Flood Control: While testing different features of the controllable faulty router and examining how well they performed, curious losses were observed from the Mac OSX computers. Flood pings were carried out to machines with no detrimental rules applied, and yet varying degrees of packet loss were seen. This random packet loss was very worrying. After some time spent attempting to find the cause, a flood ping was made to an OSX machine over a route not affected by devbox at all, and the random packet loss still occurred. On further investigation the Mac system logs provided the answer:

Limiting icmp ping response from 437 to 250 packets per second
Limiting icmp ping response from 435 to 250 packets per second
Limiting icmp ping response from 434 to 250 packets per second
Limiting icmp ping response from 256 to 250 packets per second
Limiting icmp ping response from 374 to 250 packets per second
Limiting icmp ping response from 367 to 250 packets per second

Mac OSX has flood ping protection to stop flood pings causing too much unnecessary load on the machine. This security measure made it less convenient to test the router using the Mac OSX machines, so most of the tests in this section were carried out between two Linux machines, which keeps the Mac flood control from complicating the results.
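The figure of 250 packets per second in the log corresponds to the default value of the net.inet.icmp.icmplim sysctl found on BSD-derived systems such as Mac OSX, so where testing against an OSX host is unavoidable it should be possible to relax the limit as root with something along the lines of the command below; this was not verified as part of the project.

sysctl -w net.inet.icmp.icmplim=0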


7 Conclusion

7.1 Fulfilment of Aims

The aims of this project were split into two closely related sections: research that would be required to implement the system successfully, and implementation aims which defined what was required of the actual implementation.

7.1.1 Research Aims

The research aims were outlined in the project proposal as follows:

- Research will investigate the existing routing technologies and their implementation
- Research Linux kernel programming and Linux internals, to accurately decide whether to use a userspace or kernel space implementation
- Research previously available technologies for faulty routing

On the whole, the research aims were completed quite thoroughly. The nature of the project, and my lack of experience of developing at such a low level (kernel level), required a significant amount of work in first understanding the underlying architecture before work could be carried out and a decision could be made on how the system should be implemented. Linux internals were investigated in depth using a variety of good resources available on the Internet. In order to construct the system, investigation was carried out into how to create a simple PC-based Linux router. Previous technologies allowing the control of faulty routing were found, and a number of tools similar to the one developed in this project were discovered. While the different tools were found, more work could have been carried out to investigate how they performed: it would have been a useful addition to the project to try the different tools and evaluate their actual usage to a greater degree, whereas papers and documentation were used to get a feel for the limitations of each tool. One other research area, not directly outlined in the aims, was to find more information on specific faulty environments; to a large degree this was not investigated particularly well, and this limited the different simulated conditions which could be created on the router.


7.1.2 Implementation Aims

The implementation aims from the project proposal were as follows:

- A tool should be produced to run on a Linux-based PC router to alter packet flow according to specified parameters:
  - Allow packets to be dropped
  - Allow packets complying with certain criteria to be dropped
  - Allow packets to be reordered
  - Allow packets to be delayed
- A friendly user interface should be available in order to control the way the tool appears faulty; changing the parameters should be reasonably simple

The implementation aims were almost completely accomplished: packets can be dropped, delayed and reordered, and this can be done selectively based on source IP address, destination IP address and (a limited selection of) protocols. The router can be controlled in a relatively simple way using a command-line tool; however, this may well be too tricky for some users, and an easier GUI-style interface for altering rules on the controllable faulty router would further the fulfilment of that aim.

7.2 Deficiencies and How They Should Be Addressed

Throughout the development of the project, a number of less than ideal solutions were applied to problems in order to simplify the complexities of implementation. The places where the implemented solution could be improved to make the implementation more robust are outlined in this section.

7.2.1 Inefficient Rule Storage

The current storage structure for rules is a particularly poor implementation. Rules are stored in an unordered linked list, and in order to check whether a packet has a rule applied to it, each rule in the list is examined individually until a match is found. If a packet has no rules applied to it, it still has to be compared against every single rule. As was mentioned in Section 5.3.3, the efficiency of this operation is O(n), and every packet is examined in this way as it passes through the router. Rule selection is clearly a fundamental operation, and it is worth making it as efficient as possible.


The efficiency of rule checking could be vastly improved with a better rule structure, keeping the large, inefficient comparison operations to a minimum. One possible approach would be to use a hash function to map certain header parameters (source/destination IP address or protocol, in the current case) to the rules concerning them, immediately reducing the number of rules that have to be examined: with a hash in place, only rules relevant to the current header values would be checked. The disadvantage of a hash table is that it could require a significant amount of memory to be allocated in order to map the full range of possible header values.
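As a simple illustration of the idea, rules could be bucketed by protocol number so that a packet is only compared against rules for its own protocol; hashing on addresses would follow the same pattern. The structure and names below are invented for the example and do not reflect the project's actual rule structure.

#include <linux/types.h>

#define RULE_BUCKETS 256        /* one bucket per IP protocol number */

struct rule {
        struct rule *next;
        u32          src, dst;  /* 0 acts as a wildcard address */
        u8           protocol;
        /* ... fault parameters ... */
};

static struct rule *rule_buckets[RULE_BUCKETS];

/* Only the rules in the bucket for this protocol are examined. */
static struct rule *find_rule(u8 protocol, u32 src, u32 dst)
{
        struct rule *r;

        for (r = rule_buckets[protocol]; r != NULL; r = r->next)
                if ((r->src == 0 || r->src == src) &&
                    (r->dst == 0 || r->dst == dst))
                        return r;
        return NULL;
}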

7.2.2 Rules Overstepping Each Other

As was noted in Section 5.3.3, rules can overstep one another with the current rule design: one rule can overrule the effects of another, which can cause unexpected behaviour. While duplicate rules are detected, overstepping can happen when one rule affects a whole subset of packets (for example all ICMP packets from any host to any host) while other rules exist to alter the behaviour of individual ICMP traffic between specific hosts. The first alteration to make this deficiency less of an issue would be to check for overstepping rules when a rule is added; then, at the very least, the user adding the rule could be warned that it oversteps another and given the option to abort the operation. A simple warning system may not be an ideal solution where rules which overstep each other need to exist concurrently; it may be better to introduce a system of rule priorities instead. A revised rule structure could assign rules to different priorities of execution, so that an individual delay on an incoming ICMP packet could be given a high priority and hence be checked before the catch-all rule affecting all ICMP packets.

7.2.3 Multiple Faults in Rules

Currently it is possible to specify rules with multiple faults, for example a rule which allows some packets to be delayed and some to be dropped completely, with varying probabilities of each fault occurring. One scenario could be a rule defining a 50% probability that packets are dropped and a 50% probability that packets are delayed. While a rule can be specified with multiple coexisting faults, the router currently does not perform this function very well. As each packet is received, the router checks whether a drop is going to take place, followed by a check for a delay, and so on through each fault. These checks are conducted in a hard-coded order which never changes, which means that faults checked first are favoured over those later down the checklist and are executed immediately; in a situation where two faults both have a high probability of execution, the one checked first will be executed more frequently. This is a clear design fault. There is also no support for more complex fault structures, for example a rule specifying that a packet is delayed with a 75% probability and, if it is not delayed, dropped; the current rule selection treats each fault as an individual test and gives no scope to integrate the faults. To address the problem of faults being checked in a fixed order (and one fault being favoured over another), a number of strategies could be used: faults could be checked for execution in a random order, or each fault could first be checked to see whether it would execute and then one of the eligible faults chosen at random to run on the packet. Enabling multiple faults to integrate within a rule would require a far more complex rule structure allowing chaining of rules, so that an alternative fault can follow when one fault is not executed. There is considerable scope for the design of such a rule structure, which would enable more complex emulation environments to be set up whilst reusing the current fault functions.
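The second strategy mentioned above, collecting the faults whose probability check passes and then choosing one of them at random, could be sketched roughly as follows; the names (including the would_fire() helper) are invented for the example, and this is not the project's code.

#include <linux/random.h>

struct rule;

enum fault { FAULT_DROP, FAULT_DELAY, FAULT_REORDER, FAULT_MAX };

/* Assumed to wrap the existing per-fault probability check. */
extern int would_fire(enum fault f, const struct rule *r);

/* Return one randomly chosen fault from those that fired, or FAULT_MAX
 * if none of them fired for this packet. */
static enum fault pick_fault(const struct rule *r)
{
        enum fault eligible[FAULT_MAX];
        unsigned int n = 0, i;

        for (i = 0; i < FAULT_MAX; i++)
                if (would_fire(i, r))
                        eligible[n++] = i;

        if (n == 0)
                return FAULT_MAX;

        get_random_bytes(&i, sizeof(i));
        return eligible[i % n];
}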

7.2.4 Improved Accuracy of Delay Times

Although the delay function produced roughly the correct amount of delay on packets, it was not quite as accurate as it could have been. When a delay of 10ms was defined, some packets would come in just under 10ms and some just over. While it is acceptable for the precise time a packet takes to reach its destination to vary, a rule should guarantee that packets take at least the amount of time specified; the behaviour should be modified so that a packet never arrives sooner than this.

7.2.5 Improved Reordering Support

The way that reordering has been implemented within the project is simplified and slightly too uniform: the reordering functions do not allow enough flexibility to provide a random reordering pattern, so if packets are sent into the reordering function with the same parameters, they will leave in the same (altered) order each time. The reordering implementation was very much a proof of concept to allow reordering to be demonstrated, and further work needs to be undertaken to allow reordering to be carried out in a more flexible manner.

7.3 Future Work

As well as addressing the issues raised in Section 7.2, there is scope for work to further the functionality and usefulness of this project.

7.3.1 Easier Control

As the aim of providing an easy user interface for manipulating the functionality of the controllable faulty router was not fully met, this is an area with immediate scope for further development. An alternative to the command-line tool could be created for users who are uncomfortable with a command-line interface: a graphical user interface could be developed to control the router, or a web front end could be used to allow easier remote control. Such a control system could either put a friendlier front end on the existing command-line tool or implement the communication mechanism natively in a new application. One issue with an easier control system would be access rights: currently the userspace communication tool needs administrative rights on the router, which would not be a practical way to allow many people to control the router's functionality. With many people having access to the router, other solutions and security measures would have to be evaluated.

7.3.2 IPv6 Support

The netfilter architecture fully supports IPv6, and the alterations required to allow the system to support IPv6 as well as IPv4 would not be huge. As more and more networks move to IPv6, it would be a natural progression to provide IPv6 support in the controllable faulty router, providing functionality that most of the other emulation environments investigated in this report do not provide.
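As an indication of the scale of the change, registering an additional netfilter hook for IPv6 traffic looks very similar to the IPv4 case. The fragment below is a hedged sketch for a 2.4/2.6-era kernel (the ipv6_hook function is assumed to be defined elsewhere with the usual hook prototype); it is not code from the project.

#include <linux/netfilter.h>
#include <linux/netfilter_ipv6.h>

static struct nf_hook_ops ipv6_ops = {
        .hook     = ipv6_hook,            /* same prototype as the IPv4 hook */
        .pf       = PF_INET6,             /* IPv6 protocol family */
        .hooknum  = NF_IP6_PRE_ROUTING,   /* counterpart of NF_IP_PRE_ROUTING */
        .priority = NF_IP6_PRI_FIRST,
};

/* in the module init function: nf_register_hook(&ipv6_ops); */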

7.3.3 Support More Rule Parameters

The current rule structure is relatively basic, supporting only three different protocols and matching by source and destination IP address. This could be expanded to support more protocols and additional header fields: port numbers and packet size are potential parameters which rules could respond to. Using different sizes of packets could produce interesting behaviour, for example demonstrating a router which drops packets based on their size characteristics.
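As a sketch of what port matching might involve, a netfilter hook can read the TCP ports directly from the packet once the IP header has been located. The fragment below is illustrative only, assuming 2.4/2.6-era structures (where the IP header is reached via skb->nh.iph); it is not part of the project's code.

#include <linux/types.h>
#include <linux/in.h>
#include <linux/ip.h>
#include <linux/tcp.h>
#include <linux/skbuff.h>

/* Return the TCP destination port of an skb, or 0 if it is not TCP. */
static u16 tcp_dest_port(const struct sk_buff *skb)
{
        struct iphdr  *iph = skb->nh.iph;
        struct tcphdr *tcph;

        if (iph->protocol != IPPROTO_TCP)
                return 0;

        /* ihl is the IP header length in 32-bit words. */
        tcph = (struct tcphdr *)((u32 *)iph + iph->ihl);
        return ntohs(tcph->dest);
}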

7.4 Lessons Learnt

The project provided me with an insight into many areas of computing which I had previously only touched upon in an abstract way; looking into Linux internals was both highly interesting and challenging. Changing fundamental parts of the way the operating system behaves gives an empowering feeling which is satisfying after spending vast amounts of time discovering how things work. I learnt a lot about the Linux operating system by completing the system: what kinds of errors can occur in different situations and how to debug them. Developing code which runs at the kernel level requires different debugging skills to conventional programming; when things go wrong, recovering from serious errors requires careful auditing of code to ensure the same error does not occur again. I have learnt how to find information on code which others have written, even if it comes down to reading the actual implementation of a function itself. I learnt about a variety of different tools for measuring network performance, about producing graphs of the various functions, and about developing code to run in the Linux kernel. Subscribing to mailing lists on Linux networking whilst developing the project provided greater immersion in how things work; reading the trials and tribulations of others carrying out completely different projects gave me a wide-ranging perspective on other areas of networking currently being worked on in the community.

7.5 Final Overview

The project has been a success on most counts: it carries out reordering, delaying and dropping based on configurable parameters, and the resulting system has a real effect on networking performance which can be seen very easily by running applications over the router. The router runs reliably and transparently and can be quickly set up to reproduce the network characteristics of a specific link or network infrastructure. The project furthered my personal understanding of many concepts and systems a great deal; so much was learnt that it is impossible to document everything I discovered, but the knowledge is now present, and knowing a little more about how things work behind the systems I use on a daily basis gives a greater understanding of how to develop and design systems, from both a high-level and a low-level perspective, in the future.


Acknowledgements

Acknowledgement is given to the hard work of the following individuals and groups who have been of assistance in developing this project:

The Netfilter Project: The numerous documents explaining detailed concepts behind the Linux networking architecture were of great help in this project, and the mailing list is of very high educational value.

The Linux Documentation Project: Various guides produced by numerous individuals helped in gaining a detailed understanding of many wide-ranging areas of development and network configuration in Linux.

Michael Clarke: For providing information and resources regarding the characteristics of the CLEO[20] network which is maintained by Lancaster University.

Paul Tipper: For providing helpful information on the netfilter architecture; his FYP report[48] was of great help as a source of reference resources and as a structural model for this report.

Alex Paterson, Chris Haslam & Julia Forsberg: For their part in reviewing the language in various parts of this report at different stages of drafting.

References
[1] World Internet Users and Population Stats. http://www.internetworldstats.com/stats.htm
[2] Ofcom (2004). October Review of UK Communications Market. http://www.ofcom.org.uk/research/industry_market_research/m_i_index/cm/qu_10_2004/cm_qu_10_2004.pdf
[3] Microsoft. Xbox Live UK. http://www.xbox.com/en-gb/live/default.htm
[4] Larry L. Peterson, Bruce S. Davie (2003). Computer Networks: A Systems Approach. Morgan Kaufmann Publishers, San Francisco.
[5] Internet Assigned Numbers Authority. http://www.iana.org/

[6] J. Postel (1981). RFC 791 - Internet Protocol: DARPA Internet Program Protocol Specification. http://www.faqs.org/rfcs/rfc791.html
[7] D. Waitzman (1990). RFC 1149 - Standard for the Transmission of IP Datagrams on Avian Carriers. http://www.faqs.org/rfcs/rfc1149.html
[8] V. Fuller, T. Li, J. Yu, K. Varadhan (1993). RFC 1519 - Classless Inter-Domain Routing (CIDR): an Address Assignment and Aggregation Strategy. http://www.faqs.org/rfcs/rfc1519.html
[9] S. Deering, R. Hinden (1998). RFC 2460 - Internet Protocol, Version 6 (IPv6) Specification. http://www.faqs.org/rfcs/rfc2460.html
[10] Y. Rekhter, B. Moskowitz, D. Karrenberg, G. J. de Groot, E. Lear (1996). RFC 1918 - Address Allocation for Private Internets. http://www.faqs.org/rfcs/rfc1918.html
[11] D. C. Plummer (1982). RFC 826 - Ethernet Address Resolution Protocol: Or Converting Network Protocol Addresses to 48.bit Ethernet Address for Transmission on Ethernet Hardware. http://www.faqs.org/rfcs/rfc826.html
[12] J. Postel (1981). RFC 793 - Transmission Control Protocol. http://www.faqs.org/rfcs/rfc793.html
[13] W. Stevens (1997). RFC 2001 - TCP Slow Start, Congestion Avoidance, Fast Retransmit, and Fast Recovery Algorithms. http://www.faqs.org/rfcs/rfc2001.html
[14] J. Postel (1980). RFC 768 - User Datagram Protocol. http://www.faqs.org/rfcs/rfc768.html
[15] J. Postel (1981). RFC 792 - Internet Control Message Protocol. http://www.faqs.org/rfcs/rfc792.html
[16] A. Tanenbaum (2004). Minix Information Sheet, Vrije University. http://www.cs.vu.nl/~ast/minix.html
[17] Linus Torvalds Homepage, Helsinki University. http://www.cs.helsinki.fi/u/torvalds/
[18] IBM Linux Portal. http://www-1.ibm.com/linux/
[19] HP and Linux. http://www.hp.com/linux


[20] CLEO - Cumbria Lancashire Education Online. http://www.cleo.net.uk/
[21] B. Forde (2005). Networking in the North West: An ISS Perspective.
[22] A. Tirumala, F. Qin, J. Dugan, J. Ferguson, K. Gibbs (2004). NLANR/DAST: Iperf 1.7.0 - The TCP/UDP Bandwidth Measurement Tool. http://dast.nlanr.net/Projects/Iperf/
[23] B. Greear (2005). LANforge Project Homepage. http://freshmeat.net/project/lanforge/
[24] Agilent Technologies Homepage. http://www.agilent.com/
[25] L. Rizzo (1997). Dummynet: A Simple Approach to the Evaluation of Network Protocols. ACM Computer Communication Review 27.
[26] L. Rizzo. Dummynet Homepage. http://info.iet.unipi.it/~luigi/ip_dummynet/
[27] M. Allman, A. Caldwell, S. Ostermann. ONE: The Ohio Network Emulator. Technical Report TR-19972, Ohio University.
[28] A. Caldwell, S. Ostermann, M. Allman, J. McKim (2001). ONE - The Ohio Network Emulator Homepage. http://masaka.cs.ohiou.edu/one/
[29] N. Provos (2005). Developments of the Honeyd Virtual Honeypot. http://www.honeyd.org/
[30] M. Carson, D. Santay (2003). NIST Net: A Linux-based Network Emulation Tool. ACM Computer Communication Review 33.
[31] National Institute of Standards and Technology (2002). NIST Net Homepage. http://snad.ncsl.nist.gov/nistnet/
[32] Red Hat Corporation. http://www.redhat.com/
[33] R. Stallman. Linux and the GNU Project. http://www.gnu.org/gnu/linux-and-gnu.html
[34] Novell, SuSE Linux. http://www.novell.com/linux/suse/index.html
[35] Gentoo Linux. http://www.gentoo.org/


[36] Debian GNU/Linux. http://www.debian.org/
[37] Google Homepage. http://www.google.com/
[38] H. Welte (2000). The Journey of a Packet Through the Linux 2.4 Network Stack. http://gnumonks.org/ftp/pub/doc/packet-journey-2.4.html
[39] H. Welte (2000). skb - Linux Network Buffers. http://gnumonks.org/ftp/pub/doc/skb-doc.html
[40] The netfilter/iptables Project. http://www.netfilter.org/
[41] P. Russell, H. Welte. Linux netfilter Hacking HOWTO. http://www.netfilter.org/documentation/HOWTO//netfilter-hacking-HOWTO.html
[42] P. J. Salzman, O. Pomerantz (2001). The Linux Kernel Module Programming Guide. http://www.tldp.org/LDP/lkmpg/2.4/html/
[43] P. J. Salzman, O. Pomerantz (2001). Passing Command Line Arguments to a Module. http://www.tldp.org/LDP/lkmpg/2.4/html/x354.html
[44] J. Leffler, S. Fabry, N. Joy, P. Lapsley, S. Miller, C. Torek (1993). An Advanced BSD Interprocess Communication Tutorial. Computer Systems Research Group, University of California, Berkeley; Heterogeneous Systems Laboratory, University of Maryland. http://www-users.cs.umn.edu/~bentlema/unix/advipc/ipc.html
[45] tcpdump Public Repository. http://www.tcpdump.org/
[46] T. Oetiker (2005). SmokePing. Department of IT and Electrical Engineering, Swiss Federal Institute of Technology. http://people.ee.ethz.ch/~oetiker/webtools/smokeping/
[47] cURL and libcurl. http://curl.haxx.se/
[48] P. Tipper (2004). IPv4 to IPv6 Bump In Stack. http://www.lancs.ac.uk/~tipper/fyp/


A 30 Packet Pool Reordering Tcpdump


06:13:10.030816 IP 10.0.9.1 > gateway: icmp 64: echo request seq 1
06:13:10.041058 IP 10.0.9.1 > gateway: icmp 64: echo request seq 2
06:13:10.061054 IP 10.0.9.1 > gateway: icmp 64: echo request seq 3
06:13:10.081051 IP 10.0.9.1 > gateway: icmp 64: echo request seq 4
06:13:10.101046 IP 10.0.9.1 > gateway: icmp 64: echo request seq 5
06:13:10.121045 IP 10.0.9.1 > gateway: icmp 64: echo request seq 6
06:13:10.141042 IP 10.0.9.1 > gateway: icmp 64: echo request seq 7
06:13:10.161039 IP 10.0.9.1 > gateway: icmp 64: echo request seq 8
06:13:10.181036 IP 10.0.9.1 > gateway: icmp 64: echo request seq 9
06:13:10.201032 IP 10.0.9.1 > gateway: icmp 64: echo request seq 10
06:13:10.221027 IP 10.0.9.1 > gateway: icmp 64: echo request seq 11
06:13:10.241026 IP 10.0.9.1 > gateway: icmp 64: echo request seq 12
06:13:10.261022 IP 10.0.9.1 > gateway: icmp 64: echo request seq 13
06:13:10.281018 IP 10.0.9.1 > gateway: icmp 64: echo request seq 14
06:13:10.291018 IP 10.0.9.1 > gateway: icmp 64: echo request seq 15
06:13:10.311015 IP 10.0.9.1 > gateway: icmp 64: echo request seq 16
06:13:10.321014 IP 10.0.9.1 > gateway: icmp 64: echo request seq 17
06:13:10.341010 IP 10.0.9.1 > gateway: icmp 64: echo request seq 18
06:13:10.361006 IP 10.0.9.1 > gateway: icmp 64: echo request seq 19
06:13:10.371013 IP 10.0.9.1 > gateway: icmp 64: echo request seq 20
06:13:10.391025 IP 10.0.9.1 > gateway: icmp 64: echo request seq 21
06:13:10.410998 IP 10.0.9.1 > gateway: icmp 64: echo request seq 22
06:13:10.430995 IP 10.0.9.1 > gateway: icmp 64: echo request seq 23
06:13:10.450992 IP 10.0.9.1 > gateway: icmp 64: echo request seq 24
06:13:10.470990 IP 10.0.9.1 > gateway: icmp 64: echo request seq 25
06:13:10.490986 IP 10.0.9.1 > gateway: icmp 64: echo request seq 26
06:13:10.510982 IP 10.0.9.1 > gateway: icmp 64: echo request seq 27
06:13:10.530979 IP 10.0.9.1 > gateway: icmp 64: echo request seq 28
06:13:10.550976 IP 10.0.9.1 > gateway: icmp 64: echo request seq 29
06:13:10.570973 IP 10.0.9.1 > gateway: icmp 64: echo request seq 30
06:13:10.576286 IP gateway > 10.0.9.1: icmp 64: echo reply seq 30
06:13:10.576339 IP 10.0.9.1 > gateway: icmp 64: echo request seq 31
06:13:10.576374 IP gateway > 10.0.9.1: icmp 64: echo reply seq 28
06:13:10.576478 IP gateway > 10.0.9.1: icmp 64: echo reply seq 26
06:13:10.576842 IP gateway > 10.0.9.1: icmp 64: echo reply seq 24
06:13:10.576892 IP gateway > 10.0.9.1: icmp 64: echo reply seq 22
06:13:10.576934 IP gateway > 10.0.9.1: icmp 64: echo reply seq 20
06:13:10.577061 IP gateway > 10.0.9.1: icmp 64: echo reply seq 18
06:13:10.577507 IP gateway > 10.0.9.1: icmp 64: echo reply seq 16
06:13:10.577926 IP gateway > 10.0.9.1: icmp 64: echo reply seq 14
06:13:10.578332 IP gateway > 10.0.9.1: icmp 64: echo reply seq 12
06:13:10.578764 IP gateway > 10.0.9.1: icmp 64: echo reply seq 10
06:13:10.579194 IP gateway > 10.0.9.1: icmp 64: echo reply seq 8
06:13:10.579629 IP gateway > 10.0.9.1: icmp 64: echo reply seq 6
06:13:10.580056 IP gateway > 10.0.9.1: icmp 64: echo reply seq 4
06:13:10.580479 IP gateway > 10.0.9.1: icmp 64: echo reply seq 2
06:13:10.581336 IP gateway > 10.0.9.1: icmp 64: echo reply seq 1
06:13:10.581771 IP gateway > 10.0.9.1: icmp 64: echo reply seq 3
06:13:10.582198 IP gateway > 10.0.9.1: icmp 64: echo reply seq 5
06:13:10.582638 IP gateway > 10.0.9.1: icmp 64: echo reply seq 7
06:13:10.583067 IP gateway > 10.0.9.1: icmp 64: echo reply seq 9
06:13:10.583499 IP gateway > 10.0.9.1: icmp 64: echo reply seq 11
06:13:10.583929 IP gateway > 10.0.9.1: icmp 64: echo reply seq 13
06:13:10.584358 IP gateway > 10.0.9.1: icmp 64: echo reply seq 15
06:13:10.584782 IP gateway > 10.0.9.1: icmp 64: echo reply seq 17
06:13:10.585214 IP gateway > 10.0.9.1: icmp 64: echo reply seq 19
06:13:10.585647 IP gateway > 10.0.9.1: icmp 64: echo reply seq 21
06:13:10.586086 IP gateway > 10.0.9.1: icmp 64: echo reply seq 23
06:13:10.586603 IP gateway > 10.0.9.1: icmp 64: echo reply seq 25
06:13:10.586645 IP 10.0.9.1 > gateway: icmp 64: echo request seq 32
06:13:10.587058 IP gateway > 10.0.9.1: icmp 64: echo reply seq 27
06:13:10.587501 IP gateway > 10.0.9.1: icmp 64: echo reply seq 29
06:13:10.597966 IP 10.0.9.1 > gateway: icmp 64: echo request seq 33
06:13:10.607970 IP 10.0.9.1 > gateway: icmp 64: echo request seq 34
06:13:10.627967 IP 10.0.9.1 > gateway: icmp 64: echo request seq 35
06:13:10.637968 IP 10.0.9.1 > gateway: icmp 64: echo request seq 36
06:13:10.657962 IP 10.0.9.1 > gateway: icmp 64: echo request seq 37
06:13:10.667963 IP 10.0.9.1 > gateway: icmp 64: echo request seq 38
06:13:10.687958 IP 10.0.9.1 > gateway: icmp 64: echo request seq 39
06:13:10.697957 IP 10.0.9.1 > gateway: icmp 64: echo request seq 40
06:13:10.717952 IP 10.0.9.1 > gateway: icmp 64: echo request seq 41
06:13:10.727953 IP 10.0.9.1 > gateway: icmp 64: echo request seq 42
06:13:10.747967 IP 10.0.9.1 > gateway: icmp 64: echo request seq 43
06:13:10.767941 IP 10.0.9.1 > gateway: icmp 64: echo request seq 44
06:13:10.787938 IP 10.0.9.1 > gateway: icmp 64: echo request seq 45
06:13:10.807935 IP 10.0.9.1 > gateway: icmp 64: echo request seq 46
06:13:10.827931 IP 10.0.9.1 > gateway: icmp 64: echo request seq 47
06:13:10.847927 IP 10.0.9.1 > gateway: icmp 64: echo request seq 48
06:13:10.857927 IP 10.0.9.1 > gateway: icmp 64: echo request seq 49
06:13:10.877923 IP 10.0.9.1 > gateway: icmp 64: echo request seq 50
06:13:11.573603 IP gateway > 10.0.9.1: icmp 64: echo reply seq 50
06:13:11.573995 IP gateway > 10.0.9.1: icmp 64: echo reply seq 48
06:13:11.574872 IP gateway > 10.0.9.1: icmp 64: echo reply seq 45
06:13:11.575309 IP gateway > 10.0.9.1: icmp 64: echo reply seq 43
06:13:11.575736 IP gateway > 10.0.9.1: icmp 64: echo reply seq 41
06:13:11.576165 IP gateway > 10.0.9.1: icmp 64: echo reply seq 39
06:13:11.576643 IP gateway > 10.0.9.1: icmp 64: echo reply seq 37
06:13:11.577073 IP gateway > 10.0.9.1: icmp 64: echo reply seq 35
06:13:11.577501 IP gateway > 10.0.9.1: icmp 64: echo reply seq 33
06:13:11.577926 IP gateway > 10.0.9.1: icmp 64: echo reply seq 31
06:13:11.578357 IP gateway > 10.0.9.1: icmp 64: echo reply seq 32
06:13:11.578791 IP gateway > 10.0.9.1: icmp 64: echo reply seq 34
06:13:11.579222 IP gateway > 10.0.9.1: icmp 64: echo reply seq 36
06:13:11.579656 IP gateway > 10.0.9.1: icmp 64: echo reply seq 38
06:13:11.580083 IP gateway > 10.0.9.1: icmp 64: echo reply seq 40
06:13:11.580508 IP gateway > 10.0.9.1: icmp 64: echo reply seq 42
06:13:11.580933 IP gateway > 10.0.9.1: icmp 64: echo reply seq 44
06:13:11.581362 IP gateway > 10.0.9.1: icmp 64: echo reply seq 46
06:13:11.581784 IP gateway > 10.0.9.1: icmp 64: echo reply seq 47
06:13:11.582218 IP gateway > 10.0.9.1: icmp 64: echo reply seq 49

B Project Proposal

