CN Manual
Preface:
Computer Networks laboratory covers the implementation of basic networking concepts and
simulation of advanced concepts. The prerequisite for this laboratory is the understanding of fundamentals of
computer networks. There are two parts in this laboratory. The first part deals with simulation of networking
concepts. The second part deals with how to implement the message transfer among the systems using inter
process communication (IPC) techniques, security algorithms, routing algorithms, error detection and flow
and congestion control techniques.
Starting ns
You start ns with the command 'ns <tclscript>' (assuming that you are in the directory with the ns
executable, or that your path points to that directory), where '<tclscript>' is the name of a Tcl (Tool Command
Language) script file which defines the simulation scenario (i.e. the topology and the events). You can also
just start ns without any arguments and enter the Tcl commands in the Tcl shell, but that is definitely less comfortable.
Starting nam (Network Animator)
You can either start nam with the command 'nam <nam-file>', where '<nam-file>' is the name of a nam trace file that was generated by ns, or you can execute it directly out of the Tcl simulation script for the simulation which you want to visualize.
Introduction to NS-2
• Widely known as NS2, it is simply an event-driven simulation tool.
• Useful in studying the dynamic nature of communication networks.
• Simulation of wired as well as wireless network functions and protocols (e.g., routing algorithms,
TCP, UDP) can be done using NS2.
• In general, NS2 provides users with a way of specifying such network protocols and simulating their
corresponding behaviours.
Tcl scripting
• Tcl is a general purpose scripting language. [Interpreter]
• Tcl runs on most platforms, such as Unix, Windows, and Mac.
• The strength of Tcl is its simplicity.
• It is not necessary to declare a data type for a variable prior to its usage.
Basics of TCL
NS Simulator Preliminaries.
The above creates a trace file called out.tr and a nam visualization trace file called out.nam. Within the Tcl
script, these files are not referred to by their names, but by the file handles declared above,
called tracefile1 and namfile respectively. Note that comment lines begin with a # symbol. The second line opens the
file out.tr for writing, declared with the letter w. The third line uses a simulator method called trace-all
that takes as its parameter the name of the file where the traces will go.
The last line tells the simulator to record all simulation traces in NAM input format. It also gives the file name
that the trace will be written to later by the command $ns flush-trace.
In our case, this will be the file pointed at by the handle $namfile, i.e. the file out.nam.
The termination of the program is done using a finish procedure.
proc finish { } {
global ns tracefile1 namfile
$ns flush-trace
close $tracefile1
close $namfile
exec nam out.nam &
exit 0
}
The word proc declares a procedure, in this case called finish, without arguments. The word global is used
to tell Tcl that we are using variables declared outside the procedure. The simulator method flush-trace will dump
the traces to the respective files. The Tcl command close closes the trace files defined before, and exec executes
the nam program for visualization. The command exit ends the application and returns the number 0 as
status to the system. Zero is the default for a clean exit; other values can be used to indicate that the exit is due to
a failure.
At the end of the ns program we should call the procedure finish and specify at what time the termination should
occur. For example,
$ns at 125.0 "finish"
calls finish at time 125 s. Indeed, the at method of the simulator allows us to schedule events
explicitly.
Once we define several nodes, we can define the links that connect them. An example of a definition of a link
is:
In NS, an output queue of a node is implemented as a part of each link whose input is
that node. The definition of the link then includes the way to handle overflow at that queue.
In our case, if the buffer capacity of the output queue is exceeded then the last packet to
arrive is dropped. Many alternative options exist, such as the RED (Random Early Discard) mechanism,
FQ (Fair Queuing), DRR (Deficit Round Robin), Stochastic Fair
Queuing (SFQ) and CBQ (which includes a priority and a round-robin scheduler).
We should also define the buffer capacity of the queue related to each link. An
example would be:
We need to define the routing (sources, destinations), the agents (protocols), and the applications that use them.
TCP is a dynamic, reliable congestion-control protocol. It uses acknowledgements created by the destination
to know whether packets were received correctly.
There are a number of variants of the TCP protocol, such as Tahoe, Reno, NewReno and
Vegas. The type of agent appears in the first line:
When we have several flows, we may wish to distinguish them so that we can identify them with different
colors in the visualization part. This is done by the command $tcp set fid_ 1, which assigns to the TCP connection
a flow identification of "1". We shall later give the flow identification of "2" to the UDP connection.
A UDP source and destination is defined in a similar way as in the case of TCP.
Instead of defining the rate with the command $cbr set rate_ 0.01Mb, one can define the time interval between
transmission of packets using the command:
$cbr set interval_ 0.005
The packet size can be set to some value using
$cbr set packetSize_ <packet size>
Scheduling Events
NS is a discrete event-based simulator. The Tcl script defines when events should
occur. The initializing command set ns [new Simulator] creates an event scheduler, and
events are then scheduled using the format:
$ns at <time> <event>
The scheduler is started when running ns, that is, through the command $ns run.
The beginning and end of the FTP and CBR application can be done through the following command
When tracing into an output ASCII file, the trace is organized in 12 fields, as shown below. The
meaning of the fields is:
Event | Time | From Node | To Node | Pkt Type | Pkt Size | Flags | Fid | Src Addr | Dest Addr | Seq Num | Pkt Id
1. The first field is the event type. It is given by one of four possible symbols r, +, -, d which correspond
respectively to receive (at the output of the link), enqueued, dequeued and dropped.
2. The second field gives the time at which the event occurs.
3. Gives the input node of the link at which the event occurs.
4. Gives the output node of the link at which the event occurs.
5. Gives the packet type (e.g. CBR or TCP).
6. Gives the packet size.
7. Some flags.
8. This is the flow id (fid) of IPv6 that a user can set for each flow in the input OTcl script. One can further
use this field for analysis purposes; it is also used when specifying stream color for the NAM display.
9. This is the source address given in the form of “node.port”.
10. This is the destination address, given in the same form.
11. This is the network layer protocol's packet sequence number. Even though UDP implementations in
a real network do not use sequence numbers, ns keeps track of UDP packet sequence numbers for
analysis purposes.
12. The last field shows the Unique id of the packet.
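As a quick illustration of how these 12 fields are consumed in practice, the sketch below counts dropped packets ('d' events) by splitting trace lines on whitespace and testing the first field; the sample lines are illustrative, not taken from a real run.

```java
public class TraceStats {
    // Count dropped packets ('d' in field 1) in ns-2 ASCII trace lines
    // of the 12-field format described above.
    static int countDrops(String[] lines) {
        int drops = 0;
        for (String line : lines) {
            String[] fields = line.trim().split("\\s+");
            if (fields.length > 0 && fields[0].equals("d")) // field 1: event type
                drops++;
        }
        return drops;
    }

    public static void main(String[] args) {
        String[] sample = { // made-up lines in the trace format above
            "+ 1.84375 0 2 cbr 210 ------- 1 0.0 3.1 225 610",
            "d 1.84471 0 2 cbr 210 ------- 1 0.0 3.1 226 611",
            "r 1.84609 0 2 cbr 210 ------- 1 0.0 3.1 225 610"
        };
        System.out.println("Dropped packets: " + countDrops(sample)); // prints 1
    }
}
```

The same count is usually produced from a real trace file with an awk one-liner over field 1.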
XGRAPH
The xgraph program draws a graph on an X display given data read from either a data file or from standard
input if no files are specified. It can display up to 64 independent data sets using different colors and line
styles for each set. It annotates the graph with a title, axis labels, grid lines or tick marks, grid labels and a
legend.
Syntax:
xgraph [options] file-name
Awk - An Advanced Filter
Awk is a programmable, pattern-matching, and processing tool available in UNIX. It works equally well
with text and numbers.
Awk is not just a command, but a programming language too. In other words, the awk utility is a pattern
scanning and processing language. It searches one or more files to see if they contain lines that match
specified patterns and then performs associated actions, such as writing the line to the standard output or
incrementing a counter each time it finds a match.
Syntax:
awk option 'selection_criteria {action}' file(s)
Here, selection_criteria filters the input and selects lines for the action component to act
upon. The selection_criteria is enclosed within single quotes and the action within curly
braces. Together, the selection_criteria and action form an awk program.
Example: $ awk '/manager/ {print}' emp.lst
Variables
Awk allows the user to use variables of their choice. You can now print a serial number, using the variable
kount, and apply it to those directors drawing a salary exceeding 6700:
$ awk -F"|" '$3 == "director" && $6 > 6700 {
kount = kount + 1
printf "%3d %20s %-12s %d\n", kount, $2, $3, $6 }' empn.lst
THE –f OPTION: STORING awk PROGRAMS IN A FILE
You should hold large awk programs in separate files and provide them with the .awk extension for easier
identification. Let's first store the previous program in the file
empawk.awk:
$ cat empawk.awk
Observe that this time we haven’t used quotes to enclose the awk program. You can
now use awk with the –f filename option to obtain the same output:
Dept. of CSE, ACSCE Page 8
Computer Network Laboratory -21CS52 V SEM/BE
Awk statements are usually applied to all lines selected by the address, and if there are no addresses, they
are applied to every line of input. But if you have to print something before processing the first line,
for example a heading, then the BEGIN section can be used gainfully. Similarly, the END section is useful for
printing totals after processing is over.
The BEGIN and END sections are optional and take the form
BEGIN {action}
END {action}
These two sections, when present, surround the body of the awk program. You can use them to print
a suitable heading at the beginning and the average salary at the end.
BUILT-IN VARIABLES
Awk has several built-in variables. They are all assigned automatically, though it is also possible for a user
to reassign some of them. You have already used NR, which signifies the record number of the current line.
We'll now have a brief look at some of the other variables.
The FS Variable: as stated elsewhere, awk uses a contiguous string of spaces as the default field delimiter.
FS redefines this field separator, which in the sample database happens to be the |. When used at all, it must
occur in the BEGIN section so that the body of the program knows its value before it starts processing:
BEGIN {FS="|"}
This is an alternative to the –F option which does the same thing.
The OFS Variable: when you use the print statement with comma-separated arguments,
each argument is separated from the other by a space. This is awk's default output field
separator, and it can be reassigned using the variable OFS in the BEGIN section:
BEGIN {OFS="~"}
When you reassign this variable with a ~ (tilde), awk will use this character for delimiting the
print arguments. This is a useful variable for creating lines with delimited fields.
The NF variable: NF comes in quite handy for cleaning up a database of lines that don’t
contain the right number of fields. By using it on a file, say emp.lst, you can locate those lines
not having 6 fields, and which have crept in due to faulty data entry:
$ awk 'BEGIN {FS = "|"}
NF != 6 {
print "Record No ", NR, "has", NF, "fields"}' empx.lst
The FILENAME Variable: FILENAME stores the name of the current file being processed. Like grep and
sed, awk can also handle multiple filenames in the command line. By default, awk doesn't print the
filename, but you can instruct it to do so:
'$6 < 4000 {print FILENAME, $0}'
With FILENAME, you can devise logic that does different things depending on the file being processed.
NS2 Installation
➢ Go to Computer > File System and paste the archive "ns-allinone-2.34.tar.gz" into the opt folder.
➢ Now unzip the file by typing the following command
[root@localhost opt] # tar -xzvf ns-allinone-2.34.tar.gz
➢ After the files are extracted, we get the ns-allinone-2.34 folder as well as the archive ns-allinone-2.34.tar.gz
[root@localhost opt] # ls
ns-allinone-2.34 ns-allinone-2.34.tar.gz
➢ Now go to the ns-allinone-2.34 folder and install it
[root@localhost opt] # cd ns-allinone-2.34
[root@localhost ns-allinone-2.34] # ./install
➢ Once the installation is completed successfully we get certain pathnames in that terminal which
must be pasted in “.bash_profile” file.
➢ First minimize the terminal where installation is done and open a new terminal and open the file
“.bash_profile”
[root@localhost ~] # vi .bash_profile
➢ When we open this file, we find a line in it, shown below:
PATH=$PATH:$HOME/bin
To this line we must append the paths printed in the previous terminal where ns was installed.
First put ":" and then paste the paths after bin. They are shown below.
":/opt/ns-allinone-2.34/bin:/opt/ns-allinone-2.34/tcl8.4.18/unix:/opt/ns-allinone-2.34/tk8.4.18/unix"
➢ On the next line type "LD_LIBRARY_PATH=$LD_LIBRARY_PATH:" and paste the two paths,
separated by ":", which are present in the previous terminal, i.e. the Important Notices section (1)
"/opt/ns-allinone-2.34/otcl-1.13:/opt/ns-allinone-2.34/lib"
➢ On the next line type "TCL_LIBRARY=$TCL_LIBRARY:" and paste the path which is
present in the previous terminal, i.e. the Important Notices section (2)
"/opt/ns-allinone-2.34/tcl8.4.18/library"
1. Implement a three-node point-to-point network with duplex links between them for different
topologies. Set the queue size, vary the bandwidth, and find the number of packets dropped for various
iterations.
#Create Simulator
set ns [new Simulator]
#Open Trace file and NAM file
set ntrace [open prog1.tr w]
$ns trace-all $ntrace
set namfile [open prog1.nam w]
$ns namtrace-all $namfile
#Create 3 nodes
set n0 [$ns node]
set n1 [$ns node]
set n2 [$ns node]
#Schedule Events
$ns at 0.0 "$cbr0 start"
$ns at 5.0 "Finish"
Output:
2. Implement a simple ESS with transmitting nodes in a wireless LAN by simulation and determine
the throughput with respect to transmission of packets.
#Create a ns simulator
set ns [new Simulator]
-topoInstance $topo \
-agentTrace ON \
-routerTrace ON
#===================================
#Nodes Definition
#===================================
create-god 6
#Create 6 nodes
set n0 [$ns node]
$n0 set X_ 630
$n0 set Y_ 501
$n0 set Z_ 0.0
$ns initial_node_pos $n0 20
set n1 [$ns node]
$n1 set X_ 454
$n1 set Y_ 340
$n1 set Z_ 0.0
$ns initial_node_pos $n1 20
set n2 [$ns node]
$n2 set X_ 785
$n2 set Y_ 326
$n2 set Z_ 0.0
$ns initial_node_pos $n2 20
set n3 [$ns node]
$n3 set X_ 270
$n3 set Y_ 190
$n3 set Z_ 0.0
$ns initial_node_pos $n3 20
set n4 [$ns node]
$n4 set X_ 539
$n4 set Y_ 131
#===================================
#Applications Definition
#===================================
#Setup a CBR Application over UDP connection
set cbr0 [new Application/Traffic/CBR]
$cbr0 attach-agent $udp0
#===================================
#Termination
#===================================
#Define a 'finish' procedure
proc finish {} {
global ns tracefile namfile
$ns flush-trace
close $tracefile
close $namfile
}
$ns at 1.0 "$cbr0 start"
$ns at 2.0 "$ftp0 start"
$ns at 180.0 "$ftp0 stop"
$ns at 200.0 "$cbr0 stop"
$ns at 200.0 "finish"
$ns at 70 "$n4 set dest 100 60 20"
$ns at 100 "$n4 set dest 700 300 20"
Output:
3. Write a program for error detecting code using CRC-CCITT (16- bits).
Whenever digital data is stored or transferred, data corruption might occur. Since the beginning of
computer science, developers have been thinking of ways to deal with this type of problem. For serial data
they came up with the solution of attaching a parity bit to each sent byte. This simple detection mechanism works
if an odd number of bits in a byte changes, but an even number of false bits in one byte will not be detected
by the parity check. To overcome this problem, developers searched for mathematically sound mechanisms
to detect multiple false bits. The CRC calculation, or cyclic redundancy check, was the result. Nowadays
CRC calculations are used in all types of communications. All packets sent over a network connection are
checked with a CRC. Also, each data block on your hard disk has a CRC value attached to it. The modern computing
world cannot do without these CRC calculations. So let's see why they are so widely used. The answer is
simple: they are powerful, detect many types of errors, and are extremely fast to calculate, especially when
dedicated hardware chips are used.
The idea behind CRC calculation is to look at the data as one large binary number. This number is divided by
a certain value, and the remainder of the calculation is called the CRC. Division in the CRC calculation at first
looks to cost a lot of computing power, but it can be performed very quickly if we use a method similar to the
one learned at school. As an example, we will calculate the remainder for the character 'm' (1101101
in binary notation) by dividing it by 19, or 10011. Please note that 19 is an odd number; this is necessary, as
we will see further on. Please refer to your schoolbooks, as the binary calculation method here is not very
different from the decimal method you learned when you were young. It might only look a little bit strange.
Also, notations differ between countries, but the method is similar.
With decimal calculations you can quickly check that 109 divided by 19 gives a quotient of 5 with 14 as the
remainder. But what we also see in the scheme is that every extra bit to check only costs one binary comparison
and, in 50% of the cases, one binary subtraction. You can easily increase the number of bits of the test data
string (for example to 56 bits if we use our example value "Lammert") and the result can be calculated with
56 binary comparisons and an average of 28 binary subtractions. This can be implemented in hardware directly
with only very few transistors involved. Software algorithms can also be very efficient.
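The compare-and-subtract scheme just described can be sketched in a few lines (ordinary binary long division here, not yet the carry-less XOR variant used by CRC; the class and method names are ours):

```java
public class BinaryDivision {
    // Remainder by binary long division: one comparison per bit position and,
    // in roughly half the cases, one subtraction, mirroring the 'm' (1101101,
    // i.e. 109) divided by 10011 (19) example above.
    static int remainder(int dividend, int divisor) {
        int shift = Integer.numberOfLeadingZeros(divisor)
                  - Integer.numberOfLeadingZeros(dividend);
        for (; shift >= 0; shift--) {
            if ((dividend >>> shift) >= divisor) // one binary comparison
                dividend -= divisor << shift;    // one binary subtraction
        }
        return dividend;
    }

    public static void main(String[] args) {
        System.out.println(remainder(109, 19)); // prints 14, as in the text
    }
}
```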
All of the CRC formulas you will encounter are simply checksum algorithms based on modulo-2 binary
division where we ignore carry bits and in effect the subtraction will be equal to an exclusive or operation.
Though some differences exist in the specifics across different CRC formulas, the basic mathematical process
is always the same:
• The message bits are appended with c zero bits; this augmented message is the dividend.
• A predetermined c+1-bit binary sequence, called the generator polynomial, is the divisor.
• The checksum is the c-bit remainder that results from the division operation.
Table 1 lists some of the most commonly used generator polynomials for 16- and 32-bit CRCs. Remember
that the width of the divisor is always one bit wider than the remainder. So, for example, you’d use a 17-bit
generator polynomial whenever a 16-bit checksum is required.
Table 1: International Standard CRC Polynomials
• CRC-CCITT:
x^16 + x^12 + x^5 + 1
o Used in: HDLC, SDLC, PPP default
• IBM-CRC-16 (ANSI):
x^16 + x^15 + x^2 + 1
• 802.3:
x^32 + x^26 + x^23 + x^22 + x^16 + x^12 + x^11 + x^10 + x^8 + x^7 + x^5 + x^4 + x^2 + x + 1
o Used in: Ethernet, PPP
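As a sketch of how the CCITT polynomial is applied in software, the register-based routine below shifts each message bit through a 16-bit register and XORs in 0x1021 (x^16 + x^12 + x^5 + 1) whenever a 1 falls off the top. We assume the XModem convention of a zero initial register; other CCITT variants start the register at 0xFFFF, so match the convention your peer expects.

```java
public class CrcCcitt {
    // Bitwise CRC-CCITT, generator x^16 + x^12 + x^5 + 1 (0x1021), MSB first,
    // initial register 0 (XModem convention -- an assumption, see above).
    static int crc16(byte[] data) {
        int crc = 0;
        for (byte b : data) {
            crc ^= (b & 0xFF) << 8;                  // bring the next byte into the top
            for (int bit = 0; bit < 8; bit++)
                crc = ((crc & 0x8000) != 0)
                    ? ((crc << 1) ^ 0x1021) & 0xFFFF // a 1 fell off: XOR in the generator
                    : (crc << 1) & 0xFFFF;
        }
        return crc;
    }

    public static void main(String[] args) {
        System.out.printf("CRC-CCITT = 0x%04X%n", crc16("123456789".getBytes()));
    }
}
```

Appending the two checksum bytes to the message and running crc16 again returns 0, which is exactly the receiver-side check used in the program below.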
Source Code:
import java.util.Scanner;
public class CRC1 {
public static void main(String[] args) {
Scanner sc = new Scanner(System.in);
System.out.println("Enter the message bits:");
String message = sc.nextLine();
System.out.println("Enter the generator:");
String generator = sc.nextLine();
int data[] = new int[message.length() + generator.length() - 1];
int divisor[] = new int[generator.length()];
for(int i=0;i<message.length();i++)
data[i] = Integer.parseInt(message.charAt(i)+"");
for(int i=0;i<generator.length();i++)
divisor[i] = Integer.parseInt(generator.charAt(i)+"");
//Calculation of CRC: modulo-2 (XOR) division of the zero-padded message
for(int i=0;i<message.length();i++)
{
if(data[i]==1)
for(int j=0;j<divisor.length;j++)
data[i+j] ^= divisor[j];
}
//Display CRC: the message bits followed by the remainder
System.out.print("The checksum code is: ");
for(int i=0;i<message.length();i++)
data[i] = Integer.parseInt(message.charAt(i)+"");
for(int i=0;i<data.length;i++)
System.out.print(data[i]);
System.out.println();
//Receiver side: divide the received codeword again; a zero remainder means no error
System.out.println("Enter the received checksum code:");
message = sc.nextLine();
data = new int[message.length()];
for(int i=0;i<message.length();i++)
data[i] = Integer.parseInt(message.charAt(i)+"");
//Calculation of remainder
for(int i=0;i<message.length()-divisor.length+1;i++)
{
if(data[i]==1)
for(int j=0;j<divisor.length;j++)
data[i+j] ^= divisor[j];
}
boolean valid = true;
for(int i=0;i<data.length;i++)
if(data[i]==1)
valid = false;
if(valid==true)
System.out.println("Data stream is valid");
else
System.out.println("Data stream is invalid. CRC error occurred.");
}
}
Output:
4. Implement transmission of ping messages/trace route over a network topology consisting of 6 nodes
and find the number of packets dropped due to congestion in the network.
#Create Simulator
set ns [new Simulator]
#Open trace and NAM trace file
set ntrace [open prog3.tr w]
$ns trace-all $ntrace
set namfile [open prog3.nam w]
$ns namtrace-all $namfile
puts "node [$node_ id] received ping answer from $from with round trip time $rtt ms"
}
#Create two ping agents and attach them to n(0) and n(5)
set p0 [new Agent/Ping]
$p0 set class_ 1
$ns attach-agent $n(0) $p0
#Create Congestion
#Generate a Huge CBR traffic between n(2) and n(4)
set tcp0 [new Agent/TCP]
$tcp0 set class_ 2
$ns attach-agent $n(2) $tcp0
set sink0 [new Agent/TCPSink]
$ns attach-agent $n(4) $sink0
$ns connect $tcp0 $sink0
#Schedule events
$ns at 0.2 "$p0 send"
$ns at 0.4 "$p1 send"
$ns at 0.4 "$cbr0 start"
$ns at 0.8 "$p0 send"
$ns at 1.0 "$p1 send"
$ns at 1.2 "$cbr0 stop"
$ns at 1.4 "$p0 send"
Output:
5. Write a program to find the shortest path between vertices using the Bellman-Ford algorithm.
Distance Vector Algorithm is a decentralized routing algorithm that requires each router simply
to inform its neighbors of its routing table. For each network path, the receiving routers pick the neighbor
advertising the lowest cost, then add this entry into their routing table for re-advertisement. To find the shortest
path, the Distance Vector Algorithm is based on one of two basic algorithms: the Bellman-Ford and the Dijkstra
algorithms.
Routers that use this algorithm have to maintain distance tables (each a one-dimensional array, "a
vector"), which tell the distances and shortest paths for sending packets to each node in the network. The
information in the distance table is always updated by exchanging information with the neighboring nodes. The
number of entries in the table equals the number of nodes in the network (excluding the router itself). The columns of
the table represent the directly attached neighbors, whereas the rows represent all destinations in the network. Each
entry contains the path for sending packets to each destination in the network and the distance or time to transmit on
that path (we call this the "cost"). The measurements in this algorithm are the number of hops, latency, the
number of outgoing packets, etc.
The Bellman–Ford algorithm is an algorithm that computes shortest paths from a single source vertex to all
of the other vertices in a weighted digraph. It is slower than Dijkstra's algorithm for the same problem, but
more versatile, as it is capable of handling graphs in which some of the edge weights are negative numbers.
Negative edge weights are found in various applications of graphs, hence the usefulness of this algorithm. If
a graph contains a "negative cycle" (i.e. a cycle whose edges sum to a negative value) that is reachable from
the source, then there is no cheapest path: any path that has a point on the negative cycle can be made cheaper
by one more walk around the negative cycle. In such a case, the Bellman–Ford algorithm can detect negative
cycles and report their existence.
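To make the negative-cycle remark concrete, here is a minimal sketch (our own names; 999 plays the role of "infinity", matching the MAX_VALUE convention used in the source code below): after |V|-1 rounds of relaxation, any edge that can still be relaxed proves a reachable negative cycle.

```java
import java.util.Arrays;

public class NegativeCycle {
    static final int INF = 999; // "no edge" marker, as in the lab program

    // Bellman-Ford with an extra pass: if any edge still relaxes after
    // |V|-1 rounds, a negative cycle is reachable from the source.
    static boolean hasNegativeCycle(int[][] w, int src) {
        int n = w.length;
        int[] d = new int[n];
        Arrays.fill(d, INF);
        d[src] = 0;
        for (int round = 0; round < n - 1; round++)
            for (int u = 0; u < n; u++)
                for (int v = 0; v < n; v++)
                    if (w[u][v] != INF && d[u] != INF && d[u] + w[u][v] < d[v])
                        d[v] = d[u] + w[u][v];
        for (int u = 0; u < n; u++)          // the extra detection pass
            for (int v = 0; v < n; v++)
                if (w[u][v] != INF && d[u] != INF && d[u] + w[u][v] < d[v])
                    return true;
        return false;
    }
}
```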
Implementation Algorithm:
1. Send my routing table to all my neighbors whenever my link table changes.
2. When I get a routing table from a neighbor on port P with link metric M:
a. add M to each of the neighbor's metrics
b. for each entry (D, P', M') in the updated neighbor's table:
i. if I do not have an entry for D, add (D, P, M') to my routing table
ii. if I have an entry for D with metric M", add (D, P, M') to my routing table if M' < M"
3. If my routing table has changed, send all the new entries to all my neighbors.
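The update step above can be sketched as follows (a minimal sketch with illustrative names; a real router would also handle route withdrawal, timeouts and split horizon):

```java
import java.util.HashMap;
import java.util.Map;

public class DistanceVector {
    // One routing-table entry: next-hop port and metric, the (P, M') of the triples above.
    static class Route {
        int port, metric;
        Route(int port, int metric) { this.port = port; this.metric = metric; }
    }

    // My routing table, keyed by destination D.
    Map<String, Route> table = new HashMap<>();

    // Apply a neighbor's advertised table received on port P over a link of metric M.
    // Returns true if my table changed, i.e. I should re-advertise (step 3).
    boolean update(Map<String, Integer> neighborTable, int P, int M) {
        boolean changed = false;
        for (Map.Entry<String, Integer> entry : neighborTable.entrySet()) {
            String D = entry.getKey();
            int metric = entry.getValue() + M;          // step 2a: add the link metric
            Route mine = table.get(D);
            if (mine == null || metric < mine.metric) { // steps 2b.i and 2b.ii
                table.put(D, new Route(P, metric));
                changed = true;
            }
        }
        return changed;
    }
}
```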
Source Code:
import java.util.Scanner;
public class ford
{
private int D[];
private int num_ver;
public static final int MAX_VALUE = 999;
public ford(int num_ver)
{
this.num_ver = num_ver;
D = new int[num_ver + 1];
}
public void BellmanFordEvaluation(int source, int A[][])
{
for (int node = 1; node <= num_ver; node++)
{
D[node] = MAX_VALUE;
}
D[source] = 0;
//Relax all edges num_ver - 1 times
for (int k = 1; k < num_ver; k++)
for (int u = 1; u <= num_ver; u++)
for (int v = 1; v <= num_ver; v++)
if (A[u][v] != MAX_VALUE && D[u] + A[u][v] < D[v])
D[v] = D[u] + A[u][v];
for (int v = 1; v <= num_ver; v++)
System.out.println("Distance from " + source + " to " + v + " is " + D[v]);
}
public static void main(String[] args)
{
Scanner sc = new Scanner(System.in);
System.out.println("Enter the number of vertices");
int num_ver = sc.nextInt();
int A[][] = new int[num_ver + 1][num_ver + 1];
System.out.println("Enter the adjacency matrix (999 for no edge)");
for (int i = 1; i <= num_ver; i++)
for (int j = 1; j <= num_ver; j++)
A[i][j] = sc.nextInt();
System.out.println("Enter the source vertex");
int source = sc.nextInt();
new ford(num_ver).BellmanFordEvaluation(source, A);
}
}
Output:
6. Implement an Ethernet LAN using n nodes, set multiple traffic nodes, and plot the congestion window
for different source/destination pairs.
#Create Simulator
set ns [new Simulator]
#Open trace and NAM trace file
set ntrace [open prog5.tr w]
$ns trace-all $ntrace
set namfile [open prog5.nam w]
$ns namtrace-all $namfile
#Use some flat file to create congestion graph windows
set winFile0 [open WinFile0 w]
set winFile1 [open WinFile1 w]
#Plot the Congestion Window graph using xgraph
exec xgraph WinFile0 WinFile1 &
exit 0
}
#Create 6 nodes
for {set i 0} {$i<6} {incr i} {
set n($i) [$ns node]
}
#Setup queue between n(2) and n(3) and monitor the queue
$ns queue-limit $n(2) $n(3) 20
$ns duplex-link-op $n(2) $n(3) queuePos 0.5
#Set error model on link n(2) to n(3)
set loss_module [new ErrorModel]
$loss_module ranvar [new RandomVariable/Uniform]
$loss_module drop-target [new Agent/Null]
$ns lossmodel $loss_module $n(2) $n(3)
#Set up the TCP connection between n(0) and n(4)
set tcp0 [new Agent/TCP/Newreno]
$tcp0 set fid_ 1
$tcp0 set window_ 8000
$tcp0 set packetSize_ 552
$ns attach-agent $n(0) $tcp0
set sink0 [new Agent/TCPSink/DelAck]
$ns attach-agent $n(4) $sink0
$ns connect $tcp0 $sink0
#Set up another TCP connection between n(5) and n(1)
set tcp1 [new Agent/TCP/Newreno]
$tcp1 set fid_ 2
$tcp1 set window_ 8000
$tcp1 set packetSize_ 552
#Schedule Events
$ns at 0.1 "$ftp0 start"
$ns at 0.1 "PlotWindow $tcp0 $winFile0"
$ns at 0.5 "$ftp1 start"
$ns at 0.5 "PlotWindow $tcp1 $winFile1"
$ns at 25.0 "$ftp0 stop"
$ns at 25.1 "$ftp1 stop"
$ns at 25.2 "Finish"
Output:
The main concept of the leaky bucket algorithm is that the output data flow remains constant despite the
varying input traffic, like the water flow from a bucket with a small hole at the bottom. If the bucket
contains water (or packets), the output flows at a constant rate, while if the bucket is full, any
additional load is lost through spillover.
In a similar way, if the bucket is empty the output will be zero. From a network perspective, the leaky bucket consists
of a finite queue (the bucket) where all incoming packets are stored if there is space in the queue;
otherwise the packets are discarded. In order to regulate the output flow, the leaky bucket transmits one packet
from the queue in each fixed time interval (e.g. at every clock tick). The following figure illustrates the main
rationale of the leaky bucket algorithm, for both approaches (i.e. leaky bucket with water (a) and with
packets (b)).
While the leaky bucket completely eliminates bursty traffic by regulating the incoming data flow, its main
drawback is that it drops packets when the bucket is full. Also, it does not take into account the idle periods of the
sender: if the host does not transmit data for some time, the bucket becomes empty without
permitting the transmission of any packet.
Implementation Algorithm:
Steps:
Source Code:
import java.util.Scanner;
import java.lang.*;
public class lab7 {
public static void main(String[] args)
{
int i;
int a[]=new int[20];
int buck_rem=0,buck_cap=4,rate=3,sent,recv;
Scanner in = new Scanner(System.in);
System.out.println("Enter the number of packets");
int n = in.nextInt();
System.out.println("Enter the packets");
for(i=1;i<=n;i++)
a[i]= in.nextInt();
System.out.println("Clock \t packet size \t accept \t sent \t remaining");
for(i=1;i<=n;i++)
{
if(a[i]!=0)
{
if(buck_rem+a[i]>buck_cap)
recv=-1;
else
{
recv=a[i];
buck_rem+=a[i];
}
}
else
recv=0;
if(buck_rem!=0)
{
if(buck_rem<rate)
{
sent=buck_rem;
buck_rem=0;
}
else
{
sent=rate;
buck_rem=buck_rem-rate;
}
}
else
sent=0;
if(recv==-1)
System.out.println(+i+ "\t\t" +a[i]+ "\t dropped \t" + sent +"\t" +buck_rem);
else
System.out.println(+i+ "\t\t" +a[i] +"\t\t" +recv +"\t" +sent + "\t" +buck_rem);
}
}
}
Output: