CN Lab Manual 21CS52
Introduction to NS-2
Tcl scripting
Tcl is a general-purpose, interpreted scripting language.
Tcl runs on most platforms, such as Unix, Windows, and Mac.
The strength of Tcl is its simplicity.
It is not necessary to declare a data type for a variable prior to its use.
Basics of TCL
Syntax: command arg1 arg2 arg3
Hello World!
puts stdout {Hello, World!}
Hello, World!
• Variables
set a 5
set b $a
• Command Substitution
set len [string length foobar]
set len [expr [string length foobar] + 9]
• Simple Arithmetic
expr 7.2 / 4
Procedures
proc Diag {a b} {
set c [expr sqrt($a * $a + $b * $b)]
return $c
}
NS Simulator Preliminaries.
Initialization and termination aspects of the ns simulator.
Definition of network nodes, links, queues and topology.
Definition of agents and of applications.
The nam visualization tool.
Tracing and random variables.
set ns [new Simulator]
is thus the first line in the tcl script. This line declares a new variable ns using the set
command. You can call this variable whatever you wish, but in general people call it ns because it is
an instance of the Simulator class; the code [new Simulator] is indeed the instantiation
of the class Simulator using the reserved word new.
In order to have output files with data on the simulation (trace files) or files
used for visualization (nam files), we need to create the files using the “open” command:
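For example (a minimal sketch, starting from the simulator-creation line so that the line numbering in the description below matches):
set ns [new Simulator]
set tracefile1 [open out.tr w]
$ns trace-all $tracefile1
set namfile [open out.nam w]
$ns namtrace-all $namfile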
The above creates a data trace file called “out.tr” and a nam visualization trace file called
“out.nam”. Within the tcl script, these files are not called explicitly by their names, but instead
by pointers that are declared above and called “tracefile1” and “namfile” respectively. Remark
that when these pointers are used later in the script they are preceded by a $ sign. The second line opens the file “out.tr” to be used for writing,
which is declared with the letter “w”. The third line uses a simulator method called trace-all that takes as its
parameter the name of the file where the traces will go.
The last line tells the simulator to record all simulation traces in NAM input format. It also
gives the file name that the trace will be written to later by the command $ns flush-trace. In our
case, this will be the file pointed to by the pointer “$namfile”, i.e. the file “out.nam”.
proc finish {} {
global ns tracefile1 namfile
$ns flush-trace
close $tracefile1
close $namfile
exec nam out.nam &
exit 0
}
The word proc declares a procedure, in this case called finish and without arguments. The
word global is used to tell that we are using variables declared outside the procedure. The
simulator method “flush-trace” will dump the traces into the respective files. The tcl command
“close” closes the trace files defined before, and exec executes the nam program for
visualization. The command exit ends the application and returns the number 0 as status to
the system. Zero is the default for a clean exit; other values can be used to indicate that the exit
happened because something failed.
At the end of the ns program we should call the procedure “finish” and specify at what time
the termination should occur. For example,
$ns at 125.0 “finish”
will be used to call “finish” at time 125 sec. Indeed, the at method of the simulator allows us to
schedule events explicitly.
The simulation can then begin using the command
$ns run
Many alternative queueing options exist, such as the RED (Random Early Discard) mechanism, FQ
(Fair Queuing), DRR (Deficit Round Robin), Stochastic Fair Queuing (SFQ) and
CBQ (which includes a priority and a round-robin scheduler).
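The queue type is selected when a link is created. For example (a sketch, with illustrative node names and link parameters), a RED queue instead of the usual DropTail could be requested with:
$ns duplex-link $n2 $n3 1Mb 10ms RED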
In ns, an output queue of a node is implemented as a part of each link whose input is that
node. We should also define the buffer capacity of the queue related to each link. An example
would be:
#set Queue Size of link (n0-n2) to 20
$ns queue-limit $n0 $n2 20
Agents and Applications
We need to define the routing (sources, destinations), the agents (protocols) and the applications
that use them.
The command $ns attach-agent $n0 $tcp defines the source node of the tcp
connection.
A corresponding sink agent, attached at the other end and connected to the TCP agent, defines the behaviour of the destination node of TCP and is assigned to a pointer called sink; a sketch of the complete setup is shown below.
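A hedged sketch of this TCP/FTP setup, mirroring the UDP setup that follows (the destination node $n4, the fid value and the FTP application are assumptions), is:
#Setup a TCP connection
set tcp [new Agent/TCP]
$ns attach-agent $n0 $tcp
set sink [new Agent/TCPSink]
$ns attach-agent $n4 $sink
$ns connect $tcp $sink
$tcp set fid_ 1
#Setup an FTP application over the TCP connection
set ftp [new Application/FTP]
$ftp attach-agent $tcp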
#Setup a UDP connection
set udp [new Agent/UDP]
$ns attach-agent $n1 $udp
set null [new Agent/Null]
$ns attach-agent $n5 $null
$ns connect $udp $null
$udp set fid_ 2
Each flow can be displayed with a different color in the visualization part. This is done by the command $tcp set fid_ 1 that
assigns to the TCP connection a flow identification of “1”. We shall later give the flow
identification of “2” to the UDP connection.
Scheduling Events
NS is a discrete event based simulator. The tcl script defines when each event should occur. The
initializing command set ns [new Simulator] creates an event scheduler, and events are then
scheduled using the format:
$ns at <time> <event>
The scheduler is started when running ns, that is, through the command $ns run. The
beginning and end of the FTP and CBR applications can be done through the following
commands:
$ns at 0.1 “$cbr start”
$ns at 1.0 “$ftp start”
$ns at 124.0 “$ftp stop”
Structure of Trace Files
When tracing into an output ASCII file, the trace is organized in 12 fields, as shown
below. The meaning of the fields is:
Event | Time | From Node | To Node | PKT Type | PKT Size | Flags | Fid | Src Addr | Dest Addr | Seq Num | Pkt id
The first field is the event type. It is given by one of four possible symbols r, +, -, d which
correspond respectively to receive (at the output of the link), enqueued, dequeued and
dropped.
The second field gives the time at which the event occurs.
The third field gives the input node of the link at which the event occurs.
The fourth field gives the output node of the link at which the event occurs.
The fifth field gives the packet type (e.g. CBR or TCP).
The sixth field gives the packet size.
The seventh field carries some flags.
The eighth field is the flow id (fid) of IPv6 that a user can set for each flow in the input OTcl script. One
can further use this field for analysis purposes; it is also used when specifying stream color
for the NAM display.
The ninth field is the source address, given in the form of “node.port”.
The tenth field is the destination address, given in the same form.
The eleventh field is the network layer protocol’s packet sequence number. Even though UDP
implementations in a real network do not use sequence numbers, ns keeps track of UDP
packet sequence numbers for analysis purposes.
The last field shows the unique id of the packet.
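For example, a line such as the following (an illustrative line, not taken from an actual trace) would mean that a 210-byte cbr packet of flow 1, going from node 3 (port 0) to node 1 (port 0) with sequence number 195 and unique id 600, was received on the link from node 2 to node 1 at time 1.84471:
r 1.84471 2 1 cbr 210 ------- 1 3.0 1.0 195 600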
XGRAPH
The xgraph program draws a graph on an x-display given data read from either data file
or from standard input if no files are specified. It can display upto 64 independent data sets using
different colors and line styles for each set. It annotates the graph with a title, axis labels, grid
lines or tick marks, grid labels and a legend.
Syntax:
xgraph [options] file-name
-y <unitname> : This is the unit name for the y-axis. Its default is “Y”.
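For example, the two data files produced by the awk scripts later in this manual can be plotted together with:
xgraph a1 a2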
Awk: An Advanced Filter
awk is a programmable, pattern-matching, and processing tool available in UNIX. It
works equally well with text and numbers.
awk is not just a command, but a programming language too. In other words, the awk utility
is a pattern scanning and processing language. It searches one or more files to see if they contain
lines that match specified patterns and then performs the associated actions, such as writing the line to
the standard output or incrementing a counter each time it finds a match.
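The general form of an awk invocation, as used in the examples that follow, is:
awk options 'selection_criteria {action}' file(s)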
Here, selection_criteria filters the input and selects lines for the action component to act upon.
The selection_criteria is enclosed within single quotes and the action within curly braces.
Together, the selection_criteria and the action form an awk program.
You should hold large awk programs in separate files and give them the .awk
extension for easier identification. Let’s first store the previous program in the file empawk.awk:
$ cat empawk.awk
Observe that this time we haven’t used quotes to enclose the awk program. You can now
use awk with the –f filename option to obtain the same output:
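A typical invocation (assuming the sample database emp.lst used in this chapter) would be:
awk -F"|" -f empawk.awk emp.lst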
BEGIN {action}
END {action}
These two sections, when present, are delimited by the body of the awk program. You
can use them to print a suitable heading at the beginning and the average salary at the end.
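A small sketch of this structure (the salary field number $6 is illustrative, following the emp.lst example):
awk -F"|" 'BEGIN { printf "%20s\n", "Employee salary report" }
{ total += $6; count++ }
END { printf "The average salary is %d\n", total/count }' emp.lst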
BUILT-IN VARIABLES
Awk has several built-in variables. They are all assigned automatically, though it is also
possible for a user to reassign some of them. You have already used NR, which signifies the
record number of the current line. We’ll now have a brief look at some of the other variables. The
FS Variable: as stated elsewhere, awk uses a contiguous string of spaces as the default field
delimiter. FS redefines this field separator, which in the sample database happens to be the |.
When used at all, it must occur in the BEGIN section so that the body of the program knows its
value before it starts processing:
BEGIN {FS="|"}
This is an alternative to the –F option which does the same thing.
The OFS Variable: when you used the print statement with comma-separated arguments, each
argument was separated from the other by a space. This is awk’s default output field separator,
and it can be reassigned using the variable OFS in the BEGIN section:
BEGIN {OFS="~"}
When you reassign this variable with a ~ (tilde), awk will use this character for delimiting the
print arguments. This is a useful variable for creating lines with delimited fields.
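For instance (a sketch on the same sample file; the fields printed are illustrative):
awk -F"|" 'BEGIN {OFS="~"} {print $1, $2}' emp.lst
prints the first two fields of every line separated by a ~ instead of a space.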
The NF variable: NF comes in quite handy for cleaning up a database of lines that don’t contain
the right number of fields. By using it on a file, say emp.lst, you can locate those lines not having
6 fields, and which have crept in due to faulty data entry:
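A sketch of such a check, printing the record number together with the offending line, could be:
awk 'BEGIN {FS="|"} NF != 6 {print "Record no:", NR, $0}' emp.lst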
The FILENAME Variable: FILENAME stores the name of the current file being processed.
Like grep and sed, awk can also handle multiple filenames in the command line. By default, awk
doesn’t print the filename, but you can instruct it to do so:
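A sketch that prefixes every line with the file it came from (the file names emp1.lst and emp2.lst are hypothetical):
awk -F"|" '{print FILENAME ":", $0}' emp1.lst emp2.lst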
With FILENAME, you can devise logic that does different things depending on the file that is
being processed.
NS2 Installation
NS2 is a free simulation tool.
It runs on various platforms including UNIX (or Linux), Windows, and Mac systems.
NS2 source codes are distributed in two forms: the all-in-one suite and the component-
wise distribution.
The ‘all-in-one’ package provides an “install” script which configures the NS2 environment and
creates the NS2 executable file using the “make” utility.
After the files are extracted, we get the ns-allinone-2.34 folder as well as the archive ns-allinone-
2.34.tar.gz
[root@localhost opt] # ns-allinone-2.34 ns-allinone-2.34.tar.gz
Once the installation is completed successfully, we get certain pathnames in that terminal
which must be pasted into the “.bash_profile” file.
First minimize the terminal where the installation was done, open a new terminal, and open
the file “.bash_profile”
[root@localhost ~] # vi .bash_profile
When we open this file, we get a line in that file which is shown below
PATH=$PATH:$HOME/bin
To this line we must append the paths shown in the previous terminal where ns was
installed (in its “Important Notices” section). First put “:” and then paste the path in front of bin. That path is shown below.
“:/opt/ns-allinone-2.33/bin:/opt/ns-allinone-2.33/tcl8.4.18/unix:/opt/ns-allinone-
2.33/tk8.4.18/unix”
In the next line type “LD_LIBRARY_PATH=$LD_LIBRARY_PATH:” and paste the corresponding path from the previous terminal, i.e.
“/opt/ns-allinone-2.33/otcl-1.13:/opt/ns-allinone-2.33/lib”
In the next line type “TCL_LIBRARY=$TCL_LIBRARY:” and paste the corresponding path from the previous terminal, i.e.
“/opt/ns-allinone-2.33/tcl8.4.18/library”
In the next line type “export LD_LIBRARY_PATH”
In the next line type “export TCL_LIBRARY”
The next two lines, “export PATH” and “unset USERNAME”, are already present in the file.
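Put together, the additions to .bash_profile would then look roughly like this (a sketch assuming the installation directory /opt/ns-allinone-2.33 used above):
PATH=$PATH:/opt/ns-allinone-2.33/bin:/opt/ns-allinone-2.33/tcl8.4.18/unix:/opt/ns-allinone-2.33/tk8.4.18/unix
LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/ns-allinone-2.33/otcl-1.13:/opt/ns-allinone-2.33/lib
TCL_LIBRARY=$TCL_LIBRARY:/opt/ns-allinone-2.33/tcl8.4.18/library
export LD_LIBRARY_PATH
export TCL_LIBRARY
export PATH
unset USERNAME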
Save the file (press ESC, then type :wq and press Enter).
Now, in the terminal where we have opened the .bash_profile file, type the following command
to check whether the path is updated correctly or not:
[root@localhost ~]# source .bash_profile
If the path is updated properly, then we will get the prompt as shown
below student@sample-ThinkCentre-M920q:~$
Now open the previous terminal where you have installed ns
[root@localhost ns-allinone-2.33] #
Here we need to configure three packages “ns-2.33”, “nam-1.13” and “xgraph-12.1”
First, configure “ns-2.33” package as shown below
student@sample-ThinkCentre-M920q:~$ cd ns-2.33
student@sample-ThinkCentre-M920q:~$ ./configure
student@sample-ThinkCentre-M920q:~$ make clean
student@sample-ThinkCentre-M920q:~$ make
student@sample-ThinkCentre-M920q:~$ make install
student@sample-ThinkCentre-M920q:~$ ns
%
If we get “%” symbol it indicates that ns-2.33 configuration was successful.
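Second, configure the “nam-1.13” package in the same way (a sketch following the same sequence of commands as the other two packages):
student@sample-ThinkCentre-M920q:~$ cd ../nam-1.13
student@sample-ThinkCentre-M920q:~$ ./configure
student@sample-ThinkCentre-M920q:~$ make clean
student@sample-ThinkCentre-M920q:~$ make
student@sample-ThinkCentre-M920q:~$ make install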
%
If we get “%” symbol it indicates that nam-1.13 configuration was successful.
Third, configure “xgraph-12.1” package as shown below
student@sample-ThinkCentre-M920q:~$ cd ..
student@sample-ThinkCentre-M920q:~$ cd xgraph-12.1
student@sample-ThinkCentre-M920q:~$ ./configure
student@sample-ThinkCentre-M920q:~$ make clean
student@sample-ThinkCentre-M920q:~$ make
student@sample-ThinkCentre-M920q:~$ make install
[root@localhost xgraph-12.1]# ns
%
This completes the installation process of “NS-2” simulator.
PART-A
1). Implement a three-node point-to-point network with duplex links between
them. Set the queue size, vary the bandwidth and find the number of packets
dropped.
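Only the finish procedure of lab1.tcl is listed below. The part of the script that precedes it might look roughly like the following sketch (node names, bandwidths, queue limits and the UDP/CBR parameters are illustrative and are meant to be varied as the exercise asks):
set ns [new Simulator]
set f [open lab1.tr w]
$ns trace-all $f
set nf [open lab1.nam w]
$ns namtrace-all $nf
set n0 [$ns node]
set n1 [$ns node]
set n2 [$ns node]
set n3 [$ns node]
$ns duplex-link $n0 $n2 10Mb 10ms DropTail
$ns duplex-link $n1 $n2 10Mb 10ms DropTail
$ns duplex-link $n2 $n3 1Mb 10ms DropTail
$ns queue-limit $n0 $n2 10
$ns queue-limit $n1 $n2 10
$ns queue-limit $n2 $n3 10
set udp0 [new Agent/UDP]
$ns attach-agent $n0 $udp0
set cbr0 [new Application/Traffic/CBR]
$cbr0 attach-agent $udp0
set null0 [new Agent/Null]
$ns attach-agent $n3 $null0
$ns connect $udp0 $null0
$ns at 0.1 "$cbr0 start"
$ns at 4.5 "$cbr0 stop"
After the finish procedure shown below, the script would end with $ns at 5.0 "finish" and $ns run.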
proc finish {} {
global f nf ns
$ns flush-trace
close $f
close $nf
exec nam lab1.nam &
exit 0
}
1) Open Text editor and type program. Program name should have the extension
“ .tcl ”
student@sample-ThinkCentre-M920q:~$ gedit lab1.tcl
2) Run the simulation program
student@sample-ThinkCentre-M920q:~$ ns lab1.tcl
i) Here “ns” indicates the network simulator. We get the topology shown in the
snapshot.
ii) Now press the play button in the simulation window and the simulation will
begin.
3) After the simulation is completed, run the grep command to find the number of packets
dropped, as shown below:
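Assuming the trace file is named lab1.tr (as in the sketch above), the command is analogous to the one used for the later programs:
student@sample-ThinkCentre-M920q:~$ grep ^d lab1.tr -c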
Output
Note:
Set a fixed queue size from n0 to n2 as 10, from n1 to n2 as 10, and likewise from n2 to n3.
Syntax: To set the queue size
$ns queue-limit <from> <to> <size>   Eg: $ns queue-limit $n0 $n2 10
Go on varying the bandwidth (10, 20, 30, ...) and find the number of
packets dropped at node 2.
proc finish {} {
global ns f nf
$ns flush-trace
close $f
close $nf
exec nam lab2.nam &
exit 0
}
proc sendPingPacket {} {
global ns ping0 ping1 ping4 ping5
set intervalTime 0.001
set now [$ns now]
$ns at [expr $now + $intervalTime] "$ping0 send"
$ns at [expr $now + $intervalTime] "$ping1 send"
$ns at [expr $now + $intervalTime] "$ping4 send"
$ns at [expr $now + $intervalTime] "$ping5 send"
}
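In the full script, this procedure typically re-schedules itself as its last statement (e.g. $ns at [expr $now + $intervalTime] "sendPingPacket") so that pings keep being sent, and the ping agents need a receive handler to report the round-trip time. A sketch of the standard ns-2 handler (the printed message is illustrative) is:
Agent/Ping instproc recv {from rtt} {
$self instvar node_
puts "node [$node_ id] received ping answer from node $from with round-trip-time $rtt ms."
}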
1) Open Text editor and type program. Program name should have the extension
“ .tcl ”
student@sample-ThinkCentre-M920q:~$ gedit lab2.tcl
2) Run the simulation program
student@sample-ThinkCentre-M920q:~$ ns lab2.tcl
i) Here “ns” indicates the network simulator. We get the topology shown in the
snapshot.
ii) Now press the play button in the simulation window and the simulation will
begin.
3) After the simulation is completed, run the grep command to find the number of packets
dropped and see the output:
student@sample-ThinkCentre-M920q:~$ grep ^d lab2.tr -c
4) To see the trace file contents, open the file as:
student@sample-ThinkCentre-M920q:~$ vi lab2.tr
Topology
Output
3). Implement an Ethernet LAN using n nodes, set multiple traffic nodes, and
plot the congestion window for different source/destination pairs.
$ns make-lan "$n0 $n1 $n2 $n3 $n4" 5Mb 100ms LL Queue/DropTail Mac/802_3
$ns duplex-link $n4 $n5 1Mb 1ms DropTail
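Most of lab3.tcl is not reproduced here. The congestion window plot depends on attaching a variable-trace file to each TCP agent; a sketch of that part for one source/destination pair (agent, node and file names are illustrative) is:
set tcp0 [new Agent/TCP]
$ns attach-agent $n0 $tcp0
set sink0 [new Agent/TCPSink]
$ns attach-agent $n5 $sink0
$ns connect $tcp0 $sink0
set ftp0 [new Application/FTP]
$ftp0 attach-agent $tcp0
set file1 [open file1.tr w]
$tcp0 attach $file1
$tcp0 trace cwnd_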
proc finish {} {
global ns nf tf
$ns flush-trace
close $tf
close $nf
exec nam lab3.nam &
exit 0
}
AWK file (Open a new editor using “vi command” and write awk file and
save with “.awk” extension)
BEGIN {
}
{
# field 6 of a TCP variable trace is the traced variable name;
# print the time ($1) and the congestion window value ($7) for cwnd_ samples
if($6=="cwnd_")
printf("%f\t\t%f\t\n",$1,$7);
}
END {
}
1) Open Text editor and type program. Program name should have the extension
“ .tcl ”
student@sample-ThinkCentre-M920q:~$ gedit lab3.tcl
2) Open Text editor and type awk program. Program name should have
the extension “.awk ”
student@sample-ThinkCentre-M920q:~$ gedit lab3.awk
3) Run the simulation program
student@sample-ThinkCentre-M920q:~$ ns lab3.tcl
i) Here “ns” indicates the network simulator. We get the topology shown in the
snapshot.
ii) Now press the play button in the simulation window and the simulation will
begin.
4) After the simulation is completed, run the awk file on each congestion window trace and plot the results:
student@sample-ThinkCentre-M920q:~$ awk -f lab3.awk file1.tr > a1
student@sample-ThinkCentre-M920q:~$ awk -f lab3.awk file2.tr > a2
student@sample-ThinkCentre-M920q:~$ xgraph a1 a2
Here we run the awk script over the congestion window trace files file1.tr and file2.tr
and redirect its output to new files, say a1 and a2,
using the output redirection operator (>).
5) To see the trace file contents, open the file as:
student@sample-ThinkCentre-M920q:~$ gedit lab3.tr
Topology
Output
Xgraph
-topoInstance $topo \
-agentTrace ON \
-routerTrace ON
$n0 set X_ 50
$n0 set Y_ 50
$n0 set Z_ 0
$n1 set X_ 100
$n1 set Y_ 100
$n1 set Z_ 0
$n2 set X_ 600
$n2 set Y_ 600
$n2 set Z_ 0
$ns run
AWK file (Open a new editor using “vi command” and write awk file and
save with “.awk” extension)
BEGIN{
# packets received, bytes received and time of the last received packet
# for each of the two flows
count1=0
count2=0
pack1=0
pack2=0
time1=0
time2=0
}
{
# "r" events received at the AGT (agent) layer of node _1_ belong to the first flow
if($1=="r" && $3=="_1_" && $4=="AGT")
{
count1++
pack1=pack1+$8
time1=$2
}
# likewise for node _2_ and the second flow
if($1=="r" && $3=="_2_" && $4=="AGT")
{
count2++
pack2=pack2+$8
time2=$2
}
}
END{
# throughput (Mbps) = total bytes received * 8 / (elapsed time * 10^6)
printf("The Throughput from n0 to n1: %f Mbps \n",
((pack1*8)/(time1*1000000)));
printf("The Throughput from n1 to n2: %f Mbps\n",
((pack2*8)/(time2*1000000)));
}
1) Open Text editor and type program. Program name should have the extension “ .tcl ”
student@sample-ThinkCentre-M920q:~$ gedit lab4.tcl
2) Open Text editor and type awk program. Program name should have the
extension “.awk ”
student@sample-ThinkCentre-M920q:~$ gedit lab4.awk
3) Run the simulation program
student@sample-ThinkCentre-M920q:~$ ns lab4.tcl
i) Here “ns” indicates the network simulator. We get the topology shown in the
snapshot.
ii) Now press the play button in the simulation window and the simulation will
begin.
4) After the simulation is completed, run the awk file to see the output:
student@sample-ThinkCentre-M920q:~$ awk -f lab4.awk lab4.tr
5) To see the trace file contents, open the file as:
student@sample-ThinkCentre-M920q:~$ gedit lab4.tr
Topology
Output
PART-B
5). Write a program for error detecting code using CRC-CCITT (16- bits).
Whenever digital data is stored or interfaced, data corruption might occur. Since
the beginning of computer science, developers have been thinking of ways to deal with
this type of problem. For serial data they came up with the solution to attach a parity bit
to each sent byte. This simple detection mechanism works if an odd number of bits in a
byte changes, but an even number of false bits in one byte will not be detected by the
parity check. To overcome this problem, developers have searched for mathematically
sound mechanisms to detect multiple false bits. The CRC calculation or cyclic
redundancy check was the result of this. Nowadays CRC calculations are used in all
types of communications. All packets sent over a network connection are checked with
a CRC. Also each data block on your hard disk has a CRC value attached to it. Modern
computer world cannot do without these CRC calculations. So let's see why they are so
widely used. The answer is simple; they are powerful, detect many types of errors and
are extremely fast to calculate especially when dedicated hardware chips are used.
The idea behind CRC calculation is to look at the data as one large binary
number. This number is divided by a certain value and the remainder of the calculation
is called the CRC. Dividing in the CRC calculation at first looks to cost a lot of
computing power, but it can be performed very quickly if we use a method similar to
the one learned at school. We will as an example calculate the remainder for the
character 'm'—which is 1101101 in binary notation— by dividing it by 19 or 10011.
Please note that 19 is an odd number. This is necessary as we will see further on. Please
refer to your schoolbooks as the binary calculation method here is not very different
from the decimal method you learned when you were young. It might only look a little
bit strange. Also notations differ between countries, but the method is similar.
With decimal calculations you can quickly check that 109 divided by 19 gives a
quotient of 5 with 14 as the remainder (5 x 19 = 95, and 109 - 95 = 14). But what we also see in the scheme is that every bit
extra to check only costs one binary comparison and in 50% of the cases one binary
subtraction. You can easily increase the number of bits of the test data string—for example to
56 bits if we use our example value "Lammert"—and the result can be calculated with 56
binary comparisons and an average of 28 binary subtractions. This can be implemented in
hardware directly with only very few transistors involved. Also software algorithms can be
very efficient.
All of the CRC formulas you will encounter are simply checksum algorithms based on
modulo-2 binary division where we ignore carry bits and in effect the subtraction will be equal
to an exclusive or operation. Though some differences exist in the specifics across different
CRC formulas, the basic mathematical process is always the same:
The message bits are appended with c zero bits; this augmented message is the
dividend
A predetermined c+1-bit binary sequence, called the generator polynomial, is the
divisor
The checksum is the c-bit remainder that results from the division operation
Table 1 lists some of the most commonly used generator polynomials for 16- and
32-bit CRCs. Remember that the width of the divisor is always one bit wider than
the remainder. So, for example, you’d use a 17-bit generator polynomial whenever a
16-bit checksum is required.
CRC-CCITT: checksum width 16 bits, generator polynomial 10001000000100001
CRC-16: checksum width 16 bits, generator polynomial 11000000000000101
CRC-32: checksum width 32 bits, generator polynomial 100000100110000010001110110110111
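As a compact illustration of the bitwise modulo-2 division described above, a 16-bit CRC-CCITT remainder (generator polynomial 0x1021, i.e. 10001000000100001 with the leading bit implicit) can be computed over a byte array roughly as follows. This is only a sketch, separate from the bit-array program listed below; the initial register value and bit ordering differ between CRC-CCITT variants:
static int crc16Ccitt(byte[] data) {
    int crc = 0x0000;                      // some CRC-CCITT variants start with 0xFFFF
    for (byte b : data) {
        crc ^= (b & 0xFF) << 8;            // bring the next byte into the top of the 16-bit register
        for (int i = 0; i < 8; i++) {
            if ((crc & 0x8000) != 0)
                crc = (crc << 1) ^ 0x1021; // "subtract" the generator polynomial (XOR)
            else
                crc <<= 1;
            crc &= 0xFFFF;                 // keep only the 16-bit remainder
        }
    }
    return crc;                            // the checksum appended to the message
}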
Source Code:
import java.io.*;
class Crc
{
public static void main(String args[]) throws IOException
{
BufferedReader br=new BufferedReader(new InputStreamReader(System.in));
int[] data;      // message bits entered by the user
int[] div;       // message followed by 16 zero bits (the dividend)
int[] divisor;   // CRC-CCITT generator polynomial (17 bits)
int[] rem;       // remainder of the modulo-2 division
int[] crc;       // code word = message bits + 16-bit checksum
System.out.println("Enter number of data bits : ");
int n=Integer.parseInt(br.readLine());
data=new int[n];
System.out.println("Enter data bits (0/1), one per line : ");
for(int i=0;i<n;i++)
data[i]=Integer.parseInt(br.readLine());
// generator polynomial x^16 + x^12 + x^5 + 1 -> 10001000000100001
divisor=new int[]{1,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,1};
int tot_length=n+16;
div=new int[tot_length];
rem=new int[tot_length];
crc=new int[tot_length];
// dividend = data bits followed by 16 zero bits
for(int i=0;i<data.length;i++)
div[i]=data[i];
for(int j=0;j<div.length;j++)
rem[j]=div[j];
rem=divide(div,divisor,rem);
// code word = dividend with the remainder placed in the last 16 bits
for(int i=0;i<div.length;i++)
crc[i]=(div[i]^rem[i]);
System.out.println();
System.out.println("CRC code : ");
for(int i=0;i<crc.length;i++)
System.out.print(crc[i]);
/*-------------------ERROR DETECTION---------------------*/
System.out.println();
System.out.println("Enter CRC code of "+tot_length+" bits : ");
for(int i=0; i<crc.length; i++)
crc[i]=Integer.parseInt(br.readLine());
for(int j=0; j<crc.length; j++)
{
rem[j] = crc[j];
}
rem=divide(crc, divisor, rem);
for(int i=0; i< rem.length; i++)
{
if(rem[i]!=0)
{
System.out.println("Error");
break;
}
if(i==rem.length-1)
System.out.println("No Error");
}
System.out.println("THANK YOU.... :)");
}
// modulo-2 (XOR) long division; the remainder is returned in rem
static int[] divide(int div[], int divisor[], int rem[])
{
int cur=0;
while(cur<rem.length && rem[cur]==0)   // skip leading zeros
cur++;
while(rem.length-cur>=divisor.length)  // while the divisor still fits
{
for(int i=0;i<divisor.length;i++)
rem[cur+i]=rem[cur+i]^divisor[i];      // "subtract" (XOR) the divisor
while(cur<rem.length && rem[cur]==0)   // move on to the next 1 bit
cur++;
}
return rem;
}
}
Steps for execution
Open Text editor and type program. Program name should have the
extension “ .java ”
student@sample-ThinkCentre-M920q:~$ gedit Crc.java
To Compile the program
student@sample-ThinkCentre-M920q:~$ javac Crc.java
To Run the Program
student@sample-ThinkCentre-M920q:~$ java Crc
Output
6). Write a program to find the shortest path between vertices using the Bellman-
Ford algorithm.
In distance vector routing, each router informs its neighbours of its current routing table; for each destination, the receiving router picks the neighbour advertising the lowest cost and adds that entry to its own
routing table for re-advertisement. To find the shortest path, the Distance Vector Algorithm
is based on one of two basic algorithms: the Bellman-Ford and the Dijkstra algorithms.
Routers that use this algorithm have to maintain distance tables (each of which is a
one-dimensional array -- "a vector"), which tell the distance and shortest path for sending
packets to each node in the network. The information in the distance table is always kept up to
date by exchanging information with the neighbouring nodes. The number of entries in the
table equals the number of nodes in the network (excluding itself). The columns of the table
represent the directly attached neighbours, whereas the rows represent all destinations in
the network. Each entry contains the path for sending packets to each destination in the
network and the distance or time to transmit on that path (we call this the "cost"). The
measurements used in this algorithm are the number of hops, latency, the number of outgoing
packets, etc.
Source code:
import java.util.Scanner;
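Only the first line of the listing is reproduced above. Continuing from that import, a minimal sketch of a Bellman-Ford style shortest-path program over a cost matrix read from standard input might look like this (the class name matches the compile step below; the 999 "infinity" value and the prompts are illustrative):
class BellmanFord
{
public static void main(String[] args)
{
Scanner in = new Scanner(System.in);
System.out.println("Enter the number of vertices: ");
int n = in.nextInt();
// cost adjacency matrix; 999 stands for "no direct edge"
int[][] cost = new int[n + 1][n + 1];
System.out.println("Enter the cost matrix (999 for no edge, 0 on the diagonal):");
for (int i = 1; i <= n; i++)
for (int j = 1; j <= n; j++)
cost[i][j] = in.nextInt();
System.out.println("Enter the source vertex: ");
int src = in.nextInt();
// dist[v] holds the best known distance from the source to v
int[] dist = new int[n + 1];
for (int v = 1; v <= n; v++)
dist[v] = cost[src][v];
dist[src] = 0;
// relax every edge repeatedly; shortest paths use at most n-1 edges
for (int k = 1; k <= n - 1; k++)
for (int u = 1; u <= n; u++)
for (int v = 1; v <= n; v++)
if (cost[u][v] != 999 && dist[u] + cost[u][v] < dist[v])
dist[v] = dist[u] + cost[u][v];
System.out.println("Shortest distances from vertex " + src + ":");
for (int v = 1; v <= n; v++)
System.out.println("to vertex " + v + " : " + dist[v]);
in.close();
}
}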
1) Open Text editor and type program. Program name should have the extension “
.java ”
student@sample-ThinkCentre-M920q:~$ gedit BellmanFord.java
Input graph: (figure showing four vertices A, B, C and D, with edge weights including 3 and 4)
Output:
7). Write a program for congestion control using leaky bucket algorithm.
The main concept of the leaky bucket algorithm is that the output data flow
remains constant despite the variant input traffic, such as the water flow in a bucket with
a small hole at the bottom. In case the bucket contains water (or packets) then the output
flow follows a constant rate, while if the bucket is full any additional load will be lost
because of spillover. In a similar way if the bucket is empty the output will be zero. From
network perspective, leaky bucket consists of a finite queue (bucket) where all the
incoming packets are stored in case there is space in the queue, otherwise the packets are
discarded. In order to regulate the output flow, leaky bucket transmits one packet from
the queue in a fixed time (e.g. at every clock tick). In the following figure we can see
the main rationale of the leaky bucket algorithm, for both approaches (leaky
bucket with water (a) and with packets (b)).
Note that the algorithm does not take into account the idle periods of the sender, which means that if the host doesn’t transmit data
for some time, the bucket simply becomes empty and no packets are transmitted.
Source Code:
import java.io.*;
import java.util.*;
class Queue
{
int q[],f=0,r=0,size;
void insert(int n)
{
Scanner in = new Scanner(System.in);
q=new int[10];   // the bucket (queue) can hold at most 10 packets
for(int i=0;i<n;i++)
{
System.out.print("\nEnter " + i + " element: ");
int ele=in.nextInt();
if(r+1>10)
{
System.out.println("\nQueue is full \nLost Packet: "+ele);
break;
}
else
{
r++;
q[i]=ele;
}
}
}
void delete()
{
if(r==0)
System.out.print("\nQueue empty ");
else
{
for(int i=f;i<r;i++)
{
try
{
Thread.sleep(1000);   // one clock tick: one packet leaks out per second
}
catch(Exception e){}
System.out.print("\nLeaked Packet: "+q[i]);
f++;
}
}
System.out.println();
}
}
class leaky
{
public static void main(String ar[]) throws Exception
{
Queue q=new Queue();
Scanner src=new Scanner(System.in);
System.out.println("\nEnter the packets to be sent:");
int size=src.nextInt();
q.insert(size);
q.delete();
}
}
1) Open Text editor and type program. Program name should have the extension
“ .java ”
student@sample-ThinkCentre-M920q:~$ gedit leaky.java
2) To Compile the program
student@sample-ThinkCentre-M920q:~$ javac leaky.java
3) To Run the Program
student@sample-ThinkCentre-M920q:~$ java leaky
Output:
Queue Overflow
Queue Empty
Queue-Size
VIVA QUESTIONS
1) What is a Link?
A link refers to the connectivity between two devices. It includes the type of cables and protocols
used in order for one device to be able to communicate with the other.
4) What is a LAN?
LAN is short for Local Area Network. It refers to the connection between computers and other
network devices that are located within a small physical location.
5) What is a node?
A node refers to a point or joint where a connection takes place. It can be computer or device
that is part of a network. Two or more nodes are needed in order to form a network connection.
11) How does a network topology affect your decision in setting up a network?
Network topology dictates what media you must use to interconnect devices. It also serves as
a basis for deciding what materials, connectors and terminations are applicable for the setup.
28) What is OSI and what role does it play in computer networks?
OSI (Open Systems Interconnect) serves as a reference model for data communication. It is
made up of 7 layers, with each layer defining a particular aspect of how network devices connect
and communicate with one another. One layer may deal with the physical media used, while
another layer dictates how data is actually transmitted across the network.
30) What is the equivalent layer or layers of the TCP/IP Application layer in terms of OSI
reference model?
The TCP/IP Application layer actually has three counterparts on the OSI model: the Session
layer, Presentation Layer and Application Layer.
REFERENCES
https://www.isi.edu/nsnam/ns/
http://aknetworks.webs.com/e-books
Leon-Garcia and Indra Widjaja, Communication Networks, 4th Edition, Tata McGraw-Hill, reprint 2012.