Globus Engineering College Bhopal (M.P.) Department of Computer Science
BHOPAL (M.P.)
PROTOCOL SIMULATOR
Submitted as
Major Project in partial fulfillment of the requirements for the award of the Degree of
Bachelor of Engineering in Computer Science Engineering
CERTIFICATE
Project
On
PROTOCOL SIMULATOR
SUBMITTED BY
Ankit Kumar Chaurasia(0130cs071017)
Jeetendra Nayak(0130cs071031)
is approved in partial fulfillment of the requirements for the degree of "Bachelor
of Engineering in Computer Science Engineering".
Examiner Examiner
(Internal) (External)
ACKNOWLEDGMENT
We take this opportunity to express our cordial gratitude and deep sense of indebtedness to our
guide, Mr. Shailendra Gupta, for his valuable guidance and inspiration throughout the project
duration. We are thankful to him for his innovative ideas, which led to the successful completion of
this project work. We feel proud and fortunate to have worked under such an outstanding mentor in the
field of "PROTOCOL SIMULATOR". He has always welcomed our problems and helped us clear our
doubts. We will always be grateful to him for providing us moral support and sufficient time.
We owe sincere thanks to our Principal, Prof. Anil Sharma, who helped us duly in time during our
project work in the Department.
At the same time, we would like to thank Prof. Shailendra Gupta (HOD, CS) and all other faculty
members and non-teaching staff of the Computer Science Engineering Department for their
valuable co-operation.
Ankit Shrivastava(0130CS071018)
Sachin Patil(0130CS071070)
Prakash Gupta(0130CS071054)
Mukesh Chaturvedi(0130CS071043)
TITLE PAGE
CERTIFICATE
ACKNOWLEDGEMENT
ABSTRACT
1. INTRODUCTION
1.1 Problem Statement
1.2 Objective
1.3 Scope
1.4 Platform Specification
1.4.1 Hardware
1.4.2 Software
1.4.3 Implementation Language
2. SYSTEM ANALYSIS
2.1 Identification of Need
2.2 Preliminary Investigation
3. FEASIBILITY STUDY
3.1 Technical Feasibility
3.2 Economical Feasibility
3.3 Operational Feasibility
4. LITERATURE SURVEY
4.1 Work done by others
4.2 Benefits
5. TECHNICAL PART
5.1 About Technology
5.2 Frontend and Backend
7. CONCLUSION
8. REFERENCES
1. INTRODUCTION
The rapid growth in the use of wired networks, due to the flexibility and advantages
they offer, has led to ongoing research in techniques to improve the utilization of the wired
medium.
In this technical age, projects play a vital role. This project, named "PROTOCOL
SIMULATOR", describes and explains the working of all four layers of the TCP/IP reference model.
The TCP/IP protocol suite, used in the Internet, was developed prior to the OSI model. Therefore,
the layers in the TRANSMISSION CONTROL PROTOCOL / INTERNETWORKING PROTOCOL (TCP/IP)
suite do not match exactly with those in the OSI model.
TCP/IP Layer ................ Corresponding OSI Layer(s)
Physical .................... Physical
Data Link ................... Data Link
Network ..................... Network
Transport ................... Transport
Application ................. Session, Presentation, Application
The first four layers provide physical standards, network interface, internetworking, and
transport functions that correspond to the first four layers of the OSI model. The three
topmost layers of the OSI model, however, are represented in TCP/IP by a single layer
called the application layer.
This project helps in understanding the internal working of the layers described below.
a) Physical Layer:
The physical layer describes the physical characteristics of transmission between the two
connected devices, such as the following (a brief illustrative sketch follows the list):
Transmission Mode
Bit Rate
Synchronization of Bits
Representation of Bits
LAN Configuration
Transmission Type
Type of Switching
Topology Used
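Since Java is the implementation language named later in this report, the following minimal sketch (illustrative only, not the project's actual code) suggests how a physical-layer module of such a simulator might represent a message as a bit stream and relate the number of bits to an assumed bit rate; the class name, message, and bit rate are invented for the demonstration.

// Minimal physical-layer sketch (illustrative only): converts a text message
// into its bit representation and relates the bit count to an assumed bit rate,
// the kind of quantities a simulator can display for this layer.
public class PhysicalLayerDemo {

    // Convert each character of the message to an 8-bit binary string.
    static String toBits(String message) {
        StringBuilder bits = new StringBuilder();
        for (char c : message.toCharArray()) {
            String b = Integer.toBinaryString(c);
            // Left-pad to 8 bits so every character occupies one octet.
            while (b.length() < 8) {
                b = "0" + b;
            }
            bits.append(b);
        }
        return bits.toString();
    }

    public static void main(String[] args) {
        String message = "HI";          // sample message
        String bits = toBits(message);
        int bitRate = 8;                // assumed bits per second for the demo
        System.out.println("Message          : " + message);
        System.out.println("Bit stream       : " + bits);
        System.out.println("Bits to send     : " + bits.length());
        System.out.println("Time at " + bitRate + " bps : "
                + (double) bits.length() / bitRate + " s");
    }
}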
b) Data Link Layer:
The data link layer controls the transmission of data between two successive, directly
connected systems. Its functions include the following (a brief illustrative sketch follows the list):
Error Control
Flow Control
Framing
Link Management
Physical Addressing
Access Control
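As a hedged illustration of framing and error control (again, not the project's own code), the sketch below frames a payload between an assumed flag character and appends a single even-parity bit; real data link protocols normally use stronger checks such as CRCs.

// Minimal data-link-layer sketch (illustrative only): frames a payload between
// flag characters and attaches one even-parity bit for simple error detection.
public class DataLinkLayerDemo {

    static final char FLAG = '~';   // assumed frame delimiter for the demo

    // Build a frame: FLAG + payload + parity bit + FLAG (even parity).
    static String frame(String payload) {
        int ones = 0;
        for (char c : payload.toCharArray()) {
            ones += Integer.bitCount(c);
        }
        int parity = ones % 2;      // 1 if the payload has an odd number of 1-bits
        return FLAG + payload + parity + FLAG;
    }

    // Verify the parity of a received frame; true means no error was detected.
    static boolean verify(String frame) {
        String payload = frame.substring(1, frame.length() - 2);
        int parity = frame.charAt(frame.length() - 2) - '0';
        int ones = 0;
        for (char c : payload.toCharArray()) {
            ones += Integer.bitCount(c);
        }
        return (ones + parity) % 2 == 0;
    }

    public static void main(String[] args) {
        String sent = frame("DATA");
        System.out.println("Frame sent     : " + sent);
        System.out.println("Error detected : " + !verify(sent));
    }
}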
c) Network Layer:
The network layer is responsible for the source-to-destination delivery of a
packet. Its specific responsibilities include logical addressing and routing.
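One network-layer mapping mentioned later in the report's use-case diagram is ARP. The minimal sketch below, with invented addresses and not taken from the simulator's actual implementation, shows how an ARP-style table could resolve a logical IP address to a physical MAC address.

import java.util.HashMap;
import java.util.Map;

// Minimal network-layer sketch (illustrative only): an ARP-style table that
// maps a logical (IP) address to a physical (MAC) address.
public class ArpTableDemo {

    private final Map<String, String> table = new HashMap<>();

    // Add or refresh an entry in the table.
    void learn(String ip, String mac) {
        table.put(ip, mac);
    }

    // Resolve an IP address; null would mean "broadcast an ARP request".
    String resolve(String ip) {
        return table.get(ip);
    }

    public static void main(String[] args) {
        ArpTableDemo arp = new ArpTableDemo();
        arp.learn("192.168.1.10", "00:1A:2B:3C:4D:5E");   // sample entry
        System.out.println("192.168.1.10 -> " + arp.resolve("192.168.1.10"));
        System.out.println("192.168.1.99 -> " + arp.resolve("192.168.1.99"));
    }
}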
d) Transport Layer:
The transport layer is responsible for the source-to-destination (end-to-end) delivery of
the entire message. In this project it covers the following protocols and connection phases
(a brief illustrative sketch of the TCP phases follows the list):
UDP
TCP
o Connection Establishment
o Data Transmission
o Connection Termination
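The sketch below is illustrative rather than the project's code: it prints the segments exchanged during the three TCP phases listed above (three-way handshake, data transfer, and four-way close). The sequence and acknowledgement numbers are invented sample values.

// Minimal transport-layer sketch (illustrative only): walks a simplified TCP
// connection through establishment, data transmission, and termination,
// printing the segments a simulator might animate.
public class TcpPhasesDemo {

    static void send(String from, String to, String segment) {
        System.out.println(from + " -> " + to + " : " + segment);
    }

    public static void main(String[] args) {
        // 1. Connection establishment (three-way handshake)
        send("Client", "Server", "SYN     (seq=100)");
        send("Server", "Client", "SYN+ACK (seq=300, ack=101)");
        send("Client", "Server", "ACK     (ack=301)");

        // 2. Data transmission (5 bytes of application data)
        send("Client", "Server", "DATA    (seq=101, \"HELLO\")");
        send("Server", "Client", "ACK     (ack=106)");

        // 3. Connection termination (four-way close)
        send("Client", "Server", "FIN     (seq=106)");
        send("Server", "Client", "ACK     (ack=107)");
        send("Server", "Client", "FIN     (seq=301)");
        send("Client", "Server", "ACK     (ack=302)");
    }
}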
1.1 PROBLEM STATEMENT
1. While TCP/IP can achieve good throughput on currently deployed networks, its round-trip
latency is usually poor.
2. The TCP/IP model assumes communication between autonomous machines that cooperate only
minimally. However, machines on a local-area network frequently share a common administrative
service, a common file system, and a common user base.
1.2 OBJECTIVE:
The aim of this project is to design, implement, and integrate a version of the Transmission
Control Protocol (TCP) in the simulator, so that, when completed, more realistic simulations can
be run to see what happens with traffic streams running over the simulated network and how
several different protocols communicate and interact.
1.3 SCOPE:
This project is very relevant in this technical age of the Internet, as it explains TCP/IP's four-layer
protocol suite. It can therefore be used in coaching centres, colleges, and companies, and in
science museums, to help understand the protocols of all four layers.
TCP/IP is the main protocol suite used to provide interoperability between numerous different
types of computers. TCP/IP also supports routing, and is supported by the majority of networks
today. The TCP/IP protocol suite was developed by the Advanced Research Projects Agency (ARPA)
of the Department of Defense during 1969. The protocol was designed to be routable and
efficient.
TCP/IP is a standard, routable enterprise networking protocol and the most complete and widely
accepted protocol available. All modern operating systems support TCP/IP, and most large private
networks rely on TCP/IP for much of their traffic.
TCP/IP is also a technology for connecting dissimilar systems. Many TCP/IP application protocols
were designed to access and transfer data between dissimilar systems. These protocols include
HTTP, FTP, and Telnet.
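As a small, hedged example of one of these application protocols, the sketch below issues an HTTP GET request using Java's standard java.net classes; the URL is an arbitrary placeholder, and the sketch is independent of the project's simulator.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// Minimal application-protocol sketch (illustrative only): an HTTP GET request
// issued over TCP/IP with the standard java.net classes.
public class HttpGetDemo {

    public static void main(String[] args) throws Exception {
        URL url = new URL("http://example.com/");   // placeholder URL
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");

        System.out.println("Status: " + conn.getResponseCode());
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            // Print only the first line of the response body.
            System.out.println(in.readLine());
        }
        conn.disconnect();
    }
}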
Protocols are the rules, standards, and procedures that enable information to be communicated in
the network environment; in other words, the rules and standards that control the way computers
communicate are referred to as protocols.
Physical layer: This layer is special compared to the other layers of the model, because it is the
only one where data is physically moved across the network interface. All of the other layers
perform useful functions to create messages to be sent, but they must all be transmitted down the
protocol stack to the physical layer, where they are actually sent out over the network.
The physical layer is also “special” in that it is the only layer that really does not apply specifically
to TCP/IP. Even in studying TCP/IP, however, it is still important to understand its significance and
role in relation to the other layers where TCP/IP protocols reside.
The physical layer defines a number of network functions, not just hardware cables and cards.
Link Layer: This layer defines the networking methods within the scope of the local network link on
which hosts communicate without intervening routers. It describes the protocols used to define
the local network topology and the interfaces needed to effect transmission of Internet Layer
datagrams to next-neighbour hosts. The Link Layer is used to move packets between the Internet
Layer interfaces of two different hosts on the same link.
Internet Layer: This layer solves the problem of sending packets across one or more networks.
Internetworking requires sending data from the source network to the destination network; this
process is called routing. In the Internet Protocol Suite, the Internet Protocol performs two basic
functions: host addressing and packet routing.
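To make these two functions concrete, the hedged sketch below parses dotted-decimal addresses and selects the longest matching prefix from a small, invented routing table; it illustrates the idea only and is not the simulator's routing code.

import java.util.LinkedHashMap;
import java.util.Map;

// Minimal internet-layer sketch (illustrative only): addressing (parsing a
// dotted-decimal IPv4 address) and routing (longest-prefix match over a
// small routing table).
public class IpRoutingDemo {

    // Convert "a.b.c.d" into a 32-bit integer.
    static int toInt(String ip) {
        String[] p = ip.split("\\.");
        return (Integer.parseInt(p[0]) << 24) | (Integer.parseInt(p[1]) << 16)
             | (Integer.parseInt(p[2]) << 8)  |  Integer.parseInt(p[3]);
    }

    public static void main(String[] args) {
        // Routing table: prefix -> next hop (sample entries).
        Map<String, String> routes = new LinkedHashMap<>();
        routes.put("10.0.0.0/8",  "RouterA");
        routes.put("10.1.0.0/16", "RouterB");
        routes.put("0.0.0.0/0",   "DefaultGateway");

        String destination = "10.1.2.3";
        int dest = toInt(destination);

        String bestHop = null;
        int bestLen = -1;
        for (Map.Entry<String, String> e : routes.entrySet()) {
            String[] parts = e.getKey().split("/");
            int prefix = toInt(parts[0]);
            int len = Integer.parseInt(parts[1]);
            int mask = (len == 0) ? 0 : ~0 << (32 - len);
            // Keep the most specific (longest) prefix that matches.
            if ((dest & mask) == (prefix & mask) && len > bestLen) {
                bestLen = len;
                bestHop = e.getValue();
            }
        }
        System.out.println(destination + " -> " + bestHop + " (matched /" + bestLen + ")");
    }
}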
Transport Layer (host-to-host): The Transport Layer constitutes the networking regime between
two network hosts, either on the local network or on remote networks separated by routers. The
Transport Layer provides a uniform networking interface that hides the actual topology (layout) of
the underlying network connections. This is where flow-control, error-correction, and connection
protocols exist, such as TCP.
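Because the report names Java as the implementation language, the following self-contained sketch (an illustration under that assumption, not the project's code) shows host-to-host transport over TCP using the standard java.net sockets: a loopback server echoes one line sent by a client. The port is chosen automatically by the operating system and the message is arbitrary.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

// Minimal host-to-host transport sketch (illustrative only): a loopback TCP
// server echoes one line sent by a client over a reliable, in-order connection.
public class TcpLoopbackDemo {

    public static void main(String[] args) throws Exception {
        ServerSocket server = new ServerSocket(0);          // any free port
        int port = server.getLocalPort();

        Thread echo = new Thread(() -> {
            try (Socket s = server.accept();
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(s.getInputStream()));
                 PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                out.println("ECHO: " + in.readLine());       // echo the one line back
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
        echo.start();

        try (Socket client = new Socket("localhost", port);
             PrintWriter out = new PrintWriter(client.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(client.getInputStream()))) {
            out.println("HELLO");
            System.out.println(in.readLine());               // prints "ECHO: HELLO"
        }
        echo.join();
        server.close();
    }
}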
3.3 OPERATIONAL FEASIBILITY
This includes the following questions:
> Is there sufficient support for the users?
> Will the proposed system cause any harm?
4. LITERATURE SURVEY
4.1 WORK DONE BY OTHERS
4.2 BENEFITS
To allow computers with different disk operating systems, languages, and cabling to share resources.
Local Area Network (LAN) technologies like Ethernet, Token Ring, etc., can all be simulated
with a typical simulator, and the user can test and analyze various standard results, apart from
devising some novel protocol or strategy for routing, etc.
The system can be further enhanced in the future, as the total development is based on the CBD
(component-based development) process model.
TCP/IP can be used to establish connections between different types of computers and
servers. Providing this type of interoperability is one of the main advantages of TCP/IP.
TCP/IP is an industry-standard, open protocol. This means that it is not controlled by a single
institute.
TCP/IP operates independently of the operating system.
TCP/IP includes support for a number of routing protocols.
Enables internetworking between organizations.
TCP/IP has a scalable, client/server architecture.
TCP/IP includes support for name and address resolution services, including the Domain Name
Service (DNS), the Dynamic Host Configuration Protocol (DHCP), and the Windows Internet Name
Service (WINS), as illustrated in the sketch below.
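A minimal, hedged example of the name-resolution support mentioned in the last point: the sketch below resolves a host name through the platform's configured resolver using java.net.InetAddress. The host name is an arbitrary placeholder, and the sketch is independent of the project's code.

import java.net.InetAddress;

// Minimal name-resolution sketch (illustrative only): resolving a host name to
// an IP address via the platform's DNS resolver.
public class DnsLookupDemo {

    public static void main(String[] args) throws Exception {
        InetAddress address = InetAddress.getByName("www.example.com"); // placeholder host
        System.out.println("Host name  : " + address.getHostName());
        System.out.println("IP address : " + address.getHostAddress());
    }
}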
5. TECHNICAL PART
5.1 TECHNOLOGY USED:
FEATURES OF JAVA
Java is an object-oriented programming (OOP) language. Unlike C++, it enforces the object-oriented paradigm on the programmer. The features used in this project include the following (a brief sketch of both follows the list):
Polymorphism
Multi-Threading
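The hedged sketch below illustrates the two listed features together, using invented class names: each Layer subclass overrides simulate() (polymorphism), and each simulation runs on its own thread (multi-threading).

// Minimal sketch (illustrative only) of polymorphism and multi-threading in Java.
public class JavaFeaturesDemo {

    abstract static class Layer {
        abstract void simulate();                 // overridden by each subclass
    }

    static class PhysicalLayer extends Layer {
        void simulate() { System.out.println("Simulating physical layer..."); }
    }

    static class TransportLayer extends Layer {
        void simulate() { System.out.println("Simulating transport layer..."); }
    }

    public static void main(String[] args) throws InterruptedException {
        Layer[] layers = { new PhysicalLayer(), new TransportLayer() };
        Thread[] threads = new Thread[layers.length];
        for (int i = 0; i < layers.length; i++) {
            // Polymorphism: the JVM dispatches to the subclass's simulate()
            // at run time. Multi-threading: each layer runs on its own thread.
            threads[i] = new Thread(layers[i]::simulate);
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();                             // wait for both simulations
        }
    }
}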
PROTOCOL SIMULATOR is implemented using the Component Assembly Model. The project was
completed through several phases, each carried out at different incremental levels:
Requirement Analysis: In this phase of our project, the requirements of the system are analyzed,
which enables us to understand the nature of the software to be built. The information domain,
required functions, behavior, performance, and interfaces are studied.
Design: In this phase we decided on the data structures, software architecture, interfaces, and
details of the project. This design is then translated into code in the next step.
Coding: The design was translated into a machine-readable form in this step.
Testing: In this phase, the newly developed components (that is, those which are not reused) and
the interfaces of the system had to be tested.
For Component Assembly Model software to flourish, there are technologies across different
industries that have proven to be friendly to this type of SDLC model.
1) The Component Assembly Model is a revolutionary model that can offer benefits to large-scale
and experienced businesses.
2) Best practices, available components, and mash-ups provide a way for this model to work.
3) Large businesses that have years of experience can use their familiarity with the industry and
present tools to develop their own software.
6.2 REQUIREMENT ANALYSIS
6.2.1 SOFTWARE REQUIREMENT SPECIFICATION
System analysis is a logical process; the objective of this stage is not actually to solve the problem
but to determine what must be done to solve it. The basic objective of the analysis stage is to
develop a logical model of the system using tools such as UML (Unified Modelling Language).
The SRS document itself states in precise and explicit language those functions and capabilities a
software system (i.e., a software application, an e-commerce Web site, and so on) must provide, as
well as any required constraints by which the system must abide. The SRS also functions as
a blueprint for completing a project with as little cost growth as possible. The SRS is often referred
to as the "parent" document because all subsequent project management documents, such as
design specifications, statements of work, software architecture specifications, testing and
validation plans, and documentation plans, are related to it.
It's important to note that an SRS contains functional and nonfunctional requirements only; it
doesn't offer design suggestions, possible solutions to technology or business issues, or any other
information other than what the development team understands the customer's system
requirements to be.
It provides feedback to the customer. An SRS is the customer's assurance that the
development organization understands the issues or problems to be solved and the
software behavior necessary to address those problems. Therefore, the SRS should be
written in natural language (versus a formal language), in an unambiguous manner, and may also
include charts, tables, data flow diagrams, decision tables, and so on.
It decomposes the problem into component parts. The simple act of writing down software
requirements in a well-designed format organizes information, places borders around the
problem, solidifies ideas, and helps break down the problem into its component parts in an
orderly fashion.
It serves as an input to the design specification. As mentioned previously, the SRS serves as
the parent document to subsequent documents, such as the software design specification
and statement of work. Therefore, the SRS must contain sufficient detail in the functional
system requirements so that a design solution can be devised.
It serves as a product validation check. The SRS also serves as the parent document for
testing and validation strategies that will be applied to the requirements for verification.
[Use-case diagram: the Simulate use case communicates with the layer use cases (Data Link Layer, Network Layer, Transport Layer) and includes functions such as Transmission Modes, Switching, Flow Control, Error Control, Access Control, ARP, RARP, TCP, and UDP.]
Scope planning refers to a project management process that defines boundaries and deliverables.
The basic matrix of a scope planning analysis consists of three main categories (initiation,
planning, and definition) and two control categories (verification and change control) interspersed
between the main categories.
Initiation inputs contain program deliverable description, strategic planning, program selection
criteria, and historical information.
Tools and techniques include program selection methods and expert judgment. The output of the
initiation phase will include a program charter, the identification and assignment of a program
director, and the identification of known constraints and assumptions. The planning category
covers descriptions on deliverables, the program charter, constraints, and assumptions. Tools and
techniques involved in this category include deliverable analysis, a benefit/cost analysis, and the
identification of alternatives. The final main category includes a statement of scope, a definition of
assumptions and constraints, and other planning outputs and historical information. Tools and
techniques involved include work breakdown structure templates and decomposition. The output
of definition is the work breakdown structure and the defined scope section of the project
management plan.
Two control devices, verification and change control, are interspersed between the main categories
of scope planning. Verification's inputs are work results and deliverable documentation.
Inspection is its sole tool and technique. Formal acceptance is verification's output and is
necessary to advance the project management plan to the next level.
Change control is situated between planning and definition. Its inputs are comprised of work
breakdown structure, performance reports, change requests, and the scope management plan.
Tools and techniques include the scope change control system, performance measurements and
additional planning when indicated. The outputs of change control are changes in scope,
corrective actions, and lessons learned entered in the knowledge base for other project
management considerations.
6.3.2 PROJECT RESOURCE
In project management terminology, resources are required to carry out the project tasks. They
can be people, equipment, facilities, funding, or anything else capable of definition (usually other
than labour) required for the completion of a project activity. The lack of a resource will therefore
be a constraint on the completion of the project activity. Resources may be storable or
non-storable. Storable resources remain available unless depleted by usage, and may be replenished
by project tasks which produce them. Non-storable resources must be renewed for each time
period, even if not utilised in previous time periods.
Resource scheduling, availability and optimisation are considered key to successful project
management.
Allocation of limited resources is based on the priority given to each of the project activities. Their
priority is calculated using the critical path method and heuristic analysis. For a case with a
constraint on the number of resources, the objective is to create the most efficient schedule
possible, minimising project duration and maximising the use of the resources available.
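For readers unfamiliar with the critical path method mentioned above, the sketch below (with an invented four-activity network and durations) performs the forward pass that yields the earliest project finish; it is a teaching illustration, not part of the project.

import java.util.LinkedHashMap;
import java.util.Map;

// Minimal critical-path sketch (illustrative only): a forward pass over a tiny
// activity network (durations in days, activities listed in dependency order).
public class CriticalPathDemo {

    public static void main(String[] args) {
        Map<String, Integer> duration = new LinkedHashMap<>();
        Map<String, String[]> preds = new LinkedHashMap<>();
        duration.put("Analysis", 5);  preds.put("Analysis", new String[]{});
        duration.put("Design",   7);  preds.put("Design",   new String[]{"Analysis"});
        duration.put("Coding",  10);  preds.put("Coding",   new String[]{"Design"});
        duration.put("Testing",  4);  preds.put("Testing",  new String[]{"Coding"});

        Map<String, Integer> earliestFinish = new LinkedHashMap<>();
        int projectFinish = 0;
        for (String activity : duration.keySet()) {
            int earliestStart = 0;
            // An activity can start only after all its predecessors finish.
            for (String p : preds.get(activity)) {
                earliestStart = Math.max(earliestStart, earliestFinish.get(p));
            }
            int finish = earliestStart + duration.get(activity);
            earliestFinish.put(activity, finish);
            projectFinish = Math.max(projectFinish, finish);
            System.out.println(activity + ": ES=" + earliestStart + ", EF=" + finish);
        }
        System.out.println("Earliest project finish: " + projectFinish + " days");
    }
}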
A team-based organization is a non-traditional, innovative work environment relying on teams to
achieve its objectives. Its major characteristics include (1) mutual trust, (2) employee empowerment
in planning, organization, and goal-setting, (3) shared responsibility for self-management, (4) shared
accountability for performance, and (5) shared leadership.
One key ingredient in the scheduling process is experience in the project area; another is
experience with scheduling in general. In every industry area there will be a body of knowledge
that associates the accomplishment of known work efforts with a time duration. In some
industries, there are books recording industry standards for use by cost and schedule estimators.
Interviewing those who have had experience with similar projects is the best way to determine
how long things will really take.
When preparing a schedule estimate, consider that transition between activities often takes time.
Organizations or resources outside your direct control may not share your sense of schedule
urgency, and their work may take longer to complete. Beware of all external dependency
relationships. Uncertain resources of talent, equipment, or data will likely result in extending the
project schedule.
Experience teaches that things usually take longer than we think they will, and that giving away
schedule margin in the planning phase is a sure way to ensure a highly stressed project effort.
People tend to be optimistic in estimating schedules and, on average, estimate only 80% of the
time actually required.
Failure to meet schedule goals is most often due to unrealistic deadlines, passive project
execution, unforeseen problems, or things overlooked in the plan.
6.4 DESIGN
The design approach that was suited for the project turned out to be object-oriented design. It
creates a representation of the real-world problem domain and maps it into a solution domain that
is software. Unlike other methods, object-oriented design results in a design that interconnects
data objects (data items) and processing operations in a way that modularizes information and
processing, rather than processing alone.
Architectural Design
Architectural design represents the structure of data and the program components that are
required to build the system. It considers the architectural style that the system will take, the
structure and properties of the components that constitute the system, and the interrelationships
that occur among all architectural components. Because this is a client-server model based on the
component-based development model, the main components of the system are classes, which are
used to build the whole system and can be reused later. The architectural design shows which
classes are used and how they interact with each other to provide the full functionality.
User Interface Design:
User interface design creates an effective communication medium between a human and a
computer. Following a set of interface design principles, the design identifies interface objects and
actions and then creates a screen layout that forms the basis for the user interface design.
In our project we followed top-down design methodology. There are various reasons for this.
A top-down design approach starts by identifying the major components of the system,
decomposing them into their lower-level components and iterating until the desired level of detail
is achieved. Top-down design methods often result in some form of Stepwise Refinement. Starting
from an abstract design, in each step the design is refined to a more concrete level, until we reach
a level where no more refinement is needed and the design can be implemented directly.
A top-down design approach is suitable only if the specifications of the system are clearly known
and the system development is from scratch. Hence, it is a reasonable approach if a waterfall type
of process model is being used.
[Main screen of the simulator: a single activity offering two options, SIMULATE and EXIT.]
7. CONCLUSION
At last, we can say that this project is very important in this technical age of the Internet, as it
explains the protocols of the TCP/IP model.
This project explains the working of the TCP/IP model through a layered-architecture simulator
model.
So this project can be used as a tutorial in coaching centres and colleges, and in science museums,
to understand how the protocols work.
8. REFERENCES
The resources referred to during the making of the project are:
www.google.com
www.wikipedia.com