
NET – CENTRIC COMPUTING

1.0) INTRODUCTION
The underlying principle of Net-Centric Computing (NCC) is a distributed environment where
applications and data are downloaded from servers and exchanged with peers across a network on an
as-needed basis. NCC is an ongoing area of interest to a wide variety of software engineering
researchers and practitioners, in part because it is an enabling technology for modern distributed
systems (e.g., Web applications). Knowledge of NCC is an essential requirement in architecting,
constructing, and evolving the complex software systems that power today’s enterprises.
The widespread interest in Ubiquitous and Pervasive Computing systems will give a new impulse to
Net-Centric Computing (NCC) systems.
The activities for Net-Centric Technology consist of three layers.
– The Information Service Layer pertains to the abstraction of objects; the focus here is on quality,
security, auditability and control.
– The Network Service Layer pertains to all aspects of communications, particularly configuration,
fault management, security, performance and accounting.
– The Component Control Layer pertains to the development, acquisition and implementation of
components that form the infrastructure for distributed computing.
2) DISTRIBUTED COMPUTING
Distributed computing is a model in which components of a software system are shared among
multiple computers to improve efficiency and performance.
According to the narrowest of definitions, distributed computing is limited to programs with
components shared among computers within a limited geographic area.
Broader definitions include shared tasks as well as program components. In the broadest sense
of the term, distributed computing just means that something is shared among multiple
systems which may also be in different locations.
In the enterprise, distributed computing has often meant putting various steps in business
processes at the most efficient places in a network of computers. For example, in the typical
distribution using the 3-tier model, user interface processing is performed in the PC at the user's
location, business processing is done in a remote computer, and database access and processing is
conducted in another computer that provides centralized access for many business processes.
Typically, this kind of distributed computing uses the client/server communications model.
The Distributed Computing Environment (DCE) is a widely-used industry standard that supports this
kind of distributed computing. On the Internet, third-party service providers now offer some
generalized services that fit into this model.
Grid computing is a computing model involving a distributed architecture of large numbers of
computers connected to solve a complex problem. In the grid computing model, servers or
personal computers run independent tasks and are loosely linked by the Internet or low-speed
networks. Individual participants may allow some of their computer's processing time to be put at
the service of a large problem. The largest grid computing project is SETI@home, in which
individual computer owners volunteer some of their multitasking processing cycles (while
concurrently still using their computer) to the Search for Extraterrestrial Intelligence (SETI) project.
This computer-intensive problem uses thousands of PCs to download and search radio telescope
data.
There is a great deal of disagreement over the difference between distributed computing and
grid computing. According to some, grid computing is just one type of distributed computing.
The SETI project, for example, characterizes the model it’s based on as distributed computing.
Similarly, cloud computing, which simply involves hosted services made available to users from
a remote location, may be considered a type of distributed computing, depending on who you
ask.
One of the first uses of grid computing was the breaking of a cryptographic code by a group that is
now known as distributed.net. That group also describes its model as distributed computing.
Distributed computing is a field of computer science that studies distributed systems. A distributed
system is a model in which components located on networked computers communicate and
coordinate their actions by passing messages. The components interact with each other in order to
achieve a common goal. Three significant characteristics of distributed systems are: concurrency of
components, lack of a global clock, and independent failure of components. Examples of
distributed systems vary from Service Oriented Architecture (SOA)-based systems to massively
multiplayer online games to peer-to-peer applications.
A computer program that runs in a distributed system is called a distributed program, and
distributed programming is the process of writing such programs. There are many alternatives for the
message passing mechanism, including pure HTTP, RPC-like connectors and message queues.
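To make the message-passing idea concrete, here is a minimal Python sketch (an illustration only; the process, queue, and function names are invented for the example). It passes messages between two independent processes through queues from the standard library, one of the mechanisms mentioned above:

    # Message passing between two processes via queues (standard library only).
    from multiprocessing import Process, Queue

    def worker(inbox: Queue, outbox: Queue) -> None:
        # Receive one task message, process it locally, and send back a reply message.
        task = inbox.get()                     # blocking receive
        outbox.put({"result": task["x"] * 2})  # send the reply

    if __name__ == "__main__":
        to_worker, from_worker = Queue(), Queue()
        p = Process(target=worker, args=(to_worker, from_worker))
        p.start()
        to_worker.put({"x": 21})               # send a message to the other process
        print(from_worker.get())               # receive its reply: {'result': 42}
        p.join()

The two processes share no memory; everything each one learns about the other arrives as a message, which is exactly the situation assumed in the distributed models discussed below.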
A goal and challenge pursued by some computer scientists and practitioners in distributed systems is
location transparency; however, this goal has fallen out of favour in industry, as distributed
systems are different from conventional non-distributed systems, and the differences, such as
network partitions, partial system failures, and partial upgrades, cannot simply be "papered over" by
attempts at "transparency" .
Distributed computing also refers to the use of distributed systems to solve computational
problems. In distributed computing, a problem is divided into many tasks, each of which is solved
by one or more computers, which communicate with each other by message passing.
2.01) Parallel System
The word distributed in terms such as "distributed system", "distributed programming", and
"distributed algorithm" originally referred to computer networks where individual computers were
physically distributed within some geographical area. The terms are nowadays used in a much wider
sense, even referring to autonomous processes that run on the same physical computer and interact
with each other by message passing.
While there is no single definition of a distributed system, the following defining properties are
commonly used:
 There are several autonomous computational entities (computers or nodes), each of which has its
own local memory.
 The entities communicate with each other by message passing.
A distributed system may have a common goal, such as solving a large computational problem; the
user then perceives the collection of autonomous processors as a unit. Alternatively, each computer
may have its own user with individual needs, and the purpose of the distributed system is to
coordinate the use of shared resources or provide communication services to the users.
Other typical properties of distributed systems include the following:
 The system has to tolerate failures in individual computers.
 The structure of the system (network topology, network latency, number of computers) is not known
in advance, the system may consist of different kinds of computers and network links, and the
system may change during the execution of a distributed program.
 Each computer has only a limited, incomplete view of the system. Each computer may know only
one part of the input.
2.02) Parallel and distributed computing
Distributed systems are groups of networked computers, which have the same goal for their work.
The terms "concurrent computing", "parallel computing", and "distributed computing" have
a lot of overlap, and no clear distinction exists between them. The same system may be
characterized both as "parallel" and "distributed"; the processors in a typical distributed system
run concurrently in parallel. Parallel computing may be seen as a particular tightly coupled
form of distributed computing, and distributed computing may be seen as a loosely coupled
form of parallel computing. Nevertheless, it is possible to roughly classify concurrent systems as
"parallel" or "distributed" using the following criteria:
 In parallel computing, all processors may have access to a shared memory to exchange
information between processors.
 In distributed computing, each processor has its own private memory (distributed memory).
Information is exchanged by passing messages between the processors.
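As a contrast to the message-passing sketch earlier, the following illustrative fragment (the names and counts are invented for the example) shows the shared-memory style: several threads of one program read and update the same variable, coordinating only through a lock:

    # Shared-memory style: all workers see and update the same counter directly.
    import threading

    counter = 0
    lock = threading.Lock()

    def add_many(n: int) -> None:
        global counter
        for _ in range(n):
            with lock:          # exclusive access to the shared memory location
                counter += 1

    threads = [threading.Thread(target=add_many, args=(10_000,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter)               # 40000: every worker updated the same shared value

In a distributed-memory setting no such shared counter exists; each processor would hold its own partial count, and the totals would have to be combined by exchanging messages.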
The figure on the right illustrates the difference between distributed and parallel systems. Figure (a)
is a schematic view of a typical distributed system; the system is represented as a network topology
in which each node is a computer and each line connecting the nodes is a communication link.
Figure (b) shows the same distributed system in more detail: each computer has its own local
memory, and information can be exchanged only by passing messages from one node to another by
using the available communication links. Figure (c) shows a parallel system in which each processor
has a direct access to a shared memory.
The situation is further complicated by the traditional uses of the terms parallel and
distributed algorithm that do not quite match the above definitions of parallel and distributed
systems (see below for more detailed discussion). Nevertheless, as a rule of thumb, high-
performance parallel computation in a shared-memory multiprocessor uses parallel algorithms while
the coordination of a large-scale distributed system uses distributed algorithms.
The use of concurrent processes that communicate by message-passing has its roots in operating
system architectures studied in the 1960s. The first widespread distributed systems were local-area
networks such as Ethernet, which was invented in the 1970s.
ARPANET, the predecessor of the Internet, was introduced in the late 1960s, and ARPANET e-mail
was invented in the early 1970s. E-mail became the most successful application of ARPANET, and
it is probably the earliest example of a large-scale distributed application. In addition to ARPANET,
and its successor, the Internet, other early worldwide computer networks included Usenet and
FidoNet from the 1980s, both of which were used to support distributed discussion systems.
The study of distributed computing became its own branch of computer science in the late 1970s and
early 1980s. The first conference in the field, Symposium on Principles of Distributed Computing
(PODC), dates back to 1982, and its European counterpart International Symposium on Distributed
Computing (DISC) was first held in 1985.
2.03) Architectures of Distributed Systems
Various hardware and software architectures are used for distributed computing. At a lower level, it
is necessary to interconnect multiple CPUs with some sort of network, regardless of whether that
network is printed onto a circuit board or made up of loosely coupled devices and cables. At a higher
level, it is necessary to interconnect processes running on those CPUs with some sort of
communication system.
Distributed programming typically falls into one of several basic architectures: client–server, three-
tier, n-tier, or peer-to-peer; or into the categories of loose coupling or tight coupling.
 Client–server: architectures where smart clients contact the server for data then format and display it
to the users. Input at the client is committed back to the server when it represents a permanent
change.
 Three-tier: architectures that move the client intelligence to a middle tier so that stateless clients can
be used. This simplifies application deployment. Most web applications are three-tier.
 n-tier: architectures that refer typically to web applications which further forward their requests to
other enterprise services. This type of application is the one most responsible for the success of
application servers.
 Peer-to-peer: architectures where there are no special machines that provide a service or manage the
network resources. Instead all responsibilities are uniformly divided among all machines, known as
peers. Peers can serve both as clients and as servers.
Another basic aspect of distributed computing architecture is the method of communicating and
coordinating work among concurrent processes. Through various message passing protocols,
processes may communicate directly with one another, typically in a master/slave relationship.
Alternatively, a "database-centric" architecture can enable distributed computing to be done without
any form of direct inter-process communication, by utilizing a shared database.
2.04) Applications of distributed systems
Reasons for using distributed systems and distributed computing may include:
1. The very nature of an application may require the use of a communication network that connects
several computers: for example, data produced in one physical location and required in another
location.
2. There are many cases in which the use of a single computer would be possible in principle, but the
use of a distributed system is beneficial for practical reasons. For example, it may be more cost-
efficient to obtain the desired level of performance by using a cluster of several low-end computers,
in comparison with a single high-end computer. A distributed system can provide more reliability
than a non-distributed system, as there is no single point of failure. Moreover, a distributed system
may be easier to expand and manage than a monolithic uniprocessor system.
2.05) Examples of distributed systems
Examples of distributed systems and applications of distributed computing include the following:
 telecommunication networks:
o telephone networks and cellular networks,
o computer networks such as the Internet,
o wireless sensor networks,
o routing algorithms;
 network applications:
o World Wide Web and peer-to-peer networks,
o massively multiplayer online games and virtual reality communities,
o distributed databases and distributed database management systems,
o network file systems,
o distributed information processing systems such as banking systems and airline reservation systems;
 real-time process control:
o aircraft control systems,
o industrial control systems;
 parallel computation:
o scientific computing, including cluster computing and grid computing and various volunteer
computing projects (see the list of distributed computing projects),
o distributed rendering in computer graphics
2.06) Distributed algorithm
The field of concurrent and distributed computing studies similar questions in the case of either
multiple computers, or a computer that executes a network of interacting processes: which
computational problems can be solved in such a network and how efficiently? However, it is not at
all obvious what is meant by "solving a problem" in the case of a concurrent or distributed system:
for example, what is the task of the algorithm designer, and what is the concurrent or distributed
equivalent of a sequential general-purpose computer?
The discussion below focuses on the case of multiple computers, although many of the issues are the
same for concurrent processes running on a single computer.
Three viewpoints are commonly used:
Parallel algorithms in shared-memory model
 All processors have access to a shared memory. The algorithm designer chooses the program
executed by each processor.
 One theoretical model is the parallel random access machine (PRAM). However, the
classical PRAM model assumes synchronous access to the shared memory.
 Shared-memory programs can be extended to distributed systems if the underlying operating system
encapsulates the communication between nodes and virtually unifies the memory across all
individual systems.
 A model that is closer to the behavior of real-world multiprocessor machines and takes into account
the use of machine instructions, such as Compare-and-swap (CAS), is that of asynchronous shared
memory. There is a wide body of work on this model, a summary of which can be found in the
literature.
Parallel algorithms in message-passing model
 The algorithm designer chooses the structure of the network, as well as the program executed by
each computer.
 Models such as Boolean circuits and sorting networks are used. A Boolean circuit can be seen as a
computer network: each gate is a computer that runs an extremely simple computer program.
Similarly, a sorting network can be seen as a computer network: each comparator is a computer.
Distributed algorithms in message-passing model
 The algorithm designer only chooses the computer program. All computers run the same program.
The system must work correctly regardless of the structure of the network.
 A commonly used model is a graph with one finite-state machine per node.
In the case of distributed algorithms, computational problems are typically related to graphs. Often
the graph that describes the structure of the computer network is the problem instance. This is
illustrated in the following example.
An example
Consider the computational problem of finding a coloring of a given graph G. Different fields might
take the following approaches:
Centralized algorithms
 The graph G is encoded as a string, and the string is given as input to a computer. The computer
program finds a coloring of the graph, encodes the coloring as a string, and outputs the result. (A
short code sketch of this centralized approach follows this list.)
Parallel algorithms

 Again, the graph G is encoded as a string. However, multiple computers can access the same string
in parallel. Each computer might focus on one part of the graph and produce a coloring for that part.
 The main focus is on high-performance computation that exploits the processing power of multiple
computers in parallel.
Distributed algorithms
 The graph G is the structure of the computer network. There is one computer for each node of G and
one communication link for each edge of G. Initially, each computer only knows about its immediate
neighbors in the graph G; the computers must exchange messages with each other to discover more
about the structure of G. Each computer must produce its own color as output.
 The main focus is on coordinating the operation of an arbitrary distributed system.
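The following sketch illustrates the centralized approach referred to above (the graph, its adjacency lists, and the function name are invented for the example): a single program that sees the whole graph G greedily assigns each node the smallest colour not used by its already-coloured neighbours.

    # Centralized greedy coloring: one computer reads all of G and colors every node.
    def greedy_coloring(graph):
        coloring = {}
        for node in graph:
            used = {coloring[n] for n in graph[node] if n in coloring}
            color = 0
            while color in used:               # smallest color not used by neighbors
                color += 1
            coloring[node] = color
        return coloring

    G = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "d"], "d": ["c"]}
    print(greedy_coloring(G))                  # {'a': 0, 'b': 1, 'c': 2, 'd': 0}

In the distributed version of the same problem, no single computer holds the dictionary G; each node would have to learn its neighbours' colours by exchanging messages and then choose its own colour locally.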
While the field of parallel algorithms has a different focus than the field of distributed algorithms,
there is a lot of interaction between the two fields. For example, the Cole–Vishkin algorithm for
graph coloring was originally presented as a parallel algorithm, but the same technique can also be
used directly as a distributed algorithm.
Moreover, a parallel algorithm can be implemented either in a parallel system (using shared
memory) or in a distributed system (using message passing). The traditional boundary between
parallel and distributed algorithms (choose a suitable network vs. run in any given network) does not
lie in the same place as the boundary between parallel and distributed systems (shared memory vs.
message passing).
2.061) Complexity measures
In parallel algorithms, yet another resource in addition to time and space is the number of computers.
Indeed, often there is a trade-off between the running time and the number of computers: the
problem can be solved faster if there are more computers running in parallel (see speedup). If a
decision problem can be solved in polylogarithmic time by using a polynomial number of
processors, then the problem is said to be in the class NC.[35] The class NC can be defined equally
well by using the PRAM formalism or Boolean circuits—PRAM machines can simulate Boolean
circuits efficiently and vice versa.
In the analysis of distributed algorithms, more attention is usually paid to communication operations
than to computational steps. Perhaps the simplest model of distributed computing is a synchronous
system where all nodes operate in a lockstep fashion. During each communication round, all nodes
in parallel (1) receive the latest messages from their neighbours, (2) perform arbitrary local
computation, and (3) send new messages to their neighbors. In such systems, a central complexity
measure is the number of synchronous communication rounds required to complete the task.[37]
This complexity measure is closely related to the diameter of the network. Let D be the diameter of
the network. On the one hand, any computable problem can be solved trivially in a synchronous
distributed system in approximately 2D communication rounds: simply gather all information in one
location (D rounds), solve the problem, and inform each node about the solution (D rounds).
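This round bound can be made concrete with a small simulation (the example graph is invented): on a path of four nodes the diameter is 3, and flooding what each node knows to its neighbours in synchronous rounds takes exactly that many rounds before every node knows every input.

    # Synchronous rounds on a path graph of diameter D = 3.
    graph = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
    knowledge = {v: {v} for v in graph}              # each node initially knows only itself

    rounds = 0
    while any(knowledge[v] != set(graph) for v in graph):
        outgoing = {v: set(knowledge[v]) for v in graph}   # (1) messages sent this round
        for v in graph:                                    # (2)+(3) receive and merge
            for u in graph[v]:
                knowledge[v] |= outgoing[u]
        rounds += 1
    print(rounds)                                    # 3, i.e. the diameter of the network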
On the other hand, if the running time of the algorithm is much smaller than D communication
rounds, then the nodes in the network must produce their output without having the possibility to
obtain information about distant parts of the network. In other words, the nodes must make globally
consistent decisions based on information that is available in their local neighbourhood. Many
distributed algorithms are known with the running time much smaller than D rounds, and
understanding which problems can be solved by such algorithms is one of the central research
questions of the field.
Another commonly used measure is the total number of bits transmitted in the network (cf.
communication complexity).
2.062) Other problems
Traditional computational problems take the perspective that we ask a question, a computer (or a
distributed system) processes the question for a while, and then produces an answer and stops.
However, there are also problems where we do not want the system to ever stop. Examples of such
problems include the dining philosophers problem and other similar mutual exclusion problems. In
these problems, the distributed system is supposed to continuously coordinate the use of shared
resources so that no conflicts or deadlocks occur.
There are also fundamental challenges that are unique to distributed computing. The first example is
challenges that are related to fault-tolerance. Examples of related problems include consensus
problems, Byzantine fault tolerance, and self-stabilization.
A lot of research is also focused on understanding the asynchronous nature of distributed systems:
 Synchronizers can be used to run synchronous algorithms in asynchronous systems.
 Logical clocks provide a causal happened-before ordering of events.
 Clock synchronization algorithms provide globally consistent physical time stamps.

Election
Coordinator election (or leader election) is the process of designating a single process as the
organizer of some task distributed among several computers (nodes). Before the task is begun, all
network nodes are either unaware which node will serve as the "coordinator" (or leader) of the task,
or unable to communicate with the current coordinator. After a coordinator election algorithm has
been run, however, each node throughout the network recognizes a particular, unique node as the
task coordinator.
The network nodes communicate among themselves in order to decide which of them will get into
the "coordinator" state. For that, they need some method in order to break the symmetry among
them. For example, if each node has unique and comparable identities, then the nodes can compare
their identities, and decide that the node with the highest identity is the coordinator.
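A minimal illustration of this symmetry-breaking rule (the node identities below are invented) is a single broadcast round in a fully connected network, after which every node independently applies "highest identity wins" and all of them name the same coordinator:

    # Leader election by highest identity, simulated for a complete network.
    node_ids = [17, 4, 42, 23, 8]                    # unique, comparable identities
    inbox = {me: [] for me in node_ids}

    for sender in node_ids:                          # one round: everyone broadcasts its id
        for receiver in node_ids:
            if receiver != sender:
                inbox[receiver].append(sender)

    decisions = {me: max([me] + inbox[me]) for me in node_ids}
    assert len(set(decisions.values())) == 1         # every node chose the same coordinator
    print(decisions)                                 # all nodes recognise node 42 as leader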
The definition of this problem is often attributed to LeLann, who formalized it as a method to create
a new token in a token ring network in which the token has been lost.
Coordinator election algorithms are designed to be economical in terms of total bytes transmitted,
and time. The algorithm suggested by Gallager, Humblet, and Spira for general undirected graphs
has had a strong impact on the design of distributed algorithms in general, and won the Dijkstra
Prize for an influential paper in distributed computing.
Many other algorithms were suggested for different kinds of network graphs, such as undirected
rings, unidirectional rings, complete graphs, grids, directed Euler graphs, and others. A general
method that decouples the issue of the graph family from the design of the coordinator election
algorithm was suggested by Korach, Kutten, and Moran.
In order to perform coordination, distributed systems employ the concept of coordinators. The
coordinator election problem is to choose a process from among a group of processes on different
processors in a distributed system to act as the central coordinator. Several central coordinator
election algorithms exist.
2.07) Properties of distributed systems
So far the focus has been on designing a distributed system that solves a given problem. A
complementary research problem is studying the properties of a given distributed system.
The halting problem is an analogous example from the field of centralised computation: we are
given a computer program and the task is to decide whether it halts or runs forever. The halting
problem is undecidable in the general case, and naturally understanding the behaviour of a computer
network is at least as hard as understanding the behaviour of one computer.
However, there are many interesting special cases that are decidable. In particular, it is possible to
reason about the behaviour of a network of finite-state machines. One example is telling whether a
given network of interacting (asynchronous and non-deterministic) finite-state machines can reach a
deadlock. This problem is PSPACE-complete,[51] i.e., it is decidable, but it is not likely that there is
an efficient (centralised, parallel or distributed) algorithm that solves the problem in the case of large
networks.
2.08) Advantages of Distributed Systems:
 Sharing Data: There is a provision in the environment where a user at one site may be able to access
the data residing at other sites.
 Autonomy: Because of data distribution, each site is able to retain a degree of control over the data
that are stored locally. In a distributed system there is a global database administrator responsible for
the entire system; a part of the global database administrator's responsibilities is delegated to the local
database administrator for each site. Depending upon the design of the distributed database, each
local database administrator may have a different degree of local autonomy.
 Availability: If one site fails in a distributed system, the remaining sites may be able to continue
operating. Thus a failure of a site doesn't necessarily imply the shutdown of the system.
2.09) Disadvantages of Distributed Systems:
The added complexity required to ensure proper co-ordination among the sites is the major
disadvantage. This increased complexity takes various forms:
 Software Development Cost: It is more difficult to implement a distributed database system; thus it
is more costly.
 Greater Potential for Bugs: Since the sites that constitute the distributed database system operate in
parallel, it is harder to ensure the correctness of algorithms, especially their operation during failures
of part of the system, and recovery from failures. The potential exists for extremely subtle bugs.
 Increased Processing Overhead: The exchange of information and additional computation required
to achieve inter-site co-ordination are a form of overhead that does not arise in a centralized system.

3) MOBILE COMPUTING
Mobile computing is human–computer interaction by which a computer is expected to be
transported during normal usage, which allows for transmission of data, voice and video. Mobile
computing involves mobile communication, mobile hardware, and mobile software. Communication
issues include ad hoc networks and infrastructure networks as well as communication properties,
protocols, data formats and concrete technologies. Hardware includes mobile devices or device
components. Mobile software deals with the characteristics and requirements of mobile applications.
3.01 ) Principles of Mobile Computing
 Portability: Facilitates movement of device(s) within the mobile computing environment.
 Connectivity: Ability to continuously stay connected with minimal amount of lag/downtime,
without being affected by movements of the connected nodes
 Social Interactivity: Maintaining the connectivity to collaborate with other users, at least within the
same environment.
 Individuality: Adapting the technology to suit individual needs.
Stated in more detail, the same principles are:
 Portability: Devices/nodes connected within the mobile computing system should facilitate
mobility. These devices may have limited device capabilities and limited power supply, but should
have a sufficient processing capability and physical portability to operate in a movable environment.
 Connectivity: This defines the quality of service (QoS) of the network connectivity. In a mobile
computing system, the network availability is expected to be maintained at a high level with the
minimal amount of lag/downtime without being affected by the mobility of the connected nodes.
 Interactivity: The nodes belonging to a mobile computing system are connected with one another to
communicate and collaborate through active transactions of data.
 Individuality: A portable device or a mobile node connected to a mobile network often denotes an
individual; a mobile computing system should be able to adapt the technology to cater to individual
needs and also to obtain contextual information of each node.
3.02) Devices
Some of the most common forms of mobile computing devices are as given below:
 Portable computers, compact, lightweight units including a full character set keyboard and primarily
intended as hosts for software that may be parameterized, such as laptops/desktops,
smartphones/tablets, etc.
 Smart cards that can run multiple applications but are typically used for payment, travel and secure
area access.
 Cellular telephones, telephony devices which can call from a distance through cellular networking
technology.
 Wearable computers, mostly limited to functional keys and primarily intended as incorporation of
software agents, such as bracelets, keyless implants, etc.
The existence of these classes is expected to be long lasting, and complementary in personal usage,
with none replacing the others in all features of convenience.
Other types of mobile computers have been introduced since the 1990s including the:
 Portable computer (discontinued)
 Personal digital assistant/Enterprise digital assistant (discontinued)
 Ultra-Mobile PC (discontinued)
 Laptop
 Smartphones and tablets
 Wearable computer
 Carputer
3.03) MAJOR ADVANTAGES
Mobile computing has changed the complete landscape of our day-to-day life. Following are the
major advantages of Mobile Computing −
Location Flexibility
This has enabled users to work from anywhere as long as there is a connection established. A user
can work without being in a fixed position. Their mobility ensures that they are able to carry out
numerous tasks at the same time and perform their stated jobs.
Saves Time
The time consumed or wasted while travelling from different locations or to the office and back has
been slashed. One can now access all the important documents and files over a secure channel or
portal and work as if they were on their computer. It has enhanced telecommuting in many
companies. It has also reduced unnecessary incurred expenses.
Enhanced Productivity
Users can work efficiently and effectively from whichever location they find comfortable. This in
turn enhances their productivity level.
Ease of Research
Research has been made easier, since users earlier were required to go to the field and search for
facts and feed them back into the system. It has also made it easier for field officers and researchers
to collect and feed data from wherever they are without making unnecessary trips to and from the
office to the field.
Entertainment
Video and audio recordings can now be streamed on-the-go using mobile computing. It's easy to
access a wide variety of movies, educational and informative material. With the improvement and
availability of high speed data connections at considerable cost, one is able to get all the
entertainment they want as they browse the internet for streamed data. One is able to watch news,
movies, and documentaries among other entertainment offers over the internet. This was not possible
before mobile computing dawned on the computing world.
Streamlining of Business Processes
Business processes are now easily available through secured connections. Looking into security
issues, adequate measures have been put in place to ensure authentication and authorization of the
user accessing the services.
Some business functions can be run over secure links and sharing of information between business
partners can also take place.
Meetings, seminars and other informative services can be conducted using video and voice
conferencing. Travel time and expenditure is also considerably reduced.
4.0) SECURITY ISSUES
Mobile computing has its fair share of security concerns as any other technology. Due to its nomadic
nature, it's not easy to monitor the proper usage. Users might have different intentions on how to
utilize this privilege. Improper and unethical practices such as hacking, industrial espionage,
pirating, online fraud and malicious destruction are just a few of the problems experienced by
mobile computing.
Another big problem plaguing mobile computing is credential verification. When users share
usernames and passwords, it poses a major threat to security. This being a very sensitive issue,
most companies are very reluctant to implement mobile computing because of the dangers of
misrepresentation.
The problem of identity theft is very difficult to contain or eradicate. Issues with unauthorized access
to data and information by hackers, is also an enormous problem. Outsiders gain access to steal vital
data from companies, which is a major hindrance in rolling out mobile computing services.
No company wants to lay open their secrets to hackers and other intruders, who will in turn sell the
valuable information to their competitors. It's also important to take the necessary precautions to
minimize these threats from taking place. Some of those measures include −
 Hiring qualified personnel

 Installing security hardware and software

 Educating the users on proper mobile computing ethics

 Auditing and developing sound, effective policies to govern mobile computing

 Enforcing proper access rights and permissions
These are just a few ways to help deter possible threats to any company planning to offer mobile
computing. Since information is vital, all possible measures should be evaluated and implemented
for safeguard purposes.
In the absence of such measures, it's possible for exploits and other unknown threats to infiltrate and
cause irreparable harm. These may be in terms of reputation or financial penalties. In such cases, it's
very easy for the technology to be misused in different unethical practices.
If these factors aren’t properly worked on, it might be an avenue for constant threat. Various threats
still exist in implementing this kind of technology.
4.1) Limitations
 Range and bandwidth: Mobile Internet access is generally slower than direct cable connections,
using technologies such as GPRS and EDGE, and more recently HSDPA, HSUPA, 3G and 4G
networks and also the proposed 5G network. These networks are usually available within range of
commercial cell phone towers. High speed network wireless LANs are inexpensive but have very
limited range.
 Security standards: When working mobile, one is dependent on public networks, requiring careful
use of a VPN. Security is a major concern when applying mobile computing standards across a fleet,
since the VPN can easily be attacked through the huge number of networks interconnected along the
line.
 Power consumption: When a power outlet or portable generator is not available, mobile computers
must rely entirely on battery power. Combined with the compact size of many mobile devices, this
often means unusually expensive batteries must be used to obtain the necessary battery life.
 Transmission interferences: Weather, terrain, and the range from the nearest signal point can all
interfere with signal reception. Reception in tunnels, some buildings, and rural areas is often poor.
 Potential health hazards: People who use mobile devices while driving are often distracted from
driving and are thus assumed more likely to be involved in traffic accidents. (While this may seem
obvious, there is considerable discussion about whether banning mobile device use while driving
reduces accidents or not.) Cell phones may interfere with sensitive medical devices. Questions
concerning mobile phone radiation and health have been raised.
 Human interface with device: Screens and keyboards tend to be small, which may make them hard
to use. Alternate input methods such as speech or handwriting recognition require training.
4.2) In-vehicle computing and fleet computing
Many commercial and government field forces deploy a rugged portable computer with their fleet of
vehicles. This requires the units to be anchored to the vehicle for driver safety, device security, and
ergonomics. Rugged computers are rated for severe vibration associated with large service vehicles
and off-road driving and the harsh environmental conditions of constant professional use such as in
emergency medical services, fire, and public safety.

[Figure: The Compaq Portable, circa 1982, a pre-laptop portable computer]


Other elements affecting function in a vehicle:
 Operating temperature: A vehicle cabin can often experience temperature swings from −30–60 °C
(−22–140 °F). Computers typically must be able to withstand these temperatures while operating.
Typical fan-based cooling has stated limits of 35–38 °C (95–100 °F) of ambient temperature, and
temperatures below freezing require localized heaters to bring components up to operating
temperature (based on independent studies by the SRI Group and by Panasonic R&D).
 Vibration can decrease the life expectancy of computer components, notably rotational storage such
as HDDs.
 Visibility of standard screens becomes an issue in bright sunlight.
 Touchscreens allow users to easily interact with the units in the field without removing gloves.
 High-temperature battery settings: Lithium ion batteries are sensitive to high temperature conditions
for charging. A computer designed for the mobile environment should be designed with a high-
temperature charging function that limits the charge to 85% or less of capacity.
 External antenna connections are routed outside the typical metal cabins of vehicles, which would
otherwise block wireless reception, and take advantage of much more capable external communication
and navigation equipment.
4.3) Security issues involved in mobile
Mobile security has become increasingly important in mobile computing. It is of particular concern
as it relates to the security of personal information now stored on the smartphone.
More and more users and businesses use smartphones as a means of planning and organizing their
work and private life. Within companies, these technologies are causing profound changes in the
organization of information systems and therefore they have become the source of new risks. Indeed,
smartphones collect and compile an increasing amount of sensitive information to which access must
be controlled to protect the privacy of the user and the intellectual property of the company.
All smartphones are preferred targets of attacks. These attacks exploit weaknesses related to
smartphones that can come from means of wireless telecommunication like WiFi networks and
GSM. There are also attacks that exploit software vulnerabilities from both the web browser and
operating system. Finally, there are forms of malicious software that rely on the weak knowledge of
average users.
Different security counter-measures are being developed and applied to smartphones, from security
in different layers of software to the dissemination of information to end users. There are good
practices to be observed at all levels, from design to use, through the development of operating
systems, software layers, and downloadable apps.
4.4) Portable computing devices
Main articles: Mobile device and Portable computer
Several categories of portable computing devices can run on batteries but are not usually classified
as laptops: portable computers, PDAs, ultra mobile PCs (UMPCs), tablets and smartphones.
 A portable computer (discontinued) is a general-purpose computer that can be easily moved from
place to place, but cannot be used while in transit, usually because it requires some "setting-up" and
an AC power source. The most famous example is the Osborne 1. Portable computers are also called
a "transportable" or a "luggable" PC.
 A personal digital assistant (PDA) (discontinued) is a small, usually pocket-sized, computer with
limited functionality. It is intended to supplement and to synchronize with a desktop computer,
giving access to contacts, address book, notes, e-mail and other features.
[Figure: A Palm TX PDA]
 An ultra mobile PC (discontinued) is a full-featured, PDA-sized computer running a general-purpose
operating system.
 Tablets/phones: a slate tablet is shaped like a paper notebook. Smartphones are essentially the same
devices as tablets; the main difference is that smartphones are much smaller and pocketable. Instead of
a physical keyboard, these devices have a touchscreen with a virtual keyboard, but can also link to a
physical keyboard via wireless Bluetooth or USB. These devices
include features other computer systems would not be able to incorporate, such as built-in cameras,
because of their portability.
 A carputer is installed in an automobile. It operates as a wireless computer, sound system, GPS, and
DVD player. It also contains word processing software and is Bluetooth compatible.
 A Pentop (discontinued) is a computing device the size and shape of a pen. It functions as a writing
utensil, MP3 player, language translator, digital storage device, and calculator.
 An application-specific computer is one that is tailored to a particular application. For example,
Ferranti introduced a handheld application-specific mobile computer (the MRT-100) in the form of a
clipboard for conducting opinion polls.
Boundaries that separate these categories are blurry at times. For example, the OQO UMPC is also a
PDA-sized tablet PC; the Apple eMate had the clamshell form factor of a laptop, but ran PDA
software. The HP Omnibook line of laptops included some devices small enough to be called
ultra mobile PCs. The hardware of the Nokia 770 internet tablet is essentially the same as that of a
PDA such as the Zaurus 6000; the only reason it's not called a PDA is that it does not have PIM
software. On the other hand, both the 770 and the Zaurus can run some desktop Linux software,
usually with modifications.
4.5) Mobile data communication
Wireless data connections used in mobile computing take three general forms. Cellular data
service uses technologies such as GSM, CDMA or GPRS, 3G networks such as W-CDMA, EDGE or
CDMA2000, and more recently 4G networks such as LTE and LTE-Advanced. These networks are
usually available within range of commercial cell towers. Wi-Fi connections offer higher
performance,[11] may be either on a private business network or accessed through public hotspots,
and have a typical range of 100 feet indoors and up to 1000 feet outdoors. Satellite Internet access
covers areas where cellular and Wi-Fi are not available and may be set up anywhere the user has a
line of sight to the satellite's location, which for satellites in geostationary orbit means having an
unobstructed view of the southern sky. Some enterprise deployments combine networks from
multiple cellular networks or use a mix of cellular, Wi-Fi and satellite. When using a mix of
networks, a mobile virtual private network (mobile VPN) not only handles the security concerns, but
also performs the multiple network logins automatically and keeps the application connections alive
to prevent crashes or data loss during network transitions or coverage loss.

5.0) NETWORK SECURITY


Network security is a specialized field in computer networking that involves securing a computer
network infrastructure. Network security is typically handled by a network administrator or
system administrator who implements the security policy, network software and hardware
needed to protect a network and the resources accessed through the network from unauthorized
access, while also ensuring that employees have adequate access to the network and resources to
work.
A network security system typically relies on layers of protection and consists of multiple
components including network monitoring and security software in addition to hardware and
appliances. All components work together to increase the overall security of the computer
network.
5.1) Physical Network
A network is defined as two or more computing devices connected together for sharing resources
efficiently. Further, connecting two or more networks together is known as internetworking.
Thus, the Internet is just an internetwork – a collection of interconnected networks.
For setting up its internal network, an organization has various options. It can use a wired
network or a wireless network to connect all workstations. Nowadays, organizations are mostly
using a combination of both wired and wireless networks.
5.2) Wired & Wireless Networks
In a wired network, devices are connected to each other using cables. Typically, wired networks
are based on Ethernet protocol where devices are connected using the Unshielded Twisted Pair
(UTP) cables to the different switches. These switches are further connected to the network
router for accessing the Internet.
In a wireless network, devices are connected to an access point through radio transmissions. The
access points are further connected through cables to a switch/router for external network access.
Wireless networks have gained popularity due to the mobility offered by them. Mobile devices
need not be tied to a cable and can roam freely within the wireless network range. This ensures
efficient information sharing and boosts productivity.
5.3) Vulnerabilities & Attacks
The common vulnerability that exists in both wired and wireless networks is an “unauthorized
access” to a network. An attacker can connect his device to a network through an unsecured
hub/switch port. In this regard, wireless networks are considered less secure than wired networks,
because a wireless network can be easily accessed without any physical connection.
After gaining access, an attacker can exploit this vulnerability to launch attacks such as:
 Sniffing the packet data to steal valuable information.
 Denial of service to legitimate users on a network by flooding the network medium with
spurious packets.
 Spoofing physical identities (MAC) of legitimate hosts and then stealing data or further
launching a ‘man-in-the-middle’ attack.
5.4) Network Protocol
A network protocol is a set of rules that governs communications between devices connected on a
network. Protocols include mechanisms for making connections, as well as formatting rules for data
packaging for messages sent and received.
Several computer network protocols have been developed, each designed for specific purposes.
The popular and widely used protocols are TCP/IP with associated higher- and lower-level
protocols.
5.5) TCP/IP Protocol
Transmission Control Protocol (TCP) and Internet Protocol (IP) are two distinct computer
network protocols mostly used together. Due to their popularity and wide adoption, they are built
in all operating systems of networked devices.
IP corresponds to the Network layer (Layer 3) whereas TCP corresponds to the Transport layer
(Layer 4) in OSI. TCP/IP applies to network communications where the TCP transport is used to
deliver data across IP networks.
TCP/IP protocols are commonly used with other protocols such as HTTP, FTP, SSH at
application layer and Ethernet at the data link/physical layer.
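The layering can be seen from a program's point of view in the short sketch below (it assumes outbound network access and uses "example.com" purely as a placeholder host): the application-layer HTTP text is handed to a TCP transport connection, which the operating system carries across the IP network layer.

    # An application-layer HTTP request written directly onto a TCP (transport) socket;
    # the operating system delivers the TCP segments across the IP (network) layer.
    import socket

    with socket.create_connection(("example.com", 80)) as tcp:   # TCP connection over IP
        tcp.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
        print(tcp.recv(200).decode(errors="replace"))            # the reply is plain text

Note that both the request and the reply travel as plain text, which is exactly the HTTP weakness discussed below.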

The TCP/IP protocol suite was created in 1980 as an internetworking solution with very little concern
for security aspects.
It was developed for communication within a limited, trusted network. However, over a period of time,
this protocol suite became the de-facto standard for unsecured Internet communication.
Some of the common security vulnerabilities of TCP/IP protocol suits are:
 HTTP is an application layer protocol in the TCP/IP suite used for transferring the files that make up
web pages from web servers. These transfers are done in plain text, and an intruder can easily read the
data packets exchanged between the server and a client.
 Another HTTP vulnerability is weak authentication between the client and the web server
during the initialization of the session. This vulnerability can lead to a session hijacking attack
where the attacker steals an HTTP session of the legitimate user.
 A TCP protocol vulnerability is the three-way handshake for connection establishment. An attacker
can launch a denial-of-service attack, “SYN flooding”, to exploit this vulnerability. He establishes a
lot of half-open sessions by not completing the handshake. This leads to server overloading and
eventually a crash.
 The IP layer is susceptible to many vulnerabilities. Through an IP protocol header modification, an
attacker can launch an IP spoofing attack.
Apart from the above-mentioned, many other security vulnerabilities exist in the TCP/IP
protocol family, in its design as well as in its implementation.
Incidentally, in TCP/IP based network communication, if one layer is hacked, the other layers do
not become aware of the hack and the entire communication gets compromised. Hence, there is a
need to employ security controls at each layer to ensure foolproof security.
5.6) DNS Protocol
Domain Name System (DNS) is used to resolve host domain names to IP addresses. Network
users depend on DNS functionality mainly while browsing the Internet by typing a URL in the
web browser.
In an attack on DNS, an attacker’s aim is to modify a legitimate DNS record so that it gets
resolved to an incorrect IP address. It can direct all traffic for that IP to the wrong computer. An
attacker can either exploit DNS protocol vulnerability or compromise the DNS server for
materializing an attack.
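For orientation, the sketch below (using "example.com" as a placeholder host) shows what normal DNS resolution looks like from a program: the resolver maps a host name to IP addresses, and it is precisely this mapping that a DNS attack tries to corrupt.

    # Resolving a host name through the system's DNS resolver.
    import socket

    for family, _, _, _, sockaddr in socket.getaddrinfo("example.com", 443,
                                                        proto=socket.IPPROTO_TCP):
        print(family.name, sockaddr[0])    # e.g. AF_INET 93.184.216.34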
5.7) DNS Cache Poisoning
DNS cache poisoning is an attack exploiting a vulnerability found in the DNS protocol. An
attacker may poison the cache by forging a response to a recursive DNS query sent by a resolver
to an authoritative server. Once the cache of a DNS resolver is poisoned, the host will get directed
to a malicious website and may compromise credential information by communicating with this
site.
5.8) ICMP Protocol
Internet Control Message Protocol (ICMP) is a basic network management protocol of
the TCP/IP networks. It is used to send error and control messages regarding the status of
networked devices.
ICMP is an integral part of the IP network implementation and thus is present in every network
setup. ICMP has its own vulnerabilities and can be abused to launch an attack on a network.
The common attacks that can occur on a network due to ICMP vulnerabilities are:
 ICMP allows an attacker to carry out network reconnaissance to determine network topology
and paths into the network. ICMP sweep involves discovering all host IP addresses which are
alive in the entire target’s network.
 Traceroute is a popular ICMP-based utility that is used to map the target network by describing the
path in real time from the client to the remote host.
 An attacker can launch a denial of service attack using an ICMP vulnerability. This attack
involves sending ICMP ping packets that exceed 65,535 bytes to the target device. The target
computer fails to handle such a packet properly, which can cause the operating system to crash.
Other protocols such as ARP, DHCP, SMTP, etc. also have their vulnerabilities that can be
exploited by the attacker to compromise the network security. We will discuss some of these
vulnerabilities in later chapters.
The lack of concern for security during the design and implementation of these protocols has
turned into a main cause of threats to network security.
5.9) Goals of Network Security
As discussed in earlier sections, there exist a large number of vulnerabilities in the network. Thus,
during transmission, data is highly vulnerable to attacks. An attacker can target the
communication channel, obtain the data, and read the same or re-insert a false message to
achieve his nefarious aims.
Network security is not only concerned with the security of the computers at each end of the
communication chain; it aims to ensure that the entire network is secure.
Network security entails protecting the usability, reliability, integrity, and safety of the network and
its data. Effective network security prevents a variety of threats from entering or spreading on a
network.
The primary goals of network security are Confidentiality, Integrity, and Availability. These three
pillars of network security are often represented as the CIA triangle.
 Confidentiality. The function of confidentiality is to protect precious business data from
unauthorized persons. Confidentiality part of network security makes sure that the data is
available only to the intended and authorized persons.
 Integrity. This goal means maintaining and assuring the accuracy and consistency of data.
The function of integrity is to make sure that the data is reliable and is not changed by
unauthorized persons.
 Availability. The function of availability in network security is to make sure that the data and
network resources/services are continuously available to the legitimate users whenever they
require them.
5.10) Achieving Network Security
Ensuring network security may appear to be very simple. The goals to be achieved seem to be
straightforward. But in reality, the mechanisms used to achieve these goals are highly complex,
and understanding them involves sound reasoning.
International Telecommunication Union (ITU), in its recommendation on security architecture
X.800, has defined certain mechanisms to bring the standardization in methods to achieve
network security. Some of these mechanisms are:
 Encipherment. This mechanism provides data confidentiality services by transforming data
into forms that are not readable by unauthorized persons. It uses encryption-decryption
algorithms with secret keys. (A code sketch of encipherment and digital signatures follows this list.)
 Digital signatures. This mechanism is the electronic equivalent of an ordinary handwritten signature,
applied to electronic data. It provides authenticity of the data.
 Access control. This mechanism is used to provide access control services. These
mechanisms may use the identification and authentication of an entity to determine and enforce
the access rights of the entity.
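The sketch below shows the first two mechanisms in miniature. It assumes the third-party Python "cryptography" package (any comparable library would serve), and the message contents are invented for the example.

    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Encipherment: a secret key transforms readable data into an unreadable token.
    key = Fernet.generate_key()
    box = Fernet(key)
    token = box.encrypt(b"confidential business data")
    assert box.decrypt(token) == b"confidential business data"

    # Digital signature: the private key signs; anyone holding the public key can
    # verify, which provides authenticity of the data.
    signer = Ed25519PrivateKey.generate()
    message = b"electronic document to authenticate"
    signature = signer.sign(message)
    signer.public_key().verify(signature, message)   # raises an exception if forged
    print("data enciphered and signature verified")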
Having developed and identified various security mechanisms for achieving network security, it
is essential to decide where to apply them; both physically (at what location) and logically (at
what layer of an architecture such as TCP/IP).
5.11) Security Mechanisms at Networking Layers
Several security mechanisms have been developed in such a way that they can be deployed at a
specific layer of the OSI network model.
 Security at Application Layer – Security measures used at this layer are application specific.
Different types of applications would need separate security measures. In order to ensure
application layer security, the applications need to be modified.
It is considered that designing a cryptographically sound application protocol is very difficult,
and implementing it properly is even more challenging. Hence, the preferred application layer
security mechanisms for protecting network communications are standards-based solutions that
have been in use for some time.
An example of application layer security protocol is Secure Multipurpose Internet Mail
Extensions (S/MIME), which is commonly used to encrypt e-mail messages. DNSSEC is another
protocol at this layer used for secure exchange of DNS query messages.
 Security at Transport Layer – Security measures at this layer can be used to protect the data
in a single communication session between two hosts. The most common use for transport layer
security protocols is protecting the HTTP and FTP session traffic. The Transport Layer Security
(TLS) and Secure Socket Layer (SSL) are the most common protocols used for this purpose.
 Network Layer – Security measures at this layer can be applied to all applications; thus, they
are not application-specific. All network communications between two hosts or networks can be
protected at this layer without modifying any application. In some environments, network layer
security protocol such as Internet Protocol Security (IPsec) provides a much better solution than
transport or application layer controls because of the difficulties in adding controls to individual
applications. However, security protocols at this layer provide less communication flexibility
than may be required by some applications.
Incidentally, a security mechanism designed to operate at a higher layer cannot provide
protection for data at lower layers, because the lower layers perform functions of which the
higher layers are not aware. Hence, it may be necessary to deploy multiple security mechanisms
for enhancing the network security.
Various business services are now offered online through client-server applications. The most
popular forms are web application and e-mail. In both applications, the client communicates to
the designated server and obtains services.
While using a service from any server application, the client and server exchange a lot of
information on the underlying intranet or Internet. We are aware of the fact that these information
transactions are vulnerable to various attacks.
Network security entails securing data against attacks while it is in transit on a network. To
achieve this goal, many real-time security protocols have been designed. Such protocols need to
meet at least the following primary objectives:
 The parties can negotiate interactively to authenticate each other.
 Establish a secret session key before exchanging information on the network.
 Exchange the information in encrypted form.
Interestingly, these protocols work at different layers of networking model. For example,
S/MIME protocol works at Application layer, SSL protocol is developed to work at transport
layer, and IPsec protocol works at Network layer.
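
As a hedged illustration of the layering just described, the sketch below opens a transport layer
security session using Python's standard ssl and socket modules; the host name example.org is purely
illustrative. The handshake carries out the objectives listed above: the peers are authenticated (here,
the client verifies the server's certificate), a session key is established, and subsequent traffic is
exchanged in encrypted form.

import socket
import ssl

context = ssl.create_default_context()   # verifies the server certificate against system CAs

with socket.create_connection(("example.org", 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="example.org") as tls_sock:
        print("Negotiated protocol:", tls_sock.version())   # e.g. 'TLSv1.3'
        tls_sock.sendall(b"GET / HTTP/1.1\r\nHost: example.org\r\nConnection: close\r\n\r\n")
        print(tls_sock.recv(200))         # first bytes of the reply, protected in transit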

6.0) CLIENT/SERVER COMPUTING


Client/server is a program relationship in which one program (the client) requests a service or resource
from another program (the server).
Although the client/server model can be used by programs within a single computer, it is a more
important concept for networking. In this case, the client establishes a connection to the server over a
local area network (LAN) or wide-area network (WAN), such as the Internet. Once the server has
fulfilled the client's request, the connection is terminated. Your Web browser is a client program that has
requested a service from a server; in fact, the service and resource the server provided is the delivery of
this Web page.
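
The request/response relationship can be illustrated with a minimal sketch in Python: a toy TCP server
and a client running in one process. The port number and the echo-style "service" are assumptions made
only for this demonstration.

import socket
import socketserver
import threading

class EchoHandler(socketserver.BaseRequestHandler):
    def handle(self):
        request = self.request.recv(1024)             # server receives the client's request
        self.request.sendall(b"SERVED: " + request)   # ...and returns a response

server = socketserver.TCPServer(("127.0.0.1", 9090), EchoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client establishes a connection, makes its request, reads the response,
# and the connection is then terminated.
with socket.create_connection(("127.0.0.1", 9090)) as client:
    client.sendall(b"hello server")
    print(client.recv(1024))                          # b'SERVED: hello server'

server.shutdown()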
Client/Server Computing
Client/Server computing is a computing model in which client and server computers communicate
with each other over a network. In client/server computing, a server takes requests from client
computers and shares its resources, applications and/or data with one or more client computers on
the network, and a client is a computing device that initiates contact with a server in order to make
use of a shareable resource.
The client/server computing model was first introduced at Xerox PARC in the 1970s and has since
evolved into today's highly advanced client/server computing networks.
6.1) Client–server model
The client–server model is a distributed application structure that partitions tasks or workloads
between the providers of a resource or service, called servers, and service requesters, called clients.
Often clients and servers communicate over a computer network on separate hardware, but both
client and server may reside in the same system. A server host runs one or more server programs
which share their resources with clients. A client does not share any of its resources, but requests a
server's content or service function. Clients therefore initiate communication sessions with servers
which await incoming requests. Examples of computer applications that use the client–server model
are Email, network printing, and the World Wide Web.

6.2) Client and server role


The client-server characteristic describes the relationship of cooperating programs in an
application. The server component provides a function or service to one or many clients, which
initiate requests for such services. Servers are classified by the services they provide. For
example, a web server serves web pages and a file server serves computer files. A shared
resource may be any of the server computer's software and electronic components, from
programs and data to processors and storage devices. The sharing of resources of a server
constitutes a service.
Whether a computer is a client, a server, or both, is determined by the nature of the application
that requires the service functions. For example, a single computer can run web server and file
server software at the same time to serve different data to clients making different kinds of
requests. Client software can also communicate with server software within the same computer.
Communication between servers, such as to synchronize data, is sometimes called inter-server or
server-to-server communication.
6.3) Client and server communication
In general, a service is an abstraction of computer resources and a client does not have to be
concerned with how the server performs while fulfilling the request and delivering the response.
The client only has to understand the response based on the well-known application protocol, i.e.
the content and the formatting of the data for the requested service.
Clients and servers exchange messages in a request–response messaging pattern. The client
sends a request, and the server returns a response. This exchange of messages is an example of
inter-process communication. To communicate, the computers must have a common language,
and they must follow rules so that both the client and the server know what to expect. The
language and rules of communication are defined in a communications protocol. All client-server
protocols operate in the application layer. The application layer protocol defines the basic
patterns of the dialogue. To formalize the data exchange even further, the server may implement
an application programming interface (API). The API is an abstraction layer for accessing a
service. By restricting communication to a specific content format, it facilitates parsing. By
abstracting access, it facilitates cross-platform data exchange.
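
The sketch below makes the request-response pattern and the role of a shared application protocol
concrete: a toy HTTP endpoint returns JSON, and the client only needs to understand the agreed content
format, not how the server produced the answer. The path, port, and payload are illustrative assumptions.

import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class ApiHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"service": "demo", "status": "ok"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")   # agreed content format
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

server = HTTPServer(("127.0.0.1", 8081), ApiHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

with urllib.request.urlopen("http://127.0.0.1:8081/status") as reply:
    print(json.load(reply))        # {'service': 'demo', 'status': 'ok'}

server.shutdown()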
A server may receive requests from many distinct clients in a short period of time. A computer
can only perform a limited number of tasks at any moment, and relies on a scheduling system to
prioritize incoming requests from clients to accommodate them. To prevent abuse and maximize
availability, server software may limit the availability to clients. Denial of service attacks are
designed to exploit a server's obligation to process requests by overloading it with excessive
request rates.
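
One common form of limiting availability to clients is rate limiting. The following is a minimal sketch of
a fixed-window request counter per client address; the window length and quota are purely illustrative
assumptions, not the behaviour of any particular server product.

import time
from collections import defaultdict

WINDOW_SECONDS = 60          # length of one counting window (assumed)
MAX_REQUESTS = 100           # per-client quota inside a window (assumed)

_counters = defaultdict(lambda: [0.0, 0])   # client -> [window start, request count]

def allow_request(client_addr, now=None):
    """Return True if this client is still under its per-window quota."""
    now = time.monotonic() if now is None else now
    window_start, count = _counters[client_addr]
    if now - window_start >= WINDOW_SECONDS:   # start a fresh window
        _counters[client_addr] = [now, 1]
        return True
    if count < MAX_REQUESTS:
        _counters[client_addr][1] = count + 1
        return True
    return False                               # over quota: reject, queue, or delay

# Example: the 101st request inside a single window is refused.
for _ in range(101):
    allowed = allow_request("203.0.113.7", now=0.0)
print(allowed)   # False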
6.4) Examples
When a bank customer accesses online banking services with a web browser (the client), the
client initiates a request to the bank's web server. The customer's login credentials may be stored
in a database, and the web server accesses the database server as a client. An application server
interprets the returned data by applying the bank's business logic, and provides the output to the
web server. Finally, the web server returns the result to the client web browser for display.
In each step of this sequence of client–server message exchanges, a computer processes a request
and returns data. This is the request-response messaging pattern. When all the requests are met,
the sequence is complete and the web browser presents the data to the customer.

This example illustrates a design pattern applicable to the client–server model: separation of
concerns.
6.5) Early history
An early form of client–server architecture is remote job entry, dating at least to OS/360
(announced 1964), where the request was to run a job, and the response was the output.
While formulating the client–server model in the 1960s and 1970s, computer scientists building
ARPANET (at the Stanford Research Institute) used the terms server-host (or serving host) and
user-host (or using-host), and these appear in the early documents RFC 5 and RFC 4. This usage
was continued at Xerox PARC in the mid-1970s.
One context in which researchers used these terms was in the design of a computer network
programming language called Decode-Encode Language (DEL). The purpose of this language
was to accept commands from one computer (the user-host), which would return status reports to
the user as it encoded the commands in network packets. Another DEL-capable computer, the
server-host, received the packets, decoded them, and returned formatted data to the user-host. A
DEL program on the user-host received the results to present to the user. This is a client–server
transaction. Development of DEL was just beginning in 1969, the year that the United States
Department of Defense established ARPANET (predecessor of Internet).

6.6) Client-host and server-host


Client-host and server-host have subtly different meanings than client and server. A host is any
computer connected to a network. Whereas the words server and client may refer either to a
computer or to a computer program, server-host and user-host always refer to computers. The
host is a versatile, multifunction computer; clients and servers are just programs that run on a
host. In the client–server model, a server is more likely to be devoted to the task of serving.
An early use of the word client occurs in "Separating Data from Function in a Distributed File
System", a 1978 paper by Xerox PARC computer scientists Howard Sturgis, James Mitchell, and
Jay Israel. The authors are careful to define the term for readers, and explain that they use it to
distinguish between the user and the user's network node (the client). (By 1992, the word server
had entered into general parlance.)
6.7) Centralized computing
The client–server model does not dictate that server-hosts must have more resources than client-
hosts. Rather, it enables any general-purpose computer to extend its capabilities by using the
shared resources of other hosts. Centralized computing, however, specifically allocates a large
amount of resources to a small number of computers. The more computation is offloaded from
client-hosts to the central computers, the simpler the client-hosts can be. It relies heavily on
network resources (servers and infrastructure) for computation and storage. A diskless node
loads even its operating system from the network, and a computer terminal has no operating
system at all; it is only an input/output interface to the server. In contrast, a fat client, such as a
personal computer, has many resources, and does not rely on a server for essential functions.
As microcomputers decreased in price and increased in power from the 1980s to the late 1990s,
many organizations transitioned computation from centralized servers, such as mainframes and
minicomputers, to fat clients. This afforded greater, more individualized dominion over
computer resources, but complicated information technology management. During the 2000s,
web applications matured enough to rival application software developed for a specific
microarchitecture. This maturation, more affordable mass storage, and the advent of service-
oriented architecture were among the factors that gave rise to the cloud computing trend of the
2010s.
6.8) Comparison with peer-to-peer architecture
In addition to the client–server model, distributed computing applications often use the peer-to-
peer (P2P) application architecture.
In the client–server model, the server is often designed to operate as a centralized system that
serves many clients. The computing power, memory and storage requirements of a server must
be scaled appropriately to the expected work-load (i.e., the number of clients connecting
simultaneously). Load-balancing and failover systems are often employed to scale the server
implementation.
In a peer-to-peer network, two or more computers (peers) pool their resources and communicate
in a decentralized system. Peers are coequal, or equipotent nodes in a non-hierarchical network.
Unlike clients in a client–server or client–queue–client network, peers communicate with each
other directly. In peer-to-peer networking, an algorithm in the peer-to-peer communications
protocol balances load, and even peers with modest resources can help to share the load. If a
node becomes unavailable, its shared resources remain available as long as other peers offer it.
Ideally, a peer does not need to achieve high availability because other, redundant peers make up
for any resource downtime; as the availability and load capacity of peers change, the protocol
reroutes requests.
Both client-server and master-slave are regarded as sub-categories of distributed peer-to-peer
systems.

7) BUILDING WEB APPLICATIONS


Web Application Development Process
Requirements for Developing Web Applications

7.1. Roadmap Document: Defining Web Application, Purpose, Goals and Direction
(Performed by client / project owner)
This initial task is an important part of the process. It requires putting together the Web
Application project goals and purpose.

This step establishes your project's clear direction and helps you focus on setting and achieving
your goal.

The Roadmap Document will specify the Web Application's future plan and objectives with
approximate timelines.

7.2. Researching and Defining Audience Scope and Security Documents


(Performed by client / project owner, or by Comentum, as a fee service)
This task requires researching the audience/users, and prospective clients (if any), and creating
an Analytic Report which includes the following approximate assessments:
- Type of audience for usability purposes:
Creating statistical reports on the percentage of users at each level (elementary, average,
advanced), along with audience age and gender

- Type and level of access:


Creating an Access Report, specifying users' access of Intranet, Internet, Extranet - single-level,
multi-level

- Type of audience for planning the security level:


Creating a Risk Statistical Document based on users' characteristics, zone's fraud level,
application's industry security breaches, and history of the audience's security breaches

- Quantitative statistics on audience:


Creating a Potential Visitors Report, broken down by reasonable periodic time frames

7.3. Creating Functional Specifications or Feature Summary Document


(Performed by client / project owner)
A Web Application Functionality Specifications Document is the key document in any Web
Application project. This document will list all of the functionalities and technical specifications
that the web application will be required to accomplish. Technically, this document can become
overwhelming if one has to follow the Functional Specifications rule and detail out each type of
user's behavior on a very large project. However, it is worth putting forth the effort to create this
document which will help prevent any future confusion or misunderstanding of the project
features and functionalities, by both the project owner and developer. A typical functional
specification will list every user's behavior, for example:

 When a visitor clicks on the "Add to Cart" button from the Product Showcase page, the item is
added to the visitor's shopping cart, the Product Showcase page closes and the user is taken to
the Shopping Cart page which shows the new item in the cart.
If creating a functional specification document is overwhelming to you, I recommend starting out
with a Specification Document or Feature Summary Document: either create sample screenshots
of the web application screens, or create a document that includes a summary list of the
application's features, for example:

 Product / Inventory Summary Showcase: displays a summary of items for sale, stock number,
item name, short description, one photo, price, and Add to Cart button.
 Product / Inventory One Item Showcase: displays the detail of one inventory item: stock
number, item name, long description, multiple photos (up to 10 photos), price, and Add to
Cart button.

7.4. Third Party Vendors Identification, Analysis and Selection


(Performed by client / project owner)
This task requires researching, identifying and selection of third party vendors, products and
services such as:

 Web Application Development Company - for detail information: How to Hire a Good Web
Application Development Company
 Merchant Account and Payment Gateway - for detail information: Guide to Merchant
Accounts and Payment Gateways
 SSL Certificate (example providers: Verisign, GeoTrust)
 Managed Server / Colocation Server Provider - for detail information: Managed Hosting
Comparison
 Server, Network, Firewall, Load Balancer Equipment (may not be needed if using a managed
server - example: DELL, Cisco 5520 Load Balancer)
 Fulfillment Centers (example: Shipwire)

7.5. Technology Selection, Technical Specifications, Web Application Structure and


Timelines
(Performed by Comentum)
This document is the blueprint of the technology and platform selection, development
environment, web application development structure and framework.
The Technical Specifications Document will detail out the technology used, licenses, versions
and forecasts.

The Timeline Document will identify the completion dates for the Web Application's features or
modules.

7.6. Application Visual Guide, Design Layout, Interface Design, Wire framing
(Created by the collaboration of the project owner and Comentum)
One of the main ingredients of a successful project is a web application whose interactions,
interface, and elements have a proven record for ease of use and provide the best user
experience.

This process starts with Comentum's Creative and Usability teams of experts creating the visual
guide, wireframing, or simply sketching out the user interface and interactions of the web
application.

Once the Application Interface and Interaction Models are approved, Comentum's creative team
designs the interface for the web application.

7.7. Web Application Development


(Executed by Comentum's Development Team)
The application's Design Interface is turned over to Comentum's Development Team who take
the following steps to develop the project:

1. Create the Web Application Architecture and Framework


2. Design the Database Structure
3. Develop / Customize the Web Application Module, Libraries and Classes
4. Complete the Development and Implement all Functionalities - Version 1.0
7.8. Beta Testing and Bug Fixing
(Executed by Comentum's Beta Testers)
Comentum's rigorous quality assurance testing helps produce the most secure and reliable web
applications.

8) SIX PHASES OF THE WEB SITE DESIGN AND DEVELOPMENT PROCESS


There are numerous steps in the web site design and development process, from gathering initial
information, to the creation of your web site, and finally to maintenance to keep your web site up
to date.

Phase One: Information Gathering


The first step in designing a successful web site is to gather information. Many things need to be
taken into consideration when we design the look and feel of your site, so we first ask a lot of
questions to help us understand your business and your needs in a web site.
Certain things to consider are:
Purpose
What is the purpose of the site? Do you want to provide information, promote a service, sell a
product… ?
Goals
What do you hope to accomplish by building this web site? Two of the more common goals are
either to make money or share information.
Target Audience
Is there a specific group of people that will help you reach your goals? It is helpful to picture the
“ideal” person you want to visit your web site. Consider their age, sex or interests – this will help
us determine the best design style for your site.
Content
What kind of information will the target audience be looking for on your site? Are they looking
for specific information, a particular product or service…?
Phase Two: Planning
Using the information gathered from phase one, we put together a plan for your web site.
Here we develop a site map – a list of all main topic areas of the site, as well as sub-topics (if
applicable). This gives us a guide as to what content will be on the site, and is essential to
developing a consistent, easy to understand navigational system. This is also the point where we
decide what technologies should be implemented – interactive forms, CMS (content management
system) such as WordPress, etc.
Phase Three: Design
Drawing from the information gathered up to this point, we determine the look and feel of the
site. Target audience is one of the key factors taken into consideration here. A site aimed at
teenagers, for example, will look much different than one meant for a financial institution. We
also incorporate elements such as the company logo or colors to help strengthen the identity of
your company on the web site.
Once we’ve designed a prototype, you are given access to the Client Studio, which is a secure
area of our web site. The Client Studio allows you to view your project throughout the design
and development stages. Most importantly, it gives you the opportunity to express your likes and
dislikes on the site design.
In this phase, communication is crucial to ensure that the final web site will match your needs
and taste. We work together in this way, exchanging ideas, until we arrive at the final design for
the site. Then development can begin…
Phase Four: Development
This is where the web site itself is created. We take all of the individual graphic elements from
the prototype and use them to create the functional web site. We also take your content and
distribute it throughout the site, in the appropriate areas.
This entire time, you will continue to be able to view your site in the Client Studio, and suggest
any additional changes or corrections you would like to have done.
Phase Five: Testing and Delivery
At this point, we attend to the final details and test your web site. We test the complete
functionality of forms and other scripts, and we check for last-minute compatibility issues
(viewing differences between web browsers), ensuring that the site is optimized to display
properly in the most recent browser versions.
Once we receive your final approval, it is time to deliver the site. We upload the files to your
server – in most cases, this also involves installing and configuring WordPress, along with a core
set of essential plugins to help enhance the site. Here we quickly test again to make sure that all
files have been uploaded correctly, and that the site continues to be fully functional. This marks
the official launch of your site, as it is now viewable to the public.
Phase Six: Maintenance
The development of your web site is not necessarily over, though. One way to bring repeat
visitors to your site is to offer new content or products on a regular basis. If this interests you, we
will be more than happy to continue working together with you to update the information on
your web site. We offer maintenance packages at reduced rates, based on how often you
anticipate making changes or additions to your site.

9) WEB DEVELOPMENT LIFE CYCLE

Similar to the traditional software development process, the website development life cycle too
can be divided into different steps. Such division helps align different activities towards a
progressive goal that ultimately culminates in successful project completion. Knowing more about
these steps will also help the team understand their respective roles in the context of a given task
and deliver maximum quality.

This article explains the different steps in the development process of web engineering. It is just
a guideline to help you understand how the process can be done; the steps may vary from
application to application. Do feel free to write to me with your suggestions and comments on this
article at [email protected].

Note: Throughout this text, the words ‘websites’, ‘Web applications’, ‘Web-based applications’ and
‘Intranets/extranets’ are used interchangeably.

Introduction

A systematic development process can follow a number of standard or company-specific
frameworks, methodologies, modeling tools and languages. As an industry practice, the software
development life cycle adheres to certain set standards which the development team needs to
follow to stay on track with respect to timelines and quality control. Just like software, websites
can also be developed using certain concrete methods that have provisions for customizing the
existing software development process.
Let us analyze the steps involved in any website development:

9.1. Review, Assessment and Analysis:

The first step is understanding the client's requirements and the various dynamics around the
client's existing systems, as the website or web application will eventually be integrated into those
systems. The analysis then covers all aspects of the client's business and needs, especially how the
website is going to be merged with the existing system.
The first important thing is finding the target audience. Then, all the present hardware, software,
people and data should be carefully assessed. For example, if a company XYZ Corp is in need of
a website to have its Human Resource details online, the analysis team may look to review the
existing data about the employees from the present database and what migration plan will be best
suited to complete the transition.
The analysis should be done in such a way that it is neither too time consuming nor lacking in
information. The team should be able to come up with a detailed cost-benefit analysis. As the
plan for the project will be an output of the analysis, it should be realistic; to achieve this, the
analyst should consult the designers, developers and testers to come up with a realistic plan.
Input:

1. Kick off interview with client, initial mails and supporting docs by the client, discussion
notes,

2. Online chat transcripts, recorded telephone conversations, and

3. Model sites/applications

Output:

1. Work plan,

2. Estimating cost

3. Team requirements (No of developers, designers, QA, DBA etc)

4. Hardware-software requirements

5. Supporting documents and

6. Final client approval to go ahead with the project.

Tools: There are not many tools available in the market, but one good tool to try is
Requirement Heap.

9.2. Specification Building:

Preliminary specifications are drawn up covering each and every element of the
requirement. For example if the product is a web site then the modules of the site including
general layout, site navigation and dynamic parts of the site should be included in the spec.
Larger projects will require further levels of consultation to assess additional business and
technical requirements. After reviewing and approving the preliminary document, a written
proposal is prepared, outlining the scope of the project including responsibilities, timelines and
costs.
Input: Reports from the analysis team.
Output: Complete requirement specifications for the individuals and the customer/customer’s
representative (technically, the project stakeholders).
Tools: For specification building, we recommend trying django-req and a requirement
management tool called ReqView.

9.3. Design and development:

After specification building, work on the website commences upon receipt of the signed
proposal, a deposit, and any written content materials and graphics you wish to include. In this
stage the layouts and navigation are designed as a prototype.

Some customers may be interested only in a fully functional prototype. In this case we may need
to show them the interactivity of the application or site. But in most of the cases, the customer
may be interested in viewing two or three design alternatives with images and navigation.

Be prepared to note down quite a lot of suggestions and changes from the customer side. All the
proposed changes should be finalized before moving into the next phase. The revisions could be
redisplayed via the web for the customer to view.
This is the most vital stage in the project life cycle to gain client trust that the project is in
capable hands. Encourage customer comments, feedback and approvals to be communicated by
e-mail, fax and telephone. Engage in constant communication to give clients peace of mind.

Throughout the design phase the team should develop test plans and procedures for quality
assurance. It is necessary to obtain client approval on design and project plans.

In parallel, the Database team will assess and understand the requirements and develop the
database with all the data structures. In this stage, the sample data will also be prepared.
Input: Requirement specification.
Output: Site design with templates, images and prototype.
Tools: There are plenty; the most popular one is Adobe Photoshop.

9.4. Content writing:

This phase is necessary mainly for the web sites. There are professional content developers who
can write industry specific and relevant content for the site. Content writers need to add their text
in such a way as to utilize the design templates. Grammar and spelling checks should be
completed in this phase. The type of content could be anything from simple text to videos.
Input: Designed template.
Output: Site with formatted content.
9.5. Coding:

Now, it’s the programmer’s turn to add his code without disturbing the design. Unlike traditional
design, the developer must know the interface and the code should not disturb the look and feel
of the site or application. This calls for the developer to understand the design and navigation of
the site. If the site is dynamic then the code should utilize the template. The developer may need
to interact with the designer, in order to understand the design. The designer may need to
develop some graphic buttons whenever the developer is in need, especially while using some
form buttons.
If a team of developers is working, they should use a version control system such as CVS to
manage their sources. The coding team should generate the necessary testing plans as well as
technical documentation. For example, Java developers can use JavaDoc to generate documents
that explain their code flow. The end-user documentation can also be prepared by the coding
team; it can later be used by a technical writer to develop help files and manuals.
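
As a hedged Python analogue of the JavaDoc example above, structured docstrings can serve the same
purpose; tools such as pydoc or Sphinx can extract them into reference documents. The function and its
parameters below are invented purely for illustration.

def add_to_cart(cart, stock_number, quantity=1):
    """Add an inventory item to a shopping cart.

    Args:
        cart: dict mapping stock numbers to quantities.
        stock_number: identifier of the item being added.
        quantity: number of units to add (defaults to 1).

    Returns:
        The updated cart dictionary.
    """
    cart[stock_number] = cart.get(stock_number, 0) + quantity
    return cart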
Input: The site with forms and the requirement specification.
Output: Database-driven functions integrated with the site, coding documents.
Tools: For coding, an IDE (Integrated Development Environment) will normally help you. Adobe
Dreamweaver, PhpStorm, NetBeans, etc. are popular choices, and we cannot really recommend
one in particular, so feel free to choose yours. Check our blog that discusses 4 IDEs for PHP developers.

9.6. Testing:

Unlike traditional software, web-based applications need intensive testing, as they always
function as multi-user, multi-tier systems with bandwidth limitations. The testing that should be
done includes integration testing, stress testing, scalability testing, load testing, resolution testing
and cross-browser compatibility testing. Both automated and manual testing should be done
without fail.

For example, it is vital to test that graphics load fast and to measure their loading time, as this is
very important for any web site. There are certain testing tools, as well as some online testing
tools, which can help testers test their applications. For example, ASP developers can use
Microsoft’s Web Application Test Tool to test the ASP applications, which is a free tool
available from the Microsoft site to download.

Thorough testing on the live server is also necessary for web sites and web-based applications.
After uploading the site there should be complete testing (e.g. testing links, unit testing).
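
As one small illustration of automating such checks, the sketch below uses Python's standard unittest and
urllib modules to verify that a page responds successfully and loads within a simple time budget. The
URL and the 3-second threshold are assumptions to be adapted per project.

import time
import unittest
import urllib.request

SITE_URL = "http://127.0.0.1:8000/"    # hypothetical staging or live URL

class SmokeTest(unittest.TestCase):
    def test_homepage_loads_quickly(self):
        start = time.monotonic()
        with urllib.request.urlopen(SITE_URL, timeout=10) as response:
            self.assertEqual(response.status, 200)       # page is reachable
            body = response.read()
        self.assertIn(b"<html", body.lower())            # looks like an HTML page
        self.assertLess(time.monotonic() - start, 3.0)   # loads within the budget

if __name__ == "__main__":
    unittest.main()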
Input: The site, Requirement specifications, supporting documents, technical specifications and
technical documents.
Output: Completed application/site, testing reports, error logs, frequent interaction with the
developers and designers.
Tools: There are plenty; just Google it.

9.7. SEO and Social Media Optimization:

This phase is applicable only to web sites. Promotion requires the preparation of meta tags, constant
analysis, and submission of the URL to search engines and directories. There is a detailed article on
Search Engine Optimization on this site for further reading. Search Engine Optimization and
Social Media Marketing are normally ongoing processes, as search engine strategies may change
quite often. Submitting a site URL once every two months can be an ideal submission policy. If
the customer is willing, paid clicks and paid submissions can also be done at additional cost.
Input: Site with unique and great content, Competitor study, keyword analysis.
Output: Site submission after necessary meta tag preparation.
Tools: We at Macronimous use a few popular tools; you can check them at the bottom of our SEO
service page.
Also, to learn more about the SEO life cycle, click here.

9.8. Maintenance and Updating:

Web sites will need quite frequent updates to keep them fresh and relevant. In such a case, we
need to do analysis again, and all the other life-cycle steps will be repeated. Bug fixes can be
done during the time of maintenance. Once your web site is operational, ongoing promotion,
technical maintenance, content management & updating, site visit activity reports, staff training
and mentoring are needed on a regular basis, depending on the complexity of your website and the
needs within your organization.
Input: Site/Application, content/functions to be updated, re-Analysis reports.
Output: Updated application, supporting documents to other life cycle steps and teams.
Tools: For easy website maintenance, a CMS is the right choice. Investing in a CMS like WordPress
or Joomla will make your site easy to maintain without much recurring expenditure.
The above-mentioned steps are not restricted to web application or website development alone,
and some steps may not be applicable to certain tasks; it depends largely on the cost and time
involved and on necessity. For example, if it is an intranet site, there will be no site promotion.
But even if you are a small development firm, adopting careful planning along with these web
engineering steps will definitely reflect in the top-notch quality of the final outcome.
See the flowchart “How we do web development in Macronimous?“