CSComputing
OBJECTIVES
Reinforce knowledge of networking systems
Special emphasis on Internet protocols and client/server-based architecture
CONTENTS
1. Introduction to Client/Server
6. Understanding Middleware
8. Data Warehousing
TEXT BOOKS
-Alex Berson, "Client/Server Architecture"
-Neil Jonkins et al, "Client/Server Unleashed"
-Jeffrey D. Schank, "Client/Server Applications and Architecture"
CLIENT-SERVER COMPUTING
INTRODUCTION TO CLIENT/SERVER
TRADITIONAL CONCEPT
In the early days of network-based computing, organizations could not afford to have a hard
disk and processor on every system in the network.
The cost of memory and processors was pretty high.
Centralized computing system with several dumb terminals and a single Intelligent System.
A dumb terminal is a system on a network, which consists of only a keyboard and a monitor.
It does not have its own processor or storage device.
Intelligent terminal is a normal system, it has its own storage device and processor. This
intelligent system handled all the processing requirements of the computers connected to it.
TRADITIONAL ARCHITECTURE
The Intelligent System controls all processing jobs and resources.
The dumb terminals depend on this central Intelligent System.
OPEN SYSTEMS
To create better and improved networking solutions, the concept of open systems was introduced.
Open systems are systems that adhere to a standard set of interfaces
Standardization: Chips, Peripherals, Networking protocols, O/S and Software Components.
Allows the implementation of networking & software products from multiple vendors
WHAT IS CLIENT/SERVER?
Client/Server is a computational architecture that involves client processes requesting service
from server processes.
It is an architectural model in which a system's functionality and its processing are divided
between the client PC (front end) and a server (back end).
System functionality: Programming Logic, business rules and data management is segregated
between client and server.
SERVERS: Powerful computers or processes dedicated for managing the network traffic,
storage space, and resources such as files and printers.
CLIENTS: PCs or workstations on which users run applications.
The clients rely on servers for resources, such as files, hard disk, and printers.
• Server
• Network
• Applications
The client manages all user interactions, and hides the server and the network from the user.
It creates an illusion that the entire application is executing locally, without the use of other
processes, machines, or networks.
THE NETWORK
The Network in a client/server environment, connects the clients with the servers.
Client computers communicate with the server through the network.
Network classification according to geographical coverage : LAN, MAN, WAN
THE APPLICATIONS
The application ties the other three components together to form the client/server architecture.
Software applications that run on the client and the server establish the communication
between the client and the server.
BENEFITS OF CLIENT/SERVER
A well designed client/server system provides users with easy access to information.
The user friendly front-end application displays information that the user requests.
2. INCREASED PRODUCTIVITY
A client/server system increases users' productivity by providing them with the tools to
complete their tasks more easily.
Client/server systems provide features that enable end users to create and customize
reports.
Many companies have replaced their mainframe systems with client/server and saved
millions of dollars.
People can accomplish their task faster so they save time and effort which also translates
into a financial savings.
8. INCREASED REVENUE
Examples :
√ Enables a new product to be developed faster so that it hits the market sooner.
√ Identifies which marketing strategies work well and should be used again.
During the 1980s, no real business processing was done on LANs : centralized systems.
Successful client/server breaks up a company's major business areas into several distinct
units.
MOVING TO CLIENT/SERVER
Companies are introducing client/server systems into their organization from two general
directions.
Others are upsizing to client/server and replacing their file server based database system.
1. DOWNSIZING
9 The high maintenance and support costs for keeping a mainframe running.
2.UPSIZING
Companies are turning to client/server systems because of limitations in their current systems.
The IS staff must acquire new technical skills required to build client/server systems.
Re-engineering requires people to learn new ways of doing their jobs in addition to the new
computer system.
CLIENT/SERVER PROCESS
Typically, the following steps are involved in processing a client request in a client/server
environment.
1. Client sends a request, such as retrieving data from a database, to the server.
2. Server accesses the database and retrieves the required data for the client.
3. Server processes the retrieved data, if required.
4. Server sends this processed information to the client.
5. Client then presents this information to the user.
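The five steps above can be sketched with Python sockets. This is a minimal illustration, not a real database server: the "fetch record 42" request and the uppercase "processing" step are invented stand-ins for a real query and real server-side work.

```python
import socket
import threading

HOST = "127.0.0.1"

def serve_one(srv):
    # Server side: steps 2-4 — receive the request, process it, send the result.
    conn, _ = srv.accept()
    with conn:
        request = conn.recv(1024).decode()   # step 2: receive the client's request
        reply = request.upper()              # step 3: "process" the retrieved data
        conn.sendall(reply.encode())         # step 4: send the result to the client

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind((HOST, 0))                          # port 0: let the OS pick a free port
srv.listen(1)
threading.Thread(target=serve_one, args=(srv,), daemon=True).start()

# Client side: step 1 — send a request; step 5 — present the reply to the user.
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect((HOST, srv.getsockname()[1]))
cli.sendall(b"fetch record 42")
reply_text = cli.recv(1024).decode()
print(reply_text)  # FETCH RECORD 42
cli.close()
srv.close()
```

Note how the client sees only a request and a reply; the server's processing is hidden behind the network, which is exactly the "illusion of local execution" the notes describe.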
You might implement various types of servers in a client/server environment. The various types
of servers available in a client/server model are:
• File server
• Database server
• Transaction server
• Application server
• Groupware server
1. FILE SERVER
A file server, also known as a networked file server, is a server on the network that
stores the data files, which can be accessed by network clients.
When a client sends a request for a file over the network, the file server searches and
returns the requested file to the client.
2. APPLICATION SERVER
An application server regulates resources and processes, such as e-mail, on the network.
Various resources are placed on the server, enabling users at the client end to share these
resources, as and when required.
3. GROUPWARE SERVER
A groupware server is responsible for handling the management of semi-structured
information, such as text, image, mail, bulletin boards, and the flow of work, in a network.
4. DATABASE SERVER
A database server is responsible for handling database queries from the clients and
returning requested data to them.
5. TRANSACTION SERVER
The transaction server handles a set of SQL statements sent by a client as a single
transaction
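The transaction-server idea — a set of SQL statements that succeed or fail as a unit — can be sketched with Python's built-in sqlite3 module. The accounts table and the transfer amounts are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 50)")
conn.commit()

# A transfer is two UPDATEs that must be applied as one transaction.
try:
    with conn:  # the with-block commits on success, rolls back on any error
        conn.execute("UPDATE accounts SET balance = balance - 30 WHERE name = 'alice'")
        conn.execute("UPDATE accounts SET balance = balance + 30 WHERE name = 'bob'")
except sqlite3.Error:
    pass  # on failure, neither UPDATE is applied

balances = dict(conn.execute("SELECT name, balance FROM accounts"))
print(balances)  # {'alice': 70, 'bob': 80}
```

If either statement failed, the rollback would leave both balances unchanged — the all-or-nothing behavior a transaction server guarantees.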
Client/server systems can be classified based on the way in which the system has been
built.
o Distributed Presentation
o Remote Presentation
o Distributed Logic
o Remote Data
o Distributed Data
[Figure: the distribution models split the end-user display, application logic, and data between the client and the server across the network]
1. DISTRIBUTED PRESENTATION
Distributed presentation means both the client and the server machines format the display
presented to the end user.
The client machine intercepts display output from the server and reroutes the output
through its own process before presenting to the user.
2. REMOTE PRESENTATION
The client formats the data and presents it to the end user.
All the core system and application logic resides on the server.
The client and server process communicate through more advanced protocols such as
IBM's Advanced Peer to Peer Communications (APPC).
3. DISTRIBUTED LOGIC
A distributed logic client/server splits the logic of application between the client and server
processes.
The client and server process can communicate through variety of middleware tools.
4. REMOTE DATA
The client handles all the application logic and end user presentation.
Clients typically use remote SQL or ODBC to access the data stored on the server.
Applications built in this way are currently the most common in use.
5.DISTRIBUTED DATA
The distributed data model uses data distributed across multiple networked systems.
Data sources may be distributed between the client and the server or multiple servers.
It is complex and requires a great deal of planning and decision making to use effectively.
CLIENT HARDWARE
Some newer systems, such as mobile phones, Palm devices, and PDAs, are also clients.
They can only send information to and receive information from a server.
CLIENT SOFTWARE
Most client software is available for many computers and operating systems.
Now there are graphical interfaces for most client software : User friendly.
1. Microsoft Windows
Windows 3.1 was the first widely successful version of Windows, enabling Microsoft to
compete with the Apple Macintosh.
Windows 3.1 is a host-based OS that runs on Intel 386-based PCs and above.
Windows 95 was intended to combine the functions of Microsoft's formerly separate MS-
DOS and Windows products.
Windows NT supports preemptive multitasking and threading like most large scale
systems.
The NTFS file system has powerful multi-user and security features.
Different Windows products beneficial for the development of client/server systems are
√ OLE
√ ActiveX
2. IBM OS/2
The promised GUI, Presentation Manager, was introduced with OS/2 1.1 in November
1988.
OS/2 was expected to fill the gap between DOS and large operating systems like UNIX.
But many users opted to stay with DOS, switch to Windows, or change to UNIX.
There were several technical reasons for the breakup and several practical reasons why
the public preferred Windows:
o API incompatibility
o OS/2 could not run Windows programs
o Insufficient backwards compatibility
o The 640 KB barrier has been broken by other means.
In 1992, IBM, without Microsoft, relaunched OS/2 with improved support for running DOS
and added support for running Windows applications.
OS/2 seemed costly because IBM had to pay a royalty to Microsoft.
The latest version of OS/2, OS/2 Warp, contains no Microsoft code : no royalty.
Its price was reduced by half.
The main drawback: IBM needs to update OS/2 every time Microsoft releases a new
version of Windows.
3. X Window
The X Window System is a vendor- and hardware-independent network-based windowing system.
It was developed at MIT and has become the de facto industry standard for window-based
applications in the UNIX environment.
The X client and X server communicate by means of the X protocol.
The X protocol is asynchronous : requests are not normally acknowledged.
4. Apple
Apple launched a challenge to Intel/Microsoft with the release of its Power Macintosh or
Power PC desktop computer.
The PowerMac is able to run Mac software and native PowerPC applications.
The PowerMac also provides support for DOS and Windows applications through Insignia's
SoftWindows emulation package.
SERVER HARDWARE
Server hardware traditionally has been a minicomputer : Sun or Cray or a high end IBM or
DEC computer.
With the introduction of Pentium and PowerPC processors, more PCs are being used as servers.
Example : A company wants to post employee manuals and memos on a web server.
Example : A large corporation wants to maintain an up-to-date nationwide inventory.
SERVER SOFTWARE
In a typical intranet: a mail server to process and deliver mail, an FTP server to manage file
transfers, and a web server to host and serve World Wide Web documents.
Memory and disk space requirements are directly proportional to the number of concurrent
users and the number of requests to be processed at one time.
1. PC-Based Servers
Make sure that each system offers expansion and upgrade features.
PC-based servers are available from a wide range of vendors : IBM, HP, and Compaq.
2. UNIX Servers
The RS/6000 range from IBM, the HP-UX range from Hewlett-Packard, and the DEC Alpha from Digital.
UNIX servers are suited for both the commercial and numeric computing environments.
A wide range of performance options, from single-processor systems to massively
parallel systems.
Growth areas of UNIX based servers : Web Servers, Database Servers etc
3. AS/400
The AS/400 (also known as iSeries since 2000 and System i5 since 2006) is
a type of minicomputer produced by IBM.
The AS/400 has an open approach to communication : facilities are available to link the AS/400
into even the most complex networked systems.
AS/400 advanced series incorporates special design features that make it ideal for data
warehousing applications.
4. Mainframe
Large and expensive computers used mainly by government institutions and large
companies for mission critical applications.
1. Novell NetWare
NetWare runs on any IBM PC or compatible and supports all major LAN vendors' hardware.
Novell's philosophy is to make NetWare a de facto industry standard by dominating the marketplace.
NetWare was based on the NetWare Core Protocol (NCP), which is a packet-based
protocol.
NCP was directly tied to the IPX/SPX protocol, which meant that natively, NetWare could
only communicate using IPX/SPX.
One or more dedicated servers were connected to the network, and disk space was shared
in the form of volumes.
Clients had to log in to a server in order to be allowed to map volumes, and access could
be restricted according to the login name.
Similarly, they could connect to shared printers on the dedicated server, and print as if
the printer was connected locally.
# Topology
The file server software forms a shell around the operating system and is able to intercept
commands from application programs before they reach the OS's command processor.
The network interface to the network file server (the network shell) resides in each workstation.
√ The shell must first determine whether the request is for a local file or a network
request for information located on a file server.
√ The server locates the file and transmits it to the workstation in the form of a reply packet.
√ The packet is received by a reply translator, which converts this information into a
form the local workstation can handle.
√ The command processor then provides the application program with this data.
2. Microsoft Windows NT
Microsoft supplies a file-server network operating system : Windows NT Server.
√ Preemptive Multitasking
√ Multithreaded processes
√ Portability
Multithreading refers to threads under NT that function as execution agents.
It also permits transaction tracking : rolling the system back to its previous state just before
a crash.
It includes the support for the IEEE 802.2 specifications, SDLC (Synchronous Data Link
Control), X.25 protocols.
It allocates and synchronizes multiple processors as well as handling interrupts and error
exceptions.
NT Executive manages the interface between the kernel and various subsystems.
Page 19 © KUMAR PUDASHINE
Hardware Abstraction Layer translates the NT Executive’s command into a form that
can be understood by the hardware found in the physical platform running NT.
Windows NT also supports systems with multiple processors and provides the ability to
perform symmetric multiprocessing (SMP).
Requires at least an Intel-based 486DX machine with at least 12 MB of RAM and 100 MB
of secondary storage.
# Security Under NT
Windows NT requires users to enter a password each time they start the OS.
Event viewer program enables network managers to view a log of all network errors.
Windows NT server provides built in file sharing and print sharing capabilities.
It provides API that permits network operating system vendors to write client software for
their products to run.
You can quickly install all the services required for the Internet and Intranets.
The IIS contains a performance monitor to measure all Internet events in real time.
3. Banyan VINES
Banyan System’s VIRtual Networking System (VINES) is a network o/s based on heavily
modified version of UNIX.
All VINES services, including naming, file, printer, and mail, execute as UNIX processes.
These services can be stopped and started on the server without disturbing other
services.
# StreetTalk
StreetTalk can also specify the hours and days a particular user is permitted to log into the network.
VINES version 3.0 and later contains security software known as Vanguard.
Under VINES, each user, service, and communication link has an ARL : Access Rights List.
THE OSI REFERENCE MODEL
The model is called OSI (Open Systems Interconnection) because it deals with connecting
open systems : systems that are open for communication with other systems.
√ The function of each layer should be chosen with an eye toward defining
internationally standardized protocols.
√ The layer boundaries should be chosen to minimize the information flow across the
interfaces.
√ The number of layers should be large enough that distinct functions go in different layers.
The OSI model itself is not a network architecture, because it does not specify the exact services and protocols to be used in each layer.
Physical Layer
The physical layer is concerned with transmitting raw bits over a communication channel.
The design issues largely deal with mechanical, electrical, and timing interfaces.
Data Link Layer
The data link layer provides reliable data delivery across the physical network.
Network Layer
A key design issue is determining how packets are routed from source to destination.
Transport Layer
Depending on the protocol, this layer may or may not provide error recovery.
The transport layer is a true end-to-end layer, all the way from the source to the
destination.
Session Layer
The session layer allows users on different machines to establish sessions between them.
Sessions offer various services : dialog control, token management, and synchronization.
Token management : preventing two parties from attempting the same critical operation
at the same time.
Synchronization : checkpointing long transmissions to allow them to continue from where
they were after a crash.
Presentation Layer
The presentation layer is concerned with the syntax and semantics of the information
transmitted.
Application Layer
The application layer contains a variety of protocols that are commonly needed by users.
One widely used application protocol is HTTP which is the basis for the WWW.
PROTOCOLS
1. TCP/IP
2. XNS
3. IPX
4. AppleTalk
5. SNA
Issues like SNA gateways from PC-LAN based networks to mainframes for data access will
affect network design.
6. LAN Protocols
# Ethernet
Data can be transmitted over wireless access points, twisted pair, coaxial, or fiber optic
cable.
#Token Ring
The computers are connected so that the signal travels around the network from one
computer to another in a logical ring.
A single electronic token moves around the ring from one computer to the next.
If a computer does not have information to transmit, it simply passes the token on to the
next workstation.
The Token Ring protocol requires a star-wired ring using twisted pair or fiber optic cable.
#FDDI
FDDI is a network protocol used primarily to interconnect two or more local area networks.
A major advantage of FDDI is speed : it operates over fiber optic cable at 100 Mbps.
#ATM
It is a network protocol that transmits data at a speed of 155 Mbps and higher.
ATM supports a variety of media such as video, CD-quality audio, and imaging.
ATM employs a star topology, which can work with fiber optic as well as twisted pair cable.
ATM is most often used to interconnect two or more local area networks
TOPOLOGIES
Physical Topology : Refers to the arrangement of computers & other devices in a network.
Main types of physical topologies : linear bus, star, ring, and tree.
# Linear Bus Topology
A linear bus topology consists of a main run of cable with a terminator at each end.
All nodes (file server, workstations, and peripherals) are connected to the linear cable.
# Star Topology
It is designed with each node connected directly to a central network hub or switch.
Data on a star network passes through the hub/switch before continuing to its destination.
The hub or concentrator manages and controls all functions of the network.
# Tree Topology
# Ring Topology
CABLES
Medium through which information usually moves from one network device to another.
There are several types of cable which are commonly used with LANs.
The type of cable chosen for a network is related to the network's topology, protocol, and
size.
UTP is the most popular and is generally the best option for most of the networks.
Each pair is twisted with a different number of twists per inch to help eliminate
interference from adjacent pairs and other electrical devices.
The tighter the twisting, the higher the supported transmission rate and the greater the
cost per foot.
The standard connector for unshielded twisted pair cabling is the RJ-45 connector.
RJ stands for Registered Jack, a standard borrowed from the telephone industry.
#Coaxial Cable
A plastic layer provides insulation between the center conductor and a braided metal
shield.
The metal shield helps to block any outside interference from fluorescent lights, motors,
and other computers.
10Base2 refers to the specifications for thin coaxial cable carrying Ethernet signals.
The 2 refers to the approximate maximum segment length of 200 meters.
10Base5 refers to the specifications for thick coaxial cable carrying Ethernet signals.
The most common type of connector used with coaxial cables is the Bayonet Neill-
Concelman (BNC) connector.
#Fiber Optic Cable
It transmits light rather than electronic signals, eliminating the problem of electrical
interference.
Fiber optic cable has the ability to transmit signals over much longer distances than
coaxial and twisted pair.
Wireless LANs
Wireless LANs use high frequency radio signals, infrared light beams, or lasers to
communicate between the workstations and the file server.
Each workstation on a wireless network has some sort of transceiver to send and receive
the data.
Wireless networks are also beneficial in older buildings where it may be difficult or
impossible to install cables.
They provide poor security, and are susceptible to interference from lights and electronic
devices
BANDWIDTH
Bandwidth refers to the amount of data a cable can carry; measured in bits per second
(bps) for digital signals, or in hertz (Hz) for analog signals.
For example, the bandwidth of the human voice is roughly 2700 Hz (3000-300).
# REPEATERS
With a repeater, a signal can cover longer distances without degradation.
# BRIDGES
A bridge generally refers to a hardware device that can pass packets from one network to
another.
A bridge makes the networks look like a single network to higher-level protocols.
Bridges are used only when the same network protocol (such as TCP/IP) is on both LANs.
What if two or more LANs in one organization generate a lot of cross traffic?
It is better to connect the two LANs directly with a bridge instead of loading the backbone
with the cross traffic.
[Figure: two LANs connected directly by a bridge, with each LAN also reaching the backbone through its own router]
# ROUTERS
Routing occurs at layer 3 (the Network layer e.g. IP) of the OSI seven-layer protocol
stack.
A router acts as a junction between two or more networks to transfer data packets among
them.
The first router was created at Stanford University by a staff researcher named William
Yeager.
In order to route packets, a router communicates with other routers using routing
protocols.
LANs can be tied to WAN through a router that handles the passage of data between LAN
and WAN backbone.
# GATEWAYS
For example : a gateway is used to connect a PC-based network to an IBM mainframe or
token ring network.
√ Protocol conversion.
√ Data translation.
√ Multiplexing.
COMMUNICATION
1. SOURCE
2. TRANSMITTER
Usually the data generated by a source system are not transmitted directly.
Example : Modem takes digital bit stream and transforms that bit stream into analog
signals.
3. TRANSMISSION SYSTEM
This can be a single transmission line or a complex network connecting source and
destination.
4. RECEIVER
The receiver accepts the signal from the transmission system and converts it into a form that can be handled by the destination device.
5. DESTINATION
Example : Server
√ Local Network
Within a local network, client applications communicate with servers using the protocol
suites of the LAN.
When data is communicated over a wide area, telecommunication plays an important role.
In this model the client and the server are connected over a physical wire
(cables, gateways, bridges).
[Figure: client and server over a physical connection; the client sends a request and the server accepts it after a handshake protocol]
1. Protocols are verified and the client establishes software contact with the server.
[ Think of as telephone ringing on the other end when you call someone ]
2. The server sends handshake protocols. [ You know that your connection is good ]
3. The user logs into the client, sending a username and password to the server.
4. The server authenticates them and provides the service according to that particular user’s
privileges.
UNDERSTANDING TELECOMMUNICATIONS
From 9.6 Kbps dial up lines to 2.5 Gbps SONET [ Synchronous Optical Network].
As businesses become global and workers more mobile, client/server applications must
communicate over ever larger geographical areas.
Dialup connections are used with simple file transfer and electronic mail functions.
What happens when you want to build robust, fault tolerant communications for your
application? Dial-up will not do.
Synchronous Optical Network (SONET) offers bandwidth ranging from 51.84 Mbps (OC-1) to 2.5 Gbps.
DESIGN CONSIDERATIONS
[Figure: connectivity phases, from the LAN phase (NOS, protocol) through remote access (analog dial-up, ISDN) to the enterprise phase (leased line, ATM, SONET)]
√ LAN Phase
√ Remote Access Phase
√ Enterprise Phase
# REMOTE ACCESS PHASE
The first logical step to extend an application beyond the LAN is to grant remote access.
In remote control, the remote user dials in and takes control of a physical PC on the LAN.
In remote node, the remote user loads the LAN protocol and sends it over the phone line.
ISDN Basic Rate Interface (BRI) provides:
Two 64 Kbps bearer channels : 2B
One 16 Kbps signaling channel : 1D
So BRI is rated as 2B + D.
The D channel is used for call setup and is not available for your use.
Multiplexed voice and data can be passed over the two B channels : a throughput of 128 Kbps.
# ENTERPRISE PHASE
Depending upon the bandwidth needs of your application, you may decide to step to T-1
or ATM.
[Figure: strategic business requirements and application factors drive the network design: physical plant, protocol, topology, WAN links, and public vs. private networks]
1. NUMBER OF USERS
2. NUMBER OF SITES
The number of sites your application will span affects WAN links, protocols etc.
3. TYPE OF APPLICATION
If video or imaging files are required : ATM on the LAN and ISDN on the WAN should be considered.
4. DESIGN OF APPLICATION
5. FAULT TOLERANCE
Establishing fault tolerance at each level : Adding servers, establishing intelligent routers.
6. DATA ACCESS
Using ODBC is simple, but it may not be fast enough for certain applications.
The data link layer is responsible for delivery of frames between two neighboring nodes
over a link.
Communication on the Internet is not defined as the exchange of data between two nodes
or between two hosts, but as the exchange of messages between two processes.
CLIENT-SERVER PARADIGM
A process on the local host, called the client, needs a service from a process usually on a
remote host, called the server.
For Example :To get the day and time from a remote machine.
We need a daytime client process running on the local host and a daytime server process
running on a remote machine.
ADDRESSING
At data link layer, we need a MAC address to choose one node among several nodes.
A frame in the data link layer needs a destination MAC address for delivery and a source
address for the next node’s reply.
A datagram in the network layer needs a destination IP address for delivery and a source
IP address for the destination’s reply.
At transport layer, we need a transport layer address called a port number to choose
among multiple processes running on the destination host.
The destination port number is needed for delivery and a source port number is needed
for the reply.
In the Internet model, port numbers are 16-bit integers between 0 and 65535.
The client program defines itself with a port number chosen randomly by the transport
layer software running on the client host.
If the computer at the server side runs a server process and assigns a random number as
the port number, the client process will not know the port number.
Of course, one solution would be to send a special packet and request the port number of
the server : More overhead.
The Internet instead uses universal port numbers for servers : well-known ports.
1. Well-Known Ports
The ports ranging from 0 to 1023 are assigned and controlled by IANA.
Example : HTTP : 80
2. Registered Ports
The ports ranging from 1024 to 49,151 are not assigned/controlled by IANA.
3. Dynamic Ports
The ports ranging from 49,152 to 65,535 are neither controlled nor registered.
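The three IANA ranges above can be captured in a small helper. `classify_port` is a made-up name for illustration, not part of any standard library.

```python
def classify_port(port: int) -> str:
    """Classify a transport-layer port into the three IANA ranges."""
    if not 0 <= port <= 65535:
        raise ValueError("port numbers are 16-bit integers: 0-65535")
    if port <= 1023:
        return "well-known"    # assigned and controlled by IANA
    if port <= 49151:
        return "registered"    # registered with, but not controlled by, IANA
    return "dynamic"           # neither controlled nor registered

print(classify_port(80))     # well-known (HTTP)
print(classify_port(8080))   # registered
print(classify_port(50000))  # dynamic
```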
The destination IP address defines the host among the different hosts in the world.
After the host has been selected, the port number defines one of the processes on this
particular host.
SOCKET ADDRESS
MULTIPLEXING
At the sender side, there may be several processes that need to send packets.
DEMULTIPLEXING
At the receiver side, the relationship is one to many and requires demultiplexing.
After error checking and dropping of the header, the transport layer delivers each message
to the appropriate process based on the port number.
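A toy sketch of that demultiplexing step. The handler table and its port assignments are invented for illustration; a real transport layer dispatches to OS-level processes rather than Python functions.

```python
# Hypothetical processes keyed by destination port number.
handlers = {
    80: lambda data: f"web server got {data!r}",
    25: lambda data: f"mail server got {data!r}",
}

def demultiplex(packets):
    """Deliver each (dst_port, payload) pair to the process bound to that port."""
    delivered = []
    for dst_port, payload in packets:
        handler = handlers.get(dst_port)
        if handler is None:
            continue  # no process bound to this port: the packet is dropped
        delivered.append(handler(payload))
    return delivered

print(demultiplex([(80, "GET /"), (25, "HELO"), (9999, "noise")]))
```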
CONNECTIONLESS SERVICE
In a connectionless service, the packets are sent from one party to another with no need
for connection establishment and connection release.
The packets are not numbered : they may be delayed, lost, or arrive out of sequence.
There is no acknowledgement.
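The absence of connection setup is visible with Python UDP sockets over loopback: the receiver simply binds, the sender simply sends. No listen/accept, no handshake, and no acknowledgement is ever generated; over a real network such datagrams could be lost or reordered.

```python
import socket

# Receiver: just bind; there is no listen/accept and no connection to establish.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))           # port 0: let the OS pick a free port

# Sender: just send; no handshake first, and no acknowledgement afterwards.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"datagram 1", rx.getsockname())

data, sender = rx.recvfrom(1024)
print(data)  # b'datagram 1'
tx.close()
rx.close()
```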
CONNECTION ORIENTED SERVICE
In a connection oriented service, a connection is first established between the sender and
the receiver.
1. CONNECTION ESTABLISHMENT
1. Host A sends a packet to announce its wish for connection and includes its initialization
information about traffic from A to B.
Each connection request needs to have a sequence number to recover from the loss or
duplication of packet.
Each ACK needs to have an ACK number for the same reason.
The first sequence number in each direction for each connection must be random.
In other words, a sender cannot create several connections that start with the same
sequence number.
2. CONNECTION TERMINATION
If the application layer program needs reliability : use the reliable TCP protocol, which
implements flow and error control.
If the application program does not need reliability, or has its own flow and error control
mechanism : an unreliable protocol can be used.
One Question ?
If the data link layer is reliable and has flow and error control, do we need this at the
transport layer too ?
Reliability at the data link layer is between two nodes : we need reliability between two
ends.
The simple, unreliable transport layer protocol in the Internet is called UDP.
Sending a small message using UDP takes much less interaction between the sender and
receiver than using TCP.
PORT NUMBERS
UDP uses port numbers as the addressing mechanism in the transport layer.
USER DATAGRAM
UDP packets, called user datagrams, have a fixed-size header of 8 bytes.
[Figure: user datagram layout, an 8-byte header followed by data]
1. Source Port No
This is the port number used by the process running on the source host.
It is 16 bits long, which means the port number can range from 0 to 65535.
2. Destination Port No
This is the port number used by the process running on the destination host.
3. Total Length
This is a 16 bit field that defines the total length of the user datagram header plus data.
4. Checksum
This field is used to detect errors over the entire user datagram (header plus data)
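The four 16-bit header fields can be packed and unpacked with Python's struct module. This is a sketch: the checksum is left as 0 (which in UDP over IPv4 means "not computed"), because the real checksum also covers an IP pseudo-header; the port numbers are invented for the example.

```python
import struct

# The 8-byte UDP header: four 16-bit fields in network (big-endian) byte order.
UDP_HEADER = struct.Struct("!HHHH")

def build_datagram(src_port, dst_port, payload, checksum=0):
    # Total length covers the header plus the data, as described above.
    total_length = UDP_HEADER.size + len(payload)
    return UDP_HEADER.pack(src_port, dst_port, total_length, checksum) + payload

dgram = build_datagram(50000, 53, b"query")   # e.g. a client talking to port 53
src, dst, length, csum = UDP_HEADER.unpack(dgram[:8])
print(src, dst, length)  # 50000 53 13
```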
APPLICATIONS OF UDP
UDP is suitable for a process that requires simple request-response communication with
little concern for flow and error control.
UDP is used for some route updating protocols : RIP (Routing Information Protocol)
The reliable, but complex transport layer protocol in the internet is called TCP.
PORT NUMBERS
TCP SERVICES
TCP allows the sending process to deliver data as a stream of bytes and the receiving
process to obtain data as a stream of bytes.
The sending and receiving processes may not produce and consume data at the same
speed.
Buffering handles this disparity between the speeds of the producing and consuming
processes.
The IP layer as a service provider for TCP needs to send data in packets.
At the Transport layer, TCP groups a number of bytes together into a packet called a
segment.
TCP adds a header to each segment and delivers the segment to the IP layer for
transmission.
Note that the segments are not necessarily the same size.
TCP offers full duplex service where data can flow in both directions at the same time.
5. Numbering Bytes
# Byte Numbers
When TCP receives bytes of data from the process and stores them in the sending buffer,
it numbers them.
For Example :
If the random number happens to be 1057 and the total data to be sent are 6000 bytes, the
bytes are numbered from 1057 to 7056.
# Sequence Number
After the bytes have been numbered, TCP assigns a sequence no to each segment that is
being sent.
The sequence no for each segment is the number of the first byte carried in that segment.
Question ?
Imagine a TCP connection is transferring a file of 6000 bytes. The first byte is numbered
10010. What is the sequence no of each segment if the data are sent in 5 segments, with
the first 4 segments carrying 1000 bytes and the last segment carrying 2000 bytes ?
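The question above can be worked out mechanically: the sequence number of each segment is the number of the first byte it carries, so each segment's sequence number is the previous one plus the previous segment's size. A short sketch:

```python
def sequence_numbers(first_byte, segment_sizes):
    # The sequence number of each segment is the number of the
    # first byte it carries.
    seqs, nxt = [], first_byte
    for size in segment_sizes:
        seqs.append(nxt)
        nxt += size
    return seqs

# File of 6000 bytes, first byte numbered 10010, sent in 5 segments.
print(sequence_numbers(10010, [1000, 1000, 1000, 1000, 2000]))
# [10010, 11010, 12010, 13010, 14010]
```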
# ACK Number
However, the ACK No defines the number of the next byte that the party expects to
receive.
ACK No is cumulative.
The receiver takes the number of the last byte that it has received and adds 1 to it.
The term cumulative here means that if a party uses 5643 as an ACK No, it has received
all bytes from the beginning up to 5642
TCP SEGMENT
The unit of data transfer between two devices using TCP is a segment.
The header is 20 bytes if there are no options and up to 60 bytes if it contains options.
1. Source Port No
This is a 16 bit field that defines the port number of the application program in the host
that is sending the segment.
2. Destination Port No
This is a 16 bit field that defines the port number of the application program in the host
that is receiving the segment.
3. Sequence Number
This 32 bit field defines the no assigned to the first byte of data contained in this segment.
4. ACK No
This 32 bit field defines the byte number that the sender of the segment is expecting to
receive from the other party.
If the byte number x has been successfully received x+1 is the ACK No.
5. Header Length
This 4 bit field indicates the number of 4 byte words in the TCP header ; the header can
be between 20 and 60 bytes long.
6. Reserved
This 6 bit field is reserved for future use.
7. Control
This field defines 6 different control bits or flags.
Description of Flags :
URG : The value of the urgent pointer field is valid.
ACK : The value of the acknowledgment field is valid.
PSH : Push the data.
RST : Reset the connection.
SYN : Synchronize sequence numbers during connection.
FIN : Terminate the connection.
8. Window Size
This field defines the size of the window, in bytes, that the other party must maintain.
9. Checksum
This 16 bit field contains the checksum.
10. Urgent Pointer
This 16 bit field is valid only if the urgent flag is set. It is used when the segment
contains urgent data.
11. Options
There can be up to 40 bytes of optional information in the TCP header.
IP : INTERNET PROTOCOL
The Internet Protocol is the host-to-host network layer delivery protocol for the Internet.
IP is an unreliable, best-effort delivery service.
The term best-effort means that IP provides no error control or flow control.
IP uses only an error detection mechanism and discards the packet if it is corrupted.
IP does its best to deliver a packet to its destination, but with no guarantees.
The post office does its best to deliver the mail but might not always succeed.
IP is also a connectionless protocol for packet switching network which uses the datagram
approach.
This means that each datagram is handled independently and each datagram can follow a
different route to the destination.
DATAGRAM
A datagram is a variable length packet consisting of two parts : Header and Data.
The Header is 20 to 60 bytes in length and contains information essential to routing and
delivery.
1. VER : Version
This field defines the version of the IP protocol (currently version 4).
2. IHL/HLEN : Header Length
This field defines the length of the datagram header in 4 byte words.
3. Type of Service
This field defines how the datagram should be handled (priority, delay, throughput,
reliability).
4. Total Length
This field defines the total length ( header plus data) of the IP datagram in bytes.
To find the length of data coming from the upper layer, subtract the header length from
total length.
The header length can be found by multiplying the value in IHL/HLEN field by 4.
Total length of an IP datagram is limited to 2^16 - 1 = 65535 bytes, of which 20-60 bytes
are header and the rest is data from the upper layer.
a. Identification
This field identifies a datagram originating from the source host ; all fragments of a
datagram carry the same identification value.
The destination knows that all fragments having the same identification value should be
assembled into one datagram.
b. Flags
This 3 bit field includes a "do not fragment" bit and a "more fragments" bit.
If the value of the "more fragments" bit is 1, it means the datagram is not the last
fragment ; there are more fragments after this one.
c. Fragment Offset
This 13 bit field shows the relative position of this fragment with respect to whole
datagram.
It is the offset of the data in the original datagram measured in units of 8 bytes.
For Example
A datagram with a data size of 4000 bytes is fragmented into three parts
First fragment carries bytes 0 to 1399 : The offset for this fragment will be 0/8=0.
The second fragment carries byte 1400 to 2799 : The offset value for this fragment will be
1400/8=175.
Finally, the third fragment carries bytes 2800 to 3999 : The offset value for this fragment
will be 2800/8=350.
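The offsets in the example above can be computed mechanically from the fragment sizes; a short sketch:

```python
def fragment_offsets(fragment_sizes):
    # Offset = position of the fragment's first data byte in the original
    # datagram, measured in units of 8 bytes. All fragments except the
    # last must therefore carry a multiple of 8 bytes of data.
    offsets, first_byte = [], 0
    for size in fragment_sizes:
        offsets.append(first_byte // 8)
        first_byte += size
    return offsets

# The 4000-byte datagram above, fragmented into 1400/1400/1200-byte parts:
print(fragment_offsets([1400, 1400, 1200]))   # [0, 175, 350]
```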
6. Time to Live
This field is used to control the maximum no of hops (routers) visited by datagrams.
When a source host sends the datagram, it stores a number in this field.
This value is approximately 2 times the maximum no of routers between any 2 hosts.
Each router that processes the datagram decrements this number by 1.
If this value, after being decremented, is zero, the router discards the datagram.
7. Protocol
This field defines the higher level protocol that uses the services of the IP layer.
An IP datagram can encapsulate data from several higher level protocols such as TCP or
UDP.
This field specifies the final destination protocol to which the IP datagram should be
delivered
For Example : TCP : 6 , UDP : 17
8. Checksum
The checksum in the IP packet covers only the header, not the data. There are two
reasons for this.
First, all higher level protocols that encapsulate data in the IP datagram have a checksum
field that covers the whole packet.
Second, the header of the IP packet changes with each visited router, but the data do not.
If the data were included, each router would have to recalculate the checksum for the
whole packet, increasing processing time at each router.
9. Source Address
This 32 bit field defines the IP address of the source.
This field must remain unchanged during the time the IP datagram travels from source to
destination.
10. Destination Address
This 32 bit field defines the IP address of the destination and must also remain
unchanged.
11. Options
The header can carry up to 40 bytes of options, used for network testing and debugging.
Question ?
How can we make a connection oriented transport layer protocol over a connectionless
network layer protocol such as IP ?
According to the design goal of Internet model, the two layers are totally independent.
Each parcel delivered to the post office is independent from the next, even if we deliver
100 parcels to the same destination.
The post office cannot guarantee that the parcels arrive at the destination in order, even
if the parcels are numbered.
The solution is to have an agent in the destination city who arranges the parcels.
This agent can keep track of all the parcels until all have arrived.
When two TCPs in two machines are connected, they are able to send segments to each
other simultaneously.
This implies that each party must initialize communication and get approval from the
other party before any data transfer.
TCP establishes the connection using a procedure called the three-way handshake.
[ Figure : Three-way handshake between client and server over time -
Segment 1 : SYN , Segment 2 : SYN + ACK , Segment 3 : ACK ]
1.
o The client sends the first segment : SYN .
o The segment also contains the client ISN : Initialization Sequence No.
o ISN is used for numbering the bytes of data sent from the client to the server.
2.
o The server sends the second segment : SYN and ACK.
o The segment is also used as the initialization segment for the server.
o The ISN sent by the server is used to number the bytes sent from the server to the client.
3.
o The client sends the third segment : an ACK acknowledging the server's SYN.
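The handshake itself is carried out by the operating system's TCP stack; from an application's point of view it happens inside connect() and accept(). A minimal loopback sketch in Python (the message content is arbitrary):

```python
import socket
import threading

def server(listener, results):
    # accept() returns once the three-way handshake has completed.
    conn, _ = listener.accept()
    results.append(conn.recv(1024))
    conn.close()

# Listening socket on an ephemeral loopback port.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]

results = []
t = threading.Thread(target=server, args=(listener, results))
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))   # client sends SYN; blocks until handshake is done
client.sendall(b"hello")
client.close()
t.join()
listener.close()
print(results[0])   # b'hello'
```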
UNDERSTANDING MIDDLEWARE
INTRODUCTION
During the development of a client-server system, there is a need to hide the complexities
of interaction between the client machine and the server machine.
This need has led to the development of a suite of products that provide this functionality.
With the advent of local area networks and interconnected PCs, more companies focused
on interconnecting their database and host systems.
Companies realized that software is required to sit between these PCs and host systems.
As the information System world increases in complexity, the need for generic middleware
solutions to enable interoperability of systems and portability of applications becomes
increasingly more important.
Middleware provides :
o Transparency of connection
o Rapid development
o Rapid deployment
Database connectivity involves the ability to access multiple, heterogeneous data sources
from within a single application running on the client.
A second challenge is flexibility : the application should be able to directly access data
from a variety of data sources without modification of the application.
For Example : An application could access data from FoxPro in a standalone small office
environment and from SQL server or Oracle in larger networked environments.
These challenges are day to day occurrences for programmers and for corporate
developers attempting to provide solutions to end users.
These challenges grow exponentially for developers and support staffs as the number of
data sources grows.
The primary differences in the implementation of each of the components are the
following.
1. PROGRAMMING INTERFACES
Each DBMS supplier provides its own proprietary programming interface for application
development.
2. DBMS PROTOCOLS
Each DBMS supplier uses proprietary data formats and methods of communication
between the application and the DBMS
3. DBMS LANGUAGES
SQL has become the language of choice for relational DBMS but many differences still
exist among SQL implementations.
4. NETWORKING PROTOCOLS
Each DBMS can communicate over a variety of networking protocols.
For Example : SQL server may use DECnet on VAX, TCP/IP on Unix, or SPX/IPX on PC.
DBMS suppliers and third party companies have attempted to address the problem of
database connectivity in a number of ways :
o Using gateways.
The gateway translates and forwards requests to the target DBMS and receives results
from it.
For Example : Applications that access SQL server can also access DB2 data through the
Micro Decisionware DB2 gateway.
This product allows a DB2 DBMS to appear to a Windows-based application as a SQL server
DBMS.
The gateway approach is limited by structural and architectural differences among DBMS,
such as differences in catalogs and SQL implementations.
o Using a common interface.
The standardization is the result of creating a standard API, a macro language, or a set of
user tools for accessing data and translating requests and results.
A common interface is usually implemented by writing a driver for each target DBMS.
[ Figure : Applications access data sources through a common interface layer and
networking software ]
o Using a common protocol.
The DBMS protocol, SQL grammar and networking protocol are common to all DBMS, so
the application can use the same protocol and SQL grammar to communicate with all
DBMS.
Examples
Common protocols can ultimately work very efficiently in conjunction with common
interface.
# ANALYSIS
A common protocol and interface provides a standard API for developers as well as a
single protocol for communication with all databases.
A common gateway provides a standard API for developers and allows the gateway to
provide functionality such as translation and connectivity to wide area networks.
[ Figure : Application logic on the workstation communicates with database services on
the server through matching SQL, API and networking (NPS) layers ]
When an application at the client end requires data from the server, a transaction is sent
from the application logic via SQL to the network.
Middleware that uses synchronous, transaction-oriented communications involves a back
and forth exchange of information between two or more programs.
The synchronized aspect of this communication style demands that every program
performs its task correctly ; otherwise the transaction will not be completed.
o Products that support TCP/IP sockets so that PC programs can communicate with
other sockets are synchronous transaction oriented as well.
For Example : A server database update program uses a data queue facility to send
subsets of updated records to PC programs.
Middleware can also link a business application with a generalized server program that
typically resides on another system.
A database server, an image server, a video server and other general purpose servers can
communicate with an application program through a middleware solutions.
Database middleware talks to the native DBMS interface on one side and provides a
consistent set of SQL oriented access routines on the other side.
DCE is a combined integrated set of services that supports the development of distributed
applications.
The architecture is layered bottom-up from the O/S to the highest level applications.
Distributed services provide tools for software developers to create the end-user services
needed for distributed computing.
[ Figure : DCE layered architecture - applications at the top ; PC integration and other
distributed services ; management ; threads ; O/S at the bottom ]
RPC manages the network communications needed to support these calls, including
details such as network protocols.
RPC extends a local procedure call by supporting direct calls to procedure on remote
systems enabling programmers to develop distributed applications as easily as traditional
single system programs.
RPC allows clients to interact with multiple servers and allows servers to handle multiple
clients simultaneously.
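DCE RPC itself is a C-level facility, but the core idea (a client-side stub that marshals the call, ships it to a dispatcher, and unmarshals the result, so that a remote procedure looks like a local one) can be sketched in miniature. The JSON marshalling format below is an illustrative assumption, not DCE's actual wire format:

```python
import json

# Server side: a procedure and a dispatcher that unmarshals requests.
def add(a, b):
    return a + b

PROCEDURES = {"add": add}

def server_dispatch(request_bytes):
    # Unmarshal the request, call the named procedure, marshal the result.
    req = json.loads(request_bytes)
    result = PROCEDURES[req["proc"]](*req["args"])
    return json.dumps({"result": result}).encode()

# Client side: a stub that makes a remote procedure look like a local call.
def rpc_call(proc, *args):
    request = json.dumps({"proc": proc, "args": args}).encode()
    response = server_dispatch(request)   # in DCE this crosses the network
    return json.loads(response)["result"]

print(rpc_call("add", 2, 3))   # 5
```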
# Naming Service
The distributed directory service provides a single naming model throughout DCE.
This model allows users to identify resources such as servers, files and disks by name,
without needing to know where they are located on a network.
As a result, users can continue referring to a resource by one name even when a
characteristic of the resource, such as its network address, changes.
# Time Service
The distributed time service is a software based service that synchronizes each computer
to a widely recognized time standard.
# Thread Services
The threads service provides portable facilities that support concurrent programming
which allows an application to perform many actions simultaneously.
The threads service includes operations to create and control multiple threads of
execution in a single process.
A number of DCE components, including the RPC, security, directory and time services,
use the threads service.
# Security Service
The DCE security service component is well integrated within the fundamental distributed
service and data sharing components.
MOM is a special class of middleware that operates on the principles of message passing
and message queuing.
MOM is perhaps the most visible and currently the clearest example of middleware.
MOM uses the concept of message to separate processes so that they can operate
independently.
When a client issues a request for a service such as database search, it does not talk
directly to that service ; it talks to middleware.
Talking to the middleware usually involves placing a message on a queue, where it will be
picked up by the appropriate service when the service is available.
The messaging middleware acts as a buffer between the client and the server.
MOM ensures that messages get to their destination and receive a response.
The queuing mechanism can be very flexible, offering either a First In First Out scheme or
priority-based messaging.
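Both queuing schemes can be illustrated with Python's standard queue module; the message names below are invented for illustration:

```python
import queue

# FIFO queue: messages are delivered in the order they were placed.
fifo = queue.Queue()
for msg in ["order-1", "order-2", "order-3"]:
    fifo.put(msg)                       # the client places messages and moves on
print([fifo.get() for _ in range(3)])   # ['order-1', 'order-2', 'order-3']

# Priority queue: an urgent message jumps ahead regardless of arrival order.
prio = queue.PriorityQueue()
prio.put((5, "routine-report"))
prio.put((1, "urgent-alert"))           # lower number = higher priority
print(prio.get()[1])                    # urgent-alert
```

The same decoupling applies when the producer and consumer are separate processes on separate machines; a MOM product simply moves the queue onto the network.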
Message passing and message queuing have been around for many years as the basis of
Online Transaction Processing (OLTP) systems.
Electronic mail passes message from one person to another whereas MOM passes
messages back and forth between software processes.
The API is portable so MOM programs can be moved to new platforms easily.
MOM is also a valid middleware technology for systems that use object-oriented technology.
Message passing and queuing allows objects to exchange data and can even pass objects.
In many systems where data is transferred from mainframes to PCs, the data conversion
from EBCDIC to ASCII must be handled.
The MOM software provides only the transport and delivery mechanism for messages ; it
is not concerned with content.
As a result, the application must take responsibility for creating and decoding messages.
MOM's simplicity also can slow performance because messages are usually processed
from a queue one at a time.
This means that MOM is not usually suitable for applications that require real-time
communications between applications.
IBM's MQSeries is a well known example of MOM.
It supports a wide range of IBM and non-IBM hardware platforms such as Sun Solaris,
Tandem and AT&T GIS.
MQSeries accommodates all of the major computer languages (COBOL, C, Visual Basic)
and network protocols (SNA, TCP/IP, DECnet).
Before client-server had developed as a concept, the concept of middleware was very
much in place within transaction processing systems.
Transaction Processing Monitors were first built to cope with batched transactions.
Transactions were accumulated during the day and then passed against the company's
data file overnight.
By the 1970s , TP monitors were handling online transactions which give rise to the term
Online Transaction Processing (OLTP).
IBM has defined a transaction as an atomic unit of work that possesses 4 properties :
o Atomicity
o Consistency
o Isolation
o Durability
Atomicity means that a transaction is an all-or-nothing unit of work : either all of its
operations are performed or none of them are.
Consistency means that the results of a particular transaction must be reproducible and
predictable.
The transaction must always produce the same results under the same conditions.
Isolation means that no transaction must interfere with any concurrently operating
transaction.
Finally, Durability means that the results of the transaction must be permanent.
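Atomicity in particular can be demonstrated with the transaction support in Python's built-in SQLite module; the account table and the simulated mid-transaction failure are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (name TEXT, balance INTEGER)")
conn.executemany("INSERT INTO account VALUES (?, ?)",
                 [("alice", 100), ("bob", 100)])
conn.commit()

# Transfer 30 from alice to bob as one atomic unit of work.
try:
    with conn:   # commits on success, rolls back on any exception
        conn.execute(
            "UPDATE account SET balance = balance - 30 WHERE name = 'alice'")
        # A failure occurs before the matching credit to bob can run.
        raise RuntimeError("simulated failure mid-transaction")
except RuntimeError:
    pass

# The failed transaction left no partial result: both balances unchanged.
print(conn.execute("SELECT balance FROM account ORDER BY name").fetchall())
# [(100,), (100,)]
```

Without the transaction, alice would have been debited with no corresponding credit, which is exactly the partial state atomicity forbids.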
IBM's CICS is perhaps one of the best examples of a Transaction Processing System
CICS began in the late 1960s as the Customer Information Control System, a robust and
reliable piece of software with a great range of OLTP facilities.
It has traditionally been used on mainframes and has also been ported to OS/2 as CICS OS/2.
The client performs data capture and local data processing and then sends requests to a
middleman called the request router.
The router breaks the client request down and routes it to one or more server processes.
1. Queued TP
Queued TP is convenient for applications in which some clients produce data and other
process or consume it.
Email, job dispatching and Electronic Data Interchange are typical examples of Queued TP.
The router inserts a client's request into a queue for later processing by other applications.
2. Conversational TP
Conversational transactions require the client and the server to exchange several
messages as a single ACID unit.
These interactions are sometimes not a simple request and response, but rather small
requests answered by a sequence of responses.
3. Workflow TP
ODBC : OPEN DATABASE CONNECTIVITY
With ODBC, application developers can allow an application to concurrently access, view
and modify data from multiple, diverse databases.
ODBC has emerged as the industry standard for data access for both windows-based and
Macintosh based applications.
The key salient points with respect to ODBC in client-server development environment are
as follows.
o ODBC is open, based on work with the ANSI standard and the SQL Access Group (SAG).
It allows users to access data in more than one data storage location ( for example, more
than one server) from within a single application.
It allows users to access data in more than one type of DBMS (DB2,Oracle, Microsoft SQL
Server etc).
It is now easier for application developers to provide access to data in multiple, concurrent
DBMS.
It is a portable programming interface, enabling the same interface and access technology
to be cross platform tool.
ODBC allows corporations to continue to use existing diverse DBMS while moving to client-
server based systems.
ODBC addresses the database connectivity problem by using the common interface
approach.
Application developers can use one API to access all data sources.
[ Figure : ODBC architecture - APPLICATION ; DRIVER MANAGER (ODBC.DLL) ;
DBMS DRIVER (DLL) ; NETWORKING SOFTWARE ; DATA SOURCE [ DBMS ] ]
Driver Manager : Loads the ODBC drivers on behalf of the application, passes requests to
the driver and provides results to the application.
DBMS Driver : Processes ODBC function calls and submits SQL requests to the specific DBMS.
Networking S/W : This layer may require a DBMS specific network component
depending on the data source.
Data Source : Processes requests from driver and returns result to driver.
Each application uses the same code as defined by the API specification to talk to many
types of data sources through DBMS specific drivers.
In Windows, the driver manager and the drivers are implemented as dynamic link libraries
(DLLs).
The application calls ODBC functions to connect to data sources either locally or remotely.
The Driver Manager provides information to an application, such as a list of available data
sources, and loads drivers dynamically as they are needed.
The driver, developed separately from the application, sits between the application and
the network.
The Driver processes ODBC function calls, manages all exchanges between an application
and a specific DBMS and may translate the standard SQL syntax into native SQL of the
target data source.
A single application can make multiple connections each through a different driver, or
multiple connections to similar sources through a single driver.
To access a new DBMS, a user or admin has to install a driver for the DBMS.
Users can submit data access requests in Industry standard SQL grammar.
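The common-interface idea is mirrored by Python's DB-API, where the same connect/execute/fetch calls work against different databases once the right driver module is loaded. The sketch below uses the built-in SQLite driver; a real ODBC application would go through a driver manager (for example via the pyodbc package) instead, but the call pattern is the same:

```python
import sqlite3

# The same connect/cursor/execute/fetch pattern works for any DB-API
# driver; only the connect() call is driver-specific.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE customer (id INTEGER, name TEXT)")
cur.execute("INSERT INTO customer VALUES (1, 'Acme')")
conn.commit()

# Industry-standard SQL grammar, independent of the underlying DBMS.
cur.execute("SELECT name FROM customer WHERE id = ?", (1,))
print(cur.fetchone()[0])   # Acme
```

Swapping the data source means changing the connect line, not the SQL or the fetch logic, which is exactly the portability ODBC aims to provide.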
[ Figure : Two applications (APP 1, APP 2) accessing data sources through the Driver
Manager ODBC.DLL ]
The figure above shows that a user may be running two applications accessing three
different data sources through ODBC.
ODBC defines a standard SQL grammar and a set of functions that are called core
grammar and core functions.
o Establish a connection with data source, execute SQL statement and retrieve
results.
o Provides standard logon interface to the end user for access to the data source.
In addition to the preceding features, ODBC has extensions that provide enhanced
performance and increased power through the following.
o Scrollable cursors.
o Asynchronous execution
BUSINESS PROCESS RE-ENGINEERING
INTRODUCTION
Business processes are simply a set of activities that transform a set of inputs into a set of
outputs.
The purpose of this model is to define the supplier and process inputs, your process, and
the customer and associated outputs
Many companies are combining their business process re-engineering initiatives with the
client-server development projects to provide a true business solution.
Everyone is expected to complete his/her work accurately and efficiently so that overall
goals and objectives are accomplished in a timely manner.
Every business process has an input, which starts the process, as well as an output.
In the extreme, reengineering assumes the current process is irrelevant - it doesn't work,
it's broke, forget it.
BPR is the thorough evaluation of a company's existing business processes, followed by
dramatic changes to them for optimization and streamlining purposes.
New and improved methods for accomplishing the goals can be implemented.
Successful re-engineering is based on the philosophy that every task and procedure must
be examined and may be potentially changed : Nothing is beyond modification.
Users are empowered with new tools in order to do their job faster.
The main themes of BPR are Customer, Competition and Speed of Change.
By aligning technology with business requirements, a client-server system can bring the
re-engineering business process into real implementation model.
This is because of its power, flexibility and ability to be integrated with other technologies.
In addition, a client-server system can be easily modified to handle any future changes
that may be made to business processes.
o Automating tasks
o Facilitating workflow.
# SEPARATE TEAMS
One team focuses on conducting the business process re-engineering effort by examining
processes and streamlining them.
The other team takes those findings and concentrates on developing client-server system.
The two groups work together to turn over the re-engineering team's deliverables to the
development team.
The re-engineering team discusses and reviews its findings and requirements with the
client-server team.
This is because their deliverables, such as business process maps and business rules,
serve as a functional specification of the new system for the client-server team.
# CORE TEAM
Form a core team of individuals who work on both the business process re-engineering
and client-server development.
This core group of individuals actively participates in both re-engineering and the technical
sides of the combined project.
However, it is not easy to find individuals who possess the skills to handle both re-
engineering and client-server development responsibilities.
Having a core group that is involved through out both projects rather than having
separate teams has many benefits.
For Example : The fear of change that both re-engineering and client-server bring is a
common issue.
The relationships with members from management and the end-user community need
to be built once.
It is very important because the better you know management and the end users , the
more support you will get.
Parts of the client-sever system's infrastructure can be developed in parallel with BPR
effort.
It is because some technical tasks can be started before all the re-engineering is
completed.
There are fewer questions because the same people who re-engineered the processes
are assisting in designing the system.
You may encounter some resistance when you discuss client-server technology.
You will face some of the same challenges when conducting a business process re-
engineering effort.
Many individuals have been doing the same routine for many years and they do not know
what they want to change.
Resistance is to be expected and the best solution is to be ready for their concerns.
The following table lists some of the common concerns voiced by people towards business
process re-engineering and client-server.
CONCERN : I designed the current process and want it to stay.
RESPONSE : However, now with the newer technology and the change in how business is
done, the process must be re-evaluated.
By carefully analyzing the business process maps and thoroughly evaluating all your
findings from your interviews, you can identify those areas that should be changed and
improved.
By assessing the various steps of the current business process and how people do their
work, eliminating areas of inefficiency and selecting new technology, you can streamline
and optimize the business process.
After a re-engineered business process has been designed, a new business process map
detailing the new process should be developed.
Compare the complexity and inefficiencies of the current process with the new improved
one.
Manual tasks such as having to bring paper forms to other people's offices.
A lot of time spent waiting for information to be received from other sources.
Delays in the process where people are waiting and not being productive.
The competition performs the same business process better than yours.
Replace the task with another task that can accomplish the same objective faster.
Determine the new technology that enables a process to be completed more efficiently
and accurately.
BPR tools provide a wide range of features, ranging from basic flowcharting to advanced
process modeling and process simulation.
Basic graphics and flowcharting software make it easy to develop and maintain
professional business maps that reflect the business process and operational activities.
The sophisticated products even have simulations that show how much better a re-
engineered business process is from the original process.
One of the most popular products on the market today is Process Charter from
Scitor Corporation.
Other products include :
o Workflow Analyzer
o Cosmo
o Sciforma Process
DATA WAREHOUSING
A data warehouse is a repository of data that has been extracted and integrated from
heterogeneous and autonomous distributed sources.
The ultimate goal of data warehousing is the creation of logical view of data that may
reside in many different, separate physical databases.
The data warehouse is optimized for analysis of large volume of data rather than the
speed of performance of individual transactions.
Data is extracted periodically from the core business databases and placed into a
secondary database to form the organization's information repository.
People in the organization then use the information repository as a pool of information to
test and report against.
Tools like Cognos' PowerPlay can present textual information, such as customers
purchasing products X, Y and Z, in a graphical manner.
[ Figure : Two clients on a network - one accessing database A, the other reporting from
database B ]
A data warehouse or the databases within a data warehouse are updated regularly.
For most companies, the most frequently used databases are updated daily.
Other databases that do not change frequently may be updated weekly or monthly.
Reports and statistics can be processed based on data that is 24 hours or more out of
date.
Large reductions in the amount of printouts because reports can be viewed and analyzed
online through graphical views on client workstations.
Ability to summarize data at a high level and then break down information into its core
components by using drill down techniques.
# DRILL DOWN
Each key press and mouse click breaks the information down into more and more
detailed form.
For Example : Drilling down the information reveals that the total of 500,000 European
customers is made up of :
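The breakdown itself is not given here, but the drill-down idea can be sketched with hypothetical per-country figures that sum to the 500,000 total:

```python
# Hypothetical customer counts: totals by region, drill-down by country.
customers = {
    "Europe": {"Germany": 210_000, "France": 170_000, "Italy": 120_000},
    "Asia":   {"Korea": 90_000, "Japan": 60_000},
}

def summary(data):
    # Top level of the report: one total per region.
    return {region: sum(by_country.values())
            for region, by_country in data.items()}

def drill_down(data, region):
    # One more "mouse click": break a region's total into its components.
    return data[region]

print(summary(customers)["Europe"])   # 500000
print(drill_down(customers, "Europe"))
```

Real tools apply the same idea over many levels (region, country, city, branch), each click moving one level deeper.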
# EXCEPTION REPORTING
Exception reporting allows you to see only the information that is out of the ordinary,
which you may want to act upon.
Without it, you must scan a printed report to pick out the good points or the bad points
of the information.
For Example : You could do a report that shows only the sales people who exceeded
their sales target by 10% so that you can comment and support your company's
outstanding sales people.
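That exception report can be sketched as a simple filter; the names and figures below are hypothetical:

```python
# Hypothetical sales figures: (name, target, actual).
sales = [
    ("Kim",  100_000, 125_000),
    ("Lee",  100_000, 104_000),
    ("Park",  80_000,  95_000),
]

def exceeded_target_by(records, pct):
    # Report only the exceptions: people more than pct percent above target.
    return [name for name, target, actual in records
            if actual > target * (1 + pct / 100)]

print(exceeded_target_by(sales, 10))   # ['Kim', 'Park']
```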
In the client presentation / server selection scenario, the data warehouse server uses its
processing time to perform the data selection and format the result set.
The server then sends the result set back to the individual user.
The selection or analysis is typically created on the server through a server-based tool,
such as IBM's Query/400 on the AS/400.
Query/400 is a feature-rich yet reasonably easy to use database query and selection tool.
In the client presentation and selection/server processing scenario, the client machine
processes a selection criteria through a front-end tool such as Cognos' Powerplay.
Processing time on the server is used to create the result set that is then sent back to the
client and presented in the original tool.
In the client presentation, selection and processing scenario, the client issues a selection
request to the server.
The server generates the result sets and sends it back to the client.
The client then runs further application logic on the result set that has been received.
Data warehousing, like most computer systems, has its limitations, which can cause
problems for an organization.
For example , a user accessing the warehouse may be able to collect and use company's
data in a fraudulent way.
Over time, a company's databases may become inaccurate due to changes made to the
data by the core workforce.
The data warehouse is somewhat of a double-edged sword with regard to these problems.
On the positive side, once a problem has been identified, your data warehouse and data
tools can identify all the problem areas.
# Financial Loss
Users incorrectly keying in or calculating financial activity can cause financial loss.
If your name happens to be Mr. Kim but a marketing piece addresses you as Mrs. Rim,
how soon will you, as the customer, respond to it ?
If other information sent to you is inaccurate, how likely are you to respond ?
It is said that it costs 10 times more to get a new customer than to retain an existing
customer.
By collecting information about customers in a data warehouse and doing targeted mailing
rather than blanket mailing, a company can reduce its advertising cost.
The success of an organization's data warehouse depends on the quality, integrity and
reliability of the data within it.
To increase the chances for a data warehouse to succeed, companies should develop an
Information Process.
An information process deals with the management of corporate information including its
accuracy, relevance to the business, accessibility by the user base and consistency of
definition.
Relevant data means that the data contained within the data warehouse is meaningful to
the organization
Relevant data determines the status of the company from financial, operational and
managerial perspectives.
Timeliness is concerned with the rate of change of data within the warehouse.
Companies need to decide how often the data needs to be updated in order to meet
organizational needs.
Finally data consistency is often considered the most problematic area of the information
process.
Data inconsistency is the result of users placing different meanings and interpretations on
data.
For Example : Julia from Accounts arrives at a meeting stating that the company has
100,000 customers.
Naomi from Operations arrives with her report stating that there are 107,000 customers.
The information process department's role is to look after the corporate data and know
how the data is created, maintained and perhaps deleted from the business system.
An Executive Information System (EIS) provides the information required for senior
management to make effective decisions based on information gathered from both
internal and external data sources.
They can be used to set measurements and thresholds on levels of the business.
The technology is therefore both useful and powerful when combined with the data
warehouse environment.
EIS and DSS tools are sometimes also referred to as data mining tools.
The user requirements for the EIS are clear cut and they are as follows :
o Customizable tools.
The newer EIS tools coming to the market place will be much more intelligent.
The EIS will begin to guide the users rather than requiring them to understand a great deal about the fundamental ways the EIS works.
An EIS must provide online analytical processing and multidimensional views of data.
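The multidimensional view mentioned above can be sketched as a simple cross-tab rollup. This is a minimal illustration only; the sample rows, region and quarter names are invented for the example, not taken from the text.

```python
# Illustrative sales rows: (region, quarter, amount).
rows = [
    ("North", "Q1", 100), ("North", "Q2", 120),
    ("South", "Q1", 80),  ("South", "Q2", 90),
    ("North", "Q1", 50),
]

# Roll the flat rows up into a two-dimensional (region x quarter) view,
# the kind of cross-tab an OLAP-style EIS tool would present.
view = {}
for region, quarter, amount in rows:
    view.setdefault(region, {}).setdefault(quarter, 0)
    view[region][quarter] += amount

print(view["North"]["Q1"])  # 150: the two North/Q1 rows rolled up
```

Real OLAP engines generalize this to many dimensions and precompute the rollups; the principle is the same.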
SUMMARY
Data warehousing has to meet the demands of process- oriented, re-engineered business.
Data warehouse and its implementations can play a part in facilitating the reengineering
of the business processes in which knowledge based workers including executives are
involved.
The effective use of good EIS and DSS can realign the broken keel of an organization and
set it on the right path.
INTRODUCTION
o Good Design
A typical centralized system may have several applications and databases, but it has only
one O/S and one Communication Protocol : SNA.
# HARDWARE
The client machine's performance can be increased by improving any of its major subsystems.
In a PC, these subsystems would be the amount of available RAM, the processor speed,
the video graphics speeds and the speed of NIC cards.
When purchasing client machines, the best strategy is to buy the fastest, most reliable machines available.
Delete redundant temporary files and defragment the hard drives regularly.
# SOFTWARE
1. OPERATING SYSTEM
A true multitasking operating system such as OS/2, Windows NT or Windows XP can be selected.
2. APPLICATIONS
The client application is normally where the largest improvements can be made.
# HARDWARE
Upgrading server hardware, just like upgrading client hardware, can improve the performance of the client-server system.
Using multiple network interfaces within a server can also improve performance.
High performance file systems using technologies such as SCSI and RAID offer dramatic
improvements.
# SOFTWARE
Rather than having a single server, the workload can be broken down across several servers : a file server, a communication server and a database server.
Choosing servers that support Symmetric Multiprocessing gives performance gains over single-processor models.
To make client-server database applications perform well, you have to follow two simple
rules :
In any real world client-server implementation, some data will be stored locally.
The application retrieves data from the server and places it in a local table.
Subsequent analysis is performed on local data, minimizing network traffic and load.
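The retrieve-once, analyze-locally pattern above can be sketched as a simple client-side cache. The function and table names here are hypothetical stand-ins; `fetch_from_server` simulates the network round trip to the database server.

```python
network_calls = 0

def fetch_from_server(query):
    # Hypothetical stand-in for a real network round trip to the server.
    global network_calls
    network_calls += 1
    return [("alice", 10), ("bob", 20)]

local_table = {}   # client-side cache keyed by query text

def get(query):
    # Hit the server only on a cache miss; analyze locally afterwards.
    if query not in local_table:
        local_table[query] = fetch_from_server(query)
    return local_table[query]

get("SELECT * FROM accounts")   # first call goes over the network
get("SELECT * FROM accounts")   # second call is served from the local table
print(network_calls)  # 1
```

Subsequent sorting, filtering and summarizing then run against the local copy, keeping network traffic and server load down.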
A highly normalized database is usually associated with complex relational joins, which can hurt performance.
Experiment with different forms of indexes in order to find the optimum use.
One thing to keep in mind when designing queries is that large result sets are costly on
most RDBMS.
It is much more efficient to restrict the size of the result set and allow the database back
end to perform the function for which it was intended.
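The point about small result sets can be made concrete with an in-memory SQLite table. The `orders` table and its rows are invented for the illustration; the contrast between the two query styles is what matters.

```python
import sqlite3

# Illustrative table living "on the server".
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (region TEXT, amount INTEGER)")
con.executemany("INSERT INTO orders VALUES (?, ?)",
                [("North", 100), ("South", 80), ("North", 50)])

# Costly pattern: pull the whole table, then filter and sum client-side.
all_rows = con.execute("SELECT region, amount FROM orders").fetchall()
north_total = sum(a for r, a in all_rows if r == "North")

# Better: restrict the result set and let the back end aggregate.
(total,) = con.execute(
    "SELECT SUM(amount) FROM orders WHERE region = ?", ("North",)
).fetchone()

print(north_total == total)  # True: same answer, but one row moved, not three
```

On a real RDBMS over a network the difference is dramatic: the second query ships a single aggregated row instead of the whole table.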
Concept of views.
The network design should be completed and documented as a part of the development process.
Reduce the number of users per LAN or increase the bandwidth available to each client if a bandwidth or response-time problem exists.
Excessive network traffic is one of the most common causes of poor client-server performance.
Designers must take care to prevent this problem by minimizing network requests.
Securing the centralized system of 70's and 80's was difficult enough.
Now system managers are faced with multivendor systems and ultimately multiple
security issues.
In the distributed era of 90's security must encompass host system, PCs, LANs and global
WANs.
These new configurations are creating an environment of increased security concerns and added complexity.
Some of the issues influencing this new environment and their effects on security are
listed below.
They have powerful desktop workstations with access to private and public networks.
Make sure security prevents the possibility of any unauthorized access to resources.
Security must now reach beyond and offer high quality protection in the open and
interconnected world.
Threats in an open distributed environment reach far beyond those encountered in the
centralized system.
It is because each individual piece of network and system can be linked to many other
networks and systems.
1. Virus
2. Worms
3. Trojan Horse
4. Back Door
The back door refers to a secret, undocumented way into a program left in by its programmers.
The term was made even more popular by the film War Games.
5. User Fakes
User fakes are common in an environment where trust is based on identification rather
than authentication.
The intruder takes control of a workstation by faking the identification of a trusted user.
Systems from different vendors using different formats and protocols must now
interoperate with each other seamlessly.
APPLICATION
NETWORK O/S
HARDWARE
NETWORK
Hardware security requires you to protect the physical components of the system.
For example, you may prevent unauthorized users from starting the PCs on a LAN : Power
on Password.
Link encryption can protect the network cabling against passive wire tapping.
The network operating system also provides validation of all users logging onto the LAN.
It can be done by restricting the work area for each user.
1. Secure Sign-On
2. Client Security
Client workstations are often in accessible locations with portable data and applications.
Pay attention to the security needs : They are very easy to steal.
You should use a secure firewall gateway between your organization and the Internet.
4. System Integrity
System integrity is the ability of your operating system to prevent the circumvention of its security, auditing or accounting controls.
IBM was the first and main software vendor to commit to this objective.
5. Accountability
You should be able to monitor specific security events using a single audit function.
This audit function should allow you to monitor the system and generate reports.
Be sure that all points in between are secured as information flows from point of origin to
point of destination.
By identifying your requirements, you can begin to build your security solutions.
After you define your requirements, you can develop a security policy and implement a
security solution.
o Assessing and managing risks when your business and environment change.
o Implementing the products and policies to align your security and business
objective.
o Repeating the life cycle process to maintain the vitality of your security solution.
To take full advantage of networking opportunities, you need to be able to verify the
authenticity of all users.
1. Confidentiality
Maintaining confidentiality involves ensuring that staff members have access only to the
data they are allowed to see and modify.
2. Integrity
3. Availability
USER AWARENESS
Staff do not genuinely understand the cost of damage to computer systems as a result of misuse.
o No games.
Companies have heavily invested in information systems and the people managing this
technology.
# The CLIENT
The client machines also are easily accessible and easy to use.
In order to provide good security at client level, you need to consider the role of the client
machine
1. Physical Security
Set alarms.
Use power-on passwords.
Page 93 © KUMAR PUDASHINE
CLIENT-SERVER COMPUTING
2. Network Security
The server must receive a valid user id and password from the client machine and authenticate the user against the valid user list held on the server.
3. Application Security
The applications have varied levels of security based upon the system.
# THE SERVERS
1. Physical Security
2. Software Security
Most servers have auditing capabilities to show when and how events occurred.
3. Network Security
The Internet and other public networks offer businesses and their customers links to valuable information.
The client workstation will continue to improve in its capability as a strong, reliable
business computer.
The improvement will require changes in the components of the hardware itself, the O/S
and the application.
# HARDWARE
The architecture is also becoming more user-friendly with the advent of technologies such as Plug and Play.
With the likes of Windows 98, XP and OS/2 Warp, desktop operating systems have become more and more reliable.
The O/S also has to offer maximum compatibility with older drives, older applications and older equipment.
A significant number of tools have been developed for reliable and rapid application development.
Like programming languages and system development, the commercial database also will
move to an object base.
The future business systems running on distributed systems will be based on Object-Oriented Database Management Systems [OODBMS].
In addition, companies will begin to create networks that can provide them with bandwidth on demand.
Optical Networks.
ISDN Networks
The result of these trends is a growing customer demand for greater network bandwidth.
Many companies are preparing for future by installing LANs with FDDI.
A broad range of FDDI products is currently available from a variety of vendors including Cisco, 3Com and Hewlett Packard.
TV Channels
This cell based transmission technology will have an even greater impact on both LANs
and WANs.
It uses cell-switching technology to achieve transmission speeds from 1.544 Mbps to 1.2 Gbps.
A major advantage of ATM is that all cells are of the same size : 53 bytes [48 bytes of payload + 5 bytes of header].
This type of transmission can be used to carry real-time information such as voice and
video.
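The fixed cell size above makes the per-cell overhead a simple worked calculation:

```python
# ATM cell arithmetic: 48 payload bytes plus a 5-byte header.
payload, header = 48, 5
cell = payload + header
overhead = header / cell
print(cell)                      # 53
print(round(overhead * 100, 1))  # 9.4 -> percent of every cell spent on header
```

That constant ~9.4% header tax is the price paid for the predictable switching delays that make real-time voice and video feasible.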
Ad-hoc networks.
Mobile computing
Study group XVII of the CCITT worked 4 years to develop a set of standards for future
voice and data integration.
It includes circuit switched and packet switched networks as well as end to end digital
transport of data.
It includes truly integrated voice, data and even video traveling smoothly from one type of
network to another.
The ISDN concept of a universal interface means each terminal will understand every other terminal.
ISDN standards define a digital interface divided into two types of channels : B channels for user voice and data, and D channels for signaling.
The CCITT committee defined two major interface that uses these B and D channels.
BRI : Basic Rate Interface is used to serve devices with relatively small capacity.
BRI : 2B + D
PRI : Primary Rate Interface is used for large-capacity devices. PRI : 23B + D
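The BRI and PRI capacities follow directly from the channel rates (each B channel is 64 Kbps; the BRI D channel is 16 Kbps, the PRI D channel 64 Kbps):

```python
# ISDN channel arithmetic for the two CCITT-defined interfaces.
B, D_bri, D_pri = 64, 16, 64   # channel rates in Kbps

bri = 2 * B + D_bri    # Basic Rate Interface: 2B + D
pri = 23 * B + D_pri   # Primary Rate Interface: 23B + D

print(bri)  # 144  Kbps
print(pri)  # 1536 Kbps -- a T1's 1.544 Mbps minus framing overhead
```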
There is a trend of linking a company's several different networks to form enterprise-wide networks.
A single enterprise-wide network spanning all these diverse computing environments can be made.
Every employee under such a network would have access to the resources available on
any of these previously isolated computing networks.
SNMP based network management programs are very common to networks where Unix or
Linux is used.
INTRODUCTION
PROCESSES
A process is just an executing program including the current values of PC, registers and
variables.
All the runnable software on the computer, often including the operating system, is organized into a number of sequential processes, or just processes.
In reality, of course the real CPU switches back and forth from process to process.
A single processor may be shared among several processes, with some scheduling algorithm being used to determine when to stop one process and service a different one.
# PROCESS STATES
[State diagram : Ready, Running and Blocked states ; Dispatch moves a process from Ready to Running, and Wakeup moves it from Blocked back to Ready.]
A process is said to be blocked if it is waiting for some event to happen (I/O completion
event).
The PCB is a data structure containing certain important information about the process
including :
PROCESS SCHEDULING
When more than one process is runnable, the operating system must decide which one to
run first.
The part of operating system that makes this decision is called Scheduler.
1. FIRST COME FIRST SERVED [FCFS]
Processes are dispatched according to their arrival time on the ready queue.
[Figure : FCFS ready list — processes C, B, A queued before the CPU ; each runs to completion in arrival order.]
2. ROUND ROBIN
Round robin is a preemptive scheduling discipline : processes are dispatched FIFO but are given only a limited amount of CPU time.
When the process uses up its quantum, it is put on the end of the list.
The only interesting issue with round robin is the length of quantum.
Switching from one process to another requires a certain amount of time for doing the
administration : Saving and loading registers, updating various tables and lists.
After doing 20 msec of useful work, the CPU will have to spend 5 msec on process
switching.
Conclusion :
Setting the quantum too short causes too many process switches and lowers the CPU efficiency.
Setting it too long may cause poor response to short interactive processes.
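The trade-off above can be put in numbers. With a 5 msec process-switch cost, the fraction of CPU time spent on useful work for a quantum of q msec is q / (q + 5):

```python
SWITCH = 5  # msec spent on each process switch, as in the text

def efficiency(quantum):
    # Fraction of CPU time spent on useful work rather than switching.
    return quantum / (quantum + SWITCH)

print(round(efficiency(20), 2))   # 0.8  -> the 20 msec case from the text
print(round(efficiency(1), 2))    # 0.17 -> too short: mostly switching
print(round(efficiency(500), 2))  # 0.99 -> efficient, but sluggish response
```

The numbers show why a quantum is usually chosen somewhere between the two extremes.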
[Figure : round-robin ready list — process A, preempted at quantum expiry, rejoins the back of the queue behind C and B.]
3. PRIORITY SCHEDULING
Round Robin scheduling makes the implicit assumption that all processes are equally
important.
Preemptive Scheduling.
Each process is assigned a priority and the runnable process with the highest priority is
allowed to run.
It is often convenient to group processes into priority class and use priority scheduling
among the classes but round-robin scheduling within each class.
To prevent high-priority processes from running indefinitely, the scheduler may decrease the priority of the currently running process.
[Figure : priority classes (PRIORITY 3, PRIORITY 2, …) — priority scheduling among the classes, round robin within each class ; in the example, process A runs and then moves from the front to the back of its class queue, behind B, C and D.]
4. SHORTEST REMAINING TIME [SRT]
In SRT, a running process may be preempted by a new process with a shorter estimated run-time.
It must keep track of the elapsed service time of the running job.
SRT shows excessive bias against longer jobs and excessive favoritism towards short new jobs.
Highest Response Ratio Next [HRRN] corrects this : the priority of each job is a function not only of the job's service time but also of the amount of time the job has been waiting for service.
[Figure : multilevel feedback queues — Level 1 and Level 2 ready lists each feeding the CPU ; a process leaves on completion or drops to the next lower level.]
A new process enters the queuing network at the back of the top queue.
If the job waits for I/O or its quantum expires, the process is placed at the back of the next lower-level queue.
Earliest Deadline First Scheduling : Process with least deadline is serviced first.
CONTEXT SWITCH
A context switch is the computing process of storing and restoring the state(context) of a
CPU.
A context switch can mean a register context switch, a task context switch or a thread
context switch.
o Execute ISR
INTERPROCESS COMMUNICATION
For example , in a shell pipeline the output of the first process must be passed to the
second process.
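The shell-pipeline idea — one process's output becoming another's input — can be sketched with a pipe between two processes. `multiprocessing.Pipe` stands in here for the kernel pipe a shell would create:

```python
from multiprocessing import Process, Pipe

def producer(conn):
    # First process in the "pipeline": write output into the pipe.
    conn.send("hello from process 1")
    conn.close()

def run_pipeline():
    # The parent plays the role of the second process, reading the
    # first process's output from the other end of the pipe.
    parent, child = Pipe()
    p = Process(target=producer, args=(child,))
    p.start()
    msg = parent.recv()
    p.join()
    return msg

if __name__ == "__main__":
    print(run_pipeline())  # hello from process 1
```

A real shell sets up the same plumbing with file descriptors: the write end of the pipe becomes the first process's stdout and the read end becomes the second's stdin.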
# RACE CONDITIONS
If two processes try to call deposit concurrently, something very bad can happen.
The single statement balance += amount is really implemented, on most computers, by a sequence of instructions such as : load balance into a register, add amount to the register, and store the register back into balance.
If one completes before the other starts, the combined effect is to add 30 to the balance,
as desired.
However, suppose the calls happen at exactly the same time, and the executions are
interleaved.
Suppose the initial balance is 100, and the two processes run on different CPUs. One possible result is that both processes load 100 and one update overwrites the other, so the final balance reflects only one of the two deposits.
This kind of bug, which only occurs under certain timing conditions, is called a Race
Condition.
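The lost-update interleaving can be replayed deterministically. Here a deposit of 10 and a deposit of 20 (amounts chosen for illustration) each expand into load / add / store steps, executed in one unlucky order:

```python
balance = 100

# Process A (deposit 10) loads the balance into its "register"...
reg_a = balance            # reg_a = 100
# ...then process B (deposit 20) runs all three of its steps.
reg_b = balance            # reg_b = 100 -- stale in a moment
reg_b += 20
balance = reg_b            # balance = 120
# Process A resumes with its stale register value and overwrites B's work.
reg_a += 10
balance = reg_a            # balance = 110, not the expected 130

print(balance)  # 110: process B's deposit was lost
```

On real hardware the interleaving depends on timing, which is exactly why the bug appears only intermittently.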
To avoid these kinds of problems, systems that support processes always contain mechanisms for mutual exclusion.
# CRITICAL SECTIONS
Prohibit more than one process from reading and writing the shared data at the same
time.
Some way of making sure that if one process is using a shared variable or file, the other
processes will be excluded from doing the same thing.
The part of the program where the shared memory is accessed is called the Critical Region
or Critical Section.
# SEMAPHORES
A semaphore is a protected variable and constitutes a classic method for restricting access
to equivalent shared resources in a multiprogramming environment.
In the special case where there is single equivalent shared resource, the semaphore is
called binary semaphore.
P (Semaphore S)
{
await S > 0 then S=S-1 ; /* must be atomic */
}
V (Semaphore S)
{
S=S+1 ; /* must be atomic */
}
The value of the semaphore is the number of units of the resource that are free.
P requests a unit of the resource, waiting if necessary until one becomes free, and then claims it.
V is the inverse ; it simply makes a resource unit available again after the process has finished using it.
Init is only used to initialize the semaphore before any requests are made.
The P and V operations must be atomic, which means that no process may ever be
preempted in the middle of one of those operations to run another operation on the same
semaphore.
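The P/V pseudocode above can be made concrete as a counting semaphore built on a condition variable; holding the condition's lock is what makes the test-and-decrement in P atomic. This is a minimal sketch, not production code:

```python
import threading

class Semaphore:
    # A counting semaphore matching the P/V pseudocode: P waits until
    # S > 0 and then decrements; V increments and wakes a waiter.
    def __init__(self, value):
        self.S = value
        self.cond = threading.Condition()

    def P(self):
        with self.cond:            # lock held -> test and decrement are atomic
            while self.S == 0:
                self.cond.wait()   # block until a V makes a unit free
            self.S -= 1

    def V(self):
        with self.cond:
            self.S += 1
            self.cond.notify()     # wake one process blocked in P

sem = Semaphore(2)   # two units of the resource are free
sem.P(); sem.P()     # both units taken; a third P() would block
sem.V()              # one unit returned
print(sem.S)  # 1
```

Python's standard library ships `threading.Semaphore` with the same semantics; the point here is only to show how P and V map onto a lock plus a wait/notify pair.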
# MONITORS
o Shared data.
Typical implementation:
Each monitor has one lock.
Acquire the lock when a monitor operation begins, and release it when the operation finishes.
Statically identify operations that only read data, then allow these read-only operations to
go concurrently.
Writers get mutual exclusion with respect to other writers and to readers.
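The one-lock-per-monitor pattern described above can be sketched as a class whose every operation brackets itself with the same lock. The `BankAccount` example is illustrative, chosen to echo the deposit race discussed earlier:

```python
import threading

class BankAccount:
    # A monitor: one lock protecting the shared data, acquired on entry
    # to every operation and released automatically on exit.
    def __init__(self):
        self._lock = threading.Lock()
        self._balance = 0

    def deposit(self, amount):
        with self._lock:           # entry protocol: acquire the monitor lock
            self._balance += amount

    def get_balance(self):
        with self._lock:           # even reads take the lock in this sketch
            return self._balance

acct = BankAccount()
threads = [threading.Thread(target=acct.deposit, args=(10,))
           for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(acct.get_balance())  # 50: no deposits lost
```

The `with self._lock:` idiom is what makes forgetting to release impossible, which is exactly the error-reduction advantage claimed below.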
Advantages:
Reduces the probability of error (you can never forget to acquire or release the lock).
Trend is away from encapsulated high-level operations such as monitors toward more