
Introduction to Computers

University Questions and Answers


1998-2002

Welingkar Institute Of Management


Matunga , Mumbai


1 Qt.1 2002 16M Write a note on how IT can help an organization in gaining a competitive
advantage.
2 Qt.1 1998 10M Using specific examples, describe how computerization would help your department perform more efficiently and effectively. Also explain how computerization will help better decision-making, analysis, and planning.

3 Qt.1 2001 20M Giving suitable examples, explain how and why IT and computers have con-
tributed in increasing productivity and competitiveness of organizations in doing business.

4 Qt 5 1998 20M "An integrated company-wide computerization is the only way of deriving full benefits of information technology today." Discuss.

Information Technology occupies a very important place in today's world. Whatever the industry, no one wants to stay out of the race; everyone is fighting for survival, and the bottom line is 'let the best man win'.

Before the advent of Information Technology, organizations faced:
· Increased paper work
· Lack of storage space
· Communication problems, e.g. high telephone costs

With computerized transactions, a single database is shared by all the branches of a particular bank, so customer information and past transactions can be retrieved in no time, resulting in quick dealings and negligible loss of time.

Let's take the simple example of the ATMs (Automated Teller Machines) deployed at various locations in India. The ATMs reduce the workload of the banking staff to a great extent: previously, customers had to deposit and withdraw cash or cheques at the counters, whereas now the ATMs do the job. Thus, the staff can be utilized for more substantial work.

E-mail, video conferencing, etc. have also brought the different branches of an organization closer to each other, as communication media have become much more advanced. Work can be done at a brisk pace, since reminders and other details can be mailed easily, and savings on huge telephone bills are possible.

Printouts of customer transactions, for example a transaction summary, can be produced on request instantly, without delay. This results in customer satisfaction and avoids long queues.

Analysis of data and comparison of balance sheets are possible in no time, as all the data is present in the computer; accurate information and comparisons can be produced, and the company's unique selling points and weaknesses can also be detected.

Previously, all the work, say in Finance, Marketing, Human Resources or any other department, would be done manually. A lot of paperwork would be involved, repetitive information could not be avoided, and communication between departments was possible only through telephone lines. Now, for example, the Human Resources department keeps all the information about an employee in the computer, available at its fingertips. Retrieval and updating of the data is much faster, and one branch, say in London, can access employee details whose records are kept in another branch, for example Mumbai.
No paperwork is required. Bank transactions passed through the computer can be retrieved as and when required, using different criteria. Less manpower is required, as one operator can handle the work of multiple workers, resulting in financial savings.

Computers can calculate amounts and figures accurately. The work is never monotonous for the computer: it gives accurate results every time, something a human being is never able to do all the time.

Time punching machines and computerized attendance musters can be implemented so that accurate
attendance sheets can be produced without manipulation.

5 Qt.1 2000 20M In what ways will the use of IT and the Internet enhance your job function as a middle manager? Discuss with examples, with respect to either the HRD function, the marketing function or the finance function.

HRD:
As an HRD manager, the functions would be in relation to recruitment, induction, payroll, industrial
relations etc.

Information about whether a particular candidate has appeared for an interview earlier, his other
personal details etc can be stored together and can be retrieved easily and fast, avoiding delays.
In Wipro and other top companies, the candidate is required to fill up a form stating all these details
so that when a particular candidate appears for an interview the next time, time is not lost in filling up
the entire form once again and processing takes place faster.

Details such as the employee's salary, bonus, dearness allowance, etc. can be fed into the computer a single time, and the salary and attendance statements can be generated every month without delay or manual calculation. There are special software packages available for this purpose. Payroll thus becomes a lot easier, by simply specifying the percentages of deductions, etc.

Appointment Letters, Memos etc can be issued just by drafting a sample copy.

FINANCE:
As a Finance Manager, the functions would be in relation to Balance Sheets, Bank Reconciliation
Statements, Petty cash statements etc.

Packages such as Tally are really helpful: once the entries are fed in on a daily basis, they can be retrieved as and when required in the form of single statements or consolidated statements without much of a hassle. This gives the manager a complete view of the expenses incurred by the company during a particular period and of whether they can be reduced.

Balance Sheets, Bank Reconciliation Statements (BRS), etc. can be compared with the previous years' figures, and profits and losses can be determined easily without delay. This gives accurate information about the actual standing of the company in the current year as compared to the previous year.

MARKETING:
As a Marketing Manager, the functions would be in relation to sales, turnover, client details, purchases, etc.


Information about the purchase and sale of goods etc would give information as to how much stock
is currently available with us, the outstanding amounts to be received from debtors and amounts to be
given to creditors etc.

Sales figures can be compared to know the exact status of the current year.

6 Qt.2(a) 2002 3M
Qt.2(e) 2001 4M Define : Booting

When the power to a computer is turned on, the first program that runs is usually a set of instructions kept
in the computer's read-only memory (ROM) that examines the system hardware to make sure everything is
functioning properly.

This power-on self test (POST) checks the CPU, memory, and basic input/output system (BIOS) for errors and stores the result in a special memory location. Once the POST has successfully completed, the
software loaded in ROM (sometimes called firmware) will begin to activate the computer's disk drives. In
most modern computers, when the computer activates the hard disk drive, it finds the first piece of the
operating system: the bootstrap loader.

The bootstrap loader (booting)

•is a small program that has a single function: It loads the operating system into memory and allows it to
begin operation.

•It determines whether the disk has the basic components that are necessary to run the operating system
successfully. It sets up the divisions of memory that hold the operating system, user information and applica-
tions. It establishes the data structures that will hold the myriad signals, flags and semaphores that are used to
communicate within and between the subsystems and applications of the computer. Then it turns control of
the computer over to the operating system.
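As a purely illustrative sketch (not real firmware), the Python fragment below mirrors the three steps just described: run the POST, locate the bootstrap loader on disk, and let it place the operating system in memory. All names used here are hypothetical.

def power_on_self_test(hardware):
    # POST: check the CPU, memory and BIOS and record the result
    return {component: "OK" for component in hardware}

def load_bootstrap(disk):
    # Firmware reads the first piece of the operating system from the disk
    return disk["boot_sector"]

def load_operating_system(bootstrap, memory):
    # The bootstrap loader places the OS in memory and hands over control
    memory["kernel"] = bootstrap["kernel_image"]
    return memory["kernel"]

hardware = ["CPU", "RAM", "BIOS"]
disk = {"boot_sector": {"kernel_image": "operating system kernel"}}
memory = {}

post_result = power_on_self_test(hardware)          # step 1: POST
bootstrap = load_bootstrap(disk)                    # step 2: find the bootstrap loader
kernel = load_operating_system(bootstrap, memory)   # step 3: load the OS and hand over control
print(post_result, "->", kernel)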

7 Qt.6(a) 2002 4M Enumerate the key purpose and name one example of Operating System
8 Qt.2(a) 1998 5M Write short note on : Operating System
9 Qt 1(b) 1999 10M What is the purpose of operating system? Explain the functions of
operating system.
10 Qt 4(b) 2000 10M Explain the functions of operating system.

An operating system provides the environment within which programs are executed. Besides hardware and high-level language translators, many other routines which enable a user to use the computer efficiently are provided by the operating system. To construct such an environment, the system is partitioned into small modules with a well-defined interface.

A system as large and complex as an operating system can only be created by partitioning it into smaller pieces. Each of these pieces should be a well-defined portion of the system with carefully defined inputs, outputs, and function. Operating systems share the system components outlined below.

1. Process Management
The CPU executes a large number of programs. While its main concern is the execution of user programs,
the CPU is also needed for other system activities. These activities are called processes. A process is a program in execution. Typically, a batch job is a process. A time-shared user program is a process. A system
task, such as spooling, is also a process. For now, a process may be considered as a job or a time-shared
program, but the concept is actually more general.

In general, a process will need certain resources such as CPU time, memory, files, I/O devices, etc., to accomplish its task. These resources are given to the process when it is created. In addition to the various physical and logical resources that a process obtains when it is created, some initialization data (input) may be passed along.

We emphasize that a program by itself is not a process; a program is a passive entity. Two processes may be associated with the same program, yet they are nevertheless considered two separate execution sequences.

The operating system is responsible for the following activities in connection with process management (a small illustration follows the list):
· The creation and deletion of both user and system processes
· The suspension and resumption of processes
· The provision of mechanisms for process synchronization
· The provision of mechanisms for deadlock handling
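As a small, hedged illustration of the idea that a process is a program in execution with its own resources, the Python sketch below creates two processes from the same program code; the function count_up and its arguments are invented purely for illustration.

from multiprocessing import Process
import os

def count_up(label, limit):
    # Each process runs this same code as a separate execution sequence
    total = sum(range(limit))
    print(f"{label} running in process {os.getpid()}, total={total}")

if __name__ == "__main__":
    p1 = Process(target=count_up, args=("job-1", 1000))   # creation of a process
    p2 = Process(target=count_up, args=("job-2", 2000))
    p1.start(); p2.start()                                 # execution
    p1.join(); p2.join()                                   # processes are cleaned up when finished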

2. Memory Management
Memory is central to the operation of a modern computer system. Memory is a large array of words or
bytes, each with its own address. Interaction is achieved through a sequence of reads or writes of specific
memory addresses. The CPU fetches from and stores in memory. In order for a program to be executed, it must be mapped to absolute addresses and loaded into memory.
In order to improve both the utilization of CPU and the speed of the computer's response to its users,
several processes must be kept in memory.
The operating system is responsible for the following activities in connection with memory management.
·Keep track of which parts of memory are currently being used and by whom.
·Decide which processes are to be loaded into memory when memory space becomes available.
·Allocate and de-allocate memory space as needed.
Memory management techniques are discussed in greater detail in standard operating-system texts.
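The toy sketch below (not how a real operating system is implemented) illustrates the bookkeeping just listed: tracking which parts of memory are in use and by whom, allocating space, and de-allocating it again. The first-fit strategy shown is only one of several possible choices.

class MemoryManager:
    def __init__(self, size):
        self.free_blocks = [(0, size)]   # list of (start, length) holes
        self.allocated = {}              # process id -> (start, length)

    def allocate(self, pid, length):
        # First-fit: use the first hole that is large enough
        for i, (start, hole) in enumerate(self.free_blocks):
            if hole >= length:
                self.free_blocks[i] = (start + length, hole - length)
                self.allocated[pid] = (start, length)
                return start
        raise MemoryError("no hole large enough")

    def free(self, pid):
        # De-allocate the space when the process no longer needs it
        start, length = self.allocated.pop(pid)
        self.free_blocks.append((start, length))

mm = MemoryManager(1024)
address = mm.allocate("process-A", 256)
print("process-A loaded at address", address)
mm.free("process-A")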

3. Secondary Storage Management


The main purpose of a computer system is to execute programs. These programs, together with the data
they access, must be in main memory during execution. Since the main memory is too small to permanently
accommodate all data and programs, the computer system must provide secondary storage to back up main memory. Most modern computer systems use disks as the primary on-line storage of information, of both
programs and data. Most programs, like compilers, assemblers, sort routines, editors, formatters, and so on,
are stored on the disk until loaded into memory, and then use the disk as both the source and destination of
their processing. Hence the proper management of disk storage is of central importance to a computer
system. There are few alternatives. Magnetic tape systems are generally too slow. In addition, they are limited
to sequential access. Thus tapes are more suited for storing infrequently used files, where speed is not a
primary concern.
The operating system is responsible for the following activities in connection with disk management
·Free space management
·Storage allocation
·Disk scheduling.

4. Input Output System


One of the purposes of an operating system is to hide the peculiarities of specific hardware devices from the


user. For example, in Unix, the peculiarities of Input/Output devices are hidden from the bulk of the
operating system itself by the Input/Output system. The Input/Output system consists of:
· A buffer-caching system
· General device-driver code
· Drivers for specific hardware devices
Only the device driver knows the peculiarities of a specific device.

5. File Management
File management is one of the most visible services of an operating system. Computers can store infor-
mation in several different physical forms; magnetic tape, disk, and drum are the most common forms.
Each of these devices has its own characteristics and physical organization. For convenient use of the
computer system, the operating system provides a uniform logical view of information storage. The
operating system abstracts from the physical properties of its storage devices to define a logical storage
unit, the file. Files are mapped, by the operating system, onto physical devices.

A file is a collection of related information defined by its creator. Commonly, files represent programs
(both source and object forms) and data. Data files may be numeric, alphabetic or alphanumeric. Files
may be free form, such as text files, or may be rigidly formatted. In general a file is a sequence of bits, bytes, lines or records whose meaning is defined by its creator and user. It is a very general concept.

The operating system implements the abstract concept of the file by managing mass storage devices, such as tapes and disks. Files are also normally organized into directories to ease their use. Finally, when multiple users have access to files, it may be desirable to control by whom and in what ways files may be accessed.

The operating system is responsible for the following activities in connection with file management (a small illustration follows the list):
· The creation and deletion of files
· The creation and deletion of directories
· The support of primitives for manipulating files and directories
· The mapping of files onto disk storage
· Backup of files on stable (non-volatile) storage
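A small sketch of these file-management primitives, using Python's standard library purely as a stand-in for the calls an operating system exposes (the file and directory names are invented for illustration):

import os

os.makedirs("reports", exist_ok=True)            # create a directory
with open("reports/summary.txt", "w") as f:      # create a file and write to it
    f.write("quarterly figures\n")

with open("reports/summary.txt") as f:           # read the file back
    print(f.read())

os.remove("reports/summary.txt")                 # delete the file
os.rmdir("reports")                              # delete the (now empty) directory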

6. Protection System
The various processes in an operating system must be protected from each other's activities. For that purpose, the operating system provides various mechanisms to ensure that the files, memory segments, CPU and other resources can be operated on only by those processes that have gained proper authorization from the operating system. Protection refers to a mechanism for controlling the access of programs, processes, or users to the resources defined by a computer system; it specifies the controls to be imposed, together with some means of enforcement. An unprotected resource cannot defend against use (or misuse) by an unauthorized or incompetent user.

7. Networking
A distributed system is a collection of processors that do not share memory or a clock. Instead, each
processor has its own local memory, and the processors communicate with each other through various
communication lines, such as high-speed buses or telephone lines. Distributed systems vary in size and
function. They may involve microprocessors, workstations, minicomputers, and large general-purpose
computer systems. The processors in the system are connected through a communication network, which
can be configured in a number of different ways. The network may be fully or partially connected. The communication network design must consider routing and connection strategies, and the problems of connection and security. A distributed system provides the user with access to the various resources the system
maintains. Access to a shared resource allows computation speed-up, data availability, and reliability.

8. Command Interpreter System


One of the most important components of an operating system is its command interpreter. The command
interpreter is the primary interface between the user and the rest of the system. Many commands are given to
the operating system by control statements. When a new job is started in a batch system, or when a user logs in to a time-shared system, a program that reads and interprets control statements is automatically executed.
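In miniature, and purely as an illustration (the command set here is invented), a command interpreter does little more than read a control statement, look up the command it names, and dispatch it:

from datetime import date

def cmd_date(_args):
    print(date.today())

def cmd_echo(args):
    print(" ".join(args))

COMMANDS = {"date": cmd_date, "echo": cmd_echo}

def interpret(lines):
    for line in lines:
        name, *args = line.split()
        action = COMMANDS.get(name)
        if action:
            action(args)                      # execute the command
        else:
            print(f"unknown command: {name}")

interpret(["echo hello operator", "date"])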

11 Qt.6(6) 1999 6M Write short note on: RAM, ROM and Cache Memory

RAM and ROM

Data storage is one of the most important and complicated functions of a computer. This is why memory is a necessity for computers. Understanding the hardware involved in temporary and permanent storage is a necessary and complicated task.

Random-access memory (RAM)
– Read/write memory
– Volatile storage

Random-access memory allows the CPU to read and write information at any time . This information is
erased, however, when you turn the system’s power off.

We can further differentiate RAM chips between Static RAM (SRAM) and Dynamic RAM (DRAM), de-
pending upon whether the data in the cells needs to be refreshed. (SRAM doesn’t need to be refreshed.)

RAM is used for:
The processor in a computer runs more than a million times faster than typical auxiliary storage devices can operate (nanoseconds vs. milliseconds). For this reason, it is necessary to have a fast, temporary repository for the data that the system is most likely to request, and RAM is particularly suited for this. RAM is faster, which is why computers use it for main memory; PCs use ROM only for the initial boot program.

DRAM is used for:
· A computer usually implements main memory (sometimes called system memory) using DRAM, because it can use DRAM in large quantities (tens or even hundreds of megabytes) for a modest cost. Even though DRAM is very fast, it is still as much as eight times slower than the processor, and many processor cycles can be lost waiting for data to transfer from main memory.
SRAM is used for
• Most computer systems use a small amount of SRAM (no more than a few megabytes, generally) as
a small, very fast cache. Even one megabyte of cache memory may add as much as $20 to the cost of
the system. (Cache is a bank of high-speed memory set aside for frequently accessed data.)


RAM is also known as main storage; disk space is auxiliary storage. Multitasking operating systems will map virtual-storage areas on disk to real storage, so multiple programs can share the same physical memory addresses (data locations). Any device that can store data is a storage device, but not all storage devices are memory devices. The term memory generally refers to small integrated circuits called chips.

Read-only memory (ROM)
– Non-volatile storage
You cannot write to read-only memory; your system can only read it during normal operation. The computer manufacturer preprograms the data; the data will remain intact without any power.

Computer use of ROM
When you power on a computer, it needs to load the system-initialization program from a known location. Early PCs required the operator to manually set the device address. However, this data is now stored in complementary metal-oxide semiconductor (CMOS) memory or is at a fixed location. In the PC, the location is an address within a ROM chip called the Basic Input/Output System (BIOS, a special piece of software that controls the startup process of a computer and other basic functions).
Another kind of memory is cache memory. There are many levels of cache memory, but most often references to cache memory refer to the secondary cache or L2 cache. Cache memory is used in many parts of the modern PC to enhance system performance by acting as a buffer for recently used information. We will talk about the secondary cache, but other kinds of cache follow the same basic principle. The system cache is placed between the CPU and the RAM and is responsible for a great deal of the system performance improvement of today's PCs. The cache is a buffer of sorts between the very fast processor and the relatively slow memory that serves it. (The memory is not really that slow; it's just that the processor is much faster.) The presence of the cache allows the processor to do its work while waiting for memory far less often than it otherwise would.
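The sketch below is a rough software illustration of this buffering idea (not of real cache hardware): recently used values are kept in a small, fast dictionary in front of a deliberately slow lookup, so repeated accesses avoid the slow path.

import time

main_memory = {addr: addr * 2 for addr in range(100)}   # pretend slow store

def slow_read(addr):
    time.sleep(0.01)            # simulate the slower main memory
    return main_memory[addr]

cache = {}

def read(addr):
    if addr in cache:           # cache hit: no slow access needed
        return cache[addr]
    value = slow_read(addr)     # cache miss: go to main memory
    cache[addr] = value
    return value

for addr in [5, 7, 5, 5, 7]:    # repeated accesses are served from the cache
    read(addr)
print("cached addresses:", sorted(cache))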

12. Qt.2(C) 2000 5M Differentiate between Main Memory and Secondary Memory.
Main / Primary Memory:
1. Used to store a variety of critical information required for processing by the CPU.
2. E.g. the two types of memory in the Immediate Access Store of the computer: RAM and ROM.
3. Made up of a number of memory locations or cells.
4. Measured in terms of capacity and speed.
5. Storage capacity of main memory is limited.
6. Cost is high for high-speed storage, and hence high for primary storage.
7. Stores the program instructions and the data in binary machine code.
8. Offers temporary storage of data (volatile).

Secondary Memory:
1. Essential to any computer system to provide backup storage.
2. E.g. the two main ways of storing data are serial access and direct access; economical storage of large volumes of data on magnetic media such as floppy disk and magnetic disk.
3. Made up of sectors and tracks.
4. Measured in terms of storage space.
5. Storage capacity of secondary memory is huge.
6. Cost is comparatively low for secondary memory, and hence for secondary storage.
7. Stores data in the form of bytes made up of bits.
8. Offers permanent storage of data (non-volatile).

13 Qt.5(b)2000 5M Write short note on: Generation of computers

Operating systems and computer architecture have a great deal of influence on each other. To facilitate
the use of the hardware, operating systems were developed. As operating system were designed and
used, it became obvious that changes in the design of the hardware could simplify the operating system.

In this short historical review, notice how the introduction of new hardware features is the natural
solution to many operating system problems.

Operating systems have been evolving over the years. Since operating systems have historically been
closely tied to the architecture of the computers on which they run, we will look at successive genera-
tions of computers to see what their operating systems were like.

The Zeroth Generation

The term zeroth generation is used to refer to the period of development of computing which predated the commercial production and sale of computer equipment. The period might be dated as extending from the mid-1800s. In particular, this period witnessed the emergence of the first electronic digital computers, such as the ABC, and of the EDVAC, which was the first to fully implement the idea of the stored program and serial execution of instructions. The development of the EDVAC set the stage for the evolution of commercial computing and operating system software. The hardware component technology of this period was the electronic vacuum tube. The actual operation of these early computers took place without the benefit of an operating system. Early programs were written in machine language and each contained code for initiating operation of the computer itself. This system was clearly inefficient and depended on the varying competencies of the individual programmers, who acted as operators.

The First Generation, 1951-1956

The first generation marked the beginning of commercial computing. It was characterized by the high-speed vacuum tube as the active component technology. Operation continued without the benefit of an operating system for a time. The mode was called "closed shop" and was characterized by the appearance of hired operators who would select the job to be run, perform the initial program load of the system, run the user's program, and then select another job, and so forth. Programs began to be written in higher level, procedure-oriented languages, and thus the operator's routine expanded. The operator now selected a job, ran the translation program to assemble or compile the source program, combined the translated object program along with any existing library programs that the program might need as input to the linking program, loaded and ran the composite linked program, and then handled the next job in a similar fashion. Application programs were run one at a time, and were translated with absolute computer addresses. There was no provision for moving a program to a different location in storage for any reason. Similarly, a program bound to specific devices could not be run at all if any of these devices were busy or broken.

At the same time, the development of programming languages was moving away from the basic machine
languages; first to assembly language, and later to procedure oriented languages, the most significant
being the development of FORTRAN.

The second Generation, 1956-1964

Transistors replacing vacuum tubes as the hardware component technology most notably characterized
the second generation of computer hardware. In addition, some very important changes in hardware and
software architectures occurred during this period. For the most part, computer systems remained card
and tape-oriented systems. Significant use of random access devices, that is, disks, did not appear until
towards the end of the second generation. Program processing was, for the most part, provided by large
centralized computers operated under mono-programmed batch processing operating systems.

The most significant innovations addressed the problem of excessive central processor delay due to
waiting for input/output operations. Recall that programs were executed by processing the machine
instructions in a strictly sequential order. As a result, the CPU, with its high-speed electronic component,
was often forced to wait for completion of I/O operations that involved mechanical devices (card readers and tape drives) that were orders of magnitude slower.

These hardware developments led to enhancements of the operating system. I/O and data channel communication and control became functions of the operating system, both to relieve the application programmer from the difficult details of I/O programming and to protect the integrity of the system. The operating system also provided improved service to users by segmenting jobs and running shorter jobs first (during "prime time") and relegating longer jobs to lower priority or night-time runs. System libraries became more widely available and more comprehensive as new utilities and application software components were made available to programmers.

The second generation was a period of intense operating system development. Also it was the period for
sequential batch processing. Researchers began to experiment with multiprogramming and multipro-
cessing.

The Third Generation, 1964-1979

The third generation officially began in April 1964 with IBM's announcement of its System/360 family of computers. Hardware technology began to use integrated circuits (ICs), which yielded significant advantages in both speed and economy. Operating system development continued with the introduction and widespread adoption of multiprogramming. This was marked first by the appearance of more sophisticated I/O buffering in the form of spooling operating systems. These systems worked by introducing two new systems programs: a system reader to move input jobs from cards to disk, and a system writer to move job output from disk to printer, tape, or cards. The spooling operating system in fact had multiprogramming, since more than one program was resident in main storage at the same time. Later this basic idea of multiprogramming was extended to include more than one active user program in memory at a time. To accommodate this extension, both the scheduler and the dispatcher were enhanced. In addition, memory management became more sophisticated in order to ensure that the program code for each job, or at least that part of the code being executed, was resident in main storage. Users shared not only the system's hardware but also its software resources and file system disk space.

The third generation was an exciting time, indeed, for the development of both computer hardware and
the accompanying operating system. During this period, the topic of operating systems became, in real-
ity, a major element of the discipline of computing.

The Fourth Generation, 1979 - Present

The fourth generation is characterized by the appearance of the personal computer and the workstation. Miniaturization of electronic circuits and components continued, and large-scale integration (LSI), the component technology of the third generation, was replaced by very large scale integration (VLSI), which characterizes the fourth generation. Improvements in hardware miniaturization and technology have evolved so fast that we now have inexpensive workstation-class computers capable of supporting multiprogramming and time-sharing. Hence the operating systems that support today's personal computers and workstations look much like those which were available for the minicomputers of the third generation. Examples are Microsoft's DOS for IBM-compatible personal computers and UNIX for workstations. However, many of these desktop computers are now connected as networked or distributed systems. Computers in a networked system each have their operating system augmented with communication capabilities that enable users to remotely log into any system on the network and transfer information among machines that are connected to the network. The machines that make up a distributed system operate as a virtual single-processor system from the user's point of view; a central operating system controls and makes transparent the location in the system of the particular processor or processors and file systems that are handling any given program.

14 Qt.7(b) 2002 8M Difference between An Ordinary Desktop v/s a computer used as a professional
grade server

An Ordinary Desktop:
1. Has a single processor.
2. Memory is normally measured in MB.
3. Fewer slots are available for connecting devices.
4. Used for low-performance, single-user applications.
5. E.g. a normal home or office PC.

A Professional-Grade Server:
1. Has more than one processor.
2. Memory is normally measured in GB.
3. More slots are available for connecting devices.
4. Used for high-performance, multi-user applications.
5. E.g. a mail server, data server, networking server or proxy server.

15 Qt.7(a) 2001 5M, Qt.5(a) 2000 5M, Qt.6(2) 1999 6M Write short note on: Modem
Write short note on: Modem and its devices
16 Qt.6(d) 2002 4M Enumerate the key purpose and name one example of Modem

[Figure: Signalling on a modem link. The computer and its modem exchange a digital signal; the two modems exchange an analogue signal over the telephone line.]

Modems differ in design, set-up aids, essential hardware performance, and service and support poli-
cies. Be it for sending electronic mail, for data transfer, or for Internet surfing, a small, Walkman-size
gadget called a “modem” now accompanies the PC. Till recently part of the corporate desktop only,
modems are now becoming part of home PC configurations too, as Internet usage at home is growing
considerably. A modem is a device that allows computers to transmit data over regular copper tele-
phone lines. Computers store, send, receive and process data in digital format. This means that the
data in your computer is stored as a series of binary digits, or bits. Phone lines however, transmit data
in a continuous analogue wave. A modem converts the signal from digital format to analogue format
for transmission to a remote modem - this is called modulation.


The growth in the modem market is a fallout of the growth in Internet usage, increased telecommuting, use of e-mail for communication, setting up of WANs (Wide Area Networks) and implementation of intranets and extranets. Connectivity can be established through analog lines, leased lines, ISDN lines and satellite. Basically there are three types of modems which facilitate connectivity: dial-up, leased line and ISDN (Integrated Services Digital Network). The system is so programmed that billing takes place accordingly. Dial-up modems can be external, internal or PC Card. An external or desktop modem is a small box, equipped with a set of indicator lights, connected to the computer using a serial cable.

External modems are more prevalent in India, primarily because internal modems are difficult to configure and install, while an external modem is easier to troubleshoot or replace in case of failure; further, the LED indicators on an external modem help the user to visually monitor and troubleshoot. There are two ways in which modems can be configured: as data/fax modems or data/fax/voice modems. Data/fax modems provide only those two facilities, while the voice capability in a modem acts as an answering machine. Irrespective of type, all modems are designed to comply with international standards.
· To communicate with another computer over copper phone lines, both the sending and
receiving computers must be connected to a modem.
· Data is sent from your computer to your modem as a digital signal.
· Your modem converts the digital signal into an analogue signal (modulation) then transmits
the data to the receiving (remote) modem.
· The remote modem converts the analogue signal into a digital signal (demodulation) then
transmits the data to the receiving computer for processing.
The data that you sent to the remote computer may then be forwarded to another computer for
processing - as is the case when you connect to OptusNet.
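A purely conceptual sketch of modulation and demodulation is given below: digital bits are mapped onto one of two tone frequencies and mapped back again. The two frequencies are chosen in the spirit of early FSK modems; real modems use far more sophisticated encodings.

BIT_TO_FREQ = {"0": 1070, "1": 1270}              # tone frequency in Hz for each bit
FREQ_TO_BIT = {f: b for b, f in BIT_TO_FREQ.items()}

def modulate(bits):
    return [BIT_TO_FREQ[b] for b in bits]          # digital signal -> analogue tones

def demodulate(tones):
    return "".join(FREQ_TO_BIT[f] for f in tones)  # analogue tones -> digital signal

tones = modulate("1011001")
print(tones)
print(demodulate(tones))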

17 Qt.4 2002 16M Describe a standard fully featured desktop configuration. Mention a single line description against each item in the configuration.

18 Qt.7(b) 2001 5M Write short note on : Storage Devices

A standard fully featured desktop configuration basically has the following types of devices:

1. Input devices
2. Output devices
3. Storage devices
4. Memory

Motherboard

A motherboard or main board is the physical arrangement in a computer that contains the computer's
basic circuitry and components.

Input Devices
There are several ways to get new information or input into a computer. The two most common
ways are the keyboard and the mouse. The keyboard has keys for characters (letters, numbers and
punctuation marks) and special commands. Pressing the keys tells the computer what to do or what

to write. The mouse has a special ball that allows you to roll it around on a pad or desk and move the
cursor around on screen. By clicking on the buttons on the mouse, you give the computer directions
on what to do. There are other devices similar to a mouse that can be used in its place. A trackball
has the ball on top and you move it with your finger. A touchpad allows you to move your finger
across a pressure sensitive pad and press to click. A scanner copies a picture or document into the
computer. Another input device is a graphics tablet. A pressure sensitive pad is plugged into the
computer. When you draw on the tablet with the special pen (never use an ink pen or pencil!), the
drawing appears on the screen. The tablet and pen can also be used like a mouse to move the cursor
and click.

Output Devices
Output devices display information in a way that you can understand. The most common output device is a monitor. It looks a lot like a TV and houses the computer screen. The monitor allows you to 'see' what you and the computer are doing together. Speakers are output devices that allow you to hear sound from your computer. Computer speakers are just like stereo speakers; there are usually two of them and they come in various sizes. A printer is another common part of a computer system. It takes what you see on the computer screen and prints it on paper. There are two types of printers. The inkjet printer uses inks to print. It is the most common printer used with home computers and it can print in either black and white or color. Laser printers run much faster because they use lasers to print. Laser printers are mostly used in businesses. Black and white laser printers are the most common, but some print in color, too. Ports are the places on the outside of the computer case where you plug in hardware. On the inside of the case, they are connected to expansion cards. The keyboard, mouse, monitor, and printer all plug into ports. There are also extra ports to plug in extra hardware like joysticks, gamepads, scanners, digital cameras and the like. The ports are controlled by expansion cards that are plugged into the motherboard and connected to other components by cables (long, flat bands that contain electrical wiring).

Storage Devices
The purpose of storage in a computer is to hold data or information and get that data to the CPU as
quickly as possible when it is needed. Computers use disks for storage: hard disks that are located
inside the computer, and floppy or compact disks that are used externally.

Hard Disks

Your computer uses two types of memory: primary memory, which is stored on chips located on the motherboard, and secondary memory, which is stored on the hard drive. Primary memory holds all of the essential memory that tells your computer how to be a computer. Secondary memory holds the information that you store in the computer.

Inside the hard disk drive case you will find circular disks that are made from polished steel. On the disks,
there are many tracks or cylinders. Within the hard drive, an electronic reading/writing device called the
head passes back and forth over the cylinders, reading information from the disk or writing information
to it. Hard drives spin at 3600 or more rpm (Revolutions Per Minute) - that means that in one minute, the
hard drive spins around over 3600 times!

Today's hard drives can hold a great deal of information - sometimes over 20GB!


Floppy Disks

When you look at a floppy disk, you'll see a plastic case that measures about 3 1/2 inches across. Inside that case is a very thin piece of plastic coated with microscopic iron particles. This disk is much like the tape inside a video or audio cassette. At one end of the case is a small metal cover with a rectangular hole in it. That cover can be moved aside to expose the flexible disk inside. But never touch the inner disk; you could damage the data that is stored on it. On one side of the floppy disk is a place for a label. On the other side is a silver circle with two holes in it. When the disk is inserted into the disk drive, the drive hooks into those holes to spin the circle.

This causes the disk inside to spin at about 300 rpm! At the same time, the silver metal cover on the end
is pushed aside so that the head in the disk drive can read and write to the disk.
Floppy disks are the smallest type of storage, holding only 1.44MB.

Compact Disks

Instead of electromagnetism, CDs use pits (microscopic indentations) and lands (flat surfaces) to store information, much the same way floppies and hard disks use magnetic and non-magnetic storage. Inside the CD-ROM drive is a laser that reflects light off the surface of the disk to an electric eye. The pattern of reflected light (land) and no reflected light (pit) creates a code that represents data.
CDs usually store about 650MB. This is quite a bit more than the 1.44MB that a floppy disk stores. A DVD or Digital Video Disk holds even more information than a CD, because the DVD can store information on two levels, in smaller pits, or sometimes on both sides.
Memory
1. RAM and ROM, as described above.
2. PROM (Programmable Read-Only Memory)
A variation of the ROM chip is programmable read-only memory. PROM can be programmed to record information using a device known as a PROM programmer. However, once the chip has been programmed, the recorded information cannot be changed, i.e. the PROM becomes a ROM and the information can only be read.

3. EPROM (Erasable Programmable Read-Only Memory)
As the name suggests, with Erasable Programmable Read-Only Memory the information can be erased and the chip programmed anew to record different information, using a special PROM programmer. When the EPROM is in use, information can only be read, and the information remains on the chip until it is erased.

19 Qt 1(a) 1999 10M Distinguish between microcomputers, minicomputers, mainframes and supercomputers.

There are four classifications of digital computer systems: super-computer, mainframe computer, minicomputer, and microcomputer.

Super-computers are very fast and powerful machines. Their internal architecture enables them to run
at the speed of tens of MIPS ( million instructions per second). Super-computers are very expensive
and for this reason are generally not used for CAD applications. Examples of super-computers are:
Cray and CDC Cyber 205.

Mainframe computers are built for general computing, directly serving the needs of business and
engineering. Although these computing systems are a step below super-computers, they are still very
fast and will process information at about 10 MIPS. Mainframe computing systems are located in a
centralized computing center with 20-100+ workstations. This type of computer is still very expensive
and is not readily found in architectural/interior design offices.

Minicomputers were developed in the 1960’s resulting from advances in microchip technology. Smaller
and less expensive than mainframe computers, minicomputers run at several MIPS and can support 5-
20 users. CAD usage throughout the 1960’s used minicomputers due to their low cost and high perfor-
mance. Examples of minicomputers are: DEC PDP, VAX 11.

Microcomputers were invented in the 1970’s and were generally used for home computing and dedi-
cated data processing workstations. Advances in technology have improved microcomputer capabili-
ties, resulting in the explosive growth of personal computers in industry. In the 1980’s many medium
and small design firms were finally introduced to CAD as a direct result of the low cost and availability
of microcomputers. Examples are: IBM, Compaq, Dell, Gateway, and Apple Macintosh.

The average computer user today uses a microcomputer. These types of computers include PC’s, laptops,
notebooks, and hand-held computers such as Palm Pilots. Larger computers fall into a mini-or main-
frame category. A mini-computer is 3-25 times faster than a micro. It is physically larger and has a
greater storage capacity. A mainframe is a larger type of computer and is typically 10-100 times faster
than the micro. These computers require a controlled environment both for temperature and humidity.
Both the mini and mainframe computers will support more workstations than will a micro. They also
cost a great deal more than the micro running into several hundred thousand dollars for the mainframes.

The microprocessor contains the CPU, which is made up of three components: the control unit, which supervises all that is going on in the computer; the arithmetic/logic unit, which performs the math and comparison operations; and temporary memory. Because of the progress in developing better microprocessors, computers are continually evolving into faster and better units.

20 Qt.3(b) 2001 10M Write a note on evolution of languages from machine level to natural
languages.
21 Qt.6(4) 1999 6M Write short note on : Generation of programming languages
22 Qt 4(a) 2000 10M Briefly Explain different generations of programming languages.

Generation of Programming Languages


A proper understanding of computer software requires a basic knowledge of programming languages.


These allow programmers and end users to develop the programs of instructions that are executed by a computer. To be knowledgeable end users, one should know the basic categories of programming languages. Each generation of programming language has its own unique vocabulary, grammar and uses.

1. First Generation - Machine Language
· These are the most basic level of programming languages.
· In the early stages of computer development, all program instructions had to be written using binary codes unique to each computer.
· This involved the difficult task of writing instructions in the form of strings of binary digits (ones and zeros) or another number system.
· Programmers had to write long series of detailed instructions even to accomplish simple processing tasks.
· Programming in machine language involves specifying the storage locations for every instruction and item of data used.
· Instructions must be included for every switch and indicator used by the program.
· All these requirements make machine language programming difficult and error prone.
· Instructions in a machine language program consist of an operation code, which specifies what is to be done, and an operand, which specifies the address of the data or device to be operated upon (illustrated in the sketch below).
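To make the operation-code/operand idea concrete, here is a small hypothetical Python simulation of a machine executing numeric instructions; the instruction set (1 = load, 2 = add, 3 = store) is invented and does not belong to any real computer.

# Each instruction is a pair: (operation code, operand address)
memory = {10: 7, 11: 5, 12: 0}
program = [(1, 10), (2, 11), (3, 12)]   # load M[10], add M[11], store result into M[12]

accumulator = 0
for opcode, operand in program:
    if opcode == 1:                     # LOAD from the operand address
        accumulator = memory[operand]
    elif opcode == 2:                   # ADD the value at the operand address
        accumulator += memory[operand]
    elif opcode == 3:                   # STORE the accumulator at the operand address
        memory[operand] = accumulator

print(memory[12])                       # prints 12 (7 + 5)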

2. Assembly Languages (Second Generation)
· These were developed to reduce the difficulties in writing machine language programs.
· Use of these languages requires language translator programs called assemblers, which allow a computer to convert the instructions of such languages into machine instructions.
· Assembly languages are known as symbolic languages because symbols are used to represent operation codes and storage locations.
· Convenient alphabetic abbreviations called mnemonics (memory aids) and other symbols are used.

Advantages
· Alphabetic abbreviations that are easier to remember are used in place of the numerical addresses of the data.
· Simplifies programming.

Disadvantages
· Assembler language is machine oriented, because assembler language instructions correspond closely to the machine language instructions of the particular computer model used.

3. High Level Languages (Third Generation Languages)
· HLLs are also known as compiler languages.
· Instructions of an HLL are called statements and closely resemble human language or the standard notation of mathematics.
· Individual high level language statements are actually macro instructions; that is, each individual statement generates several machine instructions when translated into machine language by HLL translator programs called compilers or interpreters.

Advantages
· Easy to learn and understand
· Have less rigid rules and forms
· Potential for error is reduced
· Machine independent

Disadvantages
· Less efficient than assembler language programs
· Require a greater amount of time for translation into machine instructions. E.g. COBOL, FORTRAN, Ada, etc.

4. Fourth Generation Languages (4GL)
· The term 4GL is used to describe a variety of programming languages that are more non-procedural and conversational than prior languages.
· Natural languages are 4GLs that are very close to English or other human languages.
· While using a 4GL, end users and programmers only need to specify the results they want, while the computer determines the sequence of instructions that will accomplish those results. E.g. FoxPro, Oracle, dBase.

Advantages
· Ease of use and technical sophistication.
· Natural query languages that impose no rigid grammatical rules.
· More useful in end-user and departmental applications without a high volume of transactions to process.

Disadvantages
· Not very flexible.
· Difficult for an end user to override some of the pre-specified formats or procedures of a 4GL.
· Machine language code generated by a program developed in a 4GL is frequently much less efficient than a program written in a language like COBOL.
· Unable to provide reasonable response times when faced with a large amount of real-time transaction processing and end-user inquiries.

In summary:
· Fourth generation languages use natural and non-procedural statements.
· High-level languages use English-like statements and arithmetic notation.
· Assembler languages use symbolic coded instructions.
· Machine languages use binary coded instructions.

5. Fifth Generation Languages
Languages using artificial intelligence techniques.
· Artificial Intelligence (AI) is a science and technology based on disciplines such as
- Computer Science
- Biology
- Psychology
- Linguistics
- Mathematics
- Engineering
· The major thrust of AI is the development of computer functions normally associated with human intelligence, such as reasoning, inference and problem solving.
· The term AI was coined by John McCarthy in 1956.
· The domains of AI are
- Cognitive Science
- Computer Science
- Robotics
- Natural Language

23 Qt.7(c) 2002 8M Difference between 3rd Generation Language v/s 4th Generation Language.
Programming languages have evolved tremendously since the early 1950s, and this evolution has resulted in hundreds of different languages being invented and used in industry. This evolution was needed, as we can now instruct computers more easily and faster than ever before, thanks to technological advancement in hardware with fast processors like the 200MHz Pentium Pro developed by Intel. The increase in the quantity and speed of the powerful computers now being produced, which are more capable of handling complex code from languages of this generation such as AppWare and PROLOG, will prompt language designers to design more efficient code for various applications. This article goes down memory lane to look at the past five generations of languages and at how they revolutionized the computer industry.

The first and second generation languages appeared during the period 1950-60; as many experienced programmers will say, these are machine and assembly languages. Programming language history really begins in the early nineteenth century, when automated calculation of mathematical functions was developed. Further developments in the early 1950s brought us machine language, without interpreters and compilers to translate languages. Micro-code is an example of a first generation language residing in the CPU, written for doing multiplication or division. Computers then were programmed in binary notation, which was very prone to errors. A simple algorithm resulted in lengthy code. This was then improved with mnemonic codes to represent operations.

Symbolic assembly codes came next in the mid 1950s: the second generation of programming languages, like AUTOCODER, SAP and SPS. Symbolic addresses allowed programmers to represent memory locations, variables and instructions with names. Programmers now had the flexibility of not having to change the addresses for new locations of variables whenever they were modified. This kind of programming is still considered fast, but programming in machine language required intimate knowledge of the CPU and the machine's instruction set. This also meant high hardware dependency and lack of portability: assembly or machine code could not run on different machines.

The period from the early 1960s until 1980 saw the emergence of the third generation programming languages. Languages like ALGOL 58, 60 and 68, COBOL, FORTRAN IV, ADA and C are examples of these and were considered high-level languages. Most of these languages had compilers, and the advantage of this was speed. Independence was another factor, as these languages were machine independent and could run on different machines. The advantages of high-level languages include support for ideas of abstraction, so that programmers can concentrate on finding the solution to the problem rapidly, rather than on low-level details of data representation. The comparative ease of use and learning, improved portability and simplified debugging, modification and maintenance led to reliability and lower software costs.

These languages were mostly created following von Neumann constructs, which had sequential

procedural operations, and code executed using branches and loops. Although the syntax of these languages differed, they shared similar constructs and were more readable by programmers and users compared to assembly languages. Some languages were improved over time; newer languages took over the features of previous languages that were thought to be good and discarded the unwanted ones. New features were also added to the desired features to make the language more powerful.

COBOL (Common Business-Oriented Language), a business data processing language, is an example of a language constantly improving over the decades. The new COBOL 97 has included new features like object-oriented programming to keep up with current languages. One good possible reason for this is the fact that existing code is important, and developing a totally new language from scratch would be a lengthy process. This was also the rationale behind the developments of C and C++.

Then there were languages that evolved from other languages, like LISP 1 for artificial intelligence work, which strongly influenced languages like MATHLAB, LPL and PL/I. A language like BALM had the combined influence of ALGOL-60 and LISP 1.5. These third generation languages are less processor dependent than lower level languages. An advantage of languages like C++ is that they give the programmer a lot of control over how things are done in creating applications. This control, however, calls for more in-depth knowledge of how the operating system and computer work.

Third generation languages often followed procedural code, meaning the language performs functions defined in specific procedures that state how something is done. In comparison, most fourth generation languages are non-procedural. A disadvantage of fourth generation languages was that they were slow compared to compiled languages and they also lacked control. Powerful languages of the future will combine procedural code and non-procedural statements together with the flexibility of interactive screen applications, a powerful way of developing applications. Such languages specify what is to be accomplished but not how, and are not concerned with the detailed procedures needed to achieve the target, as in graphics packages, application generators and report generators. The need for this kind of language is in line with the minimum-work-and-skill, point-and-click concept, serving users of software applications designed using third generation languages unseen by the commercial user. Programmers whose primary interests are programming and computing use third generation languages, while people who use computers and programs to solve problems from other application areas are the main users of fourth generation languages.

Features quite clearly evident in fourth generation languages are that they must be user friendly, portable and independent of operating systems, usable by non-programmers, with intelligent default options about what the user wants, and must allow the user to obtain results fast using the minimum required code, generated bug-free from high-level expressions (employing database and dictionary management, which makes applications easy and quick to change); this was not possible using COBOL or PL/I. Standardization, however, in the early stages of evolution can inhibit creativity in developing powerful languages for the future. Examples of this generation of languages are IBM's ADRS2, APL, CSP and AS, PowerBuilder, and Access.

The 1990s saw the development of fifth generation languages like PROLOG, referring to systems used in the field of artificial intelligence, fuzzy logic and neural networks. This means computers can in the future have the ability to think for themselves and draw their own inferences using programmed information in large databases. Complex processes like understanding speech would appear to be trivial using these fast inferences and would make the software seem highly intelligent. In fact, these databases, programmed in a specialized area of study, would show expertise significantly greater than that of humans. Also, improvements in the fourth generation languages now carried features where users did not need any programming knowledge. Little or no coding and computer-aided design with graphics provide an easy-to-use product that can generate new applications.


The current trend of the Internet and the World Wide Web could cultivate a whole new breed of radical
programmers for the future, now exploring new boundaries with languages like HTML and Java.

24. Qt.2(2) 1999 5M Difference between Compilers and Interpreters

Compilers:
1. A compiler is a translation program that translates the instructions of a high level language into machine language.
2. A compiler merely translates the entire source program into an object program and is not involved in execution.
3. The object code is permanently saved for future use and is used every time the program is to be executed.
4. Compilers are complex programs.
5. They require large memory space.
6. Compilation is less time consuming overall.
7. The program runs faster, as no translation is required every time the code is executed, since it is precompiled.
8. The source code needs to be recompiled after any changes are made for them to take effect and to run the program.
9. Compilers are slow for debugging and testing.

Interpreters:
1. An interpreter is another type of translator used for translating a high level language into machine code.
2. The interpreter is involved in execution also.
3. No object code is saved for future use, because translation and execution alternate.
4. Interpreters are easy to write.
5. They do not require large memory space.
6. Interpretation is more time consuming.
7. Each statement requires translation every time the source code is executed.
8. Interpreters give a faster response to changes made in the source code, as they eliminate the need to compile and run the program.
9. Interpreters are good for faster debugging.
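
To make the contrast concrete, here is a minimal, purely illustrative sketch in Python (the source strings are invented for the example): the built-in compile() call plays the role of a compiler, translating the whole program once into a reusable code object, while the interpreter-style loop translates and executes one statement at a time on every run.

    # Sketch: compile-once-run-many versus interpret-line-by-line (illustrative only).

    source = "total = 0\nfor i in range(5):\n    total += i\nprint('compiled result:', total)"

    # Compiler-like approach: translate the entire program once, then execute the
    # saved code object as many times as needed without re-translating.
    code_object = compile(source, "<demo>", "exec")
    exec(code_object)   # repeated runs reuse code_object
    exec(code_object)

    # Interpreter-like approach: translate and execute one statement at a time;
    # every run repeats the translation work.
    statements = ["x = 10", "y = x * 2", "print('interpreted result:', y)"]
    for stmt in statements:
        exec(compile(stmt, "<demo>", "exec"))

The compiled code object can be executed repeatedly without re-translation, whereas the interpreter-style loop repeats the translation work each time, which mirrors points 3, 6 and 7 above.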

25 Qt.3(a) 2001 10M Enumerate the various systems software which you have come across, giving a line or two about its basic function.

Software can be classified as Systems software and Application software.

An operating system, which can be classified as part of systems software, is a series of programs used by
the computer to manage its operation as well as allow the users to interact with the computer through
various devices. It is a set of programs that acts as middle layer between application software and
computer hardware. Computers with different hardware architecture will have different operating sys-
tems.

Other systems software includes Microsoft Windows, Windows NT, Unix and the TCP/IP protocol stack.

Microsoft Windows: The first version of the Microsoft Windows OS was announced in 1983. Microsoft
allowed developers to produce software applications to run on its Windows OS without the need to
notify the company, and hence encouraged the whole industry to work with its product. Though the
original version of Windows was not very successful, MS Windows 3 became the world's best selling
16-bit GUI operating system.

Windows 95/98 and Windows NT are the most popular Microsoft Windows operating systems.

Windows 95: It is a 32-bit OS, which was released in August 1995. It was designed to have certain
critical features. These included:
a. A 32-bit architecture which provides for a multitasking environment
b. A friendly interface with 'one-click' access
c. Windows 95 is network ready, i.e. it is designed for easy access to network resources.
d. It is backward compatible with most Windows 3.1 / DOS applications

Windows NT: It is a 32-bit operating system. It represents the preferred platform for Intel's powerful
Pentium range of processors. It is known as an industry standard, mission critical OS.

Certain critical features of Windows NT are:


a. A stable multitasking environment
b. Enhanced security features
c. Increased memory
d. Network utilities
e. Portability – Windows NT can operate on microprocessors other than those designed for the PC.

Unix: UNIX was originally developed at Bell Laboratories as a private research project by a small group
of people starting in 1969. This group had experience with a number of different operating systems
research efforts in the 1970’s. The goals of the group were to design an operating system to satisfy the
following objectives:
Simple and elegant
Written in a high level language rather than assembly language
Allow re-use of code
Typical vendor operating systems of the time were extremely large and all written in assembly language.
UNIX had a relatively small amount of code written in assembly language (this is called the kernel) and
the remaining code for the operating system was written in a high level language called C.
The group worked primarily in the high level language in developing the operating system. As this
development continued, small changes were necessary in the kernel and the language to allow the oper-
ating system to be completed. Through this evolution the kernel and associated software were extended
until a complete operating system was written on top of the kernel in the language C.

TCP/IP Protocol Stack :


The next layer of software is the TCP/IP driver, which can be implemented in a variety of ways. In 1993,
16-bit TCP/IP was often implemented as another DOS TSR program loaded from the AUTOEXEC.BAT
file. Somewhat later this layer of software was implemented as a Windows dynamic link library (DLL) or
virtual device driver (VxD). The DLL and VxD implementations do not require any modification of the
boot files on the PC.
The TCP/IP driver, which implements TCP/IP functionality for the system, is referred to as the TCP/IP


protocol stack. This driver may be written to interface with a packet driver, as is the case with the
Trumpet Winsock package. The advantage of having a TCP/IP stack that interfaces with a packet driver
is that a particular TCP/IP stack from one vendor can be used with any network card for which an
associated packet driver is available. Thus, the packet driver specification eliminates the need for soft-
ware vendors to provide hardware drivers for every possible network card that might be used with their
TCP/IP stack. When using a packet driver with Windows 3.x applications, another DOS TSR referred to
as a virtual packet driver may be required to interface between the Windows-based TCP/IP protocol
stack and the DOS-based packet driver.
The TCP/IP protocol stack included with Windows 95 does not use a packet driver interface with the
network card. Instead, Windows 95 provides drivers for most popular network cards. Hardware ven-
dors now provide Windows 95 drivers for their network cards, just as they provided packet drivers in the
past. Although packet driver based TCP/IP stacks from other vendors can be used with Windows 95, it
is far preferable to use the 32-bit TCP/IP support integrated into Windows 95.

26 Qt.2(f) 1998 5M Write short note on : Office Automation

The term “Office Automation” is generally used to describe the use of computer systems to perform
office operations such as desktop application suites, groupware systems and workflow. An Office Automa-
tion Team is the domain team responsible for selecting product standards, defining standard configura-
tions, collaborating on component architecture design principles with the architecture team, and plan-
ning and executing projects for office automation.

Scope

The Office Automation Team will take ownership of issues related to Desktop Application Suites and
Groupware Systems. Although Workflow is considered a part of office automation, the Document Man-
agement Domain Team will cover it separately. Responsibility for some Advanced Features will be
shared with the Security Domain Team.

Desktop Application Suites

Desktop is a metaphor used to describe a graphical user interface that portrays an electronic file system.
Desktop application suites generally include:
· Word Processors to create, display, format, store, and print documents.
· Spreadsheets to create and manipulate multidimensional tables of values arranged in columns
and rows.
· Presentation Designers to create highly stylized images for slide shows and reports.
· Desktop Publishers to create professional quality printed documents using different typefaces,
various margins and justifications, and embedded illustrations and graphics.
· Desktop Database Support to collect limited amounts of information and organize it by fields,
records, and files.
· Web Browsers to locate and display World Wide Web content.
Groupware Systems
Groupware refers to any computer-related tool that improves the effectiveness of person-to-person
processes. Simply put, it is software that helps people work together. Groupware systems generally in-
clude:
· Email to transmit messages and files.
· Calendaring to record events and appointments in a fashion that allows groups of users to coor-
dinate their schedules.
· Faxing to transmit documents and pictures over telephone lines.

· Instant Messaging to allow immediate, text-based conversations.
· Desktop Audio/Video Conferencing to allow dynamic, on-demand sharing of information through
a virtual “face-to-face” meeting.
· Chat Services to provide a lightweight method of real-time communication between two or
more people interested in a specific topic.
· Presence Detection to enable one computer user to see whether another user is currently logged
on.
· White-boarding to allow multiple users to write or draw on a shared virtual tablet.
· Application Sharing to enable the user of one computer to take control of an application running
on another user’s computer.
· Collaborative Applications to integrate business logic with groupware technologies in order to
capture, categorize, search, and share employee resources in a way that makes sense for the organiza-
tion.

Workflow
Workflow is defined as a series of tasks within an organization to produce a final outcome. Sophisticated
applications allow workflows to be defined for different types of jobs. In each step of the process,
information is automatically routed to the next individual or group that is responsible for a specific task.
Once that task is complete, the system ensures that the individuals responsible for the next task are
notified and receive the data they need to execute their stage of the process. This continues until the final
outcome is achieved.
Although workflow applications are considered part of Office Automation, workflow itself is part of a
larger document management initiative. Therefore, the Document Management Domain Team will take
responsibility for it.
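
As a hedged illustration of this routing idea (the step names and the route_document function below are hypothetical, not taken from any particular workflow product), a workflow can be modelled as an ordered series of tasks, each routed to the group responsible for it:

    # Minimal workflow sketch: route a document through an ordered series of tasks.
    workflow_steps = [
        ("Draft", "Author"),
        ("Review", "Supervisor"),
        ("Approve", "Department Head"),
        ("Archive", "Records Group"),
    ]

    def route_document(document, steps):
        # Send the document to each responsible party in turn; once a task is
        # complete, the next responsible group is notified automatically.
        for task, responsible in steps:
            print(f"{task}: routed '{document}' to {responsible}")
        print(f"Final outcome achieved for '{document}'")

    route_document("Purchase Order 123", workflow_steps)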

Advanced Features
In any large environment, the responsibility for advanced features is generally shared among various
workgroups. Although the Security Domain Team is responsible for the overall security of the enter-
prise, the Office Automation team needs to work closely with them to implement secure technology.
Some examples of advanced features include:
· Anti-Virus Protection to identify and remove computer viruses.
· Anti-Spam Protection to identify and remove unsolicited commercial email.
· Open-Relay Protection to ensure that email servers within our environment are not used by
outside parties to route Spam.
· Enterprise Directories to provide a single repository of user accounts for authentication, access
control, directory lookups, and distribution lists.
· Digital Signatures and Encryption to allow for authentic and secure transfers of data.
Principle A - Minimize System Complexity
· Definition
o Office Automation systems will be designed to balance the benefits of an enterprise deployment
against the user’s need for flexibility.
· Benefits/Rationale
o Avoids duplication in system resources and support issues.
o Increases interoperability.
o Enterprise investments will be better managed (Increased ROI).
o Increases ease of use for end user.
o Solutions will meet end-user needs and expectations.
o Projects a consistent view of state government to the public.
o Leverages enterprise licensing.
· Implications


o In order to achieve the long-term benefits of enterprise systems, short-term migration invest-
ments will need to be made.
o Will be limiting the number of supported software versions, products, and configurations.
o Existing technologies need to be identified.
o Must understand user’s work process before applying technology.
o Requires coordination to implement enterprise-level technology.
o Enterprise-level solutions require enterprise-level budgets.

Principle B - Maximize Agency Interoperability


· Definition
o Office Automation systems will be deployed across the enterprise in a standardized fashion so that
complete interoperability among agencies exists.
· Benefits/Rationale
o Scheduling of meetings and resources across agencies.
o Instant messaging, desktop audio/video conferencing, white boarding, chat, application sharing, and
presence detection across agencies.
o Unified Inbox for email, fax, and telephone messages.
o Collaborative applications can be developed for the enterprise.
o Electronic information can be transferred without the need for conversion utilities.
o Makes information more accessible to users.
· Implications
o A single, standardized suite of desktop software across the enterprise.
o A single, standardized set of technologies deployed across the enterprise for groupware functional-
ity.
· Counter Arguments
o Agency business requirements supersede enterprise concerns when deploying technology solutions.
Principle C - Responsive Training
· Definition
o The overall investment in OA will include the responsive training of end users.
· Benefits
o More knowledgeable and efficient users.
o Maximize technology ROI through appropriate feature use.
o Reduces support burden.
· Implications
o Creation of an enterprise-level training plan for OA systems.
o Agencies may need to develop supplemental training and survey users to determine their level of
knowledge.
o May require higher training investments.
o Counter argument.

27 Qt.2(e) 2000 Qt.2(4) 1999 5M Difference between Character User Interface and Graphical User Interface

28 Qt.2(c) 1998 5M Write short note on : GUI

Graphical User Interface (GUI):
1. Generally used in multimedia.
2. Its strength lies in graphical control features such as toolbars, buttons or icons.
3. Used to create animations or pictures.
4. A variety of input devices are used to manipulate text and images as visually displayed.
5. Employs a graphical interface, e.g. web pages and image maps, which helps the user navigate sites. Examples: Windows, Mac.
6. More prone to be affected by viruses.

Character User Interface (CUI):
1. Generally used in programming languages.
2. Its strength lies in character control features such as textual elements or characters.
3. Used to create words and sentences.
4. Enables users to specify desired options through function keys.
5. Can create popup / pull-down menus; scrolling of text is possible. Examples: Unix, COBOL, FoxPro programming.
6. Less prone to be affected by viruses.

29 Qt.7(e) 2001 5M Write short note on : Security and Privacy of data

Data in an IT system is at risk from various sources—user errors and malicious and non-malicious
attacks. Accidents can occur and attackers can gain access to the system and disrupt services, render
systems useless, or alter, delete, or steal information.

An IT system may need protection for one or more of the following aspects of data:

· Confidentiality. The system contains information that requires protection from unauthorized
disclosure. Examples: Timed dissemination information (for example, crop report information),
personal information, and proprietary business information.
· Integrity. The system contains information that must be protected from unauthorized, unantici-
pated, or unintentional modification. Examples: Census information, economic indicators, or
financial transactions systems.
· Availability. The system contains information or provides services that must be available on a
timely basis to meet mission requirements or to avoid substantial losses. Examples: Systems
critical to safety, life support, and hurricane forecasting.
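
Integrity, for example, is commonly supported with cryptographic checksums. The following minimal sketch (an assumption for illustration, using only Python's standard hashlib module; the record contents are invented) shows how a stored digest can expose unauthorized or unintentional modification of a record:

    import hashlib

    def digest(data: bytes) -> str:
        # A SHA-256 digest acts as a fingerprint of the data.
        return hashlib.sha256(data).hexdigest()

    record = b"Census record: district=12, population=45210"
    stored_fingerprint = digest(record)          # saved when the record is written

    # Later, before trusting the record, recompute and compare the fingerprint.
    tampered = b"Census record: district=12, population=99999"
    print("original intact :", digest(record) == stored_fingerprint)    # True
    print("tampered intact :", digest(tampered) == stored_fingerprint)  # False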
Security administrators need to decide how much time, money, and effort needs to be spent in order to
develop the appropriate security policies and controls. Each organization should analyze its specific
needs and determine its resource and scheduling requirements and constraints. Computer systems, envi-
ronments, and organizational policies are different, making each computer security services and strategy
unique. However, the principles of good security remain the same, and this document focuses on those
principles.

Although a security strategy can save the organization valuable time and provide important reminders of
what needs to be done, security is not a one-time activity. It is an integral part of the system lifecycle.

30 Qt.2(a) 2001 4M Define : Multiuser

Multi-user systems as the name suggests are computers on which several people can work simulta-
neously. Mainframes, super computers and more powerful minicomputers fall under this category.

Such systems are based on the centralised processing concept: all data and information is stored in the
central computer. Various terminals are connected to the mainframe for inputting data. All users can
work on the system simultaneously. The central computer has to be powerful enough to handle the
transactions of the terminals connected to it. The terminals connected to the mainframe can be single
user systems (PCs / desktops).


31 Qt.2(b) 2001 4M Define : Batch Processing

Online and batch processing

Batch processing is a technique in which a number of similar transactions to be processed are grouped
(batched) together and then processed. Processing happens sequentially in one go without any interrup-
tions. Many financial systems use this kind of processing. The main reason for this is

- This method uses technological resources very effectively, specially where large number of trans-
actions are involved.
- Many calculations can actually be done centrally and outside business hours like interest calcula-
tion, cheque clearing etc.

How does batch processing work


Batch processing system work in the following steps:
· Forms/data/documents needed for processing are collected during business hours.
· At the end of the day, this data is collected from all users, sorted and arranged in batches.
· All the data is submitted to the computer for processing in sequential order.
· Once the processing is complete, the data is again sorted in a sequential order with all calculations
updated.
· Finally, the master database is updated with the results of processing.
· The new file will be available to all users for the next day’s operation.
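
A minimal sketch of these steps (the transaction format, account numbers and the update_master function are hypothetical, invented for illustration only) could look like this:

    # Batch processing sketch: collect, sort into a batch, process sequentially,
    # then update the master file in one go.

    transactions = [                       # collected during business hours
        {"account": "A102", "amount": -500},
        {"account": "A101", "amount": 1200},
        {"account": "A102", "amount": 300},
    ]

    master = {"A101": 10000, "A102": 5000}   # master database balances

    def update_master(master, batch):
        for txn in batch:                    # processed sequentially, without interruption
            master[txn["account"]] += txn["amount"]
        return master

    batch = sorted(transactions, key=lambda t: t["account"])  # sorted and arranged in a batch
    master = update_master(master, batch)                     # master updated after processing
    print(master)   # the new file is available for the next day's operation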

Online Processing: Batch processing is advantageous and economical when large volumes of data are to
be processed. It is not suitable for small volumes of data. Also, in cases where an immediate response is
required, batch processing is not desirable.

Online processing was developed to overcome the difficulties faced in batch processing. In many systems
it is necessary that transactions immediately update the master data file, as in a railway reservation sys-
tem. When a ticket is booked at one counter, the database must be updated immediately and others
should be able to see it online.

How Online Processing Works


· Data is fed to the central database
· All processing that is required happens instantaneously
· When a person updates a record, it is locked for other users till the first person completes his trans-
action. It is called record locking.

32 Qt.2(a) 2000 5M Difference between Online Processing and Batch Processing

Batch Processing:
- Applicable for high volume transactions, e.g. payroll / invoicing.
- Data is collected over time periods and processed in batches.
- No direct access to the system for the user.
- Files are online only when processing takes place.

Online Processing:
- Suitable for business control applications, e.g. railway reservation.
- Random data input as events occur.
- All users have direct access to the system.
- Files are always online.

33 Qt.7(a) 2000 10M Qt.7(c) 2001 5M What are the benefits of networking?

The main benefit of networking a business's computers and information is that business processes
are speeded up and inefficiencies are squeezed out. This is because information flows more freely.
Communication between employees, customers and suppliers can be improved. Everything happens
much faster. All kinds of documents - from emails to spreadsheets and product specs - can be ex-
changed rapidly between networked users. On an internal level, a central intranet enabled file sharing
and kept the cost of support staff down despite rapid expansion. The shared resource means that a
team can work on a single client much more efficiently. Well-built, well-used networks deliver mea-
surable benefits on productivity and efficiency. Computer resources, such as printers, storage, mo-
dems and other communications devices, can also be shared. Staff can remain in touch almost as
effectively in the field as working from a desk.

The advantages of wireless networking


Stay connected:
Using wireless LANs allows users to stay connected to their network for approximately one and
three-quarter more hours each day. A user with a laptop and a wireless connection can roam their
office building without losing their connection, or having to log in again on a new machine in a differ-
ent location.
Access to accurate information:
A further advantage of networks is greater accuracy in everyday tasks.
Spaghetti free:
Perhaps the most obvious advantage comes from removing the need for extensive cabling and patch-
ing.
Return On Investment:
ROI - perhaps the overriding consideration in IT spend.

34 Qt.3 2002 16M Networks can be classified based on various criteria such as ;
1. Geographical spread/ distance
2. Type of Switching
3. Topologies
4. Medium of Data Communication
Etc. Give 2 examples of Types of Networks for each of the classifications mentioned above.

35 Qt.2(h)2001 4M Define : Topology

36 Qt.7(b)2000 10M Describe any three types of networking topologies

37 Qt.3(b)1999 10M Similarity between Bus and Ring Topology

A computer network can be defined as a network of data processing nodes that are interconnected for
the purpose of data communication, or alternatively as a communications network in which the end
instruments are computers. A network is generally studied based on either of the following ways:


Network Design

Medium: Connected (wired) or Wireless
Geography: LAN, WAN, MAN
Topology: Bus, Star, Ring, Mesh
Strategies: Server Based, Client/Server, Peer-To-Peer
Protocol

Network Topologies
Choosing the best-fit topology for an Intranet is crucial, as rearranging computers from one topology to
another is difficult and expensive. A network configuration is also called a network topology. A network
topology is the shape or physical connectivity of the network. The network designer has three major
goals when establishing the topology of a network:
1. Provide the maximum possible reliability: provide alternative routes if a node fails and be able to
pinpoint the fault readily, deliver user data correctly (without errors) and recover from errors or lost data
in the network.
2. Route network traffic through the least cost path within the network: minimizing the actual
length of the channel between the components and providing the least expensive channel option for a
particular application.
3. Give the end users the best possible response time and throughput.

The topology of the network can be viewed in two ways:


1. The topology as seen from the layout of the cable, or the route followed by the electrical signals.
This is the physical topology.
2. The connections between nodes as seen by data traveling from one node to another - reflects the
network’s function, use, or implementation without regard to the physical interconnection of network
elements. This is the logical topology, and may be different from the physical topology.
Common patterns for connecting computers include the star and bus topologies.

Bus topology

The bus topology is the simplest network configura-


tion. It uses a single transmission medium called a
bus to connect computers together. Coaxial cable is
often used to connect computers in a bus topology.
It often serves as the backbone for a network. The
cable, in most cases, is not one length, but many
short strands that use T-connectors to join the ends.
T-connectors allow the cable to branch off in a third
direction to enable a new computer to be connected
to the network.

Special hardware has to be used to terminate both ends of the coaxial cable such that a signal traveling
to the end of the bus would come back as a repeat data transmission. Since a bus topology network uses
a minimum amount of wire and minimum special hardware, it is inexpensive and relatively easy to install.

In some instances, such as in classrooms or labs, a bus will connect small workgroups. Since a hub is not
required in a bus topology, the set-up cost is relatively low. One can simply connect a cable and T-
connector from one computer to the next and eventually terminate the cable at both ends. The number of
computers attached to the bus is limited, as the signal loses strength when it travels along the cable. If
more computers have to be added to the network, a repeater must be used to strengthen the signal at
fixed locations along the bus. The problem with bus topology is that if the cable breaks at any point, the
computers on each side will lose their termination. The loss of termination causes the signals to reflect and
corrupt data on the bus. Moreover, a bad network card may produce noisy signals on the bus, which can
cause the entire network to function improperly. Bus networks are simple, easy to use, and reliable.
Repeaters can be used to boost signal and extend bus. Heavy network traffic can slow a bus consider-
ably. Each connection weakens the signal, causing distortion among too many connections.
Ring topology

In a ring topology, the network has no end connection. It forms a


continuous ring through which data travels from one node to an-
other. Ring topology allows more computers to be connected to
the network than do the other two topologies. Each node in the
network is able to purify and amplify the data signal before send-
ing it to the next node. Therefore, ring topology introduces less
signal loss as data travels along the path. Ring-topology net-
work is often used to cover a larger geographic location where
implementation of star topology is difficult. The problem with ring
topology is that a break anywhere in the ring will cause network
communications to stop. A backup signal path may be implemented
in this case to prevent the network from going down. Another
drawback of ring topology is that users may access the data circu-
lating around the ring when it passes through his or her computer.

Star Network
A star Network is a LAN in which all nodes are directly connected
to a common central computer. Every workstation is indirectly
connected to every other through the central computer. In some
Star networks, the central computer can also operate as a worksta-
tion.

The star network topology works well when the workstations are at scattered points. It is easy to add
or remove workstations. If the workstations are reasonably close to the vertices of a convex polygon and
the system requirements are modest, the ring network topology may serve the intended purpose at
lower cost than the star network topology. If the workstations lie nearly along a straight line, the bus
network topology may be best.
In a star network, a cable failure will isolate the workstation that is linked to the central computer,
while all other workstations will continue to function normally, except that they will not be able to
communicate with the isolated workstation. If any of the workstations goes down, the other
workstations won't be affected, but if the central computer goes down the entire network will suffer
degraded performance or complete failure. The star topology can have a number of different transmis-
sion mechanisms, depending on the nature of the central hub.
·Broadcast Star Network: The hub receives and resends the signal to all of the nodes on a network.
·Switched Star Network: The hub sends the message to only the destination node.
·Active Hub (Multi-port Repeater): Regenerates the electric signal and sends it to all the nodes con-


nected to the hub.


·Passive Hub: Does not regenerate the signal; simply passes it along to all the nodes connected to the
hub.
·Hybrid Star Network: Placing another star hub where a client node might otherwise go.
Star networks are easy to modify and one can add new nodes without disturbing the rest of the
network. Intelligent hubs provide for central monitoring and managing. Often there are facilities to
use several different cable types with hubs.
Mesh Topology
The mesh topology has been used more frequently in recent years.
Its primary attraction is its relative immunity to bottlenecks and
channel/node failures. Due to the multiplicity of paths between
nodes, traffic can easily be routed around failed or busy nodes. A
mesh topology is reliable and offers redundancy.
If one node can no longer operate, all the rest can still communicate with each other, directly or
through one or more intermediate nodes. It works well when the nodes are located at scattered points
that do not lie on a common point. Given that this approach is very expensive in comparison to other
topologies like star and ring network, some users will still prefer the reliability of the mesh network to
that of others (especially for networks that only have a few nodes that need to be connected together).
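
The expense comes from the number of links: a full mesh of n nodes needs n(n-1)/2 point-to-point links, as the small illustrative snippet below shows (the node counts are arbitrary examples):

    def mesh_links(n):
        # A full mesh of n nodes needs one link for every pair of nodes: n(n-1)/2.
        return n * (n - 1) // 2

    for nodes in (4, 10, 50):
        print(nodes, "nodes ->", mesh_links(nodes), "links")
    # 4 nodes -> 6 links, 10 nodes -> 45 links, 50 nodes -> 1225 links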
A range of different topologies is common, with their properties summarized below.

Bus - Reliability: a cable break can segment the network or prevent transmission. Cost: single cable, low cost. Response: shared medium limits performance.

Star - Reliability: easy to troubleshoot; loss of one node does not affect others. Cost: cost of wiring and central hub if required. Response: sharing or switching possible.

Ring - Reliability: one failure destroys the network. Cost: single cable, with repeaters at each station. Response: single medium limits performance.

Mesh - Reliability: quite immune to individual cable breaks. Cost: wiring expensive. Response: alternative routes available.

Network Size / Geography

Local Area Networks

A Local Area Network (LAN) is a group of computers and associated devices that share a common
communication line and typically share the resources of a single processor or server within a small
Geographic Area. Specifically it has the properties:
·A limited-distance (typically under a few kilometers)
·High-speed network (typically 4 to 100 MBs)
·Usually the server has application and data storage that are shared in common by multiple computers.
·Supports many computers (typically two to thousands).

·A very low error rate
·Users can order printing and other services as needed through applications run on the LAN server.
·Owned by a single organization
·Most commonly uses ring or bus topologies
·A user can share files with others on the LAN server; read and write access is maintained by a LAN administrator.
The message transfer is managed by a transport protocol such as TCP/IP and IPX. The physical
transmission of data is performed by the access method (Ethernet), which is implemented in the
network adapters that are plugged into the machines. The actual communications path is the cable
(twisted pair, coax, optical fiber) that interconnects each network adapter.

Wide Area Networks

A Wide Area Network (WAN) is a communications network that covers a wide geographic area, such
as state or country. Contrast this to a LAN (local area network), which is contained within a building
or complex, and a MAN (metropolitan area network), which generally covers a city or suburb. The
WAN can span any distance and is usually provided by a public carrier. It is two or more LANs
connected together and covering a wide geographical area. For example, an organization will have a
LAN at each of its offices, interconnected through a WAN. The LANs are connected using devices such
as bridges, routers or gateways. A wide area network may be privately owned or rented.

Metropolitan Area Network


A MAN is a network that interconnects users with computer resources in a geographic area or region
larger than that covered by even a large LAN but smaller than the area covered by a WAN. The term is
applied to the interconnection of networks in a city into a single larger network. It is also used to mean
the interconnection of several LANs by bridging them with backbone lines. A MAN typically covers
an area of between 5 and 50 km in diameter.

Strategy In Network Designs:


Every network has a strategy, or way of coordinating the sharing of information and resources.
Networks are often broadly classified in terms of the typical communication patterns that one may
find on them. The most common are:

1.Server-based (client/server) - Contain clients and the servers that support them
2.Peer (peer-to-peer) - Contain only clients, no servers, and use network to share resources among
individual peers.
3.Client/server that also contains peers sharing resources.

Server Based Network:


Computers are used to store the shared information, and all the other computers reference that
information over the network. The processing power is centralized in one large computer, usually a
mainframe. The nodes connected to this host computer are terminals with little or no processing
capabilities. One advantage of Server based network system is the centralized location and control of
technical personnel, software and data.

Servers may be classified as:


1.File Servers - Offer services that allow network users to share files and provide central file manage-
ment services.
2.Print Servers - manage and control printing on a network, allowing users to share printers.
3.Application Servers - allow client machines to access and use extra computing power and expensive


software applications that reside on the server.


4.Message Servers - data can pass between users on a network in the form of graphics, digital video
or audio, as well as text and binary data (for example: e-mail).
5.Database Servers -provide a network with powerful database capabilities that are available for use
on relatively weaker client machines.
Peer-To-Peer
Nodes can act as both servers and clients. A typical configuration is a bus network. Peer networks are
defined by a lack of central control over the network. Users share resources, disk space, and equipment.
The users control resource sharing, and so there may be lower security levels and no trained
administrator. Since there is no reliance on other computers (servers) for their operation, such networks
are often more tolerant of single points of failure. Peer networks are easy to install and inexpensive.
However, they place an additional load on individual PCs because of resource sharing, and the lack of
central organization may make data hard to find, back up or archive.
Client/Server Networks
They can combine the advantages and disadvantages of both of the above types. These network
architectures can be compared with the pre-network host-based model. Terminals could connect only
to the mainframe, and never to each other. In a client-server environment, the clients can do some
processing on their own as well, without taxing the server. In a peer-to-peer environment, clients can
be connected to one another. Client/Server networks offer a single strong central security point, with
central file storage, which provides multi-user capability, easy backup and supports a large network
easily. It also gives the ability to pool the available hardware and software, lowering overall costs.
Optimized dedicated servers can make networks run faster. Dedicated server hardware is usually
expensive, and the server must run often-expensive network operating system software. A dedicated
network administrator is usually required.
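
The client/server pattern can be sketched with Python's standard socket module (the port number and messages are hypothetical, and a real service would add error handling, authentication and a proper protocol):

    import socket, threading, time

    HOST, PORT = "127.0.0.1", 5050        # hypothetical address for the demo

    def server():
        # The server centralizes a resource (here, just an echo-style service) and
        # answers requests from any client on the network.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.bind((HOST, PORT))
            srv.listen()
            conn, _ = srv.accept()
            with conn:
                data = conn.recv(1024)
                conn.sendall(b"server processed: " + data)

    threading.Thread(target=server, daemon=True).start()
    time.sleep(0.5)   # give the demo server a moment to start listening

    # The client does its own processing locally and contacts the server only
    # when it needs the shared resource.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.sendall(b"print this report")
        print(cli.recv(1024).decode())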

Medium of Data communication

Cable type

Cable is what physically connects network devices together, serving as the conduit for information
traveling from one computing device to another. The type of cable you choose for your network will
be dictated in part by the network's topology, size and media access method. Small networks may
employ only a single cable type, whereas large networks tend to use a combination.
Coaxial Cable
Coaxial cable includes a copper wire surrounded by insulation, a
secondary conductor that acts as a ground, and a plastic outside
covering. Because of coaxial cable's two layers of shielding, it is
relatively immune to electronic noise, such as motors, and can
thus transmit data packets long distances. Coaxial cable is a good
choice for running the lengths of buildings (in a bus topology) as
a network backbone.
Coaxial cable, with BNC end connector, and T piece.

Local area networks (LANs) primarily use two sizes of coaxial cable, commonly referred to as thick
and thin. Thick coaxial cable can extend longer distances than thin and was a popular backbone (bus)
cable in the 1970s and 1980s. However, thick is more expensive than thin and difficult to install.
Today, thin (which looks similar to a cable television connection) is used more frequently than thick.

Twisted-Pair Cable
Twisted-pair cable consists of two insulated wires that are
twisted around each other and covered with a plastic casing.
It is available in two varieties, unshielded and shielded.
Unshielded twisted-pair (UTP) is similar in appearance (see
Figure 1.4) to the wire used for telephone systems. UTP ca-
bling wire is grouped into categories, numbered 1-5. The
higher the category rating, the more tightly the wires are
twisted, allowing faster data transmission without cross talk.

Since many buildings are pre-wired (or have been retrofitted) with extra UTP cables, and because UTP
is inexpensive and easy to install, it has become a very popular network media over the last few years.
Shielded twisted-pair cable (STP) adds a layer of shielding to UTP. Although STP is less affected by
noise interference than UTP and can transmit data further, it is more expensive and more difficult to
install.

Fiber-Optic Cable
Fiber-optic cable is constructed of flexible glass and plas-
tic. It transmits information via photons, or light. It is sig-
nificantly smaller, which could be half the diameter of a
human hair. Although limited in the distance they can carry
information, fiber-optic cables have several advantages. More resistant
to electronic interference than the other media types, fi-
ber-optic is ideal for environments with a considerable
amount of noise (electrical interference).

Furthermore, since fiber-optic cable can transmit signals further than coaxial and twisted-pair, more and
more educational institutions are installing it as a backbone in large facilities and between buildings.
They are much lighter and less expensive. The cost of installing and maintaining fiber-optic cable re-
mains too high, however, for it to be a viable network media connection for classroom computers.

Microwave
In case of the microwave communication channel, the medium is not a solid substance but rather the air
itself. It uses a high radio frequency wave that travels in straight lines through the air. Because the waves
cannot bend with the curvature of the earth, they can be transmitted only over short distances.

Satellite
It uses satellites orbiting above the earth as microwave relay station. Satellites rotate at a precise point
and speed above the earth. This makes them appear stationary, so they can amplify and relay microwave
signals from one transmitter on the ground to another. Thus they can be used to send large volumes of
data. The major drawback is that bad weather can interrupt the flow of data.

Signal Transmission Mechanisms.


The way data are delivered through networks requires solutions to several problems
·Methods for carrying multiple data streams on common media.
·Methods for switching data through paths on the network.
·Methods for determining the path to be used.
Multiplexing
LANs generally operate in baseband mode, which means that a given cable is carrying a single signal of
data; broadband media are being considered for LANs as well.

Time Division Multiplexing.


To enable many data streams to share a high-bandwidth me-
dium, a technique called multiplexing is employed. The signal-
carrying capacity of the medium is divided into time slots, with
a time slot assigned to each signal, a technique called Time-
Division Multiplexing (TDM), illustrated in Figure . Because
the sending and receiving devices are synchronized to recog-
nize the same time slots, the receiver can identify each data
stream and re-create the original signals.
The sending device, which places data into the time slots, is called a multiplexer or mux. The receiving
device is called a demultiplexer or demux. TDM can be inefficient. If a data stream falls silent, its time
slots are not used and the media bandwidth is underutilized.
A more advanced technique is statistical time-division multiplexing. Time slots are still used, but some
data streams are allocated more time slots than others. An idle channel, D, is allocated no time slots at
all. A device that performs statistical TDM often is called a stat-MUX.
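
A hedged sketch of plain round-robin TDM (the stream names and data units are invented for the example) shows how each stream gets a fixed slot per round and how an idle slot is simply wasted:

    from itertools import zip_longest

    # Three data streams share one medium; each gets a time slot in strict rotation.
    streams = {
        "A": ["a1", "a2", "a3"],
        "B": ["b1", "b2"],
        "C": ["c1", "c2", "c3", "c4"],
    }

    def tdm_multiplex(streams):
        frames = []
        for slot_group in zip_longest(*streams.values()):   # one slot per stream per round
            for owner, unit in zip(streams.keys(), slot_group):
                # In plain TDM an empty slot is still transmitted (and wasted).
                frames.append((owner, unit if unit is not None else "<idle>"))
        return frames

    for owner, unit in tdm_multiplex(streams):
        print(f"slot for {owner}: {unit}")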

Switching Data

On an internetwork, data units must be switched through the various intermediate devices until they are
delivered to their destination. Two contrasting methods of switching data are commonly used: Circuit
switching and packet switching. Both are used in some form by protocols in common use.
Circuit and Packet Switching.
Circuit Switching
When two devices negotiate the start of a dialogue, they
establish a path, called a circuit, through the network,
along with a dedicated bandwidth through the circuit (see
Figure ). After establishing the circuit, all data for the dia-
logue flow through that circuit. The chief disadvantage
of circuit switching is that when communication takes
place at less than the assigned circuit capacity, bandwidth
is wasted. Also, communicating devices can't take advan-
tage of other, less busy paths through the network unless
the circuit is reconfigured.

Circuit switching does not necessarily mean that a continuous, physical pathway exists for the sole use
of the circuit. The message stream may be multiplexed with other message streams in a broadband
circuit. In fact, sharing of media is the more likely case with modern telecommunications. The appear-
ance to the end devices, however, is that the network has configured a circuit dedicated to their use.
End devices benefit greatly from circuit switching. Since the path is pre-established, data travel through
the network with little processing in transit. And, because multi-part messages travel sequentially through
the same path, message segments arrive in order and little effort is required to reconstruct the original
message.

Packet Switching

Packet switching takes a different and generally more efficient approach to switching data through
networks. Messages are broken into sections called packets, which are routed individually through the
network (see Figure ).


At the receiving device, the packets are reassembled to construct the complete message. Messages are
divided into packets to ensure that large messages do not monopolize the network. Packets from several
messages can be multiplexed through the same communication channel. Thus, packet switching enables
devices to share the total network bandwidth efficiently.
Two variations of packet switching may be employed:

·Datagram services treat each packet as an independent message. The packets, also called datagrams,
are routed through the network using the most efficient route currently available, enabling the switches
to bypass busy segments and use underutilized segments. Datagrams frequently are employed on LAN's
and network layer protocols are responsible for routing the datagrams to the appropriate destination.
Datagram service is called unreliable, not because it is inherently flawed but because it does not guaran-
tee delivery of data. Recovery of errors is left to upper-layer protocols. Also, if several datagrams are
required to construct a complete message, upper-layer protocols are responsible for reassembling the
datagrams in order. Protocols that provide datagram service are called connectionless protocols.

Virtual circuits establish a formal connection between two devices, giving the appearance of a dedicated
circuit between the devices. When the connection is established, issues such as message size, buffer
capacities, and network paths are considered and mutually agreeable communication parameters are
selected. A virtual circuit defines a connection, a communication path through the network, and remains
in effect as the devices remain in communication. This path functions as a logical connection between the
devices. When communication is over, a formal procedure releases the virtual circuit. Because virtual
circuit service guarantees delivery of data, it provides reliable delivery service. Upper-layer protocols
need not be concerned with error detection and recovery. Protocols associated with virtual circuits are
called connection-oriented.
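
The datagram idea can be illustrated with a short, assumption-laden sketch (the packet format and helper names are hypothetical): the message is broken into numbered packets, which may arrive out of order and are reassembled by sequence number at the destination.

    import random

    def packetize(message, size=8):
        # Break the message into numbered packets so no message monopolizes the network.
        chunks = [message[i:i + size] for i in range(0, len(message), size)]
        return [{"seq": n, "payload": chunk} for n, chunk in enumerate(chunks)]

    def reassemble(packets):
        # Upper layers put the datagrams back in order at the receiving device.
        return "".join(p["payload"] for p in sorted(packets, key=lambda p: p["seq"]))

    packets = packetize("Packet switching shares the network bandwidth efficiently.")
    random.shuffle(packets)   # datagrams may take different routes and arrive out of order
    print(reassemble(packets))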

38 Qt.2(b) 2002 3M Qt.2(d) 2001 4M Define : Time Sharing

Timesharing
Timesharing is a form of interactive computing in which many different users are able to use a single
computer simultaneously. Each user communicates with the computer through a terminal or through a
microcomputer. The terminals may be wired directly to the computer, or they may be connected to the
computer over telephone lines or a microwave circuit. Thus a timesharing terminal can be located far
away, perhaps several hundred miles away, from its host computer.

Since a computer operates much faster than a human sitting at a terminal, one computer can support a
large number of terminals at essentially the same time. Therefore each user will be unaware of the
presence of other users and will seem to have the entire computer at his or her own disposal.
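
In essence the operating system cycles rapidly among the users, giving each a short time slice, as in this hypothetical round-robin sketch (the job names and slice counts are invented):

    from collections import deque

    # Each user job needs some number of time slices; the CPU serves them round-robin.
    jobs = deque([("user1", 3), ("user2", 1), ("user3", 2)])

    while jobs:
        user, remaining = jobs.popleft()
        print(f"time slice given to {user}")        # the user 'sees' a responsive computer
        if remaining - 1 > 0:
            jobs.append((user, remaining - 1))      # unfinished jobs rejoin the queue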

Timesharing is best suited for processing relatively simple jobs that do not require extensive data trans-
mission or large amounts of computer time. Many of the computer applications that arise in schools and
commercial offices have these characteristics. Such applications can be processed quickly, easily and at
minimum expense using timesharing.

Certain Features of batch processing and timesharing can be combined if desired. For example , it is
possible to enter a set of input data directly from a terminal and then proceed to process the data in the
batch mode, as described earlier. Another possibility is to use a card reader (batch processing) to enter a
program and a set of data and then edit (modify) the program and process the data in the timesharing
mode. Such combinations have become increasingly common in recent years.

39 Qt 6(a) 1998 20M Describe the benefits of networking with examples. Also briefly discuss
topologies of networking.


The main benefit of networking a business's computers and information is that business processes are
speeded up and inefficiencies are squeezed out. This is because information flows more freely. Commu-
nication between employees, customers and suppliers can be improved. Everything happens much faster.
All kinds of documents - from emails to spreadsheets and product specs - can be exchanged rapidly
between networked users. On an internal level, a central intranet enabled file sharing and kept the cost of
support staff down despite rapid expansion. The shared resource means that a team can work on a single
client much more efficiently. Well-built, well-used networks deliver measurable benefits on productivity
and efficiency. Computer resources, such as printers, storage, modems and other communications de-
vices, can also be shared. Staff can remain in touch almost as effectively in the field as working from a
desk.
The advantages of wireless networking
Stay connected:
Using wireless LANs allows users to stay connected to their network for approximately one and three-
quarter more hours each day. A user with a laptop and a wireless connection can roam their office
building without losing their connection, or having to log in again on a new machine in a different
location.
Access to accurate information:
A further advantage of networks is greater accuracy in everyday tasks.
Spaghetti free:
Perhaps the most obvious advantage comes from removing the need for extensive cabling and patching.
Return On Investment:
ROI - perhaps the overriding consideration in IT spend.

40 Qt.5(a) 2001 Write a brief note on the various options available for a company to establish network/connectivity across its offices and branches.

Corporates in India first started setting up LANs to connect their computers within the office. But as
these companies expanded, they began looking at options for connecting their various offices spread
across different locations. This gave rise to the deployment of WANs. Then as enterprises started apply-
ing technology to solve business problems by deploying enterprise applications such as ERP, supply
chain management (SCM) and customer relationship management (CRM), the need for deploying cost-
effective solutions in different environments or industry segments became imperative. While there are
different technology options for connectivity, each technology has its own advantages and disadvan-
tages. Here’s a look at the different options.

Leased lines

Leased lines are one of the best options. A leased line offers fixed bandwidth, and one only needs to pay
a pre-determined price for it irrespective of the amount of bandwidth one uses. This is well suited for
organizations in cities, for Internet connectivity and inter-branch connectivity for large organizations,
and data-sensitive environments such as banking and insurance. A leased line is the best option to con-
nect long-distance offices because of the existing infrastructure of telephone companies. This connectiv-
ity option is suited for offices where there is a need for multi-connectivity within an office. One of the
most widely used applications of leased lines is having a secure dedicated data circuit between two
locations via a private line, which can be used to transmit data at a constant speed.
·Disadvantages: One of the major disadvantages is that it takes weeks to allocate a connection. Addi-
tionally, since a leased line depends on the existing infrastructure of telephone companies, getting a
leased line in remote locations is difficult.
VSATs

India’s huge geographical spread, low tele-density and strong demand for reliable communication infra-
structure has made VSATs a good choice. Business houses which have a distributed environment with a
number of offices in difficult-to-access areas and where telecommunication services have not penetrated
yet normally look at this connectivity option. VSAT technology represents a cost-effective solution for
users seeking an independent communication network connecting a large number of geographically
dispersed sites. Additionally, VSATs give an organization the power to quickly roll out a network. There
are numerous examples that showcase how VSATs have been used by organizations to roll out their
business plans speedily. A case in point is the National Stock Exchange (NSE), which decided to use
VSATs for reaching the length and breadth of the country. Today, NSE’s network has grown to a mas-
sive 3,000 sites, making it Asia’s largest VSAT network. L&T’s engineering and construction division is
another organization, which uses VSATs in an innovative way. Most of the projects that are in far-off
places where there are no telephone connections take the help of VSATs to establish connectivity with
its various business partners.
Industry sectors such as stock exchanges and ATMs, which can’t sacrifice uptime, are prime candidates
for the usage of VSATs. Currently, VSATs support a whole set of applications such as ATMs, distance
learning, online lottery, rural connectivity, corporate training, bandwidth monitoring, bandwidth-on-
demand and Internet access.
·Conclusion: Ideally, VSATs should be used where information must be exchanged between locations
that are too far apart to be reliably linked using other communication networks.
VPNs

Virtual Private Networks (VPNs) have attracted the attention of many organizations looking to provide
remote connectivity at reduced costs. The key feature of a VPN is its ability to use public networks like
the Internet rather than private leased lines, and provide restricted access networks without compromis-
ing on security. A VPN can provide remote-access-client-connection, LAN-to-LAN networking, and
controlled access within an intranet. VPNs are the cornerstone of new-world networking services. They
manifest a technological breakthrough that transforms the industry and revolutionizes services. They
deliver enterprise-scale connectivity from a remote location, over the public Internet, with the same
policies enjoyed in a private network.
Organizations that are looking to provide connectivity between different locations can look at VPNs as
a significant cost-effective option. With VPNs, an organization only needs a dedicated connection to the
local service provider. Another way VPNs reduce costs is by reducing the need for long-distance tele-
phone calls for remote access. This is because VPN clients need to only call into the nearest service
provider’s access point.
Leased lines are being replaced by VPN solutions provided by ISPs such as Sify. ISPs provide security
and privacy for individual customers who run on their larger data pipes through the use of technologies.
ISPs also commit service level agreements, which include packet loss, latency and uptime. VPNs are
also engineered and provided as a managed service by ISPs, thereby removing the need for the customer
to retain trained professionals for network set-up and management. This adds up to a compelling solu-
tion for most Corporates—the best of price effectiveness and managed network solutions. A VPN has
another advantage when compared to private networks—it uses the public Internet infrastructure. This
gives it the ability to offer far wider coverage than private networks. A VPN can support the same
intranet/extranet services as a traditional WAN. But the real reason why VPNs have grown in popularity
is their ability to support remote access services.
Security is another big plus in VPNs. In traditional private networks, data security relies solely on the
telecommunications service provider. But if a corporate installs a VPN it does not have to rely on the
physical security practices of the service provider. Data that goes through a VPN is fully secure. Most
VPN technologies implement strong encryption codes, so data cannot be directly viewed using network
sniffers.


·Limitations: There are some concerns which need to be taken care of by service providers. Four
concerns regarding VPN solutions are often raised:
1.VPNs require an in-depth understanding of public network security issues and proper deployment of
precautions.
2.The availability and performance of an organisation's wide-area VPN (over the Internet in particular)
depends on factors largely outside their control. The Quality of Service needs to be carefully examined.
3.VPN technologies from different vendors may not work well together due to immature standards.
4.VPNs need to accommodate protocols other than IP and existing (legacy) internal network technol-
ogy.
These four factors comprise the 'hidden costs' of a VPN solution. While VPN advocates cite cost savings
as the primary advantage of this technology, detractors cite hidden costs as the primary disadvantage of
VPNs.

·Conclusion: VPN solutions are widely used to provide cost-effective long-distance connectivity with
excellent security. This is a good solution for companies that have offices in India and want to connect
their offices abroad. Most financial institutions and banks use VPN solutions as they want to maintain
confidentiality of information. This mode of connectivity is also popular among manufacturing compa-
nies.
Wireless LANs

Lately, wireless LANs (WLANs) have attracted a lot of attention due to the various advantages the
technology provides with respect to mobility. A WLAN can be defined as a type of LAN that uses high
frequency radio waves rather than wires to communicate and transmit data among nodes. A look at the
benefits shows the immense potential of the technology. For one, it enables users to move about unhindered,
and it allows networks to reach places where wires are difficult to deploy.
While the initial investment required for WLAN hardware might be higher than the cost of wired LAN
hardware, overall installation expenses and life cycle costs will be significantly lower. Further, there is no
problem of interoperability. WLANs can be configured in a variety of topologies to meet the needs of
specific applications and installations. Due to the various benefits, today WLANs can be seen almost in
every industry where organizations want to add extensions to their wired networks or provide wireless
connectivity. The future is undoubtedly bright for this technology as devices like notebooks, palmtops,
Tablet PCs and similar newer devices will all be used without wired power connections. This makes it
necessary for the network to be available anywhere, without the restriction of wires. So devices working
without wired power connections would demand wireless Internet services. Due to this even mobile
operators are providing Internet connectivity through new generation handsets. While the initial invest-
ment is high, it offers good bandwidth for enterprises at low recurring costs. The most quantifiable
benefit of WLAN usage is time savings.
Wireless applications will also be in high demand in sectors like retail, where real-time information
availability and immediate validation of data are critical.
·Disadvantages: The main disadvantage of this technology is the high cost of equipment, which can
come down as deployments of WLANs increase. Additionally, it has certain limitations and is not advised
for streaming audio/video content or viewing extremely graphics-intensive websites.
Free Space Optics

Free Space Optics (FSO) is a line-of-sight technology which uses lasers, or light pulses, to provide
high-bandwidth connections. It has the ability to link large buildings within a couple of days. Since it uses air
and not fiber as the transport medium, the cost savings are considerable. An added benefit is that since
FSO is a non-radio frequency technology, there are no licenses to be obtained for deploying it.

For FSO to work, all one needs is an optical transceiver with a laser transmitter and a receiver to provide
bi-directional capability. Each FSO system uses a high-power optical source, plus a lens that transmits
light through the atmosphere to another lens receiving data. In addition, FSO allows service providers to
deliver flexible bandwidth requirements as needed by a customer. For instance, a company may require
a higher amount of bandwidth for a few days during a conference. FSO can make this happen.
Currently, FSO is seen as a complementing technology rather than one that will replace others, and is heralded
as the best technology to cover niche areas.
·Disadvantages: FSO can only provide connectivity up to a maximum of four kilometers, so if a com-
pany wants to link its offices spread over huge distances a VSAT would be more relevant. Also, being a
line-of-sight technology, interference of any kind can pose problems. Factors like rain and fog can dis-
rupt a signal. But for a company looking at providing connectivity between different locations within a
few kilometers of each other, this technology is an ideal option.
Connectivity options today
·Power Users: Without data crunching they become irrelevant. Examples: Investment analysts, finance
professionals, business analysts, stockbrokers, top executives. They need more robust connectivity.
·Action Users: Medium data needs. Field action is more than data crunching. Examples: Sales, opera-
tions staff, field offices. They require reliable but not robust connectivity.
·Occasional Users: Low data needs. Examples: Dealers, distributors, customers, suppliers, prospects.
They require a channel that is generally available.

How various connectivity options compare:

Connectivity option - Relevance to category of users - Reliability/Security

Leased line - Action users / WAN connectivity - No guaranteed uptime. Secure.
Dedicated VPNs - Power/Action users / WAN connectivity - Guaranteed uptime. Secure.
PAMA VSAT - Power users / WAN - Guaranteed uptime of 99.5 percent. Secure.
TDMA broadband VSAT - Action users, limited Power users / WAN - Guaranteed uptime of 99.5 percent. Secure.
Microwave - Power users / MAN, not WAN - Frequency interference. Secure.
2.5G/3G - Occasional users - No guaranteed uptime. Not secure on its own.
Fiber - Power users - Secure.
WLAN - Power/Action users / LAN applications - Not secure on its own.
Dial-up Internet - Occasional users, backup to Action connectivity / WAN - No guaranteed uptime. Not secure on its own.


41 Qt.2(d) 2002 3M Define : Protocol


42 Qt.2(e) 1998 5M Write short note on : Protocols in Communication

Protocols are a collection of rules and procedures that control the transmission of data between devices.
They enable an intelligible exchange of data between otherwise incompatible devices. Protocols are
intended to minimize transmission errors; they are a set of rules for sending and receiving electronic
messages.

Protocols split data files into small blocks or packets in order to transmit files over a network. Protocols
use mathematical techniques (algorithms), such as checksums, to verify data packets and ensure proper receipt.
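
As a rough illustration of these two ideas (splitting data into packets and checking each packet on receipt), the following Python sketch is purely hypothetical and not tied to any real protocol: it breaks a message into fixed-size blocks, attaches a checksum to each, and verifies the checksums on the receiving side.

    import hashlib

    PACKET_SIZE = 64  # bytes of payload per packet (an arbitrary choice for this example)

    def make_packets(data: bytes):
        """Split data into blocks and attach a checksum to each block."""
        packets = []
        for seq, start in enumerate(range(0, len(data), PACKET_SIZE)):
            payload = data[start:start + PACKET_SIZE]
            packets.append({"seq": seq,
                            "payload": payload,
                            "checksum": hashlib.md5(payload).hexdigest()})
        return packets

    def receive(packets):
        """Verify each packet's checksum and reassemble the original data."""
        received = []
        for packet in sorted(packets, key=lambda p: p["seq"]):
            if hashlib.md5(packet["payload"]).hexdigest() != packet["checksum"]:
                raise ValueError(f"Packet {packet['seq']} corrupted; retransmission needed")
            received.append(packet["payload"])
        return b"".join(received)

    message = b"Protocols split data files into small blocks or packets." * 3
    assert receive(make_packets(message)) == message
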

OSI layer       Internet RM layer    Internet protocol examples
Application     Application          SMTP (simple mail transfer protocol), FTP (file transfer),
Presentation                         telnet (remote login), HTTP (hypertext transfer protocol),
Session                              XDR (external data representation)
Transport       Transport            Transmission Control Protocol (TCP), User Datagram Protocol (UDP)
Network         Internet             Internet Protocol (IP)
Data Link       Host-to-network
Physical

Transport protocols provide end-to-end data exchanges where network devices maintain a session or
connection. Network protocols handle addressing and routing information, error control, and requests
for retransmissions. These protocols function at the Transport and Network Layers of the OSI Refer-
ence Model.
In order for different computers to communicate with each other they must both speak the same lan-
guage. The rules for these languages are called Network Protocols.
Some Protocols are:
The Transmission Control Protocol/Internet Protocol (TCP/IP) was initially designed to provide a
method for interconnecting different packet-switching networks participating in the Advanced Re-
search Projects Agency Network (ARPANet), which was the foundation for the Internet.

·TCP is a connection-oriented Transport Layer protocol that uses the connectionless services of IP
to ensure reliable delivery of data.
·TCP/IP was introduced with UNIX, and incorporated into the IBM environment. It was adopted
because it provides file transfer, electronic mail, and remote log-on across a large number of distributed
client and server systems.
·The original design of TCP/IP was based upon Open System Standards to support multi-vendor,
multi-platform environments. As a result, it is platform-independent, an important facet of interoperability – a
key strategy for selecting a particular protocol.
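
To make the idea of a connection-oriented transport concrete, here is a minimal sketch using Python's standard socket library; it is only an illustration, and the loopback address and port number are arbitrary assumptions. A tiny server listens on a TCP port, a client connects, sends a line of text, and receives it back reliably and in order.

    import socket
    import threading
    import time

    HOST, PORT = "127.0.0.1", 5050  # loopback address and an arbitrary free port

    def echo_server():
        # SOCK_STREAM selects TCP, the connection-oriented transport protocol.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.bind((HOST, PORT))
            srv.listen(1)
            conn, _ = srv.accept()          # wait for one client connection
            with conn:
                data = conn.recv(1024)      # read up to 1024 bytes
                conn.sendall(data)          # echo the bytes back to the client

    threading.Thread(target=echo_server, daemon=True).start()
    time.sleep(0.5)                         # give the server a moment to start listening

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))           # the TCP connection is established here
        cli.sendall(b"hello over TCP/IP")
        print(cli.recv(1024))               # prints b'hello over TCP/IP'
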

Wireless Access Protocol (WAP)

WAP Service Network Infrastructure.

WAP utilizes a set of WAP-developed transmission protocols to transfer content from the Internet to users'
devices. These underlying protocols include WCMP, WDP, WTLS and WTP, with content marked up in WML. The current WAP-
based services charge users by the time duration of their data transfer, the prices being closely correlated
with the phone-service charges on their devices. WAP has to date been the only wireless protocol offered
in North America and Europe.
Most Web content is developed in standard HTML. When a user makes an
access request to a Web site through a WAP-enabled wireless device, the Web site content is translated
by the user's wireless service provider (WSP) from HTML to WML, and then sent to the user. The
connection from the WSP to a content provider (Origin Server) is an Internet link with SSL encryption
enabled as required. The wireless transmission of content radio packets is encrypted using the Wireless
Transport Layer Security protocol (WTLS).

(Figure: WAP high-level architecture)

Ethernet

The term Ethernet refers to the family of local-area network (LAN) products covered by the IEEE 802.3
standard that defines what is commonly known as the CSMA/CD protocol.


Three data rates are currently defined for operation over optical fiber and twisted-pair cables:
·10 Mbps - 10Base-T Ethernet
·100 Mbps - Fast Ethernet
·1000 Mbps - Gigabit Ethernet

10-Gigabit Ethernet is under development and will likely be published as the IEEE 802.3ae
supplement to the IEEE 802.3 base standard in late 2001 or early 2002.

Other technologies and protocols have been touted as likely replacements, but the market has spoken.
Ethernet has survived as the major LAN technology (it is currently used for approximately 85 percent of
the world's LAN-connected PCs and workstations) because its protocol has the following characteristics:
·Is easy to understand, implement, manage, and maintain
·Allows low-cost network implementations
·Provides extensive topological flexibility for network installation
·Guarantees successful interconnection and operation of standards-compliant products, regardless of manufacturer

Ethernet LANs consist of network nodes and interconnecting media. The network nodes fall into two
major classes:

·Data terminal equipment (DTE)—Devices that are either the source or the destination of data frames.
DTEs are typically devices such as PCs, workstations, file servers, or print servers that, as a group, are
all often referred to as end stations.
·Data communication equipment (DCE)—Intermediate network devices that receive and forward
frames across the network. DCEs may be either standalone devices such as repeaters, network switches,
and routers, or communications interface units such as interface cards and modems.

43 Qt.7(a) 2002 8M Difference between Optical Fiber V/s Conventional copper cable.
44 Qt.6(1) 1999 6M Write short note on : Fiber Optic

The huge capacity and digital proficiency of optical fibers have made them a natural for computer com-
munications. They are being used in local area networks to connect central processors with peripheral
equipment and other computers, creating integrated computer systems that span a room, a building, a
campus or a city. Developers are incorporating data highways made of optical fibers into the design of
skyscrapers; intelligent buildings, as they are called, have nervous systems of optical fibers that carry
information about temperatures, lighting, breaches of security or fire alarms to a central computer for
action and control. Both multimode and single-mode fibers have several advantages over conventional copper
wires, in addition to their huge information-carrying capacity. Stray electromagnetic impulses do not
affect glass as they do wires, so optical fibers are immune to errors in data caused by electrical interfer-
ence. Thus, they are ideally suited for use in places like machine shops and factories, where conventional
wires often require shielding. Furthermore, optical fibers offer tight security because they are extremely
difficult to tap, making them attractive for military as well as banking applications. Fiber-optic cables are
much smaller and lighter than copper-wire cables, one reason why telephone companies have begun to
install them in cities, where they can snake through conduits.

Advantages of Fiber Optic Systems


Fiber optic transmission systems (a fiber optic transmitter and receiver, connected by fiber optic cable)
offer a wide range of benefits not offered by traditional copper wire or coaxial cable. These include:
1.The ability to carry much more information and deliver it with greater fidelity than either copper wire
or coaxial cable.
2. Fiber optic cable can support much higher data rates, and at greater distances, than coaxial cable,
making it ideal for transmission of serial digital data.
3.The fiber is totally immune to virtually all kinds of interference, including lightning, and will not con-
duct electricity. It can therefore come in direct contact with high voltage electrical equipment and power
lines. It will also not create ground loops of any kind.
4. As the basic fiber is made of glass, it will not corrode and is unaffected by most chemicals. It can be
buried directly in most kinds of soil or exposed to most corrosive atmospheres in chemical plants without
significant concern.
5.Since the only carrier in the fiber is light, there is no possibility of a spark from a broken fiber. Even in
the most explosive of atmospheres, there is no fire hazard, and no danger of electrical shock to personnel
repairing broken fibers.
6.Fiber optic cables are virtually unaffected by outdoor atmospheric conditions, allowing them to be
lashed directly to telephone poles or existing electrical cables without concern for extraneous signal
pickup.
7.A fiber optic cable, even one that contains many fibers, is usually much smaller and lighter in weight
than a wire or coaxial cable with similar information carrying capacity. It is easier to handle and install,
and uses less duct space. (It can frequently be installed without ducts.)
8.Fibers do not leak light and are quite difficult to tap. Fiber optic cable is ideal for secure communica-
tions systems because it is very difficult to tap but very easy to monitor. In addition, there is absolutely no
electrical radiation from a fiber.
9.It can handle much higher bandwidths than copper. This alone would require its use in high-end net-
works.
10.It is not affected by power surges, electromagnetic interference, or power failures. Nor is it affected
by corrosive chemicals in the air, making it ideal for harsh factory environments.

Disadvantages of Fiber cable over Copper wires


(a) Fiber is an unfamiliar technology requiring skills most engineers do not have.
(b) Since optical transmission is inherently unidirectional, two-way communication requires
either two fibers or two frequency bands on one fiber.
(c) Fiber interfaces cost more than electrical interfaces.

45 Qt.6(c) 2002 4M Enumerate the key purpose and name one example of Router

When a network of computers wishes to have access to the Internet, an additional protocol needs to be
"bound" to each computer requiring Internet access. The protocol used on the Internet is called TCP/IP,
and typical broadband routers are limited to that protocol. Unlike the protocols used for file and printer sharing,
TCP/IP adds an Internet Protocol (IP) address to every packet. The purpose of the router is to examine


every packet on a network and route it to the correct place. If one computer is communicating with
another computer on the hub, the router ignores those packets whether they are TCP/IP or not. But
TCP/IP packets that are involved with Internet traffic are passed through to the cable modem connec-
tion.
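
The forwarding decision described above can be sketched in a few lines of Python. The example is purely illustrative: the subnet 192.168.1.0/24 is an assumed home LAN, not a value taken from any real router. A packet whose destination lies inside the local network is left on the LAN, while anything else is passed towards the Internet connection.

    import ipaddress

    LOCAL_NET = ipaddress.ip_network("192.168.1.0/24")  # assumed home LAN

    def route(destination_ip: str) -> str:
        """Decide what the router does with a packet, based on its destination IP."""
        if ipaddress.ip_address(destination_ip) in LOCAL_NET:
            return "local traffic: leave it on the LAN"
        return "Internet traffic: forward to the cable modem"

    print(route("192.168.1.25"))    # stays on the local network
    print(route("205.46.117.104"))  # forwarded towards the Internet
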

Firewall
Having a network of home computers with access to the Internet also means having the computers
themselves accessible to any other computer on the Internet. Unfortunately, not all computers are benign:
some are operated by people intent on damaging others, sometimes to steal information and sometimes as
electronic vandals intent on doing damage for the mere fun of it. When computers are placed in a
network for the purpose of gaining Internet access, good protection is obtained by simply turning off all
networking features on all of those computers (file sharing, printer sharing, and "Microsoft Client.") But
when the same computers have these features activated so that they can act as a local network, some
measures need to be taken to protect against abusers. NAT provides a first line of defense, as any
probing of your network will only "see" the router. Additional protective strategies are implemented in
the router's firewall.
Typical broadband routers contain several components that are fine-tuned to
solve the following problems:
1.They allow several computers to share one IP address.
2.They prevent packets to and from computers in the home network from being passed on to the cable
modem. You do not want those packets to go outside your network, and the service provider does not
want the congestion that they can cause on its system.
3.They keep "broadcast" packets from passing to the cable modem. Many LAN protocols generate
frequent messages to enable other computers on the same network to be aware of their address.
4.They provide a firewall to protect your computer's information from unwanted and unauthorized access.

46 Qt 4 1998 20M "Computers and communications seem to merge together seamlessly". Discuss.
47 Qt.3(a) 1999 10M Why are computers and computer related devices networked in the organization?

A network is a way to connect computers together so that they can communicate, exchange information
and pool resources.

In business, networks have revolutionized the use of computer technology. Many businesses that
used to rely on a centralized system with a mainframe and a collection of terminals now use computer
networks in which every employee who needs a computer has one. The technology and expertise are
distributed throughout the organization among a network of computers and computer-literate users.

Whatever the setting, networks provide tremendous benefits. Four of the most compelling
benefits are:

-allowing simultaneous access to critical programs and data


-allowing people to share peripheral devices such as printers & scanners
-streamlining personal communication with e-mail
-Making the backup process easier.

Simultaneous access:
It is a fact of business computing that multiple employees, using a computer network, often need access
to the same data at the same time. If employees keep separate copies of data on different hard disks,
updating the data becomes very difficult. As soon as a change is made to the data on one machine, a

discrepancy arises, and it quickly becomes difficult to know which set of data is correct. Sharing data
that is used by more than one person on a shared storage device makes it possible to solve the problem.

Shared Peripheral Devices:


Perhaps the best incentive for small businesses to link computers in a network is to share peripheral
devices, especially expensive ones such as laser printers, large hard disks, and scanners. Laser printers
are very costly, so it's not cost-effective for each user to have one. Sharing a laser printer on a network
makes the cost much less prohibitive.

An added benefit of sharing peripherals is that they can prolong the usable life of older computers. For
example, older computers often do not have enough storage for modern software and data; if the computer
is connected to a large central computer called a network server (also called a file server, or simply
server), excess data can be stored there. This solution is often less expensive than buying and installing
new hard disks for older computers.

48 Qt.2(d) 2000, Qt.2(3) 1999 5M Difference between: Centralized Processing and
Distributed Processing.
Centralised Data Processing

Historically mainframe computers were widely used in business data processing. In this kind of a system,
several dumb terminals are attached to the central mainframe computer. Dumb terminals are the machines
on which users can input data and see the results of processed data. However, no processing takes
place at the dumb terminals. In earlier days individual organisations processed large amounts of data,
usually at the head office. The main advantage of such systems was that the design was much more
straightforward and the organisation could have tighter control over the main database.

In such systems one or more processors handle the workload of several distant terminals. The central
processor switches from one terminal to another and does a part of each job in a time-phased mode. This
switching from the task of one terminal to that of another continues till all tasks are completed. Hence such systems
are also called time sharing systems.

The biggest disadvantage of such a system is that if the main computer fails, the whole system fails. All
remote terminals have to stop working. Also, all end users have to format data based on the format of the
central office. The cost of communication of data to the central server is high, as even the minutest of
processes have to be done centrally.
Distributed Processing

In a true distributed data processing system, computers are connected together by a communication
network. Each computer is chosen to handle its local workload, and the network is designed to
support the system as a whole.

Distributed data processing systems enable the sharing of hardware
and significant software resources among several users who may be
located far away from each other.


Advantages
A distributed system offers the advantages of both a centralised and a decentralised system: each computer can be used to process data locally like a
decentralised system, and in addition, a computer at one location can also transfer data and processing jobs
to and from computers at other locations.
a) Flexibility : Greater flexibility in placing true computer power at locations where it is needed.
b) Better utilisation of resources : Computer resources are easily available to the end users.
c) Better accessibility : Quick and better access to data and information, especially where distance is
a major factor.
d) Lower cost of communication : Telecommunication costs can be lower when much of the local
processing is handled by on-site mini and microcomputers rather than by distant central mainframe com-
puters.

Disadvantages
Lack of proper security controls for protecting the confidentiality and integrity of the user programs and
data that are stored on-line and transmitted over network channels (it is easy to tap a data communication
line).

Linking of different systems – due to the lack of adequate computing/communication standards it is not
always possible to link different items of equipment produced by different vendors. Thus several good re-
sources may not be available to users of a network.

Maintenance difficulty – due to decentralisation of resources at remote sites, management from a
central control point becomes very difficult. This normally results in increased complexity, poor docu-
mentation and non-availability of skilled computer/communication specialists at the various sites for
proper maintenance of the system.

49 Qt.2(1) 1999 5M Difference between: Computer with multiple processors and computer with parallel
processors
In a computer with multiple processors the calculations are divided among the processors.
Since each processor now has less work to do, the task can be finished more quickly. It is
sometimes possible to get up to a 210% speed increase from a computer with 2 processors.

The speed increase obtained by a multiprocessor computer depends greatly on the software being used
to perform the calculations. This needs to be able to co-ordinate the calculations between the multiple
processors. In practice, quite a bit of effort is required to divide the calculations between the processors
and re-assembling the results into a useful form. This is known as an “overhead” and explains why
computers with multiple processors are sometimes slower than those with single processors.

Super linear speedup is possible only on computers with multiple processors. It occurs because modern
processors contain a piece of high-speed memory known as a cache. This is used to accelerate access to
frequently used data. When the processor needs some data, it first checks to see if it is available in the
cache. This can avoid having to retrieve the data from a slower source such as a hard disk. In a computer
with 2 processors, the amount of cache is doubled because each processor includes its own cache. This
allows a larger amount of data to be quickly available and the speed increases by more than what is
expected.
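
A hedged sketch of how software divides calculations between processors, using Python's standard multiprocessing module. The work chosen (summing squares) and the four-way split are arbitrary assumptions; the point is the pattern of dividing the work, computing in parallel, and reassembling the results, which is exactly the coordination "overhead" described above.

    from multiprocessing import Pool

    def sum_of_squares(chunk):
        """Work done independently by each processor."""
        return sum(n * n for n in chunk)

    if __name__ == "__main__":
        numbers = list(range(1_000_000))
        parts = 4
        chunks = [numbers[i::parts] for i in range(parts)]   # divide the work

        with Pool(processes=parts) as pool:                  # one worker per chunk
            partial_results = pool.map(sum_of_squares, chunks)

        total = sum(partial_results)                         # reassemble the results
        print(total)
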

Computers with parallel processors rely on dozens to thousands of ordinary microprocessors (integrated
circuits identical to those found in millions of personal computers) that simultaneously
carry out identical calculations on different pieces of data. Massively parallel machines can be dramati-
cally faster and tend to possess much greater memory than vector machines, but they tax the program-
mer, who must figure out how to distribute the workload evenly among the many processors. Mas-
sively parallel machines are especially good at simulating the interactions of large numbers of physical
elements, such as those contained within proteins and other biological macromolecules — the types of
molecules that computational biologists are interested in modeling.

40 Qt.2(c) 2001 4M Define : Server


51 Qt.2(e) 2002 3M Define : Hub
52 Qt.2(g) 2001 4M Define : Intranet
53 Qt.2(d) 1998 5M Write short note on : Internet
54 Qt.2(b) 2000 5M Difference between. Internet and intranet
55 Qt.6(3) 1999 6M Write short note on : Uses of internet to organization

Email – The most commonly used application of data communication is email. Email is a system for
exchanging written messages through a network.

In an email system, each user has a unique identification referred to as an email address. To send an
email you have to use a special email program that works with the network to send and receive mes-
sages. You enter the person's email ID and type the message. When the message is ready it is submitted
for delivery by clicking the send button. When the recipient accesses the mail system he is alerted about the
new message(s) received. Systems have pop-up or audio alerts for new messages. The recipient, after
reading his mail, can store, forward or delete the message.

In addition to plain text, email systems let you attach data files. With this, data files can be shared
between a select group of people for whom the information is relevant and useful. This is particularly
useful when peripheral devices are not shared. If connected to the Internet, then mails can be sent and
files can be shared with virtually any/all people who have access to the Internet.

Email is both efficient and inexpensive; users can send email without worrying whether the recipient is online or
not. In corporate networks email is delivered almost instantly. The cost of sending messages is negli-
gible. Email has provided the modern world with an entirely new and immensely valuable form of com-
munication.
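
The mechanics described above (compose a message, attach a file, hand it to a mail server) can be seen in the following hedged sketch, which uses Python's standard smtplib and email modules. The server name, addresses, credentials and attachment file are all placeholders, not real accounts or files.

    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "sender@example.com"        # placeholder address
    msg["To"] = "recipient@example.com"       # placeholder address
    msg["Subject"] = "Monthly report"
    msg.set_content("Please find the report attached.")

    # Optional attachment, as described above (hypothetical file name).
    with open("report.pdf", "rb") as f:
        msg.add_attachment(f.read(), maintype="application",
                           subtype="pdf", filename="report.pdf")

    # SMTP (simple mail transfer protocol) hands the message to the mail server.
    with smtplib.SMTP("mail.example.com", 587) as server:   # placeholder server
        server.starttls()
        server.login("sender@example.com", "password")      # placeholder credentials
        server.send_message(msg)
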

Internet – today the Internet is a constant in the lives of millions of people around the world. It has
changed the way many people work and do business. The Internet has enabled us to access nearly any
kind of information from a PC. The Internet is a huge co-operative community with no central ownership.
The lack of ownership is important, as no single person owns the Internet. Any person who can access the
Internet can use it to carry out a variety of transactions and also create his own set of resources for others
to use.

As a business tool, the internet has many uses. Electronic mail is an efficient and inexpensive way to send
and receive messages and documents around the world in minutes. Internet is becoming an important
medium of advertising, distributing software and information services. It is a space wherein people with
similar interests can share data on various topics.

When business organisations connect parts of their network to the internet, they allow users to work on
the network from virtually anywhere in the world. On doing this the internet can be used to sell goods,
track inventory, order products, send invoices and receive payments in a very cost effective manner. This


means the world is the marketplace for today's organisations. They can buy and sell anywhere in the
world today.

Intranets & Extranets – Before the advent of World Wide Web, most corporate networks were bland
environments for email and file sharing only. However, now corporate networks are being reconfigured
to resemble and give the feel of internet.

This setup enables users to work in a web like environment using web browsers as the front-end to
corporate data. Two common types of spin-offs of the web are called intranets & extranets. They are
used for data sharing, scheduling and workgroup activities.

Intranets – An intranet is a LAN or WAN that uses the TCP/IP protocol but belongs exclusively to a corporation.
The intranet is accessible exclusively to an organisation and its employees. It has all the relevant data that
can be shared and used for the day-to-day functioning of the organisation, like the attendance muster, leave
records, leave and other application forms, daily production records, policies, and brochures of the company
that can be used by sales teams. It can also be connected to the Internet through proper security
features like firewalls.

Extranet – An extranet is an intranet that can be accessed by outside users over the Internet. This is typically used
by telecommuters or business associates like suppliers and distributors to access relevant data and also
log data on to the corporate systems on a day-to-day basis. It helps the organisation by getting vital data
for proper planning of procurement and distribution. With such state-of-the-art tools, overheads in terms
of inventory and excess stocks can be reduced to a great extent.

56 Qt.5 2002 16M Why is the Internet called the Network of Networks? Explain how it works.

The seeds: ARPANET


The seeds of the Internet were planted in 1969, when the Advanced Research Projects Agency (ARPA) of
the U.S. Department of Defense began connecting computers at different universities and defense con-
tractors. The goal of this early project was to create a large computer network with multiple paths – in
the form of telephone lines – that could survive a nuclear attack or other disasters. If one part of the
network were destroyed, other parts of the network would remain functional because data could con-
tinue to flow through the surviving lines. ARPA also wanted users in remote locations to be able to share
scarce computing resources.

Soon after the first links in ARPANET (as this early system was called) were in place, the engineers and
scientists who had access to this system began exchanging messages and data that were beyond the
scope of the Defense Department’s original objective. At first, ARPANET was basically a wide area
network serving only a handful of users, but it expanded rapidly. Initially, the network included four
primary host computers. ARPANET’s host computers provided file transfer and communication ser-
vices and gave connected systems access to the network’s high-speed data lines. The system grew
quickly and spread widely as the number of hosts grew. The network jumped across the Atlantic to
Norway and England in 1973, and it never stopped growing.

NSFnet
In the mid-1980s, another federal agency, the National Science Foundation (NSF), joined the project
after the Defense Department dropped its funding. NSF established five “supercomputing” centers that
were available to anyone who wanted to use them for academic purposes.

The NSF expected the supercomputers’ users to use ARPANET to obtain access, but the agency quickly
discovered that the existing network could not handle the load. In response, the NSF created a new,
higher capacity network, called NSFnet, to complement the older and by then overloaded ARPANET.
The link between ARPANET, NSFnet, and other networks was called the Internet.

The process of connecting separate networks is called internetworking. A collection of “networked


networks” is described as being inter-networked, which is where the Internet – a worldwide network of
networks – gets its name. And this is why the Internet is called the “Network of Networks”.

How the Internet works

TCP/IP
The Internet works because every computer connected to it uses the same set of rules and procedures,
known as protocols, to control the timing and data format. The protocols used by the Internet are called
Transmission Control Protocol/Internet Protocol, universally abbreviated as TCP/IP.

The TCP/IP protocols include the specifications that identify individual computers and that exchange
data between computers. They also include rules for several categories of application programs, so
programs that run on different kinds of computers can talk to one another. For example, someone using
a Macintosh computer can exchange data with a UNIX computer on the Internet. It does not matter if
the system at the other end of a connection is a supercomputer, a pocketsize personal communications
device, or anything in between; as long as it recognizes TCP/IP protocols, it can send and receive data
through the Internet.

Routing Traffic Across the Internet


The core of the Internet is the set of backbone connections that tie the local and regional networks
together and the routing scheme that controls the way each piece of data finds its destination. In net-
working diagrams, the Internet backbone is often portrayed as a big cloud because the routing details are
less important than the fact that the data passes through the Internet between the origin and the destina-
tion.

The Internet creates a potential connection between any two computers; however, the data may be
forced to take a long, circuitous route to reach its destination. Here’s how it works:

1. Your request is broken into packets.


2. The packets are routed through your local network, and possibly through one or more subse-
quent networks, to the Internet backbone.
3. After leaving the backbone, the packets are then routed through one or more networks until they
reach the appropriate server and are reassembled into the complete request.
4. Once the destination server receives your request, it begins sending you the requested data,
which winds its way back to you – possibly over a different route.

Between the destination server and your PC, the request and data may travel through several different
servers, each helping to forward the packets to their final destination.

Addressing Schemes – IP, DNS


The computer that originates a transaction must identify its intended destination with a unique address.
Every computer on the Internet has a four-part numeric address, called the Internet protocol address
(IP Address), which contains routing information that identifies its location. Each of the four parts is a
number between 0 and 255, so an IP address looks like this: 205.46.117.104


Computers have no trouble working with long strings of numbers, but humans are not so skilled. There-
fore, most computers on the Internet also have an address called a domain name system (DNS) ad-
dress – an address that uses words rather than numbers.
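
The translation from a DNS name to an IP address can be demonstrated with Python's standard socket module. The sketch below is minimal; the host name is just an example and the address returned will vary depending on the resolver.

    import socket

    host = "www.example.com"                    # any DNS name
    ip_address = socket.gethostbyname(host)     # ask the resolver for the IP address
    print(f"{host} resolves to {ip_address}")   # prints a four-part numeric IP address
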

57 Qt.5(a) 2001 10M Highlight some of the key differences between applications designed
for the web/internet vis-à-vis conventional applications.

Architecture Differences
Although Internet and client/server applications have basic architectural similarities—such as applica-
tion servers, database servers, and a graphical user interface—there are several key differences that have
increased the popularity of Internet applications over client/server applications:
· Client Processing
· Network Protocols
· Security
· Content Management
· Extensibility and Integration

These architecture differences fundamentally change the characteristics of applications that developers
build, how people use those applications, and the collaborative business nature of the enterprise.

Client Processing
Both Internet and client/server architectures consist of multiple tiers, but the client tier is very different.
This is probably the single biggest technology difference between Internet and client/server applications.
Client/server is extremely client intensive, often referred to as a “fat client” architecture.

Very simply, with Internet applications, the web browser or client access device is the client. The Internet
application user interface is generated at the server, delivered to the web browser through HTML, and
renders the user interface at the client device. All business rules are executed on the server.
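
A minimal sketch of this "thin client" pattern, using only Python's standard http.server module: the HTML user interface is generated on the server and the browser merely renders it. The port number and the page contents are arbitrary assumptions made for the example, not part of any real application.

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class AppHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # All "business logic" runs here, on the server.
            page = "<html><body><h1>Order status</h1><p>3 orders pending</p></body></html>"
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(page.encode("utf-8"))  # HTML delivered over HTTP to the browser

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8000), AppHandler).serve_forever()
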
Network Protocols
Internet applications use standard Internet protocols and are compatible with Internet security models.
The client uses secure HTTP and Secure Sockets Layer (SSL)—which is supported by the web browser.
This protocol is familiar to firewall administrators and is supported by firewalls.

Other protocols, especially non-standard protocols, are typically blocked by firewalls.


Client/server applications do not use HTTP to communicate between the client and server. They use
proprietary protocols not natively supported by the web browser. This prevents client/server applica-
tions from executing over the Internet, seriously limiting access to the application.

The use of HTTP in Internet applications is key to opening up access to your application over the
Internet so that anyone in any location with a web browser can interact with your application and col-
laborate in your business process. Convenient access is a key reason why the Internet has been success-
ful and grown so rapidly.
Security
In addition to working with the security provided by a firewall, Internet applications are compatible with
the emerging Internet security model. This model is based on user authentication—using user names and
passwords over an SSL connection—along with digital certificates or tokens. User administration and
access control is performed at the application server level, based on Lightweight Directory Access
Protocol (LDAP). Organizations are increasingly using LDAP as a central directory in their enterprise to

maintain the growing number of user profiles. Internet applications use LDAP for end user authentica-
tion.

The traditional client/server approach has not leveraged many of these new Internet security technolo-
gies. User profiles are maintained in the database. System administrators must maintain user profiles in
many places instead of a central LDAP directory. End users must remember numerous user IDs and
passwords.
Structured and Unstructured Data
A client/server application deals with structured, relational data that resides in the application database.
Client/server typically does not deal with data outside of its own database or unstructured data. A client/
server application may store attachments such as documents or spreadsheets in a database, or it may
invoke third-party systems that display data from other systems, but this no nowhere near the degree that
internet applications accept unstructured data and data from outside systems.

Portals are an increasingly popular piece of Internet applications and use a very different data delivery
approach than client/server. Portal technology provides common services such as simple navigation,
search, content management, and personalization. The portal delivers content to end users from a wide
variety of sources and this content is usually unstructured data. The Internet applications accessed from
the portal support both structured and unstructured data through the use of popular technologies such as
HTML, HTTP, and XML.

Internet applications can be designed with the assumption that the data can be structured or unstructured
and can reside outside the database from any type of Internet enabled content provider. This results in
the delivery of much richer content to the end user.

Extensibility & Integration


The Internet offers unlimited sources of information because of HTTP, HTML, and XML standards.
This standard technology enables the open flow of information between people and systems. Internet
applications leverage this open flow of information to enable system to system collaboration in several
ways:

· An Internet application can host contents from other Internet applications and also acts as a content
provider using HTTP, HTML, and XML.

· Systems can pass transactions between themselves using XML messaging. This is similar to tradi-
tional Electronic Data Interchange (EDI), but has been extended beyond the American National Stan-
dards Institute (ANSI) standard set of EDI transactions.

Integration is important with client/server technology, but it only moved data in and out of the application
database. Internet technologies were not used with client/server to address integration, and the idea of
hosting content from other systems was not really considered. EDI has also been part of client/server
applications, but it uses proprietary, expensive networks and has a limited, rigid transaction set. Another
type of integration made popular by client/server is desktop integration with applications such as Microsoft
Excel and Microsoft Word. This has proved to be costly due to client configuration and limited in
functionality.

Leveraging new technologies, Internet applications can be extended and integrated to much greater
degrees than client/server applications. Integration never before considered with client/server is being
created today with Internet applications.


58 Qt 5 1999 20M Internet technology and Electronic commerce has brought the manu-
facturers and the customers very close to each other. This should result into better customer relationship
management and supply chain management. Please explain.

Let’s get one thing straight: The Internet doesn’t change everything. Yes, it provides the opportunity for
profound changes in the way companies and people can work, collaborate and interact with each other.
Yes, it’s enabling business-to-business collaboration to work better than ever before. But it’s not chang-
ing the way you brush your teeth in the morning, nor does it change the value of human relationships in
our lives.

Ultimately, the Internet makes it so easy to find a specialist who's the best at delivering packages or
doing payroll or cleaning offices, that the costs of interacting and doing business with these specialists
have decreased dramatically—for you, your customers and your competition.

The conventional wisdom a couple years ago was that the Internet and CRM solutions could enable
companies to directly target, sell to, and service all of their customers, be they consumers or businesses.
High-profile ventures like Amazon.com, Dell, and E*Trade helped to hype the Internet as the means to
remove intermediaries, or cut out the middlemen. The theory was that these indirect sales channels weren't
necessary anymore, because it is now faster and cheaper to deal directly with end customers.

Yet indirect channels remain of vital importance in reaching target markets, adding consulting and ser-
vices, and providing total solutions. Think about it: the last time you bought a car, you could've bought
one online. Did you? It's possible to buy a car online, but people like to touch and
test drive. Just because the Web lets you buy online doesn't mean that it's the best way to do it. Besides,
it's hard to get your car serviced over the Web.

An “eCash Register” is Not Enough


The Internet occupies an interesting place in the customer value chain. Whereas in the early days of e-
commerce it was expected to rapidly become the preferred method of purchase, it hasn’t quite worked
out that way. More and more it’s used as a means for customers to gather information before purchasing.
Customers can find a wider array of products online, more and more sites are letting customers custom-
design products, and the costs of servicing customers online are far less than with human representa-
tives.

While such experiences do increase value for customers, they don’t point to the Internet as a cash cow.
According to a report released earlier this year by Jupiter Research’s Media Metrix, only nine percent of
customers use the Internet “mostly for purchases.” Other studies have found that while the Internet is a
fast-growing new channel, this growth has not come at the expense of indirect channels, which continue
to be used extensively. Bottom line: the Internet is not eliminating intermediaries.

That said, intermediaries must adapt to a new reality where merely moving products and information is
not enough to make a profit and survive over the long-term. The key is delivering value, perceived from
the customers’ point of view. The Internet is changing the definition of what customers expect and will
pay for.

The awkward human fact is that people rather like switching off their computers and tooling around
town once in a while. Driving to the grocery store gives them something to do. There’s no percentage in
trying to force the Internet to be something that it’s not.

Logic In The Paradox
If the Internet is viewed primarily as a sales or purchasing channel, then you can call it a disappointment.
Viewed as a channel for sharing information and servicing customers, which seems to be the real-world
view, it’s succeeding wonderfully.

The real payoff of the Internet is in its enabling of business collaboration—the sort of three-way “infor-
mation partnerships” among manufacturers, business partners and customers.

There’s logic in what some might consider a paradox. The Internet can lead to deeper partnerships
because companies won’t feel at risk in throwing all their business to one supplier. Now you can check
easily if you’re getting a good deal, and make a change if you think you can do better.
Power in the relationship has shifted from seller to buyer, leading to stronger relationships all around.

Today the vast majority of consumers worth chasing are online, and are comfortable communicating
online. Call centers were the genesis of CRM, and now we're tying in dealer networks as well, so this is
the first time large-scale manufacturers are back in the position of a 14th century craftsman, who heard
directly from his customers.

The Internet will bring massive changes to the business partner relationship as well as to customer
interactions—at this point we’re just scratching the surface. What’s new is that the Internet allows
businesses to look for efficiencies outside their walls, to try to streamline interactions with business
partners and suppliers so completely that all the different companies operate as one. But we still have
companies, relationships, competition for business, and we still use channels.

The bar, though, has been raised.

59 Qt.7(d)2001 5M Write short note on : Websites and Portals

A collection of related Web pages is called a Web site. Web sites are housed on Web servers, Internet
host servers that often store thousands of individual pages. Popular Web sites receive millions of hits or
page views every day. When you visit a Web page – that is, download a page from the Web server to your
computer for viewing – the act is commonly called "hitting" the Web site.

Web sites are now used to distribute news, interactive educational services, product information and
catalogs, highway traffic reports, and live audio and video among other items. Interactive Web sites
permit readers to consult databases, order products and information, and submit payment with a credit
card or other account number.

A Web portal is a free, personalized start page, hosted by a Web content provider, which you can
personalize in several ways. Your personalized portal can provide various content and links that simply
cannot be found in typical corporate Web sites. By design, a portal offers two advantages over a typical
personal home page:

1. Rich, Dynamic Content
Your portal can include many different types of information and graphics, including news, sports,
weather, entertainment news, financial information, multiple search engines, chat room access, email
and more.

2. Customization


You customize a portal page by selecting the types of information you want to view. Many portal sites
allow you to view information from specific sources, such as CNN, Time, and others. Some portals even
provide streaming multimedia content. You can also choose the hyperlinks that will appear in your
portal, making it easy to jump to other favorite sites. Most portal sites let you change your custom
selections whenever you want.

Here are a few Web sites that provide portal services:


Microsoft Network – http://www.msn.com
Yahoo – http://www.yahoo.com
Rediff – http://www.rediff.com

60 Qt.2(f) 2001 4M Define : Search Engine

Without a directory of some sort to help you find the right information on the Web, you could literally
spend hours going from one Web site to another trying to find what you need. Fortunately, several
specialized Web sites, called search engines, use powerful data-searching techniques to discover the
type of content available on the Web.

By using a search engine and specifying your topic of interest, you can find the right site or information.
For example, if you need to find information on Aristotle, you can visit a search engine site such as Alta
Vista or Google and type Aristotle in the site's Search box. The engine will provide you with a list of
sites that should match your criterion.

Here are a few of the most popular search engines on the Web:
Name URL
Alta Vista http://www.altavista.com
Yahoo! http://www.yahoo.com
Google http://www.google.com

61 Qt.2(c) 2002 3M Define : Domain Name

DNS addresses have two parts: a host name followed by a domain that generally identifies the type of
institution that uses the address. This type of domain name is often called a top-level domain. For
example, many companies have a DNS address whose first part is the company name, followed by
“.com” – the now-overused marketing gimmick.

Some large institutions and corporations divide their domain addresses into smaller sub-domains. For
example, a business with many branches might have a sub-domain for each office – such as
boston.widgets.com and newyork.widgets.com.

Here are some of the most common types of Internet domains used:
Domain Organisation Type Example
.com Business (commercial) ibm.com
.edu Educational center.edu
.gov Government whitehouse.gov
.mil Military navy.mil
.net Gateway or host mindspring.net
.org Other organisation (non – profit) assoc.org
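
A small hedged sketch of how a DNS address breaks down into sub-domain, organisation name and top-level domain. The split below is purely positional and ignores multi-part country domains (such as .co.in), so it is only an illustration of the naming structure described above.

    def split_domain(dns_name: str):
        """Split a DNS address into its labels, reading from right to left."""
        labels = dns_name.split(".")
        return {
            "top_level_domain": labels[-1],   # e.g. 'com'
            "domain": labels[-2],             # e.g. 'widgets'
            "sub_domains": labels[:-2],       # e.g. ['boston']
        }

    print(split_domain("boston.widgets.com"))
    # {'top_level_domain': 'com', 'domain': 'widgets', 'sub_domains': ['boston']}
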

62 Qt.8 2002 16M Enumerate the inputs, activity and output of each stage in Systems develop-
ment Life Cycle
63 Qt.3 1998 20M Describe the steps in Systems development Life Cycle clearly specifying the
activities and deliverables at each step.
64 Qt.6 2000 20M What are the stages in Systems development Life Cycle. Describe the activi-
ties in each stage
65 Qt.4 2001 20M Enumerate the stages in the system development life cycle. Clearly mention
the inputs, outputs and activities with respect to each other.
66 Qt.6(a) 2001 10M Write a note on the role of a system analyst. What technical and other at-
tributes are required by an analyst to be effective in his role.
67 Qt.2(f) 2000 5M Difference between. Operational Feasibility , economic feasibility
68 Qt.6(5) 1999 6M Write short note on : Feasibility study
69 Qt.5(d) 2000 5M Write short note on : System Analyst

The systems development life cycle (SDLC) method is an approach to developing an information system
or software product that is characterized by a linear sequence of steps that progress from start to finish
without revisiting any previous step. The SDLC is a methodology that has been constructed to ensure
that systems are designed and implemented in a methodical, logical and step-by-step approach. The
SDLC method is one of the oldest systems development models and is still probably the most commonly
used.

The SDLC consists of the following activities:


- Preliminary Investigation
- Determination of Systems Requirements ( Analysis Phase)
- Design of the System
- Development Of Software
- System Testing
- Implementation and Evaluation
- Review

(Figure: the SDLC cycle - Preliminary Investigation, Determination of Requirements, Design of System, Development of System, System Testing, Implementation, Review, and back to Preliminary Investigation.)

1> The Preliminary Investigation


The Preliminary Investigation Phase may begin with a phone call from a customer, a memorandum
from a Vice President to the director of Systems Development, a letter from a customer to discuss a
perceived problem or deficiency, or a request for something new in an existing system.


The purpose of the Preliminary Investigation is not to develop a system, but to verify that a problem
or deficiency really exists, or to pass judgment on the new requirement.

Part of the preliminary investigation is establishing whether the project can be completed at all, by weighing
the financial costs of completing the project against the benefits of completing it.

This assessment, typically called a Feasibility Study, examines three factors:

1. Technical Feasibility.
o It assesses whether the current technical resources and skills are sufficient for the new system
o If they are not available, can they be upgraded to provide the level of technology necessary for
the new system
o It centers around the existing computer system and to what extent it can support the proposed
addition

o It refers to having the right technology (hardware and software) and skilled technicians to execute the system.

2. Economic Feasibility.

o It examines whether the benefits of creating the system make its costs acceptable. It refers to having a
project that can be completed based on weighing the financial costs of completing the project against
the benefits of completing it.
o It determines whether the time and money are available to develop the system.
o It includes the cost of purchasing new equipment, hardware and software.

3. Operational Feasibility.

o Operational feasibility determines if the human resources are available to operate the system
once it has been installed.
o Will the system be used if it is developed and implemented? Or will there be resistance
from users?
o Users that do not want a new system may prevent it from becoming operationally feasible.

It could be an individual constraint, or any combination of the three, that prevents a project from being
developed any further. When a project is both desirable and feasible for the organization, the Analysis
Phase is undertaken.

2> Determination of Systems Requirements ( Analysis Phase)


Systems analysis is the study of a current business information systems application and the definition of
user requirements and priorities for a new or improved application.

The analysts study the problem, deficiency or new requirement in detail, depending upon the size of the
project being undertaken. The key to the analysis phase is gaining a rigorous understanding
of the problem or opportunity which is driving development of the new application. The analyst has to work
closely with employees and managers to gather details about the business process, their opinions of
why things happen as they do, and their ideas for changing the process. System analysts do more than
study the current problems; they should closely inspect the various documents available about the vari-
ous operations and processes.

Analysts are frequently called upon to help handle the planned expansion of a business. They assess the possible future needs of the business and the changes that should be considered to meet those needs. The analyst also has to help the user visualize the system: he usually recommends more than one alternative for improving the situation, and may build a prototype and walk through it with the prospective user.

A systems analyst needs to possess strong people skills and strong technical skills. People skills assist in working with clients to help the team define requirements and resolve conflicting objectives; interpersonal skills help in communication, understanding and identifying problems, grasping company goals and objectives, and selling the system to the user. Technical skills help document these requirements with process, data, and network models, and help the analyst focus on procedures and techniques for operations and computerization.

At the end of this phase, the Requirements Statement should be in development: it provides details about what the program should do. A requirements document includes business use cases, the project charter / goals, input and output details of the system, and the broad process involved in the system. It can easily form the basis of a contract between the customer and the developer. The Requirements Statement should list all of the major details of the program.

3> Design of the System


The design of an information system produces the details that state how a system will meet the requirements identified during systems analysis. This stage is known as the logical design phase, in contrast to the process of developing the program software, which is referred to as physical design. Design in the SDLC encompasses many different elements; the components that are ‘designed’ in this phase are: input, output, processing and files.

By the end of the design phase, a formal Requirements Statement for the program is made, along with a rough sketch of what the user interface will look like. To understand the structure and working of the SDLC, each phase is examined in turn.

Most programs are designed by first determining the output of the program. If you know what the
output of the program should be, you can determine the input needed to produce that output more easily.
Once you know both the output from, and the input to the program, you can then determine what
processing needs to be performed to convert the input to output. You will also be in a position to
consider what information needs to be saved, and in what sort of file.
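As a small illustration of this output-first approach, the sketch below starts from a required output (a hypothetical customer statement, with invented field names and figures) and shows the input and processing that fall out of that choice:

    # Illustrative only: the statement layout, field names and figures are
    # invented to show how a required output dictates the input and processing.

    transactions = [                              # the input the output demands
        {"date": "2002-01-10", "description": "Cheque deposit", "amount": 1500.0},
        {"date": "2002-01-15", "description": "ATM withdrawal", "amount": -400.0},
    ]

    def print_statement(opening_balance, transactions):
        """The processing needed to turn the input into the desired output."""
        balance = opening_balance
        print(f"{'Date':<12}{'Description':<18}{'Amount':>10}{'Balance':>10}")
        for t in transactions:
            balance += t["amount"]
            print(f"{t['date']:<12}{t['description']:<18}"
                  f"{t['amount']:>10.2f}{balance:>10.2f}")

    print_statement(1000.0, transactions)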

While doing the output and input designs, more information will become available to add to the Requirements Statement. It is also possible that a first screen design will take shape, and at the end of these designs a sketch will be made of what the screen will look like. At this stage of the SDLC it isn't necessary to settle how the program will do what it does; the aim is simply to get the requirements down on paper.

Designers are responsible for providing programmers with complete and clearly outlined software specifications. As programming starts, designers are available to answer questions, clarify fuzzy areas, and handle problems that confront the programmers when using the design specifications.

4> Development Of Software


During the Development Phase, computer hardware is purchased and the software is developed; that is, actual coding of the program begins. In this phase the Requirements Statement must be examined and re-examined to ensure that it is being followed to the letter. Any deviations would usually have to be approved either by the project leader or by the customer.


The Development phase can be split into two sections, that of Prototyping and Production Ready
Application Creation. Prototyping is the stage of the Development phase that produces a pseudo-
complete application, which for all intents and purposes appears to be fully functional. Developers use
this stage to demo the application to the customer as another check that the final software solution
answers the problem posed. Once the customer gives the OK, the final version of the code is written into this shell to complete the phase.

5> System Testing


During systems testing, the system is used experimentally to ensure that the software runs according to its specifications and in the way the user expects. Special test data are input for processing, and the results are examined. If necessary, adjustments are made at this stage. Although the programmer will find and fix many problems, almost invariably the user will uncover problems that the developer has been unable to simulate.
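In practice, the "special test data" is often captured as automated test cases that compare the program's actual results with the values its specification demands. The interest-calculation routine and the expected figures below are hypothetical; a minimal sketch using Python's standard unittest module:

    # Hypothetical example: checking an interest-calculation routine against
    # values taken from its specification.
    import unittest

    def simple_interest(principal, rate_percent, years):
        return principal * rate_percent / 100 * years

    class SimpleInterestTest(unittest.TestCase):
        def test_value_from_specification(self):
            # Expected figure comes from the requirements statement.
            self.assertAlmostEqual(simple_interest(10_000, 8, 2), 1_600)

        def test_zero_period(self):
            # Boundary case of the kind users often uncover during acceptance.
            self.assertEqual(simple_interest(10_000, 8, 0), 0)

    if __name__ == "__main__":
        unittest.main()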

6> Implementation and Evaluation


In the Implementation Phase, the project reaches fruition. After the Development phase of the SDLC is complete, the system is implemented. Any hardware that has been purchased will be delivered and installed. The designed and programmed software will be installed on any PCs that require it. Anyone who will be using the program will also be trained during this phase of the SDLC. The system is then put into use. This can be done in various ways: the new system can be phased in, according to application or location, with the old system gradually replaced, or, where it is more cost-effective, the old system can be shut down and the new system implemented all at once.

The implementation phase also includes training systems operators to use the equipment, diagnose malfunctions and carry out troubleshooting.

Evaluation of the system is performed to identify the strengths and weaknesses of the new system. The evaluation can take any of the following forms:

1. Operational Evaluation: Assessment of the manner in which the system functions, including ease of use, response time, suitability of information formats, overall reliability and level of utilization.

2. Organizational Impact: Identification and measurement of benefits to the organization in such areas as financial concerns (cost, revenue and profit), operational efficiency and competitive impact. Includes impact on internal and external information flows.

3. User Manager Assessment: Evaluation of the attitudes of senior and user managers within the organization, as well as end-users.

4. Development Performance: Evaluation of the development process in accordance with such yardsticks as overall development time and effort, conformance to budgets and standards, and other project management criteria. Includes assessment of development methods and tools.

7> Review: After system implementation and evaluation, a review of the system is conducted by the users and the analysts to determine how well the system is working, whether it is accepted, and whether any adjustments are needed.

Review is important for gathering information for maintenance of the system. No system is ever complete; it has to be maintained as changes are required because of internal developments, such as new users or business activities, and external developments, such as industry standards or competition. The post-implementation review provides the first source of information for maintenance requirements.
The most fundamental concern during post-implementation review is determining whether the system has met its objectives. The analysts assess whether the performance level of the users has improved and whether the system is producing the results intended. The system's output quality has to be optimum.

70 Qt.6(b) 2001 10M As an analyst, what are the various ways and techniques which you would employ to ensure that you have gathered all the required information and fully understood the users' requirements?

Information usually originates from

External sources: vendors, government documents, newspapers and professional journals
Internal sources: financial reports, personnel staff, professional staff, transaction documents and reports

Analysts use fact-finding techniques such as interviews, questionnaires, record inspections (on-site review) and observation for collecting data.

Interviews: A device to identify relations, verify information and capture information as it exists. The respondents are people chosen for their knowledge of the system under study. This method is the best source of qualitative information (opinions, policies, subjective descriptions of activities and problems). Interviews can be structured or unstructured.

Unstructured Interviews: Use a question-and-answer format and are appropriate when the analyst wants to acquire general information about the system.
Structured Interviews: Use standardized questions in either open-response (in own words) or closed-response format (a set of prescribed answers).

Questionnaires: These can be administered to a large number of people simultaneously. The anonymity makes respondents feel confident and leads to honest responses. Questionnaires place less pressure on the subjects for an immediate response, and the use of standardized question formats can yield reliable data. Closed-ended questionnaires control the frame of reference through the specific responses provided. Analysts use
Open-ended questionnaires to learn about feelings, opinions and general experiences or to explore a
process or problem.

Record Review: Here analysts examine information that has been recorded about the system and its users. Records include written policy manuals, regulations and standard operating procedures used by the organisation. These familiarize the analyst with the operations that must be supported and with the formal relations within the organization.

Observation: Allows analysts to gain information they cannot obtain by any other method. They obtain first-hand information about how activities are carried out. Experienced observers know what to look for and how to assess the significance of what they observe.

71 Qt.5(c) 2000 5M Write short note on : System Documentation

System documentation
–System documentation describes the system’s functions and how they are implemented
–Most system documentation is prepared during the systems analysis and systems design phases
–Documentation consists of
•Data dictionary entries


•Data flow diagrams


•Object models
•Screen layouts
•Source documents
•Initial systems request

72 Qt.5(e) 2000 5M Write short note on : File design

The design of files includes decisions about the nature and content of files, such as:
o Whether the file is to be used for storing transaction details, historical data, or reference information
o Which data items to include in the record format within the file
o The length of each record
o The arrangement of records within the file (the storage structure: indexed, sequential or relative)
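To make these record-format decisions concrete, the sketch below lays out one hypothetical fixed-length customer record using Python's struct module; the field names, widths and the file-organisation remark are illustrative assumptions, not a prescribed design:

    # Hypothetical fixed-length record layout for a customer master file.
    # Field names and widths are illustrative design decisions only.
    import struct

    # 6-byte account number, 30-byte name, 10-byte balance as text: 46 bytes/record.
    RECORD_FORMAT = "6s30s10s"
    RECORD_LENGTH = struct.calcsize(RECORD_FORMAT)        # 46 bytes

    def pack_record(account_no, name, balance):
        return struct.pack(RECORD_FORMAT,
                           account_no.ljust(6).encode(),
                           name.ljust(30).encode(),
                           f"{balance:10.2f}".encode())

    record = pack_record("A00123", "R. Customer", 15750.50)
    assert len(record) == RECORD_LENGTH
    # With fixed-length records the Nth record starts at byte N * RECORD_LENGTH,
    # which is what makes sequential and relative organisations straightforward.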

73 Qt.5(f) 2000 5M Write short note on : User Involvement

Users (managers and employees in business) are highly involved in systems development because:
o They have accumulated experience working with applications developed earlier, so they have better insight into what the information system should be. If they have experienced system failures, they will have ideas about avoiding problems.
o The applications developed in organizations are often highly complex, hence systems analysts need the continual involvement of users to understand the business functions being studied.
o With better system development tools emerging, users can design and develop applications without involving trained systems analysts.

74 Qt.2(b)1998 5M Write short note on : Importance of documentation

Documentation
A description of the system used to communicate, instruct and record information for historical, operational and reference purposes. Documents establish and declare the performance criteria of a system. Documentation explains the system and helps people interact with it.

Types of documentation
o Program documentation: Begins in the systems analysis phase and continues during systems
implementation. Includes process descriptions and report layouts. Programmers provide documentation
with comments that make it easier to understand and maintain the program. An analyst must verify that
program documentation is accurate and complete.
o System documentation: It describes the system’s functions and how they are implemented. Most
system documentation is prepared during the systems analysis and systems design phases. Documenta-
tion consists of
Data dictionary entries
Data flow diagrams
Object models
Screen layouts
Source documents
Initial systems request.
o Operations documentation: Typically used in a minicomputer or mainframe environment with centralized processing and batch job scheduling. It tells the IT operations group how and when to run programs. A common example is a program run sheet, which contains the information needed for processing and distributing output.

o User documentation: Typically includes the following items


o System overview
o Source document description, with samples
o Menu and data entry screens
o Reports that are available, with samples
o Security and audit trail information
o Responsibility for input, output, processing
o Procedures for handling changes/problems
o Examples of exceptions and error situations
o Frequently asked questions (FAQ)
o Explanation of Help & updating the manual
o Online documentation can empower users and reduce the need for direct IT support; it includes:
o Context-sensitive Help
o Interactive tutorials
o Hints and tips
o Hypertext

75 Qt.7(f) 2001 5M Write short note on : Contents of User Manual

The contents of a user manual must be divided into different modules on a need-to-know basis, covering:
o Information flow diagrams
o Flow charts
o Instructions to use the system
o Data repository

76 Qt 4 1999 20M What is a Database Management System? What are its functions? Describe the basic functions of the relational database model and discuss their importance to the users and designers.

77 Qt 3 2000 Qt 6(a) 1998 20M Describe the features and merits of Database management
system over conventional file system

A database management system (DBMS), sometimes just called a database manager, is a program that lets one or more computer users create and access data in a database. The DBMS manages user requests (and requests from other programs) so that users and other programs are free from having to understand where the data is physically located on storage media and, in a multi-user system, who else may also be accessing the data. In handling user requests, the DBMS ensures the integrity of the data (that is, making sure it continues to be accessible and is consistently organized as intended) and security (making sure only those with access privileges can access the data). A database is a collection of interrelated files: a system where all data are kept in one large linked set of files that allows access by different applications.

A DBMS may be produced by a computer manufacturer for use on its own systems, or by independent companies for use over a wide range of machines.
There are three main features of a database management system that make it attractive to use a DBMS
in preference to more conventional software. These features are centralized data management, data
independence, and systems integration. In DBMS, all files are integrated into one system thus reducing
redundancies and making data management more efficient. In addition, DBMS provides centralized


control of the operational data. Some of the advantages of data independence, integration and central-
ized control are:
1. Redundancies and inconsistencies can be reduced
In conventional data systems, an organization often builds a collection of application programs created
by different programmers. The data in conventional data systems is often not centralized. Some applica-
tions may require data to be combined from several systems. These several systems could well have data
that is redundant as well as inconsistent (that is, different copies of the same data may have different
values). Data inconsistencies are often encountered in everyday life. For example, we have all come
across situations when a new address is communicated to an organization that we deal with (e.g. a
bank), we find that some of the communications from that organization are received at the new address
while others continue to be mailed to the old address. Combining all the data in a database would involve
reduction in redundancy as well as inconsistency. It also is likely to reduce the costs for collection,
storage and updating of data. With DBMS, data items need to be recorded only once and are available
for everyone to use.
2. Better service to the Users
A DBMS is often used to provide better service to the users. In conventional systems, availability of
information is often poor since it normally is difficult to obtain information that the existing systems were
not designed for. Once several conventional systems are combined to form one centralized data base, the
availability of information and its up-to-datedness is likely to improve since the data can now be shared
and the DBMS makes it easy to respond to unforeseen information requests.
Centralizing the data in a database also often means that users can obtain new and combined information
that would have been impossible to obtain otherwise. Also, use of a DBMS should allow users that do
not know programming to interact with the data more easily. The ability to quickly obtain new and
combined information is becoming increasingly important. An organization running a conventional data
processing system would require new programs to be written (or the information compiled manually) to
meet every new demand.
3. Flexibility of the system is improved
Changes are often necessary to the contents of data stored in any system. These changes are more easily
made in a database than in a conventional system in that these changes do not need to have any impact on
application programs. Thus data processing becomes more flexible and able to respond more quickly to the expanding needs of the business.
4. Cost of developing, implementation and maintaining systems is lower
It is much easier to respond to unforeseen requests when the data is centralized in a database than when
it is stored in conventional file systems. Although the initial cost of setting up a database can be large, the input/output routines (file definition and file maintenance) normally coded by programmers are now handled through the DBMS, so the amount of time and money spent writing an application program is reduced. Since the programmer spends less time writing applications, the time required to implement new applications is reduced.
5. Standards can be enforced
Since all access to the database must be through the DBMS, standards are easier to enforce. Standards
may relate to the naming of the data, the format of the data, the structure of the data etc.
6. Security can be improved
In conventional systems, applications are developed in an ad hoc manner, and often different systems of an organization access different components of the operational data. In such an environment, enforcing security can be quite difficult. Setting up a database makes it easier to enforce security restrictions, since the data is now centralized and it is easier to control who has access to which parts of the database. However, setting up a database can also make it easier for a determined person to breach security.
7. Integrity can be improved

62
Since the data of an organization using a database approach is centralized and is used by a number of users at a time, it is essential to enforce integrity controls. Integrity may be compromised in many ways. For example, a student may be shown to have borrowed books but have no enrolment, or the salary of a staff member in one department may be coming out of the budget of another department. If a number of users are allowed to update the same data item at the same time, there is a possibility that the result of the updates is not quite what was intended. Controls must therefore be introduced to prevent such errors from occurring because of concurrent updating activities. However, since all data is stored only once, it is often easier to maintain integrity than in conventional systems.
8. Data model must be developed
Perhaps the most important advantage of setting up a database system is the requirement that an overall
data model for the enterprise be built. In conventional systems, it is more likely that files will be designed
as the needs of particular applications demand, and the overall view is often not considered. Building an overall view of the enterprise data, although often an expensive exercise, is usually very cost-effective in the long term.

78 Qt.6(b)2002 4M Enumerate the key purpose and name one example of Database Man-
agement System

A relational database is a collection of data items organized as a set of formally-described tables from
which data can be accessed or reassembled in many different ways without having to reorganize the
database tables.
The standard user and application program interface to a relational database is the structured query
language (SQL). SQL statements are used both for interactive queries for information from a relational
database and for gathering data for reports.
In addition to being relatively easy to create and access, a relational database has the important advan-
tage of being easy to extend. After the original database creation, a new data category can be added
without requiring that all existing applications be modified.
A relational database is a set of tables containing data fitted into predefined categories. Each table
(which is sometimes called a relation) contains one or more data categories in columns. Each row
contains a unique instance of data for the categories defined by the columns. For example, a typical
business order entry database would include a table that described a customer with columns for name,
address, phone number, and so forth. Another table would describe an order: product, customer, date,
sales price, and so forth. A user of the database could obtain a view of the database that fitted the user’s
needs. For example, a branch office manager might like a view or report on all customers that had
bought products after a certain date. A financial services manager in the same company could, from the
same tables, obtain a report on accounts that needed to be paid.
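The order-entry example above can be sketched in SQL, here driven through Python's built-in sqlite3 module; the table names, columns and sample rows are invented for illustration rather than taken from any particular system:

    # Sketch of the order-entry example: customer and order data held in two
    # related tables and queried for customers who bought after a given date.
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT, phone TEXT);
        CREATE TABLE orders   (id INTEGER PRIMARY KEY, customer_id INTEGER,
                               product TEXT, order_date TEXT, sales_price REAL,
                               FOREIGN KEY (customer_id) REFERENCES customer(id));
        INSERT INTO customer VALUES (1, 'Mehta & Co', '022-5551'),
                                    (2, 'Rao Traders', '022-7772');
        INSERT INTO orders VALUES (1, 1, 'Printer', '2002-03-01', 9000),
                                  (2, 2, 'Scanner', '2001-11-20', 6000);
    """)

    # The branch manager's "view": customers that bought products after a date.
    rows = con.execute("""
        SELECT DISTINCT customer.name
        FROM customer JOIN orders ON orders.customer_id = customer.id
        WHERE orders.order_date > ?
    """, ("2002-01-01",)).fetchall()
    print(rows)                                   # [('Mehta & Co',)]

The same two tables could equally serve the financial manager's accounts report; one set of tables supporting many different views is the essence of the relational approach.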
When creating a relational database, you can define the domain of possible values in a data column and
further constraints that may apply to that data value. For example, a domain of possible customers could allow up to ten possible customer names, but one table might be constrained to allow only three of those names to be specified.
The definition of a relational database results in a table of metadata or formal descriptions of the tables,
columns, domains, and constraints. A database management system (DBMS), sometimes just called a
database manager, is a program that lets one or more computer users create and access data in a data-
base. The DBMS manages user requests (and requests from other programs) so that users and other
programs are free from having to understand where the data is physically located on storage media and,
in a multi-user system, who else may also be accessing the data. In handling user requests, the DBMS
ensures the integrity of the data (that is, making sure it continues to be accessible and is consistently
organized as intended) and security (making sure only those with access privileges can access the data).
The most typical DBMS is a relational database management system (RDBMS); widely used examples include Oracle and Microsoft Access. A standard user and program interface is the Structured Query Language (SQL). A newer kind of DBMS is the object-oriented database management system (ODBMS). A DBMS can be thought of as a file manager that
manages data in databases rather than files in file systems.

Reference Books:
Analysis & Design of Information Systems - James A. Senn
Introduction to Computers - Peter Norton

Guidance
Prof. Vanita Patel
Prof. Pradeep Pendse
Prof. Sumedha Sabharwal

Answers Contributed by
Attar Ayyaz N – MMM
Divadkar Aditya - MHRDM
Mahadik Parag – MIM
Mirchandaney Dhiraj - MIM
Monthero Mable – MIM
Morje Pradnya - MHRDM
Rego John Xavier – MMM

October 2003

