
Chapter 1: An Overview of Computer System

1.1 Definition and characteristics of computers


What is a computer?
The word computer is derived from "compute", meaning to calculate. A computer is therefore an
electronic device designed to perform calculation and control operations. Note that calculations
can be numeric (such as addition and subtraction) or non-numeric (such as logical operations
using AND, OR, ...).
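The distinction between numeric and non-numeric (logical) calculation can be sketched in a few lines of Python (an illustration only; the chapter itself names no programming language):

```python
# Numeric calculation: addition, subtraction, ...
a, b = 7, 3
print(a + b)   # 10
print(a - b)   # 4

# Non-numeric (logical) calculation: operations using AND, OR, ...
p, q = True, False
print(p and q)  # False
print(p or q)   # True
```

Both kinds of operation are carried out by the same machine; only the interpretation of the data differs.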

Characteristics of Computers
The increasing popularity of computers has proved that they are very powerful and useful tools.
Computers have the following characteristics:
 Automatic. Computers are automatic; once started on a job, they carry on until the job is
finished, normally without any human intervention.

 Speed. A computer is a very fast device. Its units of time are the microsecond (10^-6
second), the nanosecond (10^-9 second), and even the picosecond (10^-12 second). A powerful
computer is capable of performing several billion (10^9) simple arithmetic operations per second.

 Accuracy. The accuracy of a computer is consistently high, and the degree of accuracy of
a particular computer depends upon its design. Accuracy also depends on the instructions
given and the type of machine being used. Correct instructions produce correct results, but
faulty instructions produce faulty results. This phenomenon is called Garbage In,
Garbage Out (GIGO).
 Diligence. Unlike human beings, a computer is free from boredom, tiredness, lack of
concentration, etc., and hence can work for hours without creating any error and without
grumbling.
 Reliability: a measure of the consistency of a computer's performance. Computers are
reliable because they do not require human intervention during processing.

 Storage capacity: computers can store large amounts of data and retrieve the required
information when it is needed. Main memory is comparatively small, so bulk data is kept on
secondary storage devices such as CD-ROMs and hard disks. A CD-ROM stores up to about
700 MB, and a typical hard disk of the time stored up to about 80 GB.
 Versatility. Versatility is one of the most wonderful things about the computer. A
computer is capable of performing almost any task provided that the task can be reduced
to a series of logical steps.
 Power of Remembering. A computer can store and recall huge amounts of information
because of its secondary storage capability.

 No I.Q. A computer is not a magical device. It can only perform tasks that a human being
can. The difference is that it performs these tasks with unthinkable speed and accuracy. It
possesses no intelligence of its own. Its I.Q. is zero. It has to be told what to do and in
what sequence. Hence, only the user can determine what tasks a computer will perform.

 No Feelings. Computers are devoid of emotion. They have no feelings and no instinct
because they are machines. Computers cannot make judgments on their own. Their
judgment is based on the instructions given to them in the form of programs that are
written by us.
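The speed characteristic above can be probed with a short sketch (Python is assumed here purely for illustration; an interpreted language runs far slower than the hardware's raw several-billion-operations-per-second rate, so the printed number is only a loose lower bound on what the machine itself can do):

```python
import time

def additions_per_second(duration=0.25):
    """Count how many simple additions complete in `duration` seconds."""
    count = 0
    total = 0
    end = time.perf_counter() + duration
    while time.perf_counter() < end:
        total += 1   # one simple arithmetic operation
        count += 1
    return count / duration

# Interpreter overhead dominates here, yet the rate is still large.
print(f"about {additions_per_second():,.0f} additions per second")
```

Even this overhead-laden loop typically completes hundreds of thousands of additions per second, hinting at the far greater speed of the underlying hardware.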

In general terms, computers can be used for the following purposes:


 To make the work easy.
 To do the work efficiently (Efficiency).
 To improve the quality of the result of work (Quality).
 To assist the work (Assistance).
 To finish the task faster (Speed).
Evolution of Computers
Most people are familiar with the exciting things done by computers. Over the last 50 years
computers have been used in many great human achievements including the manned/unmanned
exploration of space, automation in the workplace, technological advancement and of course the

internet, to name a few. The following topics discuss some important events in the evolution of
computers.

Mechanical Computers

 The first actual calculating mechanism known to us is the abacus, which is thought to have
been invented by the Babylonians sometime between 1,000 BC and 500 BC. Although the
abacus does not qualify as a mechanical calculator, it certainly stands proud as one of the first
mechanical aids to calculation.
 In the early 1600s, a Scottish mathematician called John Napier invented a tool called
Napier's Bones, which were multiplication tables inscribed on strips of wood or bone.
Napier also invented logarithms, which greatly assisted in arithmetic calculations.
 In 1621, an English mathematician and clergyman called William Oughtred used Napier's
logarithms as the basis for the slide rule. However, although the slide rule was an
exceptionally effective tool that remained in common use for over three hundred years, like
the abacus it also does not qualify as a mechanical calculator.
 Blaise Pascal is credited with the invention of the first operational calculating machine. In
1640, Pascal started developing a device to help his father add sums of money. The first
operating model, the Arithmetic Machine, was introduced in 1642. However, Pascal's
device could only add and subtract, while multiplication and division operations were
implemented by performing a series of additions or subtractions.
 The first multi-purpose, i.e. programmable, computing device was probably Charles
Babbage's Analytical Engine, designed in 1842. Babbage had earlier begun a simpler,
special-purpose machine, the Difference Engine, in 1823, but neither machine was ever
completed by him. Babbage was truly a man ahead of his time:
many historians think the major reason he was unable to complete these projects was the
fact that the technology of the day was not reliable enough. In spite of never building a
complete working machine, Babbage and his colleagues, most notably Ada, Countess of
Lovelace, recognized several important programming techniques, including conditional
branches, iterative loops and index variables.
 A machine inspired by Babbage's design was arguably the first to be used in computational
science. George Scheutz read of the difference engine in 1833, and along with his son

Edvard Scheutz began work on a smaller version. By 1853 they had constructed a machine
that could process 15-digit numbers and calculate fourth-order differences.

 One of the first commercial uses of mechanical computers was by the US Census Bureau,
which used punch-card equipment designed by Herman Hollerith to tabulate data for the
1890 census. In 1911 Hollerith's company merged with a competitor to found the
corporation which in 1924 became International Business Machines.
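The programming techniques credited above to Babbage and Lovelace, conditional branches, iterative loops and index variables, all look like this in a modern language (Python, purely illustrative):

```python
numbers = [3, 8, 2, 7]
total = 0
for i in range(len(numbers)):   # iterative loop with index variable i
    if numbers[i] % 2 == 0:     # conditional branch
        total += numbers[i]
print(total)  # 10 (sum of the even entries 8 and 2)
```

That these three ideas were recognized more than a century before electronic computers existed is a measure of how far ahead of their time Babbage and Lovelace were.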

Generation of Computers
Computers' efficiency and functionality have been enhanced through the emergence of different
computer technologies. This development process can be classified into generations of
computers. "Generation" in computer terms is a step in technology. The major characteristics that
distinguish the various generations are the following:
 Dominant type of electronic circuit elements used.
 Major secondary storage media.
 Computer languages used.
 Type or characteristics of operating system used.
 Memory access time.

First Generation Electronic computers (1942-1955)

 Vacuum tubes were used as the switching devices. The first general purpose
programmable electronic computer was the Electronic Numerical Integrator and
Computer (ENIAC), built by J. Presper Eckert and John V. Mauchly at the University of
Pennsylvania.

 Through the use of a memory that was large enough to hold both instructions and data,
and using the program stored in memory to control the order of arithmetic operations,
another first generation computer, EDVAC (Electronic Discrete Variable Computer), was
able to run orders of magnitude faster than ENIAC.

 The memory of these computers was constructed using electromagnetic relays and all
data and instructions were fed into the system from punched cards.

 The instructions were written in machine language and, from the 1950s onward, in
assembly language.

Second Generation Electronic computers (1955-1964)

 Transistors (invented in 1947) were used as the switching devices. Transistors are
much smaller in size, more rugged, more reliable, faster, and consume less power than their
vacuum tube counterparts. The second-generation computers were thus more powerful,
more reliable, less expensive, smaller, and cooler to operate than the first-generation
computers.

 The memory of the second-generation computers was composed of magnetic cores.
Magnetic cores were small rings made of ferrite which could be magnetized in either the
clockwise or anti-clockwise direction.

 High-level programming languages like FORTRAN, COBOL, ALGOL, and SNOBOL
were developed, which were easier to understand and work with than assembly and
machine languages.

Third Generation Electronic computers (1964-1975)

 Around 1964 Integrated Circuits (ICs) made their appearance and the third-generation
computers were constructed around these ICs. ICs are circuits consisting of several
electronic components like transistors, resistors, and capacitors grown on a single chip of
silicon eliminating wired interconnection between components. ICs were much smaller,
less expensive to produce, more rugged and reliable, faster in operation, dissipated less
heat, and consumed much less power than circuits built by wiring electronic components.
The third-generation computers were thus more powerful, more reliable, less expensive,
smaller, and cooler to operate than the second-generation computers.

 Parallel advancements in storage technologies allowed the construction of larger
magnetic-core memories, and larger-capacity magnetic disks and magnetic tapes.
Third-generation computers typically had a few megabytes (< 5 MB) of main memory and
magnetic disks capable of storing a few tens of megabytes of data per disk drive.

 Efforts were made to standardize some of the existing high-level programming languages
like FORTRAN IV and COBOL 68. Programs written in these languages were thus able
to run on any computer that had the corresponding compiler. Other high-level programming languages
like PL/1, PASCAL and BASIC were also introduced in this generation.

Fourth Generation Electronic computers (1975-1989)

 The average number of electronic components packed on silicon doubled each year after
1965. The progress soon led to the era of large scale integration (LSI) with over 30000
electronic components integrated on a single chip, followed by very large-scale integration
(VLSI) when it was possible to integrate about one million electronic components on a
single chip. This progress led to a dramatic development – the creation of a
microprocessor. A microprocessor contains all the circuits needed to perform arithmetic
and logic operations as well as control functions, the core of all computers, on a single
chip. Hence it became possible to build a complete computer with a microprocessor, a few
additional primary storage chips, and other support circuitry. It started a new social
revolution – the Personal Computer (PC) revolution.

 Magnetic core memories were replaced by semiconductor memories, with very fast access
time. Hard disks also became cheaper, smaller in size, and larger in capacity and thus
became the standard in-built secondary storage device for all types of computer systems.
Floppy disks also became very popular as a portable medium for moving programs and
data from one computer system to another.
 The fourth-generation also saw the advent of supercomputers based on parallel vector
processing and symmetric multiprocessing technologies.

 Fourth-generation period also saw the spread of high-speed computer networking. LAN
and WAN became popular for connecting computers.

 Operating systems like MS-DOS and Windows made their appearance in this generation.
Several new PC-based application packages like word processors, spreadsheets and others
were developed during this generation. In the area of large-scale computers,

multiprocessing operating systems and concurrent programming languages were popular
technological developments. The UNIX operating system also became very popular for
use on large-scale systems. Some other software technologies that became popular during
the fourth-generation period are C programming language, object-oriented software design,
and object-oriented programming.

Fifth Generation Electronic computers (1989 – Present)

 The fifth Generation was characterized by VLSI technology being replaced by ULSI (Ultra
Large Scale Integration) technology, with microprocessor chips having ten million
electronic components. In fact, the speed of microprocessor and size of main memory and
hard disk doubled almost every eighteen months. The result was that many of the features
found in the CPUs of large mainframe systems of the third and fourth generations became
part of the microprocessor architecture in the fifth generation. More compact and more
powerful computers are being introduced almost every year at more or less the same price
or even cheaper.
 The size of main memory and hard disk storage has increased severalfold. Memory sizes
of 256 MB to 4 GB and hard disk sizes on the order of 100 GB are common. RAID
(Redundant Array of Independent Disks) technology has made it possible to configure a
bunch of disks as a single hard disk with a total size of a few hundred gigabytes. Optical
disks (CD-ROMs, DVDs) also emerged as popular portable mass storage media.
 This generation also saw more powerful supercomputers based on parallel processing
technology.
 Communication technologies became faster day by day, and more and more
computers got networked together, resulting in the Internet.
 In the area of operating systems, some of the concepts that gained popularity during the
fifth-generation include micro kernels (operating systems being modeled and designed in a
modular fashion), multithreading (a popular way to improve application performance
through parallelism), and distributed operating systems (an operating system for a set of
computers networked together with the aim of making the multiple computers of the
system appear as a single large virtual system to its users).

 In the area of programming languages, some of the concepts that gained popularity during
the fifth-generation include JAVA programming language, and parallel programming
libraries like MPI (Message Passing Interface) and PVM (Parallel Virtual Machine).

Generation   Time         Circuit Elements       Storage Devices   Languages              Operating Systems     Access Time
First        1950s        Vacuum tubes           Punched cards     Machine & Assembly     Operator controlled   1 millisecond
Second       1959-1964    Discrete transistors   Magnetic tape     COBOL, FORTRAN         Batch                 10 microseconds
Third        1965-1970    ICs                    Magnetic disk     Structured languages   Interactive           100 nanoseconds
Fourth       Late 1970s   VLSI                   Mass storage      Application oriented   Virtual               1 nanosecond

1.4 Types of computer


Computers can be categorized broadly by their application, the type of data they process and
their size.

By Application

 General-purpose computers can be used for different purposes. We need only have
appropriate software to use a general-purpose computer to perform a particular task.
For example, the personal computers (PCs) currently in wide use are general-purpose
computers.

 Special-purpose computers are specifically designed to perform one particular task.


A computer that guides a missile is, for example, a special-purpose computer.

By Type of Data

 Analog computers process data that vary continuously with time, such as variations
in temperature, speed, chemical composition of petroleum products, or current
flowing through a conductor. Analog computers operate by measuring. They deal
with continuous variables. They do not compute directly with numbers; rather, they
operate by measuring physical magnitudes such as pressure, temperature, voltage,
and current. Generally, they are computers designed for special purposes.
E.g. thermometer, voltmeter, speedometer

 Digital computers process digital data. All the PCs currently in wide use are digital
computers. Digital computers deal with discrete variables. They operate by counting
rather than measuring. They operate directly upon numbers (or digits) that represent
numbers, letters, or other special symbols. A digital computer is a computing device in
which data is represented by discrete numerical quantities, held as discrete voltage
states (1s and 0s). Digital computers have much higher accuracy and speed than
analog ones.
E.g. Personal Computers
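The phrase "discrete voltage states (1s and 0s)" can be made concrete with a few lines of Python (illustrative only): numbers and characters alike reduce to binary digits.

```python
# A number is stored as a pattern of binary digits.
n = 13
print(bin(n))          # 0b1101

# A letter is likewise stored as a number, and that number as bits.
ch = 'A'
code = ord(ch)         # 65 in ASCII/Unicode
print(code, bin(code)) # 65 0b1000001

# Converting back recovers the original symbol.
print(chr(int('1000001', 2)))  # A
```

Every kind of data a digital computer handles, text, images, sound, is ultimately encoded this way.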

By Size

 Supercomputers are the most powerful and expensive computers; they are used for
problems requiring complex calculations. A supercomputer is generally characterized
as being the fastest, most powerful, and most expensive computer. Supercomputers are
largely used by research organizations, military defense systems, national weather
forecasting agencies, large corporations, aircraft manufacturers, etc. The CRAY and
CYBER are typical examples of supercomputers.

 Mainframe computers are mainly found in large organizations. They can serve
hundreds or thousands of users, handling massive amounts of input, output and
storage. They are used as e-commerce servers handling transactions over the Internet.
To give some examples, mainframes can handle the processing of thousands of
customer inquiries, employee paychecks, student registrations, sale transactions, and
inventory changes.

 Minicomputers, more properly called medium-sized computers, are smaller, slower
and less expensive than mainframes. Minis perform many of the tasks that a
mainframe can, but on a reduced scale. They can support a network of user terminals,
but not as many as mainframes can. They may be used as network servers and Internet
servers. They are popularly used in scientific laboratories, research centers,
universities and colleges, engineering firms, industrial process monitoring and control,
etc.

 Workstations are powerful single-user computers used for tasks that require a great
deal of number-crunching power, such as product design and computer animation.
They are often used as network and Internet servers. Workstations were powerful,
sophisticated machines that fit on a desk, cost a great deal, and were used mainly by
engineers and scientists for technical purposes.

 Microcomputers, or personal computers, are meant for personal or private use.
Microcomputers come in a variety of sizes and shapes for a variety of purposes.
Basically they can be grouped into three: laptop, palmtop and desktop computers.
Laptop computers are smaller versions of microcomputers, about the size of a briefcase,
designed for portability. A palmtop computer is the smallest microcomputer, about
the same size as a pocket calculator. The desktop computer is the most widely used type
of personal computer (microcomputer).

1.2 Applications of modern computers and future computing trends

Computers can be used in countless areas; to list some of them:


 Commercial Application
In this area of application the emphasis is on data processing. It covers the use of computers
for clerical, administrative and business uses.

 Scientific , engineering and research applications


Here, the emphasis is on scientific processing. This covers the use of computers for complex
calculations, the design, analysis, and control of physical systems, and the analysis of
experimental data or results.

 Computers in Education
Computers are widely used in the educational field, for instruction and for administration.
Computers can provide instructions and ask questions of the user. This kind of instruction is
called CAL (Computer Aided Learning) or CAI (Computer Aided Instruction).

 Computer in Medicine
Computers can be used as an aid to medical research by analyzing data produced from
the trial of drugs, to aid diagnosis, and to hold details of patients.

 Computers in Manufacturing
Some aspects of computer use in manufacturing are stock and production control and
engineering design. The design, manufacturing and testing processes are all becoming
increasingly computerized; hence the terms CAD (Computer Aided Design) and CAM
(Computer Aided Manufacturing) are used in this area of application.

Computer trends are changes or evolutions in the ways that computers are used which
become widespread and integrated into popular thought with regard to these systems. These
movements often begin with one or two companies adopting or promoting a new technology,
which grabs the attention of others and becomes popular. Both hardware and software can be
a part of computer trends, such as the development and proliferation of mobile devices
including smart phones and tablets.
Changes in the Internet, the development of new websites, and the expansion of cloud
computing models are likely to be similar software trends throughout the early part of the
21st Century. Much like changing fashions in clothing, trends in computers indicate the types
of technology or concepts that are popular at a given time. This can occur in a number of
ways, including a company introducing new technology to a market and customers finding
that they can use certain products more effectively than others.
As these changes happen, computer trends typically evolve and grow over time, so that
popular technology one year may be considered outdated the next. Identifying the next

major trend, and finding a way to get in on it ahead of time, can be substantially profitable
for companies that work with technology.

1.3 Components of computing system

 Computing hardware trends


There are five key trends driving the hardware renaissance:
A. Hacking hardware is becoming easier for software people. 
Numerous innovations are making it easier than ever to develop hardware. The benefits
of 3D printing (quicker and cheaper prototyping) are well publicized, but there are other
innovations too. For example, there's the Arduino Robot Kit to experiment with projects
that move; UDOO, which combines Android, Linux, and Arduino in a tiny single-board
computer to interface with sensors and actuators; and Spark Core, which is the easiest
and most open way of creating cloud-connected hardware experiments.
These innovations give software developers the freedom to stretch beyond the limits of
their three screens (PC, smartphone, tablet) without worrying about getting burned by
soldering irons.
B. Connectivity changes the customer expectation. The common hardware purchase
model was always "one and done"; customers bought their hardware and that was it. With
today's influx of connected devices, consumers expect more than great hardware.
Connected software now defines the hardware experience. Examples include wireless
wearable devices that track a person's activities or connected home devices that
encourage a greener lifestyle.
The merging of hardware and software poses a significant challenge for hardware
giants like Sony or Panasonic that have primarily focused on hardware. Their hardware
might be brilliant, yet the software experience is often lagging. This gives startups a great
opportunity to create better connected experiences.
C. Crowdfunding has changed the relationship between brands and
retailers. Crowdfunding sites like Kickstarter and Indiegogo have helped fill the funding
void for hardware projects. These sites are also giving brands a direct route to the
customer. Traditionally, the retailer sat as a gatekeeper between hardware brand and

customer. Now, brands can build a customer base, regardless of distribution channel.
Additionally, as we have seen often, such as with the Pebble smart watch, customers are
willing to purchase a concept well before the product exists. This gives unknown brands
and young startups an unprecedented ability to compete.
D. Open hardware increases the speed of innovation. The open source movement when
applied to hardware accelerates innovation, enabling developers to build derivatives of
the original design, such as alternate use cases and accessories. With open source
hardware, developers and startups don't need to seek the approval of the creator. They can
just start working, without any patent or licensing hoops to jump through.
E. The maker movement is increasing the talent pool. The increased focus on hardware
brought about by the maker movement is rapidly bringing a new influx of hardware
developers to the market. With access to a bigger talent pool, startups (and established
companies too) can develop products more quickly and at a lower cost. The associated
lower costs and faster time to market can be a game changer.

 Software Vs. program


• Components of software

• Software= Program+ Documentation+ Operating Procedures


• Software is more than programs.
• It consists of programs, documentation of any facet of the program, and the procedures
used to set up and operate the software system.

• Any program is a subset of software; it becomes software only once its documentation and
operating procedure manuals are prepared.
• Program is a combination of source code and object code.
• Documentation consists of different types of manuals.

• Operating procedures consist of instructions to set up and use the software system and
instructions on how to react to system failure.
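The statement that a program is a combination of source code and object code can be illustrated with Python's own byte-compiler standing in for a traditional compiler (an analogy, not the chapter's own example: the .py file is the source code, and the compiled .pyc file plays the role of object code):

```python
import os
import py_compile
import tempfile

# Write a tiny piece of source code to a temporary directory.
src = os.path.join(tempfile.mkdtemp(), "hello.py")
with open(src, "w") as f:
    f.write('print("hello")\n')

# Byte-compile it: the .pyc file is the machine-oriented form of the
# program, analogous to object code produced by a traditional compiler.
compiled = py_compile.compile(src, cfile=src + "c")
print(os.path.exists(compiled))  # True
```

Source and compiled form together constitute the program; only when documentation and operating procedures are added does the result qualify as software in the sense defined above.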

 Software Applications
Application software is a type of computer software that is designed to be employed by end
users to accomplish specific tasks such as writing a letter, editing a photograph or playing a
video file. The term refers not only to the software program itself but also to the
implementation of that program and to the use of the capabilities and power of the computer
platform running the operating system under which the application software runs. For
example, the act of installing a what-you-see-is-what-you-get (WYSIWYG) web
design program, configuring it and using that program to create web pages is the essence of
this type of software.

An application software program for making spreadsheets.


There are certain characteristics that are seen with this type of software that are not seen with
system programs. Among those characteristics is the presence of some type of user interface,
which generally is a graphical one, hence the term "graphical user interface" (GUI). These types
of programs often offer end users the freedom to create what is known as user-written software
in the form of templates used for word processing and accounting, word processor macros that
automate small tasks and even filters for the management of electronic mail. Text editors figure
among the long list of the different types of application software and are probably one of the best
examples of how these programs allow for the development of more programs.

A web developer could be the end user of a text editor in which he or she can code client- and
server-side scripts to accomplish a wide variety of functionality for web pages. For example, the
developer could write a script in the text editor that pulls content from a database to serve as the
dynamic content of a web page. The text editor is the application software that was employed to
create the script; the script is itself a small application, but it is not application software in the
sense that the text editor in which it was created is.

When application programs are bundled together, the bundle is almost always referred to as an
application suite. The programs in a suite generally interact with one another in that
they can all be used to create one file that makes use of their various capabilities. For example,
an application software suite might be composed of a word processor, a spreadsheet, an image
manipulation program and a drawing program. If the user can do things such as embed a
spreadsheet into a document created by the word processor, there is interactivity in the suite.
Application software can be for personal use, or it can be enterprise software that accomplishes
many different tasks, such as creating presentations, translating documents into a foreign
language or editing video and audio files.

Types of Application Software

The different types of application software include the following:

Application Software Type                          Examples
Word processing software                           MS Word, WordPad and Notepad
Database software                                  Oracle, MS Access, etc.
Spreadsheet software                               Apple Numbers, Microsoft Excel
Multimedia software                                Real Player, Media Player
Presentation software                              Microsoft PowerPoint, Keynote
Enterprise software                                Customer relationship management systems
Information worker software                        Documentation tools, resource management tools
Educational software                               Dictionaries: Encarta, Britannica; Mathematical: MATLAB; Others: Google Earth, NASA World Wind
Simulation software                                Flight and scientific simulators
Content access software                            Media players, web browsers
Application suites                                 OpenOffice, Microsoft Office
Software for engineering and product development   IDEs (Integrated Development Environments)

 Characteristics of good Software


A software product must meet all the requirements of the customer or end user. Also, the
cost of developing and maintaining the software should be low, and its development
should be completed within the specified time frame.
The three characteristics of good application software are:
1) Operational characteristics
2) Transition characteristics
3) Revision characteristics
What operational characteristics should software have?
These are functionality-based factors, related to the 'exterior quality' of software.
The various operational characteristics of software are:
a) Correctness: The software we are making should meet all the
specifications stated by the customer.
b) Usability/Learnability: The amount of effort or time required to learn how to
use the software should be small. This makes the software user-friendly even for
IT-illiterate people.
c) Integrity: Just as medicines have side effects, software may have side effects,
i.e. it may affect the working of another application. A quality software product
should not have such side effects.
d) Reliability: The software product should not have any defects; moreover,
it shouldn't fail during execution.
e) Efficiency: This characteristic relates to the way software uses the available
resources. The software should make effective use of storage space and execute
commands within the desired timing requirements.
f) Security: With the increase in security threats nowadays, this factor is gaining
importance. The software shouldn't have ill effects on data or hardware, and proper
measures should be taken to keep data secure from external threats.
g) Safety: The software should not be hazardous to the environment or to life.
What are the Revision Characteristics of software?
These engineering-based factors relate to the 'interior quality' of the software, such as efficiency, documentation and structure, and should be built into any good software. The various revision characteristics of software are:
a) Maintainability: Maintenance of the software should be easy for any kind of user.
b) Flexibility: Changes in the software should be easy to make.
c) Extensibility: It should be easy to increase the functions performed by the software.
d) Scalability: It should be easy to upgrade the software for more work (or for a larger number of users).
e) Testability: Testing the software should be easy.
f) Modularity: Software is said to be made of units and modules that are independent of each other. These modules are then integrated to make the final software. If the software is divided into separate independent parts that can be modified and tested separately, it has high modularity.
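The modularity described above, where independent units are integrated into the final software and can be tested separately, can be sketched in a few lines of Python (the function names here are hypothetical, chosen only for illustration):

```python
# A hypothetical sketch of modularity: two independent units that can be
# modified and tested separately, then integrated into the final software.

def count_words(text):
    """Unit 1: count the words in a piece of text."""
    return len(text.split())

def average_word_length(text):
    """Unit 2: mean word length; independent of unit 1."""
    words = text.split()
    if not words:
        return 0.0
    return sum(len(w) for w in words) / len(words)

def summarize(text):
    """Integration: the final software combines the independent units."""
    return {"words": count_words(text), "avg_len": average_word_length(text)}

print(summarize("software should be modular"))  # {'words': 4, 'avg_len': 5.75}
```

Because each unit is independent, a defect in `average_word_length` can be fixed and retested without touching `count_words`, which is exactly what high modularity buys.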
Transition Characteristics of the software :

a) Interoperability: Interoperability is the ability of software to exchange information with other applications and make use of that information transparently.
b) Reusability: If the software code can be used, with some modifications, for a different purpose, the software is said to be reusable.
c) Portability: The ability of software to perform the same functions across all environments and platforms demonstrates its portability.
The importance of each of these factors varies from application to application. In systems where human life is at stake, integrity and reliability must be given prime importance. In business-related applications, usability and maintainability are the key factors to consider. In software engineering, the quality of the software is everything, so always try to deliver a product that has all these characteristics and qualities.

 Types and Classes of Software

1. System Software

 System software is a type of computer software that is designed to run the computer hardware and the application programs. It is the platform provided to the computer system on which other computer programs can execute. System software acts as a middle layer between the user applications and the hardware. The operating system is a type of system software, and it is used to manage all the other programs installed on the computer.
 The other purpose of system software is to translate inputs received from other sources and convert them into a language that the machine can understand. The BIOS (basic input/output system) is another type of system software that runs when the computer system starts and is used to manage the data between the hardware devices (video adapter, mouse, keyboard and printer) and the operating system. System software also provides the functionality for the user to use the hardware directly, via device driver programs.

 The boot loader is the system software program that loads the operating system into the main memory of the computer, i.e. into random access memory (RAM). Another example of system software is the assembler, which takes computer instructions as input and converts them into bits so that the processor can read them and perform computer operations.
 Another example of system software is a device driver, which is used to control a specific device connected to the computer system, such as a mouse or keyboard. Device driver software converts the input/output instructions of the OS into messages that the device can read and understand. System software can run in the background or can be executed directly by the user.

2. Application Software

 The other category of software is application software, which is designed for users to perform specific tasks such as writing a letter, listening to music or watching a video. Each such requirement calls for specific software, and software that is designed for a specific purpose is known as application software. The operating system runs the application software on the computer system.
 The difference between system software and application software lies in the user interface. System software generally has no user interface, whereas each piece of application software has a user interface so that users can easily use it. The user cannot see or work directly in system software such as the operating system, but in application software the user can see and work through a graphical user interface. The user also has the option to create user-written software for personal use.
 Templates are available that users can employ to create user-written programs. Application software can be bundled together, and such a bundle is known as an application suite; an example of an application suite is Microsoft Office. Word processor software is designed by combining various small programs into one single program that can be used for writing text, creating a spreadsheet or creating presentations. Other examples of application software are Mozilla Firefox and Internet Explorer. These kinds of application software can be used for searching for articles and text on the web and for interacting with the outside world.

3. Programming Languages

 A programming language is the third category of computer software; it is used by programmers to write their programs, scripts and instructions that can be executed by a computer. Another name for a programming language is a computer language, which can be used to create some common standards. Programming languages can be considered the bricks used to construct computer programs and operating systems. Examples of programming languages are Java, C and C++, among others.
 There is always some similarity between programming languages; the main difference is the syntax, which makes them distinct. The programmer uses the syntax and rules of the programming language to write programs. Once the source code is written by a programmer in an IDE (Integrated Development Environment), the programmer compiles that code into machine language, which can be understood by the computer. Programming languages are used to develop websites, applications and many other programs.
 A programming language can be broadly divided into two major elements: syntax and semantics. A programming language follows a sequence of operations so that the desired output can be achieved. Programs written in a high-level language are easy to read and easy to understand; Java, C and C++ are considered high-level languages. The other category of programming language is low-level language.
 Low-level languages include machine language and assembly language. Assembly language contains a list of instructions that are not easy to read and understand, and machine language contains binary codes that can be read directly by the CPU and are not in a human-readable form. Low-level languages can be directly understood by the computer hardware.
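The gap between readable high-level source and low-level instructions can be glimpsed with Python's standard `dis` module, which prints the bytecode the interpreter generates from a function. (Bytecode is not machine code, so this is only an analogy for the compile/assemble step described above.)

```python
import dis

# High-level source: easy for a human to read and understand.
def add(a, b):
    return a + b

# Translate and display the low-level instruction listing. The exact
# opcodes vary between Python versions, but the listing shows instructions
# such as LOAD_FAST and RETURN_VALUE rather than readable source code.
dis.dis(add)
```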

1.4 Software development process

The Software Development Life Cycle (SDLC) is a process used by the software industry to design, develop and test high-quality software. The SDLC aims to produce high-quality software that meets or exceeds customer expectations and reaches completion within time and cost estimates.

SDLC is the acronym of Software Development Life Cycle.

It is also called the Software Development Process.

SDLC is a framework defining the tasks performed at each step in the software development process.

ISO/IEC 12207 is an international standard for software life-cycle processes. It aims to be the standard that defines all the tasks required for developing and maintaining software.

What is SDLC?

SDLC is a process followed for a software project within a software organization. It consists of a detailed plan describing how to develop, maintain, replace and alter or enhance specific software. The life cycle defines a methodology for improving the quality of software and the overall development process.

The following figure is a graphical representation of the various stages of a typical SDLC.

A typical Software Development Life Cycle consists of the following stages −

Stage 1: Planning and Requirement Analysis

Requirement analysis is the most important and fundamental stage in the SDLC. It is performed by the senior members of the team with inputs from the customer, the sales department, market surveys and domain experts in the industry. This information is then used to plan the basic project approach and to conduct a product feasibility study in the economic, operational and technical areas.

Planning for the quality assurance requirements and identification of the risks associated with
the project is also done in the planning stage. The outcome of the technical feasibility study is to
define the various technical approaches that can be followed to implement the project
successfully with minimum risks.

Stage 2: Defining Requirements

Once the requirement analysis is done, the next step is to clearly define and document the product requirements and get them approved by the customer or the market analysts. This is done through an SRS (Software Requirement Specification) document, which consists of all the product requirements to be designed and developed during the project life cycle.

Stage 3: Designing the Product Architecture

The SRS is the reference for product architects to come up with the best architecture for the product to be developed. Based on the requirements specified in the SRS, usually more than one design approach for the product architecture is proposed and documented in a DDS - Design Document Specification.

This DDS is reviewed by all the important stakeholders and, based on various parameters such as risk assessment, product robustness, design modularity, and budget and time constraints, the best design approach is selected for the product.

A design approach clearly defines all the architectural modules of the product along with their communication and data flow representation with the external and third-party modules (if any). The internal design of all the modules of the proposed architecture should be clearly defined, down to the minutest detail, in the DDS.

Stage 4: Building or Developing the Product

In this stage of the SDLC, the actual development starts and the product is built. The programming code is generated as per the DDS during this stage. If the design is performed in a detailed and organized manner, code generation can be accomplished without much hassle.

Developers must follow the coding guidelines defined by their organization, and programming tools like compilers, interpreters and debuggers are used to generate the code. Different high-level programming languages such as C, C++, Pascal, Java and PHP are used for coding. The programming language is chosen according to the type of software being developed.

Stage 5: Testing the Product

This stage is usually a subset of all the stages, since in modern SDLC models testing activities are involved in all stages of the SDLC. However, this stage refers to the testing-only stage of the product, where product defects are reported, tracked, fixed and retested until the product reaches the quality standards defined in the SRS.

Stage 6: Deployment in the Market and Maintenance

Once the product is tested and ready to be deployed, it is released formally in the appropriate market. Sometimes product deployment happens in stages, as per the business strategy of the organization. The product may first be released in a limited segment and tested in the real business environment (UAT - User Acceptance Testing).

Then, based on the feedback, the product may be released as is or with suggested enhancements in the targeted market segment. After the product is released in the market, its maintenance is done for the existing customer base.

SDLC Models

There are various software development life cycle models defined and designed that are followed during the software development process. These models are also referred to as "Software Development Process Models". Each process model follows a series of steps unique to its type to ensure success in the process of software development.

Following are the most important and popular SDLC models followed in the industry −

Waterfall Model

Iterative Model

Spiral Model

V-Model

Big Bang Model

Other related methodologies are the Agile Model, the RAD (Rapid Application Development) Model and Prototyping Models.

The Waterfall Model was the first Process Model to be introduced. It is also referred to as
a linear-sequential life cycle model. It is very simple to understand and use. In a waterfall

model, each phase must be completed before the next phase can begin and there is no
overlapping in the phases.

The Waterfall model is the earliest SDLC approach that was used for software development.

The waterfall Model illustrates the software development process in a linear sequential flow.
This means that any phase in the development process begins only if the previous phase is
complete. In this waterfall model, the phases do not overlap.

Waterfall Model - Design

The Waterfall approach was the first SDLC model to be used widely in software engineering to ensure the success of a project. In the Waterfall approach, the whole process of software development is divided into separate phases. In this model, typically, the outcome of one phase acts as the input for the next phase, sequentially.

The following illustration is a representation of the different phases of the Waterfall Model.

The sequential phases in Waterfall model are −

 Requirement Gathering and analysis − All possible requirements of the system to be
developed are captured in this phase and documented in a requirement specification
document.

 System Design − The requirement specifications from the first phase are studied in this phase and the system design is prepared. This system design helps in specifying hardware and system requirements and helps in defining the overall system architecture.

 Implementation − With inputs from the system design, the system is first developed in
small programs called units, which are integrated in the next phase. Each unit is
developed and tested for its functionality, which is referred to as Unit Testing.

 Integration and Testing − All the units developed in the implementation phase are
integrated into a system after testing of each unit. Post integration the entire system is
tested for any faults and failures.

 Deployment of system − Once the functional and non-functional testing is done, the product is deployed in the customer environment or released into the market.

 Maintenance − There are some issues which come up in the client environment. To fix
those issues, patches are released. Also to enhance the product some better versions are
released. Maintenance is done to deliver these changes in the customer environment.

All these phases are cascaded into each other, with progress seen as flowing steadily downwards (like a waterfall) through the phases. The next phase starts only after the defined set of goals is achieved for the previous phase and it is signed off, hence the name "Waterfall Model". In this model, phases do not overlap.

Waterfall Model - Application

Every piece of software developed is different and requires a suitable SDLC approach based on internal and external factors. Some situations where the use of the Waterfall model is most appropriate are −

 Requirements are very well documented, clear and fixed.

 Product definition is stable.

 Technology is understood and is not dynamic.

 There are no ambiguous requirements.

 Ample resources with required expertise are available to support the product.

 The project is short.

Waterfall Model - Advantages

The advantage of waterfall development is that it allows for departmentalization and control. A schedule can be set with deadlines for each stage of development, and the product can proceed through the development process model phases one by one.

Development moves from concept through design, implementation, testing, installation and troubleshooting, and ends up at operation and maintenance. Each phase of development proceeds in strict order.

Some of the major advantages of the Waterfall Model are as follows −

 Simple and easy to understand and use

 Easy to manage due to the rigidity of the model. Each phase has specific deliverables and
a review process.

 Phases are processed and completed one at a time.

 Works well for smaller projects where requirements are very well understood.

 Clearly defined stages.

 Well understood milestones.

 Easy to arrange tasks.

 Process and results are well documented.

Waterfall Model - Disadvantages

The disadvantage of waterfall development is that it does not allow much reflection or revision. Once an application is in the testing stage, it is very difficult to go back and change something that was not well documented or thought through in the concept stage.

The major disadvantages of the Waterfall Model are as follows −

 No working software is produced until late during the life cycle.

 High amounts of risk and uncertainty.

 Not a good model for complex and object-oriented projects.

 Poor model for long and ongoing projects.

 Not suitable for projects where requirements are at a moderate to high risk of changing; risk and uncertainty are high with this process model.

 It is difficult to measure progress within stages.

 Cannot accommodate changing requirements.

 Adjusting scope during the life cycle can end a project.

 Integration is done as a "big bang" at the very end, which does not allow identifying any technological or business bottlenecks or challenges early.

In the Iterative model, the process starts with a simple implementation of a small set of the software requirements and iteratively enhances the evolving versions until the complete system is implemented and ready to be deployed.

An iterative life cycle model does not attempt to start with a full specification of requirements.
Instead, development begins by specifying and implementing just part of the software, which is
then reviewed to identify further requirements. This process is then repeated, producing a new
version of the software at the end of each iteration of the model.

Iterative Model - Design

Iterative process starts with a simple implementation of a subset of the software requirements
and iteratively enhances the evolving versions until the full system is implemented. At each
iteration, design modifications are made and new functional capabilities are added. The basic
idea behind this method is to develop a system through repeated cycles (iterative) and in smaller
portions at a time (incremental).

The following illustration is a representation of the Iterative and Incremental model −

Iterative and incremental development is a combination of iterative design (the iterative method) and the incremental build model. "During software development, more than one iteration of the software development cycle may be in progress at the same time." This process may be described as an "evolutionary acquisition" or "incremental build" approach.

In this incremental model, the whole requirement is divided into various builds. During each
iteration, the development module goes through the requirements, design, implementation and
testing phases. Each subsequent release of the module adds function to the previous release. The
process continues till the complete system is ready as per the requirement.

The key to a successful use of an iterative software development lifecycle is rigorous validation
of requirements, and verification & testing of each version of the software against those
requirements within each cycle of the model. As the software evolves through successive
cycles, tests must be repeated and extended to verify each version of the software.

Iterative Model - Application

Like other SDLC models, Iterative and incremental development has some specific applications
in the software industry. This model is most often used in the following scenarios −

 Requirements of the complete system are clearly defined and understood.

 Major requirements must be defined; however, some functionalities or requested enhancements may evolve with time.

 There is a time-to-market constraint.

 A new technology is being used and is being learnt by the development team while
working on the project.

 Resources with needed skill sets are not available and are planned to be used on contract
basis for specific iterations.

 There are some high-risk features and goals which may change in the future.

Iterative Model - Pros and Cons

The advantage of this model is that there is a working model of the system at a very early stage of development, which makes it easier to find functional or design flaws. Finding issues at an early stage of development enables the team to take corrective measures within a limited budget.

The disadvantage of this SDLC model is that it is applicable only to large and bulky software development projects, because it is hard to break a small software system into further small serviceable increments/modules.

The advantages of the Iterative and Incremental SDLC Model are as follows −

 Some working functionality can be developed quickly and early in the life cycle.

 Results are obtained early and periodically.

 Parallel development can be planned.

 Progress can be measured.

 Less costly to change the scope/requirements.

 Testing and debugging during smaller iteration is easy.

 Risks are identified and resolved during iteration; and each iteration is an easily managed
milestone.

 Easier to manage risk - High risk part is done first.

 With every increment, operational product is delivered.

 Issues, challenges and risks identified from each increment can be utilized/applied to the
next increment.

 Risk analysis is better.

 It supports changing requirements.

 Initial Operating time is less.

 Better suited for large and mission-critical projects.

 During the life cycle, software is produced early which facilitates customer evaluation
and feedback.

The disadvantages of the Iterative and Incremental SDLC Model are as follows −

 More resources may be required.

 Although the cost of change is lower, it is still not very suitable for frequently changing requirements.

 More management attention is required.

 System architecture or design issues may arise because not all requirements are gathered
in the beginning of the entire life cycle.

 Defining increments may require definition of the complete system.

 Not suitable for smaller projects.

 Management complexity is more.

 The end of the project may not be known, which is a risk.

 Highly skilled resources are required for risk analysis.

 Project progress is highly dependent upon the risk analysis phase.

The spiral model combines the idea of iterative development with the systematic, controlled aspects of the waterfall model. The Spiral model is thus a combination of the iterative development process model and the sequential linear development model (the waterfall model), with a very high emphasis on risk analysis. It allows incremental releases of the product, or incremental refinement, through each iteration around the spiral.

Spiral Model - Design

The spiral model has four phases. A software project repeatedly passes through these phases in
iterations called Spirals.

Identification

This phase starts with gathering the business requirements in the baseline spiral. In the
subsequent spirals as the product matures, identification of system requirements, subsystem
requirements and unit requirements are all done in this phase.

This phase also includes understanding the system requirements by continuous communication
between the customer and the system analyst. At the end of the spiral, the product is deployed in
the identified market.

Design

The Design phase starts with the conceptual design in the baseline spiral and involves
architectural design, logical design of modules, physical product design and the final design in
the subsequent spirals.

Construct or Build

The Construct phase refers to the production of the actual software product at every spiral. In the baseline spiral, when the product is just thought of and the design is being developed, a POC (Proof of Concept) is developed in this phase to get customer feedback.

Then, in the subsequent spirals, with higher clarity on requirements and design details, a working model of the software, called a build, is produced with a version number. These builds are sent to the customer for feedback.

Evaluation and Risk Analysis

Risk Analysis includes identifying, estimating and monitoring the technical feasibility and
management risks, such as schedule slippage and cost overrun. After testing the build, at the end
of first iteration, the customer evaluates the software and provides feedback.

The following illustration is a representation of the Spiral Model, listing the activities in each
phase.

Based on the customer evaluation, the software development process enters the next iteration
and subsequently follows the linear approach to implement the feedback suggested by the
customer. The process of iterations along the spiral continues throughout the life of the
software.

Spiral Model Application

The Spiral Model is widely used in the software industry as it is in sync with the natural
development process of any product, i.e. learning with maturity which involves minimum risk
for the customer as well as the development firms.

The following pointers explain the typical uses of a Spiral Model −

 When there is a budget constraint and risk evaluation is important.

 For medium to high-risk projects.


 Long-term project commitment because of potential changes to economic priorities as
the requirements change with time.

 The customer is not sure of their requirements, which is usually the case.

 Requirements are complex and need evaluation to get clarity.

 New product line which should be released in phases to get enough customer feedback.

 Significant changes are expected in the product during the development cycle.

Spiral Model - Pros and Cons

The advantage of the spiral lifecycle model is that it allows elements of the product to be added in when they become available or known. This ensures that there is no conflict with previous requirements and design.

This method is consistent with approaches that have multiple software builds and releases
which allows making an orderly transition to a maintenance activity. Another positive aspect of
this method is that the spiral model forces an early user involvement in the system development
effort.

On the other hand, it takes very strict management to complete such products, and there is a risk of running the spiral in an indefinite loop. So the discipline of change and the extent to which change requests are accepted are very important for developing and deploying the product successfully.

The advantages of the Spiral SDLC Model are as follows −

 Changing requirements can be accommodated.

 Allows extensive use of prototypes.

 Requirements can be captured more accurately.

 Users see the system early.

 Development can be divided into smaller parts and the risky parts can be developed
earlier which helps in better risk management.

The disadvantages of the Spiral SDLC Model are as follows −

 Management is more complex.

 End of the project may not be known early.

 Not suitable for small or low risk projects and could be expensive for small projects.

 Process is complex.

 Spiral may go on indefinitely.

 Large number of intermediate stages requires excessive documentation.

The V-model is an SDLC model where execution of processes happens in a sequential manner
in a V-shape. It is also known as Verification and Validation model.

The V-Model is an extension of the waterfall model and is based on the association of a testing
phase for each corresponding development stage. This means that for every single phase in the
development cycle, there is a directly associated testing phase. This is a highly-disciplined
model and the next phase starts only after completion of the previous phase.

V-Model - Design

Under the V-Model, the corresponding testing phase of the development phase is planned in
parallel. So, there are Verification phases on one side of the ‘V’ and Validation phases on the
other side. The Coding Phase joins the two sides of the V-Model.

The following illustration depicts the different phases in a V-Model of the SDLC.

V-Model - Verification Phases

There are several Verification phases in the V-Model, each of these are explained in detail
below.

Business Requirement Analysis

This is the first phase in the development cycle where the product requirements are understood
from the customer’s perspective. This phase involves detailed communication with the customer
to understand his expectations and exact requirement. This is a very important activity and
needs to be managed well, as most of the customers are not sure about what exactly they need.
The acceptance test design planning is done at this stage as business requirements can be used
as an input for acceptance testing.

System Design

Once you have clear and detailed product requirements, it is time to design the complete system. The system design comprises understanding and detailing the complete hardware and communication setup for the product under development. The system test plan is developed based on the system design. Doing this at an earlier stage leaves more time for the actual test execution later.

Architectural Design

Architectural specifications are understood and designed in this phase. Usually more than one
technical approach is proposed and based on the technical and financial feasibility the final
decision is taken. The system design is broken down further into modules taking up different
functionality. This is also referred to as High Level Design (HLD).

The data transfer and communication between the internal modules and with the outside world
(other systems) is clearly understood and defined in this stage. With this information,
integration tests can be designed and documented during this stage.

Module Design

In this phase, the detailed internal design for all the system modules is specified; this is referred to as Low Level Design (LLD). It is important that the design is compatible with the other modules in the system architecture and the other external systems. Unit tests are an essential part of any development process and help eliminate the maximum number of faults and errors at a very early stage. These unit tests can be designed at this stage based on the internal module designs.

Coding Phase

The actual coding of the system modules designed in the design phase is taken up in the Coding
phase. The best suitable programming language is decided based on the system and architectural
requirements.

The coding is performed based on the coding guidelines and standards. The code goes through
numerous code reviews and is optimized for best performance before the final build is checked
into the repository.

Validation Phases

The different Validation Phases in a V-Model are explained in detail below.

Unit Testing

Unit tests designed in the module design phase are executed on the code during this validation
phase. Unit testing is testing at the code level and helps eliminate bugs at an early stage,
though not all defects can be uncovered by unit testing.
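For example, a unit test exercises a single function in isolation. The sketch below uses Python's standard unittest framework; the discount function and its expected values are hypothetical, chosen only to show the shape of a unit test.

```python
import unittest


def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


class ApplyDiscountTest(unittest.TestCase):
    def test_normal_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)


if __name__ == "__main__":
    unittest.main(exit=False, verbosity=2)
```

Note that each test checks one behaviour, including the error case; a defect that only appears when modules interact would not be caught here, which is why integration testing follows.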

Integration Testing

Integration testing is associated with the architectural design phase. Integration tests are
performed to test the coexistence and communication of the internal modules within the system.
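To illustrate the idea (both modules here are invented for the sketch), an integration test checks that data produced by one module is consumed correctly by another, rather than testing either module alone:

```python
# Hypothetical internal module 1: parsing raw input lines.
def parse_order(line: str) -> dict:
    """Turn a line of the form 'item,qty,unit_price' into a record."""
    item, qty, price = line.split(",")
    return {"item": item, "qty": int(qty), "unit_price": float(price)}


# Hypothetical internal module 2: billing.
def order_total(order: dict) -> float:
    """Compute the total cost of a parsed order record."""
    return order["qty"] * order["unit_price"]


def test_parser_and_billing_integrate():
    """Integration test: the record format produced by parse_order
    must match what order_total expects."""
    order = parse_order("widget,3,2.50")
    assert order_total(order) == 7.5


test_parser_and_billing_integrate()
print("integration test passed")
```

If the parser changed its output format, each module's own unit tests might still pass while this integration test fails, which is exactly the class of defect this phase targets.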

System Testing

System testing is directly associated with the system design phase. System tests check the entire
system functionality and the communication of the system under development with external
systems. Most of the software and hardware compatibility issues can be uncovered during this
system test execution.

Acceptance Testing

Acceptance testing is associated with the business requirement analysis phase and involves
testing the product in the user environment. Acceptance tests uncover compatibility issues with
the other systems available in the user environment. They also uncover non-functional issues,
such as load and performance defects, in the actual user environment.

V-Model - Application

The V-Model is applied in much the same way as the waterfall model, since both models are
sequential. Requirements have to be very clear before the project starts, because it is
usually expensive to go back and make changes. This model is used in strictly disciplined
domains such as medical device development.

The following are some of the scenarios in which the V-Model is most suitable:

 Requirements are well defined, clearly documented and fixed.

 Product definition is stable.

 Technology is not dynamic and is well understood by the project team.

 There are no ambiguous or undefined requirements.

 The project is short.

V-Model - Pros and Cons

The advantage of the V-Model is that it is very easy to understand and apply, and its
simplicity also makes it easy to manage. The disadvantage is that the model is not flexible
to change: if a requirement changes, which is very common in today's dynamic world, making
the change becomes very expensive.

The advantages of the V-Model method are as follows −

 This is a highly disciplined model and phases are completed one at a time.

 Works well for smaller projects where requirements are very well understood.

 Simple and easy to understand and use.

 Easy to manage due to the rigidity of the model. Each phase has specific deliverables and
a review process.

The disadvantages of the V-Model method are as follows −

 High risk and uncertainty.

 Not a good model for complex and object-oriented projects.

 Poor model for long and ongoing projects.

 Not suitable for the projects where requirements are at a moderate to high risk of
changing.

 Once an application is in the testing stage, it is difficult to go back and change
functionality.

 No working software is produced until late during the life cycle.

1.5 Problem solving using computers

 Defining a problem in the context of a software solution

Software engineering is about problem-solving first, coding second. Why? Computers need to be
told exactly what to do; they cannot make assumptions the way a human would when given vague
instructions. Secondly, software engineers are tasked with designing features and applications
that may not yet exist, so it is their job to build the user interface on the front end and
the data infrastructure on the back end to power it from scratch.

For this reason, the hardest part of being a software engineer is not understanding
programming languages and frameworks or even algorithms. Rather, it’s stringing many
instructions together to accomplish something useful.

What types of problems do software engineers solve?

Software developers work on a range of tasks, from pure coding to system-level design and
troubleshooting. Much of an engineer's time is spent "debugging", that is, detecting and
correcting errors and bugs in the code that cause the program to break or behave
unexpectedly. Using a programming language is a lot like writing: a solid grasp of grammar
and sentence construction is more important than memorizing the entire dictionary.
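As a small, made-up illustration of debugging, consider an averaging function with a classic off-by-one bug: reported averages came out slightly too low because the sum was divided by one more than the number of values. The fix, plus a guard the bug report also exposed, might look like this:

```python
def average(values):
    """Return the arithmetic mean of a sequence of numbers."""
    # Buggy version (kept as a comment): dividing by len(values) + 1
    # made every average come out too low.
    #     return sum(values) / (len(values) + 1)
    if not values:  # also guard against a crash on empty input
        raise ValueError("average() of an empty sequence")
    return sum(values) / len(values)


assert average([2, 4, 6]) == 4.0  # regression check for the fix
```

Keeping a small regression test like the final assertion is what stops the same bug from quietly reappearing in a later change.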

