2014 Software Design and Development Notes
HSC NOTES

TABLE OF CONTENTS

9.1 Development and Impact of Software Solutions
9.1.1 Social and Ethical Issues
  The Impact of Software
  Rights and Responsibilities of Software Developers
  Software Piracy and Copyright
  Use of Networks
  The Software Market
  Legal Implications
9.1.2 Application of Software Development Approaches
  Software Development Approaches
  Use of Computer Aided Software Engineering
  Methods of Installation of New or Updated Systems
  Employment Trends in Software Development
  Trends in Software Development
9.4 Options
9.4.1 Option 1: Programming Paradigms
RELIANCE ON SOFTWARE
Household appliances such as ovens, TVs, washing machines and stereos all rely on software to operate. All types of motorised transport are controlled by software. Utilities rely on software. Government authorities are all software reliant in some form. The software development industry has a huge responsibility to ensure all these systems are reliable and perform their various functions accurately.
SOCIAL NETWORKING
A social network is simply a group of people who associate with each other. Recently the concept of social networks has rapidly expanded into the online world. Private and personal information is all too easily shared, and this can lead to a variety of problems including identity theft, stalking and cyber bullying. Comments made on an online site can remain accessible indefinitely. These comments may be embarrassing when viewed by some future employer or by future children or friends.
CYBER SAFETY
The online world of the internet includes a variety of different dangers and security issues. All internet-connected devices have risks. Cyber safety is about minimising the risk of such dangers, particularly for children. Some of the recommendations for cyber safety are:
- Location-based services: these make it easy for unknown people to find you, and also alert them that you are not home. It is wise to turn off location tracking unless it is required
- Unwanted contact: if you receive a message from an unknown person, do not respond
- Cyber bullying: spreading false rumours, teasing or making threats online is unacceptable
- Online friends: be aware that people may not be who they claim to be; it is prudent not to share private information with those you do not know in real life
- Online purchasing: it is prudent to restrict online purchases to well known organisations and to examine the details and conditions of sale
- Identity theft: criminals can obtain sufficient information about you to use your credit card, access your bank account or take out loans in your name. Be careful with your personal information
EVALUATING AVAILABLE INFORMATION
Huge amounts of information are publicly available through the internet. Information is uploaded by anybody regardless of his or her qualifications, expertise or experience, and in many cases it is difficult to identify the author. On the internet biased, unsupported, unverifiable, misleading and often incorrect information is common. When reading online information you should question:
Ergonomics is the study of the interactions between human workers and their work environment. As the user interface provides the connection between software and users, the design and operation of the screens is of primary interest. Usability testing should then be performed throughout the design process to ensure the software is ergonomically sound. User support, including help screens and other forms of user training and assistance, is another important area.
INCLUSIVITY ISSUES
Inclusive software should take into account the different users who will likely use the product. Software developers have a responsibility to ensure software is accessible to all regardless of their culture, economics, gender or disability. Software products that do not take into account the different characteristics of users are less likely to secure a significant market share.
Cultural background: Developers must understand the needs of other cultures. Names, numbers, currency, times and dates are common areas of difference. For example, in some Asian cultures people do not have just a given name and a surname, and in some countries different calendars are used. Many large-scale systems are able to utilise any number of languages, so the application can be customised to suit any foreign language in an effort to increase inclusivity.
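The idea of customising an application to suit different languages can be sketched as a simple message catalogue with a fallback language. This is a minimal illustration only, not any particular library's API; the catalogue contents and language codes are made-up examples.

```python
# Minimal sketch of language customisation via a message catalogue.
# The catalogue entries and language codes are illustrative only.
MESSAGES = {
    "en": {"greeting": "Welcome", "farewell": "Goodbye"},
    "fr": {"greeting": "Bienvenue", "farewell": "Au revoir"},
}

def translate(key, lang, fallback="en"):
    """Look up a message for the chosen language, falling back to English."""
    catalogue = MESSAGES.get(lang, MESSAGES[fallback])
    return catalogue.get(key, MESSAGES[fallback][key])

print(translate("greeting", "fr"))  # Bienvenue
print(translate("greeting", "de"))  # Welcome (no German catalogue, so it falls back)
```

Real systems keep such catalogues in external files so new languages can be added without changing the program's code.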
Economic background: Software developers have a responsibility to ensure consideration is given to the economic situation of purchasers of software products. To achieve equality of access to technologies such as software requires that the technology is available at a cost that is economically viable for a wide audience.
Gender: Both men and women should be included in the software design and development process. Programming is viewed as a technical, mathematical process with rigid boundaries. Research shows that men tend to dominate in these types of occupations. Since men are engaged in the creation of software, it follows that some bias is likely to exist towards males within the software products they develop.
Disability: Software design should include functionality that allows software to be used and accessed by a wide range of users. The computer has revolutionised the lives of many disabled people. To ensure maximum accessibility for users with a disability, software designers have a number of tools and techniques at their disposal. The choice of larger fonts helps those with visual disabilities. Colour should not be used as the sole means of conveying information, so that the colour-blind can still use the software. Software should not rely on sound as the sole method of communicating information, so that deaf people are included in the use of the software.
INTELLECTUAL PROPERTY
Intellectual property is property resulting from mental labour. Therefore all types of authors, including software developers, create intellectual property. With most literary works it is clear who the author is, and therefore it is clear who owns the intellectual property rights. In the case of software products the situation is often not clear: all the people and companies involved in the original creation of a software package are authors and have intellectual property rights in regard to their particular original contribution. Different contributors to a software product will have differing requirements in regard to their intellectual property. Copyright laws are designed to protect the intellectual property rights of all the authors involved.
MALWARE
Malware is software that is intended to damage or disable computers and computer systems. Software developers have a responsibility to ensure their products do not contain malware. To make sure this is the case they should check any new data and software being added to their computers for viruses. Purchasers of software have the right to expect that software they purchase is free of all malware. Developers can be held responsible for distributing malware with their products.
PRIVACY ISSUES
Privacy is about protecting an individual's personal information. Privacy is a fundamental principle of our society, and we have the right to know who holds our personal information. Personal information is legitimately required by many organisations when carrying out their various functions. This creates a problem: how do we ensure this information is used only for its intended task, and how do we know what these intended tasks are? Laws are needed that require organisations to provide individuals with answers to these questions. In this way individuals can protect their privacy. A consequence of the Privacy Act 1988 is that information systems that contain personal information must legally be able to:
- Explain why personal information is being collected and how it will be used
- Provide individuals with access to their records
- Divulge details of other organisations that may be provided with information from the system
The final quality of a software development project is an important responsibility for all software developers. Developing high quality applications is time consuming and costly, and often compromises have to be made because of financial constraints. Quality software is developed as a result of thorough planning and testing. Many companies have developed sets of standards to which their employees must adhere. Quality assurance is used to ensure that these standards are observed. It attempts to make sure that customer expectations are met and/or exceeded. All customers have a right to have their expectations met, and it is the responsibility of the developer to make sure this occurs. It is in the developer's best interest to know the customer's requirements and to exceed their expectations if they are to continue to operate profitably. Factors that affect software quality include:
Software developers have a responsibility to ensure that any problems users encounter with their product are resolved in a timely, accurate and efficient manner. The developer needs to ensure there is a mechanism in place to assist in the identification of errors and their subsequent resolution.
Software licences are intended to enforce the intellectual property rights of software developers. These licence agreements are enforceable by law, including copyright laws. As well as protecting the intellectual property rights of software developers, licence agreements also protect developers from legal action should their products result in hardship or financial loss to purchasers.
Licence terminology:
- Licence: formal permission or authority to use a product. A licence does not give users ownership of the software; rather they are granted the right to use the software
- Agreement: a mutual arrangement between parties
- Term: the period of time the agreement is in force
- Warranty: an assurance of some sort. Software products normally contain limited warranties
- Limited use: software licences do not give purchasers unrestricted use of the product. Commonly usage of a software product is restricted to a single machine, and copying of the product is not permitted
- Liability: an obligation or debt as a consequence of some event. Licence agreements normally restrict the liability of the software developer to replacing the product or refunding the purchase price should an error or other problem occur
- Program: refers to the computer software. This usually includes both executable files and included data files
- Reverse engineer: in terms of software, this usually means the process of decompiling the product
- Backup copy: a copy of the software made for archival purposes
CLASSIFICATIONS OF SOFTWARE IN TERMS OF COPYRIGHT
COMMERCIAL
recognised and that modified products must be released using the same unrestricted open source licence.
SHAREWARE
Shareware is covered by copyright. As with commercial software, you are acquiring a licence to use the product. Purchasers are allowed to make and distribute copies of the software, but once you have tested the software and decided to use it, you must pay for it. As with commercial licences, decompilation, reverse engineering and modifications are not permitted.
PUBLIC DOMAIN
Software becomes public domain when the copyright holder explicitly relinquishes all rights to the software. Just because a product does not bear a copyright symbol does not mean that it is not covered by copyright; public domain software must be clearly marked as such.
OWNERSHIP VERSUS LICENSING
The user does not generally own software obtained from outside sources. The software developer who is the author of the product retains ownership of the product.
REVERSE ENGINEERING
Analysing a product and its parts to understand how it works and to recreate its original design, usually with the purpose of creating a similar product based on this design.
DECOMPILATION
The opposite of compilation: translating machine executable code into a higher-level language. This allows the program's design to be more easily understood.
TECHNOLOGIES USED TO COMBAT SOFTWARE PIRACY
Without some form of protection, software is simple to copy and it is virtually impossible to determine that a copy has been made. Over the years a variety of different technologies and strategies have been used in an attempt to minimise the possibility of piracy.
NON-COPYABLE DATASHEET
The user needs to enter codes from the datasheet to continue using the software. For example, some software products included datasheets printed using inks that could not be copied, so when the software was pirated the copy came without a usable datasheet.
HARDWARE SERIAL NUMBERS
A variety of hardware components within every computer include an embedded serial number, which cannot be altered. Software can be designed so that it examines these serial numbers, and if they do not match then the program will not execute.
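The idea above can be sketched as storing a fingerprint of the machine's serial numbers at install time and comparing it at every launch. This is a minimal illustration; the serial number strings are made-up examples, and real products use more elaborate machine-binding schemes.

```python
import hashlib

# Sketch: derive a machine fingerprint from hardware serial numbers and
# refuse to run if it does not match the fingerprint stored at install time.
# The serial numbers below are made-up examples.
def fingerprint(serials):
    joined = "|".join(serials)
    return hashlib.sha256(joined.encode()).hexdigest()

stored = fingerprint(["CPU-123456", "DISK-ABCDEF"])  # saved during installation

def may_execute(current_serials, stored_fingerprint):
    return fingerprint(current_serials) == stored_fingerprint

print(may_execute(["CPU-123456", "DISK-ABCDEF"], stored))  # True
print(may_execute(["CPU-999999", "DISK-ABCDEF"], stored))  # False - different machine
```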
SITE LICENCE INSTALLATION COUNTER ON A NETWORK
In larger organisations software is often installed from a network server. The organisation purchases a site licence, which specifies the maximum number of machines that may either install or simultaneously execute the product.
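The server-side rule can be sketched as a counter of active machines. A real licence server tracks this across the network; the in-memory class below is only an illustration of the limit being enforced, with hypothetical machine names.

```python
# Sketch of a server enforcing a site licence's maximum number of
# simultaneously executing machines. Machine IDs are hypothetical.
class SiteLicence:
    def __init__(self, max_concurrent):
        self.max_concurrent = max_concurrent
        self.active = set()  # machines currently running the product

    def start(self, machine_id):
        if len(self.active) >= self.max_concurrent and machine_id not in self.active:
            return False  # licence limit reached, refuse to run
        self.active.add(machine_id)
        return True

    def stop(self, machine_id):
        self.active.discard(machine_id)

licence = SiteLicence(max_concurrent=2)
print(licence.start("pc-01"))  # True
print(licence.start("pc-02"))  # True
print(licence.start("pc-03"))  # False - limit of 2 reached
licence.stop("pc-01")
print(licence.start("pc-03"))  # True - a slot was freed
```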
REGISTRATION CODE
A registration code is used to activate software products during the initial stage of the installation process. For single software licences the registration code is unique to your installation. Some software applies an algorithm to the registration code so that it can be verified as correct.
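Verifying a code with an algorithm can be sketched with a simple check digit. The scheme below (sum of digits modulo 10) is a made-up illustration, far weaker than anything a real product would use; it only shows the installer recomputing the check and rejecting codes that fail it.

```python
# Sketch of registration-code verification with a check digit.
# The mod-10 scheme here is illustrative only.
def make_code(serial_digits):
    check = sum(int(d) for d in serial_digits) % 10
    return serial_digits + str(check)

def verify_code(code):
    serial = code[:-1]
    return make_code(serial) == code  # recompute the check digit and compare

code = make_code("8241975")
print(code)                  # 82419756
print(verify_code(code))     # True - valid code passes
print(verify_code("82419750"))  # False - tampered check digit fails
```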
ENCRYPTION KEY
Encryption effectively scrambles the data or executable code in such a way that it is virtually impossible to make sense of. The encryption key is required to reverse the encryption process. Once the key is entered the software is decrypted.
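The key reversing the scrambling can be shown with a toy cipher. XOR with a repeating key is not secure and is not what real products use; it only illustrates that the same key that scrambled the bytes is needed to recover them.

```python
from itertools import cycle

# Toy illustration: the same key that scrambles the bytes unscrambles them.
# XOR with a repeating key is NOT a secure cipher.
def xor_crypt(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

plain = b"executable code"
key = b"secret"
scrambled = xor_crypt(plain, key)
print(scrambled != plain)            # True - unreadable without the key
print(xor_crypt(scrambled, key))     # b'executable code' - the key reverses it
print(xor_crypt(scrambled, b"wrong") == plain)  # False - wrong key fails
```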
BACK-TO-BASE AUTHENTICATION
Back-to-base authentication means the application contacts the software publisher's server to verify that the user or computer holds a valid software licence. If the licence is verified as correct then the software will execute.
USE OF NETWORKS
A network is a collection of computers connected electronically to facilitate the transfer of data and software. The widespread use of networks, including the internet, has revolutionised the development of software. The ability to collaborate and share resources greatly improves productivity. Changes to the user interface, in terms of how data is entered and retrieved across the network, can greatly improve response times.
BY THE DEVELOPER WHEN DEVELOPING SOFTWARE
Currently all but the smallest software applications are developed collaboratively, by a team of developers. In many cases the programmers have never met in person and may live in different countries. Without networks, in particular the internet, many of the applications we use on a daily basis would not exist. In addition to communicating with other developers, many programming language libraries and related documentation reside on the web. During coding, programmers are able to research and then in an instant download and incorporate existing source code into their application.
BY THE USER WHEN USING NETWORK BASED SOFTWARE
Even sites that generally respond quickly can experience poor response times during periods of high activity. For software developers, response times can be difficult to control with certainty. Although media-rich content can look amazing, it will have little impact if nobody can be bothered waiting for it to load. When embedded video and large images are included, response times will be severely compromised. Text based content is many times smaller in terms of file size compared to video and images.
- Price: consumer-based pricing involves looking at what the consumer wants and how much they are willing to pay for it
- Promotion: advertising can be helpful to consumers who are investigating what is available in the marketplace. These advertising activities should provide accurate information that is inoffensive and not misleading
THE EFFECT OF DOMINANT DEVELOPERS OF SOFTWARE
If one player's product dominates, then it becomes the default purchase. This is largely what has occurred in the OS market and also in the word processor and spreadsheet markets. It requires effort on the part of the user to even purchase a computer without Windows.
THE IMPACT OF NEW DEVELOPERS OF SOFTWARE AND NEW PRODUCTS
Although there are companies and products which dominate existing software markets, there is still room for new players. Often new developers emerge due to inventiveness; that is, they invent a new software product that breaks new ground. Both Google and Facebook are prominent examples of companies that began with a bright idea. Currently small software applications for mobile phones are creating a significant market. Apps for Android and Apple can sell in the millions. Otherwise unknown developers are able to access an enormous worldwide market using the Android and Apple stores.
LEGAL IMPLICATIONS
There are many significant social and ethical issues that need to be considered by those in the business of creating and distributing software. Software implemented on systems throughout a country can result in significant legal action if the software contravenes the law of the country in some way, or if its development does not comply with the legal contract between the software developer and the customer. The legal actions may be the result of copyright breaches, or can arise when software does not perform as intended. National legal cases include:
Agile methods remove the need for detailed requirements and complex design documentation. Agile emphasises teamwork rather than following predefined, structured development processes. Characteristics of an agile software development approach:

Disadvantages:
- Low levels of documentation
END-USER DEVELOPMENT
When the end-user develops the software themselves. Useful as the user solves his or her own problem quickly. There is a lack of formal stages.

Advantages:
- The needed system is produced quickly
- The user has the knowledge to develop it for themselves
Disadvantages:
- Short lifespan
- Repetition of processes
PROTOTYPING
Prototyping has high interaction between customers and developers. A prototype is a model of a software system that enables evaluation of features and functions in an operational scenario. The prototypes are created to progressively refine user requirements. Throughout the development process the customer successively validates the product. Each successive prototype will better meet the original requirements for the final product. Often the prototypes are simply interactive models of the user interface. Prototypes are more about getting the GUI right and will not deal with security or error recovery. Information-gathering prototypes are developed to gather information that can be used in another program. They concentrate on input and output with minimal processing; an information-gathering prototype is never intended to be a full working program. Evolutionary prototypes become the full working program.

Advantages:
- Low cost projects
- Small team needed
STRUCTURED APPROACH
The structured approach involves very structured, step-by-step stages. Each stage of the development cycle must be completed before progressing to the next step. This is because one stage will not work without the previous stage having been completed: you cannot implement a solution until you have considered potential solutions, and potential solutions cannot be known until the problem is defined. The structured approach is the most lengthy and costly approach. It involves many people including systems analysts, software engineers, programmers, graphic designers, consultants, managers and trainers. The stages work as follows:
1. Defining the problem. This involves thoroughly understanding and defining the problem. Little time and cost will be required to fix any problems found at this phase. It takes one third of the development time, and the end report may need to be repeatedly modified until all personnel are convinced that it will solve the problem
2. Planning the solution. Further understanding of user needs and methods to solve the problem are undertaken; data will be collected from surveys and interviews to provide a basis for the decision. Plan what data and structures are to be used (dataflow diagrams, IPO charts, structure charts), design algorithms, plan the user interface (screen designs and storyboards) and plan scheduling (Gantt chart)
3. Building the solution. Stepwise refinement is used, i.e. the overall problem is divided into smaller, more easily managed modules. This gives greater efficiency, as all members can design and test different modules separately, all at the same time. The program is coded according to the algorithms, and modules are tested throughout code development to ensure there are no errors
4. Testing the solution. Testing the solution with real data, with acceptance and beta testing. When a program is free of errors it is passed by the systems analyst, and then the management approves the program
5. Maintenance. Modification required after the program has been implemented. Updates lead to greater efficiency and solve bugs caused by external factors. Changes are very expensive at this stage, and are only undertaken if the changes are small and crucial. If a major update is needed, better results could be achieved by beginning the development cycle again

Disadvantages:
- Time consuming
- Expensive
- Production of code
- Production of documentation
- Test data generation
- Software versioning
ORACLE DESIGNER
Oracle Designer is a CASE tool that specialises in system design, from creating a process model through various system modelling diagrams and finally to the creation of source code and user interfaces. Oracle Designer uses an area to store preferences used in the generation of the final source code and user interfaces. The code created can then be loaded into the language's development environment for further modification before being compiled. This product is aimed at teams of developers, and hence it allows multiple users to access and modify aspects of the development of a product. Each object in the project contains its own versioning system. Previous versions of objects are maintained, so unwanted changes can always be discarded.
AXIOMSYS
AxiomSys is a system requirements modelling CASE tool. It can be used to generate dataflow diagrams for systems of all types, not just software systems. This tool allows for the creation of modularised dataflow diagrams together with data dictionary capabilities in regard to all data flows.
DATAFACTORY
DataFactory is a test data generation CASE tool. This tool is able to create random test data in large quantities. It is especially suited to testing large, data-oriented applications. Hundreds of thousands of records can be generated using data of user-specified data types. Products such as DataFactory allow real world testing of applications without the need for massive amounts of data to be manually input or scanned.
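The idea of generating many records of user-specified types can be sketched in a few lines. This is only an illustration in the spirit of such a tool, not DataFactory's actual interface; the field names and type labels are made-up.

```python
import random
import string

# Sketch of random test data generation: produce many records whose fields
# follow a user-specified type. Field names and type labels are illustrative.
def random_record(spec, rng):
    record = {}
    for field, kind in spec.items():
        if kind == "int":
            record[field] = rng.randint(0, 10_000)
        elif kind == "name":
            record[field] = "".join(rng.choices(string.ascii_uppercase, k=8))
        elif kind == "date":
            record[field] = f"20{rng.randint(10, 23)}-{rng.randint(1, 12):02d}-{rng.randint(1, 28):02d}"
    return record

rng = random.Random(42)  # seeded so test runs are repeatable
spec = {"customer_id": "int", "surname": "name", "joined": "date"}
records = [random_record(spec, rng) for _ in range(1000)]
print(len(records))  # 1000 generated records
```

Scaling the loop up produces the hundreds of thousands of records needed for realistic load testing.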
to be performing satisfactorily, then the system is installed and used by all. This is useful for new products, as it ensures functionality is at a level that can perform in a real operational setting. The method also allows a base of users to learn the new system. These users can then assist with the training of others once the system has been fully installed.
PHASED
The phased method of installation from an old system to a new one involves a gradual introduction of the new system whilst the old system is progressively discarded. This can be achieved by introducing new parts of the product one at a time while the older parts being replaced are removed. Often phased conversion is used because the product, as a whole, is still under development.
DIRECT CUT-OVER
This method involves the old system being completely dropped and the new system being completely installed at the same time. The old system is no longer available. As a consequence, you must be absolutely sure that the new system is totally functional and operational. This method is used when it is not feasible to continue operating two systems together. Any data to be used in the new system must be converted and imported from the old system. Users must be fully trained in the operation of the new system before the conversion takes place.
- Web-based software
- Learning objects
- Widgets
- Apps and applets
- Web 2.0 tools
- Cloud computing
- Mobile phone technologies
- Collaborative environments
- What are the client's needs which this product will meet?
- Are there compatibility issues with other existing software and hardware?
- Are there possible performance issues, particularly for internet and graphics intensive systems?
- What are the boundaries of the new system?
NEEDS OF THE CLIENT
A need is an instance in which some necessity or want exists. The implication is that some form of solution is required to meet this need. Without articulating the needs clearly, it will be difficult to develop a clear picture of the precise problem to be solved.
FUNCTIONALITY REQUIREMENTS
Functionality requirements describe what the system will do. They are what you are aiming to achieve. Requirements are statements that, if achieved, will allow needs to be met. The requirements of a system give direction to the project. Throughout the design and development of a system, the functionality requirements should continually be examined to ensure their fulfilment. The final evaluation of a project's success or failure is based on how well the original requirements have been achieved.
COMPATIBILITY ISSUES
Software of various types runs on a variety of operating systems, browsers, hardware configurations and even a range of different devices, such as smart phones and tablet computers. Users are able to configure their device in a variety of manners, for example screen resolution, colour depth and font sizes. Today many software products require and use internet or LAN connections. Such connections operate at varying speeds and will often encounter errors or even complete loss of connectivity. When designing software, developers must ensure their products are compatible with a wide range of likely user devices and network conditions. Some examples of common compatibility issues include:
Any existing hardware, software and communications systems must be documented and then taken into account when defining the problem, to ensure the solution will be compatible with these existing resources.
PERFORMANCE ISSUES
Often the specifications of the computers used by developers far exceed those likely to be present in a typical user's computer. In addition, multi-user applications and applications that access large files and databases will perform very differently under real world conditions. Testing environments and actual tests should simulate real world conditions. Some common examples of performance issues include:
- The computer appears to be not responding after some function has been initiated; in fact it is busy processing a time consuming task
- Users experience poor response times. This is often seen with networked applications where, during data entry, users spend most of their time waiting for the application
BOUNDARIES OF THE PROBLEM
Boundaries define the limits of the problem or system to be developed. Anything outside the system is said to be a part of the environment. The system interacts with its environment via an interface. Input and output to and from the system takes place via an interface. The keyboard provides an interface that allows humans to input data into a computer system. An internet service provider provides an interface between computers and the internet. It is vital to determine the boundaries of a problem to be solved. Determining the boundaries is, in effect, determining what is and what is not part of a system. When defining a problem it is important to define the boundaries for the problem so that the customer has realistic expectations of the limits of the system.
we be able to support this product in the future; can we redistribute existing staff and resources to this project.
DETERMINING IF AN EXISTING SOLUTION CAN BE USED
SOCIAL AND ETHICAL CONSIDERATIONS
Some social and ethical areas that require consideration include:
- Changing nature of work for users: the introduction of new software products has an effect on the nature of work for users of the system. We must consider the effect the new system will have on the users; this could include changing of employment contracts and retraining
- Effects on level of employment: in many ways, computers have been replacing the work done by people. One of the main reasons for creating new software is to reduce business costs, including wages. Hence software products are designed to reduce the overall level of employment in an area
- Effects on the public: large software systems can have a substantial effect on the general public. The introduction of ATMs involved retraining the entire population, and many seniors were reluctant to make use of this new banking system. Effects on the public are both positive and negative. It is vital that consideration be given to a system's effect on the public
LICENSING CONSIDERATIONS
All parties must consider issues related to copyright, both in terms of protecting the rights of the developer and the legal rights of the customers who will purchase and use the product.
CUSTOMISATION OF EXISTING SOFTWARE PRODUCTS
Customisation of an existing software product is often a cost effective strategy for obtaining new functionality. Many software developers spend much of their time modifying their own existing products to suit the specific needs of individual clients. Open source software is also routinely customised to add new features. The ability to customise software helps the original developer of the product widen their market.
COST EFFECTIVENESS
Usually one of the constraints of a new software system will be that it falls within a certain budget. Once estimates and quotations have been obtained, an overall development costing can be compiled. This total development cost is then compared to the allocated budget for the project and the total cost is assessed.
SELECTING AN APPROPRIATE DEVELOPMENT APPROACH
If an appropriate existing solution cannot be found then new software will need to be developed. This may involve developing the entire solution, or it may involve writing code to customise an existing solution. When new software is developed, one of the first tasks is to identify a suitable software development approach. One should consider each approach's strengths and weaknesses, and should consider a combination of approaches if appropriate.
DESIGN SPECIFICATIONS
SPECIFICATIONS OF THE PROPOSED SOLUTION
Developing a set of design specifications is one of the most important steps before the actual design is planned. The design specifications form the basis for planning and designing the solution. The aim of the design specifications is to accurately interpret the needs, requirements and boundaries identified into a set of workable and realistic specifications from which a final solution can be created. These design specifications should include considerations from both the developer's and the user's perspective.
Developer's perspective (consideration of):
- Data types
- Data structures
- Algorithms
- Variables
- Software design approach
- Quality assurance
- Modelling the system
- Documentation

User's perspective (consideration of):
- Interface design
- Relevance to the user's environment and computer configuration
- Social and ethical issues
- Appropriate messages
- Appropriate icons
- Relevant data formats for display
- Ergonomic issues
DEVELOPERS
PERSPECTIVE
These
specifications
will
create
standard
framework
under
which
the
term
of
developers
must
work.
These
are
specifications
that
will
not
directly
affect
the
product
from
the
user's
perspective
but
will
provide
a
framework
in
which
the
team
of
developers
will
operate.
The
modelling
methods
will
be
specified.
The
method
and
depth
of
algorithm
description
to
be
used,
how
data
structures,
data
types
and
variable
names
are
to
be
allocated
and
any
naming
conventions
that
should
be
used
should
all
be
specified.
A
system
for
maintaining
an
accurate
data
dictionary
needs
to
be
specified.
In
other
words,
a
framework
for
the
organisation
of
the
development
process
is
set
up
so
that
each
member
of
the
development
team
will
be
creating
sections
of
the
solution
that
look,
feel
and
are
documented
using
a
common
approach.
Once
these
specifications
have
been
developed
a
system
model
can
be
created
to
give
an
overview
of
the
entire
system.
These
system
models
will
lead
to
the
allocation
of
specific
tasks
for
completion
by
team
members,
whilst
at
the
same
time
an
overall
direction
of
the
project
can
be
visualised.
USER'S
PERSPECTIVE
Specifications
developed
from
the
user's
point
of
view
should
include
any
design
specifications
that
influence
the
experience
of
the
end-user.
Standards
for
interface
design
will
be
specified
to
ensure
continuity
of
design
across
the
project's
screens,
for
example,
use
of
menus,
colour
and
placement.
The
wording
of
messages,
design
of
icons
and
the
format
of
any
data
presented
to
the
user
need
to
be
determined
and
a
set
of
specifications
created,
for
example
font
used
for
prompts
and
user
input,
the
size
of
icons
and
the
tone
of
language
used.
Ergonomic
issues
should
also
be
considered
and
taken
into
account
as
part
of
the
design
specifications.
Consider:
which
functions
will
be
accessed
most?
Which
functions
require
keyboard
shortcuts?
What
is
the
order
in
which
data
will
be
entered?
Are
the
screens
aesthetically
pleasing?
Such
questions
need
to
be
examined
and
a
standard
set
of
design
specifications
generated.
System
models
can
assist
in
determining
user
based
design
specifications,
in
particular,
screen
designs
and
concept
prototypes.
It is vital that software developers acknowledge the user's perspective when developing design specifications.
SYSTEM
DOCUMENTATION
REPRESENTING
A
SYSTEM
USING
SYSTEMS
MODELLING
TOOLS
IPO
DIAGRAMS
These
diagrams
are
used
to
document
a
system
by
identifying
the
inputs
into
each
major
process,
the
general
nature
of
these
processes,
and
the
outputs
produced.
The
IPO
diagram
is
in
the
form
of
a
table
with
3
columns,
one
for
Input,
Process
and
Output.
The
following
IPO
diagram
describes
the
voting
system
subsequently
shown
as
a
data
flow
diagram.
CONTEXT
DIAGRAMS
Context
diagrams
are
used
to
represent
an
overview
of
the
system.
The
system
is
shown
as
a
single
process
along
with
the
inputs
and
outputs.
The
external
entities
are
connected
to
the
single
process
by
data
flow
arrows.
Each
element
represented
is
labelled.
A
context
diagram
does
not
show
data
stores
or
internal
processes.
DATA
FLOW
DIAGRAMS
Data
flow
diagrams
represent
a
system
as
a
number
of
processes
that
together
form
a
single
system.
A
data
flow
diagram
is
a
refinement
of
a
context
diagram.
Data
flow
diagrams
therefore
show
a
further
level
of
detail
not
seen
in
the
context
diagram.
Data
flow
diagrams
identify
the
source
of
data,
its
flow
between
processes
and
its
destination
along
with
data
generated
by
the
system.
STORYBOARDS
A
storyboard
shows
the
various
interfaces
(screens)
in
a
system
as
well
as
the
links
between
them.
The
representation
of
each
interface
should
be
detailed
enough
for
the
reader
to
identify
the
purpose,
contents
and
design
elements.
Areas
used
for
input,
output
and
navigation
should
be
clearly
identified
and
labelled.
Any
links
shown
between
interfaces
should
originate
from
the
navigational
element
that
triggers
the
link.
This
storyboard
represents
an
online
voting
system.
Elements
of
each
screen
are
clearly
identified
and
the
links
between
screens
are
clearly
shown.
STRUCTURE
CHARTS
Structure
charts
represent
a
system
by
showing
the
separate
modules
or
subroutines
that
comprise
the
system
and
their
relationship
to
each
other.
Rectangles
are
used
to
represent
modules
or
subroutines,
with
lines
used
to
show
the
connections
between
them.
The
chart
is
read
from
top
to
bottom,
with
component
modules
or
subroutines
on
successively
lower
levels,
indicating
these
modules
or
subroutines
are
called
by
the
module
or
subroutine
above.
For
all
modules
or
subroutines
called
by
a
single
module
or
subroutine,
the
diagram
is
read
from
left
to
right
to
show
the
order
of
execution.
SYSTEM
FLOWCHARTS
System
flowcharts
are
a
diagrammatic
way
of
representing
the
system
to
show
the
flow
of
data,
the
separate
modules
comprising
the
system
and
the
media
used.
Standard
symbols
include
those
used
for
representing
major
processes
and
physical
devices
that
capture,
store
and
display
data.
Many
of
these
symbols
have
become
outdated
as
a
result
of
changes
in
technology.
Note
that
system
flowcharts
are
distinctly
different
from
program
flowcharts,
which
are
used
to
represent
the
logic
in
an
algorithm.
They
do
not
use
a
start
or
end
symbol,
and
are
not
intended
to
represent
complex
logic.
DATA
DICTIONARIES
A
data
dictionary
is
a
comprehensive
description
of
each
data
item
in
a
system.
This
commonly
includes:
variable
name,
size
in
bytes,
number
of
characters
as
displayed
on
screen,
data
type,
format
including
number
of
decimal
places
(if
applicable)
and
a
description
of
the
purpose
of
each
field
together
with
an
example.
Structural
elements
come
in
pairs,
e.g.
for
every
BEGIN
there
is
an
END,
for
every
IF
there
is
an
ENDIF.
Indenting
is
used
to
identify
control
structures
in
the
algorithm.
The
names
of
subprograms
are
underlined.
This
means
that
when
refining
the
solution
to
a
problem,
a
subroutine
can
be
referred
to
in
an
algorithm
by
underlining
its
name,
and
a
separate
subprogram
developed
to
show
the
logic
of
that
routine.
This
feature
enables
the
use
of
the
top-down
development
concept,
where
details
for
a
particular
process
need
only
be
considered
within
the
relevant
subroutine.
Flowcharts
are
a
diagrammatic
method
of representing
algorithms,
which
are
read
from
top
to
bottom
and
left
to
right.
Flowcharts
use
the
following
symbols
connected
by
lines
with
arrowheads
to
indicate
the
flow.
It
is
common
practice
to
show
arrowheads
to
avoid
ambiguity.
Flowcharts
using
these
symbols
should
be
developed
using
only
the
standard
control
structures
(described
on
the
following
pages).
It
is
important
to
start
any
complex
algorithm
with
a
clear,
uncluttered
main
line.
This
should
reference
the
required
subroutines,
whose
detail
is
shown
in
separate
flowcharts.
A
subroutine
should
rarely
require
more
than
one
page,
if
it
correctly
makes
use
of
further
subroutines
for
detailed
logic.
SEQUENCE
BINARY SELECTION
MULTI-WAY SELECTION
REPETITION: PRE-TEST
REPETITION:
POST-TEST
SUBPROGRAM
TEST
DATA
AND
EXPECTED
OUTPUT
Developing
suitable
test
data
to
ensure
the
correct
operation
of
algorithms
is
an
important
aspect
of
algorithm
development.
Test
data
should
test
every
possible
route
through
the
algorithm
as
well
as
testing
each
boundary
condition.
Testing
each
route
makes
sure
that
all
statements
are
correct
and
that
each
statement
works
correctly
in
combination
with
every
other
statement.
Testing
boundary
conditions
ensures
that
each
decision
is
correct.
Commonly,
a
decision
will
be
out
by
1
as
a
result
of
an
incorrect
operator,
for
example
<
instead
of
<=.
When
designing
test
data, one must take into account:
Legal
and
expected
values:
i.e.
critical
values
(values
which
a
condition
is
based
on)
and
boundary
values.
Legal
but
unexpected
values:
i.e.
data
in
an
incorrect
format
(e.g.
decimal)
but
accepted
by
the
program's
guidelines.
Illegal
but
expected
values:
i.e.
illegal
data
due
to
ignorance,
typing
errors
or
poor
instructions.
These
errors
should
be
trapped
and
not
halt
the
execution
of
the
program.
When
initial
test
data
items
are
created
the
expected
output
from
these
inputs
should
be
calculated
by
hand.
Once
the
subroutine
has
been
coded,
the
test
data
is
entered
and
the
output
from
the
subroutine
is
compared
to
the
expected
outputs.
If
the
actual
and
expected
outputs
match
then
this
provides
a
good
indication
that
the
subroutine
works.
If
they
don't
then
clearly
a
logic
error
has
occurred.
Further
techniques
must
be
employed
to
determine
the
source
of
the
error.
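To make these categories of test data concrete, here is a minimal sketch in Python (the course itself uses pseudocode): the result function and its pass mark of 50 are invented for illustration, with the critical value tested on both sides of the boundary.

```python
def result(mark):
    """Return 'Pass' for marks of 50 or more, otherwise 'Fail'.

    Valid marks are numbers from 0 to 100; anything else is trapped
    rather than being allowed to halt execution.
    """
    if not isinstance(mark, (int, float)) or mark < 0 or mark > 100:
        return "Invalid mark"        # illegal values are trapped, not fatal
    if mark >= 50:                   # boundary test data catches > vs >= errors
        return "Pass"
    return "Fail"

# Legal and expected values, including the critical/boundary value 50:
print(result(49), result(50), result(100))   # Fail Pass Pass
# Legal but unexpected: a decimal mark, still within the allowed range:
print(result(49.5))                          # Fail
# Illegal but expected: a typing error, trapped without halting execution:
print(result(-3))                            # Invalid mark
```

Hand-calculating the expected output for each item (as above) before running the code is exactly the "expected output" technique the notes describe.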
PROCESSING STRINGS
GENERATING
A
SET
OF
UNIQUE
RANDOM
NUMBERS
PROCESSING
OF
RELATIVE
FILES
LINEAR SEARCH
BINARY
SEARCH
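The original notes present binary search as an algorithm figure. As an illustrative sketch of the same idea in Python (assuming the list is already sorted):

```python
def binary_search(items, target):
    """Return the index of target in the sorted list items, or -1 if absent."""
    lower = 0
    upper = len(items) - 1
    while lower <= upper:
        middle = (lower + upper) // 2
        if items[middle] == target:
            return middle
        elif items[middle] < target:
            lower = middle + 1    # target can only lie in the upper half
        else:
            upper = middle - 1    # target can only lie in the lower half
    return -1

print(binary_search([2, 5, 7, 11, 13], 11))   # prints 3
```

Each comparison halves the search area, which is why binary search is so much faster than a linear search on large sorted lists.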
BUBBLE
SORT
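The original notes show the bubble sort as an algorithm figure. A Python sketch of the standard technique, including the usual "no swaps on a full pass" early exit:

```python
def bubble_sort(items):
    """Sort a list in place by repeatedly swapping adjacent out-of-order pairs."""
    n = len(items)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):          # the last i items are already in place
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
                swapped = True
        if not swapped:                     # a full pass with no swaps: sorted
            break
    return items

print(bubble_sort([5, 1, 4, 2, 8]))   # prints [1, 2, 4, 5, 8]
```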
SELECTION SORT
INSERTION
SORT
RECORDS
A
record
is
a
grouped
list
of
variables,
which
may
each
be
of
different
data
types.
Individual
elements
are
accessed
using
their
field
names
within
the
record.
Before
using
a
record
in
a
program,
most
languages
require
that
the
record
first
be
dimensioned
(that
is,
defined)
as
a
record
type
to
specify
the
component
field
names
and
data
types.
If
the
component
fields
within
the
record
are
strings,
their
length
must
also
be
specified.
In
the
following
example,
a
Product
record
called
ProdRec
is
defined
to
contain
ProdNum,
description,
quantity
and
price.
A
statement
such
as
the
following
is
a
typical
example
of
the
required
code.
Although
it
is
not
mandatory
to
include
such
a
definition
in
an
algorithm,
students
may
find
it
beneficial
to
include
a
relevant
diagram,
such
as
the
one
below,
to
help
clarify
their
thinking.
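The DIM ... as TYPE declaration above is pseudocode. As a hedged sketch of the same ProdRec record in a real language (Python, with the field names and sample values mirroring the example; the casing is an assumption):

```python
from dataclasses import dataclass

@dataclass
class ProdRec:
    """A product record: a grouped list of fields of different data types."""
    ProdNum: int
    Description: str
    Quantity: int
    Price: float

# Individual elements are accessed using their field names within the record:
item = ProdRec(ProdNum=101, Description="Widget", Quantity=12, Price=4.95)
print(item.Description)   # prints Widget
```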
ARRAY
OF
RECORDS
An
array
of
records
is
an
array,
each
element
of
which
consists
of
a
single
record.
The
fields
in
that
record
may
be
of
different
data
types.
Every
record
in
the
array
must
have
the
same
structure,
which
is
the
same
component
fields
in
the
same
order.
Before
using
an
array
of
records
in
a
program,
most
languages
require
that
it
must
first
be
dimensioned
in
order
to
allocate
sufficient
memory
for
the
specified
number
of
elements.
This
includes
defining
the
record
type
to
specify
the
component
fields
as
in
the
simple
records.
The
array
of
records
can
then
be
defined
as
an
array
where
each
element
is
defined
as
one
of
these
records.
In
the
following
example,
an
array
of
records
consisting
of
20
such
records
is
defined:
DIM
ProdArrayofRecords
(20)
as
TYPE
ProdRec
Although
it
is
not
mandatory
to
include
such
a
definition
in
an
algorithm,
students
may
find
it
beneficial
to
include
a
relevant
diagram,
such
as
the
one
below,
to
help
clarify
their
thinking.
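Continuing the pseudocode example, the DIM ProdArrayofRecords (20) as TYPE ProdRec declaration might be sketched in Python as a list of 20 record objects, every one with the same component fields in the same order (the blank initial values are an assumption):

```python
from dataclasses import dataclass

@dataclass
class ProdRec:
    ProdNum: int
    Description: str
    Quantity: int
    Price: float

# DIM ProdArrayofRecords (20) as TYPE ProdRec -- allocate 20 identical records.
ProdArrayofRecords = [ProdRec(0, "", 0, 0.0) for _ in range(20)]

# Each element is one complete record; its fields are accessed by name.
ProdArrayofRecords[0] = ProdRec(101, "Widget", 12, 4.95)
print(ProdArrayofRecords[0].ProdNum)    # prints 101
print(len(ProdArrayofRecords))          # prints 20
```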
FILES
A
file,
in
terms
of
software
development,
means
a
collection
of
data
that
is
stored
in
a
logical
manner.
Normally,
files
are
stored
on
secondary
storage
devices,
usually
a
hard
disk
that
is
separate
from
the
application
program
itself.
There
are
essentially
two
methods
of
storing
and
accessing
data
in
a
file:
sequential
files;
relative
(random
access)
files.
SEQUENTIAL
FILES
Data
is
stored
in
sequential
files
in
a
continuous
stream.
The
data
must
be
accessed
from
beginning
to
end.
An
audiocassette
is
a
sequential
storage
medium
for
audio
data.
To
play
the
third
track
on
an
audiocassette
requires
fast
forwarding
through
tracks
one
and
two.
Sequential
files
operate
in
the
same
manner.
To
access
data
stored
in
the
middle
of
a
sequential
file
requires
reading
all
the
preceding
data.
The
data
stored
in
a
sequential
file
may
have
some
structure,
however
the
structure
is
not
stored
as
part
of
the
file.
Applications
that
access
sequential
files
must
know
about
the
structure
of
the
file.
Text
files
are
sequential
files;
the
data
within
the
text
file
is
merely
a
collection
of
ASCII
or
Unicode
characters.
When
using
sequential
files
it
is
necessary
to
structure
the
data
yourself.
Sentinel
values
(dummy
value
to
indicate
the
end
of
data
within
a
file)
can
be
used
to
indicate
logical
breaks
in
the
data
and
also
to
indicate
the
end
of
the
file.
For
example,
tab
characters
may
be
used
between
fields
and
carriage
return
characters
between
records.
Often
a
particular
string,
such
as
ZZZ,
may
be
used
as
a
sentinel
to
indicate
the
end
of
a
sequential
file.
When a sentinel value is used to signal the end of a file, care is required
to
ensure
the
sentinel
value
is
not
processed
as
if
it
were
data.
When
reading
sequential
files
it
is
common
to
use
an
initial
or
priming
read
before
the
main
processing
loop
to
detect
files
which
only
contain
the
sentinel.
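As a hedged sketch of these ideas in Python (the file name, data and ZZZ sentinel are illustrative): the file is written one record per line, then read back using a priming read, so a file containing only the sentinel is handled and the sentinel itself is never processed as data.

```python
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "names.txt")

# Write a sequential file: one record per line, ZZZ as the end sentinel.
with open(path, "w") as f:
    for name in ["Ava", "Ben", "Cal"]:
        f.write(name + "\n")
    f.write("ZZZ\n")                        # sentinel marks the end of the data

# Read it back with a priming read before the main loop.
names = []
with open(path) as f:
    record = f.readline().rstrip("\n")      # priming read
    while record != "ZZZ":
        names.append(record)
        record = f.readline().rstrip("\n")

print(names)       # prints ['Ava', 'Ben', 'Cal']
os.remove(path)
```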
Many
programming
languages
include
their
own
commands
for
writing
to
and
reading
from
sequential
files.
These
commands
operate
well
when
the
file
will
only
be
accessed
using
the
inbuilt
commands.
For
instance
the
language's
write
command
is
used
to
create
files
and
the
language's
read
command
is
used
to
get
data
from
the
file.
Essentially
the
format
of
the
text
file
is
defined
and
determined
by
the
language's
commands
and
the
programmer
is
relieved
of
much
of
the
detailed
work.
Sequential
files
can
be
opened
in
one
of
three
modes:
input,
output
and
append.
Input
is
used
to
read
the
data
within
the
file,
output
is
used
to
write
data
to
a
new
file
and
append
is
used
to
write
data
commencing
at
the
end
of
an
existing
file.
Notice
it
is
generally
not
possible
to
commence
writing
from
the
middle
of
an
existing
sequential
file.
This
is
because
the
length
of
data
items
within
the
file
is
unknown
and
hence
there
is
no
way
to
ensure
required
data
is
not
overwritten.
Relative
files
overcome
this
restriction.
RELATIVE
(OR
RANDOM
ACCESS)
FILES
Relative
refers
to
the
fact
that
there
is
a
known
structure
to
these
files,
which
allows
the
start
of
individual
records
to
be
determined.
As
each
record
within
the
file
is
exactly
the
same
length,
the
relative
position
of
each
record
within
the
file
can
be
used
to
access
individual
records.
In
effect
the
position
of
each
record
in
the
file
is
used
as
a
key
to
allow
direct
access
to
each
record.
For
example,
if
records
are
30
bytes
long,
then
the
100th
record
will
commence
after
the
3000th
byte
in
the
file.
Random
access
refers
to
the
ability
to
access
records
in
any
order.
Unlike
sequential
files,
it
is
possible
to
read
the
10th
record,
then
the
2nd
record,
and
then
the
50th
record
and
so
on
in
any
desired
random
order.
With
sequential
files
we
must
read
from
the
start
of
the
file
through
to
the
end
of
the
file,
hence
if
we
require
the
100th
record
we
must
first
read
each
of
the
preceding
99
records.
Furthermore
when
writing
data
we
are
able
to
edit
or
update
individual
records
within
a
file,
which
was
not
possible
using
sequential
files.
Random
access
or
relative
files
can
be
likened
to
CDs.
With
a
music
CD,
individual
tracks
can
be
played
in
any
order.
Many
CD
players
include
a
random
function
whereby
tracks
are
played
in
a
random
order.
The
structure
of
an
audio
CD
is
stored
on
the
CD.
If
you
wish
to
play
track
five,
the
laser
head
jumps
directly
to
the
start
of
track
five.
Random
access
files
allow
individual
data
items
to
be
accessed
directly
without
the
need
to
read
any
of
the
preceding
data.
In
fact
random
access
files
are
often
known
as
direct
access
files.
Relative
files
are
used
to
store
records.
Each
record
is
the
same
data
type
and
must
be
of
precisely
the
same
length.
Complete
records
are
read
and
written
to
relative
files.
The
programming
language
is
able
to
determine
the
precise
byte
where
a
record
begins
because
all
records
are
the
identical
length.
Individual
fields
within
records
can
be
identified
precisely
because
the
exact
length
and
structure
of
each
record
is
known.
Fields
containing
strings
that
are
of
differing
lengths
are
usually
padded
out
using
the
blank
character
(ASCII
code
32).
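A hedged Python sketch of a relative file (the 30-byte record length, file name and product names are illustrative): every record is padded with blanks to exactly the same length, so record n starts at byte 30 × (n − 1) and can be read, or updated in place, with a direct seek and no preceding reads.

```python
import os
import tempfile

RECORD_LEN = 30          # every record is exactly the same length
path = os.path.join(tempfile.gettempdir(), "products.dat")

def write_record(f, position, text):
    """Write a record at a 1-based position, padded with blanks (ASCII 32)."""
    f.seek((position - 1) * RECORD_LEN)
    f.write(text.ljust(RECORD_LEN).encode("ascii"))

def read_record(f, position):
    """Jump straight to a record without reading any preceding data."""
    f.seek((position - 1) * RECORD_LEN)
    return f.read(RECORD_LEN).decode("ascii").rstrip()

with open(path, "wb+") as f:
    for i, name in enumerate(["Widget", "Gadget", "Sprocket"], start=1):
        write_record(f, i, name)
    third = read_record(f, 3)        # direct access in any order
    write_record(f, 2, "Gizmo")      # update an individual record in place
    second = read_record(f, 2)

print(third, second)     # prints Sprocket Gizmo
os.remove(path)
```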
CUSTOMISED
OFF-THE-SHELF
PACKAGES
Customisation
of
an
existing
software
product
is
often
a
cost
effective
strategy
for
obtaining
new
functionality.
Many
software
developers
spend
much
of
their
time
modifying
their
own
existing
products
to
suit
the
specific
needs
of
individual
clients.
For
example,
Parramatta
Education
Centre
writes
custom
database
applications
often
using
Microsoft
Access
to
build
the
data
entry
forms
and
printed
reports.
Much
of
the
routine
work
performed
by
the
company
involves
upgrading
existing
products
to
include
additional
features.
In
this
case
Microsoft
Access
is
being
used
to
create
and
update
different
COTS
products.
Open
source
software
is
also
routinely
customised
to
add
new
features.
For
example,
phpBB
is
a
popular
open
source
product
used
to
build
online
forums.
Many
of
the
forums
based
on
phpBB
have
been
customised
to
suit
the
unique
requirements
of
particular
forums.
In
many
cases
the
modifications
are
built
as
add-ons,
which
are
then
made
available
to
other
users
of
the
base
software
product.
For
example,
an
add-on
for
phpBB
awards
points
to
users
based
on
the
number
and
quality
of
posts
they
make.
Other
add-ons
allow
paid
advertisements
relevant
to
the
sub-forum
being
viewed,
to
be
embedded
within
screens.
The
ability
to
customise
software
helps
the
original
developer
of
the
product
widen
their
market.
It
is
now
common
for
tools
to
be
included
in
many
commercial
products
that
allow
the
end
user
to
create
customised
screens
and
other
functionality
through
the
use
of
wizards
and
various
drag
and
drop
design
screens.
Other
companies
who
specialise
in
software
for
particular
industries
offer
their
own
software
customisation
service.
For
example,
GeoCivil
is
a
software
product
designed
for
use
by
civil
engineers.
The
developers
offer
a
customisation
service
where
GeoCivil
can
be
altered
to
suit
particular
requirements.
Drivers
are
temporary
code
used
to
test
the
execution
of
a
module
when
the
module
can't
function
individually
without
a
mainline.
Drivers
pass
values
to
and
from
the
subprogram.
Drivers
need
to
be
used
to
test
the
module
before
becoming
a
standard
module.
THOROUGH
DOCUMENTATION
Documentation
must
be
included
with
any
project
and
aspect
of
a
program.
This
leads
to
successful
teamwork
and
new
members
can
be
hired
and
understand
what's
going
on.
Every
programmer
needs
to
understand
the
processes
of
a
module
if
they
are
to
successfully
use
or
modify
a
module.
Documentation
must
also
include
the
author,
date,
purpose
and
nature
of
parameters
used.
ISSUES
ASSOCIATED
WITH
RE-USABLE
MODULES
Issues
include:
INTERFACE
DESIGN
CONSIDERATION
OF
INTENDED
AUDIENCE
The
user
interface
needs
to
be
designed
to
suit
the
intended
audience.
A
product
written
for
pre-school
children
will
require
quite
a
different
interface
to
a
product
designed
to
perform
complex
calculations
for
engineers.
An
accounting
product
designed
for
use
by
accountants
can
use
the
jargon
of
the
finance
world.
A
similar
package
designed
for
the
general
public
should
not
use
jargon.
Systems
intended
for
large
audiences
with
unknown
levels
of
expertise
may
include
user
interfaces
that
can
be
personalised.
For
example,
IBM's OS/400 operating system
provides
help
on
error
messages
at
three
user-defined
levels:
beginner,
intermediate
and
advanced.
The
wording
of
error
messages
changes
appropriately.
IDENTIFICATION
OF
DATA-FIELDS
AND
SCREEN
ELEMENTS
Before
the
task
of
designing
a
user
interface
can
commence
the
data
to
be
included
on
each
screen
needs
to
be
determined.
The
system
models,
algorithms
and
data
dictionaries
will
provide
this
information.
Once
the
required
data
is
ascertained
the
next
task
is
to
decide
on
the
most
effective
screen
element
to
use
to
display
each
data
item.
Some
common
screen
elements
in
terms
of
the
data
type
they
best
represent
include:
List
boxes
can
be
configured
to
allow
multiple
selections.
The
options
can
be
loaded
from
an
array
or
input
by
the programmer.
Combination
boxes-
used
to
combine
the
functions
of
a
text
box
and
list
box.
A
text
box
is
provided
for
keyboard
entry
or
the
user
can
select
an
item
from
the
list
box.
They
are
able
to
force
input
of
one
of
the
items
in
the
list.
They
can
also
be
set
so
that
new
items
entered
become
part
of
the
list
Check
boxes-
used
to
obtain
Boolean
input
from
the
user.
This
is
a
self-validating
screen
element
Radio
or
option
buttons-
used
to
force
the
user
to
select
one
of
the
displayed
buttons.
It
is
not
possible
to
select
more
than
one
option.
Only
used
when
a
small
number
of
possible
options
are
available,
because
they
use
a
lot
of
space
Scroll
bars-
used
to
display
the
position
of
a
numeric
data
value
within
a
given
range.
Often
used
to
navigate
within
another
element,
e.g.
list
boxes.
Scroll
bars
give
the
user
a
visual
interpretation
of
their
position
relative
to
the
start
and
end
of
the
range
of
possible
values
Grids-
used
as
a
two-dimensional
screen
element
that
can
be
likened
to
an
array
of
records.
Rows
can
be
records
and
columns
can
be
fields.
Each
cell
is
a
text
box.
Labels-
used
to
provide
information
and
guidance
to
the
user.
Often
provide
instruction
to
users
in
regard
to
required
input
into
other
screen
elements
Picture
boxes-
used
to
display
graphics
fault tolerance of the device.
A
fast
device
that
fails
within
weeks
is
a
poor
performer.
Can
it
recover
from
power
spikes
and
failures?
What
occurs
when
the
device
is
under
load?
The
specific
performance
requirements
should
be
established
so
that
tests
can
be
developed
to
ensure
these
requirements
are
met.
BENCHMARKING
Benchmarking
is
the
process
of
evaluating
something
by
comparing
it
to
an
established
standard.
Benchmarking
enables
competing
products
to
be
fairly
compared.
For
computers,
software
and
computer
components,
standards
are
commonly
developed
which
assess
performance
requirements
found
to
be
important
by
many
users.
For
example,
the
ability
to
render
3D
graphics
in
HD
at
a
particular
speed
would
be
a
relevant
performance
requirement
for
gamers.
A
standard
to
assess
this
requirement
might
specify
a
particular
game
or
utility,
which
should
be
executed
whilst
the
computer
is
also
running
a
browser.
Often
benchmarking
software
is
able
to
record
technical
details
of
the
variables
in
question
so
that
a
fair
mathematical
and
statistical
evaluation
can
be
made.
Symbol   Definition                        Example
=        Is defined as                     Letter = A|B|C
|        Or (alternative elements)         A|B|C
< >      Non-terminal symbol               <Letter>
{ }      Repetition (zero or more times)   Word = {<Letter>}
[ ]      Optional elements                 [<Digit>]
( )      Grouped elements                  ({<Digit>})
Examples:
-
Letter
=
A|B|C
Digit
=
0|1|2|3|4
Identifier
=
<Letter>{<Letter>|<Digit>}
Expression
=
<Identifier>(<Identifier>|<Digit>){<Letter>}
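The Identifier rule above, <Letter>{<Letter>|<Digit>} with the toy Letter and Digit definitions, can be checked mechanically. A Python sketch of such a checker:

```python
LETTER = set("ABC")          # Letter = A|B|C
DIGIT = set("01234")         # Digit  = 0|1|2|3|4

def is_identifier(s):
    """Identifier = <Letter>{<Letter>|<Digit>}: one letter, then any
    number (including zero) of letters or digits."""
    if not s or s[0] not in LETTER:
        return False
    return all(ch in LETTER or ch in DIGIT for ch in s[1:])

print(is_identifier("A12"))   # prints True: letter followed by digits
print(is_identifier("1AB"))   # prints False: must start with a letter
```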
RAILROAD
DIAGRAMS
Railroad
diagrams
are
a
graphical
method
used
to
define
the
syntax
of
a
programming
language.
Rectangles
are
used
to
enclose
non-terminal
symbols,
symbols
that
will
be
further
defined.
Circles
or
rounded
rectangles
are
used
to
enclose
terminal
symbols,
items
that
don't
need
to
be
further
defined.
Paths
to
show
all
valid
combinations
link
these
elements.
Examples:
-
Identifier:
Assignment
Statement:
Advantages
- Runs faster
- Programs are usually smaller
- Programs are harder to reverse engineer or change because the source code is hidden from view
Disadvantages
INTERPRETATION
Each
line
of
source
code
is
translated
into
machine
code
and
then
immediately
executed.
If
errors
exist
in
the
source
code,
they
will
cause
a
halt
to
execution
once
encountered
by
the
interpreter.
If
the
German
speaker
and
English
speakers
are
having
a
conversation,
the
speech
interpreter
translates
each
sentence
into
English
as
it
is
spoken
in
German.
If
the
speech
interpreter
cant
understand
a
German
sentence,
he
is
unable
to
translate
it
into
English.
So
the
German
sentence
must
be
altered
so
the
interpreter
can
understand.
Advantages
Disadvantages
Lexical
analysis
Syntactical
analysis
Code
generation
LEXICAL
ANALYSIS
The symbol table acts as a dictionary or lexicon for the lexical analysis process.
Each
character
in
the
source
code
is
read
sequentially.
Once
a
string
of
characters
matches
an
element
in
the
symbol
table,
it
is
replaced
by
the
appropriate
token.
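A minimal sketch of this token-replacement idea (the tiny symbol table, token names and source line are invented for illustration; real lexers work character by character rather than word by word):

```python
# A tiny symbol table mapping recognised source strings to tokens.
SYMBOLS = {"IF": "T_IF", "THEN": "T_THEN", "=": "T_EQ", "PRINT": "T_PRINT"}

def lexical_analysis(source):
    """Replace each recognised string of characters with its token;
    anything unrecognised is emitted as an identifier/literal token."""
    tokens = []
    for word in source.split():
        tokens.append(SYMBOLS.get(word, ("T_ID", word)))
    return tokens

print(lexical_analysis("IF X = 5 THEN PRINT X"))
```

The resulting token stream is what is handed on to syntactical analysis.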
SYNTACTICAL
ANALYSIS
Syntactical
analysis
is
the
process
of
checking that the
syntax
of
sentences
and
phrases
is
correct
in
terms
of
the
grammatical
rules
of
the
language.
The
second
part
of
syntactical
analysis
is
checking that the types
of
identifiers
and
constants
used
in
statements
are
compatible.
Parsing
is
the
process
of
checking
the
syntax
of
a
sentence.
A
parse
tree
is
used
to
diagrammatically
describe
the
component
parts
of
a
syntactically
correct
sentence.
Syntactical
analysis
makes
use
of
parse
trees
to
check
each
statement
against
the
precise
description
of
the
syntax.
The
series
of
tokens
delivered
for
syntactical
analysis
is
constructed
into
a
parse
tree
based
on
the
rules
of
the
particular
language
described
in
EBNF.
Each
statement
in
a
programming
language
will
have
exactly
one
parse
tree,
resulting
in
one
precise
meaning.
If
a
particular
component
of
the
source
code
can't
be
parsed
then
an
error
message
is
generated.
These error messages generated during parsing always result from an incorrect arrangement of tokens.
CODE
GENERATION
If
the
processes
of
lexical
and
syntactical
analysis
have
completed
without
error,
then
the
machine
code
can
be
generated.
This
stage
involves
converting
each
token
or
series
of
tokens
into
their
respective
machine
code
instructions.
As
it
is
known
that
the
tokens
are
correct
and
in
the
correct
order,
no
errors
will
be
found
during
code
generation.
The
parse
tree
created
during
syntactical
analysis
will
contain
the
tokens
in
their
correct
order
for
code
generation.
The
code
generation
process
traverses
the
parse
tree,
usually
from
left
to
right
along
the
lower
branches.
Tokens
are
collected
and
converted
into
their
respective
machine
code
instructions
as
the
traversal
occurs.
save
and
print
commands
which
we
see
in
almost
every
word
processing
application.
Even
the
user
interface
is
using
re-used
subprograms
to
create
the
same
interface
such
as
the
minimise,
maximise
and
close
buttons
on
every
application
window.
Linking is the process
by
which
subprograms
are
joined
to
the
mainline
of
the
program.
A
linker
handles
the
call
and
return
processes.
It
calls
the
subprogram,
gives
it
control
then
returns
control
to
the
mainline
after
it
has
executed.
Dynamic
link
libraries
are
files
containing
object
code
subroutines
that
add
extra
functionality
to
a
high-level
language.
DLLs provide common subprograms, reducing the need for multiple copies of the same subprogram.
DLLs
can
be
called
from
many
different
applications
and
used
by
the
application. If
a
new
program
is
installed
with
a
newer
version
of
the
DLL
then
it
overwrites
the
old
version.
All
other
programs
then
have
to
use
this
newer
version
of
the
DLL.
Data
structures
should
be
selected
and
designed
to
assist
the
processing
to
be
performed.
For
example,
if a program
stores
and
processes
customer
details
including
first
name,
surname,
and
phone,
then
the
programmer
could
use simple parallel arrays, one for each field,
or
they
could
define
a
record
structure
and
then
an
array
of
the
records.
WRITING
FOR
SUBSEQUENT
MAINTENANCE
Coding
should
be
done
in
such
a
way
that
future
maintenance
programmers
could
easily
update
the
code
as
new
requirements
come
to
light.
All
the
points
mentioned
above
will
help
to
simplify
the
tasks
of
maintenance
programmers.
Clear
documentation
within
the
code,
such
as
comments,
appropriate
identifier
names
and
indenting
will
make
understanding
the
logic
of
the
code
clear.
If
a
constant
value
is
to
be
used
throughout
the
code,
it
should
be
assigned
to
an
identifier.
The
identifier
is
used
rather
than
the
actual
value.
Maintenance
programmers
need
only
change
the
value
once.
VERSION
CONTROL
AND
REGULAR
BACKUP
During
coding
all programmers should regularly save their work.
This
is
to
prevent
loss
of
source
code
should
the
power
go
out,
however
it
also
allows
different
versions
of
the
program
or
more
often
the
current
module
or
even
subroutine
to
be
stored.
Often
when
coding,
programmers
arrive
at
a
solution
but
then
wish
to
test
various
modifications.
By
implementing
a
system
of
version
control
it
is
possible
to
effectively
maintain
different
versions
of
particular
modules.
If
a
modification
does
not
improve
the
solution
then
the
programmer
can
revert
to
an
earlier
version.
RECOGNITION
OF
RELEVANT
SOCIAL
AND
ETHICAL
ISSUES
SYNTAX ERRORS
Syntax
errors
are
any
errors
that
prevent
the
translator
converting
the
high-level
code
into
object
code.
A
syntax
error
is
an
error
in
the
source
code
of
a
program.
Since
computer
programs
must
follow
strict
syntax
to
compile
correctly,
any
aspects
of
the
code
that
do
not
conform
to
the
syntax
of
the
programming
language
will
produce
a
syntax
error.
LOGIC
ERRORS
A
logic
error
(or
logical
error)
is
a
mistake
in
a
program's
source
code
that
results
in
incorrect
or
unexpected
behaviour.
E.g. Average = Integer1 + Integer2 / 2 would result in Integer1 being added to (Integer2 / 2), because division has higher precedence than addition.
The
correct
logic
will
be
produced
using
Average = (Integer1 + Integer2) / 2.
Logic
errors
are
the
most
difficult
errors
to
detect
as
they
are
syntactically
correct
and
do
not
cause
a
system
crash
but
instead
produce
incorrect
outputs.
It
is
essential
that
a
programmer
check
all
of
their
design
algorithms
to
avoid
these
errors
before
coding.
The
programmer
also
must
make
sure
they
are
coding
the
right
logic,
as
simple
logical
mistakes
can
occur.
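The averaging example can be run directly. A short Python check (the values 10 and 20 are invented) showing that the unbracketed expression is syntactically correct but logically wrong:

```python
integer1 = 10
integer2 = 20

# Logic error: division binds tighter than addition, so only
# integer2 is divided by 2 before the addition happens.
wrong_average = integer1 + integer2 / 2      # 10 + 10.0 = 20.0

# Correct logic: parentheses force the addition to happen first.
average = (integer1 + integer2) / 2          # 30 / 2 = 15.0

print(wrong_average, average)   # prints 20.0 15.0
```

Both lines compile and run without any error message, which is exactly why logic errors are the hardest to detect.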
RUNTIME
ERRORS
Runtime
errors
occur
while
the
program
is
running,
and
will
lead
to
a
crash.
Sometimes
they
are
caused
by
wrong
syntax,
other
times
they
are
caused
by
the
inability
of
the
computer
to
perform
the
intended
task.
Most
high-level
language
source
code
editors
in
integrated
development
environments
(IDE)
will
pause
the
execution
of
the
program
and
set
it
into
a
break
mode.
The
editors
will
indicate
the
line
in
which
the
error
occurred.
Runtime
errors
could
be
caused
by:
division
by
zero,
arithmetic
overflow
(when
a
value
is
assigned
to
a
variable
that
is
outside
the
range
allowed
for
that
data
type),
accessing
inappropriate
memory
locations.
METHODS
OF
ERROR
DETECTION
AND
CORRECTION
DEBUGGING
OUTPUT
STATEMENTS
Strategic
placement
of
temporary
output
statements
can
help
to
isolate
the
source
of
errors.
By
placing
output
statements
within
called
subroutines,
the
programmer
can
determine
which
subroutines
have
been
called.
This
assists
in
the
detection
of
the
source
of
an
error.
Often
a
debugging
output
statement
will
be
progressively
moved
as
the
debugging
process
continues.
In
this
way,
the
flow
of
execution
through
the
code
can
be
precisely
monitored.
Eventually
the
source
of
the
error
is
detected.
DESK
CHECKING
Desk
checking
is
the
process
of
working
through
an
algorithm
or
piece
of
source
code
by
hand.
A
table
with
a
column
for
each
variable
is
used.
As
the
algorithm
or
code
is
worked
through
by
hand,
changes to variables are shown by writing the new value in the next row.
A
desk
check
is
particularly
useful
when
the
workings
of
a
piece
of
source
code
are
not
fully
understood.
The
process
helps
to
make
the
logic
clear.
DRIVERS
A
driver
provides
an
interface
between
two
components.
A
driver
controls
the
operation
of
some
device.
For
example
a
hardware
driver
is
a
program
that
provides
the
link
between
the
operating
system
and
a
specific
hardware
device.
Another
example
is
a
printer
driver
used
to
control
the
operation
of
a
printer.
In
terms
of
software
development,
a
driver
is
a
subroutine
written
to
test
the
operation
of
one
or
more
other
subroutines.
Commonly,
drivers
set
the
value
of
any
required
variables,
call
the
subroutine
to
be
tested
and
output
the
results
of
the
subroutine
call.
Drivers
are
required
when
software
projects
are
coded
using
a
bottom-up
design
methodology.
Lower-level
subroutines
are
developed
before
higher-level
subroutines.
Because
of
this,
it
is
necessary
to
write
a
driver
to
test
the
operation
of
each
subroutine
as
it
is
being
coded.
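A hedged sketch of a driver in Python (the subroutine under test, calc_discount, and its $100 threshold are invented): the driver acts as a temporary mainline that sets the required values, calls the lower-level subroutine and outputs the results so they can be checked against expected output.

```python
def calc_discount(total):
    """Lower-level subroutine, developed first under bottom-up design:
    whole-dollar discount of 10% for totals of $100 or more."""
    if total >= 100:
        return total // 10
    return 0

def driver():
    """Temporary driver: sets required values, calls the subroutine
    under test, and outputs the results for checking."""
    results = [calc_discount(total) for total in [50, 100, 250]]
    print(results)                  # test data includes the boundary value 100
    return results

driver()    # prints [0, 10, 25]
```

Once the subroutine is verified, the driver is discarded and the real mainline takes over the calls.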
FLAGS
A
flag
is
used
to
indicate
that
a
certain
condition
has
been
met.
Usually
a
flag
is
set
to
either
true
or
false; that is, flags are Boolean values.
Flags
are
used
to
signify
the
occurrence
of
a
certain
event.
Flags
are
used
to
check
if
certain
sections
of
code
have
been
executed
or
certain
conditions
have
been
met.
For
example,
a
particular
flag
may
be
set
to
true
when
a
subroutine
is
called.
By
seeing
this
value,
the
programmer
can
determine
the
flow
of
control
through
the
program.
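A minimal sketch of this technique (the subroutine and flag names are invented): a Boolean flag records that a certain section of code executed, so the flow of control can be checked afterwards.

```python
subroutine_called = False     # flag: set to True when the event occurs

def update_totals():
    global subroutine_called
    subroutine_called = True  # record that this section of code executed

def process(amount):
    if amount > 0:
        update_totals()

process(5)
print(subroutine_called)   # prints True: the subroutine was reached
```

During debugging the flag (or a print of it) would be removed once the flow of control has been confirmed.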
PEER CHECKING
Often errors will occur that seem impossible for the original programmer to correct. Colleagues on the development team are often able to see the problem from a fresh point of view. Many software companies require that each subroutine is peer checked as part of their quality assurance strategy.
STRUCTURED WALKTHROUGH
Structured walkthroughs, as the name suggests, are more formal than peer checks. The developer or team of developers present their work to a group of interested parties. This group may include representatives from management, marketing and potential users. The developers walk the group through each aspect of the program. The aim is to receive feedback on the product as it stands; comments are written down for future consideration. Structured walkthroughs are normally formal meetings. They can be used to evaluate the design at different levels. Their aim is to explain in a structured manner the operation of some part of the design and development process and to obtain feedback.
STUBS
A stub is a small subroutine used in place of a yet-to-be-coded subroutine. The use of stubs allows higher-level subroutines to be tested. Stubs do not perform any real processing; rather they aim to simulate the processing that will occur in the final subroutine they replace. Stubs are used to set the value of any variables affecting their calling routines and then end. Sometimes a stub may include an output statement to inform the programmer that the call to the stub has been successful. The creation of stubs is required when software projects are coded using a top-down methodology. Because higher-level subroutines are created before lower-level subroutines, it is necessary to create dummy subroutines. This enables testing of the higher-level subroutines whilst they are being coded.
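A stub of the kind described above can be sketched as follows. The payroll routines and figures are invented for illustration: the stub does no real processing, reports that it was called, and returns a fixed value so the higher-level subroutine can be tested before the real tax module exists.

```python
def calculate_tax_stub(gross_pay):
    """Stub standing in for the yet-to-be-coded tax subroutine.
    It performs no real processing: it informs the programmer it
    was called and returns a placeholder value."""
    print("calculate_tax stub called")
    return 100.0  # fixed placeholder value

def calculate_net_pay(gross_pay):
    """Higher-level subroutine, testable before the tax module exists."""
    return gross_pay - calculate_tax_stub(gross_pay)

print(calculate_net_pay(850.0))  # 750.0
```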
USE OF EXPECTED OUTPUT
When initial test data items are created, the expected output from these inputs should be calculated by hand. Once the subroutine has been coded, the test data is entered and the output from the subroutine is compared to the expected outputs. If the actual and expected outputs match then this provides a good indication that the subroutine works. If they don't then clearly a logic error has occurred. Further techniques must be employed to determine the source of the error.
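The comparison of actual with expected output can be sketched as below. The subroutine and its test data are invented for illustration; the expected values stand for results calculated by hand when the test data was created.

```python
def area_of_rectangle(length, width):
    """Subroutine under test (a hypothetical example)."""
    return length * width

# Expected outputs, calculated by hand when the test data was created.
test_data = [((4, 5), 20), ((0, 9), 0), ((2.5, 2), 5.0)]

for inputs, expected in test_data:
    actual = area_of_rectangle(*inputs)
    # A mismatch indicates a logic error somewhere in the subroutine.
    status = "match" if actual == expected else "logic error"
    print(inputs, "expected:", expected, "actual:", actual, "->", status)
```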
USE OF SOFTWARE DEBUGGING TOOLS
BREAKPOINTS
Breakpoints are used to temporarily halt the execution of the code. Once execution has stopped, it is possible to examine the current value of each variable. By adding breakpoints at strategic places within the code, it is possible to locate the source of the error.
PROGRAM TRACES
Tracing, in terms of error detection, refers to tracking some aspect of the software. A program trace enables the order of execution of statements to be tracked, or the changing contents of variables to be tracked. Both these types of trace involve analysing and following the flow of execution.
RESETTING VARIABLE CONTENTS
Altering the value of variables during the execution of code can be useful in determining the nature of an error. The ability to alter a variable's contents can also allow the programmer to determine which values are causing errors without the need for the variable to actually attain that value as a consequence of execution.
SINGLE LINE STEPPING
Single line stepping is the process of halting the execution after each statement is executed. Each statement is highlighted as it is executed. A simple keystroke allows execution to continue to the next statement.
WATCH EXPRESSIONS
A watch expression is an expression whose value is updated in the watch window as execution continues. Usually watch expressions are variable names. By combining single line stepping and watch expressions, an automated desk checking system can be created.
the tasks under the direction of the tutorial. Tutorials are designed so users can experience real-world use of the application before using their own data.
ONLINE HELP
Each of the above types of user documentation can be in either printed or electronic online form. It is now common for most user documentation to be provided online rather than as printed manuals. Online documentation can be provided as Adobe PDF files, which are often similar in structure to more traditional printed manuals. Hypertext help documents allow users to efficiently search for specific items, or in many cases they allow context-sensitive help to be provided from within the application. When the user selects help within the application, they are directed to the most relevant help topic automatically.
TECHNICAL DOCUMENTATION
LOG BOOK
A logbook records the systematic series of actions that have occurred during the development of a project. Logbooks or process diaries are often utilised during the design and development of products in many industries. Maintaining a logbook is a method of chronologically recording the processes undertaken to develop a final product. Individual members of the development team can maintain their own logbooks, or a central logbook can be maintained for each product under development.
SYSTEMS DOCUMENTATION
There are numerous different methods for modelling different aspects of the software engineering process. System modelling tools include context diagrams, DFDs, structure charts etc.
ALGORITHMS
An algorithm is a method of solution for a problem. Algorithms describe the steps taken to transform inputs into the required outputs. Each algorithm will have a distinct start and end, and will be composed of the three control structures: sequence, decision and repetition. Algorithms are described using an algorithm description language: pseudocode or flowcharts.
SOURCE CODE
The source code itself is probably the most important form of technical documentation. It is virtually impossible to isolate logic errors in an application without access to the original source code. For source code to be a valuable technical resource for future maintainers, it must be intelligently documented.
Internal documentation:
- Comments
- Intrinsic documentation: meaningful identifiers, indenting of code, other formatting features
Documentation within the source code is often called internal documentation. Internal documentation can take two forms, comments and intrinsic documentation. Let us examine each of these in turn.

Comments provide information for future programmers. The translator ignores comments, or remark statements, within the code. Each procedure, function and logical process within the code should be preceded by a comment. Comments should explain what a section of code does rather than how it does it.

Intrinsic means belonging by its very nature. Intrinsic documentation is therefore documentation that is part of the code by its very nature. In other words, the code is self-documenting. There are two main types of intrinsic documentation: meaningful identifiers and indentation. An identifier is the name given to a variable, procedure or function. It should describe the purpose of the element it represents. Indenting is the process of setting lines of code back from the left margin. Indenting sections of code within control structures improves the readability of code.

Many other formatting features are commonly used to improve the intrinsic readability of code, such as leaving blank lines between comments and the code they describe. Colour can be used to visually differentiate between comments, reserved words, operators and operands. Any factors that increase the readability of the source code, and are part of the code, are classified as intrinsic documentation.
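The two forms of internal documentation can be sketched side by side. The payroll routine below is invented for illustration; the commented-out version shows code without internal documentation, and the second version shows the same logic with a preceding comment and meaningful identifiers.

```python
# Without internal documentation: meaningless identifiers, no comments.
# def f(a, b):
#     c = a * b
#     return c

# The same routine with internal documentation.

# Calculate the gross pay for one employee.
# Gross pay is hours worked multiplied by the hourly rate.
def calculate_gross_pay(hours_worked, hourly_rate):
    gross_pay = hours_worked * hourly_rate   # meaningful identifiers
    return gross_pay

print(calculate_gross_pay(38, 25.0))  # 950.0
```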
The minimum configuration should allow the software to operate successfully with acceptable performance and response times.
POSSIBLE ADDITIONAL HARDWARE
Additional hardware items are those items supported by the software product but not essential for its operation. Extra RAM may allow the product to operate with larger files. Sound cards may allow for optional audio feedback for the sight impaired. The addition of a network or internet connection may allow for foreign exchange rates to be updated electronically in financial packages.
APPROPRIATE DRIVERS OR EXTENSIONS
Hardware devices require software drivers to allow them to communicate with the system. These drivers are also known as interfaces, as they convert signals from one device into those that can be understood by another. Before a hardware device can be used by a software solution, its driver must be correctly installed and configured. Most peripheral devices come packaged with drivers for most popular OSs, or are able to use drivers included with the OS. The driver is installed as part of the installation of the hardware device. Custom-built hardware may require a purpose-built driver to be created. This driver would need to be installed as part of the installation of the software solution. In this case, the driver is an extension to the software package. For example, a product designed to monitor the climate within a greenhouse includes custom drivers for the temperature and humidity sensors. These sensors are part of the minimum hardware requirements for this product. The drivers or extensions are installed as the main application is installed.
White-box or structural testing checks the procedures or processes within each module to determine their correctness. It is what the programmer is more concerned with, as it identifies bugs in the coded solution. Types of white-box testing:
- Statement coverage testing: test data is chosen to test each statement in a module.
- Decision condition testing: test data is selected to test each decision within a module.
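Decision condition testing can be sketched as below. The module and its pass mark are invented for illustration; the test data is chosen so the single decision is exercised both ways, including the boundary value itself.

```python
def classify_mark(mark):
    """A hypothetical module containing a single decision."""
    if mark >= 50:       # the decision under test
        return "pass"
    return "fail"

# Decision condition testing: data chosen so the decision is taken
# both ways, including the boundary value 50 itself.
for mark in (49, 50, 75):
    print(mark, classify_mark(mark))
```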
COMPARISON OF ACTUAL WITH EXPECTED RESULTS
Test data should be entered into the finished product to ensure the actual outputs match the expected outputs. Because the correct outputs are already known, this testing process ensures the final product is able to perform its processing correctly.
LEVELS OF TESTING
Testing is done progressively; it starts at the lowest level, i.e. unit testing, then works towards program testing and finally system testing, where the software is tested in different environments. Errors can be removed from the bottom up.
MODULE
Module testing tests each module separately and makes sure that each module is performing its task successfully. A driver program may be needed to provide inputs and outputs to and from a module, as the mainline may not be available at this time. Within complex programs, module testing will involve integrating related modules and testing them as a subsystem. Black and white box testing is used.
PROGRAM
Program testing ensures that the modules work together and that the mainline of the program performs correctly. It concentrates on the interfaces and the relationship of each module to the mainline, and uses white and black box testing to ensure the program performs the overall task(s) set in the design specifications. It can be done in two different ways:
- Bottom-up testing: tests and corrects the lowest-level modules first and works up to the higher-level modules. It relies extensively on driver programs to test modules. It is easier to detect problems at an earlier stage. The main program is tested last, once all modules are in place.
- Top-down testing: starts with testing the main program first and uses stubs for incomplete modules as it links each module to the mainline. Any problems are most likely to be caused by the most recent addition, therefore problems can be easily located if top-down testing is used consistently.
The choice between them is a matter of preference and common sense. For example, a library of routines may already be checked and free from errors, so it would be more reasonable to use a top-down approach. A bottom-up approach would be more useful if a program and all of its modules are built from scratch.
SYSTEM
System testing uses black box testing to check that the program can run outside the integrated development environment in a range of other environments. For example, a program may run excellently on one set of hardware and software, yet on a different set of hardware and software configurations, problems that were never present on the development machine may appear and must be fixed. Testers outside of the development team often do system testing. For custom software, the program is tested in the environment in which it will be used.
- Acceptance testing (for custom software): uses potential users of a program to test custom software that is only designed to run on one particular set of hardware and software specifications. This allows a program to be optimised for one particular system. E.g. the PlayStation 3 console is only one system, and exclusive game developers can run acceptance testing with users to optimise their game for the PS3 system only.
- Alpha testing (more general/commercial software): the first phase of testing, done under controlled conditions with selected participants. Once all initial alpha bugs found have been fixed, beta testing takes place.
- Beta testing (more general/commercial software): the second phase of testing; involves volunteers who test a program on a wide range of specified hardware and software systems. E.g. Windows 7 Beta could be downloaded freely and tested on a wide range of users' systems. These provide an enormous amount of feedback very quickly to the developers, and final bugs of the beta stage can be fixed before the release candidate is released.
USE OF LIVE TEST DATA
When a system is finally installed and implemented within the total system, it is said to be live. Once a product goes live it needs to undergo a series of tests to ensure its robustness under real conditions. This testing occurs using live test data. Live test data is produced to simulate extreme conditions that could conceivably occur. The use of live test data aims to ensure the software product will achieve its requirements under these extreme conditions. Different sets of live test data are used to test particular scenarios. CASE tools are available to assist in the task of producing live test data sets and also to automate the testing process. Later in this chapter, we examine CASE tools used for this purpose. For most products, live test data should be created to test each of the following conditions:
LARGE FILE SIZES
Many commercial applications obtain input from large databases. During the development of software products, relatively small data sets are used. At the alpha and beta testing stages, large files should be used to test the system's performance. The use of large files will highlight problems associated with data access. Often systems that perform at acceptable speeds with small data files become unacceptably slow when large files are accessed. This is particularly the case when data is accessed via networks. Large data files highlight aspects of the code that are inefficient or require extensive processing. Many large systems will postpone intensive processing activities until times when system resources are available. For example, updating of bank account transactions takes place during the night when processing resources are available.
MIX OF TRANSACTION TYPES
Testing needs to include random mixes of transaction types. Module and program testing usually involve testing specific transactions or processes one at a time. During system testing we need to test that transactions occurring in random order do not create problems. Often the results of one process will influence a number of other processes. If a transaction is currently being completed on specific data and that data is altered by another transaction, then problems can occur. When a number of applications are used to access the same database, conflicts are inevitable. Software must include mechanisms to deal with such eventualities.
RESPONSE TIMES
The response time is the time taken for a process to complete. The process could be user activated or it could be activated internally by the application. Response times are dependent on all the system components, together with their interaction with each other and other processes that may be occurring concurrently. Any processes that are likely to take more than one second should provide feedback to the user. Progress bars are a common way of providing this feedback. Data entry forms that need to validate data before continuing should be able to do so in less than one second; 0.1 seconds is preferable. Response times should be tested on minimum hardware using typical data of different types. Any applications that affect and/or interface with the new product should be operating under normal or heavy loads when the testing takes place.
VOLUME DATA (LOAD TESTING)
Large amounts of data should be entered into the new system to test the application under extreme load conditions. Multi-user products should be tested with large numbers of users entering and processing data simultaneously. Large systems that require extensive data input require special consideration. Alpha testing by the software developer must try to simulate the number of users who will simultaneously use the product. This can be done using a laboratory of computers and users in conjunction with CASE tools. In many cases, it is difficult to simulate real volumes of data in a beta testing environment without actually implementing the system. CASE tools are available that enable automatic completion of data entry forms. Many of these tools allow the creation of virtual users. This allows one machine to simulate the activities of hundreds or even thousands of simultaneous users entering data.
INTERFACES BETWEEN MODULES AND PROGRAMS
An interface provides a communication link between system components. In terms of modules within a program, the interface is usually provided through the use of parameters that are used to pass data to and from modules. The screens of an application provide an interface between the users and the program. Hardware drivers are programs that provide an interface between applications and hardware devices. Programs that interact with other programs require an interface to complete their tasks. All these interfaces require testing.
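A module interface built from parameters can be sketched as below. The conversion routines are invented for illustration: the parameter and return value of each module form its entire interface, so the modules communicate only through those parameters, never through shared variables.

```python
def fahrenheit_to_celsius(fahrenheit):
    """The parameter `fahrenheit` and the return value together
    form this module's interface with its callers."""
    return (fahrenheit - 32) * 5 / 9

def report(temperatures_f):
    """Calling module: communicates with the conversion module
    only through its parameters."""
    return [round(fahrenheit_to_celsius(t), 1) for t in temperatures_f]

print(report([32, 212, 98.6]))  # [0.0, 100.0, 37.0]
```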
General application CASE tools: suitable for creating written reports and for basic analysis of some results, e.g. word processors and spreadsheets.
Specialised CASE tools: provide structured assistance, sometimes automated processes and the development of test data:
- File comparison: ensures input files follow set rules and output files are correct.
- Test management: tracks test data and results; conducts tests of multiple modules.
- Volume tester: used to test a program under network conditions with high numbers of users.
- Functional tester: provides user interaction with a program by inputting test data to cover all possible pathways.
- Dynamic analyser: allows statements to be counted every time they are executed.
- Simulator: creates a real-time environment in which the program can be tested.

An example of a test report for a payroll system currently being developed is shown below. Note the inclusion of a narrative discussion of problems encountered during testing, a table of expected outcomes and a table of what occurred during testing. Note: this is just one possible representation. There are many other valid ways of documenting the testing process.
COMMUNICATION WITH THOSE FOR WHOM THE SOLUTION HAS BEEN DEVELOPED
Communicating with clients or users must be done in an honest, direct and non-technical way. This will ensure the program meets the specifications; if not, the program can be easily adjusted to meet them.
TEST RESULTS
Test results should document both the positive and negative experiences of a program. They should include:
- Problems identified, such as bugs, lengthy response times, or the inability to test a module due to lack of data input.
- Limitations of the program: scope or size of the data type handling capacity. Data input restrictions and processing limits are placed here.
- Assets: the user interface, results from live data and results that show user needs being met are documented here.
- Recommendations, such as that certain bugs will be fixed in future development, modules will be rewritten or a program is open to new features.
COMPARISON WITH THE ORIGINAL DESIGN SPECIFICATION
A summary of the program and its tasks, compared to the design specifications, is communicated to the client base. Any specifications that have not been met should be alerted to the client, with a reason why they weren't met. If possible these problems should be fixed.

The original design specifications include requirements for the particular application together with specifications in regard to documentation required, screen design and code design. In essence, these specifications describe what the software should do together with how it should be done. Evaluation of the design specifications will therefore ensure the product realises its requirements and that those involved in the design process have maintained standards in regard to how the requirements have been achieved.
- Bug fixes
- New hardware
- New software purchases
- Major upgrades of OSs will have a flow-on effect to other applications; upgrading of applications to take advantage of features available in new OSs may be required.

A software company whose products have a large customer base may receive many thousands of requests for modification of their products. The company must then prioritise these requests and make decisions on which modifications will be included and which will not. Often registered users are surveyed to obtain their views on possible modifications.
Support departments provide a valuable source of data. The number of support issues about a specific function highlights useability problems. This information can be used as the basis for selecting appropriate modifications for inclusion in an upgrade.
Customised software written for a specific client will require continual maintenance as requirements change. The focus of companies changes: small companies will grow and their products will change. As customised software is an expensive purchase, maintaining the product is normally preferable to changing products, or developing a new product from the ground up. Because custom software is written to solve a particular problem, minor changes to the requirements often necessitate alterations and additions to the source code.
Software based on COTS products is particularly susceptible to upgrades in the base COTS product. Macro and script commands may change, resulting in unexpected results. Often applications will need to be recompiled using the new version of the COTS product. The internet has automated the upgrading of many commonly used applications. Products using the resources of these applications may require similar upgrading to continue to operate as intended.
LOCATING OF SECTION TO BE ALTERED
Once a decision has been made to modify a particular aspect of a product, the location of the section to be altered needs to be determined. Models of the system are used to assist in this process. Structure charts and DFDs will assist in the location of the modules requiring modification. Once a module has been identified, the programmer can analyse the original algorithms and IPO diagrams, and the actual source code. A thorough understanding of the original source code is required before any modifications are undertaken.
DETERMINING CHANGES TO BE MADE
After the section of code to be modified has been located, we need to determine the changes to be made. Depending on the nature of the modification, changes may be required to data structures, files, and the user interface, as well as to the source code. The consequences of changes made to one module must be considered. If a record data structure is altered, then what effect will this have on other modules that access this data? For example, if a field within a record that once contained Boolean data is changed to hold integers, then there will be consequences for all modules that access these records. Analysis of the original documentation should alert programmers to the possible consequences of changes.
IMPLEMENTING AND TESTING SOLUTION
Once the changes to be made have been determined, these changes must be implemented within the existing application. In many cases, changes will be made to the source code and the application will be recompiled, tested and distributed. In other cases, a small application or patch may be written to implement the changes on end users' systems. Custom systems, written using COTS products, can often be changed directly on site or over an internet link. This is particularly the case with script and macro modifications. In many cases, scripts and macros are not compiled until runtime, so the source code is available for change at the end user's site.

The implementation of modifications can have a ripple effect on other aspects of the application. These problems are minimised if the original product was designed using a structured modular approach. When each procedure or function has been designed independently, the effect of changes will be easier to identify and correct. Modules that are used in a number of places throughout the application should be modified in such a way that they retain their original processing, including input and output parameters. Changes to modules that alter parameters will require modifications to each higher-level calling module.

Testing of modifications should be performed using the same techniques employed for the development of new products. Often CASE tool scripts and test data used during the original testing can be reused. The tests should not be restricted to the modified code segments, but applied to all aspects of the application that are in any way affected by the change.
DOCUMENTING CHANGE
SOURCE CODE DOCUMENTATION
Documentation of the source code includes comments and intrinsic documentation. The major function of source code documentation is to ease the job of maintenance programmers. Often documenting code at the time of writing is viewed as tedious; however, at maintenance time this documentation will prove invaluable. As source code is written, the programmer obviously understands the logic. At the time they are written, many statements within the source code will appear trivial; however, the code will not be so obvious when viewed by other programmers at a later date. Comments should be included that clearly describe what each code segment does. The code should be self-documenting so that it clearly explains how the code segment achieves its purpose. Careful documentation of any changes made to the original code must be maintained. Details of who made the change, and when, should be included within the code. Any original comments that are no longer relevant should be removed or modified. It is vital that the integrity of the documentation be maintained throughout the modification of the product.
UPDATING ASSOCIATED HARDCOPY DOCUMENTATION AND ONLINE HELP
Printed documentation is becoming a rare means of distributing documentation. In most cases user and technical documentation is stored and distributed electronically. Electronic formats allow for more efficient editing and distribution of all types of documentation. Most software products now contain online help files. Most OSs have their own standard format for online help files; they also contain an application to access these files. Most high-level languages now include development tools to assist in the integration of online help within products. Maintenance personnel must consider the effect of modifications made to products on the integrity of online help. Additions and modifications to the online help system should be made in order to reflect any modifications made to the application.
USE OF CASE TOOLS TO MONITOR CHANGES AND VERSIONS
Maintaining accurate records of the maintenance process is a complex and time-consuming task. CASE tools are available to automate this process. The systematic control of changes to a software product is known as configuration management. Many CASE tools specifically designed to serve the process of configuration management are available. Some of the functions commonly addressed in these CASE tools include:
- Change management: supports the life cycle of each change, beginning with the initial change request through to the inclusion of the modification into the final product. Each step of the software development process for each modification is monitored. For example, a history is maintained of each modification made, when it was made, and by whom.
- Parallel development: maintenance of products by large teams of developers requires significant coordination. Parallel development CASE tools enable multiple users to work on the same project without fear of duplicating or destroying another team member's work. Individual files are checked out as required by individual team members. This effectively locks the file, preventing others from making modifications. When the modification is complete, the file is checked in. The checking-in process releases the file and retains a record of the changes made. Distributed parallel development tools create scripts of changes made to each file; these scripts are distributed electronically to other team members. This ensures developers work with the most up-to-date code available.
- Version control: CASE tools manage multiple versions of software components or modules. The system tracks which version of each component or module should be used when the application is finally reassembled. A record is maintained of prior versions.
9.4 OPTIONS
9.4.1 OPTION 1: PROGRAMMING PARADIGMS
DEVELOPMENT OF DIFFERENT PARADIGMS
A paradigm can be defined as a model or pattern. In relation to computer programming, it relates to the way a solution to a problem is structured, not how it is coded. Another term closely related to paradigm is methodology, which relates to the approach used when writing program code.

Until relatively recently, the architecture of computer hardware had driven the development of programming languages. Although many different computer architectures are currently being developed, by far the most common is still the traditional Von Neumann architecture, first developed to calculate trajectories for bombs and shells during WWII. The Von Neumann architecture was implemented in the ENIAC (Electronic Numerical Integrator And Computer). Probably every computer you have seen or used is based on the Von Neumann concept. Essentially, this architecture separates data and processing. Data is sent to the CPU for processing and the result is sent back to memory for storage. The CPU is a sequential device; instructions are processed one at a time in a predetermined sequence.

The Von Neumann computer has led to the development of procedural or imperative languages. Imperative languages use sequence, decisions and repetition as their main problem-solving methods. Data and processing are separated with imperative languages. We create variables for data storage and then we perform processes on them. Each program has a beginning and a distinct end. This is not the only way of doing things.
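The imperative style described above can be sketched in a few lines. The payroll figures and rates are invented for illustration: data lives in variables, and an explicit sequence of statements with a decision and a repetition transforms it step by step.

```python
# Imperative style: data in variables, processing as an explicit
# sequence of statements with decision and repetition.
hours = [38, 42, 20]        # invented data for illustration
hourly_rate = 20
total_pay = 0               # variable created purely for storage

for h in hours:                                  # repetition
    if h > 40:                                   # decision: overtime at 1.5x
        pay = 40 * hourly_rate + (h - 40) * hourly_rate * 1.5
    else:
        pay = h * hourly_rate
    total_pay = total_pay + pay                  # sequence: state updated stepwise

print(total_pay)  # 2020
```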
LIMITATIONS OF THE IMPERATIVE PARADIGM
Although new imperative programming languages and user-friendly GUI interfaces have made the development of software an easier task, the resulting products are still performing the same tasks using essentially the same techniques. The difference is they are just doing it a lot faster. Imperative languages require the developer to understand all details of the problem and to be able to solve the problem completely. Many types of problem do not have precise solutions, and developing an algorithm which solves such problems is not feasible. The imperative paradigm, on which languages such as Pascal, C, Fortran and Basic are based, restricts the developer in many ways. For instance, a function can only accept data as its inputs and it can only return data as its output. Individual programs are designed to solve a particular problem and they must be modified to solve related problems. The human brain can use the solution techniques used on past problems to assist in the resolution of new, seemingly unrelated problems. As human beings we do not think in terms of data and processes; our brains are able to readily connect the two. The human brain is able to make inferences and deductions based on experience and intuition. Surely if we can create programming languages that simulate the brain more accurately, then productivity gains will result.
DIFFICULTY
SOLVING
CERTAIN
TYPES
OF
PROBLEM
Computers
were
initially
designed
to
solve
mathematical
and
arithmetical
problems.
The
finance
and
motivation
for
the
development
and
production
of
early
computers
came
from
the
military.
This
was
soon
followed
by
statistical
analysis
of
census
data
and
then
by
large
business
applications.
These
problems
were
best
solved
using
mathematical
techniques.
Many
real
life
problems
do
not
have
definite
answers
or
they
have
a
variety
of
answers.
Other
problems
can
be
solved
using
strategies
that
cannot
be
stated
easily
in
strict
mathematical
terms.
For
example,
doctors
diagnose
diseases
in
patients
using
symptoms
and
a
certain
amount
of
intuition
gained
from
experience
with
other
patients.
Driving
a
car
involves
far
more
knowledge
than
knowing
how
to
operate
the
controls.
Good
drivers
are
able
to
become
an
integral
part
of
the
vehicle,
they
sense
potential
dangers
before
they
become
problems.
Communication
between
people
involves
multiple
signals.
Tone,
volume
and
inflexions
in
voice
have
different
subtle
meanings; problems like these cannot be solved efficiently using imperative languages.
A
different
set
of
rules
governs
the
solution
of
these
types
of
problem.
A
new
programming
paradigm
that
could
help
software
developers
work
towards
solutions
to
these
problems
would
be
invaluable.
THE
NEED
TO
SPECIFY
CODE
FOR
EVERY
INDIVIDUAL
PROCESS
The
imperative
paradigm
requires
that
every
part
of
the
problem
must
be
solved
in
complete
detail
before
the
final
software
will
operate.
For
example,
when
using
top-down
design, thorough testing can't
commence
until
the
final
lowest
level
subroutines
have
been
written.
Imagine
the
subroutine
required
to
sort
was
not
present
within
a
database
management
system,
the
whole
application
would
fail
even
though
this
sort
subroutine
is
a
very
minor
part
of
the
large
application.
Now
imagine
it
was
not
necessary
to
specify
details
of
every
single
process.
Instead
imagine
the
software
could
infer
a
suitable
method
of
solution
at
runtime.
If
this
were
the
case
then
software
would
be
able
to
react
to
the
changing
needs
of
users
without
the
need
for
modification.
This
is
a
significant
aim
of
the
logic
paradigm
and
is
why
the
logic
paradigm
is
used
to
develop
many
AI
applications.
DIFFICULTY
OF
CODING
FOR
VARIABILITY
Many
problems
solved
by
software
developers
are
similar
in
many
ways
or
involve
similar
solution
strategies,
e.g.
searching
and
sorting.
There
are
countless
other
examples
where
programming
tasks
are
repeated
as
part
of
the
solution
to
multiple
problems.
Well-structured
imperative
programs
can
be
designed
so
that
modules
of
code
can
be
readily
reused.
However,
this
is
not
an
integral
part
of
the
imperative
paradigm.
It
would
be
preferable
if
code
could
be
used
to
solve
related
problems
without
the
need
to
be
rewritten.
This
reusability
should
be
an
integral
part
of
the
paradigm
in
much
the
same
way
as
our
brain
uses
past
experience
to
solve
new
related
problems.
The
development
of
software
products
has
now
become
one
of
the
fastest
growing
industries.
This
was
not
the
case
a
mere
20-30
years
ago.
It
makes
sense
that
software
should
be
designed
so
that
code
can
be
reused
in
different
contexts
to
solve
different
problems.
Is
there
a
paradigm
that
will
encourage
the
reuse
of
code?
If
so,
then
the
task
of
software
developers
will
be
simplified
and
the
quality
of
the
resulting
products
improved.
Many
other
industries
reuse
components
in
a
variety
of
contexts.
For
example,
the
padding
in
a
lounge
chair
can
also
be
used
in
the
seat
of
a
car
or
to
provide
soundproofing
in
a
studio.
A
12-volt
light
bulb
may
be
used
in
a
torch,
on
a
boat,
in
an
airplane
or
in
a
car.
Sheet
aluminium
is
used
to
produce
soft
drink
cans,
lithographic
printing
plates
and
to
line
the
outside
of
air
transport
crates.
As
software
developers,
we
would
benefit
from
programming
languages
that
allow
us
to
reuse
software
components
in
a
similar
way.
EMERGING
TECHNOLOGIES
The
first
programmers
had
to
program
in
machine
language,
i.e.
binary.
The
programmer
was
forced
to
think
like
the
machine
and
the
language
was
dependent
on
the
processor.
John
Von
Neumann
was
using
his
students
to
convert
his
code
into
binary
machine
language
for
input
using
punched
cards.
Apparently
one
of
his
students
asked
why
the
machine
could
not
perform
the
conversion.
Von
Neumann
replied
that
the
resources
of
the
computer
were
far
too
valuable
to
be
wasted
on
such
menial
tasks.
Eventually
assembler
languages
emerged
to
perform
this
precise
task.
At
the
time,
assembler
languages
were
viewed
as
a
way
of
removing
the
programmer
from
the
technical
aspects
of
implementation
so
they
could
focus
on
solving
the
problem.
Compared
to
today's
modern
languages
assembler
is
extremely
primitive.
The
ever-increasing
speed
of
the
technology
underpinning
computer
hardware
has
allowed
programming
languages
to
further
remove
the
programmer
from
the
machine
code.
The
emphasis
is
on
creating
programming
tools
to
assist
in
the
development
of
software
products
without
the
need
for
the
programmer
to
directly
interact
with
lower-level
CPU
processes.
The
final
software
product
may
not
always
be
as
efficient
as
a
similarly
developed
machine
code
product,
however
its
development
is
certainly
a
simpler
and
more
intuitive
process.
The
imperative
paradigm
has
evolved
over
the
years
from
lower-level
languages
that
directly
accessed
the
computer's
hardware
functions.
Computers
are
now
powerful
enough
that
different
ways
of
viewing
and
solving
problems
can
be
utilised.
In
the
next
section,
we
look
at
some
different
approaches
that
are
emerging
to
assist
in
the
solution
of
problems.
IMPERATIVE
Imperative
languages
use
sequencing,
decisions
and
repetition
as
their
main
problem
solving
methods.
Data
and
processing
are
separated
with
imperative
languages.
We
create
variables
for
data
storage
and
then
we
perform
processes
on
them.
The
programmer
must
specify
an
explicit
sequence
of
steps
to
follow
to
produce
a
result.
Concepts of the imperative paradigm include:
- Control structures
- Variables
- Arrays

Advantages:
- Efficient execution
- Easier to read the logic of the program

Disadvantages:
- Programmer must deal with management of variables and assignment of values to them
- Labour intensive construction of program
- Difficult to solve certain types of problems
- The need to code for every individual process
- Difficulty of coding for variability
One
area
of
problems
suited
to
the
imperative
paradigm
is
business
applications.
These
applications
follow
a
clear
business
process,
which
can
be
programmed.
Imperative
languages
require
the
developer
to
understand
all
details
of
the
problem
and
to
be
able
to
solve
the
problem
completely.
Many
types
of
problems
do
not
have
precise
solutions
and
developing
an
algorithm,
which
solves
such
problems,
is
not
feasible.
LOGIC
PARADIGM
The
logic
paradigm
uses
facts
and
rules
as
its
basic
building
blocks.
The
logic
paradigm
is
used
to
develop
programs
in
the
area
of
AI.
Prolog
is
the
most
common
logical
programming
language.
It
is
short
for
programming
in
logic.
One
of
the
major
influences
on
the
nature
of
the
Prolog
language
was
the
need
to
process
natural
language.
Prolog
is
used
primarily
in
the
areas
of
AI
including
natural
language
processing.
CONCEPTS
FACTS
Let
us
assume
we
have
a
simple
food
chain
where
lions
eat
dogs,
dogs
eat
cats,
cats
eat
birds,
birds
eat
spiders
and
spiders
eat
flies.
We
write
the
facts
as
follows:
eat(lion, dog).
eat(dog, cat).
eat(cat, bird).
eat(bird, spider).
eat(spider, fly).
QUERIES
Querying
these
facts
is
simple.
Say
we
wish
to
determine
if
a
spider
can
eat
a
dog.
We
enter
the
line:
?- eat(spider, dog).
When
we
run
the
program
(query),
the
Prolog
inference
engine
responds
NO,
whereas
if
we
enter
the
query:
?- eat(dog, cat).
The
output
would
be
yes.
In
Prolog
a
query
is
known
as
a
goal,
i.e.
our
goal
was
to
find
out
if
a
dog
could
eat
a
cat.
When
we
just
have
facts
in
our
database
and
no
rules
then
Prolog
merely
searches
through
the
facts
for
a
match.
If
a
match
occurs
then
our
goal
has
been
fulfilled
and
Yes
is
output.
If
no
match
is
found
then
our
goal
fails
and
No
is
output.
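The matching of ground goals against stored facts can be sketched in Python. This is an illustrative simulation of the inference engine's behaviour, not real Prolog; the `facts` set and `query` helper are invented for this example:

```python
# A minimal sketch of Prolog-style fact matching (illustrative only).
# Facts are stored as tuples; a ground goal succeeds if it matches a fact.
facts = {
    ("eat", "lion", "dog"),
    ("eat", "dog", "cat"),
    ("eat", "cat", "bird"),
    ("eat", "bird", "spider"),
    ("eat", "spider", "fly"),
}

def query(predicate, a, b):
    """Return 'Yes' if the goal matches a stored fact, otherwise 'No'."""
    return "Yes" if (predicate, a, b) in facts else "No"

print(query("eat", "spider", "dog"))  # No
print(query("eat", "dog", "cat"))     # Yes
```

With only facts and no rules, the whole "program" reduces to a set membership test, which is exactly the searching behaviour described above.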
GENERAL
FACTS
Let
us
introduce
a
general
fact
into
our
program
to
say
that
everyone
likes
eating
flies.
Rather
than
having
to
write
a
long
list
of
new
facts,
we
can
use
a
variable,
say
X
to
represent
any
value.
Our
new
fact
would
be:
eat(X, fly).
Variables
must
have
an
upper
case
first
letter.
Now
if
our
goal
is:
?- eat(dog, fly)
or
Rule 1: If a plane has a jet engine and a single seat, then it is a jet fighter.
Rule 2: If a plane has a jet engine and a pressurised cabin, then it can fly above 15000 feet.
Rule 3: If a plane has fixed windows, then it has a pressurised cabin.
Rule 4: If a plane can fly above 15000 feet, then it must be air-conditioned.
Suppose
we
wish
to
prove
using
these
rules,
that
a
jet
with
fixed
windows
must
be
air-conditioned.
Let
us
examine
this
problem
using
firstly
a
backward
chaining
strategy
and
then
using
a
forward
chaining
strategy.
Backward
chaining:
Assume
the
theory
is
true
and
then
ask
questions
to
systematically
verify
the
necessary
rules
are
present.
Using
this
method
we
commence
with
our
desired
goal,
namely
to
prove
that
a
jet
with
fixed
windows
must
be
air-conditioned.
We
scan
down
our
rules
until
we
find
one
that
results
in
an
air-conditioned
plane.
Rule
4
meets
this
need.
This
rule
provides
us
with
a
sub-goal
to
prove,
namely
that
our
plane
can
fly
above
15000
feet.
Rule
2
meets
this
requirement.
Two
new
sub-goals
result:
the
plane
must
have
a
jet
engine
and
a
pressurised
cabin.
We
know
the
plane
has
a
jet
engine,
as
part
of
the
question.
Rule
3
proves
that
it
also
has
to
have
a
pressurised
cabin.
Our
theory
is
then
proved:
jets
with
fixed
windows
must
be
air-conditioned.
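The backward chaining strategy can be sketched in Python as a recursive proof search: to prove a goal, either find it among the known facts or find a rule that concludes it and prove that rule's conditions in turn. This is a simplified illustration; the rule representation and function names are invented, and no cycle detection is included:

```python
# Backward chaining over the four plane rules (simplified sketch).
# Each rule is (set of required conditions, concluded fact).
rules = [
    ({"jet engine", "single seat"}, "jet fighter"),
    ({"jet engine", "pressurised cabin"}, "flies above 15000 feet"),
    ({"fixed windows"}, "pressurised cabin"),
    ({"flies above 15000 feet"}, "air-conditioned"),
]

def prove(goal, known):
    """Try to prove goal from known facts, working backwards through the rules."""
    if goal in known:
        return True  # goal is already a known fact
    for conditions, conclusion in rules:
        # A rule that concludes our goal gives us sub-goals to prove.
        if conclusion == goal and all(prove(c, known) for c in conditions):
            return True
    return False

print(prove("air-conditioned", {"jet engine", "fixed windows"}))  # True
```

The recursion mirrors the chain of sub-goals in the text: air-conditioned needs rule 4, which needs rule 2, whose conditions are settled by the known facts and rule 3.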
Forward
chaining:
Start
from
the
beginning
of
the
facts
and
rules
and
ask
questions
to
determine
which
path
to
follow
next
to
arrive
at
a
conclusion.
Using
this
approach
we
use
the
knowledge
that
our
jet
has
fixed
windows
and
attempt
to
reach
a
conclusion.
We
find
rule
3
that
means
our
plane
must
also
be
pressurised.
So
our
plane
is
a
jet
with
fixed
windows
that
is
pressurised.
Using
this
information,
we
scan
our
rules
again.
We
find
that
rule
2
means
our
plane
can
also
fly
above
15000
feet.
As
a
consequence,
rule
4
becomes
relevant
and
our
plane
must
be
air-conditioned.
Our
theory
is
once
again
proved.
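The forward chaining strategy above can be sketched as repeatedly applying rules to a growing set of known facts until nothing new can be derived. Again this is only an illustrative sketch; the representation is invented:

```python
# Forward chaining over the four plane rules (simplified sketch).
rules = [
    ({"jet engine", "single seat"}, "jet fighter"),
    ({"jet engine", "pressurised cabin"}, "flies above 15000 feet"),
    ({"fixed windows"}, "pressurised cabin"),
    ({"flies above 15000 feet"}, "air-conditioned"),
]

def forward_chain(known):
    """Apply rules to the known facts until no new conclusions appear."""
    known = set(known)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= known and conclusion not in known:
                known.add(conclusion)  # rule fires: add its conclusion
                changed = True
    return known

result = forward_chain({"jet engine", "fixed windows"})
print("air-conditioned" in result)  # True
```

Each pass of the loop corresponds to one scan of the rules in the text: rule 3 fires first, then rule 2, then rule 4.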
LANGUAGE
SYNTAX
Each
of
these
questions
illustrates
different
Prolog
concepts
and
the
suggested
solutions
include
notes
to
further
explain
any
new
concepts.
QUESTION
1
Consider
the
following
Prolog
code:
member(A, [A|B]). % Rule 1
member(A, [B|C]) :- member(A, C). % Rule 2
Square
brackets
are
used
to
denote
a
list.
A
list
is
a
data
structure
which
simply
groups
a
number
of
data
items.
A
vertical
bar
is
used
to
separate
the
head
of
a
list
from
its
tail.
The
head
of
a
list
is
simply
its
first
item
and
the
tail
is
the
remaining
list.
The
% sign is
used
to
begin
a
comment
a) Explain
how
the
goal
member
(5,
[2,5,4]).
would
be
evaluated:
member(5, [2,5,4])
= member(5, [2|[5,4]]) % head 2 does not match 5, so rule 2 applies
= member(5, [5,4]) % new goal from the body of rule 2
= member(5, [5|[4]]) % head 5 matches, so rule 1 applies
Hence, the goal evaluates to True.
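The head/tail recursion of the member rules can be mirrored in Python. This is an illustrative translation, not Prolog itself; the list slicing stands in for the `[Head|Tail]` pattern:

```python
def member(a, lst):
    """Mirror of the Prolog member rules using head/tail recursion."""
    if not lst:
        return False          # empty list: the goal fails
    head, tail = lst[0], lst[1:]
    if head == a:
        return True           # Rule 1: A matches the head of the list
    return member(a, tail)    # Rule 2: otherwise search the tail

print(member(5, [2, 5, 4]))  # True
print(member(7, [2, 5, 4]))  # False
```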
Each
sub-goal
is
tested
in
turn.
A
is
1
which
is
a
valid
member
of
the
list,
same
for
B
and
C.
E
is
then
calculated
as
90,
which
is
less
than
100
so
this
sub-goal
is
true.
Finally
(100-90)/5
is
2
which
is
the
D
value,
so
the
sub-goal
is
true.
b) Explain
why
the
three
member
sub-goals
are
required
within
the
rule:
When
a
goal
does
not
include
values,
then
the
inference engine
must
try
to
assert
values.
The
inference
engine
systematically
tries
each
value
in
the
list
of
possible
values
for
each
variable.
If no set of possible values can be asserted, the goal fails.
Without
the
3
member
clauses,
we
would
then
have
to
find
or
prove
the
values
for
A,
B,
C
in some
other
way.
APPROPRIATE
USES
Prolog
is
the
leading
programming
language
for
AI
applications
and
research.
The
logic
paradigm
is
based
on
formal
predicate
logic,
which
has
historically
been
the
basis
of
much
research
into
how
humans
think,
reason
and
make
decisions.
In
logic
we
describe
the
problem
we
wish
to
solve
rather
than
describing
how
to
solve
the
problem.
The
logic
paradigm
is
particularly
good
at
representing
knowledge
and
then
using
this
knowledge
to
reason.
Often
the
reasoning
performed
by
the
inference
engine
is
essentially
pattern
matching.
This
pattern
matching
ability
is
used
for
a
variety
of
AI
applications
including
natural
language
processing
such
as
grammar
and
spell
checks.
It
is
also
used
to
simulate
the
reasoning
of
human
experts
within
expert
systems.
An
expert
system
is
used
to
perform
functions
that
would
normally
be
performed
by
a
human
expert
in
that
field.
A
doctor
may
use
an
expert
system
to
diagnose
illnesses
or
economists
may
use
an
expert
system
to
forecast
economic
trends.
In
general,
expert
systems
do
not
reach
definite
conclusions
rather
they
weigh
up
the
evidence
and
present
possible
conclusions.
The
reasoning
is
stored
in
a
knowledge
base
that
is
interrogated
by
an
expert
system
shell.
Expert
systems
use
facts
and
rules
to
reach
conclusions
using
similar
techniques
to
those
used
in
Prolog.
Many
expert
systems
also
allow
the
inclusion
of
heuristics.
These
are
rules
that
are
generally
accepted
as
true
within
the
particular
specialist
area. Heuristics
are
criteria
for
deciding
which,
among
several
alternative
courses
of
action,
promises
to
be
the
most
effective
in
order
to
achieve
some
goal.
Heuristics
are
often
described
as
rules
of
thumb.
Heuristics
are
what
give
expert
systems
a
human
feel.
ANOTHER
EXAMPLE
The
following
example
uses
a
family
database.
Some
facts
of
the
family
database
are:
female(X)
meaning
that
X
is
a
female.
male(Y)
meaning
that
Y
is
a
male.
parent(X,
Y)
meaning
that
X
is
the
parent
of
Y.
Some
examples
of
facts
for
this
database
are:
female(karen).
female(rosemary).
female(sun_yi).
female(mahdu).
male(sam).
male(steve).
parent(sam, karen).
parent(rosemary, karen).
parent(mahdu, sam).
parent(steve, sam).
parent(rosemary, sun_yi).
Some
rules
for
the
family
database
are:
grandparent(X, Y) :- parent(X, Z), parent(Z, Y).
X
is
the
grandparent
of
Y
if
X
is
the
parent
of
Z
and
Z
is
the
parent
of
Y
sibling(X, Y) :- parent(Z, X), parent(Z, Y), X \= Y.
X
is
the
sibling
of
Y
when
Z
is
the
parent
of
both
X
and
Y,
and
X
and
Y
are
different
people.
(ie
you
cannot
be
your
own
sibling).
Example
goals
for
the
family
database
are:
grandparent(mahdu, karen)
this
would
evaluate
to
true
based
on
the
facts
defined
grandparent(mahdu, steve)
this
would
evaluate
to
false
based
on
the
facts
defined
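The grandparent rule can be simulated in Python by joining the parent facts. This is an illustrative sketch only; the tuple representation of the facts is invented:

```python
# parent(X, Y) facts: X is the parent of Y.
parent = {
    ("sam", "karen"), ("rosemary", "karen"),
    ("mahdu", "sam"), ("steve", "sam"),
    ("rosemary", "sun_yi"),
}

def grandparent(x, y):
    """X is the grandparent of Y if X is a parent of some Z who is a parent of Y."""
    people = {p for p, _ in parent} | {c for _, c in parent}
    return any((x, z) in parent and (z, y) in parent for z in people)

print(grandparent("mahdu", "karen"))  # True
print(grandparent("mahdu", "steve"))  # False
```

The `any(...)` search over candidate values of Z plays the same role as the inference engine systematically trying values for the unbound variable in the Prolog rule.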
public -
move() { }
fight() { }
getTreasure() { }
Creating
an
object
based
on
a
class
is
called
instantiation.
The following code instantiates the objects mrNice and mrsLovely:
Goodie mrNice = new Goodie()
Goodie mrsLovely = new Goodie()
INHERITANCE
Inheritance
is
the
ability
of
objects
to
take
on
the
characteristics
of
their
parent
class
or
classes.
So
far
we
have
only
created
goodies,
our
program
requires
some
baddies
as
well.
The
baddie
class
declaration
would
be
similar
in
many
respects
to
the
goodie
class
declaration.
The
attributes
are
the
same
except
intelligence
is
removed
and
aggression
and
badType
are
added.
If
we
create
a
super
class
called
character,
this
class
could
contain
all
the
common
attributes
and
methods
used
in
both
goodie
and
baddie
classes.
This
is
the
concept
of
inheritance,
subclasses
inherit
all
the
attributes
and
methods
of
their
super
class,
and
subclasses
also
extend
the
super
class.
The
ability
of
a
class
to
inherit
features
from
other
classes
is
extremely
useful
in
OOP.
Because
each
class
has
been
developed
as
a
robust
unit,
we
know
that
inherited
components
will
also
be
robust.
Development
of
new
objects
with
similar
characteristics
to
existing
objects
is
greatly
simplified
using
inheritance.
Following
are
the
amended
declaration
of
the
classes:
public class character {
private -
agility: int
health: int
public -
move() { }
fight() { }
}
public - getTreasure() {} (added in the goodie subclass)
public - makeNoise() {} (added in the baddie subclass)
ABSTRACTION
79
The
process
of
designing
objects
by
breaking
them
down
into
component
classes
allows
us
to
concentrate
on
the
details
of
our
current
work.
This
is
the
process
of
abstraction.
The
hierarchy
of
classes
is
designed
in
such
a
way
that
each
class
is
reduced
so
as
to
include
only
its
necessary
attributes
and
methods.
Abstraction
allows
us
to
isolate
parts
of
the
problem
and
consider
their solutions
apart
from
the
main
problem.
Encapsulation
and
inheritance
greatly
assist
in
the
process
of
abstraction.
They
allow
us
to
put
the
overall
problem
aside
with
confidence
whilst
sub-problems
are
dealt
with.
CONSTRUCTORS
In
OOP
you
cannot
just
assign
values
to
attributes.
For
example,
you
can't say mrNice.health = 7, as this
would
contradict
the
concept
of
encapsulation.
Encapsulating
attributes
within
objects
is
achieved
by
specifying
that
each
attribute
is
private.
Only
the
class's
methods
can
change
the
value
of
attributes.
To
initialise
the
value
of
an
object's
attributes
requires
a
special
type
of
method
called
a
constructor.
Every
time
an
object
is
instantiated
its
constructor
method
is
executed.
If
you
haven't
written
a
constructor,
then
the
compiler
creates
a
dummy
one
for
you.
Suppose we
want
all
our
baddies
to
be
created
with
both
agility
and
health
set
at
5,
and
also
want
to
be
able
to
create
baddies
of
different
badType
with
different
amounts
of
aggression.
Constructor
methods
always
have
the
same
name
as
the
class.
Following
are
our
class
declarations
with
constructors
that
perform
the
task:
public class character {
private -
agility: int
health: int
public character(int a, int h) {
if (a<0 || a>10) a = 0;
this.agility = a;
if (h<0 || h>10) h = 0;
this.health = h;
}
move() { }
fight() { }
}
public baddie(int a, int b) {
super(5,5);
if (a<0 || a>10) a = 0;
this.aggression = a;
if (b<0 || b>10) b = 0;
this.badType = b;
}
makeNoise() {}
In
the
above
this
is
used
to
specify
the
current
object,
and
super
is
used
to
refer
to
public
items
in
the
super
class.
To
create
a
baddie
object
called
crazyTree
with
agility
and
health
of
5,
aggression
of
3,
and
a
badType
of
2:
Baddie
crazyTree
=
new
Baddie(3,2)
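The constructor logic above can be sketched in Python, where `__init__` plays the role of the constructor and `super().__init__(5, 5)` mirrors the call to super(5,5). This is an illustrative translation of the pseudocode, not the original language:

```python
class Character:
    def __init__(self, agility, health):
        # Reject out-of-range values, as in the pseudocode constructor.
        self.agility = agility if 0 <= agility <= 10 else 0
        self.health = health if 0 <= health <= 10 else 0

class Baddie(Character):
    def __init__(self, aggression, bad_type):
        super().__init__(5, 5)  # every baddie starts with agility and health of 5
        self.aggression = aggression if 0 <= aggression <= 10 else 0
        self.bad_type = bad_type if 0 <= bad_type <= 10 else 0

# Instantiate crazyTree with aggression 3 and badType 2.
crazy_tree = Baddie(3, 2)
print(crazy_tree.agility, crazy_tree.health,
      crazy_tree.aggression, crazy_tree.bad_type)  # 5 5 3 2
```

Note how the subclass constructor first delegates to the superclass constructor before setting its own attributes, exactly as in the baddie constructor above.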
METHODS
public class goodie extends character {
private -
personality: int
intelligence: int
public -
getTreasure() {
int h = this.getHealth()
this.setHealth(h + 1)
int p = this.getPersonality()
this.setPersonality(p + 1)
}
getHealth() { return this.health }
getPersonality() { return this.personality }
setHealth(int h) { this.health = h }
setPersonality(int p) { this.personality = p }
}
When someone playing the adventure game finds a piece of treasure, the command mrNice.getTreasure() is executed,
and
then
health
and
personality
are
incremented.
POLYMORPHISM
Generally
polymorphism
is
the
ability
to
appear
in
more
than
one
form.
In
terms
of
OOP,
polymorphism
refers
to
the
ability
of
the
same
command
to
process
objects
from
the
same
super
class
differently
depending
on
their
subclass.
At
runtime
the
system
chooses
the
precise
method
to
execute
based
on
the
subclass
of
each
particular
object
being
processed.
Polymorphism
allows
the programmer
to
process
objects
from
a
variety
of
different
subclasses
together
efficiently
within
loops.
For
example
when
using
shapes,
the
subclass
method
getArea
would
be
different
for
circles
and
rectangles.
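The shapes example can be sketched in Python: the same getArea call behaves differently depending on the subclass of the object being processed. The class names follow the Point/Circle/Rectangle pseudocode below; the structure here is a simplified illustration:

```python
import math

class Shape:
    def get_area(self):
        raise NotImplementedError  # each subclass provides its own version

class Circle(Shape):
    def __init__(self, radius):
        self.radius = radius
    def get_area(self):
        return math.pi * self.radius ** 2

class Rectangle(Shape):
    def __init__(self, height, width):
        self.height = height
        self.width = width
    def get_area(self):
        return self.height * self.width

# Polymorphism: the same call dispatches to a different method per subclass.
for shape in [Circle(1), Rectangle(2, 3)]:
    print(shape.get_area())
```

This is what allows objects from a variety of subclasses to be processed together in a single loop: the system chooses the precise method at runtime.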
APPROPRIATE
USES
OOP is used
in
cases
where
there
are
related
classes
of
data
that
lend
themselves
naturally
to
a
class
hierarchy.
This
is
often
games
or
programs
with
a
high
amount
of
interaction
e.g.
MS
Word.
OOP
languages
are
used
in
computer
games
and
web-based
database
applications.
ANOTHER
EXAMPLE
class Point {
private
point_no: integer
x_coordinate: double
y_coordinate: double
public
getPoint(point_no): return x_coordinate and y_coordinate
}
sub-class Circle is a Point {
private
circle_no: integer
radius: double
public
getArea(circle_no): return Math.PI*radius*radius
}
sub-class Rectangle is a Point {
private
rectangle_no: integer
height: double
width: double
public
getArea(rectangle_no): return height*width
}
The
software
used
must
allow
students
to:
- define classes, objects, attributes and methods
- make use of inheritance, polymorphism and encapsulation
- use control structures and variables.
The
good
news
is
that
these languages
can
considerably
reduce
development
times.
In
addition,
hardware,
although
not
designed
for
these
paradigms,
is
now
capable
of
executing
applications
at
such
a
speed
that
efficiency
concerns
are
often
of
reduced
importance.
This
is
particularly
true
of
many
popular
OOP
languages
such
as
Java
and
C++.
LEARNING
CURVE
Programming
languages
based
on
the
logical
programming
paradigm
have
not
gained
wide
acceptance
amongst
the
general
software
development
community.
They
are
often
viewed
as
obtuse
and
specialised.
It
is
a
difficult
task
to
thoroughly
learn
languages
based
on
new
paradigms
and
often
the
learning
curve
is
steep.
The
benefits
however
are
often
greater
than
first
envisaged.
OO
languages,
on
the
other
hand,
are
accepted
and
used
by
a
large
proportion
of
software
development
companies.