
Learner Guide

Module 4
Carrying out systems development
US: 14909, 14924, 14930, 14915, 14908
Further Education and Training Certificate: Information Technology: Systems
Development
SAQA 78965 - Level 4 - 175 Credits
Learner Information:

Name & Surname:

ID Number:

Tel/Cell:

Email Address:

Organisation:

Facilitator Name:

Copyright
All rights reserved. The copyright of this document, its previous editions and any annexures thereto,
is protected and expressly reserved. No part of this document may be reproduced, stored in a
retrieval system, or transmitted, in any form or by any means, electronic, mechanical,
photocopying, recording or otherwise, without prior permission.

Learner Guide Introduction

About
The purpose of this Learner Guide is to provide learners with the necessary
knowledge. It provides a comprehensive overview of the module: Carrying out
systems development, and forms part of a series of modularised Learner Guides
that have been developed for the qualification: Further Education and Training
Certificate: Information Technology: Systems Development - Qual ID: 78965, NQF
Level 4, worth 175 Credits.
This Learner Guide has been designed to improve the skills and knowledge of
learners, thus enabling them to effectively and efficiently complete specific
tasks.

Outcomes
At the end of this module, you will be able to:

 Describe the difference between programming in Object Orientated and
Procedural Languages
 Demonstrate an understanding of information systems analysis
 Demonstrate an understanding of the principles of developing software
for the internet
 Design a computer program according to given specifications
 Demonstrate an understanding of testing IT systems against given
specifications

Assessment
The only way to establish whether a learner is competent and has accomplished
the Learning Outcomes is through an assessment process.
Assessment involves collecting and interpreting evidence about the learner’s
ability to perform a task.
This guide may include assessments in the form of activities, assignments, tasks or
projects, as well as workplace practical tasks. Learners are required to perform
tasks on the job to collect enough and appropriate evidence for their portfolio of
evidence, including proof signed by their supervisor that the tasks were performed
successfully.

Qualify
To qualify and receive credits towards the learning program, a registered assessor
will conduct an evaluation and assessment of the learner’s portfolio of evidence
and competency.

Learner Responsibility
Learners are required to attend the training workshops as a group or as specified
by their organization. These workshops are presented in modules, and conducted
by a qualified facilitator. The responsibility of learning rests with the learner, so:
 Be proactive and ask questions,
 Seek assistance and help from your facilitators, if required.

US: 14909, NQF Level 4, Worth 4 Credits
Learning Unit 1: Describe the difference between programming in Object
Orientated and Procedural Languages

Unit Standard Purpose
This unit standard is intended:
 to provide a conceptual knowledge of the areas covered
 for those entering the workplace in the area of systems development
 as additional knowledge for those wanting to understand the areas covered
People credited with this unit standard are able to:
 Describe basic object oriented terminology
 Describe the fundamental differences between procedural and
object oriented programming.
The performance of all elements is to a standard that allows for further
learning in this area.

Learning Assumed to be in Place
The credit value of this unit is based on a person having the prior knowledge
and skills to:
 be able to apply the principles of Procedural Computer Programming

Session 1
SO 1: Describe basic object oriented terminology.

Learning Outcomes (Assessment Criteria)
 The description explains the basic principles of a class.
 The description explains the basic principles of an object.
 The description explains the basic principles of information hiding and
encapsulation.
 The description explains the basic principles of inheritance.
 The description explains the principles of polymorphism.

Introduction
Object-oriented programming
Object-oriented programming is a programming paradigm that uses abstraction to create models
based on the real world. It uses several techniques from previously established paradigms, including
modularity, polymorphism, and encapsulation. Today, many popular programming languages (such
as Java, JavaScript, C#, C++, Python, PHP, Ruby and Objective-C) support object-oriented
programming (OOP).
Object-oriented programming may be seen as the design of software using a
collection of cooperating objects, as opposed to a traditional view in which a program may be seen
as a collection of functions, or simply as a list of instructions to the computer. In OOP, each object is
capable of receiving messages, processing data, and sending messages to other objects. Each object
can be viewed as an independent little machine with a distinct role or responsibility. Object-oriented
programming is intended to promote greater flexibility and maintainability in programming, and is
widely popular in large-scale software engineering. By virtue of its strong emphasis on modularity,
object oriented code is intended to be simpler to develop and easier to understand later on, lending
itself to more direct analysis, coding, and understanding of complex situations and procedures than
less modular programming methods.

Terminology
Class
Defines the characteristics of the Object.
Object
An Instance of a Class.
Property
An Object characteristic, such as color.
Method
An Object capability, such as walk.
Constructor
A method called at the moment of instantiation.
Inheritance
A Class can inherit characteristics from another Class.
Encapsulation
A Class defines only the characteristics of the Object, a method defines only how the method
executes.
Abstraction
The conjunction of complex inheritance, methods, properties of an Object must be able to simulate
a reality model.
Polymorphism
Different Classes might define the same method or property.
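
A minimal, hypothetical Java sketch can tie these terms together. The class names Animal and Cat, and the members shown, are illustrative assumptions only, not taken from the unit standard:

// Illustrative sketch of the terminology above.
public class Animal {                      // Class: defines the characteristics of its objects
    private String color;                  // Property: an object characteristic, such as color

    public Animal(String color) {          // Constructor: called at the moment of instantiation
        this.color = color;
    }

    public void walk() {                   // Method: an object capability, such as walk
        System.out.println("An animal of color " + color + " walks.");
    }
}

class Cat extends Animal {                 // Inheritance: Cat acquires characteristics from Animal
    public Cat(String color) {
        super(color);
    }

    @Override
    public void walk() {                   // Polymorphism: a different class defines the same method
        System.out.println("A cat pads along silently.");
    }
}

class Demo {
    public static void main(String[] args) {
        Animal pet = new Cat("black");     // Object: an instance of a class
        pet.walk();                        // runs the Cat version of walk()
    }
}

Here Animal is the class, pet is an object (instance), color is a property, walk() is a method, the constructor runs at instantiation, Cat inherits from Animal, and the overridden walk() illustrates polymorphism.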

The description explains the basic principles of an object.
How People Approach Object-Oriented Technology
Object-oriented technology is both immense and far-reaching. End users of computer systems and
computer-based systems notice the effects of object-oriented technology in the form of increasingly
easy-to-use software applications and operating systems and in more flexible services being
provided by such industries as banking, telecommunications, and cable television. For the software
engineer, object-oriented technology encompasses object-oriented programming languages, object-
oriented development methodologies, management of object-oriented projects, object-oriented
computer hardware, and object-oriented computer aided software engineering, among others. It is
not surprising, therefore, that there is some confusion regarding object-oriented terms and
concepts. In this article, we will provide the reader with working definitions for object-oriented
terms and concepts that are necessary for a reader to acquire a basic understanding of object-
oriented technology.
Many of the terms commonly used in object-oriented technology were originally used to describe
object-oriented programming (coding) concepts. Specifically, although the terms were borrowed
from a non-computer-software perspective, they were first used extensively to describe concepts
embodied in object-oriented programming languages, such as Smalltalk, C++, and Eiffel. However,
these terms are quite useful even if one never intends to write any software at all. For example, an
industrial modeler could create an object-oriented model of a plastics manufacturing facility.
Molding machines, plastic parts, and even the "recipes" (proportional combinations) of the
chemicals used to create the various plastics could all be described in object-oriented terms. Further,
dynamic and static relationships among these items could also be described in object-oriented
terms. Finally, keep in mind that there is no one ultimate set of definitions for object-oriented terms
and concepts. Depending on who you are talking to, terms and definitions will vary slightly. This is
normal; in different parts of the United States, the same breakfast item might be referred to as a
pancake, a griddle cake, a flapjack, or a hot cake. Even in technical arenas, this variation in
terminology is common. A chemist might use the terms “valence” and “oxidation state” to identify
the same concept.

Object-Oriented Terms and Concepts


Objects
Objects are the physical and conceptual things we find in the universe around us. Hardware,
software, documents, human beings, and even concepts are all examples of objects. For purposes of
modeling his or her company, a chief executive officer could view employees, buildings, divisions,

documents, and benefits packages as objects. An automotive engineer would see tires, doors,
engines, top speed, and the current fuel level as objects. Atoms, molecules, volumes, and
temperatures would all be objects a chemist might consider in creating an object-oriented
simulation of a chemical reaction. Finally, a software engineer would consider stacks, queues,
windows, and check boxes as objects.
Objects are thought of as having state. The state of an object is the condition of the object, or a set
of circumstances describing the object. It is not uncommon to hear people talk about the "state
information" associated with a particular object. For example, the state of a bank account object
would include the current balance, the state of a clock object would be the current time, the state of
an electric light bulb would be "on" or "off." For complex objects like a human being or an
automobile, a complete description of the state might be very complex. Fortunately, when we use
objects to model real world or imagined situations, we typically restrict the possible states of the
objects to only those that are relevant to our models. We also think of the state of an object as
something that is internal to an object. For example, if we place a message in a mailbox, the
(internal) state of the mailbox object is changed, whereas the (internal) state of the message object
remains unchanged. Sometimes people think of objects as being strictly static. That is, the state of
an object will not change unless something outside of the object requests the object to change its
state. Indeed, many objects are passive (static). A list of names does not spontaneously add new
names to itself, nor would we expect it to spontaneously delete names from itself. However, it is
possible for some objects to change their own state. If an object is capable of spontaneously
changing its own state, we refer to it as an "object with life." (Objects with life are sometimes also
called "active objects" or "actors.") Clocks and timers are common examples of objects with life. If
we were modeling a business process, we would recognize that salespeople and customers were
also objects with life.
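
As a hedged illustration of state, the following Java sketch (the BankAccount class and its members are assumptions made for this example) shows a passive object whose internal state changes only when something outside it asks for a change:

// Illustrative sketch: an object whose state is internal and changes only on request.
public class BankAccount {
    private double balance;              // the state of the account object

    public BankAccount(double openingBalance) {
        this.balance = openingBalance;
    }

    public void deposit(double amount) { // something outside the object requests a state change
        balance += amount;
    }

    public double getBalance() {         // the state can be inspected, but not altered directly
        return balance;
    }

    public static void main(String[] args) {
        BankAccount account = new BankAccount(100.0);
        account.deposit(50.0);                       // the account is passive: it changes only when asked
        System.out.println(account.getBalance());    // prints 150.0
    }
}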

The description explains the basic principles of a class.


Classes, Metaclasses, Parameterized Classes, and Exemplars
There are two broad categories of objects: classes and instances. Users of object-oriented
technology usually think of classes as containing the information necessary to create instances, i.e.,
the structure and capabilities of an instance is determined by its corresponding class. There are
three commonly used (and different) views on the definition for "class":

 A class is a pattern, template, or blueprint for a category of structurally identical items. The items
created using the class are called instances. This is often referred to as the "class as a `cookie
cutter'" view. As you might guess, the instances are the "cookies."
 A class is a thing that consists of both a pattern and a mechanism for creating items based on that
pattern. This is the "class as an `instance factory'" view; instances are the individual items that are
"manufactured" (created) using the class's creation mechanism.
 A class is the set of all items created using a specific pattern. Said another way, the class is the set
of all instances of that pattern.
In this article, we will use the definition of a "class as an `instance factory.'"
We should note that it is possible for an instance of a class to also be a class. A metaclass is a class
whose instances themselves are classes. This means when we use the instance creation mechanism
in a metaclass, the instance created will itself be a class. The instance creation mechanism of this
class can, in turn, be used to create instances -- although these instances may or may not themselves
be classes.
A concept very similar to the metaclass is the parameterized class. A parameterized class is a
template for a class wherein specific items have been identified as being required to create non-
parameterized classes based on the template. In effect, a parameterized class can be viewed as a "fill
in the blanks" version of a class. One cannot directly use the instance creation mechanism of a
parameterized class. First, we must supply the required parameters, resulting in the creation of a
non-parameterized class. Once we have a non-parameterized class, we can use its creation
mechanisms to create instances. In this article, we will use the term "class" to mean metaclass,
parameterized class, or a class that is neither a metaclass nor a parameterized class. We will make a
distinction only when it is necessary to do so. Further, we will occasionally refer to "non-class
instances." A non-class instance is an instance of a class, but is itself not a class. An instance of a
metaclass, for example, would not be a non-class instance.
In this article, we will sometimes refer to "instantiation." Instantiation has two common meanings:

 as a verb, instantiation is the process of creating an instance of a class, and
 as a noun, an instantiation is an instance of a class.
Some people restrict the use of the term "object" to instances of classes. For these people, classes
are not objects. However, when these people are confronted with the concepts of metaclasses and
parameterized classes, they have difficulty attempting to resolve the "problems" these concepts
introduce. For example, is a class that is an instance of a metaclass an object -- even though it is
itself a class? In this article, we will use the term "object" to refer to both classes and their instances.
We will only distinguish between the two when needed.
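
Java has no metaclasses in the sense described above, but its generic types are a reasonable analogue of a parameterized class: the type parameter must be supplied before instances can be created sensibly. The following sketch is an illustrative analogy using an assumed Box class, not a definition from the text:

import java.util.ArrayList;
import java.util.List;

// A parameterized "class template": Box<T> is a fill-in-the-blanks class.
// Supplying the parameter (e.g. Box<String>) yields a non-parameterized class
// whose instance creation mechanism can then be used as normal.
class Box<T> {
    private final List<T> contents = new ArrayList<>();

    public void put(T item) {
        contents.add(item);
    }

    public T take() {
        return contents.remove(contents.size() - 1);
    }
}

public class ParameterizedClassDemo {
    public static void main(String[] args) {
        Box<String> letters = new Box<>();    // parameter supplied; instances can now be created
        letters.put("a");
        System.out.println(letters.take());   // prints "a"
    }
}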
Black Boxes and Interfaces
Objects are "black boxes." Specifically, the underlying implementations of objects are hidden from
those that use the object. In object-oriented systems, it is only the producer (creator, designer, or
builder) of an object that knows the details about the internal construction of that object. The
consumers (users) of an object are denied knowledge of the inner workings of the object, and must
deal with an object via one of its three distinct interfaces:
 the "public" interface. This is the interface that is open (visible) to everybody.
 the "inheritance" interface. This is the interface that is accessible only by direct specializations of
the object. (We will discuss inheritance and specialization later in this chapter.) In class-based
object-oriented systems, only classes can provide an inheritance interface.
 the "parameter" interface. In the case of parameterized classes, the parameter interface defines
the parameters that must be supplied to create an instance of the parameterized class.
Another way of saying that an item is in the public interface of an object is to say that the object
"exports" that item. Similarly, when an object requires information from outside of itself (e.g., as
with the parameters in a parameterized class), we can say that the object needs to "import" that
information.
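
A small, assumption-laden Java sketch of the three interfaces: public members form the public interface, protected members form the inheritance interface available only to specializations, and the generic type parameter plays the role of the parameter interface. The class names Counter and LoggingCounter are hypothetical:

// Hypothetical sketch of the three interfaces of an object, in Java terms.
class Counter<T> {                    // "parameter" interface: the type parameter T must be supplied
    private int count = 0;            // hidden implementation: not part of any interface

    public void record(T item) {      // "public" interface: open (visible) to everybody
        count++;
        onRecord(item);
    }

    public int total() {              // also part of the public interface
        return count;
    }

    protected void onRecord(T item) { // "inheritance" interface: accessible only to specializations
        // default: do nothing
    }
}

class LoggingCounter<T> extends Counter<T> {
    @Override
    protected void onRecord(T item) { // a specialization using the inheritance interface
        System.out.println("Recorded: " + item);
    }
}

public class InterfacesDemo {
    public static void main(String[] args) {
        Counter<String> c = new LoggingCounter<>(); // consumers see only the public interface
        c.record("order-1");
        System.out.println(c.total());              // prints 1
    }
}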

The description explains the basic principles of information hiding and encapsulation
Encapsulation vs. Information Hiding
How is encapsulation related to information hiding? You can think of it as two ways of referring to
the same idea. Information hiding is the goal, and encapsulation is the technique you use to
accomplish that goal. Encapsulation can be defined as the hiding of internal data representation
and implementation details in an object. The only way to access the data within an encapsulated
object is to use defined operations. By using encapsulation, you are enforcing information hiding.
Many object-oriented languages use keywords to specify that methods and attributes should be
hidden. In Java, for instance, adding the private keyword to a method will ensure that only code
within the object can execute it. There is no such keyword in JavaScript; we will instead use the
concept of the closure to create methods and attributes that can only be accessed from within the
object. It is more complicated (and confusing) than just using keywords, but the same end result can
be achieved.

Encapsulation seems to be a combination of one or more of:

 Grouping of related things together
 GateKeeper (state or data protection)
Information hiding, on the other hand, is:
 Hiding details of implementation

Overview. The term encapsulation is often used interchangeably with information hiding. Not all
agree on the distinctions between the two though; one may think of information hiding as being the
principle and encapsulation being the technique. A software module hides information by
encapsulating the information into a module or other construct which presents an interface. A
common use of information hiding is to hide the physical storage layout for data so that if it is
changed, the change is restricted to a small subset of the total program. For example, if a three-
dimensional point (x,y,z) is represented in a program with three floating point scalar variables and
later, the representation is changed to a single array variable of size three, a module designed with
information hiding in mind would protect the remainder of the program from such a change.
In object-oriented programming, information hiding (by way of nesting of types) reduces software
development risk by shifting the code's dependency on an uncertain implementation (design
decision) onto a well-defined interface. Clients of the interface perform operations purely through it
so if the implementation changes, the clients do not have to change.
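
The three-dimensional point example above can be sketched in Java. This is a minimal sketch under the assumption that clients only ever call the defined operations; because the representation is private, changing it from three scalars to an array does not affect them:

// Information hiding: the representation of a 3D point is private, so it can change
// (three scalars earlier, a single array of size three now) without affecting client code.
public class Point3D {
    private final double[] coords;   // internal representation, hidden from clients

    public Point3D(double x, double y, double z) {
        this.coords = new double[] { x, y, z };
    }

    // The only way to access the data is through defined operations.
    public double getX() { return coords[0]; }
    public double getY() { return coords[1]; }
    public double getZ() { return coords[2]; }

    public static void main(String[] args) {
        Point3D p = new Point3D(1.0, 2.0, 3.0);
        System.out.println(p.getX() + ", " + p.getY() + ", " + p.getZ());
    }
}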

The Description explains the basic principles of inheritance.


Specialization and Inheritance
Aggregation is not the only way in which two objects can be related. One object can be a
specialization of another object. Specialization is either:
 the process of defining a new object based on a (typically) more narrow definition of an existing
object, or
 an object that is directly related to, and more narrowly defined than, another object.

Specialization is usually associated with classes. It is usually only in the so-called "classless" object-
oriented systems that we think of specialization for objects other than classes. Depending on their
technical background, there are a number of different ways in which people express specialization.
For example, those who are familiar with an object-oriented programming language called Smalltalk
refer to specializations as "subclasses" and to the corresponding generalizations of these
specializations as "superclasses." Those with a background in the C++ programming language use the
term "derived class" for specialization and "base class" for corresponding generalizations. It is
common to say that everything that is true for a generalization is also true for its corresponding
specialization. We can, for example, define "checking accounts" and "savings accounts" as
specializations of "bank accounts." Another way of saying this is that a checking account is a kind of
bank account, and a savings account is a kind of bank account. Still another way of expressing this
idea is to say that everything that was true for the bank account is also true for the savings account
and the checking account. In an object-oriented context, we speak of specializations as "inheriting"
characteristics from their corresponding generalizations. Inheritance can be defined as the process
whereby one object acquires (gets, receives) characteristics from one or more other objects. Some
object-oriented systems permit only single inheritance, a situation in which a specialization may
only acquire characteristics from a single generalization. Many object-oriented systems, however,
allow for multiple inheritance, a situation in which a specialization may acquire characteristics from
two or more corresponding generalizations.
Our previous discussion of the bank account, checking account, and savings account was an example
of single inheritance. A telescope and a television set are both specializations of "device that enables
one to see things far away." A television set is also a kind of "electronic device." You might say that a
television set acquires characteristics from two different generalizations, "device that enables one to
see things far away" and "electronic device." Therefore, a television set is a product of multiple
inheritance.

Inheritance
One important characteristic of object-oriented languages is inheritance. Inheritance refers to the
capability of defining a new class of objects that inherits from a parent class. New data elements and

methods can be added to the new class, but the data elements and methods of the parent class are
available for objects in the new class without rewriting their declarations.
For example, Java uses the following syntax for inheritance:
public class B extends A {
    // declarations for new members
}
Objects in class B will have all members that are defined for objects in class A. In addition, they have
the new members defined in the declaration of class B. The extends keyword signals that class B
inherits from class A. We also say that B is a subclass of A and that A is the parent class of B.
In some languages, Java for example, the programmer has some control over which members are
inherited. In Java, a member is defined with a keyword indicating its level of accessibility. The
keyword private indicates that the member is not inherited by subclasses. This capability is not often
used.
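
A slightly more concrete, hypothetical version of the syntax above: B inherits A's accessible members and adds its own, while A's private member stays hidden from B except through a public method. The class names and values are assumptions for illustration:

// Hypothetical illustration of the inheritance syntax shown above.
class A {
    protected int shared = 1;          // accessible to subclasses
    private int secret = 2;            // private: not accessible from subclasses

    public int getSecret() {           // but it can still be exposed through a public method
        return secret;
    }
}

class B extends A {
    private int extra = 3;             // a new member added by the subclass

    public int sum() {
        return shared + getSecret() + extra;  // uses inherited and new members
    }
}

public class InheritanceDemo {
    public static void main(String[] args) {
        B b = new B();
        System.out.println(b.sum());   // prints 6
    }
}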

The description explains the principles of polymorphism


Polymorphism and Overloading
Polymorphism refers to the capability of having methods with the same names and parameter types
exhibit different behavior depending on the receiver. In other words, you can send the same
message to two different objects and they can respond in different ways. More generally, the
capability of using names to mean different things in different contexts is called overloading. This
also includes allowing two methods to have the same name but different parameters types, with
different behavior depending on the parameter types. Note that a language could support some
kinds of overloading without supporting polymorphism. In that case, most people in the object-
oriented community would not consider it to be an object-oriented language. Polymorphism and
overloading can lead to confusion if used excessively. However, the capability of using words or
names to mean different things in different contexts is an important part of the power of natural
languages. People begin developing the skills for using it in early childhood.
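
A short, hypothetical Java sketch of both ideas (the Shape, Circle and Square classes are assumptions for illustration): the same message, area(), produces different behaviour depending on the receiver, while the two describe methods show overloading:

// Polymorphism: the same method name behaves differently depending on the receiver.
class Shape {
    public double area() { return 0.0; }
}

class Circle extends Shape {
    private final double r;
    Circle(double r) { this.r = r; }
    @Override
    public double area() { return Math.PI * r * r; }
}

class Square extends Shape {
    private final double side;
    Square(double side) { this.side = side; }
    @Override
    public double area() { return side * side; }
}

public class PolymorphismDemo {
    // Overloading: the same name, describe, with different parameter types.
    static String describe(Shape s) { return "shape with area " + s.area(); }
    static String describe(String label) { return "labelled " + label; }

    public static void main(String[] args) {
        Shape[] shapes = { new Circle(1.0), new Square(2.0) };
        for (Shape s : shapes) {
            System.out.println(describe(s));   // each object responds in its own way
        }
        System.out.println(describe("a plain label"));
    }
}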
Members
Objects can have their own data, including variables and constants, and their own methods. The
variables, constants, and methods associated with an object are collectively referred to as
its members or features.

Classes
Many object-oriented languages use an important construction called a class. A class is a category of
objects, classified according to the members that they have. Like objects, classes can also be
implemented in classical languages, using separate compilation and structs for encapsulation. The
object-oriented language Java uses the following syntax for class definitions:
public class A {
    // declarations for members
}
Each object in the class will have all members defined in the declarations.
Class Members and Instance Members
In many object-oriented languages, classes are objects in their own right (to a greater or lesser
extent, depending on the language). Their primary function is as factories for objects in the category.
A class can also hold data variables and constants that are shared by all of its objects and can handle
methods that deal with an entire class rather than an individual object. These members are
called class members or, in some languages (C++ and Java, for example), static members. The
members that are associated with objects are called instance members.
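
A hedged Java sketch of the distinction, using an assumed Account class: the static member is shared by the whole class, while each object carries its own instance members:

// Class (static) members are shared by all objects of the class;
// instance members belong to each individual object.
public class Account {
    private static int accountsOpened = 0;   // class member: one copy for the whole class
    private final int number;                // instance members: one copy per object
    private double balance;

    public Account() {
        accountsOpened++;
        this.number = accountsOpened;
    }

    public static int totalAccounts() {      // class method: deals with the class as a whole
        return accountsOpened;
    }

    public int getNumber() {                 // instance method: deals with one object
        return number;
    }

    public static void main(String[] args) {
        Account first = new Account();
        Account second = new Account();
        System.out.println(second.getNumber());        // prints 2
        System.out.println(Account.totalAccounts());   // prints 2
    }
}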

Session 2
SO 2: Describe the fundamental differences between procedural and object oriented
programming.

Learning Outcomes (Assessment Criteria)
 The description explains the use of functions and variables in structured
programming, using simple examples.
 The description compares encapsulation of data and functions in objects
versus procedural programming.
 The description identifies possible classes for simple examples.

Procedural vs Object Oriented Programming


ColdFusion started its life as a procedural language and only in more recent times gained object
oriented features. As a result of this history there is a substantial number of procedural ColdFusion
systems in existence today. Considering this, it's worth taking a brief look at what it means to write
procedural code and then see how this differs from an object oriented approach. To make this
comparison we need to first consider the problem that both approaches help us to solve. When
programming any system you are essentially dealing with data and the code that changes that data.
These two fundamental aspects of programming are handled quite differently in procedural systems
compared with object oriented systems, and these differences require different strategies in how we
think about writing code.

Procedural programming
In procedural programming our code is organised into small "procedures" that use and change our
data. In ColdFusion, we write our procedures as either custom tags or functions. These functions
typically take some input, do something, then produce some output. Ideally your functions would
behave as "black boxes" where input data goes in and output data comes out. The key idea here is
that our functions have no intrinsic relationship with the data they operate on. As long as you
provide the correct number and type of arguments, the function will do its work and faithfully return
its output. Sometimes our functions need to access data that is not provided as a parameter, i.e., we
need to access data that is outside the function. Data accessed in this way is considered "global" or
"shared" data.

So in a procedural system our functions use data they are "given" (as parameters) but also directly
access any shared data they need.
Object oriented programming
In object oriented programming, the data and related functions are bundled together into an
"object". Ideally, the data inside an object can only be manipulated by calling the object's functions.
This means that your data is locked away inside your objects and your functions provide the only
means of doing something with that data. In a well designed object oriented system objects never
access shared or global data, they are only permitted to use the data they have, or data they are
given.
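
The contrast can be sketched in Java rather than ColdFusion (the names taxRate, addTax and Invoice are assumptions for illustration): the procedural-style function reaches out to shared data it was not given, while the object bundles its data with the only functions allowed to touch it:

// Illustrative contrast, in Java rather than the ColdFusion discussed above.
public class ProceduralVsObjectOriented {

    // Procedural style: the function depends on shared/global data it was not given.
    static double taxRate = 0.15;                       // shared data
    static double addTax(double amount) {
        return amount + amount * taxRate;
    }

    // Object oriented style: data and the functions that use it are bundled together.
    static class Invoice {
        private double amount;                          // locked away inside the object
        private double rate;

        Invoice(double amount, double rate) {
            this.amount = amount;
            this.rate = rate;
        }

        double totalWithTax() {                         // the only way to work with the data
            return amount + amount * rate;
        }
    }

    public static void main(String[] args) {
        System.out.println(addTax(100.0));                            // 115.0, using shared taxRate
        System.out.println(new Invoice(100.0, 0.15).totalWithTax());  // 115.0, using its own data
    }
}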

Global and shared data


We can see that one of the principal differences is that procedural systems make use of shared and
global data, while object oriented systems lock their data privately away in objects. Let's consider a
scenario where you need to change a shared variable in a procedural system. Perhaps you need to
rename it, change it from a string to a numeric, change it from a struct to an array, or even remove it
completely. In a procedural application you would need to find and change each place in the code
where that variable is referenced. In a large system this can be a widespread and difficult change to
make.

In an object oriented system we know that all variables are inside objects and that only functions
within those objects can access or change those variables. When a variable needs to be changed
then we only need to change the functions that access those variables. As long as we take care that
the functions' input arguments and output types are not changed, then we don't need to change any
other part of the system.

The cost of OO
Object oriented design is complicated to do well, and a substantial amount of time is likely to be
required to learn it in depth. If you have been developing procedural systems for some time then
object oriented concepts will require learning a different way of thinking which is always challenging
and requires effort. However, the time to learn is not the only cost. Once you start learning, you may
question yourself time and time again about whether you are writing code "correctly". Your productivity
may be affected as you try different ideas, aiming for a good object oriented solution. A further cost
to consider is not specific to OO, but is specific to OO within ColdFusion. You may read many object
oriented articles and books but you cannot apply their teachings blindly in your ColdFusion
applications. There is a performance factor associated with creating objects in ColdFusion so
applying many of the pure object oriented ideas can adversely affect your application. This then adds
an additional challenge in knowing when not to apply some object oriented ideas.

A Real-World Example
Okay, that's enough theory. We're going to put both types of programming to the test with a real-
world example. Let's say that you are working for a vehicle parts manufacturer that needs to update
its online inventory system. Your boss tells you to program two similar but separate forms for a
website, one form that processes information about cars and one that does the same for trucks.
For cars, we will need to record the following information:
 Color
 Engine Size
 Transmission Type
 Number of doors
For trucks, the information will be similar, but slightly different. We need:
 Color
 Engine Size
 Transmission Type
 Cab Size
 Towing Capacity
In procedural programming, you would write the code first to process the car form and then the
code for the truck form. With object-oriented programming, you would write a base class called
vehicle that would record the common characteristics that we need from both trucks and cars. In
this case, the vehicle class will record:
 Color
 Engine Size
 Transmission Type
We'll make each one of those characteristics into a separate method. The color method, for
example, could take the color of the vehicle as a parameter and do something with it, like storing it
in a database. Next, we will create two more classes: truck and car, both of which will inherit all of
the methods of the vehicle class and extend it with methods that are unique to them. The car class
will have a method called numberOfDoors and the truck class will have the methods cabSize and
towingCapacity. Okay, so let’s assume that we have a working example for both procedural and OO
programming. Now, let's run through a few scenarios that we could come across in a normal working
environment. You know the type of scenario because it always begins with the thought: I really wish
my boss didn't send this in an email request at 4pm on a Friday afternoon.
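
One possible shape for this design is sketched below in Java; the method bodies (simply printing instead of writing to a database) and the exact signatures are assumptions for illustration, not the prescribed solution:

// A hedged sketch of the vehicle example: common characteristics live in the base class,
// and each specialization adds only what is unique to it.
class Vehicle {
    public void color(String value)            { store("color", value); }
    public void engineSize(String value)       { store("engineSize", value); }
    public void transmissionType(String value) { store("transmissionType", value); }

    protected void store(String field, String value) {
        // assumption: in the real system this might write to a database instead
        System.out.println(field + " = " + value);
    }
}

class Car extends Vehicle {
    public void numberOfDoors(int doors) { store("numberOfDoors", String.valueOf(doors)); }
}

class Truck extends Vehicle {
    public void cabSize(String size)         { store("cabSize", size); }
    public void towingCapacity(String value) { store("towingCapacity", value); }
}

public class InventoryDemo {
    public static void main(String[] args) {
        Car car = new Car();
        car.color("red");          // inherited from Vehicle
        car.numberOfDoors(4);      // unique to Car

        Truck truck = new Truck();
        truck.color("white");
        truck.towingCapacity("3500 kg");
    }
}

With this structure in place, the scenarios that follow only ever touch the class that owns the affected behaviour.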
Scenario 1
Suppose that we suddenly need to add a bus form, that records the following information:

 Color
 Engine Size
 Transmission Type
 Number of passengers
Procedural: We need to recreate the entire form, repeating the code for Color, Engine Size, and
Transmission Type.
OOP: We simply extend the vehicle class with a bus class and add the method numberOfPassengers.
Scenario 2
Instead of storing color in a database like we previously did, for some strange reason our client
wants the color emailed to him.
Procedural: We change three different forms: cars, trucks, and buses to email the color to the client
rather than storing it in the database.
OOP: We change the color method in the vehicle class and because the car, truck, and bus classes all
extend (or inherit from, to put it another way) the vehicle class, they are automatically updated.
Scenario 3
We want to move from a generic car to specific makes, for example: Nissan and Mazda.
Procedural: We create a new form for each make, repeating all of the code for generic car
information and adding the code specific to each make.
OOP: We extend the car class with a nissan class and a mazda class and add methods for each set of
unique information for that car make.
Scenario 4
We found a bug in the transmission type area of our form and need to fix it.
Procedural: We open and update each form.
OOP: We fix the transmissionType method in the vehicle class and the change propagates to every
class that inherits from it.
Wrapping It Up
As you can see from the above scenarios, employing an OOP style has significant advantages over
procedural programming, especially as your scale increases. Consider the savings we would receive
from OOP in terms of repeated code, flexibility, and maintenance if we also had to add forms for
boats, motorcycles, planes, go-karts, ATVs, snowmobiles, etc. Objects and methods are also far
easier to test than procedural programming by using unit testing to test results. Does this mean that
you should never use procedural programming? Not necessarily. If you’re doing a mockup or a
proof-of-concept app, you might not have the time to make everything object-oriented, and so I
think it might be better to use procedural programming for a prototype, but it would be best
to make the production product in an OO manner.
This has been just a brief foray into a very large subject, but I hope that you've been able to get a
better understanding of procedural vs. object-oriented programming and when and how to use
each. If this tutorial has been helpful to you, please bookmark it for your reference and share it with
your friends and colleagues.

We can summarize the differences as follows:


• Procedural Programming
– top down design
– create functions to do small tasks
– communicate by parameters and return values

• Object Oriented Programming


– design and represent objects
– determine relationships between objects
– determine attributes each object has
– determine behaviours each object will respond to
– create objects and send messages to them to use or manipulate their attributes

US: 14924, NQF Level 4, Worth 3 Credits
Learning Unit 2: Demonstrate an understanding of information systems
analysis

Unit Standard Purpose
This unit standard is intended:
 to provide fundamental knowledge of the areas covered
 for those working in, or entering the workplace in the area of Systems
Development
People credited with this unit standard are able to:
 Describe information systems analysis
 Explain different systems analysis techniques used in the industry
The performance of all elements is to a standard that allows for further
learning in this area.

Learning Assumed to be in Place
The credit value of this unit is based on a person having prior knowledge and
skills to:
 Demonstrate an understanding of fundamental mathematics (at
least NQF level 3).
 Demonstrate PC competency skills (All End-User Computing unit
standards.)

Session 1
SO 1: Describe information systems analysis.

Learning Outcomes (Assessment Criteria)
 The description identifies the position of information systems analysis in
the software development life cycle.
 The description explains the purpose of information systems analysis.
 The description outlines the functions of the information systems analyst.
 The description outlines information gathering techniques used by
information systems analysts.
 The description explains different systems analysis techniques used in the
industry.

Describe information systems analysis.


Stages of the Systems Development Life Cycle
The systems development life cycle (SDLC) gives organizations a means of controlling a large
development project by dividing it into manageable stages with well-defined outputs. SDLC
consumes a significant amount of resources itself - it takes time and money to manage projects in
such an elaborate fashion.
Characteristics of the SDLC:
1. Every stage is defined in terms of the activities and responsibilities of the development team
members.
2. Each stage terminates in a milestone defined in terms of the subproducts to be delivered, such as
system requirements specifications or coded and tested software modules.
3. Developers often need to rework the deliverables produced in earlier
stages in the light of the experience they gain as the development effort progresses and to
accommodate legitimate user requests for change.
4. The effort expended on developing an information system is generally surpassed by the efforts
needed for the system's maintenance, which may cost over time twice as much as the development.
Many organizations spend 60 to 70 percent of their IS budgets on systems maintenance.
5. Producing extensive system documentation during the development is necessary to support
maintenance. It is desirable that the documentation necessary for system operation and
maintenance be produced as the by-product of the development process.
Stages of the SDLC and their deliverables:
Feasibility study - recommendation to proceed and system proposal, or recommendation to abandon
Requirements analysis - requirements specifications
Logical design - conceptual design of programs and databases
Physical design - detailed design of system modules and databases; specification of system hardware
and software
Coding and testing - accepted system with complete documentation
Conversion - installed operational system
Post-implementation review - recommendation for enhancement of the system and of the
development method; recommendation for organizational adjustment
Systems Analysis
The task of systems analysis is to establish in detail what the proposed system will do (as opposed
to how this will be accomplished technologically).
Characteristics of systems analysis include:
1. Establishing the objectives of the new system, conducting an analysis of its costs and the benefits
to be derived from it, and outlining the process of systems implementation.
2. Detailed systems analysis must also establish who the system users are, what information they
should get and in what form, and how this information will be obtained from the incoming data and
from the databases.
Feasibility Study
The main objective of the feasibility study, the introductory phase of development, is to establish
whether the proposed system is feasible or, to be more accurate, desirable, before resources are
committed to the full-scale project. In a feasibility study, systems analysts perform a preliminary
investigation of the business problem or opportunity represented by the proposed system
development project. Specifically, they undertake the following tasks:
1. Define the problem or the opportunity which the system will address
2. Establish the overall objectives of the new system
3. Identify the users of the system
4. Establish the scope of the system.
5. Propose general hardware/systems software options for the new system
6. Perform a make-or-buy analysis for the application
7. Perform a value assessment, such as the cost-benefit analysis, based in part on the estimate of the
development project size
8. Assess the project risk
9. Recommend whether to proceed with the project, or whether to abandon the project

The five essential aspects of a feasibility study include:
1. Legal feasibility - will the proposed system conform to laws and regulations?
2. Ethical feasibility - will the proposed system conform to the norms of ethics?
3. Technological feasibility - do we have the technology and skills needed to develop and operate the
system?
4. Economic feasibility - will the system result in competitive advantage or another payoff to the
enterprise?
5. Organizational feasibility - will the organizational change result in an acceptable quality of working
life for those affected by the system?
- Will the political changes caused by the system be accepted?
- Will the organization benefit as a whole?

Requirements Analysis
The principal objective of requirements analysis, the main systems analysis stage, is to produce
the requirements specifications for the system, which set out in detail what the system will do.
Requirements (also known as functional) specifications establish an understanding between the
system developers, its future users, the management and other stakeholders.
Requirements analysis needs to establish:
1. What outputs the system will produce, what inputs will be needed, what processing steps will be
necessary to transform inputs into outputs, and what data stores will have to be maintained by the
system.
2. What volumes of data will be handled, what numbers of users in various categories will be
supported and with what levels of service, what file and database capacities the system will need to
maintain, and other quantitative estimates of this type.
3. What interface will be provided for the users to interact with the system, based on the skills and
computer proficiency of the intended users.
4. What control measures will be undertaken in the system
Techniques for information gathering in systems analysis:
Techniques for gathering information during systems analysis can be grouped into four categories. A
combination of these approaches is usually employed. They include:
1. Asking the users
2. Deriving from an existing system
3. Deriving from the analysis of the business area
4. Experimenting with the system under development

Deriving from an existing system
The requirements for a proposed system may be derived from an existing information system. The
possibilities are:
1. The system that will be replaced by the proposed system
2. A system similar to the proposed one that has been installed elsewhere and is accessible to the
analyst
3. A proprietary package, whose functionality may be analyzed.
Data analysis: the requirements for the proposed system are derived from the data contained in the
outputs of the existing system and inputs to it. The data can also be obtained from the existing
programs and systems documentation, such as procedures, manuals, organization charts, file
descriptions, operations manuals, and so forth.
Document analysis: concentrates on the analysis of business documents, such as orders or invoices.
Observing work: by observing the work of an intended user, or by actually participating in the work,
the analyst learns first-hand about the inadequacies of the existing system.

The description outlines the functions of the information systems analyst.


A systems analyst researches problems, plans solutions, recommends software and systems, at least
at the functional level, and coordinates development to meet business or other requirements.
Although they may be familiar with a variety of programming languages, operating systems,
and computer hardware platforms, they do not normally involve themselves in the actual hardware
or software development. Because they often write user requests into technical specifications, the
systems analysts are the liaisons between vendors and information technology professionals. They
may be responsible for developing cost analysis, design considerations, staff impact amelioration,
and implementation time-lines.

A systems analyst may:


 Identify, understand and plan for organizational and human impacts of planned systems, and
ensure that new technical requirements are properly integrated with existing processes and skill
sets.
 Plan a system flow from the ground up.
 Interact with internal users and customers to learn and document requirements that are then
used to produce business requirements documents.
 Write technical requirements from a critical phase.
 Interact with designers to understand software limitations.

 Help programmers during system development, ex: provide use cases, flowcharts or
even database design.
 Perform system testing.
 Deploy the completed system.
 Document requirements or contribute to user manuals.
 Whenever a development process is conducted, the system analyst is responsible for designing
components and providing that information to the developer.

The system development life cycle (SDLC) is the traditional system development method that
organizations use for large-scale IT Projects. The SDLC is a structured framework that consists of
sequential processes by which information systems are developed.
1. System Investigation
2. System Analysis
3. System Design
4. Programming and Testing
5. Implementation
6. Operation and Maintenance
System analysts are IS Professionals who specialize in analyzing and designing information systems.

The description outlines information gathering techniques used by information systems analysts.
The description explains different systems analysis techniques used in the industry.
Introduction
Information Gathering is a very key part of the feasibility analysis process. Information gathering is
both an art and a science. It is a science because it requires a proper methodology and tools in order
to be effective. It is an art too, because it requires a sort of mental dexterity to achieve the best
results. In this article we will explore the various tools available for it, and which tool would be best
used depending on the situation.
Information Gathering Tools
There are no standard procedures defined when it comes to the gathering of information. However,
an important rule that must be followed is the following: information must be acquired accurately
and methodically, under the right conditions and with minimum interruption to the individual from
whom the information is sought.
Review of Procedural Forms

These are a very good starting point for gathering information. Procedural manuals can give a good
picture of the system to be studied: how the existing system works, what are its assumptions, what
information flows in, and what flows out, what dependency there is on the external system, if any.
Problems that one can encounter here are the lack of updated manuals or documents, or sometimes
not having the correct documents in one's possession. Hence, this is just one of the means of gathering
information. However, procedural forms can capture some important information, such as:
 Who are the users of the forms?
 Do the forms have all the necessary information?
 How readable and understandable are they?
 How does the form help other users to make better decisions?

On Site Visits and Observations


The main objective of an on site visit is to get as close to the real system as possible.
It is important that the person who visits on site is a keen observer and is knowledgeable about the
system and the normal activities that occur within the system. When a person observes a system,
the emphasis is more on observing how things are done rather than giving advice as to what is
wrong or right or passing judgment. There are various observation methods used:
Direct or Indirect:- The analyst can observe the subject or the system directly. E.g.: How do the
workers perform a job on the factory floor? An indirect form of observation is done using some
devices like video cameras or video tapes which would capture the information.
Structured or Unstructured:- In a structured observation the specific actions are recorded. E.g.:
Before a shopper buys a product, how many related products did he see before selecting the final
product? An unstructured method would record whatever actions would happen at a given point of
time.
Interviews and Questionnaires
The interview is a face-to-face interpersonal meeting designed to identify relations and verify
information to capture raw information as told by the interviewee.
The interview is a flexible tool and a better tool than a questionnaire for evaluating the validity of
the information that is being gathered. It is an art that requires experience in arranging the
interview, setting the stage, and establishing rapport. The questions must be phrased clearly, avoiding
misunderstandings, and the responses must be evaluated carefully. The disadvantage of this technique is the
preparation time it requires, and it is obviously restricted to only one person at a time, which means
that the whole process of gathering results will take far longer.

Questionnaire is a self-administered tool that is more economical and requires less skill to
administer than the interview. At any point in time, unlike the interview, feedback from many
respondents can be collected at the same time. Since questionnaires are usually self-administered, it
is critical that questions be clear and unambiguous. The comparison below summarises the
differences between the questionnaire and the interview, so that the benefits and shortcomings of
each are easy to see.

Questionnaire: Economical.
Interview: Less economical.

Questionnaire: Can be completed by many people at the same time.
Interview: Can be administered to only one person at a time.

Questionnaire: Chances of error or omission are fewer.
Interview: Can be error prone, since it depends upon the skill of the interviewer to frame the
questions and interpret the responses.

Questionnaire: Anonymity can be maintained, so the user is not prevented from giving a candid
opinion about an issue.
Interview: Anonymity is not maintained, so the user might feel forced to conceal a candid opinion on
an issue.

Questionnaire: Gives time to the respondents, so they can think and give considered opinions on an
issue.
Interview: May not give time to the respondents, so they may not get enough time to think and give
their opinion on an issue.
Types of Interview
Structured Interview:
The skill of the interviewer helps in getting the interviewee to respond and move to the next
question without diversion. The questions are presented to the interviewee with exactly the same
wording and in the same order.
Unstructured Interview:
In the unstructured interview the respondents are allowed to answer freely in their own words. The
responses are not forced. They are self-revealing and personal rather than general and superficial.
The interviewer encourages the respondents to talk freely. Unstructured interviews provide an
opportunity to delve more deeply into complex topics than is possible with surveys.
Types of Questionnaire
Fill-in-the-blanks Questions:
They seek specific responses.
Yes / No Questions:
They seek a single value, either Yes or No. There is no mixed response.
Ranking Scale Questions:
The respondents need to rank the responses on a certain scale. For example, you might be
asked to rate a service on a scale of 1 to 5.
Multiple-Choice Questions:
They ask the respondent to select from specific answer choices.

Summary: Asking the Users


Interviewing the users requires considerable skill and preparation. Interviews are a very rich but
costly and time-consuming communication channel.
Characteristics of interviews include:
1. Both open-ended and closed-ended questions may be employed where appropriate. Open-ended
questions aim to draw the user out into a longer explanation or opinion, and closed-ended questions
can be answered with yes, no, or a specific brief response.
2. The interviewing process must be planned, since managers at several levels may have to be
questioned.
3. The analyst has to prepare for each interview by establishing the position, activities, and
background of the interviewee. In a structured interview, the analyst relies on a prepared list of
questions. In an unstructured interview, the direction unfolds as the person being interviewed
answers the largely open-ended questions; follow-up questions are then asked.
4. During the interview, the analyst must convey a clear understanding of the purpose of the
interview, ask specific questions in terms understandable to the interviewee, listen rather than
anticipate answers, control the interview but be open to a suddenly discovered rich source of
information, and create a record of what is learned.
5. The analyst should analyze the results immediately following an interview session.
Questionnaires:
Questionnaires are an efficient way of asking many users at once, particularly users who are
dispersed in the field.
Characteristics of questionnaires:
1. Increasingly, questionnaires are distributed on diskettes or intranets.

2. An easy-to-fill-out questionnaire with concise and closed-ended questions is most likely to meet
with success. Simple yes/no questions and checklists are preferable.
3. Questionnaires have limitations as compared with interviews, in part because of their requisite
simplicity.
4. Generally, questionnaires are employed together with other, more powerful, means of eliciting
user requirements.
5. Group decision-making processes such as the Delphi method, brainstorming, and nominal group
techniques may also be used in search of creative new solutions. These techniques are sometimes
used during JAD sessions.

Session 2
SO 2: Explain different systems analysis techniques used in the industry.

Learning Outcomes (Assessment Criteria)
 The explanation identifies different techniques for describing data
structures.
 The explanation identifies different techniques for documenting business
process flows.
 The explanation identifies different techniques for documenting data
flows.
 The explanation identifies different analysis tools to assist with
documentation.

Explain different systems analysis techniques used in the industry.


Deriving the Requirements from the Analysis of the Business Area
Informational analysis of the business unit to be served by a system may be carried out with
Business Systems Planning. As well, the critical success factors methodology can be used to establish
the CSFs of the individual managers and support them with information. A method that will also help
establish the informational needs of an individual manager is decision analysis. It consists of the
following steps:
1. Identify the key decisions that a manager makes
2. Define the steps of the process whereby the manager makes these decisions
3. Define the information needed for the decision process
4. Establish what components of this information will be delivered by the information system and
what data will be needed to do so.
Experimenting with the System as it is being Developed
Experimenting with the system under development is the prototyping approach.
Characteristics of the prototype approach include:
1. An initial system version that embodies some of the requirements is built.
2. The users are able to define their requirements in an "as compared to something" manner -
which is much easier than defining them without such a comparison.
3. Prototype may be discarded after it has been put to such use, or it may evolve into the system to
be delivered.

Techniques and Tools of Structured Systems Analysis:
Data Flow Diagrams
The purpose of systems analysis is to devise a logical model of the proposed system. Using the
methodology known as structured systems analysis, we graphically describe the system as
interacting processes that transform input data into output data. These processes may then become
code modules as the system is further designed and programmed.

Data Flow Diagrams


The principal tool used in structured analysis is the data flow diagram (DFD), which graphically
shows the flow and transformation of data in the system. A DFD representation of a system is the
graphical depiction of what the system will do.
There are four symbols employed in a DFD. These include:
1. Process - circle
2. Data flow - line
3. Data store - parallel lines
4. External entity - square
Process
Characteristics of a process include:
1. A process (shown as a circle), as the term is used in structured analysis, transforms inputs into
outputs.
2. The name of the process very briefly explains what the process does.
3. Since the processes are the "active" components of the system, their names reflect this.
4. All processes in a DFD are numbered.
Data Flow
A data flow, shown as a line ending in an arrow, represents a flow of data into or out of a process.
Characteristics of a data flow:
1. Flows show the movement of data between all the components of a DFD.

2. Although during the initial analysis we may consider physical data flows in the existing system,
ultimate analysis will deal with their logical data content.
Data Store
A data store, shown as a pair of parallel lines, represents a repository of data maintained by the system.
Characteristics of a data store:
1. A repository may become a data file or a database component - such decisions are made during
system design.
2. Both data stores and data flows are data structures
3. Data store names are frequently in the plural, to distinguish them from data flows.

External Entity
An external entity is represented as a square. It is a source from which the system draws data or a
receiver of information from the system.
Characteristics of external entities
1. Entities are external to the system; they are beyond the system boundary.
2. External entities may also be other systems with which the given system interacts.
Using Data Flow Diagrams: From Context Diagram to Level-0
Our fundamental requirement concerning tools for systems development is that they lend
themselves to a process of progressive refinement, bringing in the details gradually. This helps
manage the complexity of the analysis process. DFD levelling is such a process of progressive
refinement. A context diagram shows only the system interfaces (inputs and outputs) flowing to or
from external entities. A context diagram shows the system as a single process bearing the system's
name, with its principal incoming and outgoing data flows, as they are known at this early stage of
analysis. If there are too many to show, a table may be employed to show inputs and outputs in two
columns.
Figure 15.9 (not reproduced here) shows a level-0 DFD of a simple order processing system.
Levelling Data Flow Diagrams
Levelling is the gradual refinement of DFD's, which brings more and more detail into the picture. In
doing so, it is necessary to ensure quality by following these basic principles of DFD levelling:
1. The top level DFD is the context diagram of the system. Subsequent levels are obtained by
progressively breaking down individual processes into separate DFDs.
2. Decomposition is governed by the balancing rule, which ensures consistency. The balancing rule
states that data flows coming into and leaving a process must correspond to those coming into and
leaving the lower level DFD that shows this process in more detail.

3. No more than 10 processes should be shown on a given DFD to prevent cluttering the diagram.
4. Not all processes have to be broken down - only those whose complexity warrants it.
5. The levelling process is not over until it's over. That is, as you introduce more detail into the
analysis by levelling, you will note omissions in higher level DFDs - and you will correct them.
6. The numbering rule for processes is as follows: use 1, 2, 3, and so on in the level-0 DFD. If you
decompose, say, process 3, the processes shown on process 3's level-1 DFD will be numbered 3.1,
3.2, 3.3, and so on.

Techniques and Tools of Structured Systems Analysis: Description of Entities


The entities that appear in the data flow diagrams are described in further detail in a data
dictionary. Data dictionaries have evolved into powerful tools as repositories of descriptions of all
project entities.

Description of Processes: Basic Logic Constructs


The primitive DFD processes (those that are not decomposed further) are specified by describing
their logic, that is, how they transform their inputs into outputs. The principal tool for this
specification is structured English, a form of pseudocode - code that describes the processing logic
to a human rather than to a computer.
Constructs used to express any processing logic include sequence, loop, and decision. An additional
construct represents multiple choice.
Sequence - specifies that one action be carried out after another
Loop - specifies that certain actions be carried out repeatedly while the given condition holds
Decision - specifies alternative courses of action, depending on whether a certain condition holds or
not. It expresses this thought: "If a given condition exists, one action should be taken; otherwise, the
alternative action should be taken."

Multiple Choice - frequently, a need arises to choose one of a set of several actions, based on a
condition that may have more than two outcomes. Though this situation may be expressed with
nested IF constructs, it is far easier to express it with the multiple selection (CASE) construct.
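As a minimal illustration (the process and data names are hypothetical), a structured English description of a primitive process might combine all four constructs as follows:

PROCESS 3.2 Price-Customer-Order
   FOR EACH order line in CUSTOMER-ORDER                    (loop)
      Compute line total = quantity x unit price            (sequence)
   END FOR
   IF the customer holds a trade account THEN               (decision)
      Apply the trade discount to the order total
   ELSE
      Apply no discount
   END IF
   SELECT CASE payment method                               (multiple choice)
      CASE "credit card": verify the card details
      CASE "account": check the customer's credit limit
      CASE "cash on delivery": flag the order for COD handling
   END SELECT
   Write the priced order to the ORDERS data store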
Description of Complex Decisions in Processes: Decision Tables and Trees
Decision tables and decision trees help us consider all the possible actions that need be taken under
a given set of circumstances in a complete and unambiguous fashion.
A decision table specifies in tabular form the actions to be carried out when given conditions exist.
To design a decision table:
1. Specify the name of the table as its heading, and insert a reference to it at the place in the process
description where the table applies
2. List all possible conditions in the condition stub
3. List all possible actions in the action stub
4. Fill in the condition entries by marking the presence (Y) or absence (N) of the conditions. The
number of rules, that is, entries in the right-hand side of the table equals the number of possible
combinations of conditions.
5. For every condition entry, mark with an X an action entry opposite the action(s) to be taken under
these circumstances.
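For illustration (the conditions and actions are hypothetical), a small credit-approval decision table built by following these steps might look like this:

Decision table: Order Credit Approval
Conditions                               1   2   3   4
  Customer has an approved account?      Y   Y   N   N
  Order is within the credit limit?      Y   N   Y   N
Actions
  Approve the order                      X
  Refer to the credit supervisor             X
  Request payment in advance                     X   X

With two conditions there are 2 x 2 = 4 rules, one for each possible combination of Y and N.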
Decision trees present conditions as branches of a tree, going from left to right.
Characteristics of decision trees
1. They are easier to read than are decision tables, but the greater the number of conditions, the
more tedious they are to draw up.
2. They are better for checking the completeness of the policy represented.
Data Dictionaries and the Description of Data
All the descriptions of the DFD entities are entered into the data dictionary of the project. We need
to plan what data and what relationships among data are stored in an organization's databases. The
principal vehicle for managing data is the data dictionary. The composition of each data store and
data flow appearing in the DFDs must be described in the dictionary. Generally, both data flows and
the records in the data stores are data structures, that is, they are composed of more elementary
data entities.
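As an illustration (the record and element names are hypothetical), one widely used data dictionary notation describes a data structure in terms of its elementary items, with "+" meaning composition, "{ }" meaning repetition, and "[ | ]" meaning selection of one alternative:

CUSTOMER-ORDER = Order-Number + Order-Date + Customer-ID + {Order-Line}
Order-Line     = Product-Code + Quantity + Unit-Price
Customer-ID    = [Account-Number | Cash-Customer-Code]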

Computer-aided software engineering (CASE) is the scientific application of a set of tools and
methods to a software system with the desired end result of high-quality, defect-free, and
maintainable software products. It also refers to methods for the development of information
systems together with automated tools that can be used in the software development process.

Components
1. Diagrammatic Tool
2. Information Repository
3. Interface Generators
4. Management Tools
CASE tools are a class of software that automate many of the activities involved in various life
cycle phases. For example, when establishing the functional requirements of a proposed application,
prototyping tools can be used to develop graphic models of application screens to assist end users to
visualize how an application will look after development. Subsequently, system designers can use
automated design tools to transform the prototyped functional requirements into detailed design
documents. Programmers can then use automated code generators to convert the design
documents into code. Automated tools can be used collectively, as mentioned, or individually. For
example, prototyping tools could be used to define application requirements that get passed to
design technicians who convert the requirements into detailed designs in a traditional manner
using flowcharts and narrative documents, without the assistance of automated design software.
Types of tools are:
 Business process engineering tools.
 Process modeling and management tool
 Project planning tools.
 Risk analysis tools
 Project management tools
 Requirement tracing tools
 Metrics management tools
 Documentation tools
 System software tools
 Quality assurance tools
 Database management tools
 Software configuration management tools
 Analysis and design tools
 Interface design and development tools
 Prototyping tools

US: 14930, NQF Level 4 Worth 3 Credits
Learning Unit 3 Demonstrate an understanding of the principles of
developing software for the internet

Unit Standard Purpose:
This unit standard is intended:
 to demonstrate fundamental knowledge of the areas covered
 for those working in, or entering the workplace in, the area of systems development
People credited with this unit standard are able to:
 Review the requirements for a web-based computer application
 Design a web-based computer application
 Present the design of a web-based computer application
The performance of all elements is to a standard that allows for further learning in this area.

Open.

Learning Assumed to be in Place:
The credit value of this unit is based on a person having the prior knowledge and skills to:
 demonstrate an understanding of fundamental English (at least NQF level 3)
 demonstrate PC competency skills (End User Computing unit standards up to level 3).

Session 1
SO 1: Explain the network issues related to Internet applications.

Learning Outcomes (Assessment Criteria):
 The explanation identifies that the Internet uses a session-less network protocol.
 The explanation lists the implications of session-less application development.
 The explanation identifies that the Internet uses limited band-width.
 The explanation lists the implications of slow band-width to application design.

The explanation identifies that the Internet uses a session-less network protocol.


PROTOCOLS
PROTOCOL - A set of rules, or language, used by computers and networking devices to communicate with
one another.

SERVICE - A facility used by computers and networking devices, such as file and print services.
Networking Protocols
TCP/IP - Abbreviation for Transmission Control Protocol/Internet Protocol, the suite of
communications protocols used to connect hosts on the Internet. TCP/IP uses several protocols, the
two main ones being TCP and IP. TCP/IP is built into the UNIX operating system and is used by the
Internet, making it the de facto standard for transmitting data over networks.

Introduction to Network Protocols


Just as diplomats use diplomatic protocols in their meetings, computers use network protocols to
communicate in computer networks. There are many network protocols in existence; TCP/IP is a
family of network protocols that are used for the Internet.
A network protocol is a standard written down on a piece of paper (or, more precisely, with a text
editor in a computer). The standards that are used for the Internet are
called Requests For Comment (RFC). RFCs are numbered from 1 onwards. There are more than
4,500 RFCs today. Many of them have become out of date, so only a handful of the first thousand
RFCs are still used today. The International Organization for Standardization (ISO) has standardized a
system of network protocols called ISO OSI. Another organization that issues communication standards is
the International Telecommunication Union (ITU) located in Geneva. The ITU was formerly known

as the CCITT and, being founded in 1865, is one of the oldest worldwide organizations (for
comparison, the Red Cross was founded in 1863). Some standards are also issued by
the Institute of Electrical and Electronics Engineers (IEEE). RFC, standards released
by RIPE (Réseaux IP Européens), and PKCS (Public Key Cryptography Standards) are freely available
on the Internet and are easy to get hold of. Other organizations (ISO, ITU, and so on) do not provide
their standards free of charge—you have to pay for them. If that presents a problem, then you have
to spend some time doing some library research. First of all, let's have a look at why network
communication is divided into several protocols. The answer is simple although this is a very
complex problem that reaches across many different professions. Most books concerning network
protocols explain the problem using a metaphor of two foreigners (or philosophers, doctors, and so
on) trying to communicate with each other. Each of the two can only communicate in his or her
respective language. In order for them to be able to communicate with each other, they need a
translator as shown in the following figure:

Figure 1.1: Three-layer communication architecture


The two foreigners exchange ideas, i.e., they communicate. But they only do so virtually. In reality,
they are both handing over information to their interpreters, who then transmit this information by
sending vibrations through the surrounding air with their vocal cords. Or if the parties are far away
from each other, the interpreters communicate over the phone; thus the information is physically
transmitted over phone lines. We can therefore talk about virtual communication in the horizontal
direction (philosophical communication, the shared language between interpreters, and electronic
signals transmitted via phone lines) and real communication in the vertical direction (foreigner-to-
interpreter and interpreter-to-phone). We can thus distinguish three levels of communication:
1. Between two foreigners
2. Between interpreters
3. Physical transmission of information using media (phone lines, sound waves, etc.)
Communication between the two foreigners and between the two interpreters is only virtual. In fact,
the only real communication happens between the foreigner and his or her interpreter. Even more
layers are used in computer networks. The number of layers depends on which system of network

protocols you choose to use. The system of network protocols is sometimes referred to as
the network model. You most commonly work with a system that uses the Internet, which is also
referred to as the TCP/IP family. In addition to TCP/IP, we will also come across the ISO OSI model
that was standardized by the ISO.

Comparison of TCP/IP and ISO OSI network models


The TCP/IP family uses four layers while ISO OSI uses seven layers as shown in the figure above. The
TCP/IP and ISO OSI systems differ from each other significantly, although they are very similar on the
network and transport layers. With some exceptions, such as SLIP and PPP, the TCP/IP family does
not deal with the link and physical layers. Therefore, even on the Internet, we use the link and
physical protocols of the ISO OSI model.
1.1 ISO OSI
Communication between two computers is shown in the following figure:

Seven-layer architecture of ISO OSI

The explanation lists the implications of session-less application development.


The stateless nature of HTTP has forced solution developers to find other methods of uniquely tracking a visitor through a web-based
application. Various methods of managing a visitor’s session have been proposed and used, but the
most popular method is through the use of unique session IDs. Unfortunately, in too many cases
organisations have incorrectly applied session ID management techniques that have left their
“secure” application open to abuse and possible hijacking. This document reviews the common

assumptions and flaws organisations have made and proposes methods to make their session
management more secure and robust.
Understanding the Situation
Most organisations now have substantial investments in their online Internet presences. For major
financial institutions and retailers, the Internet provides both a cost effective means of presenting
their services and products to customers, and a method of delivering a personalised 24-7 presence. In
almost all cases, the preferred method of delivering these services is over common HTTP. Due to the
way this protocol works, there is no inbuilt facility to uniquely identify or track a particular customer
(or session) within an application – thus the connection between the customer’s web-browser and
the organisation's web service is referred to as stateless. Therefore, organisations have been forced
to adopt custom methods of managing client sessions if they wish to maintain state. The most
common method of tracking a customer through a web site is by assigning a unique session ID – and
having this information transmitted back to the web server with every request. Unfortunately,
should an attacker guess or steal this session ID information, it is normally a trivial exercise to hijack
and manipulate another user’s active session. An important aspect of correctly managing state
information through session IDs relates directly to authentication processes. While it is possible to
insist that a client using an organisations web application provide authentication information for
each “restricted” page or data submission, it would soon become tedious and untenable. Thus
session IDs are not only used to follow clients throughout the web application, they are also used to
uniquely identify an authenticated user – thereby indirectly regulating access to site content or
information. The methods available to organisations for successfully managing sessions and
preventing hijacking-type attacks are largely dependent upon the answers to a number of critical
questions:
1. Where and how often are legitimate clients expected to utilise the web-based application?
2. At what stage does the organisation really need to manage the state of a client’s session?
3. What level of damage could be done to the legitimate client should an attacker be able to
impersonate and hijack their account?
4. How much time is someone likely to invest in breaking the session management method?
5. How will the application identify or respond to potential or real hijacking attempts?
6. What is the significance to application usability should it be necessary to use an encrypted
version of HTTP (HTTPS)?
7. What would be the cost to the organisation's reputation should information about a security
flaw in any session management be made public?

Finding answers to these questions will enable the organisation to evaluate the likelihood and
financial risk of an inappropriate or poorly implemented session management solution.
Maintaining State
Typically, the process of managing the state of a web-based client is through the use of session IDs.
Session IDs are used by the application to uniquely identify a client browser, while background
(server-side) processes are used to associate the session ID with a level of access. Thus, once a client
has successfully authenticated to the web application, the session ID can be used as a stored
authentication voucher so that the client does not have to retype their login information with each
page request.
An organisation's application developers have three methods available to them to both allocate and
receive session ID information:
 Session ID information embedded in the URL, which is received by the application through HTTP
GET requests when the client clicks on links embedded within a page.
 Session ID information stored within the fields of a form and submitted to the application.
Typically the session ID information would be embedded within the form as a hidden field and
submitted with the HTTP POST command.
 Through the use of cookies.
Each method has certain advantages and disadvantages, and one may be more appropriate than
another. Selection of one method over another is largely dependent upon the type of service the
web application is to deliver and the intended audience. Listed below is a more detailed analysis of
the three methods. It is important that an organisation's system developers understand the
limitations and security implications of each delivery mechanism.
URL Based Session ID's
Session ID information embedded in the URL, which is received by the application through HTTP GET
requests when the client clicks on links.
Example: http://www.example.com/news.asp?article=27781;sessionid=IE60012219
Advantages:
 Can be used even if the client web-browser has high security settings and has disabled the
use of cookies.
 Access to the information resource can be sent by the client to other users by providing
them with a copy of the URL.
 If the Session ID is to be permanently associated with the client-browser and their computer,
it is possible for the client to “Save as a favourite”.
 Depending upon the web browser type, URL information is commonly sent in the HTTP

REFERER field. This information can be used to ensure a site visitor has followed a particular
path within the web application, and subsequently used to identify some common forms of
attack.
Disadvantages:
 Any person using the same computer will be able to review the browser history file or stored
favourites and follow the same URL.
 URL information will be logged by intermediary systems such as firewalls and proxy servers.
Thus anyone with access to these logs could observe the URL and possibly use the
information in an attack.
 It is a trivial exercise for anyone to modify the URL and associated session ID information
within a standard web browser. Thus, the skills and equipment necessary to carry out the
attack are minimal – resulting in more frequent attacks.
 When a client navigates to a new web site, the URL containing the session information can
be sent to the new site via the HTTP REFERER field.
Hidden Post Fields
Session ID information stored within the fields of a form and submitted to the application. Typically
the session ID information would be embedded within the form as a hidden field and submitted with
the HTTP POST command.
Example: Embedded within the HTML of a page –
<FORM METHOD=POST ACTION=”/cgi-bin/news.pl”>
<INPUT TYPE=”hidden” NAME=”sessionid” VALUE=”IE60012219”>
<INPUT TYPE=”hidden” NAME=”allowed” VALUE=”true”>
<INPUT TYPE=”submit” NAME=”Read News Article”>
Advantages:
 Not as obvious as URL embedded session information, and consequently requires a slightly
higher skill level for an attacker to carry out any manipulation or hijacking.
 Allows a client to safely store or transmit URL information relating to the site without
providing access to their session information.
 Can also be used even if the client web-browser has high security settings and has disabled
the use of cookies.
Disadvantages:
 While it requires a slightly higher skill level to perform, attacks can be carried out using
commonly available tools such as Telnet or via personal proxy services.
 The web application page content tends to be more complex – relying upon embedded form

information, client-side scripting such as JavaScript, or embedded within active content such
as Macromedia Flash. In addition - pages tend to be larger, requiring more time for the client
to download and thus perceiving the site as slower and more unresponsive.
 Due to poor coding practices, a failure to check the submission type (i.e. GET or POST) at the
server side may allow the POST content to be reformed into a URL that could be submitted
via the HTTP GET method.
Cookies
Each time a client web browser accesses content from a particular domain or URL, if a cookie exists,
the client browser is expected to submit any relevant cookie information as part of the HTTP
request. Thus cookies can be used to preserve knowledge of the client browser across many pages
and over periods of time. Cookies can be constructed to contain expiry information and may last
beyond a single interactive session. Such cookies are referred to as “persistent cookies”, and are
stored on the client browser's hard drive in a location defined by the particular browser or operating
system (e.g. c:\documents and settings\clientname\cookies for Internet Explorer on Windows XP).
By omitting expiration information from a cookie, the client browser is expected to store the cookie
only in memory. These “session cookies” should be erased when the browser is closed.
Example: Within the plain text of the HTTP server response –
Set-Cookie: sessionID=”IE60012219”; path=”/”; domain=”www.example.com”; expires=”2003-06-01
00:00:00GMT”; version=0
Advantages:
 Careful use of persistent and session type cookies can be used to regulate access to the web
application over time.
 More options are available for controlling session ID timeouts.
 Session information is unlikely to be recorded by intermediary devices.
 Cookie functionality is built in to most browsers. Thus no special coding is required to ensure
session ID information is embedded within the pages served to the client browser.
Disadvantages:
 An increasingly common security precaution with web browsers is to disable cookie
functionality. Thus web applications dependent upon the cookie function will not work for
“security conscious” users.
 As persistent cookies exist as text files on the client system, they can easily be copied and used
on other systems. Depending on the host's file access permissions, other users of the host
may steal this information and impersonate the user.
 Cookies are limited in size, and are unsuitable for storing complex arrays of state

information.
 Cookies will be sent with every page and file requested by the browser within the domain
defined by the Set-Cookie header.

The Session ID
An important aspect of managing state within the web application is the “strength” of the session ID
itself. As the session ID is often used to track an authenticated user through the application,
organisations must be aware that this session ID must fulfil a particular set of criteria if it is not to be
compromised through predictive or brute-force type attacks. The two critical characteristics of a
good session ID are randomness and length.
Session ID Randomness
It is important that the session ID is unpredictable and the application utilises a strong method of
generating random ID’s. It is vital that a cryptographically strong algorithm is used to generate a
unique session ID for an authenticated user. Ideally the session ID should be a random value. Do not
use linear algorithms based upon predictable variables such as date, time and client IP address.
To this end, the session ID should fulfil the following criteria:
 It must look random – i.e. it should pass statistical tests of randomness.
 It must be unpredictable – i.e. it must be infeasible to predict what the next random value will
be, given complete knowledge of the computational algorithm or hardware generating the ID
and all previous ID’s.
 It cannot be reliably reproduced – i.e. if the ID generator is used twice with exactly the same
input criteria, the result will be an unrelated random ID.
Session ID Length
It is important that the session ID be of a sufficient length to make it infeasible that a brute force
method could be used to successfully derive a valid ID within a usable timeframe. Given current
processor and bandwidth limitations, session ID’s consisting of over 50 random characters in length
are recommended – but make them longer if the opportunity exists. The actual length of the session
ID is dependent upon a number of factors:
 Speed of connection – i.e. there is typically a big difference between Internet client, B2B and
internal network connections. While an Internet client will typically have less than a 512 kbps
connection speed, an internal user may be capable of connecting to the application server at 200
times faster. Thus an internal user could potentially obtain a valid session ID in 1/200th of the
time.

 Complexity of the ID – i.e. what values and characters are used within the session ID? Moving
from numeric values (0-9) to a case-sensitive alpha-numeric (a-z, A-Z, 0-9) range means that, for
the same address space, the session ID becomes much more difficult to predict. For example,
the numeric range of 000000-999999 could be covered by 0000-5BH7 using a case-sensitive
alpha-numeric character set.
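A minimal sketch of generating such a session ID, written here in Python and assuming the standard library's secrets module is available in the server environment (the length chosen is illustrative):

import secrets

def new_session_id(num_bytes: int = 32) -> str:
    # secrets draws on the operating system's cryptographically strong
    # random source, so the value is unpredictable and cannot be
    # reliably reproduced from previous IDs.
    # 32 random bytes encode to a 64-character hexadecimal string,
    # comfortably above the 50-character guideline given above.
    return secrets.token_hex(num_bytes)

session_id = new_session_id()   # issued once the user has authenticated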
Session Hijacking
As session ID’s are used to uniquely identify and track a web application user, any attacker who
obtains this unique identifier is potentially able to submit the same information and impersonate
someone else – this class of attack is commonly referred to as Session Hijacking. Given the inherent
stateless nature of the HTTP (and HTTPS) protocol, the process of masquerading as an alternative
user using a hijacked session ID is trivial. An attacker has at his disposal three methods for gaining
session ID information – observation, brute force and misdirection of trust.
Observation
By default all HTTP traffic crosses the wire in an unencrypted, plain text, mode. Thus, any device
with access to the same wire or shared network devices is capable of “sniffing” the traffic and
recording session ID information (not to mention user authentication information such as user
names and passwords). In addition, many perimeter devices automatically log aspects of HTTP traffic
– in particular the URL information. A simple security measure to prevent “sniffing” or logging of
confidential URL information is to use the encrypted form of HTTP – HTTPS.
Brute Force
If the session ID information is generated or presented in such a way as to be predictable, it is very
easy for an attacker to repeatedly attempt to guess a valid ID. Depending upon the randomness and
the length of the session ID, this process can take as little time as a few seconds. In ideal
circumstances, an attacker using a domestic DSL line can potentially conduct up to as many as 1000
session ID guesses per second. Thus it is very important to have a sufficiently complex and long
session ID to ensure that any likely brute forcing attack will take many hundreds of hours to predict.
A paper by David Endler on the processes involved in brute forcing session ID’s should be sought by
readers requiring background information on this process.

The explanation identifies that the Internet uses limited band-width.


Bandwidth in computer networking refers to the data rate supported by a network connection or
interface. Network bandwidth is not the only factor that contributes to the perceived speed of a
network. A lesser known element of network performance - latency - also plays an important role.
What Is Network Bandwidth?

Bandwidth is the primary measure of computer network speed. Virtually everyone knows the
bandwidth rating of their modem or their Internet service that is prominently advertised on network
products sold today. In networking, bandwidth represents the overall capacity of the connection.
The greater the capacity, the more likely that better performance will result. Bandwidth is the
amount of data that passes through a network connection over time as measured in bits per second
(bps). Bandwidth can refer to both actual and theoretical throughput, and it is important to
distinguish between the two. For example, a standard dial-up modem supports 56 Kbps of peak
bandwidth, but due to physical limitations of telephone lines and other factors, a dial-up connection
cannot support more than 53 Kbps of bandwidth (about 10% less than maximum) in practice.
Likewise, traditional Ethernet networks theoretically support 100 Mbps or 1000 Mbps of maximum
bandwidth, but this maximum cannot reasonably be achieved due to overhead in the computer
hardware and operating systems.
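As a rough worked example (the page size and connection speeds are illustrative, and latency and protocol overhead are ignored), the time to transfer a page can be estimated by dividing its size in bits by the usable data rate:

# Approximate transfer time for a 2-megabyte page over different links.
page_bits = 2 * 8 * 1_000_000   # 2 MB expressed in bits

for name, bps in [("53 Kbps dial-up", 53_000),
                  ("512 Kbps broadband", 512_000),
                  ("100 Mbps Ethernet", 100_000_000)]:
    print(f"{name}: about {page_bits / bps:.1f} seconds")

# Prints roughly 302, 31 and 0.2 seconds respectively.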
Broadband and Other High Bandwidth Connections
The term high bandwidth is sometimes used to distinguish faster broadband Internet connections
from traditional dial-up or cellular network speeds. Definitions vary, but high bandwidth connections
generally support data rates of minimum 64 Kbps (and usually 300 Kbps or higher). Broadband is just
one type of high bandwidth network communication method.
Measuring Network Bandwidth
Numerous tools exist for administrators to measure the bandwidth of network connections. On LANs
(local area networks), these tools include netperf and ttcp. On the Internet, numerous bandwidth
and speed test programs exist, most available for free online use. Even with these tools at your
disposal, bandwidth utilization is difficult to measure precisely as it varies over time depending on
the configuration of hardware and characteristics of software applications including how they are
being used.

The explanation lists the implications of slow band-width to application design


It is not a good user experience when people feel they are waiting a long time for a page to load
(perceived performance). Limited bandwidth is one of the reasons for slow connectivity on the web.
Also, your dial-up users (yes, dial-up still exists!) have to wait a long, long time for large images
(meaning their file size) or large amounts of coding (in your pages) to transfer to their computers and
load into their browsers. Lots of code equals a larger file size. And one more thing: if you have huge
amounts of code on a web page, some browsers can have difficulty displaying it, especially on a
slower computer, as they process the information.

Session 2
SO 2: Demonstrate an understanding of different user interface methods used for Internet applications.

Learning Outcomes (Assessment Criteria):
 The demonstration identifies different user interface methods used for Internet application development.
 The demonstration explains each of the user interface methods identified in 1, indicating the implication of each method.

The demonstration identifies different user interface methods used for Internet application
development. The demonstration explains each of the user interface methods identified in 1,
indicating the implication of each method.
ASP: Active Server Pages - Introduction

1. Presentation of Active Server Pages


ASP (Active Server Pages) is a standard developed by Microsoft in 1996 for the development of
interactive web applications (page with dynamic content). The content of an ASP webpage (with the
.asp extension) may differ depending on certain parameters (information stored in a database, the
user preferences, ...) while a classic webpage (with the .htm or .html extension) will display the same
information continuously. ASP is actually a technology, or more precisely a programming
environment, in which the interactions between the client browser and the web server, as well as
the connections to databases (via ADO, ActiveX Data Objects) and to COM components (Component
Object Model), are handled in the form of objects. ASP pages are executed on the server side (like
CGI and PHP scripts) and not on the client side (whereas scripts written in JavaScript, or Java applets,
run on the client side - in the browser). ASP can be integrated into an HTML web page using special tags that will
instruct the Web server that the code included within these tags must be interpreted and data
(usually HTML code) must be returned to the client browser. Thus, Active Server Pages is part of a 3-
tier architecture. This term means that a server that supports Active Server Pages can be used as an
intermediary between the client browser and a database, using the ADO (ActiveX Data Objects)

technology, which provides the elements necessary to initiate a connection to a database
management system and to handle data using the SQL language.

Characteristics of Active Server Pages


ASP was designed to operate on the Microsoft web server, Microsoft IIS (Internet Information
Server). This web server, developed by Microsoft in 1996, has the advantage of being free; it runs
under the Microsoft Windows NT operating system. However, this proprietary technology is now
available on other web servers, such as the Netscape FastTrack Server (through Chili!Software) and
Apache (with the Apache::ASP module), making it possible to create websites using ASP technology
on various platforms (Unix, Linux, PowerPC, ...).

The basic objects of Active Server Pages


Active Server Pages are made up of the objects that will be "processed" by the server. The seven
basic objects are:
 Application: it is the object representing the web application itself, that is to say, an object
containing all information shared by visitors connected to the online application.
 ObjectContext: it can control transactions with the Microsoft Transaction Server (MTS).
 Request: This object is used to retrieve information sent to the server in the HTTP request from
the client.
 Response: It is used to create and send the HTTP response to the client (browser).
 Server: it contains information specific to the web server.
 Session: it allows you to manage user sessions, that is to say to keep information from one page
to another.

 ASPError: this object retrieves and sets the errors encountered during the execution of ASP
scripts.
Active Server Pages (ASP) is the Microsoft solution for providing dynamic Web content. Actually, ASP
looks very similar to JSP; both use custom tags to implement business logic and text (HTML) for
invariant Web page parts.

2. Rich clients and browser-based clients


Explain the benefits and drawbacks of rich clients and browser-based clients as deployed in a
typical Java EE application.

Client Considerations
 Network Considerations
The client depends on the network, and the network is imperfect. Although the client appears to be
a stand-alone entity, it cannot be programmed as such because it is part of a distributed application.
Three aspects of the network:
o Latency is non-zero.
o Bandwidth is finite.
o The network is not always reliable.
A well-designed enterprise application must address these issues, starting with the client. The ideal
client connects to the server only when it has to, transmits only as much data as it needs to, and
works reasonably well when it cannot reach the server.
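A minimal sketch of that behaviour in Python, using the widely available requests library (the URL, timeout, and retry policy are illustrative assumptions, not part of any particular J2EE client):

import time
import requests

def fetch_with_retry(url, attempts=3, timeout=5):
    # Bound the time spent waiting on a slow or unreachable server,
    # and retry a few times before degrading gracefully.
    for attempt in range(1, attempts + 1):
        try:
            return requests.get(url, timeout=timeout)
        except requests.RequestException:
            if attempt == attempts:
                return None           # caller falls back to cached data
            time.sleep(2 ** attempt)  # back off before the next try

response = fetch_with_retry("http://www.example.com/data")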
 Security Considerations
Different networks have different security requirements, which constrain how clients connect to an
enterprise. For example, when clients connect over the Internet, they usually communicate with
servers through a firewall. The presence of a firewall that is not under your control limits the choices
of protocols the client can use. Most firewalls are configured to allow Hypertext Transfer Protocol
(HTTP) to pass across, but not Internet Inter-Orb Protocol (IIOP). This aspect of firewalls makes Web-
based services, which use HTTP, particularly attractive compared to RMI- or CORBA-based services,
which use IIOP. Security requirements also affect user authentication. When the client and server
are in the same security domain, as might be the case on a company intranet, authenticating a user
may be as simple as having the user log in only once to obtain access to the entire enterprise, a
scheme known as Single Sign On. When the client and server are in different security domains, as
would be the case over the Internet, a more elaborate scheme is required for single sign on, such as
that proposed by the Liberty Alliance.

 Platform Considerations
Every client platform's capabilities influence an application's design. For example, a browser client
cannot generate graphs depicting financial projections; it would need a server to render the graphs
as images, which it could download from the server. A programmable client, on the other hand,
could download financial data from a server and render graphs in its own interface.
Design Issues and Guidelines for Browser Clients
Browsers are the thinnest of clients; they display data to their users and rely on servers for
application functionality. From a deployment perspective, browser clients are attractive for a couple
of reasons. First, they require minimal updating. When an application changes, server-side code has
to change, but browsers are almost always unaffected. Second, they are ubiquitous. Almost every
computer has a Web browser and many mobile devices have a microbrowser.
 Presenting the User Interface
Browsers have a couple of strengths that make them viable enterprise application clients. First, they
offer a familiar environment. Browsers are widely deployed and used, and the interactions they offer
are fairly standard. This makes browsers popular, particularly with novice users. Second, browser
clients can be easy to implement. The markup languages that browsers use provide high-level
abstractions for how data is presented, leaving the mechanics of presentation and event-handling to
the browser.
The trade-off of using a simple markup language, however, is that markup languages allow only
limited interactivity. For example, HTML's tags permit presentations and interactions that make
sense only for hyperlinked documents. You can enhance HTML documents slightly using
technologies such as JavaScript in combination with other standards, such as Cascading Style Sheets
(CSS) and the Document Object Model (DOM). However, support for these documents, also known
as Dynamic HTML (DHTML) documents, is inconsistent across browsers, so creating a portable
DHTML-based client is difficult. Another, more significant cost of using browser clients is potentially
low responsiveness. The client depends on the server for presentation logic, so it must connect to
the server whenever its interface changes. Consequently, browser clients make many connections to
the server, which is a problem when latency is high. Furthermore, because the responses to a
browser intermingle presentation logic with data, they can be large, consuming substantial
bandwidth.
 Validating User Inputs
Consider an HTML form for completing an order, which includes fields for credit card information. A
browser cannot single-handedly validate this information, but it can certainly apply some simple
heuristics to determine whether the information is invalid. For example, it can check that the

cardholder name is not null, or that the credit card number has the right number of digits. When the
browser solves these obvious problems, it can pass the information to the server. The server can
deal with more esoteric tasks, such as checking that the credit card number really belongs to the
given cardholder or that the cardholder has enough credit. When using an HTML browser client, you
can use the JavaScript scripting language, whose syntax is close to that of the Java programming
language. Be aware that JavaScript implementations vary slightly from browser to browser; to
accommodate multiple types of browsers, use a subset of JavaScript that you know will work across
these browsers. (For more information, see the ECMAScript Language Specification.) It may help to
use JSP custom tags that autogenerate simple JavaScript that is known to be portable. Validating
user inputs with a browser does not necessarily improve the responsiveness of the interface.
Although the validation code allows the client to instantly report any errors it detects, the client
consumes more bandwidth because it must download the code in addition to an HTML form. For a
non-trivial form, the amount of validation code downloaded can be significant. To reduce download
time, you can place commonly-used validation functions in a separate source file and use
the script element's src attribute to reference this file. When a browser sees the src attribute, it will
cache the source file, so that the next time it encounters another page using the same source file, it
will not have to download it again. Also note that implementing browser validation logic will
duplicate some server-side validation logic. The EJB and EIS tiers should validate data regardless of
what the client does. Client-side validation is an optimization; it improves user experience and
decreases load, but you should NEVER rely on the client exclusively to enforce data consistency.
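As a minimal sketch (Python; the field names and rules are illustrative), the same simple heuristics can be mirrored on the server side, where they must be enforced regardless of what the browser does:

def basic_order_checks(form: dict) -> list:
    errors = []

    # The cardholder name must not be null or blank.
    if not form.get("cardholder_name", "").strip():
        errors.append("Cardholder name is required.")

    # The card number must consist of 13 to 19 digits; checking that the
    # number really belongs to the cardholder is left to the back end.
    digits = form.get("card_number", "").replace(" ", "")
    if not (digits.isdigit() and 13 <= len(digits) <= 19):
        errors.append("Card number must be 13 to 19 digits.")

    return errors

# An empty list means the obvious checks passed and the request can be
# handed on for deeper validation.
errors = basic_order_checks({"cardholder_name": "Jane Doe",
                             "card_number": "4111 1111 1111 1111"})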
 Communicating with the Server
Browser clients connect to a J2EE application over the Web, and hence they use HTTP as the
transport protocol. When using browser interfaces, users generally interact with an application by
clicking hyperlinked text or images, and completing and submitting forms. Browser clients translate
these gestures into HTTP requests for a Web server, since the server provides most, if not all, of an
application's functionality. User requests to retrieve data from the server normally map to HTTP GET
requests. The URLs of the requests sometimes include parameters in a query string that qualify what
data should be retrieved. User requests to update data on the server normally map to HTTP POST
requests. Each of these requests includes a MIME envelope of type application/x-www-form-
urlencoded, containing parameters for the update. After a server handles a client request, it must
send back an HTTP response; the response usually contains an HTML document. A J2EE application
should use JSP pages to generate HTML documents.

Session 3
SO 3: Demonstrate an awareness of the implications of copyright, ownership and royalties.

Learning Outcomes (Assessment Criteria):
 The demonstration shows an awareness of copyright issues related to Internet development.
 The demonstration shows an awareness of ownership issues related to Internet development.
 The demonstration shows an awareness of royalty issues related to Internet development.

The demonstration shows an awareness of copyright issues related to Internet development.


What is Copyright?
Copyright is a form of protection provided by the laws of the United States (title 17, U.S. Code) to
the authors of "original works of authorship" including literary, dramatic, musical, artistic,
architectural and certain other intellectual works.
***This protection is available to both published and unpublished works.
Material in the "public domain" is intellectual property that does not come under copyright laws.
Nearly all work created before the 20th century is not copyrighted.
What is Plagiarism?
Plagiarism is the act of stealing and passing off the ideas, words, or other intellectual property
produced by another as one's own. For example, using someone else's words in a research paper
without citing the source is an act of plagiarism.
History of copyright:
 First law enacted 1790.
 1976 copyright law followed international law, extending copyright for 50 years after death of
the author/creator.
 On October 27, 1998, President Clinton signed into law the "Sonny Bono Copyright Extension
Act," which extends the terms of almost all existing copyrights by 20 years, to provide copyrights
in the United States the same protection afforded in Europe. The basic term of copyright
protection, the life of the creator plus 50 years, has been increased to life plus 70 years. The
term for "work for hire" has been extended from 75 to 95 years.

How long does copyright last?


 Works created on or after 1 January 1978 - life of the author + 70 years
 Works made for hire - 95 years
The OWNER/manufacturer/creator [but not always the creator] of the work CAN:
 copy the work.
 create derivative works based upon the work.
 sell, rent, lease, lend copies of the work.
 publicly perform literary, musical, dramatic, motion picture and other audiovisual works.
 publicly perform sound recordings.
It is not necessary to have a notice of copyright (i.e.: © 1997 Jane Doe) for material to be copyright
protected in the U.S. Once something tangible is produced, text, graphics, music, video, etc., it is
automatically copyrighted. Sound recordings and some other property use other copyright
symbols. Anyone can use the copyright symbol on her or his original work.
The Internet and Copyright:
"The Internet has been characterized as the largest threat to copyright since its inception. The
Internet is awash in information, a lot of it with varying degrees of copyright protection. Copyrighted
works on the Net include news stories, software, novels, screenplays, graphics, pictures, Usenet
messages and even email. In fact, the frightening reality is that almost everything on the Net is
protected by copyright law." What is protected on the WWW?
The unique underlying design of a Web page and its contents, including:
 links
 original text
 graphics
 audio
 video
 html, vrml, other unique markup language sequences
 List of Web sites compiled by an individual or organization
 and all other unique elements that make up the original nature of the material.
When creating a Web page, you CAN:
 Link to other Web sites. [However, some individuals and organizations have specific
requirements when you link to their Web material. Check a site carefully to find such
restrictions. It is wise to ask permission. You need to cite source, as you are required to do in
a research paper, when quoting or paraphrasing material from other sources. How much
you quote is limited.]
 Use free graphics on your Web page. If the graphics are not advertised as "free" they should
not be copied without permission.

When creating a Web page, you CANNOT:
 Put the contents of another person's or organization's web site on your Web page
 Copy and paste information together from various Internet sources to create "your own"
document. [You CAN quote or paraphrase limited amounts, if you give credit to the original
source and the location of the source. This same principle applies to print sources, of
course.]
 Incorporate other people's electronic material, such as e-mail, in your own document,
without permission.
 Forward someone's e-mail to another recipient without permission
 Change the context of or edit someone else's digital correspondence in a way which changes
the meaning
 Copy and paste others' lists of resources on your own web page
 Copy and paste logos, icons, and other graphics from other web sites to your web page
(unless it is clearly advertised as "freeware." Shareware is not free). Some organizations are
happy to let you use their logos, with permission - it is free advertising. But they want to
know who is using it. They might not approve of all sites who want to use their logo.
Many aspects of the issue of copyright and the Internet are still not resolved. This information,
however, should serve as a useful guide to help you avoid violation of copyright rules and the pitfalls
of unknowingly plagiarizing someone else's material. When in doubt, please consult the official
copyright rules and guidelines.

The demonstration shows an awareness of ownership issues related to Internet development.


Who Owns The Internet?
Ownership of the internet is a complicated issue. In theory, the internet is owned by everyone that
uses it. Yet, in reality, certain entities exert more influence over the "mechanics" and regulation of
the internet than others. To understand the notion of ownership, one must understand the
backbone of the internet--Domain Name Systems. As the internet continues to become a larger
component of education, teachers need to be aware of the political, commercial, and public
influences affecting the internet. The internet opens the door to new horizons of curriculum
development, communications, research, and resources to support education. For educators, the
Domain Name System has the potential to provide direction and simplification of internet resources.
The following issues will be examined in this discussion of ownership:
Domain Name Systems
Control of Domain Name Systems

Conflicts and Inequities in the Domain Name System
Relevance to Education

In deciding which ownership or licensing arrangements will work for your business, keep in mind the
following rule: The more important content or technology is to your site, the more crucial it is that
you either get ownership or a broad license to use and possibly modify those materials. This is true
whether or not the Web developer has a valid reason to retain ownership. If it's essential that you
own the copyright in a database or other technology, don't enter into an agreement that won't
confer the rights you need.

The demonstration shows an awareness of royalty issues related to Internet development.


Royalty-free, or RF, refers to the right to use copyrighted material or intellectual property without
the need to pay royalties or license fees for each use or per volume sold, or some time period of use
or sales. Many computer industry standards, especially those developed and submitted by industry
consortiums or individual companies, involve royalties for the actual use of these standards. These
royalties are typically charged on a "per port" basis, where the manufacturer of end-user devices has
to pay a small fixed fee for each device sold, and also include a substantial annual fixed fee. With
millions of devices sold each year, the royalties can amount to several millions of dollars, which is a
significant burden for the manufacturer. Examples of such royalties-based standards include IEEE
1394, HDMI, and H.264/MPEG-4 AVC.

How Long to Expect Royalty Rates / Profit Sharing from Software Development
In deciding whether software royalties should continue indefinitely, one should figure out what happens
after version 1.0 of the application has been developed. If the software developers' input stops there (and
other developers end up taking the application further), perhaps the software developer should
expect royalties to stop at some point too. This becomes tricky when the client insists on owning the
adapted source code which you originally owned. The client should pay for the initial source code if
he wishes to take ownership of the final "adapted program." For example, if the client wishes to
enter a limited contract period of development, such as 3 to 5 years, what happens when the
contractual period expires? Therefore, software ownership must be clearly defined at the beginning
of the contract.

Session 4
SO 4: Explain version control and security issues related to Internet Applications.

Learning Outcomes (Assessment Criteria):
 The explanation identifies version control issues related to Internet development.
 The explanation identifies security issues related to Internet development, and explains ways of handling each.

The explanation identifies version control issues related to Internet development


Internet security is a branch of computer security specifically related to the Internet, often
involving browser security but also network security on a more general level as it applies to other
applications or operating systems on a whole. Its objective is to establish rules and measures to use
against attacks over the Internet. The Internet represents an insecure channel for exchanging
information leading to a high risk of intrusion or fraud, such as phishing. Different methods have
been used to protect the transfer of data, including encryption.

A JIT compiler runs after the program has started and compiles the code (usually bytecode or some kind of VM instructions) on the fly, or just in time, into a form that is usually faster, typically the host CPU's native instruction set. A JIT compiler has access to dynamic runtime information that a standard compiler does not, so it can make better optimizations, such as inlining functions that are used frequently. This is in contrast to a traditional compiler, which compiles all the code to machine language before the program is first run: conventional compilers build the whole program as an EXE file before the first time you run it. For newer-style programs, an assembly is generated containing pseudocode (p-code). Only after you execute the program on the operating system (for example, by double-clicking its icon) does the JIT compiler kick in and generate machine code (m-code) that the target processor will understand.
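
As an illustration only (not part of the unit standard), the short Python sketch below uses the third-party numba package, which is assumed to be installed, to show JIT compilation in practice: the first call to the function triggers compilation to native machine code, and later calls reuse that compiled code.

# Illustrative sketch of JIT compilation using the third-party numba package
# (assumed to be installed); any JIT-enabled runtime shows the same behaviour.
from numba import jit

@jit(nopython=True)              # compile to native machine code on first call
def sum_of_squares(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

print(sum_of_squares(10))        # first call compiles, then runs: prints 285
print(sum_of_squares(10))        # later calls reuse the compiled machine code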
Security token
Some online sites offer customers the ability to use a six-digit code which changes every 30-60 seconds on a security token. The security token has mathematical computations built in and manipulates numbers based on the current time built into the device. This means that every thirty seconds there is only a certain possible set of numbers which would be correct to validate access to the online account. The website that the user is logging into would be made aware of that device's serial number and would therefore know the computation and correct time built into the device, so it can verify that the number given is indeed one of the handful of six-digit numbers that would work in that given 30-60 second cycle. After the 30-60 seconds the device presents a new six-digit number which can be used to log in to the website.
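
A minimal sketch of the idea, written in Python with a hypothetical shared secret (a simplified illustration, not any specific vendor's algorithm): both the token device and the website derive the same six-digit code from the shared secret and the current 30-second time window, so the website can verify the code the user types in.

# Simplified time-based token sketch (illustration only; hypothetical secret).
import hashlib
import hmac
import time

def six_digit_code(shared_secret: bytes, period: int = 30) -> str:
    window = int(time.time() // period)                 # changes every 30 seconds
    digest = hmac.new(shared_secret, str(window).encode(), hashlib.sha1).digest()
    number = int.from_bytes(digest[:4], "big") % 1_000_000
    return f"{number:06d}"                              # zero-padded six-digit code

secret = b"device-serial-1234"      # known to both the token and the website
print(six_digit_code(secret))       # the website computes the same value to verify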
Firewalls
A firewall controls access between networks. It generally consists of gateways and filters which vary
from one firewall to another. Firewalls also screen network traffic and are able to block traffic that is dangerous. Firewalls can act as an intermediary server for connections such as SMTP and HTTP.
Role of firewalls in Internet security and web security
Firewalls impose restrictions on incoming and outgoing packets to and from private networks. All the
traffic, whether incoming or outgoing, must pass through the firewall; only authorized traffic is
allowed to pass through it. Firewalls create checkpoints between an internal private network and
the public Internet, also known as choke points. Firewalls can create choke points based on IP source
and TCP port number. They can also serve as the platform for IPsec. Using tunnel mode capability, firewalls can be used to implement VPNs. Firewalls can also limit network exposure by hiding the
internal network system and information from the public Internet.

The explanation identifies security issues related to Internet development, and explains ways of
handling each.
Among the dimensions that could all too easily be compromised are:
 Information Privacy. Threats here can range from public disclosures about an individual’s
medical or credit records, to identity theft, to the acquisition (and possibly the diffusion) of
classified information that could compromise national security.
 Provision of Services. Another vulnerability is the provision of services; attacks aimed specifically
at denial of service have been very effective in causing short-term disruption. Because of the
dependence on Internet service providers, denial of service attacks cause enormous backlogs in
communications and interfere with transactions in both business and government.
 Critical Roles and Missions. A more serious possibility is that the implementation of missions of
government agencies and departments or businesses could be affected by attacks that
undermine the functionality of the systems themselves. An alternative is what might be called
information tampering, something that could have serious physical consequences when virtual
systems control real world processes such as manufacturing of drugs, traffic flows, safety
systems, and the like.
 Electronic Commerce. Another area that could prove to be vulnerable in a variety of ways is e-
commerce. Breaches of security in financial transactions could result from (or indeed could
result in) various forms of cyber-crime including fraud. Moreover, the capacity to disrupt
information and communication systems on which companies depend provides enormous
opportunities for extortion. A growing number of corporations are becoming dependent upon
information security for both their ability to conduct business on a daily basis and also to
maintain credibility with their customer base. The banking and insurance industries immediately
come to mind in that regard. Additionally, incidents such as the Distributed Denial of Service
attack against the Internet in February demonstrated the fragility of e-commerce security at this
juncture. As the financial incentives drive more and more businesses into the realm of e-
commerce, the potential for malicious activity more than keeps pace. Whether from criminals,
terrorists, nations, unhappy customers or bored teenagers, e-commerce is a growing target of
opportunity.
 National Infrastructure. Advanced industrialized and post-industrialized societies depend on
a series of infrastructures – communications, transportation, power grids, etc. – that are
critical to the effective functioning of these societies. Damage or disruption to these
infrastructures could have enormous consequences, particularly as cascading effects are
taken into account. Further, as technology continues to evolve, the definition of just what
comprises the "Critical National Infrastructure" will become blurred. It can be anticipated
that systems that directly impact the daily functioning of technologically evolved societies
will become more and more transparent to the members of those societies. The effects of
these embedded systems will be taken for granted. Should those systems become
compromised, the impact will be as profound culturally as it is economically or from a
national security standpoint.
 Substantive Information. It is not only the medium that is vulnerable, but also the message itself.
The integrity and validity of certain kinds of information could all too easily be compromised
through the distribution of memes. A meme is broadly defined as a self-propagating or actively
contagious idea. [Lynch]. In this context, the notion of contagion is neutral. Nevertheless, it is
obvious that cyber-space is a wonderful domain for the propagation of "memetic viruses" that
replicate and in effect, drive out or overwhelm the existing information.[Matthews]. The
problem here is different from the other kinds of vulnerabilities that are related either to the
availability of the channels of communication themselves or to viruses and malicious code that
influence the instruction sets contained in software. Memetic viruses, in contrast, concern the
content of information. Ironically, although the study of memes has developed in the west, the
notion of manipulation of information and ideas to deceive and thereby influence decision-
making processes is central to Chinese and Russian notions of information warfare.

Personal Information
HTTP clients are often privy to large amounts of personal information (e.g. the user's name, location,
mail address, passwords, encryption keys, etc.), and SHOULD be very careful to prevent
unintentional leakage of this information via the HTTP protocol to other sources. We very strongly
recommend that a convenient interface be provided for the user to control dissemination of such
information, and that designers and implementors be particularly careful in this area. History shows
that errors in this area often create serious security and/or privacy problems and generate highly
adverse publicity for the implementor's company.
Abuse of Server Log Information
A server is in the position to save personal data about a user's requests which might identify their
reading patterns or subjects of interest. This information is clearly confidential in nature and its
handling can be constrained by law in certain countries. People using the HTTP protocol to provide
data are responsible for ensuring that such material is not distributed without the permission of any
individuals that are identifiable by the published results.

US: 14915, NQF Level 4 Worth 8 Credits
Learning Unit 4 Design a computer program according to given
specifications

Unit Standard Purpose:
This unit standard is intended:
 To provide a proficient knowledge of the areas covered.
 For those entering the workplace in the area of systems development.
Qualifying learners are able to:
 Apply the fundamental principles of procedural programming design techniques.
 Demonstrate an understanding of the features of a procedural computer program that will solve a given simple problem.
 Operate procedural computer program development tools.
The performance of all elements is to a standard that allows for further learning in this area.

Open.
Learning Assumed to be in Place:
The credit value of this unit is based on a person having the prior knowledge and skills to:
 be able to apply the principles of Computer Programming (SGB-ID = SDG001).

Session 1
Apply the fundamental principles of program design techniques to the given specification.
SO 1

Learning Outcomes (Assessment Criteria):
 The application includes the drawing of a program structure diagram for a given simple problem.
 The application includes the drawing of a decision tree for a given simple problem.
 The application includes the creation of a decision table for a given simple problem.
 The application allows previously-prepared design technique outputs to be read and desk-checked for accuracy.

The application includes the drawing of a program structure diagram for a given simple problem.
Key Design Considerations
Determine the Application Type
Choosing the appropriate application type is the key part of the process of designing an application.
Your choice is governed by your specific requirements and infrastructure limitations. Many
applications must support multiple types of client, and may make use of more than one of the basic
archetypes. This guide covers the following basic application types:
 Applications designed for mobile devices.
 Rich client applications designed to run primarily on a client PC.
 Rich Internet applications designed to be deployed from the Internet, which support rich UI and
media scenarios.
 Service applications designed to support communication between loosely coupled components.
 Web applications designed to run primarily on the server in fully connected scenarios.

Determine the Deployment Strategy


Your application may be deployed in a variety of environments, each with its own specific set of
constraints such as physical separation of components across different servers, a limitation on
networking protocols, firewall and router configurations, and more. Several common deployment
patterns exist, which describe the benefits and considerations for a range of distributed and non-
distributed scenarios. You must balance the requirements of the application with the appropriate
patterns that the hardware can support, and the constraints that the environment exerts on your
deployment options. These factors will influence your architecture design.

Determine the Appropriate Technologies
When choosing technologies for your application, the key factors to consider are the type of
application you are developing and your preferred options for application deployment topology and
architectural styles. Your choice of technologies will also be governed by organization policies,
infrastructure limitations, resource skills, and so on. You must compare the capabilities of the
technologies you choose against your application requirements, taking into account all of these
factors before making decisions.

A program structure diagram is a graphic representation of an algorithm, often used in the design phase of programming to work out the logical flow of a program.
Examples of Program Structure Diagram

Who can use them and how


 Software developers: Use Real-Time Object-Oriented Modeling (ROOM) notation to model data
communication within real-time applications.
 CASE designers: Use transitions to illustrate actor behavior in environments and toward other
actors.
 Software architects: Show the reactive behavior of architecture components in a closed system.

The application includes the drawing of a decision tree for a given simple problem.
A decision tree is a kind of flowchart -- a graphical representation of the process for making a decision or a series of decisions. Businesses use them to determine company policy, sometimes simply for choosing what that policy is, other times as a published tool for their employees. Individuals can use decision trees to help them make difficult decisions by reducing them to a series of simpler, or less emotionally laden, choices. Regardless of the context or type of decision, the structure of a decision tree remains the same. Put another way, a decision tree is a decision support tool that uses a tree-like graph or model of decisions and their possible consequences, including chance event outcomes, resource costs, and utility. It is one way to display an algorithm. To build one, first brainstorm each of the variables in the decision you want the decision tree to help you make, and write them down on a sheet of paper or in the margin of your main sheet.

Prioritize the variables you've listed and write them down in order. Depending on the kind of
decision you're making, you can prioritize the variables chronologically, by order of importance, or
both.

 For a simple work vehicle, you might prioritize your car decision trees as price, fuel efficiency,
model, style and options. If you were buying the car as a gift for your spouse, the priorities might
go style, model, options, price, and fuel efficiency.

Key Points:
Decision trees provide an effective method of Decision Making because they:
 Clearly lay out the problem so that all options can be challenged.
 Allow us to analyze fully the possible consequences of a decision.
 Provide a framework to quantify the values of outcomes and the probabilities of achieving them.
 Help us to make the best decisions on the basis of existing information and best guesses.
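
To make the idea concrete, the following small Python sketch expresses a hypothetical car-purchase decision tree as nested conditions, with the highest-priority variable checked first (the data and thresholds are invented purely for illustration).

# A tiny decision tree written as nested conditions (hypothetical example).
def recommend_car(price, fuel_efficiency_km_per_litre):
    if price <= 150_000:                          # first (highest-priority) decision
        if fuel_efficiency_km_per_litre >= 15:    # second decision on this branch
            return "Buy: affordable and economical"
        return "Consider: affordable but uneconomical"
    return "Reject: over budget"

print(recommend_car(120_000, 18))   # Buy: affordable and economical
print(recommend_car(200_000, 20))   # Reject: over budget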

The application includes the creation of a decision table for a given simple problem
Software engineering benefits
Decision tables, especially when coupled with the use of a domain-specific language, allow
developers and policy experts to work from the same information, the decision tables themselves.
Tools to render nested if statements from traditional programming languages into decision tables
can also be used as a debugging tool. Decision tables have proven to be easier to understand and
review than code, and have been used extensively and successfully to produce specifications for
complex systems.
Method
The Decision Table is divided into four quadrants.

The upper half lists the conditions being tested; the lower half lists the possible actions to be
taken. Each column represents a certain type of condition or rule.

Guidelines for constructing a decision table
Steps to Develop a Decision Table
To Construct a Decision Table:
1) Draw boxes for the top and bottom left quadrants.
2) List the conditions in the top, left quadrant. When possible, phrase the conditions as
questions that can be answered with a Y for yes and an N for no. This type of Decision Table is
known as a limited entry table. When a Decision Table requires more than two values for a
condition, it is known as an extended entry table.
3) List the possible actions in the bottom, left quadrant.
4) Count the possible values for each condition and multiply these together to determine how
many unique combinations of conditions are present. Draw one column in the top and bottom right
quadrants for each combination.
For example, if there are two conditions and the first condition has two possible values while the
second has three possible values, draw six (2 * 3) columns.
5) Enter all possible combinations of values in the columns in the top, right quadrant of the
table.
6) For each column (each unique combination of conditions), mark an X in the bottom, right
quadrant in the appropriate action row. The X marks the intersection between the required action
and each unique combination of condition values.
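
The same structure can be expressed directly in code. The Python sketch below shows a limited entry decision table for a hypothetical order-handling rule: each key in the lookup is one column (rule) of the table, the Y/N values answer the two conditions in order, and the value is the action marked for that rule.

# A limited entry decision table as a lookup (hypothetical business rule).
DECISION_TABLE = {
    # (customer is a member?, order over R500?): action
    ("Y", "Y"): "Apply 10% discount and free delivery",
    ("Y", "N"): "Apply 10% discount",
    ("N", "Y"): "Offer free delivery",
    ("N", "N"): "No special handling",
}

def handle_order(is_member, over_500):
    key = ("Y" if is_member else "N", "Y" if over_500 else "N")
    return DECISION_TABLE[key]

print(handle_order(True, False))    # Apply 10% discount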

The application allows previously-prepared design technique outputs to be read and desk-checked
for accuracy.
Determine the Quality Attributes
Quality attributes—such as security, performance, and usability—can be used to focus your thinking
on the critical problems that your design should solve. Depending on your requirements, you might
need to consider every quality attribute, or you might only need to consider a subset. For example,
every application design must consider security and performance, but not every design needs to
consider interoperability or scalability. Understand your requirements and deployment scenarios
first so that you know which quality attributes are important for your design. Keep in mind that
quality attributes may conflict; for example, security often requires a tradeoff against performance
or usability.

Session 2
Demonstrate an understanding of the features of a computer program.
SO 2

Learning Outcomes (Assessment Criteria):
 The demonstration includes the research of a problem in terms of inputs and outputs.
 The demonstration includes the features of a procedural computer program that will solve the given problem.
 The demonstration outlines why a batch or online program will be the best solution to the problem.

The description explains techniques used to research problems in terms of inputs and outputs.
An actor in the Unified Modeling Language (UML) specifies a role played by a user or any other system that interacts with the subject. An actor models a type of role played by an entity that interacts with the subject (e.g., by exchanging signals and data), but which is external to the subject. Actors may represent roles played by human users, external hardware, or other subjects. Note that
an actor does not necessarily represent a specific physical entity but merely a particular facet (i.e.,
“role”) of some entity that is relevant to the specification of its associated use cases. Thus, a single
physical instance may play the role of several different actors and, conversely, a given actor may be
played by multiple different instances. UML 2 does not permit associations between Actors. The use
of generalization/specialization relationship between actors is useful in modeling overlapping
behaviours between actors and does not violate this constraint since a generalization relation is not
a type of association. Actors interact with use cases.

So the following are the places where use case diagrams are used:
 Requirement analysis and high level design.
 Model the context of a system.
 Reverse engineering.
 Forward engineering.

A sequence diagram is a kind of interaction diagram that shows how processes operate with one
another and in what order. It is a construct of a Message Sequence Chart. A sequence diagram
shows object interactions arranged in time sequence. It depicts the objects and classes involved in
the scenario and the sequence of messages exchanged between the objects needed to carry out the
functionality of the scenario. Sequence diagrams are typically associated with use case realizations in
the Logical View of the system under development. Sequence diagrams are sometimes called event
diagrams, event scenarios, and timing diagrams. A sequence diagram shows, as parallel vertical
lines (lifelines), different processes or objects that live simultaneously, and, as horizontal arrows, the
messages exchanged between them, in the order in which they occur. This allows the specification
of simple runtime scenarios in a graphical manner.

The demonstration includes the features of a procedural computer program that will solve the
given problem.
A Structure Chart (SC) in software engineering and organizational theory is a chart which shows the
breakdown of a system to its lowest manageable levels. They are used in structured programming to
arrange program modules into a tree. Each module is represented by a box, which contains the
module's name. The tree structure visualizes the relationships between modules. A structure chart is
a top-down modular design tool, constructed of squares representing the different modules in
the system, and lines that connect them. The lines represent the connection and/or ownership between activities and sub-activities as they are used in the organization.

Top-down and bottom-up are both strategies of information processing and knowledge ordering,
used in a variety of fields including software, humanistic and scientific theories (systemic), and
management and organization. In practice, they can be seen as a style of thinking and teaching.
A top-down approach (also known as stepwise design or deductive reasoning and in many cases
used as a synonym of analysis or decomposition) is essentially the breaking down of a system to gain
insight into its compositional sub-systems. In a top-down approach an overview of the system is
formulated, specifying but not detailing any first-level subsystems. Each subsystem is then refined in
yet greater detail, sometimes in many additional subsystem levels, until the entire specification is
reduced to base elements. A top-down model is often specified with the assistance of "black boxes",
these make it easier to manipulate. However, black boxes may fail to elucidate elementary
mechanisms or be detailed enough to realistically validate the model. Top down approach starts
with the big picture. It breaks down from there into smaller segments.
A bottom-up approach (also known as inductive reasoning, and in many cases used as a synonym
of synthesis) is the piecing together of systems to give rise to grander systems, thus making the
original systems sub-systems of the emergent system. Bottom-up processing is a type of information
processing based on incoming data from the environment to form a perception. Information enters
the eyes in one direction (input), and is then turned into an image by the brain that can be
interpreted and recognized as a perception (output). In a bottom-up approach the individual base
elements of the system are first specified in great detail. These elements are then linked together to
form larger subsystems, which then in turn are linked, sometimes in many levels, until a complete
top-level system is formed. This strategy often resembles a "seed" model, whereby the beginnings
are small but eventually grow in complexity and completeness. However, "organic strategies" may
result in a tangle of elements and subsystems, developed in isolation and subject to local
optimization as opposed to meeting a global purpose.

Decision trees, decision tables
First, a decision tree takes advantage of the sequential structure of its branches, so that the order of checking conditions and executing actions is immediately noticeable. Second, conditions and actions of decision trees are found on some branches but not on others, which contrasts with decision tables, in which they are all part of the same table. Those conditions and actions that are critical are
connected directly to other conditions and actions, whereas those conditions that do not matter are
absent. In other words it does not have to be symmetrical. Third, compared with decision tables,
decision trees are more readily understood by others in the organization. Consequently, they are
more appropriate as a communication tool. Unbalanced Decision Tables are a compromise between
Decision Tables and Decision Trees. Decision Trees themselves can become quite complex with
enough conditions and actions. Unbalanced Decision Tables provide either a prioritized list of
conditions that lead to a set of actions, or a list of conditions that lead to a set of actions. The result
is often more concise than either traditional Decision Tables or Decision Trees.

The demonstration outlines why a batch or online program will be the best solution to the
problem
Batch processing is execution of a series of programs ("jobs") on a computer without manual
intervention. Jobs are set up so they can be run to completion without manual intervention. So, all
input data are preselected through scripts, command-line parameters, or job control language. This
is in contrast to "online" or interactive programs which prompt the user for such input. A program
takes a set of data files as input, processes the data, and produces a set of output data files. This
operating environment is termed "batch processing" because the input data are collected
into batches of files and are processed in batches by the program.

Batch processing has these benefits:


 It can shift the time of job processing to when the computing resources are less busy.
 It avoids idling the computing resources with minute-by-minute manual intervention and
supervision.
 By keeping a high overall rate of utilization, it amortizes the cost of the computer, especially an expensive one.
 It allows the system to use different priorities for batch and interactive work.
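
A minimal batch program might look like the Python sketch below (the file names and columns are hypothetical): all input is preselected up front, the job runs to completion without prompting the user, and a set of output files is produced.

# Sketch of a batch job: read preselected input files, process, write output files.
import csv

def run_batch(input_path="sales_batch.csv", output_path="sales_totals.csv"):
    totals = {}
    with open(input_path, newline="") as infile:
        for row in csv.DictReader(infile):        # expected columns: branch, amount
            totals[row["branch"]] = totals.get(row["branch"], 0.0) + float(row["amount"])
    with open(output_path, "w", newline="") as outfile:
        writer = csv.writer(outfile)
        writer.writerow(["branch", "total"])
        for branch, total in sorted(totals.items()):
            writer.writerow([branch, f"{total:.2f}"])

if __name__ == "__main__":
    run_batch()    # typically scheduled to run unattended, e.g. overnight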

Session 3
Demonstrate an understanding of how to document program designs using appropriate tools.
SO 3

Learning Outcomes (Assessment Criteria):
 The operation demonstrates the use of the editor of the development tools to produce procedural program source code.
 The operation includes the use of the syntax checker of the tools to check for syntax errors.
 The operation uses the tool to compile the procedural source code produced.

The operation demonstrates the use of the editor of the development tools to produce program
source code.
A programming tool or software development tool is a program or application that software developers use to create, debug, maintain, or otherwise support other programs and applications. The term usually refers to relatively simple programs that can be combined together to accomplish a task, much as one might use multiple hand tools to fix a physical object. An editor, sometimes called a text editor, is a program that enables you to create and edit text files. There are many different types of editors, but they all fall into two general categories:
 Line editors: a primitive form of editor that requires you to specify a specific line of text before you can make changes to it.
 Screen-oriented editors: also called full-screen editors, these editors enable you to modify any text that appears on the display screen by moving the cursor to the desired location.
EDITOR COMMANDS
Command Description
Ctrl-a Moves the cursor to the beginning of the current line.
[Home]
Ctrl-b Moves the cursor backwards one character.
[Left Arrow]
Ctrl-c Copies highlighted text (the current selection) to a temporary holding area.
Ctrl-d Deletes the character to the right of the cursor.
( [Delete] on Windows )
Ctrl-e Moves the cursor to the end of the current line.
[End]
Ctrl-f Find a sequence of characters. A prompt bar pops up for entering the
desired sequence of characters. An [Esc] aborts the find operation.
Ctrl-g Find the next occurrence of a sequence of characters, specified by the last FIND
or SEARCH.

The operation includes the use of the syntax checker of the tools to check for syntax errors.
In computer science, a syntax error refers to an error in the syntax of a sequence of characters or
tokens that is intended to be written in a particular programming language. For compiled languages
syntax errors occur strictly at compile-time. A program will not compile until all syntax errors are
corrected. For interpreted languages, however, not all syntax errors can be reliably detected until
run-time, and it is not necessarily simple to differentiate a syntax error from a semantic error; many
don't try at all. In 8-bit home computers that used a BASIC interpreter as their primary user interface,
the SYNTAX ERROR error message became somewhat notorious, as this was the response to any
command or user input the interpreter couldn't parse. A syntax error may also occur when an invalid
equation is entered into a calculator. This can be caused, for instance, by opening brackets without
closing them, or less commonly, entering several decimal points in one number.
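
For example, in Python (used here only as an illustration), an unmatched closing bracket stops the whole program from being parsed, so nothing runs until the error is corrected:

# n = 2 * (i + j))      <-- SyntaxError: unmatched ')' reported by the interpreter
i, j = 3, 4
n = 2 * (i + j)         # corrected statement
print(n)                # 14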

Syntax Checking Error Messages


These messages are output when the compiler is checking your COBOL program for syntax and
consistency. The descriptions for each message lists the text of each message, and where necessary
explain the error or problem that causes the message and gives advice on how to prevent it. The
severity is not listed, as the same message can be output with a different severity depending on the
setting of directives.
Format of Syntax Checking Error Messages
Syntax checking error messages have the following format:
Line-of-COBOL-code
nnnn-s code**** (mmmm)**
message
where the variables are:

nnnn The message number.

mmmm The page where the previous error occurred.

s One of the following severity codes:

U Unrecoverable. An unrecoverable error stops the COBOL system. These
messages are produced by the run-time system.
S Severe. You must correct the syntax error or inconsistency in your program.
Otherwise the compiler cannot generate code.
E Error. The compiler will make an assumption about what you meant. You
might want to correct your program in case the compiler's assumption is not
what you intended.
W Warning. This means there might be an error, although the program is
syntactically correct.
I Information. This draws your attention to something in the source code that
you might need to be aware of. It does not mean there is an error.

You can disable reporting of errors of E-level, W-level, and I-level, using the WARNING directive.
When the Compiler has finished, the total number of errors in each category is also output. You can
disregard some levels of errors and continue working. You can:
 Debug programs that have S-level, E-level, W-level, and I-level errors regardless of the setting of
the E run-time switch.
 Produce object code from intermediate code that has E-level, W-level, and I-level errors, but not
S-level errors.
 Run programs that have E-level, W-level, and I-level errors. If the E-level run-time switch is on,
which overrides the default setting, you can also run programs with S-level errors.
The error messages can contain variable information. This information is indicated as an item in
italics. For example:
User-name data-name not unique will have the name of the item that is not unique in place of the
text data-name.

List of Syntax Checking Error Messages


0001 Undefined error. Inform Technical Support
Your program contains an error which the COBOL system has failed to recognize.
Resolution:
Send Technical Support a copy of your source code to enable them to find the cause of the error.

0002 Unexpected SQL error. Inform Technical Support


Your program contains an SQL error which the COBOL system has failed to recognize.
Resolution:
Send Technical Support a copy of your source code to enable them to find the cause of the error.

0003 Illegal format: Literal
The sequence of characters forming a literal in your source code does not conform to the rules
governing the construction of such names. A literal can be either nonnumeric or numeric. If numeric
it can be up to 18 digits in length, but it must not contain more than one sign character or more than
one decimal point. A nonnumeric literal can consist of any allowable character in the computer's
character set up to a maximum of 160 characters in the Procedure Division, or 2048 characters in the
Data Division. A nonnumeric literal must be enclosed in quotation marks. If you have used a
figurative constant as the literal make sure that it is referenced by an allowable reserved word (such
as ZERO) which you have spelled correctly. A figurative constant and a numeric literal must not be
enclosed in quotation marks. You might also have used the wrong class of literal for the context of
the sentence. Alternatively, if you have used the figurative constant ALL in your code, you have not
coded it in accordance with the rules governing the use of this constant. ALL must be followed by a
nonnumeric literal and not by a numeric one.
Resolution:
Revise your code to comply with the above rules.
0004 Illegal character
Your program contains a character that is not part of the COBOL language set.
Resolution:
Replace the illegal character with a valid one.
0005 User-name user-name not unique
You have given the same user-name without qualification to more than one data item or procedure-
name in your source code.
Resolution:
You must rename or qualify the duplicated data items or procedure-names to ensure that
uniqueness of reference is achieved.

0007 specified in column 7 of otherwise blank line


The indicator area, column 7, contains an illegal character.
Resolution:
Legal characters are *, D, -, / or space.
0008 Unknown COPY file filename specified
A file with the name filename, specified in conjunction with a COPY statement, cannot be found.
0009 '.' missing
Your code does not contain a period in a place where one is expected by the rules of COBOL syntax.
Resolution:
Insert one at the relevant place.
0010 Word starts or is continued in wrong area of source line
The word starts either in area A when it should have started in area B, or in area B when it should
have started in area A.
0011 Reserved word missing or incorrectly used
You have either used a reserved word in a place where a user defined word is expected or you have
failed to use a reserved word where one is needed.
Resolution:
Alter the reserved word into a user defined one or insert a reserved word according to the context
of this message.

The operation uses the tool to compile the program source code produced.
User written code, standard functions, library functions.
Library Functions:
Q-Basic provides a number of built-in functions. These inbuilt functions of Q-Basic are called library functions. They are divided into string and numeric functions. LEFT$, LEN, MID$, LCASE$, etc. are examples of string functions, and ABS, SQR, INT, VAL, etc. are examples of numeric functions.
User Defined Functions:
While standard functions are pre-defined and provided for by QBasic, user-defined functions are
completely defined and customized by the programmer. User-defined functions return a single value
and are generally used to perform an operation that will be needed numerous times in a program. In
QBasic, user-defined functions are referred to as procedures; similar to SUB procedures except
function procedures return one value. Arguments may be sent into a function procedure for use in
the function operation, but the value returned by the function will not be included in the parameter
list. The value is returned in the function itself. Each user-defined function starts with FUNCTION FunctName (x, y, z) and ends with END FUNCTION. The code between these two lines is executed whenever the function is invoked from the main program, from another function or SUB, or from itself. FunctName is a name for your function (choose a descriptive one). Arguments (x, y, z) are
the variables passed to the function. The form of a function procedure is as follows:
FUNCTION name ( parameter list )
REM
REM body of function
REM
END FUNCTION
Subroutines and functions:
A subroutine (also called a "module") is a "mini-program" inside your program. In other words, it is a
collection of commands and can be executed anywhere in your program. To create a subroutine: go to the "Edit" menu, select "New Sub", enter a name for the subroutine, and type a list of commands between SUB and END SUB. (Topic 1.1, the SUB ... END SUB statement, provides detailed information.)
Functions:
A function is the same as a subroutine, except that it returns a value. Also, you must leave out the CALL
command. To return a value, set a variable with the same name as the function.
Local and global variables:
When a variable is declared within a main module or procedure without using the SHARED attribute, only code within that main module or procedure can access or change the value of that variable. This type of variable is called a LOCAL variable. When a variable is declared with the SHARED attribute in a main module, it can be used in a procedure without passing it as a parameter. Any SUB or FUNCTION procedure within the module can use this type of variable. This type of variable, which is available to all SUB and FUNCTION procedures within the module, is known as a GLOBAL variable.

Session 4
Apply fundamental principles of problem analysis.
SO 4

Learning Outcomes (Assessment Criteria):
 The application provides an appreciation of the steps and techniques of program maintenance.
 The application provides examples to demonstrate different problem analysis techniques (at least 2).
 The application uses logic flow techniques to solve given elementary problems.

The application provides an appreciation of the steps and techniques of program maintenance.
Program development can be described as a seven step process:
1. Understand the problem.
2. Plan the logic of the program.
3. Code the program using a structured high level computer language.
4. Using a compiler, translate the program into a machine language.
5. Test and debug the program.
6. Put the program into production.
7. Maintain and enhance the program.
Planning the logic of the program requires the development of algorithms. An algorithm is a finite,
ordered set of unambiguous steps that terminates with a solution to the problem. Human readable
representations such as flow charts and pseudo code are typically used to describe the steps of an
algorithm and the relationships among the steps. A flow chart is a graphical representation of the
steps and control structures used in an algorithm. A flow chart does not involve a particular
programming language, but rather uses a set of geometric symbols and flow control lines to describe
the algorithm. From a flowchart, a programmer can produce the high level code required to compile
an executable program. Initially, the standard for describing flow charts only specified the types of
shapes and lines used to produce a flow chart. The introduction of structured programming in the
1960s and 1970s brought with it the concept of Structured Flow Charts. In addition to a standard
set of symbols, structured flow charts specify conventions for linking the symbols together into a
complete flow chart. The structured programming paradigm evolved from the mathematically
proven concept that all problems can be solved using only three types of control structures:
Sequence, Decision (or Selection), Iterative (or looping).
The definition of structured flow charts used in this document and software further defines:

3 types of sequential structures: Process, Input/Output, and Subroutine Call
3 types of decision structures: Single Branch, Double Branch, and Case.
4 types of iterative structures: Test at the Top, Test at the Bottom, Counting, and User Controlled
Exit.
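
The three basic control structures can be illustrated in any structured language; the short Python sketch below (a hypothetical marks example) uses all three.

# Sequence, iteration and decision in one small example.
def grade_report(marks):
    total = 0                          # sequence: statements executed in order
    for mark in marks:                 # iteration: repeat for each input value
        total += mark
    average = total / len(marks)
    if average >= 50:                  # decision: choose between two branches
        return f"Pass (average {average:.1f})"
    return f"Fail (average {average:.1f})"

print(grade_report([60, 45, 70]))      # Pass (average 58.3)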

Description identifies different problem analysis techniques (at least 2).


Problem definition, Hierarchical (Top-Down) approach, Stepwise refinement, Modularity.
Top-down and bottom-up are both strategies of information processing and knowledge ordering,
used in a variety of fields including software, humanistic and scientific theories and management
and organization. In practice, they can be seen as a style of thinking and teaching. A top-
down approach (also known as stepwise design or deductive reasoning and in many cases used as a
synonym of analysis or decomposition) is essentially the breaking down of a system to gain insight
into its compositional sub-systems. In a top-down approach an overview of the system is
formulated, specifying but not detailing any first-level subsystems. Each subsystem is then refined in
yet greater detail, sometimes in many additional subsystem levels, until the entire specification is
reduced to base elements. A top-down model is often specified with the assistance of "black boxes",
these make it easier to manipulate. However, black boxes may fail to elucidate elementary
mechanisms or be detailed enough to realistically validate the model. Top down approach starts
with the big picture. It breaks down from there into smaller segments.
A bottom-up approach (also known as inductive reasoning, and in many cases used as a synonym
of synthesis) is the piecing together of systems to give rise to grander systems, thus making the
original systems sub-systems of the emergent system. Bottom-up processing is a type of information
processing based on incoming data from the environment to form a perception. Information enters
the eyes in one direction (input), and is then turned into an image by the brain that can be
interpreted and recognized as a perception (output). In a bottom-up approach the individual base
elements of the system are first specified in great detail. These elements are then linked together to
form larger subsystems, which then in turn are linked, sometimes in many levels, until a complete
top-level system is formed.

Product design and development


During the design and development of new products, designers and engineers rely on both a
bottom-up and top-down approach. The bottom-up approach is being utilized when off-the-shelf or
existing components are selected and integrated into the product. An example would include
selecting a particular fastener, such as a bolt, and designing the receiving components such that the fastener will fit properly. In a top-down approach, a custom fastener would be designed such that it would fit properly in the receiving components. For perspective, for a product with more restrictive
requirements (such as weight, geometry, safety, environment, etc.), such as a space-suit, a more
top-down approach is taken and almost everything is custom designed. However, when it's more
important to minimize cost and increase component availability, such as with manufacturing
equipment, a more bottom-up approach would be taken, and as many off-the-shelf components
(bolts, gears, bearings, etc.) would be selected as possible. In the latter case, the receiving housings
would be designed around the selected components.

Modularity
Modularity is designing a system that is divided into a set of functional units (named modules) that
can be composed into a larger application. A module represents a set of related concerns. It can
include a collection of related components, such as features, views, or business logic, and pieces of
infrastructure, such as services for logging or authenticating users. Modules are independent of one
another but can communicate with each other in a loosely coupled fashion. A composite application
exhibits modularity. For example, consider an online banking program. The user can access a variety
of functions, such as transferring money between accounts, paying bills, and updating personal
information from a single user interface (UI). However, behind the scenes, each of these functions is
a discrete module. These modules communicate with each other and with back-end systems such as
database servers. Application services integrate components within the different modules and
handle the communication with the user. The user sees an integrated view that looks like a single
application.
Figure 1 illustrates a design of a composite application with multiple modules.

Module composition
Why Choose a Modular Design?
The following scenarios describe why you might want to choose a modular design for your
application:
 Simplified modules. Properly defined modules have a high internal cohesion and loose coupling
between modules. The coupling between the modules should be through well-defined
interfaces.
 Developing and/or deploying modules independently. Modules can be developed, tested,
and/or deployed on independent schedules when modules are developed in a loosely coupled
way. By doing this, you can do the following:
o You can independently version modules.
o You can develop and test modules in isolation.
o You can have modules developed by different teams.
 Loading modules from different locations. A Windows Presentation Foundation (WPF)
application might retrieve modules from the Web, from the file system and/or from a database.
A Silverlight application might load modules from different XAP files. However, most of the time,
the modules come from one location; for example, there is a specific folder that contains the
modules or they are in the same XAP file.
 Minimizing download time. When the application is not on the user's local computer, you want
to minimize the time required to download the modules. To minimize the download time, only
download modules that are required to start-up the application. The rest are loaded and
initialized in the background or when they are required.
 Minimizing application start-up time. To get part of the application running as fast as possible,
only load and initialize the module(s) that are required to start the application.
 Loading modules based on rules. This allows you to only load modules that are applicable for a
specific role. An application might retrieve from a service the list of modules to load.
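
As a rough sketch of the idea (a hypothetical online-banking example in Python), each function below stands in for a separate module, and the composite application depends only on their well-defined interfaces, so one module can be replaced without affecting the others.

# Loosely coupled modules composed behind a simple interface (illustration only).
def transfer_module(source, target, amount):
    return f"Transferred {amount:.2f} from {source} to {target}"

def bill_payment_module(account, amount):
    return f"Paid bill of {amount:.2f} from {account}"

MODULES = {"transfer": transfer_module, "pay_bill": bill_payment_module}

def run(action, *args):
    return MODULES[action](*args)      # the shell only knows the module interface

print(run("transfer", "cheque", "savings", 250.00))
print(run("pay_bill", "cheque", 80.00))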

The application uses logic flow techniques to solve given elementary problems.
1. Planning the Solution

Two common ways of planning the solution to a problem are to draw a flowchart and to write
pseudocode, or possibly both. Essentially, a flowchart is a pictorial representation of a step-by-step
solution to a problem. It consists of arrows representing the direction the program takes and boxes
and other symbols representing actions. It is a map of what your program is going to do and how it is
going to do it. The American National Standards Institute (ANSI) has developed a standard set of
flowchart symbols. Figure 1 shows the symbols and how they might be used in a simple flowchart of
a common everyday act: preparing a letter for mailing. Pseudocode is an English-like nonstandard
language that lets you state your solution with more precision than you can in plain English but with
less precision than is required when using a formal programming language. Pseudocode permits you
to focus on the program logic without having to be concerned just yet about the precise syntax of a
particular programming language. However, pseudocode is not executable on the computer. We will
illustrate these later in this chapter, when we focus on language examples.

Coding the Program


As the programmer, your next step is to code the program-that is, to express your solution in a
programming language. You will translate the logic from the flowchart or pseudocode-or some other
tool-to a programming language. As we have already noted, a programming language is a set of rules
that provides a way of instructing the computer what operations to perform. There are many
programming languages: BASIC, COBOL, Pascal, FORTRAN, and C are some examples. Although
programming languages operate grammatically, somewhat like the English language, they are much
more precise. To get your program to work, you have to follow exactly the rules-the syntax-of the
language you are using. Of course, using the language correctly is no guarantee that your program
will work, any more than speaking grammatically correct English means you know what you are
talking about. The point is that correct use of the language is the required first step. Then your
coded program must be keyed, probably using a terminal or personal computer, in a form the
computer can understand.
One more note here: Programmers usually use a text editor, which is somewhat like a word
processing program, to create a file that contains the program. However, as a beginner, you will
probably want to write your program code on paper first.
2. Testing the Program
Some experts insist that a well-designed program can be written correctly the first time. In fact, they
assert that there are mathematical ways to prove that a program is correct. However, the
imperfections of the world are still with us, so most programmers get used to the idea that their
newly written programs probably have a few errors. This is a bit discouraging at first, since
programmers tend to be precise, careful, detail-oriented people who take pride in their work. Still,
there are many opportunities to introduce mistakes into programs, and you, just as those who have
gone before you, will probably find several of them. Eventually, after coding the program, you must
prepare to test it on the computer. This step involves these phases:
 Desk-checking. This phase, similar to proofreading, is sometimes avoided by the programmer
who is looking for a shortcut and is eager to run the program on the computer once it is written.
However, with careful desk-checking you may discover several errors and possibly save yourself
time in the long run. In desk-checking you simply sit down and mentally trace, or check, the logic
of the program to attempt to ensure that it is error-free and workable. Many organizations take
this phase a step further with a walkthrough, a process in which a group of programmers-your
peers-review your program and offer suggestions in a collegial way.
 Translating. A translator is a program that (1) checks the syntax of your program to make sure
the programming language was used correctly, giving you all the syntax-error messages, called
diagnostics, and (2) then translates your program into a form the computer can understand. A
by-product of the process is that the translator tells you if you have improperly used the
programming language in some way. These types of mistakes are called syntax errors. The
translator produces descriptive error messages. For instance, if in FORTRAN you mistakenly
write N=2 *(I+J))-which has two closing parentheses instead of one-you will get a message that
says, "UNMATCHED PARENTHESES." (Different translators may provide different wording for
error messages.) Programs are most commonly translated by a compiler. A compiler translates
your entire program at one time. The translation involves your original program, called a source
module, which is transformed by a compiler into an object module. Prewritten programs from a
system library may be added during the link/load phase, which results in a load module. The
load module can then be executed by the computer.
 Debugging. A term used extensively in programming, debugging means detecting, locating, and
correcting bugs (mistakes), usually by running the program. These bugs are logic errors, such as
telling a computer to repeat an operation but not telling it how to stop repeating. In this phase
you run the program using test data that you devise. You must plan the test data carefully to
make sure you test every part of the program.

US: 14908, NQF Level 4 Worth 6 Credits
Learning Unit 5 Demonstrate an understanding of testing IT systems
against given specifications

Unit Standard Purpose:
This unit standard is intended:
 To demonstrate fundamental knowledge of the areas covered.
 For those working in, or entering the workplace in, the area of Hardware, Infrastructure Maintenance and Support.
People credited with this unit standard are able to:
 Select an appropriate test procedure for the hardware and software
 Apply the test procedure to hardware and software
 Collect and record data from tests
The performance of all elements is to a standard that allows for further learning in this area.

Open.
Learning Assumed to be in Place:
The credit value of this unit is based on a person having the prior knowledge and skills to:
 Demonstrate an understanding of fundamental English (at least NQF level 3)
 Demonstrate PC competency skills (End User Computing unit standards up to level 3)

Session 1
Select an appropriate test procedure for the IT Systems to be tested.
SO 1

Learning Outcomes (Assessment Criteria):
 The selection clarifies the purpose of the test and the data required from it.
 The selection identifies any factors that may affect the choice of the test procedure.
 The selection identifies the resources available for the test procedure.
 The selection complies with all relevant regulatory, licensing, contractual and health and safety requirements.

The selection clarifies the purpose of the test and the data required from it.
What is the purpose of testing?
There are two fundamental purposes of testing: verifying procurement specifications and managing
risk. First, testing is about verifying that what was specified is what was delivered: it verifies that the
product (system) meets the functional, performance, design, and implementation requirements
identified in the procurement specifications. Second, testing is about managing risk for both the
acquiring agency and the system’s vendor/developer/integrator.
The testing program is used to identify when the work has been “completed” so that the contract
can be closed, the vendor paid, and the system shifted by the agency into the warranty and
maintenance phase of the project.
The purpose of system testing is to ensure that a system meets its specification and any non-
functional requirements (such as stability and throughput) that have been agreed with its users.

System testing
System testing of software or hardware is testing conducted on a complete, integrated system to
evaluate the system's compliance with its specified requirements. System testing falls within the
scope of black box testing, and as such, should require no knowledge of the inner design of the code
or logic.
As a rule, system testing takes, as its input, all of the "integrated" software components that have
passed integration testing and also the software system itself integrated with any applicable
hardware system(s).
The purpose of integration testing is to detect any inconsistencies between the software units that
are integrated together (called assemblages) or between any of the assemblages and the hardware.

System testing is a more limited type of testing; it seeks to detect defects both within the "inter-
assemblages" and also within the system as a whole.

What is involved in a hardware test program?


In general, the hardware test program can be broken into six phases as described below.
 Prototype testing – Prototype testing is generally required for “new” and custom product
development but may also apply to modified product depending on the nature and
complexity of the modifications. This tests the electrical, electronic, and operational
conformance during the early stages of product design.
 Design Approval Testing (DAT) – DAT is generally required for final pre-production product
testing and occurs after the prototype testing. The DAT should fully demonstrate that the ITS
device conforms to all of the requirements of the specifications.
 Factory Acceptance Testing (FAT) – FAT is typically the final phase of vendor inspection and
testing that is performed prior to shipment to the installation site. The FAT should
demonstrate conformance to the specifications in terms of functionality, serviceability,
performance and construction (including materials).
 Site Testing – Site testing includes pre-installation testing, initial site acceptance testing and
site integration testing. This tests for damage that may have occurred during shipment,
demonstrates that the device has been properly installed and that all mechanical and
electrical interfaces comply with requirements and other installed equipment at the
location, and verifies the device has been integrated with the overall central system.
 Burn-In and Observation Period Testing – A burn-in is normally a 30 to 60 day period during which a new device is operated and monitored for proper operation. If it fails during this period, repairs or replacements are made and the test resumes. The clock may start over at day one, or it may resume at the day count at which the device failed. An observation period test normally begins after successful completion of the final (acceptance) test and is similar to the burn-in test except that it applies to the entire system.
 Final Acceptance Testing – Final acceptance testing is the verification that all of the
purchased units are functioning according to the procurement specifications after an
extended period of operation. The procurement specifications should describe the time
frames and requirements for final acceptance. In general, final acceptance requires that all
devices be fully operational and that all deliverables (e.g., documentation, training) have
been completed.

What is involved in a software test program?
In general, the software test program can be broken into three phases as described below.
• Design Reviews – There are two major design reviews: (1) the preliminary design review, conducted after completion and submission of the high-level design documents, and (2) the detailed design (or critical) review, conducted after submission of the detailed design documents.
• Development Testing – For software, development testing includes prototype testing, unit testing, and software build integration testing. This testing is normally conducted at the software developer's facility (a minimal unit-testing sketch follows this list).
• Site Testing – Site testing includes hardware/software integration testing, subsystem testing, and system testing. Some integration testing can be conducted in a development environment that has been augmented to include representative system hardware elements (an integration facility), but it must be completed at the final installation site (i.e., the transportation management center) with communications connectivity to the field devices.
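As a minimal sketch of the unit-testing step within development testing, the Python example below checks a single unit in isolation by replacing its collaborator with a stub. The average_reading function and its fetch_reading collaborator are hypothetical; the technique, not the names, is the point.

```python
import unittest
from unittest import mock

# Hypothetical unit under test: averages sensor readings obtained through
# a collaborator that is stubbed out, so only this unit's logic is checked.
def average_reading(sensor_ids, fetch_reading):
    readings = [fetch_reading(sid) for sid in sensor_ids]
    return sum(readings) / len(readings)

class AverageReadingUnitTest(unittest.TestCase):
    def test_average_of_stubbed_readings(self):
        fetch = mock.Mock(side_effect=[10.0, 20.0, 30.0])
        self.assertAlmostEqual(average_reading(["a", "b", "c"], fetch), 20.0)
        self.assertEqual(fetch.call_count, 3)

if __name__ == "__main__":
    unittest.main()
```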
The selection identifies any factors that may affect the choice of the test procedure.
Here are some of the factors to consider, which can affect the test effort:
• While good project documentation is a positive factor, it is also true that having to produce detailed documentation, such as meticulously specified test cases, results in delays. During test execution, maintaining such detailed documentation requires a lot of effort, as does working with fragile test data that must be maintained or restored frequently during testing.
• Increasing the size of the product leads to increases in the size of the project and the project team. Increases in the project and project team increase the difficulty of predicting and managing them, which contributes to the disproportionate rate of collapse of large projects.
• The life cycle itself is an influential process factor, as the V-model tends to be more fragile in the face of late change, while incremental models tend to have high regression-testing costs.
• Process maturity, including test process maturity, is another factor, especially the implication that mature processes involve carefully managing change in the middle and end of the project, which reduces test execution cost.
• Time pressure is another factor to be considered. Pressure should not be an excuse to take unwarranted risks. However, it is a reason to make careful, considered decisions and to plan and re-plan intelligently throughout the process.
• People execute the process, and people factors are as important as or more important than any other. Important people factors include the skills of the individuals and the team as a whole, and the alignment of those skills with the project's needs. Even on a project with many troubling issues, an excellent team can often make good things happen on the project and in testing.
• Since a project team is a team, solid relationships, reliable execution of agreed-upon commitments and responsibilities, and a determination to work together towards a common goal are important. This is especially important for testing, where so much of what we test, use, and produce either comes from, relies upon or goes to people outside the testing group. Because of the importance of trusting relationships and the lengthy learning curves involved in software and system engineering, the stability of the project team is an important people factor, too.
• The test results themselves affect the total amount of test effort during test execution. The delivery of good-quality software at the start of test execution and quick, solid defect fixes during test execution prevent delays in the test execution process. A defect, once identified, should not have to go through multiple cycles of fix/retest/re-open, at least not if the initial estimate is to be held to.
The selection identifies the resources available for the test procedure.
Resource Types
In system testing, various operations are necessary for detecting failures. In configuration testing, operations such as software and/or hardware installation and changes of configuration (such as preferences or properties) can produce failures. In stress testing, operations such as reading huge data files and heavily loaded use of the network can produce failures. Considering the various operations to be done in a system test, resources can be classified into four types: module, data, storage, and semaphore.
Each type of resource has the following specific attributes. A change of an attribute causes a change in the behavior of the SUT (system under test).
• Module resource: DLLs and so on, which have attributes such as version, date, and so on.
• Data resource: Such as the registry and files, which have attributes that refer to content.
• Storage resource: Buffers, memory, disk, and so on, which have attributes such as size, capacity, and so on.
• Semaphore resource: Such as Ethernet and DB records, which have attributes such as collision, lock, and so on.
It should be noted that a change in an attribute of a module resource such as "version" and "date" is
not a change in the attribute itself, but a change in the attribute as a result of the replacement of the
resource. The "date/time" of a DLL, for example, should not be changed by modifying the
information in directory entries, but by replacing it with a DLL created on a different date and time.
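When planning configuration or stress tests, it can help to record the resources and the attributes to be varied in a simple structure. The sketch below assumes the four resource types described above; the resource names and attribute values are examples only, not drawn from any particular system.

```python
from dataclasses import dataclass, field

@dataclass
class Resource:
    """A system-test resource and the attributes whose change can alter SUT behaviour."""
    name: str
    kind: str                      # "module", "data", "storage" or "semaphore"
    attributes: dict = field(default_factory=dict)

test_resources = [
    Resource("report.dll", "module", {"version": "2.1", "date": "2023-04-01"}),
    Resource("settings.ini", "data", {"content": "default preferences"}),
    Resource("upload buffer", "storage", {"size_mb": 16}),
    Resource("orders table lock", "semaphore", {"lock": "row-level"}),
]

# List every resource/attribute pair that a configuration test should vary.
for r in test_resources:
    for attr, value in r.attributes.items():
        print(f"{r.kind:10} {r.name:20} {attr} = {value}")
```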
The Resource Allocation Process
First, we defined the new scenario and the problems we faced. We noticed that the differences among the projects were related to effort, not to technology or to any other project characteristic, and that the major problem was estimating the testing team size and the length of the testing project.
We decided that the projects under test would be classified by effort. We then created the following classifications (a small sketch of this rule appears after the list):
• Projects up to seven PMs: the testing team would apply the "Free Allocation" system.
• Projects from eight to twelve PMs: the testing team would apply the "Intermediate Allocation" system.
• Projects of more than twelve PMs: the testing team would apply the "Full Allocation" system.
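As a minimal sketch, the effort-based classification above could be captured in a helper like the following. The thresholds come directly from the classification; the function name and the numeric effort input are illustrative assumptions.

```python
def allocation_system(effort_in_pm):
    """Map a project's estimated effort (in PMs) to a resource-allocation system."""
    if effort_in_pm <= 7:
        return "Free Allocation"
    if effort_in_pm <= 12:
        return "Intermediate Allocation"
    return "Full Allocation"

print(allocation_system(5))   # Free Allocation
print(allocation_system(10))  # Intermediate Allocation
print(allocation_system(15))  # Full Allocation
```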
Based on this classification, we wrote the "Resource Allocation Process" containing a new approach
to our testing methodology. The systems test group believes that the questions will be answered
when applying the Resource Allocation Process to projects. That means cost-sensitive projects still
can benefit from an independent testing team involved throughout the project lifecycle.
The process was presented to all groups (development, sustaining, and documentation) at my
organization and the development managers decided it should be applied in the next projects.
The selection complies with all relevant regulatory, licensing, contractual and health and safety
requirements.
Software developers often distinguish acceptance testing by the system provider from acceptance
testing by the customer (the user or client) prior to accepting transfer of ownership. In the case of
software, acceptance testing performed by the customer is known as user acceptance testing (UAT),
end-user testing, site (acceptance) testing, or field (acceptance) testing.
Self-regulatory organizations, certain alternative trading systems, plan processors, and certain
exempt clearing agencies would be required to carefully design, develop, test, maintain, and surveil
systems that are integral to their operations. The proposed rules would require them to ensure their
core technology meets certain standards, conduct business continuity testing, and provide certain
notifications in the event of systems disruptions and other events.
Software system safety
In software engineering, software system safety optimizes system safety in the design, development,
use, and maintenance of software systems and their integration with safety-critical hardware
systems in an operational environment.
What a safety-critical software system is
• A safety-critical software system is a computer system whose failure or malfunction may severely harm people's lives, the environment or equipment.
• Some fields and examples:
o Medicine (patient monitors)
o Nuclear engineering (nuclear power station control)
o Transport (railway systems, cars' anti-lock brakes)
o Aviation (control systems: fly-by-wire)
o Aerospace (NASA space shuttle)
o Civil engineering (structural calculations)
o Military devices, etc.
Testing safety-critical software systems
• Basic idea: identify hazards as early as possible in the development life cycle and try to reduce them as much as possible to an acceptable level.
• Remember: always test software against specifications! (A minimal example follows this list.)
• Independent verification is required.
• If formal methods have been used, then formal mathematical proof is a verification activity.
• Techniques already used for typical systems:
o White box testing
o Black box testing
o Reviews
o Static analysis
o Dynamic analysis and coverage
• Specific procedures and techniques from safety engineering:
o Probabilistic risk assessment (PRA)
o Failure modes and effects analysis (FMEA)
o Fault tree analysis (FTA)
o Failure mode, effects and criticality analysis (FMECA)
o Hazard and operability analysis (HAZOP)
o Hazard and risk analysis
o Cause and effect diagrams (aka fishbone diagrams or Ishikawa diagrams)
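As a small, generic illustration of testing safety-related software against its specification, the sketch below exercises a hypothetical dose-limiting routine at and around its specified boundary. The limit value and the function are invented for the example; real safety-critical verification would be far more rigorous and independently performed.

```python
import unittest

MAX_DOSE_ML = 5.0  # hypothetical specified safety limit

def clamp_dose(requested_ml):
    """Never dispense more than the specified maximum dose."""
    return min(requested_ml, MAX_DOSE_ML)

class DoseLimitSpecTest(unittest.TestCase):
    def test_at_the_boundary(self):
        self.assertEqual(clamp_dose(5.0), 5.0)

    def test_just_above_the_boundary(self):
        self.assertEqual(clamp_dose(5.1), MAX_DOSE_ML)

    def test_below_the_boundary_is_unchanged(self):
        self.assertEqual(clamp_dose(2.5), 2.5)

if __name__ == "__main__":
    unittest.main()
```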
Session 2
SO 2: Apply the test procedure to the IT Systems to be tested.

Learning Outcomes (Assessment Criteria):
• The application ensures correct preparation of the test procedure.
• The application tests the hardware using the selected test procedure.
• The application tests the software using the selected test procedure.
• The application ensures that all performance parameters and operational requirements are tested.
• The application identifies any problems with the test procedure and takes appropriate action.
• The application complies with all relevant regulatory, licensing, contractual and health and safety requirements.
Apply the test procedure to the IT Systems to be tested
Test Approach
Test Approach Description
The general methodology/approach for PeopleSoft Upgrade System Testing at ITS will involve the
following steps:
1. Both minor and major upgrades will test all the critical business functions at least once.
2. Confirm the key contact list (lead matrix) is updated.
3. Review reports needed for tracking System Testing, update if necessary.
4. Review the issue management process, update if necessary.
5. Confirm the test conditions, cycles, and plans based on functional requirements are updated.
Plans should include conditions for testing users' security access.
6. Ensure test conditions are grouped by business process. There should be a separate test plan
that follows a transaction through the entire system versus just at the entry or exit point. (End-
to-end test plan).
7. Confirm the interface listing worksheet is updated. (For HE - This worksheet lists all external
interfaces, which are files that are received or sent, outside of HRMS and SA.) This should
include all database links and the access granted (i.e. Read Only, Read/Write, etc).
8. Confirm the critical path business processes are updated.
9. Identify any risks that may jeopardize the schedule completion.
10. Review with Central Offices; obtain Acknowledgment of System Test Plan(s).
For each Phase of System Testing:
11. Execute the test condition.
12. Check the output against the expected results.
13. Evaluate and document any unexpected results. Utilize the testing incidents database (a minimal recording sketch follows these steps).
14. Make sure that any required corrections are migrated and re-tested.
15. Make sure that final testing components (conditions, input, and expected results) are accurate, complete and documented in such a way as to make them repeatable and reusable.
16. Review and obtain Acknowledgment of System Test results where appropriate (i.e. new
functionality).
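Steps 11 to 13 (execute the condition, check the output against the expected results, and document anything unexpected) can be illustrated with a small data-driven harness. The sketch below is generic and is not part of the PeopleSoft upgrade tooling: the apply_discount function, the test conditions and the CSV incident log are all invented for the example.

```python
import csv

# Hypothetical function under test, plus a handful of test conditions.
# Each condition carries its inputs and the expected output.
def apply_discount(amount, percent):
    return round(amount * (1 - percent / 100), 2)

conditions = [
    {"id": "TC-01", "amount": 100.0, "percent": 10, "expected": 90.0},
    {"id": "TC-02", "amount": 250.0, "percent": 0,  "expected": 250.0},
    {"id": "TC-03", "amount": 80.0,  "percent": 25, "expected": 60.0},
]

incidents = []
for c in conditions:
    actual = apply_discount(c["amount"], c["percent"])   # step 11: execute
    if actual != c["expected"]:                           # step 12: compare
        # Step 13: document any unexpected result for the incidents log.
        incidents.append({"condition": c["id"],
                          "expected": c["expected"],
                          "actual": actual})

with open("testing_incidents.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["condition", "expected", "actual"])
    writer.writeheader()
    writer.writerows(incidents)

print(f"{len(conditions)} conditions executed, {len(incidents)} incident(s) recorded")
```

Keeping conditions, inputs and expected results in data form like this also supports step 15, since the same set can simply be re-run after each migration.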
Test Phases
There are separate phases of testing which are designated on the timeline within the overall System
Test phase. Each phase may include several types of testing. The level of testing for an upgrade is
more condensed and may not be as time-consuming as for an implementation. When possible,
some of these phases may be done concurrently.
The following is a list of the test phases included in the overall System Test timeframe. However,
some of these phases are not covered in detail in this document. There are separate Approach
documents for those noted. The types of testing listed in SIT1 and SIT2 are described in the table in
the following section.
• System Integration Testing I (SIT1)
This phase includes integration, system, user testing, security, some end-to-end, and regression test types. All RTP conditions with a criticality of A, and those with a B and a High or Medium level of change, will be tested. Refer to the Upgrade Prioritization Approach Document for details on the RTP prioritization. The testers may include Developers, CPU Business Analysts, ITS Help Desk, Business Process Owners and End Users.
• System Integration Testing II (SIT2)
This phase includes regression testing (all failed SIT1 conditions), end-to-end, batch, system, integration, and user testing. All RTP conditions that are Low B's and C's will be tested, per module discretion. Refer to the Upgrade Prioritization Approach Document for details on the RTP prioritization. The testers may include Developers, CPU Business Analysts, ITS Help Desk, Business Process Owners, and End Users. Note: There is a separate Approach document for Batch testing.
• Parallel Testing
This phase validates that all processes work together to support the business functions and ensure successful payroll runs. There is a separate Approach document for this testing effort.
• Load Testing
This phase validates that critical functions will meet production performance requirements during peak transaction volumes (a small illustrative sketch follows this list). There is a separate Approach document for this testing effort.
• Model Office
This phase gives end users an opportunity to log into the system, perform their typical tasks on the new system, verify their security access, validate their procedures and get comfortable with the new system. This phase is scheduled after the hard freeze, and if additional defects are found during this phase, migrations will require additional levels of sign-off. Participation in this phase is at the discretion of each module. There is a separate Approach document for this testing effort.
• Infrastructure/Gateway Testing
There is a separate Approach document for this testing effort.
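As a very small illustration of the load-testing idea, the sketch below fires a batch of concurrent calls at a stand-in transaction function and checks the slowest response time against a target. The function, worker counts and target value are assumptions; a real PeopleSoft upgrade load test would use dedicated load-testing tooling and production-like volumes.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def post_transaction(i):
    """Stand-in for a critical business function; returns its elapsed time."""
    start = time.perf_counter()
    time.sleep(0.01)          # simulated processing work
    return time.perf_counter() - start

TARGET_SECONDS = 0.5          # hypothetical peak-volume performance requirement

with ThreadPoolExecutor(max_workers=20) as pool:
    durations = list(pool.map(post_transaction, range(100)))

slowest = max(durations)
print(f"slowest of {len(durations)} transactions: {slowest:.3f}s")
assert slowest <= TARGET_SECONDS, "performance requirement not met under load"
```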
Testing Type Descriptions

The following is a list of test types that will be performed during SIT1 and SIT2:
• Integration Testing (SIT1 & SIT2) – Testing to find errors in complete functions and processes within and between units. Ensures everything has been linked together correctly.
• System Testing (SIT1 & SIT2) – Validates that the system functionality performs as specified. Functional requirements define how the system should perform.
• End-to-End Testing (SIT1 & SIT2) – Validates a transaction through the entire system, not just at entry and exit points. This means a transaction is followed throughout the various modules it may touch. Must be coordinated.
• Regression Testing (SIT1 & SIT2) – Ensures that the application doesn't negatively impact previously migrated objects/modules. Re-tests the application to ensure that a fix did not cause another, previously working portion to break. This is done as objects are migrated to fix errors.
• Security Testing (SIT1 & SIT2) – Eliminates security accessibility errors.
• User Testing (SIT1 and/or SIT2, per module discretion) – Same focus as Integration and System Testing (executing RTP conditions), performed by Users. Also validates production readiness and data integrity. Note: There is no separate User Testing "phase" – how users are incorporated into the overall testing effort may vary by module. This is the opportunity for users to validate functionality prior to the hard freeze.
Session 3
SO 3: Collect and record data from tests.

Learning Outcomes (Assessment Criteria):
• The recording ensures that the required data was produced.
• The recording ensures that the data was correctly collected.
• The recording ensures that the data are sufficient to meet the purpose of the test.
• The recording identifies any problems with the collection of data and takes appropriate action.
• The results are recorded by using an appropriate information system.
Collect and record data from tests
Setting Up Machines and Collecting Diagnostic Information Using Test Settings
You can use Test settings in Microsoft Test Manager and Visual Studio to collect extra data when you
run your tests. For example, you might want to make a video recording as you run your test. There
are diagnostic data adapters to:
• Collect each UI action step in text format
• Record each UI action for playing back
• Collect system information
• Collect event log data
• Collect IntelliTrace data to help isolate non-reproducible bugs
Diagnostic data adapters can also be used to change the behavior of a test machine. For example,
with a test setting in Visual Studio, you can emulate various network topology bottlenecks to
evaluate the performance of your team's application.
Using test settings with Microsoft Test Manager
With Microsoft Test Manager, you configure a test plan to run your tests. A test plan can have two
test settings:
 Manual runs
 Automated runs
You create these test settings using the Properties page of the test plan in Microsoft Test Manager.
You can configure both of these test settings to use a lab environment which can emulate a single
machine, or multiple machine roles. The test setting includes separate configuration settings for the
types of data to collect for each machine role using diagnostic data adapters.
Lab environments
A lab environment is a collection of virtual and physical machines that you can use to develop and
test applications. A lab environment can contain multiple machine roles needed to test multi-tiered
applications, such as workstations, web servers, and database servers. You can create and manage
lab environments and run tests in a lab environment using Microsoft Test Manager. When you run
your tests using a lab environment, the test will collect data, or affect the behavior of the machine
for each specific machine role that you configured in your test settings. In addition, you can use a
build-deploy-test workflow with your lab environment to automate the process of building,
deploying, and running automated tests on your application.
The following illustration shows examples of test settings and environments for a test plan.
The following illustration shows how you define the set of machine roles for your test settings. You
can then select a lab environment that has computers or virtual machines that are assigned to each
machine role to use when you run your tests. You can select any lab environment that includes at
least the set of machine roles that are defined in your test settings. The lab environment may
include other machine roles that are not specified in your test settings, as shown in the following
illustration.
Using test settings with Visual Studio
To run your unit, coded UI, web performance, or load tests by using Visual Studio, you can add,
configure and select the test settings to use when you run your tests. To run your tests, collect data,
or affect a test machine remotely, you must specify a test controller to use in your test settings. The
test controller will have agents that can be used for each role in your test settings.
Setting Up Test Machines to Run Tests or Collect Data
Using Visual Studio 2012, you can run your tests and also collect data and diagnostics when you run
your tests. You use test settings to specify the data and diagnostics that you want to collect. You can
even select diagnostic data adapters that affect the way that your test machine performs. For
example, you might want to create a video recording of your desktop while you run your test, or
collect system information about your Web server. Additionally, you might want to emulate a slow
network to impose a bottleneck on the system.
To run tests remotely on multiple machines, or collect data and diagnostics remotely you must use a
test controller and test agents. The test controller runs as a service and assigns tests to a test agent
to run. In addition it can tell the test agent what data or diagnostics need to be collected. You can
manage the test controller and agents by using Visual Studio, or if you register the test controller
with Team Foundation Server, then you can manage the controller and agents by using Microsoft
Test Manager.
If you have a distributed application, you define a role for each computer to use to run tests or
collect data. For example, if you have an application that consists of a Web server, a database server,
and a desktop client, you would define one role for each of these. The desktop client can run the
tests and collect data locally, and the other roles can collect any data that you require on the
machine that you assign to that role. You can also assign multiple machines to the same role.
If you are using Microsoft Test Manager, you create an environment for this set of roles. An
environment is a collection of computers in which each computer has an assigned role.
The following sections of this topic provide more information about the ways to run tests and collect
data, based on the type of tests that you run and whether you want to use an environment:
Manual Tests
It is recommended that you run your manual tests on a local machine that is not part of the
environment. You can collect data or affect a test machine for your manual tests in the following
ways:
• Collect data on the local machine using default test settings
• Collect data on a local machine, specifying the data to collect
• Collect data on local and remote tiers of your application
Automated Tests
You can run tests either by using Microsoft Test Manager or by using Visual Studio 2012.
If you plan to run your automated tests by using Microsoft Test Manager, you must use a lab
environment that contains a set of roles to run your tests from your test plan. You must create a test
controller that is registered with your team project in Team Foundation Server. However, Microsoft
Test Manager will set up the test agent in each machine in the environment.
If you plan to run automated tests by using Visual Studio, you can just run your automated tests on
your local machine and use test settings to collect data locally. If you want to collect data or affect
the test machine for specific parts of a multitier application, you can select a test controller and test
agents and add roles to use in your test settings. You should not register the test controller with
Team Foundation Server. However, you must set up a test agent in each machine on which you plan
to initiate tests or collect test data.
The following illustration shows a test controller and test agents that are installed on a machine for
each role in an application under test and the tasks that the test agent can perform. The test
controller manages the test agents that are registered to it.
Important
If you want to use a test controller as part of an environment by using Microsoft Test Manager, you
must register it with Team Foundation Server, as shown in the following illustration. However, if you
want to use a test controller from Visual Studio, do not register the test controller with Team
Foundation Server.
Caution
The test agents and test controllers can be installed in different domains if your testing setup requires
it.
Environments
If you use Microsoft Test Manager to conduct your tests, you create lab environments on which to
run the tests. There are two kinds of environments: standard and SCVMM environments. A standard
environment can use physical computers or virtual machines, and the virtual machines can run on
any virtualization framework. An SCVMM environment uses only virtual machines that are managed
by System Center Virtual Machine Manager (SCVMM).
Microsoft Test Manager can be used to set up both kinds of environment. In the case of an SCVMM environment, you can stop and start environments, store environments in a library, and create multiple copies of them.
In both cases, you assign roles to each machine in the environment. For example, typical roles
are Web Server and Desktop Client. The role names are used by your test workflow to determine
what software and tests to deploy on each machine.
Test Manager installs a test agent on each computer, which enables the test controller to deploy software, run tests, and collect test results.
Session 4
SO 4: Prepare the testing to ensure the given specifications will be addressed.

Learning Outcomes (Assessment Criteria):
• The preparation ensures a plan is prepared for the testing in line with the given specifications.
• The preparation ensures the plan specifies what needs to be tested.
• The preparation documents the test scenarios and test data to be used for the test.
• The preparation documents the outcomes expected for each of the scenarios prepared.
Prepare the testing to ensure the given specifications will be addressed
Preparing for System Integration Testing
Planning and preparation is crucial for functional testing success, especially at the system-integration
level. Planning and preparation requirements fall into three general categories:
• Testing Hierarchy – Initiating a test at a system level when the components are not ready can be frustrating at best and disastrous in some cases. The testing plan must be structured in a manner that builds from the simple to the complex and from the utility systems to the end-user systems.
• Climate Interactions – By nature, buildings and their systems are designed to create a controlled environment, isolated from the local climate. To do this, the systems need to function over the entire range of conditions that will be encountered, and a major goal of integrated operation testing is to verify that contingency. Thus, the testing plan must consider the juxtaposition of the commissioning and start-up schedule with the seasons, to anticipate and plan for deferred testing where necessary and to protect the building and untested equipment that is coming on line from damage due to inappropriate operation for the current climate conditions.
• Operating Environment – Compressed schedules and phased occupancy are becoming the rule in modern construction. These contingencies frequently force phased start-ups of partially complete systems to serve portions of a building that will become occupied and fully operational before the fabrication of the system is complete. In addition, the need for a semi-controlled environment to facilitate the installation of finishes can create intense pressure to use the partially complete HVAC systems for temporary heating and cooling, a high-risk undertaking in many circumstances. Retro-commissioning projects, by nature, are challenged with testing machinery that is serving an operating facility without disrupting operations. Thus, it is essential that the test plan take these issues into account and include contingencies for dealing with them.
Test Documentation
Test documentation is the complete suite of artifacts that describe test planning, test design, test execution, test results and conclusions drawn from the testing activity. As testing activities typically consume 30% to 50% of project effort, testing represents a project within a project. Testing activities must therefore be fully documented to support resource allocation, monitoring and control. This section identifies the types of documents you need to set up and run your test program and summarises their content.
The Test Documentation Tree
Tests must be planned and documented to ensure that
test coverage is systematic and complete.