Software Engineering Lab

Ex.NO: 1 Work Breakdown Structure


Ex.NO: 2(a) Java Program for Tight coupling

package tightcoupling;

class Volume {

    public static void main(String args[]) {
        Box b = new Box(15, 15, 15);
        // Volume reads Box's public field directly, so the two classes are tightly coupled.
        System.out.println(b.volume);
    }
}

class Box {

    public int volume;

    Box(int length, int width, int height) {
        this.volume = length * width * height;
    }
}
Compilation and Execution:
javac tightcoupling/Volume.java
java tightcoupling.Volume

Output is:
3375
EX.NO: 2(b) Java Program for Loose coupling

class Volume {

    public static void main(String args[]) {
        Cylinder b = new Cylinder(25, 25, 25);
        // Volume can only reach Cylinder's state through the getter, so the coupling is loose.
        System.out.println(b.getVolume());
    }
}

final class Cylinder {

    private int volume;

    Cylinder(int length, int width, int height) {
        this.volume = length * width * height;
    }

    public int getVolume() {
        return volume;
    }
}

Compilation and Execution:

javac Volume.java
java Volume

Output
15625
EX.NO: 3(a) Java Program for low cohesion class

// Low cohesion: a single type mixes unrelated responsibilities
// (database connection handling and several kinds of printing).
interface PlayerDatabase
{
    public void connectDatabase();
    public void printAllPlayersInfo();
    public void printSinglePlayerInfo();
    public void printRankings();
    public void printEvents();
    public void closeDatabase();
}
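
For comparison, here is a hedged sketch (not part of the prescribed lab program) of how the same responsibilities could be split into more cohesive classes; the class and comment names are illustrative only:

class PlayerConnectionManager {
    public void connectDatabase() { /* open the connection */ }
    public void closeDatabase()   { /* close the connection */ }
}

class PlayerInfoPrinter {
    public void printAllPlayersInfo()   { /* print every player */ }
    public void printSinglePlayerInfo() { /* print one player */ }
}

class ReportPrinter {
    public void printRankings() { /* print ranking report */ }
    public void printEvents()   { /* print event report */ }
}

Each class now has one focused reason to change, which is the essence of high cohesion illustrated in Ex. 3(b) below.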
EX.NO: 3(b) Java Program for high cohesion class

// Java program to illustrate
// high cohesive behavior: the Multiply class does one focused job.

class Multiply {

    int a = 5;
    int b = 5;

    public int mul(int a, int b) {
        this.a = a;
        this.b = b;
        return a * b;
    }
}

class Display {

    public static void main(String[] args) {
        Multiply m = new Multiply();
        System.out.println(m.mul(5, 5));
    }
}

Compilation and Execution:

javac Display.java
java Display


Output:
25
EX.NO: 4 Functional and Non-Functional Requirements

Business requirements. These include high-level statements of goals, objectives, and needs.

Stakeholder requirements. The needs of discrete stakeholder groups are also specified to define
what they expect from a particular solution.

Solution requirements. Solution requirements describe the characteristics that a product must
have to meet the needs of the stakeholders and the business itself. They fall into two groups:

 Functional requirements describe what a product must do and what its features
and functions are.
 Nonfunctional requirements describe the general characteristics of a
system. They are also known as quality attributes.

Transition requirements. An additional group of requirements defines what is needed from an
organization to successfully move from its current state to its desired state with the new product.

Let’s explore functional and nonfunctional requirements in greater detail.

Functional requirements and their specifications:


Functional requirements are product features or functions that developers must implement to
enable users to accomplish their tasks. So, it’s important to make them clear both for the
development team and the stakeholders. Generally, functional requirements describe system
behavior under specific conditions. For instance:

A search feature allows a user to hunt among various invoices if they want to credit an issued
invoice.

 Software requirements specification document


 Use cases
 User stories
 Work Breakdown Structure (WBS) (functional decomposition)
 Prototypes
 Models and diagrams
Software requirements specification document
Functional and nonfunctional requirements can be formalized in the software requirements
specification (SRS) document. (To learn more about software documentation, read our article on that topic.)
The SRS contains descriptions of functions and capabilities that the product must provide. The
document also defines constraints and assumptions. The SRS can be a single document
communicating functional requirements or it may accompany other software documentation like
user stories and use cases.

We don’t recommend composing SRS for the entire solution before the development kick-off,
but you should document the requirements for every single feature before actually building it.
Once you receive the initial user feedback, you can update the document.

SRS must include the following sections:

Purpose. Definitions, system overview, and background.

Overall description. Assumptions, constraints, business rules, and product vision.

Specific requirements. System attributes, functional requirements, database requirements.

It’s essential to make the SRS readable for all stakeholders. You also should use templates with
visual emphasis to structure the information and aid in understanding it. If you have
requirements stored in some other document formats, link to them to allow readers to find the
needed information.

Example: If you’d like to see an actual document, download this SRS example created at
Michigan State University, which includes all points mentioned above in addition to presenting
use cases to illustrate parts of the product.

Use cases
Use cases describe the interaction between the system and external users that leads to achieving
particular goals.

Each use case includes three main elements:

Actors. These are the users outside the system that interact with the system.

System. The system is described by functional requirements that define an intended behavior of
the product.

Goals. The purposes of the interaction between the users and the system are outlined as goals.
There are two formats to represent use cases:

 Use case specification structured in textual format


 Use case diagram

A use case specification represents the sequence of events along with other information that
relates to this use case. A typical use case specification template includes the following
information:

 Description
 Pre- and Post- interaction condition
 Basic interaction path
 Alternative path
 Exception path
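
As an illustration (a hypothetical sketch, not taken from the lab project), a specification for a simple "Withdraw cash" use case might fill in the template like this:

Description: An account holder withdraws cash from an ATM.
Pre-condition: The account holder has a valid card and the ATM has cash available.
Post-condition: The requested amount is dispensed and the account balance is reduced.
Basic interaction path: Insert card -> enter PIN -> choose amount -> confirm -> collect cash and card.
Alternative path: The account holder enters a custom amount instead of choosing a preset one.
Exception path: The PIN is entered incorrectly three times and the card is retained.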

A use case diagram doesn’t contain a lot of details. It shows a high-level overview of the
relationships between actors, different use cases, and the system.

The use case diagram includes the following main elements:

Use cases. Usually drawn with ovals, use cases represent different use scenarios that actors
might have with the system (log in, make a purchase, view items, etc.)

System boundaries. Boundaries are outlined by the box that groups various use cases in a
system.

Actors. These are the figures that depict external users (people or systems) that interact with the
system.

Associations. Associations are drawn with lines showing different types of relationships between
actors and use cases.

User stories:

A user story is a documented description of a software feature seen from the end-user
perspective. The user story describes what exactly the user wants the system to do. In Agile
projects, user stories are organized in a backlog, which is an ordered list of product functions.
Currently, user stories are considered to be the best format for backlog items.

A typical user story is written like this:


As a <type of user>, I want <some goal> so that <some reason>.

Example:

As an admin, I want to add descriptions to products so that users can later view these
descriptions and compare the products.

User stories must be accompanied by acceptance criteria. These are the conditions that the
product must satisfy to be accepted by a user, stakeholders, or a product owner. Each user story
must have at least one acceptance criterion. Effective acceptance criteria must be testable,
concise, and completely understood by all team members and stakeholders. They can be written
as checklists, plain text, or by using Given/When/Then format.

Example:

Here’s an example of the acceptance criteria checklist for a user story describing a search
feature:

 A search field is available on the top-bar.


 A search is started when the user clicks Submit.
 The default placeholder is a grey text Type the name.
 The placeholder disappears when the user starts typing.
 The search language is English.
 The user can type no more than 200 symbols.
 The search doesn’t support special symbols. If the user has typed a special symbol
in the search input, it displays the warning message: Search input cannot
contain special symbols.
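
The same criteria can also be written in the Given/When/Then format; for example (an illustrative sketch of the special-symbol rule above, not part of the original checklist):

Given the user is on a page with the search field,
When the user types a special symbol into the search input and clicks Submit,
Then the search is not started and the warning message "Search input cannot contain special symbols" is displayed.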

Finally, all user stories must fit the INVEST quality model:

 I – Independent
 N – Negotiable
 V – Valuable
 E – Estimable
 S – Small
 T – Testable

Independent. This means that you can schedule and implement each user story separately. This
is very helpful if you implement continuous integration processes.
Negotiable. This means that all parties agree to prioritize negotiations over specification. This
also means that details will be created constantly during development.

Valuable. A story must be valuable to the customer. You should ask yourself from the
customer’s perspective “why” you need to implement a given feature.

Estimable. A quality user story can be estimated. This will help a team schedule and prioritize
the implementation. The bigger the story is, the harder it is to estimate it.

Small. Good user stories tend to be small enough to plan for short production releases. Small
stories allow for more specific estimates.

Testable. If a story can be tested, it’s clear enough and good enough. Tested stories mean that
requirements are done and ready for use.

Functional decomposition or Work Breakdown Structures (WBS)


A functional decomposition or WBS is a visual document that illustrates how complex processes
break down into their simpler components. WBS is an effective approach to allow for an
independent analysis of each part. WBS also helps capture the full picture of the project.

We suggest the following logic of functional decomposition:

1. Find the most general function.


2. Find the closest sub-function.
3. Find the next level of sub-function.
4. Check your diagram.

Or the decomposition process may look like this:

High Level Function ->Sub-function -> Process -> Activity

The features should be decomposed to the point at which the lowest level parts can’t be broken
down any further.
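
For illustration (a hypothetical example, not part of the lab project), one branch of such a decomposition might read:

High Level Function: Process customer orders
  Sub-function: Validate an order
    Process: Check item availability
      Activity: Query the stock level for each ordered item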

Software prototypes:

A software prototype is an umbrella term for different forms of early-stage deliverables that are
built to showcase how requirements must be implemented. Prototypes help bridge the vision
gaps and let stakeholders and teams clarify complicated areas of products in development.
Traditionally, prototypes represent how the solution will work and give examples of how users
will interact with it to accomplish their tasks.

Prototypes can be cheap and fast visual representations of requirements (throwaway prototypes)
or more complex ones (evolutionary prototypes). The latter can even become the early versions
of the product that already have some pieces of the final code. Effectively, evolutionary
prototypes may even turn into MVPs that we’ve described in a separate article.

Design documents and prototypes


Design requirements are usually collected and documented using three main formats that morph
into one another:

Wireframes. Wireframes are low-fidelity graphic structures of a website or an app. They help
map different product pages with sections and interactive elements.

Mockups. Once wireframes are ready, they are turned into mockups, visual designs that convey
the look and feel of the final product. Eventually, mockups can become the final design of the
product.

Design prototypes. These documents contain visuals and allow for some interface interactions,
like scrolling, clicking on links, or filling in forms. Design prototypes can be built from scratch
using HTML and CSS, but most UX teams use prototyping services like InVision.

NON FUNCTIONAL REQUIREMENTS:


Nonfunctional requirements describe how a system must behave and establish constraints of its
functionality. These requirements are also known as the system’s quality attributes.

Let’s have a close look at typical nonfunctional requirements.

Usability
Usability defines how difficult it will be for a user to learn and operate the system. Usability can
be assessed from different points of view:

Efficiency of use: the average time it takes to accomplish a user’s goals, how many tasks a user
can complete without any help, the number of transactions completed without errors, etc.

Intuitiveness: how simple it is to understand the interface, buttons, headings, etc.


Low perceived workload: how many attempts are needed by users to accomplish a particular
task.

Example: Usability requirements can consider language barriers and localization tasks: People
with no understanding of French must be able to use the product. Or you may set accessibility
requirements: Keyboard users who navigate a website using <tab>, must be able to reach the
“Add to cart” button from a product page within 15 <tab> clicks.

Security
Security requirements ensure that the software is protected from unauthorized access to the
system and its stored data. It considers different levels of authorization and authentication across
different user roles. For instance, data privacy is a security characteristic that describes who can
create, see, copy, change, or delete information. Security also includes protection against viruses
and malware attacks.

Example: Access permissions for the particular system information may only be changed by the
system’s data administrator.

Reliability
Reliability defines how likely it is for the software to work without failure for a given period of
time. Reliability decreases because of bugs in the code, hardware failures, or problems with other
system components. To measure software reliability, you can count the percentage of operations
that are completed correctly or track the average period of time the system runs before failing.

Example: The database update process must roll back all related updates when any update fails.
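
A minimal sketch of how such rollback behaviour can be implemented with JDBC transactions (the connection URL, table names, and SQL statements below are placeholders, not part of the requirement, and the in-memory H2 driver is assumed to be on the classpath):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

class UpdateWithRollback {
    public static void main(String[] args) throws SQLException {
        // Placeholder connection string; replace with the real database URL.
        Connection conn = DriverManager.getConnection("jdbc:h2:mem:demo");
        conn.setAutoCommit(false);                // group the related updates into one transaction
        try (Statement stmt = conn.createStatement()) {
            stmt.executeUpdate("UPDATE accounts SET balance = balance - 100 WHERE id = 1");
            stmt.executeUpdate("UPDATE accounts SET balance = balance + 100 WHERE id = 2");
            conn.commit();                        // both updates succeed together or not at all
        } catch (SQLException e) {
            conn.rollback();                      // any failure rolls back all related updates
            throw e;
        } finally {
            conn.close();
        }
    }
}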

Performance
Performance is a quality attribute that describes the responsiveness of the system to various user
interactions with it. Poor performance leads to a negative user experience. It also jeopardizes
system safety when the system is overloaded.

Example: The front-page load time must be no more than 2 seconds for users that access the
website using an LTE mobile connection.

Availability
Availability is gauged by the period of time that the system’s functionality and services are
available for use with all operations. So, scheduled maintenance periods directly influence this
parameter. And it’s important to define how the impact of maintenance can be minimized. When
writing the availability requirements, the team has to define the most critical components of the
system that must be available at all times. You should also prepare user notifications in case the
system or one of its parts becomes unavailable.
Example: Deployment of a new module mustn’t impact the availability of the front page, product
pages, and checkout pages, and mustn’t take longer than one hour. The rest of the pages that may
experience problems must display a notification with a timer showing when the system is going
to be up again.

Scalability
Scalability requirements describe how the system must grow without negative influence on its
performance. This means serving more users, processing more data, and doing more transactions.
Scalability has both hardware and software implications. For instance, you can increase
scalability by adding memory, servers, or disk space. On the other hand, you can compress data,
use optimizing algorithms, etc.

Example: The website must be scalable enough to support 200,000 users at a
time.

Final words
All software projects include information boundaries that describe the product and project
goals. These boundaries are drawn in the project requirements and specifications. The value of
creating a software requirements specification is in the optimization of the development process.
The software requirements specification answers all of the developers’ questions about the product that are
required to start the work. The functional specification is approved by the client and ensures that
developers are building what the customer wants.

EX.No:5 Complete plan for Microsoft Project


MS Project: Creating a New Plan:

Creating a project plan in MS Project isn’t that difficult. In the following article I’ll show how
to create a simple project plan using a real project example.


Setting up a new project:

Let’s create a new project and make some general adjustments. By the way I’m using MS
Project 2016 here.

STEP 1: CREATING THE PROJECT

After MS Project has launched, choose File > New > Blank Project

When you create a new project, MS Project sets the current date as the plan’s start date.

STEP 2: GENERAL PROJECT SETTINGS

For every new project, you want to change some key information before continuing.

Go to Project > Project Information

STEP 2.1 – PROJECT START DATE

Enter the start date of your project:

Don’t worry, you can change the start date later.


STEP 2.2 – SET THE CALENDAR

See field Calendar in the same dialog: Here you can select alternative calendars:

 Standard – 8-hour work days from Monday to Friday, with 1-hour break (use this one)
 24 hours – if there are no breaks and no non-working time
 Night Shift – Covers 11 pm to 8 am on all nights from Monday to Friday, with one-hour
breaks

Once you have made the settings, choose “OK” to close the dialog box.

Create a real project:

I’m not going to bore you with an artificial project like ‘how to bake a cake’ or that kind of BS.

We’ll create the plan for a real project.

In this case, it’s a project where a company is moving to a new office space, and we are in
charge of planning the move, including getting office furniture and all office stuff shipped to
the new location. And we want to do this without interfering with daily business. People
should be able to do their work; then on the weekend all items will be transferred, so that your
colleagues can start working from the new office building the following Monday.

Key project data:

 Goal: Office relocation


 Timeline: March – October
 Project phases: Project preparation, Selection of offices, Space design and furnishing,
Physical relocation

Building the Task List


STEP 1: ENTER TASKS

Now enter the tasks for the first project phase in the tabular view

STEP 2: ENTER TASK DURATION

Now we enter the start and finish date for each task.

In this case, MS Project automatically calculates the duration:

Tip: You can also enter a start date and a duration, and let MS Project calculate the Finish
date.

When entering duration, use the following logic:

Duration you want to enter | What to enter | How it appears in MS Project
30 minutes | 30m | 30 mins
6 hours | 6h | 6 hrs
3 days | 3d | 3 days
6 weeks | 6w | 6 weeks
4 months | 4mo | 4 months

Now we have to talk about how Project uses those values to calculate the duration of your
project. When you enter, for example, a duration of 12 days for a task, the task will span across
3 weeks. Why? In MS Project’s default configuration, a week has only 5 workdays. Therefore
the task will consume two full weeks plus 2 days of the 3rd week. Got it?

The following table shows you what default settings apply:

Value entered | Calendar value | MS Project default value
1 month | 28 to 31 days, depending on the month | 160 hours (20 workdays)
1 week | 7 days | 40 hours (5 workdays)
1 day | 24 hours | 8 hours (1 workday)
1 hour | 60 minutes | 60 minutes

STEP 3: CREATING SUMMARY TASKS


When you have to manage dozens or even hundreds of tasks, it makes sense to group tasks
together into summary tasks.

A summary task is not an actual task. It’s more like a “wrapper” to group several related tasks
together under one name. Duration, start and end date of a summary task depends on the tasks
included under the summary task. The start date will be the earliest start date of all
subordinated tasks. Likewise, the end date of a summary task is equal to the latest end date of
the subordinated tasks.

To create a summary task, we first create a new task above the first one.

Select the 2nd row, press the right mouse button and choose Insert Task:

In the new row, enter the new task Project preparation:

Now, select rows 3-8 and press the indentation button in the Format tab:

You should get the following result:

Project preparation is now a summary task. You can see this by the bold font and the little
black triangle next to the task.

Project preparation is our first project phase during which we set up the project, including
creating a project plan, staffing the project and creating a project budget .

Tip: You can create more levels of depth, depending on the complexity of your project. All
you have to do is define a summary task and indent the activities that should be rolled up.

CREATING A MILESTONE

Milestones are specific points on the project timeline. They are used to measure or review the
progress of a project, and to inform management and stakeholders about the current status.
Technically, a milestone is like a task with zero duration. Milestones are used as markers for
major achievements, such as “construction completed” or “project approved by
management”.

To create a milestone, select a task and right-click –> Go to Advanced and check “Milestone”
in the dialog box:

Once you click “OK”, take a look at the Gantt view. You now see the task is marked as a
diamond with a date next to it.

Here it is: Project approved by management is now a milestone:


STEP 4: LINKING TASKS IN A SPECIFIC SEQUENCE

Project-related tasks have to be performed in a specific order, so that we can accomplish the
project objectives.

Once you have entered the list of tasks, it is time to link these tasks together.

Take a look at the above screenshot:

 You can see our two project phases Project preparation and Selection of offices in bold.
Both are summary tasks (scroll up if you’ve missed how to create summary tasks).
 In the logical sequence, Selection of offices should come after Project preparation is
complete.
 Using the terminology of MS Project, we must make Project
preparation the predecessor of Selection of offices, which means it should come before
the office selection phase.
 We can easily make Project preparation the predecessor by entering “1” as the
predecessor number in the “Predecessors” column. “1” is the task number of Project
preparation.

Project plan in MS Project

As I told you at the beginning MS Project isn’t that hard to use. I suggest you spend some more
time playing around with Project and working with your sample project. Specifically, you
should practice the following tasks:

 Creating a new project


 Entering new tasks
 Adjusting duration, start and end date of tasks

 Linking several tasks together in a specific sequence


 Printing the project plan
 Saving your new project.

In manual scheduling, you set the start and end date manually for every task. For example,
you might say “Our business planning workshops will take place from Wednesday September
5th until Friday September 14th”. Then you schedule this activity for those exact days.
In automatic scheduling, you set the start date for the entire project, and you define how long
each task will take to complete (“Our business planning workshop takes 5 days”). Then, once
you’ve set the dependencies (linking the tasks in the right sequence), Project is able to
calculate the entire schedule automatically for you.

Of course, in its default configuration, MS Project doesn’t consider your individual scheduling
preferences. If you have a developer who works only 3 days per week, Project would not know
that, and an automatically generated plan would be inconsistent with your actual
circumstances. That’s because Project would assume that any task can be scheduled on all 5
workdays.

Which scheduling mode should you use? I recommend you always start with automatic, and
that you only switch to manual mode once you know exactly what tasks have to be performed,
how long they will take etc. Manual scheduling should be used in exceptions only.

EX.NO:6
Features, Vision, Business objectives, Business rules and stakeholders
in the vision document
A vision document defines the high-level scope and purpose of a program, product, or
project. A clear statement of the problem, proposed solution, and the high-level features of a
product helps establish expectations and reduce risks. This topic provides an outline of
potential content for a vision document.

See Developing a vision for an explanation of how the product owner or business analyst
works with stakeholders to develop a vision document. That topic, which is part of the IBM®
Engineering Lifecycle Management (ELM) scenario guidance, describes the vision-development
process. This topic outlines typical content for the document. You can copy this outline, paste it
into a new document, and use it as the basis for your vision document. Use those portions of this
outline that are relevant for your project.

When a team uses the Requirements Management (RM) capability in the ELM, the vision
document can be expressed in one or more rich-text documents or modules. You can embed
requirements and related artifacts in rich-text documents or use the numbered hierarchical
structure of a module to organize content. Team members can set attributes, such as priority and
status, on each artifact and create trace links between related documents, modules, and individual
artifacts.

To review the steps for creating and linking documents and modules, see Creating modules.

The vision document outline


1: Introduction

This introduction provides an overview of the entire vision document. It includes the purpose,
scope, definitions, acronyms, abbreviations, references, and an overview of the full document.

1.1 Purpose: State the purpose of this vision document.

1.2 Scope: Briefly describe the scope of this vision document, including which programs,
projects, applications, and business processes the document is associated with. Include anything
else that this document affects or influences.

1.3 Definitions, acronyms and abbreviations: Define all terms, acronyms, and abbreviations
that are required to interpret the vision correctly. This information might be provided by
reference to the project glossary, which can be developed online in the RM repository.

1.4 References: List all documents that the vision document refers to. Identify each document by
title, report number (if applicable), date, and publishing organization. Specify the sources from
which readers can obtain the references; the sources are ideally available in RM or in other
online repositories. This information might be provided by reference to an appendix or to another
document.

1.5 Overview: Describe the vision-document contents and explain how the document is
organized.
2: Positioning

2.1 Business opportunity: Briefly describe the business opportunity that is addressed by this
project.

2.2 Problem statement: Summarize the problem that this project solves. Use the following
statements as a model, providing project details to replace the parenthetical elements:

The problem of (describe the problem) affects (the stakeholders affected by the problem). The
impact of the problem is (what is the impact of the problem). A successful solution would
include (list some key benefits of a successful solution).

2.3 Product position statement: Provide an overall statement that summarizes at the highest
level the unique position the product intends to take in the marketplace. Use the following
statements as a model, providing project details to replace the parenthetical elements:

For the (target customer), who (statement of the need or opportunity). The (product name) is a
(product category) that (statement of key benefit, that is, the compelling reason to buy). Unlike
(primary competitive alternative), our product (statement of primary differentiation).

A product position statement communicates the intent of the application and the importance of
the project to all concerned stakeholders.

3: Stakeholder and user descriptions

To provide products and services that meet stakeholders' and users' needs, you must identify and
involve all stakeholders as part of the requirements-definition process. You must also identify the
system users and ensure that the stakeholder community represents them adequately.

This section provides a profile of the stakeholders and users who are involved in the project. This
section also identifies the key problems that stakeholders and users consider that the proposed
solution must address. This section does not describe specific requests or requirements; a
separate stakeholder requests artifact captures these items. The key-problem description provides
the background and justification for requirements.

3.1 Market demographics: Summarize the key market demographics that motivate your
product decisions. Describe and position target market segments. Estimate the market size and
growth by using the number of potential users. Alternatively, estimate the amount of money

that your customers spend trying to meet the needs that your product or enhancement would
fulfill. Review major industry trends and technologies. Answer these strategic questions:

 What is the reputation of your organization in these markets?


 What would you like the reputation to be?
 How does this product or service support your goals?
3.2 Stakeholder summary: List all the identified stakeholders. For each stakeholder type,
provide this information:

 Name: Name the stakeholder type.


 Represents: Briefly describe which individuals, teams, or organizations this stakeholder
type represents.
 Role: Briefly describe the role this stakeholder type plays in the development effort.

3.3 User summary: List all the identified user types. For each user type, provide this
information:

 Name: Name the user type


 Description: Briefly describe the relationship of this type of user to the system under
development.
 Stakeholder: List which stakeholder type represents this user type.

3.4 User environment: Detail the working environment of the target user. Here are some
suggestions:

 How many people are involved in completing the task? Is this changing?
 How long is a task cycle? How much time do users spend in each activity? Is this
changing?
 What unique environmental constraints affect the project? For example, do users require
mobile devices, work outdoors, or work during flights?
 Which system platforms are in use today? Are there future platforms planned?
 What other applications are in use? Does your application need to integrate with them?

In this section, you might include extracts from the business model to outline the task and
workers who are involved.

3.5 Stakeholder profiles: Describe each stakeholder in the project by completing the following
table for each stakeholder. Remember: Stakeholder types can be users, strategy departments,
legal or compliance departments, technical developers, operations teams, and others. A thorough
profile covers the following topics for each stakeholder type:

 Representative: State who represents the stakeholder to the project (This information is
optional if it is documented elsewhere.) Enter the representatives' names.

 Description: Briefly describe the stakeholder type.


 Type: Qualify the expertise of the stakeholder, such as guru, business expert , or casual
user. This designation can suggest technical background and degree of sophistication.
 Responsibilities: List the key responsibilities of the stakeholder on the system under
development; list their interests as a stakeholder.
 Success criteria: State how the stakeholder defines success. How is the stakeholder
rewarded?
 Involvement - Describe how the stakeholder is involved in the project. Where possible,
relate the involvement to the process roles; for example, a stakeholder might be a
requirements reviewer.
 Deliverables: Identify additional deliverables that the stakeholder requires. These items
might be project deliverables or output from the system under development.
 Comments or issues: State problems that interfere with success and any other relevant
information.

3.6 User profiles: Describe each user of the system here by completing the following table for
each user type. Remember user types can be experts and novices; for example, an expert might
need a sophisticated, flexible tool with cross-platform support, while a novice might need a
tool that is easy to use. A thorough profile covers these topics for each type of user:

 Representative: State who represents the user to the project. (This information is optional
if it is documented elsewhere.) This representative often refers to the stakeholder who
represents a set of users; for example, Stakeholder: Stakeholder1.
 Description: Briefly describe the user type.
 Type: Qualify the expertise of the user, such as guru or casual user. This designation can
suggest technical background and degree of sophistication.
 Responsibilities: List the key user responsibilities with respect to the system; for
example, state who captures customer details, produces reports, and coordinates work,
and so on.
 Success criteria: State how the user defines success. How is the user rewarded?
 Involvement: Describe how the user is involved in the project. Where possible, relate the
involvement to process roles; for example, a stakeholder might be a requirements
reviewer.
 Deliverables: Identify the deliverables that the user produces and for whom.
 Comments or issues: State problems that interfere with success and any other relevant
information. Describe trends that make the user's job easier or harder.

3.7 Key stakeholder or user needs: List the key problems with existing solutions as the
stakeholder perceives them. Clarify these issues for each problem:

 What are the reasons for this problem?


 How is the problem solved now?
 What solutions does the stakeholder want?

You must understand the relative importance that the stakeholder places on solving each
problem. Ranking and cumulative voting techniques help indicate the problems that must be
solved versus issues that stakeholders would like to be addressed. Use this table to capture the
stakeholder needs.
Need | Priority | Concerns | Current solution | Proposed solution

Table 1. Stakeholder needs

3.8 Alternatives and competition: Identify alternatives that the stakeholder perceives as
available. These alternatives can include buying a competitor's product, building a homegrown
solution, or maintaining the status quo. List any known and available competitive choices.
Include the major strengths and weaknesses of each competitor as the stakeholder perceives
them.

4: Product overview
This section provides a high-level view of the product capabilities, interfaces to other
applications, and systems configurations. This section typically consists of three subsections:

 Product perspective
 Product functions
 Assumptions and dependencies

4.1 Product perspective: Put the product in perspective with regards to other related products
and the user's environment. If the product is independent and completely self-contained, state it
here. If the product is a component of a larger system, relate how these systems interact and
identify the relevant interfaces between the systems. One way to display the major components
of the larger system, interconnections, and external interfaces is to use a business process or use
case diagram.

4.2 Summary of capabilities: Summarize the major benefits and features that the product will
provide. For example, a customer support system might use this part to address problem
documentation, routing, and status reporting without elaborating on detail that these functions
require. Organize the functions so that the list is understandable to the customer or to anyone
else who reads the document for the first time. A simple table that lists the key benefits and
their supporting features might suffice, as in the following example.

Customer benefit | Supporting features
New support staff can quickly learn how to use the product. | A knowledge base assists support personnel in quickly identifying known fixes and workarounds.
Customer satisfaction is improved because nothing falls through the cracks. | Problems are uniquely itemized, classified, and tracked throughout the resolution process. Automatic notification occurs for any aging issues.
Management can identify problem areas and gauge staff workload. | Trend and distribution reports enable a high-level review of problem status.
Distributed support teams can work together to solve problems. | With a replication server, current database information can be shared throughout the enterprise.
Customers can help themselves, lowering support costs and improving response time. | A knowledge base can be made available over the Internet. The knowledge base includes hypertext search capabilities and a graphical query engine.

Table 2. Benefits and features example

4.3 Assumptions and dependencies: List each factor that affects the features that the vision
document includes. List assumptions that, if changed, will alter the vision document. For
example, an assumption might state that a specific operating system will be available for the
designated hardware for the software product. If the operating system is not available, the vision
document will require change.

4.4 Cost and pricing: Record relevant cost and pricing impacts and constraints. For example,
distribution costs (the number of CDs and CD mastering) or other cost-of-goods-sold constraints
(manuals and packaging) might be material or irrelevant to project success, depending on the
nature of the application.

4.5 Licensing and installation: Licensing and installation issues can also directly affect the
development effort. For example, the need to support serializing, password security, or network
licensing will create additional system requirements that must be considered in the development
effort. Installation requirements might also affect coding, or create the need for separate
installation software.

5: Product features

List and briefly describe the product features. Features are the high-level capabilities of the
system that are required to deliver benefits to the users. Each feature is a requested service that
typically requires a series of inputs to achieve a satisfactory result. For example, a feature of a
problem-tracking system might be the ability to provide trending reports. As the use case model
takes shape, update the description to refer to the use cases.

Because the vision document is reviewed by a wide variety of involved personnel, keep the level
of detail general enough for everyone to understand. However, offer sufficient detail to provide
the team with the information it needs to create a use case model or other design documents.
To manage application complexity, for a new system or an incremental change, list capabilities
at such a high level that you include approximately 25-99 features. These features provide the
basis for product definition, scope management, and project management. Each feature will be
expanded into greater detail in the use case model.

Throughout this section, make each feature relevant to users, operators, or other external
systems. Include a description of functions and usability issues that must be addressed. The
following guidelines apply:

 Avoid design. Keep feature descriptions at a general level. Focus on required capabilities
and why (not how) they should be implemented.
 Designate all features as requirements of a specific feature type for easy reference and
tracking.

5.1 Feature 1.

5.2 Feature 2.

6:Constraints
Note any design constraints, external constraints, such as operational or regulatory
requirements, or other dependencies.
7: Quality ranges
Define the quality ranges for performance, robustness, fault tolerance, usability, and similar
characteristics that the feature set does not describe.
8: Precedence and priority
Define the priority of the different system features.
9: Other product requirements

At a high level, list applicable standards, hardware or platform requirements, performance
requirements, and environmental requirements.

9.1 Applicable standards: List all standards that the product must comply with. The list can
include these standards:

 Legal and regulatory standards (FDA, UCC)


 Communications standards (TCP/IP, ISDN)
 Platform compliance standards (Windows, UNIX, and so on)
 Quality and safety standards (UL, ISO, CMM)

9.2 System requirements: Define the system requirements for the application. These can
include the supported host operating systems and network platforms, configurations, memory,
peripheral devices, and companion software.

9.3 Performance requirements: Detail performance requirements. Performance issues can
include such items as user-load factors, bandwidth or communication capacity, throughput,
accuracy, reliability, or response times under various load conditions.
9.4 Environmental requirements: Detail environmental requirements as needed. For hardware-
based systems, environmental issues can include temperature, shock, humidity, and radiation.
For software applications, environmental factors can include use conditions, user environment,
resource availability, maintenance issues, error handling, and recovery.

10: Documentation Requirements


This section describes the documentation that you must develop to support successful
application deployment.
10.1 Release notes, read me file: Release notes or an abbreviated read me file can include a
"What's new" section, a discussion of compatibility issues with earlier releases, and installation
and upgrade alerts. The document can also contain or link to fixes in the release and any
known problems and workarounds.
10.2 Online help: Many applications provide an online help system to assist the user. The
nature of these systems is unique to application development as they combine aspects of
programming (searchable information and web-like navigation) with aspects of technical
writing (organization and presentation). Many teams find that developing an online help
system is a project within a project that benefits from scope management and planning at the
project outset.
10.3 Installation guides: A document that includes installation, configuration, and upgrade
instructions is part of offering a full solution.
10.4 Labeling and packaging: A consistent look and feel begins with product packaging and
applies to installation menus, splash screens, help systems, GUI dialog boxes, and so on. This
section defines the needs and types of labeling to be incorporated in the code. Examples
include copyright and patent notices, corporate logos, standardized icons, and other graphic
elements.
11: Appendix 1 - Feature attributes
Give features attributes that can be used to evaluate, track, prioritize and manage the product
items that are proposed for implementation. Outline all requirement types and attributes in a
separate requirements management plan. However, you might want to list and briefly describe
the attributes for features that have been chosen. The following subsections represent a set of
suggested feature attributes.
11.1 Status: Teams set feature status after negotiation and review by the project management
team. Status tracks progress throughout the life of the project. The following table provides an
example of typical status-attribute values.
Status | Description
Proposed | Describes features that are under discussion but have not been reviewed and accepted by the official channel. The official channel might be a working group that consists of representatives from the project team, product management, and user or customer community.
Approved | Capabilities that are deemed useful and feasible and have been approved for implementation by the official channel.
Incorporated | Features that have been incorporated into the product baseline.

Table 3. Status value examples


11.2 Benefit: The marketing group, the product manager, or the business analyst sets the
feature benefits. All requirements are not created equal. Ranking requirements by their relative
benefit to the user opens a dialog with customers, analysts, and members of the development
team. Use benefits in managing project scope and determining development priority. The
following table provides an example of typical benefit or priority attribute values.

Priority | Description
Critical | Essential features. Failure to implement a critical feature means that the system will not meet customer needs. All critical features must be implemented in the release or the schedule will slip.
Important | Features important to the effectiveness and efficiency of the system for most applications. The functions cannot be easily provided in some other way. Omitting an important feature might affect customer or user satisfaction, or even revenue, but the release will not be delayed because an important feature is not included.
Useful | Features that are useful in less typical applications, are used less frequently, or that can be met with reasonably efficient workarounds. No significant revenue or customer satisfaction impact can be expected if such an item is not included.

Table 4. Benefit priority examples

11.3 Effort: The development team estimates the effort that is required to implement features.
Some features require more time and resources than others. Estimating the time, required code,
or functions, helps gauge complexity and set expectations of what can be accomplished in a
given time frame. Use the estimate in managing scope and determining development priority.

11.4 Risk: The development team establishes risk levels, based on the probability that the
project will experience undesirable events, such as cost overruns, schedule delays, or even
cancellation. Most project managers find categorizing risks as high, medium, and low is
sufficient, although finer gradations are possible. Risk can often be assessed indirectly by
measuring the uncertainty (range) of the project team's schedule estimate.

11.5 Stability: The analyst and development team establish feature stability based on the
probability that the feature will change or the team's understanding of the feature will change.
Stability is used to help establish development priorities and determine those items for which
additional elicitation is the appropriate next action.

11.6 Target release: Teams record the earliest intended product version that will include the
feature. You can use this field to allocate features from a vision document into a particular
baseline release. When combined with the status field, your team can propose, record, and
discuss various features of the release without committing them to development. Only features
whose status is set to "incorporated" and whose target release is defined will be implemented.
With scope management, the target release version number can be increased, and the item
remains in the vision document but is scheduled for a later release.
11.7 Assigned to: In many projects, features are assigned to feature teams that are responsible
for further elicitation, writing the software requirements, and implementation. The process
helps everyone on the project team better understand responsibilities.

11.8 Reason: Teams use this text field to track the source of the requested feature.
Requirements exist for specific reasons. This field records an explanation or a reference to an
explanation. For example, the reference might point to a page and line number of a product
requirement specification or point to a minute marker on a customer-interview video.

Stakeholders: Stakeholders are those organisations or people that have an interest in the
organisation; these interests are varied and exist for many reasons. They can be a source of potential
conflict for the successful accomplishment of the organisation’s strategy and goals.

Ex.NO: 7 Complete class Diagram using Rational Rose


Ex.NO: 8 Complete object Diagram using Rational Rose
EX.NO:9 Metrics for quality attributes for any software application
MAINTAINABILITY:

It is the ease with which you can modify software, adapt it for other purposes, or transfer
it from one development team to another. Compliance with software architectural rules and use
of consistent coding across the application combine to make software maintainable.
USABILITY:

The user interface is the only part of the software visible to users, so it’s vital to have a
good UI. Simplicity and task execution speed are two factors that lead to a better UI.

Returning briefly to the functional and non-functional requirements that affect software quality,
usability is a non-functional requirement. Consider an airline booking system that allows you to
book flights (functional requirement). If that system is slow and frustrating to use (non-
functional requirement), then the software quality is low.

RELIABILITY:
It is the risk of software failure and the stability of a program when exposed to
unexpected conditions. Reliable software has minimal downtime, good data integrity, and no
errors that directly affect users.

SOFTWARE TESTABILITY

Quality software requires a high degree of testability. Finding faults in software with high
testability is easier, making such systems less likely to contain errors when shipped to end users.
The harder it is to provide quality assurance, the tougher a time you’ll have ensuring that quality
applications are deployed into production.
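
As an illustration (a hedged sketch, not one of the prescribed lab programs), a class that receives its dependencies through its constructor is easier to test, because a stub can be substituted in a unit test; all names here are hypothetical:

// Hypothetical example: the dependency is injected, so a stub can replace it in tests.
interface PaymentGateway {
    boolean charge(double amount);
}

class CheckoutService {
    private final PaymentGateway gateway;

    CheckoutService(PaymentGateway gateway) {
        this.gateway = gateway;            // supplied from outside, easy to replace in a test
    }

    boolean checkout(double amount) {
        return gateway.charge(amount);     // no hidden dependencies, so behaviour is observable
    }
}

class CheckoutServiceTest {
    public static void main(String[] args) {
        // A stub gateway that always succeeds stands in for the real payment system.
        CheckoutService service = new CheckoutService(amount -> true);
        System.out.println(service.checkout(10.0));   // prints: true
    }
}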

EFFICIENCY
It refers to an application’s use of resources and how that affects its scalability, customer
satisfaction, and response times. Software architecture, source code design, and individual
architectural components all contribute to performance efficiency.

CORRECTNESS

The correctness of a software system refers to:


– Agreement of program code with specifications
– Independence of the actual application of the software system.
The correctness of a program becomes especially critical when it is embedded in a complex
software system

ROBUSTNESS
A software system is robust if the consequences of an error in its operation, in the input,
or in the hardware, in relation to a given application, are inversely proportional to the probability
of the occurrence of this error in the given application.
– Frequent errors (e.g. erroneous commands, typing errors) must be handled with particular care.
– Less frequent errors (e.g. power failure) can be handled more laxly, but still must not lead to
irreversible consequences.
EX.NO:10 Use Case

A use case is a software and system engineering term that describes how a user uses a system to
accomplish a particular goal. A use case acts as a software modeling technique that defines the
features to be implemented and the resolution of any errors that may be encountered.

There are three basic elements that make up a use case:

 Actors: Actors are the type of users that interact with the system.
 System: Use cases capture functional requirements that specify the intended behavior of
the system.
 Goals: Use cases are typically initiated by a user to fulfill goals describing the activities
and variants involved in attaining the goal.

Use cases are modeled using the Unified Modeling Language (UML) and are represented by ovals containing
the names of the use case. Actors are represented using lines with the name of the actor written
below the line. To represent an actor's participation in a system, a line is drawn between the actor
and the use case. Boxes around the use case represent the system boundary.

Characteristics associated with use cases are:

 Organizing functional requirements


 Modeling the goals of system user interactions
 Recording scenarios from trigger events to ultimate goals
 Describing the basic course of actions and exceptional flow of events
 Permitting a user to access the functionality of another event

The steps in designing use cases are:

 Identify the users of the system


 For each category of users, create a user profile. This includes all roles played by the
users relevant to the system.
 Identify significant goals associated with each role to support the system. The system’s
value proposition identifies the significant role.
 Create use cases for every goal associated with a use case template and maintain the same
abstraction level throughout the use case. Higher level use case steps are treated as goals
for the lower level.
 Structure the use cases
 Review and validate the users
Importance of Use Case Diagrams

As mentioned before, use case diagrams are used to gather the usage requirements of a system.
Depending on your requirement you can use that data in different ways. Below are a few ways to
use them.

 To identify functions and how roles interact with them – The primary purpose of use
case diagrams.
 For a high-level view of the system – Especially useful when presenting to managers or
stakeholders. You can highlight the roles that interact with the system and the
functionality provided by the system without going deep into inner workings of the
system.
 To identify internal and external factors – This might sound simple but in large
complex projects a system can be identified as an external role in another use case.

Use Case Diagram objects

Use case diagrams consist of 4 objects.

 Actor
 Use case
 System
 Package

The objects are further explained below.

Actor
An actor in a use case diagram is any entity that performs a role in one given system. This could
be a person, an organization or an external system, and it is usually drawn as a stick figure.

Use Case
A use case represents a function or an action within the system. It’s drawn as an oval and
named with the function.
System
The system is used to define the scope of the use case and is drawn as a rectangle. This is an
optional element but useful when you’re visualizing large systems. For example, you can create
all the use cases and then use the system object to define the scope covered by your project. Or
you can even use it to show the different areas covered in different releases.

Package
The package is another optional element that is extremely useful in complex diagrams. Similar
to class diagrams, packages are used to group together use cases. They are drawn as a folder
shape with a small tab.

Use Case Diagram Guidelines

Although use case diagrams can be used for various purposes there are some common guidelines
you need to follow when drawing use cases.
These include naming standards, directions of arrows, the placing of use cases, usage of system
boxes and also proper usage of relationships.

We’ve covered these guidelines in detail in a separate blog post. So go ahead and check out use
case diagram guidelines.

Relationships in Use Case Diagrams

There are five types of relationships in a use case diagram. They are

 Association between an actor and a use case


 Generalization of an actor
 Extend relationship between two use cases
 Include relationship between two use cases
 Generalization of a use case

We have covered all these relationships in a separate blog post that has examples with images.
We will not go into detail in this post but you can check out relationships in use case diagrams.

How to Create a Use Case Diagram

Up to now, you’ve learned about objects, relationships and guidelines that are critical when
drawing use case diagrams. I’ll explain the various processes using a banking system as an
example.

Identifying Actors
Actors are external entities that interact with your system. It can be a person, another system or
an organization. In a banking system, the most obvious actor is the customer. Other actors can be
bank employee or cashier depending on the role you’re trying to show in the use case.

An example of an external organization can be the tax authority or the central bank. The loan
processor is a good example of an external system associated as an actor.

Identifying Use Cases


Now it’s time to identify the use cases. A good way to do this is to identify what the actors need
from the system. In a banking system, a customer will need to open accounts, deposit and
withdraw funds, request check books and similar functions. So all of these can be considered as
use cases.

Top level use cases should always provide a complete function required by an actor. You can
extend or include use cases depending on the complexity of the system.
Once you identify the actors and the top level use case you have a basic idea of the system. Now
you can fine tune it and add extra layers of detail to it.

Look for Common Functionality to use Include


Look for common functionality that can be reused across the system. If you find two or more use
cases that share common functionality, you can extract the common functions and add them to a
separate use case. Then you can connect it via the include relationship to show that it's always
called when the original use case is executed (see the diagram for an example).

Is it Possible to Generalize Actors and Use Cases


There may be instances where actors are associated with similar use cases while triggering a few
use cases unique only to them. In such instances, you can generalize the actor to show the
inheritance of functions. You can do a similar thing for use cases as well.

One of the best examples of this is the "Make Payment" use case in a payment system. You can
further generalize it to "Pay by Credit Card", "Pay by Cash", "Pay by Check" etc. All of them
have the attributes and the functionality of payment, with special scenarios unique to each.

Optional Functions or Additional Functions


There are some functions that are triggered optionally. In such cases, you can use the extend
relationship and attach an extension rule to it. In the banking system example below, "Calculate
Bonus" is optional and only triggers when a certain condition is matched.

Extend doesn’t always mean it’s optional. Sometimes the use case connected by extending can
supplement the base use case. The thing to remember is that the base use case should be able to
perform a function on its own even if the extending use case is not called.
Use case diagram for stakeholders
EX.NO:11

Identify and analyze all the possible risks and its risk mitigation
plan for the system to be automated

INTRODUCTION

Ensuring that adequate and timely risk identification is performed is the responsibility
of the owner, as the owner is the first participant in the project. The sooner risks are identified,
the sooner plans can be made to mitigate or manage them. Assigning the risk identification
process to a contractor or an individual member of the project staff is rarely successful and
may be considered a way to achieve the appearance of risk identification without actually
doing it.
It is important, however, that all project management personnel receive specific
training in risk management methodology. This training should cover not only risk analysis
techniques but also the managerial skills needed to interpret risk assessments. Because the
owner may lack the specific expertise and experience to identify all the risks of a project
without assistance, it is the responsibility of DOE’s project directors to ensure that all
significant risks are identified by the integrated project team (IPT). The actual identification of
risks may be carried out by the owner’s representatives, by contractors, and by internal and
external consultants or advisors. The risk identification function should not be left to chance
but should be explicitly covered in a number of project documents:
 Statement of work (SOW),
 Work breakdown structure (WBS),
 Budget,
 Schedule,
 Acquisition plan, and
 Execution plan.

METHODS OF RISK IDENTIFICATION

There are a number of methods in use for risk identification. Comprehensive


databases of the events on past projects are very helpful; however, this knowledge frequently
lies buried in people’s minds, and access to it involves brainstorming sessions by the project
team or a significant subset of it. In addition to technical expertise and experience, personal
contacts and group dynamics are keys to successful risk identification.
Project team participation and face-to-face interaction are needed to encourage open
communication and trust, which are essential to effective risk identification; without them, team
members will be reluctant to raise their risk concerns in an open forum. While smaller,
specialized groups can perform risk assessment and risk analysis, effective, ongoing risk
identification requires input from the entire project team and from others outside it. Risk
identification is one reason early activation of the IPT is essential to project success.
The risk identification process on a project is typically one of brainstorming, and the usual
rules of brainstorming apply:
 The full project team should be actively involved.
 Potential risks should be identified by all members of the project team.
 No criticism of any suggestion is permitted.
 Any potential risk identified by anyone should be recorded, regardless of whether other
members of the group consider it to be significant.
 All potential risks identified by brainstorming should be documented and followed up by
the IPT.
The objective of risk identification is to identify all possible risks, not to eliminate risks from
consideration or to develop solutions for mitigating risks—those functions are carried out during
the risk assessment and risk mitigation steps.

Some of the documentation and materials that should be used in risk identification as they
become available include these:
 Sponsor mission, objectives, and strategy; and project goals to achieve this strategy,
 SOW,
 Project justification and cost-effectiveness (project benefits, present worth, rate of return, etc.),

 WBS,
 Project performance specifications and technical specifications,
 Project schedule and milestones,
 Project financing plan,
 Project procurement plan,
 Project execution plan,
 Project benefits projection,
 Project cost estimate,
 Project environmental impact statement,
 Regulations and congressional reports that may affect the project,
 News articles about how the project is viewed by regulators, politicians, and the public,
and
 Historical safety performance.
The risk identification process needs to be repeated as these sources of information change and
new information becomes available.
There are many ways to approach risk identification. Two possible approaches are (1) to
identify the root causes of risks—that is, identify the undesirable events or things that can go
wrong and then identify the potential impacts on the project of each such event—and (2) to
identify all the essential functions that the project must perform or goals that it must reach to be
considered successful and then identify all the possible modes by which these functions might
fail to perform. Both approaches can work, but the project team may find it easier to identify all
the factors that are critical to success, and then work backward to identify the things that can go
wrong with each one.
Risk identification should be performed early in the project (starting with preproject planning,
even before the preliminary concept is approved) and should continue until the project is
completed. Risk identification is not an exact science and therefore should be an ongoing process
throughout the project, especially as it enters a new phase and as new personnel and contractors
bring different experiences and viewpoints to risk identification. For this reason, the DOE project
director should ensure that the project risk management plan provides for periodic updates.

So what framework do we have for Risk Analysis and Management?


I am suggesting that we take a 4 Step Process and apply it always and from the very outset
(feasibility) of our project. The 4 steps are:

1. Risk Identification

2. Risk Analysis

3. Risk Response Plan

4. Risk Monitoring and Control

So let’s start at the beginning


You have just taken over on a new project and are busy, busy, busy. You believe that the key is
to get the scope nailed down, the team built and the funding secured – we will take a look at risks
later on – they are not really that important, because this is essentially the same as a project we
did last year, but bigger and with more aggressive delivery dates.

That is a lot of assumptions – if they are all TRUE, then you are in the clear; if any of them are
FALSE, then you are ignoring potentially fatal risks – Russian roulette comes to mind.

Apart from that, risk identification is going to influence how (and possibly what) you choose to
execute the project and will also possibly influence your calculation of the required contingency.

So let’s start with “Risk Identification”.

In order to address this properly we need to define “What” the activity is and “How” we will
execute it.

Let’s start with the WHAT


The key objective of this stage is to capture any risks/problems which might occur during the
delivery of the project objectives and which may impact our chances of success.

The HOW is a little bit more detailed


Here we will need to have some brainstorming sessions with the client and key technical
resources so that we can evaluate the type of things that might happen, how likely they are to
happen and finally, what the impact of such an event would be.
So let’s take a look at a framework that would allow us to analyze and manage the risks for our
project from the outset – let’s call it the Kevlar Jacket Approach.
Step 1 – Risk Identification
As stated above, to aid in identifying the risk, we first need to identify the right people to aid us
in this – technical experts, customer, project manager. Once we have identified the right team,
we then need to conduct the risk analysis – here we need to use a mix of:

1. One on one meetings

2. Brainstorming meetings

3. Review of previous project risk and issue registers

During these activities we need to capture key information to allow us to analyze, respond and
manage these risks. During the identification step, make sure that you capture the following:

1. Give each risk a unique identifier – a simple number from 1 to n.

2. A risk description which is sufficiently detailed for anyone reading the risk register to
understand the risk or ask intelligent questions of the person who identified the risk.

3. A risk indicator – i.e. any event which might be an early warning sign of the risk occurring, or
may trigger a sequence of events that if not controlled properly would lead to the risk occurring.

4. Categorise the risks into specific buckets – e.g. safety, technical, commercial, etc.

5. Record who identified the risk and on what date.

6. Record during which activity it was captured – e.g. brainstorming session, one-on-one interview –
this will be useful for analysis across projects and will help you continuously improve your risk
analysis process.

So that then covers the information you need to capture once you have identified a specific risk.
The next step is to analyse it and get a detailed understanding of what its occurrence would
mean.

Step 2 – Risk Analysis


Once you have identified a risk you need to analyze it – you need to once again ensure that you
have all the right people present to conduct this in a meaningful way.

The key activities here are:

1. Get an understanding of the impact to the project/business if the risk did in fact materialise –
this will require key input from the customer and the technical experts.
2. Rank this in terms of significance, using a scoring system of 0 to 5, where 0 = none, 1 = low
and 5 = very high – make sure that this is discussed in detail and there is a reasoned basis for the
score.

3. Gain an understanding of the probability of this event occurring and rank this in terms of
probability, using a scoring system of 0 to 5, where 0 = none, 1 = low and 5 = very high – make
sure that this is discussed in detail and there is a reasoned basis for the score.

4. Now calculate a Risk Score = Significance x Probability

5. Colour code the risk based on score – define Red, Yellow and Green bands, e.g. 0-5 = Green,
6-12 = Yellow, >12 = Red (a small code sketch of this scoring follows this list).
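
To make the scoring concrete, here is a minimal sketch in Java. The band thresholds are the example bands from step 5, and the significance and probability values are assumed purely for illustration:

// Minimal sketch of the Risk Score calculation described in steps 2 to 5.
public class RiskScoring {

    // Returns the colour band for a given risk score, using the example bands above.
    static String band(int score) {
        if (score <= 5)  return "Green";
        if (score <= 12) return "Yellow";
        return "Red";
    }

    public static void main(String[] args) {
        int significance = 4;   // rated 0 (none) to 5 (very high) -- assumed value
        int probability  = 3;   // rated 0 (none) to 5 (very high) -- assumed value

        // Risk Score = Significance x Probability
        int riskScore = significance * probability;

        System.out.println("Risk Score = " + riskScore + " (" + band(riskScore) + ")");
        // Prints: Risk Score = 12 (Yellow)
    }
}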

Now we have some useful information to go forward with – the next step is to build a plan to
either reduce the impact or eliminate the opportunity for it to occur. We are now in the realm of
managing our risks and we must build a risk response plan – our next step.

Step 3 – Risk Response Plan


Here we start to look at how we manage the risks we have identified – we have 4 main strategies
that we can use, and we are going to call them the 4 T's.

The 4 T’s

1. Terminate (often referred to as Avoidance)

2. Transfer

3. Treat (often referred to as Mitigate)

4. Tolerate

So, let’s look at each of these individually.

Terminate is where specific steps are taken to ensure that the risk is eliminated (avoided) or that
the impact it had is prevented.

Transfer is where the risk is passed to another party; the weakness with this is that the risk does
not go away, it just becomes someone else's problem.

Treat is where by taking certain actions immediately, the risks can be reduced.

Tolerate is exactly what it says; the reason we tolerate such risks is that, although we can't do
much to reduce or eliminate them, the benefits of taking them far outweigh the penalties/costs.
Step 4 – Risk Monitoring and Control
This is the routine part and requires the project manager to be diligent and monitor the status of
the risk, the residual score (by reassessing the risk at critical junctures) and the risk state – is it
static, increasing or declining? This is a key activity and, depending on the trend and significance,
may require a renewed effort by the project team to ensure that identified risks are dealt with
appropriately.

Finally, the housekeeping. Put all of this information in one central location – a risk register.
Make sure that all key stakeholders are informed of this and agree with your plan of action.


Risk Mitigation Strategies

There are five principal risk mitigation strategies. Of course, each one serves a different purpose
for different businesses. It becomes a subjective matter to decide how to approach risk. However,
with the use of risk management software and risk assessment matrices, you can be better
prepared to assess, monitor and manage risk.

Let’s take a look at the main strategies:

1. Risk Acceptance: Risk acceptance comes down to "risking it." It means coming to terms with the
fact that the risk exists and that there is nothing you will do to mitigate or change it. Instead, you
understand the probability of it happening and accept the consequences that may occur. This is the
best strategy when a risk is small or unlikely to happen. It makes sense to accept a risk when the cost
of mitigating or avoiding it would be higher than merely accepting it and leaving it to chance.

2. Risk Avoidance: If a risk from starting a project, launching a product, moving your business,
etc. is too large to accept, it may be better to avoid it. In this case, risk avoidance means not
performing that activity that causes the risk. Managing risk in this way is most like how people
address personal risks. While some people are more risk-loving and others are more risk-averse,
everyone has a tipping point at which things become just too risky and not worth attempting.

3. Risk Mitigation: When risks are evaluated, for some risks it is better neither to avoid nor simply to accept them. In this
instance, risk mitigation is explored. Risk mitigation refers to the processes and methods of
controlling risk. When you identify risk and its probability, you can allocate resources for
management.
4. Risk Reduction: Businesses can assign a level at which risk is acceptable, which is called the
residual risk level. Risk reduction is the most common strategy because there is usually a way to
at least reduce risk. It involves taking countermeasures to decrease the impact of consequences.
For example, one form of risk reduction is risk transfer, like that of buying insurance.

5. Risk Transfer: As mentioned, risk transfer involves moving the risk to another third party or
entity. Risk transfers can be outsourced, moved to an insurance agency, or given to a new entity
as is what happens when leasing property. Risk transfers don’t always result in lower costs.
Instead, a risk transfer is the best option when it can be used to reduce future damage. So,
insurance can cost money, but it may end up being more cost-effective than having the risk occur
and being solely responsible for reparations.

Risk Evaluation

To determine the right risk mitigation strategy to take, you must evaluate risks. This involves
three steps:

 Identification: First and foremost, you must identify and define the types of risks that
your business faces. There are both internal and external risks. When identifying risks,
consider if they are preventable, such as operational risks, or not avoidable like natural
disasters.

 Impact assessment: Once you have identified risks, you can estimate their impact. This
involves defining the probability that a risk will occur and its respective result or
consequence.

 Develop strategies: Finally, you can determine the necessary strategy for those risks that
are likely to happen with medium or high probability. While you may still want to
monitor low risks, they are less of a priority when it comes to taking the next step and
making a plan.

How to Determine Risk Mitigation Plans


All risks and rewards are measured differently based on your business goals. However, to
adequately address risk mitigation strategies, you’ll want to consider the following:

 Understand the user and their needs: Know your customers and their needs. When
assessing risks, consider their needs as they are the backbone of your business.
 Seek out experts and use them: Risk doesn’t have to be managed alone. There are both
software systems and experts in the field that are there to serve as resources.

 Recognise risk that occurs: The worst thing you can do as a business leader is to deny
that risk exists, because that's not realistic or helpful to anyone. When you can recognise,
define and address risk, you can better prepare your team and managers to know how to
deal with the different types of risk.

 Encourage risk-taking: Sometimes, risk-taking is the best strategy. If your business can
handle it, encourage risk-taking. To make this seem less daunting, have back-up plans
and communicate them so that everyone is on the same page.
 Recognise opportunities: It’s possible that taking a risk can open the door to new
opportunities. If you shape the conversation around risk like this, it can support a
problem-solving mentality that knows how to deal with risk.

 Encourage consideration of mitigation options: Get everyone involved and consider


feedback from your team. Everyone might have a different idea or method to mitigate
risk. You can use data and analytics to assess options and choose the best path to take.

 Not all risks require a mitigation plan: As mentioned above, sometimes it’s best to
accept risk. Understand that this is an option, and some risk doesn’t require a plan at all.

Trends with Risk Mitigation

Despite the importance of risk mitigation, consulting companies are seeing trends across
businesses when it comes to risk assessment and management. For example:

 Lacking education: In many cases, education is needed to teach companies and their
employees how to manage risk and how to use risk management software.

 Choosing ignorance: Some stand by "ignorance is bliss" and would rather ignore risks,
managing them only once a problem occurs.


 Non-existent tools: A lot of businesses don’t have the means to perform self-assessment

 Little understanding: Some companies are oblivious to their strengths and weaknesses

 It’s a need, but not being done: Most management asks their team regularly
about compliance and measurement, but nothing is being done about it. It’s important to
note that tools are there to help with this!
The Use of Analytics Tools

One of the best ways to help recognise, define and monitor risk is to employ analytics tools
within your organisation. For example, analytical automation tools like SolveXia help
organisations make business processes and data analytics more efficient. Being able to get deep
insights, set up controls and procedures, and analyse processes with real-time dashboards means
that you can alleviate the burden of the unknown, or risk.

For the aspects within your control, you can use such tools and make well-informed decisions to
mitigate certain types of risks. For example, you can use data to predict how your bottom line
will be affected if you initiate a price change. Or, you can rest assured knowing that all processes
are tracked and have audit trails, so tools like SolveXia automatically reduce compliance risk.
EX.NO:12 Ishikawa Diagram (also called a Fishbone Diagram or
Cause & Effect Diagram)

A fishbone diagram is a visualization tool for categorizing the potential causes of a


problem. This tool is used to identify a problem's root causes. Typically used for root
cause analysis, a fishbone diagram combines the practice of brainstorming with a type of mind
map template. It is also effective as a test case design technique for determining cause and effect.

A fishbone diagram is useful in product development and troubleshooting processes, typically


used to focus a conversation around a problem. After the group has brainstormed all the possible
causes for a problem, the facilitator helps the group to rate the potential causes according to their
level of importance and diagram a hierarchy. The name comes from the diagram's design, which
looks much like a skeleton of a fish. Fishbone diagrams are typically worked right to left, with
each large "bone" of the fish branching out to include smaller bones, each containing more detail.

Dr. Kaoru Ishikawa, a Japanese quality control expert, is credited with inventing the fishbone
diagram to help employees avoid solutions that merely address the symptoms of a much larger
problem. Fishbone diagrams are considered one of seven basic quality tools and are used in the
"analyze" phase of Six Sigma's DMAIC (define, measure, analyze, improve, control) approach to
problem-solving.

Fishbone diagrams are also called a cause and effect diagram, or Ishikawa diagram.

How to create a fishbone diagram


Fishbone diagrams are typically made during a team meeting and drawn on a flipchart or
whiteboard. Once a problem that needs to be studied further is identified, teams can take the
following steps to create the diagram:

1. The head of the fish is created by listing the problem in a statement format and drawing a
box around it. A horizontal arrow is then drawn across the page with an arrow pointing to the
head. This acts as the backbone of the fish.
2. Then at least four overarching "causes" are identified that might contribute to the problem.
Some generic categories to start with may include methods, skills, equipment, people,
materials, environment or measurements. These causes are then drawn to branch off from the
spine with arrows, making the first bones of the fish.

3. For each overarching cause, team members should brainstorm any supporting information
that may contribute to it. This typically involves some sort of questioning methods, such as
the 5 Why's or the 4P's (Policies, Procedures, People and Plant) to keep the conversation
focused. These contributing factors are written down to branch off their corresponding cause.

4. This process of breaking down each cause is continued until the root causes of the problem
have been identified. The team then analyzes the diagram until an outcome and next steps are
agreed upon.
Example of a fishbone diagram
The following graphic is an example of a fishbone diagram with the problem "Website went
down." Two of the overarching causes have been identified as "Unable to connect to server" and
"DNS lookup problem," with further contributing factors branching off.
Steps to develop a Fishbone Diagram
The steps below outline the major steps to take in creating a Fishbone Diagram.
1. Determine the problem statement (also referred to as the effect). This is written at the
mouth of the “fish.” Be as clear and specific as you can about the problem. Beware of
defining the problem in terms of a solution (e.g., we need more of something).
2. Identify the major categories of causes of the problem (written as branches from the
main arrow). Major categories often include: equipment or supply factors,
environmental factors, rules/policy/procedure factors, and people/staff factors.
3. Brainstorm all the possible causes of the problem. Ask “Why does this happen?” As
each idea is given, the facilitator writes the causal factor as a branch from the
appropriate category (places it on the fishbone diagram). Causes can be written in
several places if they relate to several categories.
4. Find out “Why does this happen?” about each cause. Write sub-causes branching off
the cause branches.
5. Continue to analyze "Why?" to generate deeper levels of causes and continue

organizing them under related causes or categories. This will help you to identify and
then address root causes to prevent future problems.

Creating a Fishbone Diagram


1. Click Diagram > New from the toolbar.
2. In the New Diagram window, choose Cause and Effect Diagram (a fishbone diagram is also
known as a cause and effect diagram), then click Next at the bottom of the window.

3. Name the diagram (for example: Difficulty on Locating a Drawing), then click OK to finish creating
a new diagram.

4. You will then see something like this:

5. Double click Problem on the right hand side of the diagram, then rename it. In this case, we will
rename it to Difficulty on Locating a Drawing.

6. Double click Category1 to rename it to Man, then right click Man and select Add Primary
Cause from the toolbar to create a new primary cause.

7. Double click Cause and rename it to Library workers aren't adequately informed, then create a
secondary cause by right clicking Library workers aren't adequately informed and selecting Add
Secondary Cause.

8. Rename the secondary cause Cause by double clicking it.

9. Repeat steps 6 to 8 above to create more primary and secondary causes.

To create a new category, right click any empty space inside the fish, then select Add Category from
the toolbar.

10. You will see something like this when you finish your diagram:
EX.NO:13 Traceability Matrix

In software development, a traceability matrix (TM)[1]:244 is a document, usually in the form of a


table, used to assist in determining the completeness of a relationship by correlating any
two baselined documents using a many-to-many relationship comparison.[1]:3–22 It is often used
with high-level requirements (these often consist of marketing requirements) and detailed
requirements of the product to the matching parts of high-level design, detailed design, test plan,
and test cases.

A requirements traceability matrix may be used to check if the current project requirements are
being met, and to help in the creation of a request for proposal,[2] software requirements
specification,[3] various deliverable documents, and project plan tasks.[4]

Common usage is to take the identifier for each of the items of one document and place them in
the left column. The identifiers for the other document are placed across the top row. When an
item in the left column is related to an item across the top, a mark is placed in the intersecting
cell. The number of relationships is added up for each row and each column. This value
indicates the mapping of the two items. Zero values indicate that no relationship exists; it must
then be determined whether a relationship needs to be made. Large values imply that the relationship is too
complex and should be simplified.

To ease the creation of traceability matrices, it is advisable to add the relationships to the source
documents for both backward traceability and forward traceability.[5] That way, when an item is
changed in one baselined document, it is easy to see what needs to be changed in the other.
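
As a small illustration of the row-and-column tallying described above, the following Java sketch (with made-up requirement and test-case identifiers) counts the relationships per requirement and flags any requirement with a zero value, i.e. no relationship:

import java.util.*;

// Minimal sketch: tally traceability relationships and flag uncovered requirements.
public class TraceabilityCheck {
    public static void main(String[] args) {
        // Hypothetical mapping: test case id -> requirement ids it covers
        Map<String, List<String>> testToReqs = new LinkedHashMap<>();
        testToReqs.put("TC1.1.1", Arrays.asList("REQ1-UC1.1"));
        testToReqs.put("TC1.1.2", Arrays.asList("REQ1-UC1.1", "REQ1-UC1.2"));
        testToReqs.put("TC1.2.1", Arrays.asList("REQ1-UC2.1", "REQ1-TECH1.1"));

        List<String> requirements = Arrays.asList(
            "REQ1-UC1.1", "REQ1-UC1.2", "REQ1-UC2.1", "REQ1-UC2.2", "REQ1-TECH1.1");

        // Count how many test cases trace to each requirement (the column totals).
        Map<String, Integer> coverage = new LinkedHashMap<>();
        for (String req : requirements) coverage.put(req, 0);
        for (List<String> reqs : testToReqs.values())
            for (String req : reqs)
                coverage.merge(req, 1, Integer::sum);

        // A zero value means no relationship exists for that requirement.
        coverage.forEach((req, count) ->
            System.out.println(req + " -> " + count + (count == 0 ? "  (NOT COVERED)" : "")));
    }
}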
Sample traceability matrix

Requirement identifiers (column headings): REQ1 UC 1.1, 1.2, 1.3, 2.1, 2.2, 2.3.1, 2.3.2, 2.3.3, 2.4, 3.1, 3.2
and REQ1 TECH 1.1, 1.2, 1.3. A second column, "Reqs tested", gives a count for each row.

Row "Test cases": 321, then per requirement column: 3, 2, 3, 1, 1, 1, 1, 1, 1, 2, 3, 1, 1, 1
Row "Tested implicitly": 77

Test case rows (with the number of requirements each one tests; an "x" is placed in each requirement column that the test case covers):
1.1.1 – 1
1.1.2 – 2
1.1.3 – 2
1.1.4 – 1
1.1.5 – 2
1.1.6 – 1
1.1.7 – 1
1.2.1 – 2
1.2.2 – 2
1.2.3 – 2

How to Create a Traceability Matrix in Excel

Creating a traceability matrix in Excel is going to take some time and sleuthing. If you’ve
already tracked down the details of which artifacts you want to trace, the process will go much
more smoothly.

1. Define Your Goal

Your first step when creating a traceability matrix in Excel — or creating a traceability matrix,
period — is to define your goals.

What do you want to deliver with the traceability matrix?

Here are some example goals:

I want to create a traceability matrix to prove that I’ve met compliance requirements for my
product.

I want to create a traceability matrix to make sure that my requirements have been tested and
passed before I ship.

I want to create a traceability matrix so that I know which tests and issues are impacted if a
requirement changes.

By setting your goal before you begin, you’ll make sure you’re gathering the right information
for your traceability matrix.

2. Gather Your Artifacts

You’ll need to define which artifacts should be included, based on your goal.

At its most basic, a traceability matrix should include:

 Requirements
 Tests
 Test results
 Issues

Once you’ve defined your artifacts, you’ll need to gather them. This might mean tracking down
the most recent requirements document. Each requirement listed should have a unique
requirements ID. And this ID should not change if your requirements are reordered.
You’ll also need to track down your test cases. If testing is in progress or completed, you’ll need
to find test statuses. If tests failed, you’ll also need to find any issues that may have been
detected.

3. Create a Traceability Matrix Template in Excel

Once you’ve defined and gathered your documents, you’re ready to make your traceability
matrix template.

You’ll need to add a column for each of your artifacts. For a basic traceability matrix, your
columns will be:

 Column 1: Requirements
 Column 2: Tests
 Column 3: Test Results
 Column 4: Issues

Then, you’ll be ready to start adding your artifacts in the columns you’ve created.

4. Copy and Paste Requirements From Your Requirements Document

Now it’s time to open up your requirements document and start copying and pasting your
requirement IDs into your first column of the traceability matrix template.

This may take a while, depending on how many requirements you have.

5. Copy and Paste Test Cases From Your Test Case Document

Next, you’ll enter your test case IDs into the second column. Test cases should be in the same
row as the requirements they are tied to.

This may take a while — especially if your test cases aren’t stored in a central spot.

6. Copy and Paste Test Results and Issues (If You Have Them)

A test implies that a requirement was implemented. So, you’ll also need to have the results of
your test runs in your traceability matrix, as well as any issues that may have come up.

You might have test run results tracked in a spreadsheet. And you might have issues in Jira. Both
of those will need to be copied over to the traceability matrix — and put in the same row as their
related test cases and requirements.

You can indicate whether a test run passed or failed by changing the background color of the cell
(e.g., green for passed and red for failed).
7. Update the Traceability Matrix — Constantly

It’s one effort to create a traceability matrix. But it’s a full-time job to keep it updated and do it
right.

If a requirement changes, you’ll need to update the traceability matrix. Or there might be
requirements you decided not to fulfill — and you’ll need a way to indicate that, too. If someone
adds a test case, you’ll need to update the matrix. When a test run passes or fails, you’ll need to
update it again. If an issue found in testing is resolved, you’ll need to update it yet again.

Remember to keep a close eye on your requirement IDs. Those should stay the same, even if you
reorder your requirements list or reuse a requirement.

Example Traceability Matrix in Excel


Build Your Traceability Skills

You’ve learned how to create a traceability matrix in Excel. Now, learn how to get the most out
of traceability — and use it for compliance and risk management.

3 Problems With a Traceability Matrix in Excel

Using a traceability matrix in Excel can work. But it’s going to take significant manual effort to
get what you need out of it.

That’s because…
Creating a Traceability Matrix Is Busy Work.

Creating and maintaining a traceability matrix in Excel can become a full-time job.

It’s unlikely that you’ll have a simple list of requirements. You’ll probably have marketing
requirements that are decomposed into product requirements, which are decomposed into a user
interface (UI) specification, which is decomposed into product architecture.

All of these requirements are important to track. But trying to track them all in Excel will be
difficult. You’ll need a column for each type of requirement.

Keeping a Traceability Matrix Up-to-Date Is Expensive.

Your artifacts are bound to change throughout development. Customer feedback may influence
the priority of requirements. Development obstacles might put requirements on hold. And every
time that happens, someone will have to go back and update Excel.

Some people wait to create a traceability matrix until the end, hoping to avoid extra busy work.
But if you do that, you’ll still need to track everything down and copy/paste it into Excel.

And you won’t be able to use a traceability matrix created at the end of a project for anything
other than proof. That means you’ll miss out data that could expedite your release process.

So, the costs of maintaining that Excel traceability matrix stack up.

Making a Mistake Is Common.

When you’re manually inputting your requirements, tests, and issues, it’s easy to get something
wrong.

If your traceability matrix isn’t accurate, you won’t be able to use it for anything. That means
you won’t be able to prove you’ve met compliance requirements. And you won’t be able to use it
to make data-driven decisions. In fact, you might end up making the wrong decision based on an
inaccuracy — such as a test run that actually failed but was marked as passed.

You can avoid these problems by using a traceability matrix tool, such as Helix ALM.

How to Use Traceability Matrix Tools

Creating a traceability matrix can be easier. And it starts with selecting the right traceability
matrix tools.

These tools take the time-consuming work out of creating a traceability matrix. That’s because
they can create relationships between work items. This accelerates — and even automates — the
process of creating a matrix. You won’t have to copy and paste requirement after requirement
and test case after test case.
Helix ALM is one tool that makes it easier to create a traceability matrix. In Helix ALM, all of
your artifacts are housed in one spot, so you don’t need to go searching.

1. Define Your Goals

There’s one step that’s the same with Helix ALM and Excel traceability matrices. And that’s
defining your goals before you begin.

You need to know what you want to deliver with your traceability matrix, whether it’s:

 Proof of compliance.
 A guarantee that your requirements have been tested.
 Understanding of the impact of change.

2. Establish Your Business Process and Artifacts

You can use Helix ALM to automate the process of creating a traceability matrix. That’s because
you can set up your business process once. And then you can use it to instantly create a
traceability matrix report every time you need it.

To set up your business process, you’ll need to define:

 Your artifacts (item types).


 The relationships (links) between artifacts.

You’ll configure this within Helix ALM. And then Helix ALM will maintain all of the data
you’ll need to mine.

3. Get Your Traceability Matrix Report

You’ll already be working with requirements, test cases, and issues in Helix ALM. So, as you
create them, you’ll be automatically linking them to each other. That means when it comes time
to create a traceability matrix, your work is practically done.

All you need to do is run the traceability matrix report. The process you set up and configured in
the previous step will take care of the manual steps and data gathering for you.

You’ll get a report showing the relationship between requirements, tests, and issues. You’ll be
able to customize it by adding or removing columns to meet your needs. But the hard work is
done for you.

And, if you have issues in Jira (and Jira is integrated with Helix ALM), the matrix report will
show those issues in the issues column.

Example for functional requirements:


EX.NO:14 Estimation Techniques - Function Points
A Function Point (FP) is a unit of measurement to express the amount of business
functionality an information system (as a product) provides to a user. FPs measure software
size. They are widely accepted as an industry standard for functional sizing.
For sizing software based on FP, several recognized standards and/or public specifications have
come into existence. As of 2013, these are −

ISO Standards
 COSMIC − ISO/IEC 19761:2011 Software engineering. A functional size measurement
method.
 FiSMA − ISO/IEC 29881:2008 Information technology - Software and systems
engineering - FiSMA 1.1 functional size measurement method.
 IFPUG − ISO/IEC 20926:2009 Software and systems engineering - Software
measurement - IFPUG functional size measurement method.
 Mark-II − ISO/IEC 20968:2002 Software engineering - Mk II Function Point Analysis -
Counting Practices Manual.
 NESMA − ISO/IEC 24570:2005 Software engineering - NESMA function size
measurement method version 2.1 - Definitions and counting guidelines for the
application of Function Point Analysis.

Object Management Group Specification for Automated Function Point

Object Management Group (OMG), an open membership and not-for-profit computer industry
standards consortium, has adopted the Automated Function Point (AFP) specification led by the
Consortium for IT Software Quality. It provides a standard for automating FP counting
according to the guidelines of the International Function Point User Group (IFPUG).
Function Point Analysis (FPA) technique quantifies the functions contained within software
in terms that are meaningful to the software users. FPs consider the number of functions being
developed based on the requirements specification.
Function Points (FP) Counting is governed by a standard set of rules, processes and
guidelines as defined by the International Function Point Users Group (IFPUG). These are
published in Counting Practices Manual (CPM).

History of Function Point Analysis

The concept of Function Points was introduced by Alan Albrecht of IBM in 1979. In 1984,
Albrecht refined the method. The first Function Point Guidelines were published in 1984. The
International Function Point Users Group (IFPUG) is a US-based worldwide organization of
Function Point Analysis metric software users. It is a non-profit, member-governed
organization founded in 1986. IFPUG owns


Function Point Analysis (FPA) as defined in ISO standard 20926:2009, which specifies the
definitions, rules and steps for applying the IFPUG's functional size measurement (FSM)
method. IFPUG maintains the Function Point Counting Practices Manual (CPM). CPM 2.0 was
released in 1987, and since then there have been several iterations. CPM Release 4.3 was in
2010.
The CPM Release 4.3.1 with incorporated ISO editorial revisions was in 2010. The ISO
Standard (IFPUG FSM) - Functional Size Measurement that is a part of CPM 4.3.1 is a
technique for measuring software in terms of the functionality it delivers. The CPM is an
internationally approved standard under ISO/IEC 14143-1 Information Technology – Software
Measurement.

Elementary Process (EP)

Elementary Process is the smallest unit of functional user requirement that −

 Is meaningful to the user.


 Constitutes a complete transaction.
 Is self-contained and leaves the business of the application being counted in a consistent
state.

Functions

There are two types of functions −

 Data Functions
 Transaction Functions
Data Functions
There are two types of data functions −

 Internal Logical Files


 External Interface Files
Data Functions are made up of internal and external resources that affect the system.
Internal Logical Files
Internal Logical File (ILF) is a user identifiable group of logically related data or control
information that resides entirely within the application boundary. The primary intent of an ILF
is to hold data maintained through one or more elementary processes of the application being
counted. An ILF has the inherent meaning that it is internally maintained, it has some logical
structure and it is stored in a file. (Refer Figure 1)
External Interface Files
External Interface File (EIF) is a user identifiable group of logically related data or control
information that is used by the application for reference purposes only. The data resides entirely
outside the application boundary and is maintained in an ILF by another application. An EIF
has the inherent meaning that it is externally maintained, an interface has to be developed to get
the data from the file. (Refer Figure 1)
Transaction Functions
There are three types of transaction functions.

 External Inputs
 External Outputs
 External Inquiries
Transaction functions are made up of the processes that are exchanged between the user, the
external applications and the application being measured.
External Inputs
External Input (EI) is a transaction function in which Data goes “into” the application from
outside the boundary to inside. This data is coming external to the application.

 Data may come from a data input screen or another application.


 An EI is how an application gets information.
 Data can be either control information or business information.
 Data may be used to maintain one or more Internal Logical Files.
 If the data is control information, it does not have to update an Internal Logical File. (Refer
Figure 1)
External Outputs
External Output (EO) is a transaction function in which data comes “out” of the system.
Additionally, an EO may update an ILF. The data creates reports or output files sent to other
applications. (Refer Figure 1)
External Inquiries
External Inquiry (EQ) is a transaction function with both input and output components that
result in data retrieval. (Refer Figure 1)

Definition of RETs, DETs, FTRs:

Record Element Type


A Record Element Type (RET) is the largest user identifiable subgroup of elements within an
ILF or an EIF. It is best to look at logical groupings of data to help identify them.
Data Element Type
Data Element Type (DET) is the data subgroup within an FTR. They are unique and user
identifiable.
File Type Referenced
File Type Referenced (FTR) is the largest user identifiable subgroup within the EI, EO, or EQ
that is referenced to.
The transaction functions EI, EO, EQ are measured by counting FTRs and DETs that they
contain following counting rules. Likewise, data functions ILF and EIF are measured by
counting DETs and RETs that they contain following counting rules. The measures of
transaction functions and data functions are used in FP counting which results in the functional
size or function points.
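
As a rough, hedged illustration of how these counts feed into an unadjusted function point total, the Java sketch below applies the commonly published IFPUG complexity weights (EI 3/4/6, EO 4/5/7, EQ 3/4/6, ILF 7/10/15, EIF 5/7/10 for simple/average/complex); the weights come from the published IFPUG guidelines rather than the text above, and the component counts are assumed values:

// Rough sketch: unadjusted function point count from assumed component counts.
public class FunctionPointSketch {

    // Applies simple/average/complex weights to the corresponding component counts.
    static int weighted(int simple, int average, int complex, int ws, int wa, int wc) {
        return simple * ws + average * wa + complex * wc;
    }

    public static void main(String[] args) {
        // Assumed counts (simple, average, complex) for an example application
        int ei  = weighted(3, 2, 1, 3, 4, 6);   // External Inputs
        int eo  = weighted(2, 1, 0, 4, 5, 7);   // External Outputs
        int eq  = weighted(1, 1, 0, 3, 4, 6);   // External Inquiries
        int ilf = weighted(1, 1, 0, 7, 10, 15); // Internal Logical Files
        int eif = weighted(1, 0, 0, 5, 7, 10);  // External Interface Files

        int unadjustedFP = ei + eo + eq + ilf + eif;
        System.out.println("Unadjusted Function Points = " + unadjustedFP);
    }
}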
EX.NO:15 Estimation Techniques - Use-Case Points

A Use-Case is a series of related interactions between a user and a system that enables the user
to achieve a goal.
Use-Cases are a way to capture functional requirements of a system. The user of the system is
referred to as an ‘Actor’. Use-Cases are fundamentally in text form.
Use-Case Points – Definition
Use-Case Points (UCP) is a software estimation technique used to measure the software size
with use cases. The concept of UCP is similar to FPs.
The number of UCPs in a project is based on the following −

 The number and complexity of the use cases in the system.


 The number and complexity of the actors on the system.
 Various non-functional requirements (such as portability, performance, maintainability) that are not written as use cases.
 The environment in which the project will be developed (such as the language, the team's motivation, etc.)
Estimation with UCPs requires all use cases to be written with a goal and at approximately the
same level, giving the same amount of detail. Hence, before estimation, the project team should
ensure they have written their use cases with defined goals and at a detailed level. A use case is
normally completed within a single session, and after the goal is achieved, the user may go on to
some other activity.

History of Use-Case Points

The Use-Case Point estimation method was introduced by Gustav Karner in 1993. The work
was later licensed by Rational Software that merged into IBM.

Use-Case Points Counting Process

The Use-Case Points counting process has the following steps −

 Calculate unadjusted UCPs


 Adjust for technical complexity
 Adjust for environmental complexity
 Calculate adjusted UCPs

Step 1: Calculate Unadjusted Use-Case Points.


You calculate Unadjusted Use-Case Points first, by the following steps −

 Determine Unadjusted Use-Case Weight


 Determine Unadjusted Actor Weight
 Calculate Unadjusted Use-Case Points
Step 1.1 − Determine Unadjusted Use-Case Weight.
Step 1.1.1 − Find the number of transactions in each Use-Case.
If the Use-Cases are written with User Goal Levels, a transaction is equivalent to a step in the
Use-Case. Find the number of transactions by counting the steps in the Use-Case.
Step 1.1.2 − Classify each Use-Case as Simple, Average or Complex based on the number of
transactions in the Use-Case. Also, assign Use-Case Weight as shown in the following table −

Use-Case Complexity Number of Transactions Use-Case Weight

Simple ≤3 5

Average 4 to 7 10

Complex >7 15

Step 1.1.3 − Repeat for each Use-Case and get all the Use-Case Weights. Unadjusted Use-Case
Weight (UUCW) is the sum of all the Use-Case Weights.
Step 1.1.4 − Find Unadjusted Use-Case Weight (UUCW) using the following table −

Use-Case Complexity   Use-Case Weight   Number of Use-Cases   Product

Simple   5   NSUC   5 × NSUC

Average   10   NAUC   10 × NAUC

Complex   15   NCUC   15 × NCUC

Unadjusted Use-Case Weight (UUCW)   5 × NSUC + 10 × NAUC + 15 × NCUC

Where,
NSUC is the no. of Simple Use-Cases.
NAUC is the no. of Average Use-Cases.
NCUC is the no. of Complex Use-Cases.
Step 1.2 − Determine Unadjusted Actor Weight.
An Actor in a Use-Case might be a person, another program, etc. Some actors, such as a system
with defined API, have very simple needs and increase the complexity of a Use-Case only
slightly.
Some actors, such as a system interacting through a protocol have more needs and increase the
complexity of a Use-Case to a certain extent.
Other Actors, such as a user interacting through GUI have a significant impact on the
complexity of a Use-Case. Based on these differences, you can classify actors as Simple,
Average and Complex.
Step 1.2.1 − Classify Actors as Simple, Average and Complex and assign Actor Weights as
shown in the following table −

Actor Complexity Example Actor Weight

Simple A System with defined API 1

Average A System interacting through a Protocol 2

Complex A User interacting through GUI 3

Step 1.2.2 − Repeat for each Actor and get all the Actor Weights. Unadjusted Actor Weight
(UAW) is the sum of all the Actor Weights.
Step 1.2.3 − Find Unadjusted Actor Weight (UAW) using the following table −

Actor Complexity Actor Weight Number of Actors Product

Simple 1 NSA 1 × NSA

Average 2 NAA 2 × NAA

Complex 3 NCA 3 × NCA


Unadjusted Actor Weight (UAW) 1 × NSA + 2 × NAA + 3 × NCA

Where,
NSA is the no. of Simple Actors.
NAA is the no. of Average Actors.
NCA is the no. of Complex Actors.
Step 1.3 − Calculate Unadjusted Use-Case Points.
The Unadjusted Use-Case Weight (UUCW) and the Unadjusted Actor Weight (UAW) together
give the unadjusted size of the system, referred to as Unadjusted Use-Case Points.
Unadjusted Use-Case Points (UUCP) = UUCW + UAW
The next steps are to adjust the Unadjusted Use-Case Points (UUCP) for Technical Complexity
and Environmental Complexity.
Step 2: Adjust For Technical Complexity
Step 2.1 − Consider the 13 Factors that contribute to the impact of the Technical Complexity of
a project on Use-Case Points and their corresponding Weights as given in the following table −

Factor Description Weight

T1 Distributed System 2.0

T2 Response time or throughput performance objectives 1.0

T3 End user efficiency 1.0

T4 Complex internal processing 1.0

T5 Code must be reusable 1.0

T6 Easy to install .5

T7 Easy to use .5
T8 Portable 2.0

T9 Easy to change 1.0

T10 Concurrent 1.0

T11 Includes special security objectives 1.0

T12 Provides direct access for third parties 1.0

T13 Special user training facilities are required 1.0

Many of these factors represent the project’s nonfunctional requirements.


Step 2.2 − For each of the 13 Factors, assess the project and rate from 0 (irrelevant) to 5 (very
important).
Step 2.3 − Calculate the Impact of the Factor from Impact Weight of the Factor and the Rated
Value for the project as
Impact of the Factor = Impact Weight × Rated Value
Step 2.4 − Calculate the sum of the Impacts of all the Factors. This gives the Total Technical
Factor (TFactor) as given in the table below −

Factor   Description   Weight (W)   Rated Value (0 to 5) (RV)   Impact (I = W × RV)

T1   Distributed System   2.0

T2   Response time or throughput performance objectives   1.0

T3 End user efficiency 1.0

T4 Complex internal processing 1.0


T5 Code must be reusable 1.0

T6 Easy to install .5

T7 Easy to use .5

T8 Portable 2.0

T9 Easy to change 1.0

T10 Concurrent 1.0

T11 Includes special security objectives 1.0

T12 Provides direct access for third parties 1.0

T13 Special user training facilities are required 1.0

Total Technical Factor (TFactor)

Step 2.5 − Calculate the Technical Complexity Factor (TCF) as −


TCF = 0.6 + (0.01 × TFactor)
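For example, if the ratings for a project happen to give TFactor = 30 (an assumed value), then TCF = 0.6 + (0.01 × 30) = 0.9.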
Step 3: Adjust For Environmental Complexity
Step 3.1 − Consider the 8 Environmental Factors that could affect the project execution and
their corresponding Weights as given in the following table −

Factor Description Weight

F1 Familiar with the project model that is used 1.5

F2 Application experience .5

F3 Object-oriented experience 1.0

F4 Lead analyst capability .5

F5 Motivation 1.0

F6 Stable requirements 2.0

F7 Part-time staff -1.0

F8 Difficult programming language -1.0

Step 3.2 − For each of the 8 Factors, assess the project and rate from 0 (irrelevant) to 5 (very
important).
Step 3.3 − Calculate the Impact of the Factor from Impact Weight of the Factor and the Rated
Value for the project as
Impact of the Factor = Impact Weight × Rated Value
Step 3.4 − Calculate the sum of Impact of all the Factors. This gives the Total Environment
Factor (EFactor) as given in the following table −

Factor   Description   Weight (W)   Rated Value (0 to 5) (RV)   Impact (I = W × RV)

F1   Familiar with the project model that is used   1.5

F2 Application experience .5

F3 Object-oriented experience 1.0

F4 Lead analyst capability .5

F5 Motivation 1.0

F6 Stable requirements 2.0

F7 Part-time staff -1.0

F8 Difficult programming language -1.0

Total Environment Factor (EFactor)

Step 3.5 − Calculate the Environmental Factor (EF) as −


EF = 1.4 + (-0.03 × EFactor)
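For example, if the ratings for a project happen to give EFactor = 16.5 (an assumed value), then EF = 1.4 + (-0.03 × 16.5) = 1.4 - 0.495 = 0.905.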
Step 4: Calculate Adjusted Use-Case Points (UCP)
Calculate Adjusted Use-Case Points (UCP) as −
UCP = UUCP × TCF × EF
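
The whole calculation can be tied together with a minimal Java sketch. All counts and factor totals below are assumed values, used only to illustrate Steps 1 to 4:

// Minimal sketch of the Use-Case Point calculation (Steps 1 to 4), with assumed inputs.
public class UseCasePointSketch {
    public static void main(String[] args) {
        // Step 1.1: Unadjusted Use-Case Weight (UUCW) -- assumed use-case counts
        int nsuc = 4, nauc = 3, ncuc = 1;
        int uucw = 5 * nsuc + 10 * nauc + 15 * ncuc;   // 5, 10, 15 are the use-case weights

        // Step 1.2: Unadjusted Actor Weight (UAW) -- assumed actor counts
        int nsa = 1, naa = 1, nca = 2;
        int uaw = 1 * nsa + 2 * naa + 3 * nca;         // 1, 2, 3 are the actor weights

        // Step 1.3: Unadjusted Use-Case Points
        int uucp = uucw + uaw;

        // Steps 2 and 3: assumed totals from the technical and environmental factor tables
        double tFactor = 30.0;
        double eFactor = 16.5;
        double tcf = 0.6 + (0.01 * tFactor);
        double ef  = 1.4 + (-0.03 * eFactor);

        // Step 4: Adjusted Use-Case Points
        double ucp = uucp * tcf * ef;
        System.out.printf("UUCW=%d UAW=%d UUCP=%d TCF=%.2f EF=%.3f UCP=%.1f%n",
                          uucw, uaw, uucp, tcf, ef, ucp);
    }
}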

Advantages and Disadvantages of Use-Case Points

Advantages of Use-Case Points


 UCPs are based on use cases and can be measured very early in the project life cycle.
 UCP (size estimate) will be independent of the size, skill, and experience of the team that
implements the project.
 UCP based estimates are found to be close to actuals when estimation is performed by
experienced people.
 UCP is easy to use and does not call for additional analysis.
 Use cases are widely used as a method of choice to describe requirements. In such
cases, UCP is the most suitable estimation technique.
Disadvantages of Use-Case Points
 UCP can be used only when requirements are written in the form of use cases.
 Dependent on goal-oriented, well-written use cases. If the use cases are not well or
uniformly structured, the resulting UCP may not be accurate.
 Technical and environmental factors have a high impact on UCP. Care needs to be taken
while assigning values to the technical and environmental factors.
 UCP is useful for initial estimate of overall project size but they are much less useful in
driving the iteration-to-iteration work of a team.
EX.NO:16 PERT Estimation Technique

PERT (Program Evaluation Review Technique) is an estimation technique
which was first developed and applied by the United States defence establishment for their ballistic
missile development program. It was one of their most ambitious programs. Completing it in time,
ahead of other nations, was critical for them. Such a missile development program was filled with a
huge amount of uncertainty, as it required a large number of supplier agencies working on new
technology development.

This method of estimation helped them build all the uncertainties into their
estimates and complete the program ahead of the expected schedule.

PERT uses a three-point estimation approach for a task. Any task filled with uncertainties
can have a wide range of estimates within which the task will actually get completed. Uncertainties
include both favourable conditions (opportunities) as well as unfavourable conditions (threats).

PERT includes statistical analysis.

The three point estimates are as below:

Formula: Expected Time (E) = (P + 4M + O) / 6

 Optimistic Time (O): the minimum possible time required to accomplish a task,
assuming everything proceeds better than is normally expected.
 Pessimistic Time (P): the maximum possible time required to accomplish a task,
assuming everything goes wrong (excluding major catastrophes).
 Most likely Time (M): the best estimate of the time required to accomplish a task,
assuming everything proceeds as normal.

Example of the three-time estimates
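
As an illustration (with assumed values): suppose a task has Optimistic Time O = 4 days, Most likely Time M = 6 days and Pessimistic Time P = 14 days. The PERT estimate is then E = (P + 4M + O) / 6 = (14 + 24 + 4) / 6 = 42 / 6 = 7 days, slightly above the most likely value because the pessimistic estimate pulls it up.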


Conclusion

The PERT estimation technique is a practical approach for estimating when the tasks on hand
are filled with uncertainties and may take different durations depending upon certain conditions.
The actual estimate is dependent on these variables.
Ex.NO:17 Critical Path Method (CPM)

Critical Path: The longest path of scheduled activities that must be met to execute a
project.
Critical Path Method (CPM) Steps

The seven (7) steps in the CPM are: [1]

 Step 1: List of all activities required to complete the project (see Work Breakdown
Structure (WBS)),
 Step 2: Determine the sequence of activities
 Step 3: Draw a network diagram
 Step 4: Determine the time that each activity will take to completion
 Step 5: Determine the dependencies between the activities
 Step 6: Determine the critical path
 Step 7: Update the network diagram as the project progresses

Example of a Critical Path Nodal Diagram

The CPM calculates the longest path of planned activities to the end of the project and the
earliest and latest that each activity can start and finish without making the project longer. This
process determines which activities are “critical” (i.e., on the longest path) and which have “total
float” (i.e., can be delayed without making the project longer).
EX.NO:18 Call graph generation of C++ source code

Usage example

For example, suppose we have the files A.hpp, B.hpp, C.hpp and ABCTest.cpp,
and we want to see their Call Graph.

The source code is as follows:

Then compile as follows (instrument.c can be downloaded from the GitHub address above;
it is used to inject address information):
g++ -g -finstrument-functions -O0 instrument.c ABCTest.cpp -o test
Then run the program to gettrace.txt
inputshellcommand./test
Finally
inputshellcommandpython CallGraph.py trace.txt test
pop upCall Graph
The meaning of the marks on the graph:

 The green line indicates the first call after the program starts
 The red line indicates the last call to enter the current context
 Each line represents a call; the number after the # symbol is its serial number, and
"at XXX" means the call occurred at line XXX of the file whose path is shown above
the box
 Inside each circle, "XXX:YYY" means that YYY is the name of the called function and
XXX is the line of the file at which that function is defined

Getting the C/C++ call relationships:

With the -finstrument-functions compilation option, the compiler injects a call to
__cyg_profile_func_enter at the start of every function and a call to
__cyg_profile_func_exit at its end. The implementation of these two functions is left to
the user.

In this example only __cyg_profile_func_enter is used; it is defined in instrument.c and
its prototype is as follows:
void __cyg_profile_func_enter (void *this_fn, void *call_site);
where this_fn is the address of the called function and call_site is the address of the caller.

Obviously, if we print out the addresses of all callers and callees, we can obtain a
complete runtime call graph.

/* instrument.c - function prototypes with attributes */

#include <stdio.h>
#include <stdlib.h>

void main_constructor( void )
    __attribute__ ((no_instrument_function, constructor));

void main_destructor( void )
    __attribute__ ((no_instrument_function, destructor));

void __cyg_profile_func_enter( void *, void * )
    __attribute__ ((no_instrument_function));

void __cyg_profile_func_exit( void *, void * )
    __attribute__ ((no_instrument_function));

static FILE *fp;

/* Runs before main: open the trace file */
void main_constructor( void )
{
    fp = fopen( "trace.txt", "w" );
    if (fp == NULL) exit(-1);
}

/* Runs after main: close the trace file */
void main_destructor( void )
{
    fclose( fp );
}

/* Called on entry to every instrumented function */
void __cyg_profile_func_enter( void *this_fn, void *call_site )
{
    /* fprintf(fp, "E %p %p\n", (int *)this_fn, (int *)call_site); */
    fprintf(fp, "%p %p\n", (int *)this_fn, (int *)call_site);
}

/* Left empty here; only function entries are traced, but a definition
   is still needed so the instrumented program links. */
void __cyg_profile_func_exit( void *this_fn, void *call_site )
{
    (void)this_fn;
    (void)call_site;
}

main_constructor runs before main executes, and main_destructor runs after main finishes.
Together, the functions above write the addresses of every caller and callee into trace.txt.

There is one remaining problem, however: what trace.txt stores are raw addresses. How do we
translate an address back into a symbol in the source code? The answer is to use addr2line.

Taking the ABCTest.cpp project above as an example, suppose we have the address 0x400aa4.
Enter the following command:
addr2line 0x400aa4 -e a.out -f
The result is:

_ZN1A4AOneEv
/home/cheukyin/PersonalProjects/CodeSnippet/python/SRCGraphviz/c++/A.hpp:11

The first line is the name of the function containing the address, and the second line is the
source location where that function is defined.

However, you may well ask: what exactly is _ZN1A4AOneEv?

To support overloading, namespaces and other features, C++ performs name mangling, which
makes the raw function name unreadable. We need c++filt for further analysis. Enter the
shell command:
addr2line 0x400aa4 -e a.out -f | c++filt

The result is much clearer:

A::AOne()
/home/cheukyin/PersonalProjects/CodeSnippet/python/SRCGraphviz/c++/A.hpp:11

Call graph rendering

After the above steps, all the (caller, callee) pairs have been resolved, which is equivalent to
obtaining all the nodes and edges of the call graph, ready to be rendered.

EX.NO:19
Test the percentage of code covered by unit tests using any code coverage
tool

Code coverage

It measures the number of lines of source code executed when a given test suite runs against
a program. Tools that measure code coverage normally express this metric as a percentage.

Formula to find the percentage of code coverage:

Code coverage = (number of lines of code exercised / total number of lines of code) x 100%

So, if you have 90% code coverage, it means that 10% of the code is not covered by tests. Code
coverage tools use one or more criteria to determine how your code was exercised (or not)
during the execution of your test suite. They hook into your source code and your test suite
and return statistics on how much of your code is actually covered by your tests.

Example:

Find the code coverage given:

number of lines of code exercised = 12
total number of lines of code = 20

Solution:

Code coverage = (12 / 20) x 100%
              = 0.6 x 100
Code coverage = 60%
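As a minimal hedged sketch, the same formula can be expressed as a small Java helper (the class and method names are invented for illustration):

public class CoverageCalc {
    // Code coverage = (lines exercised / total lines) x 100%
    static double coverage(int linesExercised, int totalLines) {
        return (double) linesExercised / totalLines * 100.0;
    }

    public static void main(String[] args) {
        // Values from the example above: 12 of 20 lines exercised
        System.out.println(coverage(12, 20) + "%"); // prints 60.0%
    }
}

In practice, a coverage tool (for example JaCoCo) computes these counts automatically while the test suite runs.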


EX.NO:20 JUnit for Unit Tests

JUnit is a unit testing framework designed for the Java programming language. Unit tests are
the smallest elements in the test automation process, and with their help we can check the
business logic of any class, so JUnit plays an important role in test-driven development. It is
one of a family of unit testing frameworks, collectively known as xUnit, that originated with
SUnit.

Unit Testing With JUnit

1. Production Code

public class Student {


public String displayStudentName(String firstName, String lastName) {
return firstName + lastName;
}
}
2. Testing Code

import org.junit.Test;
import static org.junit.Assert.*;
public class StudentTest {
@Test
public void testDisplayStudentName() {
Student student = new Student();
String studentName = student.displayStudentName("Anshuman", "Nain");
assertEquals("AnshumanNain", studentName);
}
}
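Compilation and Execution (a hedged example: the JUnit 4 and Hamcrest jar file names below are assumptions and depend on the versions installed, and on Windows the classpath separator is ';' instead of ':'):

javac -cp .:junit-4.13.2.jar:hamcrest-core-1.3.jar Student.java StudentTest.java
java -cp .:junit-4.13.2.jar:hamcrest-core-1.3.jar org.junit.runner.JUnitCore StudentTest

If the assertion passes, the runner reports OK (1 test).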
EX.NO:21 Software Engineering-Transaction Mapping

In many software applications, a single data item triggers one or a number of information flows that
effect a function implied by the triggering data item. The data item is called a transaction, and its
flow has its own characteristics. In this section we consider design steps used to treat transaction
flow.

An Example

Transaction mapping will be illustrated by considering the user interaction subsystem of the
SafeHome software.

As shown in the figure, a user command flows into the system and results in additional information
flow along one of three action paths. A single data item, command type, causes the data flow to fan
outward from a hub. Therefore, the overall data flow characteristic is transaction oriented.
It should be noted that the information flow along two of the three action paths accommodates
additional incoming flow (e.g., system parameters and data are input on the "configure" action path).
Each action path flows into a single transform, display messages and status.

Design Steps

The design steps for transaction mapping are similar and in some cases identical to steps for
transform mapping . A major difference lies in the mapping of DFD to software structure.

Step 1. Review the fundamental system model.


Step 2. Review and refine data flow diagrams for the software.
Step 3. Determine whether the DFD has transform or transaction flow characteristics. Steps 1,
2, and 3 are identical to corresponding steps in transform mapping. The DFD shown in above figure
has a classic transaction flow characteristic. However, flow along two of the action paths emanating
from the invoke command processing bubble appears to have transform flow characteristics.
Therefore, flow boundaries must be established for both flow types.

Step 4. Identify the transaction center and the flow characteristics along each of the action
paths. The location of the transaction center can be immediately discerned from the DFD. The
transaction center lies at the origin of a number of actions paths that flow radially from it. For the
flow shown in figure , the invoke command processing bubble is the transaction center.

The incoming path (i.e., the flow path along which a transaction is received) and all action paths
must also be isolated. Boundaries that define a reception path and action paths are also shown in the
figure. Each action path must be evaluated for its individual flow characteristic. For example, the
"password" path has transform characteristics. Incoming, transform, and outgoing flow are indicated
with boundaries.

Step 5. Map the DFD in a program structure amenable to transaction processing. Transaction
flow is mapped into an architecture that contains an incoming branch and a dispatch branch. The
structure of the incoming branch is developed in much the same way as transform mapping. Starting
at the transaction center, bubbles along the incoming path are mapped into modules. The structure of
the dispatch branch contains a dispatcher module that controls all subordinate action modules. Each
action flow path of the DFD is mapped to a structure that corresponds to its specific flow
characteristics. This process is illustrated schematically in the figure below.

Considering the user interaction subsystem data flow, first-level factoring for step 5 is shown in
the figure below.

The bubbles read user command and activate/deactivate system map directly into the architecture
without the need for intermediate control modules. The transaction center, invoke command
processing, maps directly into a dispatcher module of the same name. Controllers for system
configuration and password processing are created as illustrated in figure.
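As a hedged sketch of what the dispatch branch might look like in code (the class and command names below are hypothetical and are not taken from the SafeHome design), a dispatcher module routes each command type to a subordinate action module:

interface ActionModule {
    void execute(String commandData);
}

class SystemConfigurationController implements ActionModule {
    public void execute(String commandData) { /* "configure" action path */ }
}

class PasswordProcessingController implements ActionModule {
    public void execute(String commandData) { /* "password" action path */ }
}

class ActivateDeactivateSystem implements ActionModule {
    public void execute(String commandData) { /* activate/deactivate path */ }
}

// Plays the role of the "invoke command processing" transaction center:
// it dispatches each incoming command to a subordinate action module.
class InvokeCommandProcessing {
    private final java.util.Map<String, ActionModule> actions = java.util.Map.of(
        "configure", new SystemConfigurationController(),
        "password", new PasswordProcessingController(),
        "activate", new ActivateDeactivateSystem()
    );

    void dispatch(String commandType, String commandData) {
        ActionModule action = actions.get(commandType);
        if (action != null) {
            action.execute(commandData);
        }
    }
}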

Step 6. Factor and refine the transaction structure and the structure of each action path. Each
action path of the data flow diagram has its own information flow characteristics. We have already
noted that transform or transaction flow may be encountered. The action path-related "substructure"
is developed using the design steps discussed in this and the preceding section.

As an example, consider the password processing information flow shown (inside the shaded area) in
the figure. The flow exhibits classic transform characteristics. A password is input (incoming flow) and
transmitted to a transform center, where it is compared against stored passwords. An alarm and
warning message (outgoing flow) are produced if a match is not obtained. The "configure" path is
drawn similarly using transform mapping. The resultant software architecture is shown in the
figure below.

Step 7. Refine the first-iteration architecture using design heuristics for improved software
quality. This step for transaction mapping is identical to the corresponding step for transform
mapping. In both design approaches, criteria such as module independence, practicality (efficacy of
implementation and test), and maintainability must be carefully considered as structural
modifications are proposed.
EX.NO:22

Design activities along with necessary artifacts using Design Document.


Artifacts and artifact sets in the tree browser

Artifacts may take various shapes or forms:

 A model, such as the Use-Case Model or the Design Model, which contains other artifacts.
 A model element, i.e. an element within a model, such as a Design Class, a Use Case or
a Design Subsystem
 A document, such as Business Case or Software Architecture Document
 Source code and executables (kinds of Components)
EX.NO:23

Reverse engineer any object-oriented code to an appropriate class or object diagram.

Reverse Engineer any object-oriented system:
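As a hedged illustration (the classes below are hypothetical and not part of any prescribed lab system), consider a small piece of object-oriented code and what a reverse-engineering tool would typically recover from it:

// Hypothetical classes used only to illustrate what reverse engineering recovers.
class Account {
    private String id;        // becomes the attribute "id : String" in the class diagram
    private double balance;   // becomes the attribute "balance : double"

    public void deposit(double amount) { balance += amount; } // becomes an operation
    public double getBalance() { return balance; }            // becomes an operation
}

class Customer {
    private String name;
    // becomes a one-to-many association from Customer to Account
    private java.util.List<Account> accounts = new java.util.ArrayList<>();

    public void addAccount(Account a) { accounts.add(a); }
}

A reverse-engineering facility in a UML modelling tool would map each class to a class box, each field to an attribute, each method to an operation, and the accounts field to a 1..* association from Customer to Account, producing the corresponding class diagram.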
