SE Activity All

ACTIVITY: 01

Study the traffic signal and the importance of rules and process.

Traffic signal:

A traffic signal, or stoplight as it is also known, controls vehicle traffic passing through the intersection of two or more roadways by giving a visual indication to drivers of when to proceed, when to slow, and when to stop. In some cases, traffic signals also indicate to drivers when they may make a turn. These signals may be operated manually or by a simple timer that allows traffic to flow on one roadway for a fixed period of time, and then on the other roadway for another fixed period before repeating the cycle. Other signals may be operated by sophisticated electronic controllers that sense the time of day and the flow of traffic to continually adjust the sequence of operation of the signals. Traffic engineers use signals to avoid traffic congestion and improve safety for motorists and pedestrians alike.
The controller is the ‘brain’ of the installation and contains the information required to drive the lights through their various sequences. Traffic signals can run under a variety of different modes, which can depend on location and time of day.

Fixed Time

Under fixed time operation, the traffic signals will display green to each approach for the same amount of time every cycle, regardless of the traffic conditions. This may be adequate in heavily congested areas, but where a lightly trafficked side road is included in the sequence it can be very wasteful: in some cycles there may be no vehicles waiting, and the time could be better allocated to a busier approach.

Vehicle Actuation (VA)

Vehicle Actuation is one of the most common modes of operation for traffic signals and, as the name suggests, it takes into account the variations in vehicle demand on all approaches and adjusts the green time accordingly. Traffic demands are registered through detectors installed either in the carriageway or above the signal heads. The controller then processes these demands and allocates the green time in the most appropriate way. Minimum and maximum green times are specified in the controller and cannot be violated.
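
To make the two modes concrete, here is a minimal sketch in Python. It is an illustration only: the class names and timing parameters are assumptions, not taken from any real controller firmware.

```python
# Minimal sketch of the two signal-control modes described above.
# All names and timing values are illustrative assumptions.

class FixedTimeController:
    """Gives every approach the same green time each cycle, regardless of demand."""
    def __init__(self, approaches, green_time_s=30):
        self.approaches = approaches
        self.green_time_s = green_time_s

    def phases(self):
        # Cycle through the approaches in a fixed order.
        for approach in self.approaches:
            yield approach, self.green_time_s

class VehicleActuatedController:
    """Extends green time with detected demand, bounded by min/max green."""
    def __init__(self, min_green_s=7, max_green_s=60, extension_per_vehicle_s=2):
        self.min_green_s = min_green_s
        self.max_green_s = max_green_s
        self.extension_per_vehicle_s = extension_per_vehicle_s

    def green_time(self, detected_vehicles):
        # Allocate green in proportion to demand, but never violate the
        # configured minimum and maximum green times.
        requested = self.min_green_s + detected_vehicles * self.extension_per_vehicle_s
        return max(self.min_green_s, min(requested, self.max_green_s))

va = VehicleActuatedController()
print(va.green_time(detected_vehicles=0))   # 7  -> minimum green holds
print(va.green_time(detected_vehicles=40))  # 60 -> capped at maximum green
```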

Traffic control software:

1. TraMM:
TraMM is a software tool used to monitor and manage the WiTraC traffic signal controller remotely from the Command and Control Centre. TraMM has dedicated Human Machine Interface graphic software to configure, visualize real traffic patterns, and control the traffic signals remotely. It provides options to monitor the data in different visual formats such as trends, charts, reports, and remote animation screens. A dedicated user management system is part of TraMM and embeds application links for users to navigate the overall system. TraMM’s centralized junction configuration tool provides plan download and upload functionality to configure junctions remotely. It receives the online junction pattern periodically from the different junction controllers and distributes it to the associated modules for display and reporting.

2. KeySIGNALS:

The UK’s favourite traffic signal design software, KeySIGNALS enables engineers to produce traffic signal design drawings in AutoCAD. Utilising a large library of symbols, you can draw, label, and configure detailed traffic signal design plans quickly, ready to be printed at any scale. It also quantifies costs.

3. Road Traffic Signal Controllers:

a. Safe and Smooth Traffic Control:


Road Traffic Signal Controllers provide safe and smooth road traffic by
conducting road traffic control in accordance with the time of day and the road
traffic conditions, and also perform advanced road traffic control to eliminate road
traffic congestion. Suitable also for crosswalks with multi-information displays,
they are replete with many additional functions.
The benefits of ARTEMIS, an autonomous distributed signal controller developed
by the Company, have also been recognized in various overseas countries that
have adopted it.
b. Autonomous Distributed Signal Control (ARTEMIS) Functions:
ARTEMIS is an environmentally-friendly road traffic signal controller that
relieves road traffic congestion and contributes to the reduction of CO2 and
nitrogen oxide emissions.
With a typical road traffic signal controller, the length of time that the traffic light
is green is determined at the traffic control center based on past road traffic data
measured by vehicle sensors. With ARTEMIS, however, the road traffic signal
controller itself determines the optimum period of time for the light to remain
green in real time by means of road traffic signal controllers positioned at
intersections exchanging information among themselves.
c. Level Crossing Interlocking Road Signal Controllers:
These controllers eliminate the need for a temporary stop at the level crossing and
ensure the smooth flow of road traffic. Information obtained from adjacent
intersections and railway facilities can alleviate road traffic congestion and
increase safety at level crossings.
ACTIVITY: 02
Visit various consulting company web portals and collect case studies.

What is Flipkart?
Flipkart is an India-based eCommerce company, founded in 2007 by Sachin Bansal and Binny Bansal. In August 2018, Walmart acquired a 77% controlling stake in the company.
Flipkart deals with a wide range of product categories such as electronics, fashion, and lifestyle products. It also offers attractive discounts to grab the attention of customers. Flipkart offers multiple modes of payment, which allow customers to pay according to their convenience.

Steps to operate Flipkart:

Step 1: First of all, visit the Flipkart website using the URL www.flipkart.com.

Step 2: At the top of the screen, you have a search bar. In this search bar, type the name of the product you are interested in and click the search button.
For example, if you type ‘Thermo steel water bottle’, related searches also start appearing below the search bar, and you can refine your search using them.
Step 3: The search results will appear on your screen. Scroll down the screen to go through
the displayed results. At the bottom of the page, you can see that you have more result pages
for your search. So, you can search through them and select (click) a product you like.

Step 4: When you select a product, it appears with its detailed description along with ADD TO CART and BUY NOW buttons.

Scroll down the screen to see the Specifications and Description of the product, and check Ratings & Reviews to see how buyers have reacted to it.

There is also a Questions & Answers section where you can raise your queries regarding the product.

After you are satisfied with the product details, click on the BUY NOW button; if not satisfied, go back and search for another product.
Step 5: After you click the BUY NOW button, you have to work through these four sections:
1. Login or SIGNUP
2. Delivery Address
3. Order Summary
4. Payment Option
For login, you have to enter your mobile number and password and click on
the CONTINUE button.

Step 6: Enter the address where you want the product to be delivered in this DELIVERY
ADDRESS section. Click SAVE AND DELIVER HERE.

Step 7: You will see your order summary; to confirm the order, enter your valid email id and click on CONTINUE.
Step 8: Last is the PAYMENT OPTION section, select the mode of your payment and pay
likewise.
ACTIVITY: 03
Document the roles and responsibilities of different agile ceremonies.

What are Agile ceremonies?


Agile ceremonies are meetings with defined lengths, frequencies, and goals. Their purpose is
to help project teams plan, track, and engage stakeholders with their work and help them
reflect on how well they’ve worked together. They’re typically a part of the Scrum
framework of Agile.
As a quick reminder, projects that follow the Scrum framework deliver work in short bursts
called sprints. Each sprint has a specific goal and lasts, on average, 2 weeks.

Roles in an Agile team


Agile teams are often composed of the following key roles and responsibilities:

Product owner
The product owner represents the stakeholders of the project. The role is primarily
responsible for setting the direction for product development or project progress.
The Product Owner understands the requirements of the project from a stakeholder perspective and has the necessary soft skills to communicate those requirements to the product development team. The Product Owner also understands the long-term business vision and aligns the project with the needs and expectations of all stakeholders. End-user feedback is taken into account to determine appropriate next-best action plans for development throughout the project cycle.
The key responsibilities of a Product Owner include:
 Scrum backlog management
 Release management
 Stakeholder management

Team lead/Scrum master


The Team Lead or Scrum Master ensures team coordination and supports the progress of the project between individual team members. The Scrum Master takes instructions from the Product Owner and ensures that the tasks are performed accordingly.
The role may involve:
 Facilitating the daily Scrum and Sprint initiatives
 Communicating between team members about evolving requirements and planning
 Coaching team members on delivering results
 Handling administrative tasks such as conducting meetings, facilitating collaboration,
and eliminating hurdles affecting project progress
 Shielding team members from external interferences and distractions

Development team members


The Development Team is made up of individuals whose responsibilities include, but are not limited to, product development. The team takes on the cross-functional responsibilities necessary to transform an idea or a requirement into a tangible product for the end-users. The required skills might be wrapped up in one or more dev team members:
 Product designer
 Writer
 Programmer
 Tester
 UX specialist

Stakeholders
Stakeholders may not be directly involved in the product development process, but the term represents a range of key roles that influence the decisions and work of the Scrum team.
The stakeholder may be:
 The end user of the product
 Business executives
 Production support staff
 Investors
 External auditors
 Scrum team members from associated projects and teams
ACTIVITY: 04

1. Identify cost of risk
2. Identify commonly used risk management tools

Cost of Risk:
Cost of risk is the cost of managing risks and incurring losses. Total cost of risk is the sum of
all aspects of an organization's operations that relate to risk, including retained (uninsured)
losses and related loss adjustment expenses, risk control costs, transfer costs, and
administrative costs.

What is project cost risk?


Project cost risk is the risk that a project will spend more money than was originally budgeted. It will lead either to an overspend on the project or to a reduction in the deliverables as money is pulled from other areas to compensate for increased costs.
There is a positive side to cost risk – have you ever had a project come in under budget? It can and occasionally does happen. This is a cause for celebration and a chance to decide where to allocate the unspent money, whether it’s on a project team incentive or ploughing it into better deliverables on the next one.
Cost risk can derive from internal issues or be generated from outside your PMO and the
business as a whole. These are internal and external cost risks.

Internal cost risks in project management:


Internal cost risks are generated when something inside the business changes to increase the
amount of money needed. Some examples of internal cost risks include:
 Incorrectly forecasting the budget to complete the project
 Delivery of work taking longer than expected
 The need to outsource to contractors or freelancers
 Unanticipated travelling requirements

External cost risks in project management:

Cost changes that are out of your PMO’s hands are called external cost risks. Although there
is little that can be done about them on a practical level, they still need to be taken into
account. External cost risks on a project could include:
 Change in price of materials needed
 Regulatory changes requiring extra work
 Exchange rate fluctuations
 Banking fees and charges being amended

Which costs might increase during a project lifecycle?
Nearly everything that could change or go wrong on your project will have a cost associated
with it. Those costs can come from one or more of the following areas:
 Labour – when more work is needed to make changes or do extra tasks that get added on – known as scope creep – your project will need to pay for overtime, new staff, or freelancers to get the job done.
 Materials – new requirements may call for new software or other project inputs. If an area of the project fails, this could also require more materials to be purchased, possibly at a higher cost than the originals.
 Equipment – hardware needs may expand to fulfil new demands on a project, requiring
capital investment. Similarly, a broken-down tool in a factory or a worn-out delivery
vehicle increases equipment costs.
 Administration – as a project expands, morphs, and changes, it will take more administration from your project manager and likely within your PMO. New risk assessments need to be done, as well as cost and time projections, and stakeholders must be kept informed.

How to mitigate cost risks across projects?


You’ll never be able to stop a project from changing its cost profile completely. There needs
to be an allowance for change; it’s the very nature of business. How you deal with changes is
what’s important.
Included in a project budget should be a contingency. Usually this is around 5–20% of the project’s main budget and is in the hands of the project sponsor. This means that money doesn’t have to be found within your PMO or elsewhere in the business when the unexpected happens.
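
As a quick worked example of that 5–20% contingency range (all figures below are hypothetical):

```python
# Hypothetical figures illustrating the 5-20% contingency range described above.
main_budget = 500_000
low_pct, high_pct = 0.05, 0.20

print(f"Contingency reserve: ${main_budget * low_pct:,.0f} to ${main_budget * high_pct:,.0f}")
# Contingency reserve: $25,000 to $100,000
```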
Ensuring strong planning and projections are done ahead of time will mean you know better how to budget time and resources for tasks. Having detailed processes and documentation available through your PMO can help get this correct from the start.

Top Risk Management Tools & Techniques for Project Management


1. Brainstorming: Before any project begins, the first step is to plan a strategy. For this, the
team members conduct brainstorming sessions with the project manager. This
brainstorming session needs to include all the risks that could impact the project’s
completion and success.

2. Root Cause Analysis: This is a technique to help project members identify all the risks
that are embedded in the project itself. Conducting a root cause analysis shows the
responsiveness of the team members in risk management. It is normally used once a
problem arises so that the project members can address the root cause of the issue and
resolve it instead of just treating its symptom.

3. SWOT Analysis: SWOT is an analysis to measure the strengths, weaknesses, opportunities, and threats to a project. This tool can be used to identify risks as well. The first step is to start with the strengths of the project. Then team members need to list out all the weaknesses and other aspects of the project that could be improved. Here is where the risks of the project will surface. Opportunities and threats can also be used to identify positive risks and negative risks respectively. All findings need to be put on a grid to make analysis and cross-referencing easier.

4. Risk Assessment Template for IT: A risk assessment template is usually made for IT
processes in an organization, but it can be implemented in any project in the company. This
assessment gives a list of risks in an orderly fashion. It is a space where all the risks can be
collected in one place. This is helpful when it comes to project execution and tracking risks
that become crises.

5. Probability and Impact Matrix: Project managers can also use the probability and impact
matrix to help in prioritizing risks based on the impact they will have. It helps with resource
allocation for risk management. This technique is a combination of the probability scores and
impact scores of individual risks. After all the calculations are over, the risks are ranked
based on how serious they are. This technique helps put the risk in context with the project
and helps in creating plans for mitigating it.
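
A minimal sketch of how such a matrix can be computed, assuming simple numeric probability and impact scores; the risks and values below are invented for illustration:

```python
# Sketch of a probability-and-impact ranking; risks and scores are made up.
risks = {
    "Key supplier fails":        {"probability": 0.3, "impact": 9},
    "Scope creep":               {"probability": 0.7, "impact": 5},
    "Exchange rate fluctuation": {"probability": 0.5, "impact": 3},
}

# Combined score = probability x impact; a higher score means a more serious risk.
ranked = sorted(risks.items(),
                key=lambda item: item[1]["probability"] * item[1]["impact"],
                reverse=True)

for name, r in ranked:
    print(f"{name}: score {r['probability'] * r['impact']:.2f}")
# Scope creep: 3.50, Key supplier fails: 2.70, Exchange rate fluctuation: 1.50
```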

6. Risk Data Quality Assessment: When project managers use the risk data quality assessment
method, they utilize all the collected data for identified risks and find details about the risks
that could impact the project. This helps project managers and team members understand the
accuracy and quality of the risk based on the data collected.

7. Variance and Trend Analysis: Just like other control processes in the project, it helps when project managers look for variances between the project’s planned schedule and cost and compare them with the actual results to see whether they are aligned. If the variances rise, uncertainty and risk rise with them. This is a good way of monitoring risks
while the project is underway. It becomes easy to tackle problems if project members watch
trends regularly to look for variances.
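
One common way to formalize this schedule-and-cost comparison is with earned-value variances. The text does not name these formulas, so the sketch below is an assumed illustration using the standard definitions SV = EV - PV and CV = EV - AC:

```python
# Standard earned-value variances (an assumed formalization, not named in the text):
# schedule variance SV = EV - PV, cost variance CV = EV - AC.
def variances(planned_value, earned_value, actual_cost):
    sv = earned_value - planned_value   # negative -> behind schedule
    cv = earned_value - actual_cost     # negative -> over budget
    return sv, cv

sv, cv = variances(planned_value=50_000, earned_value=42_000, actual_cost=47_000)
print(sv, cv)  # -8000 -5000 -> behind schedule and over budget
```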

8. Reserve Analysis: While planning the budget for the project, contingency measures and
some reserves should be in place as a part of the budget. This is to keep a safeguard if risks
occur while the project is ongoing. These financial reserves are a backup that can be used to
mitigate risks during the project.
ACTIVITY: 05

1. Identify a problem and explain how design thinking can be applied to solve it
2. Design a shopping cart to achieve ease of use, applying design thinking

1. Identify a problem and explain how design thinking can be applied to solve it

The term “Design Thinking” dates back to Peter Rowe’s 1987 book, “Design Thinking,” in which he describes the way architects and urban planners approach design problems. However, the idea that there is a specific pattern of problem solving in “design thought” came much earlier, in Herbert A. Simon’s book “The Sciences of the Artificial,” published in 1969. The concept was popularized in the early 1990s by Richard Buchanan in his article “Wicked Problems in Design Thinking”.

Problem-Solving and Two Schools of Thought

Design thinking is concerned with solving problems through design, the idea being that the future output of the process will provide a better answer than the one already available – or, if nothing is available, something entirely new.
It is an unconstrained methodology where it is possible that the designer (or design team) will
work on many possible solutions at once. It enables designers to consider the problem in
many different ways and speculate on both the past and future of the problem too.
This is in contrast to the scientific method of problem solving, which requires a highly defined problem and focuses on delivering a single solution.
This difference was first noted by Brian Lawson, a psychologist, in 1972. He conducted an
experiment in which scientists and architects were asked to build a structure from colored
blocks. He provided some basic rules for the project and observed how they approached it.
The scientists looked to move through a simple series of solutions based on the outcome and
entire rule set. The architects, in contrast, simply focused on the desired end-state and then
tested to see if the solution they had found met the rules.
This led to the idea that scientists solve problems by a process of analysis, whilst designers
solve problems by synthesis. However, later evidence suggests that designers apply both
forms of problem solving to attain “design thinking”.
They do this via a process of divergent thinking. A designer will examine as many possible
solutions at the beginning of a process as they can think of – then they will apply the
scientific side (convergent thinking) to narrow these solutions down to the best output.
Design thinking can be as simple or as complex as the business and users require. IDEO’s version of the process, for example, can be seen as either a 3-part or a 9-part process.

The Design Thinking Process


Design thinking is essentially a process which moves from problem to solution via some clear
intermediate points. The classic approach, as proposed by Herbert A. Simon, is offered here:

 Definition – where the problem is defined as best as possible prior to solving it


 Research – where the designers examine as much data as they feel necessary to be
able to fully contribute to the problem solving process
 Ideation – where the designer commences creating possible solutions without
examining their practicality until a large number of solutions has been proposed. Once
this is done, impractical solutions are eliminated or played with until they become
practical.
 Prototyping – where the best ideas are simulated in some means so that their value
can be explored with users
 Choosing – where the best idea is selected from the multiple prototypes
 Implementing – where that idea is built and delivered as a product
 Testing – where the product is tested with the user in order to ensure that it solves the
original problem in an effective manner

There are many other design thinking processes outlined in literature – most of which are a
truncated version of the above process combining or skipping stages.

The Principles of Design Thinking

In the book Design Thinking: Understand, Improve, Apply, Plattner and Meinel offer four
underlying principles for design thinking:
 Human – all design is of a social nature
 Ambiguity – design thinking preserves and embraces ambiguity
 Re-design – all design processes are in fact re-design of existing processes
 Tangibility – the design process to make something tangible will facilitate
communication of that design

2. Design a shopping cart to achieve ease of use, applying design thinking

1. Recommend accessories and add-ons | B&H: At first glance, B&H’s “Add to Cart”
screen looks like any other site that sells electronics. But when you click the “Add to Cart”
button, there’s a staggering amount of actions a customer can take within its simple design.
For example, the page displays accessories that pair well with the chosen product—giving
customers a reason to add more items to their cart. In addition, the page prompts customers to
think about warranties for their products. Clicking the shopping cart icon redirects customers
to a more standard page. And it reminds them about those related product
recommendations. This gives them another chance to add the items to their cart in case
they’ve forgotten about them or changed their mind.

2. Use a mini cart to showcase items in the bag | Tilly’s: Tilly’s online clothing shop is simple and easy to use. The page shows only the most necessary information along with
plenty of eye-catching pictures to engage visitors. Their clever “Add to Cart” design
improves their
customers’ shopping experience. When a customer puts a product into their shopping cart, a
“mini cart” expands on the right side. It lets a user keep track of their items at a glance. It
also has a “Checkout” button for when they’re done. So customers can go directly to
checkout with no effort. This “Add to Cart” example shows a great way to encourage impulse
purchases.

3. Show a side message with necessary information | Allbirds: A side message is an element of a website that slides from the left or right side of the screen. You can use it when you need to display important information that requires a lot of space. Allbirds’ shopping cart side panel design integrates with the site’s overall theme. And it displays everything the
customer needs to know. This is the message that pops up: “Congrats! You get free standard
shipping.” And it’s a great way to fight against cart abandonment from the very start. They
tell customers that the shipping cost won’t change unexpectedly (one of the biggest reasons
for potential customers to leave their shopping carts behind).

4. Show customers how much they need to spend to earn free shipping | Forever
21: Forever 21’s “Add to Cart” is transparent with their free shipping criteria. However, they
require shoppers to spend a certain amount to qualify for free shipping. You can see that this
cart is only $10.01 away from qualifying for free shipping. They also have a large coupon
code field—making it easy to add a discount. These two money-saving features cultivate a
positive emotion, even before customers get to the checkout page. The results: higher sales.
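
The “amount away from free shipping” prompt reduces to simple arithmetic. Here is a sketch; the $50 threshold is an assumption for illustration, not Forever 21’s actual figure:

```python
# Hypothetical version of the "only $10.01 away from free shipping" prompt.
FREE_SHIPPING_THRESHOLD = 50.00  # assumed threshold

def free_shipping_message(cart_total):
    remaining = FREE_SHIPPING_THRESHOLD - cart_total
    if remaining <= 0:
        return "You qualify for free shipping!"
    return f"You're ${remaining:.2f} away from free shipping."

print(free_shipping_message(39.99))  # You're $10.01 away from free shipping.
```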

5. Increase urgency on the cart page | Nike: Shoppers get a small notification at the top
right corner of their screen when they click the “Add to Cart” button on Nike’s
website. There are two options in the mini cart window: view your bag or proceed directly to
checkout. It’s a choice that encourages customers to buy now or keep browsing the site. Nike
also places messages like “Just a Few Left, Order Now” to guide shoppers to complete
their checkout process. Catching a customer’s attention in this way can lead to more sales
because no one wants to lose out on a great pair of shoes. This is called the FOMO or the
Fear Of Missing Out strategy. Another nice touch on Nike’s shopping cart page is the
estimated shipping date. It gives shoppers an opportunity to imagine themselves in their new
shoes by a particular date. Details like these matter!

6. Recommend products customers might love | Sephora: Sephora’s cart notification system combines tons of the best practices for cart page design features. They display a pre-total amount. And it shows customers how much more money they need to spend to qualify
for free shipping. They also personalize product recommendations for each
customer. Sephora goes beyond these other websites by offering free samples (along with
rewards and promotions) with an easy to access link on the cart notification page. Customers
can choose up to two free samples to add to their shopping cart. This is a great solution to
provide added value to customers, and it encourages them to try new products.

7. Offer free samples | Rare Beauty: Rare Beauty is another makeup brand with a heavy
online presence. Like Sephora, they offer customers free items in the same box as their
order. They also show how much more money needs to be added to the order, so the shopper
knows when they qualify for free shipping. Rare Beauty unconventionally uses large Xs so
customers can easily remove items from their shopping carts. This makes it easy to edit their
orders. You can see why their customers love to purchase online—it’s a quick and painless
checkout.

8. Compliment your customers | Lululemon: The first thing you notice in Lululemon’s
shopping cart is the large message at the top: “You’ve Got Great Taste.” A message like that
makes users want to click the cart icon! Complimenting your customers is a great way to
thank them for adding something to their shopping cart. And shows goodwill. This is
especially effective for Lululemon because of their positive and affirmational
branding. Lululemon’s checkout button is large and brightly colored. It draws the users’ eyes
and they recommend complementary products to encourage customers to “Continue
Shopping.”
ACTIVITY: 06

1. Prepare an RTM (requirement traceability matrix) for a shopping cart
2. List the criteria to select requirement management tools
3. Identify different requirement management tools and list their features
4. Identify frequently used UML diagrams and also identify tools used to draw them

Requirement management tools give organizations an edge by increasing business value, reducing budget issues, and so on.
In today’s world of development, it has become utterly necessary to track changes in requirements at every step, automatically. The audit trail of requirements is handled more efficiently with the involvement of tools.
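
As a small illustration of the traceability such tools automate, here is a sketch of a requirement traceability matrix (RTM) for a shopping cart. Every ID, requirement, and test case below is invented for the example:

```python
# Minimal requirement traceability matrix (RTM) sketch for a shopping cart.
# All requirement IDs, design references, and test cases are invented.
rtm = [
    {"req_id": "REQ-01", "requirement": "Add item to cart",
     "design_ref": "DD-3.1", "test_cases": ["TC-101", "TC-102"], "status": "Passed"},
    {"req_id": "REQ-02", "requirement": "Show free-shipping progress",
     "design_ref": "DD-3.4", "test_cases": ["TC-110"], "status": "In progress"},
]

# Print each requirement with its linked design element and test cases,
# so every requirement can be traced forward to its verification.
for row in rtm:
    print(f"{row['req_id']:8}{row['requirement']:32}"
          f"{row['design_ref']:8}{','.join(row['test_cases']):16}{row['status']}")
```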

List of Top Requirements Management Tools

Given below is a list of Requirements Management Tools available in the market.


1. Visure
2. SpiraTeam by Inflectra
3. Jama Software
4. KatalonTestOps
5. ReqSuite® RM
6. Xebrio
7. Requirements and Test Management for Jira
8. Process Street
9. Visual Trace Spec
10. IBM Rational DOORS
11. Accompa
12. IRIS Business Architect
13. Borland Caliber
14. Atlassian JIRA
15. Aligned Elements
16. Case Complete

1) Visure
Price: Visure Solutions offers a free 30-day trial that can be downloaded from their website.
Perpetual and subscription licenses are available and can be used on-premises or cloud-based.
Detailed pricing and demo can be found on the Visure Solutions website.

Visure is a leading provider of requirements management tools, offering a comprehensive collaborative ALM platform including full traceability, tight integration with MS Word/Excel, risk management, test management, bug tracking, requirements testing, requirements quality analysis, requirement versioning and baselining, powerful reporting, and standard compliance templates for ISO 26262, IEC 62304, IEC 61508, CENELEC 50128, DO-178B/C, FMEA, SPICE, CMMI, etc.

2) SpiraTeam by Inflectra
SpiraTeam is a powerful application lifecycle management platform with robust and fully
integrated requirements management functionality. Voted the Quadrant Leader in
Requirements Management by SoftwareReviews.com in 2021, SpiraTeam is ideal for agile
teams who need to manage requirements with ease and confidence.
 Plan, create, edit and manage requirements using planning boards, GANTT,
customizable workflows, while linking requirements to tests, and other artifacts in
SpiraTeam.
 SpiraTeam’s requirements matrix allows users to drill down from each of the captured requirements to determine how many test cases have validated the functionality, and the status of each of the defects logged.
 With SpiraTeam, users can quickly and easily organize requirements in a hierarchical
structure, viewed as a kanban board or on a mind map. Requirements can be
prioritized, estimated, and associated with specific releases and artifacts.
 In SpiraTeam, each requirement is displayed with its associated test coverage.
Requirements can be moved, copied, and filtered according to a variety of criteria.
 SpiraTeam’s requirements management module is designed with end-to-end
traceability at its core, which makes it ideal for regulated industries.

3) Jama Software
Price: You can get a quote for pricing details. It offers a free trial for the product.
Jama Software provides the leading platform for Requirements, Risk, and Test management.
With Jama Connect and industry-focused services, teams building complex products,
systems, and software improve cycle times, increase quality, reduce rework, and minimize
effort proving compliance.
Jama Software’s growing customer base of more than 600 organizations includes companies
representing the forefront of modern development in Autonomous Vehicles, Healthcare,
Financial Services, Industrial Manufacturing, Aerospace, and Defense.
Jama Connect was rated as the top Application Lifecycle Management (ALM) tool for 2019
by Trust Radius. In particular, the reviewers praise the product’s purposeful collaboration,
ease of adaptability, and live traceability.
Our Score: 8.5 out of 10

4) KatalonTestOps
KatalonTestOps is a free, robust orchestration tool that manages all of your testing activities
effectively and efficiently. TestOps centralizes every requirement in one place, gives your
teams full visibility of their tests, resources, and environments by integrating with any testing
framework you love.
 Deployable on Cloud or Desktop: Windows and Linux systems.
 Compatible with almost all testing frameworks available: Jasmine, JUnit, Pytest, Mocha, etc.; CI/CD tools: Jenkins, CircleCI; and management platforms: Jira, Slack.
 Plan efficiently with Smart Scheduling to optimize the test cycle while maintaining
high quality.

 Evaluate release readiness to boost release confidence.


 Maximize resources and drive ROI through optimization of server usage, environment
coverage.
 Enhance collaboration and increase transparency through comments, dashboards, KPI
tracking, actionable insights – all in one place.
 Streamlined result collection and analysis through robust failure analysis and
reporting capabilities across any framework.
 Real-time data tracking for fast, accurate debugging through live and comprehensive
reports on test execution to identify root causes of any issues.
 Customizable alerts for full control to manage your systems without continuous
follow-ups

5) ReqSuite® RM
Price: Osseno offers a free trial for the ReqSuite RM. There are three pricing plans for the
product, Basic (Starts from $143 per 3 users per month), Standard (Starts from $276 per 5
users per month), and Enterprise (Starts from $664 per 10 users per month).
ReqSuite® RM is a very intuitive yet powerful solution for requirements management, or
the management of other project-relevant information (e.g., solution concepts, test cases, etc.)
as well. Due to its easy and extensive configurability, ReqSuite® RM can be quickly and
completely adapted to the individual customer situation, which is why companies from
different industries rely on ReqSuite® RM.
Besides the mentioned configurator, the unique selling propositions include AI-supported
assistance functions, such as for guiding the analysis and approval workflow, checking the
quality and completeness of requirements, automatic linking, reuse recommendations, etc.
ReqSuite® RM is a fully web-based application and can be operated in the cloud or on-premises. The smallest license package (3 users) starts at €129 per month. A free trial is available.
Our Score: 9 out of 10

6) Xebrio
Xebrio tracks individual requirements throughout the lifecycle of your project with multiple
levels of stakeholder approvals & collaboration capabilities, the ability to associate
requirements to tasks, milestones, and test cases, and a transparent yet detailed process for
requirement change management, thereby guaranteeing requirement traceability.
Not just limited to requirements management, Xebrio is a complete project management tool with extensive task management, collaboration, communication, test management, bug tracking, asset management, release management, and comprehensive reporting capabilities, all under the same roof, with no add-ons or plugins required.
It also sports a comprehensive dashboard and offers detailed data-driven insights with easy-
to-grasp reports.
Being very intuitive and user-friendly, Xebrio also offers you an extended free trial and great
support.
Our Score: 9 out of 10.

7) Requirements and Test Management for Jira


Requirements and Test Management for Jira isn’t just a requirements management tool.
It’s an app that enables the whole software development process right inside your Jira.
A transparent structure for your requirements is just as important as an accurate method for
testing them. The two processes are closely related to each other because they focus on the
quality of your product.
Given how flexible Jira is when it comes to extending its possibilities, you can have all the necessary tools and objects in one place without having to integrate with external software.
If you’d like to get rid of your current requirements management tools or are just starting
your software project, RTM for Jira will help you build an environment where all your teams
can work – from requirements to release.

8) Process Street
Process Street is one of the most user-friendly Requirements Management tools to manage
processes, team workflow, checklists, and operating procedures. It’s easy and time-saving to
interact with a client via this tool. Using Process Street, one can design one’s own processes without being an expert in it.
This tool is available to use free of cost for 30 days, which includes all features.
 Process Street is a workflow and process management tool that provides an option to
manage recurring checklists and procedures in the business.
 Some of the best features that Process Street offers are Regular workflow scheduling,
Activity feed, tasks assignment, Instant visibility, Run processes as collaborative
workflows, etc.

9) Visual Trace Spec


Visual Trace Spec is a user-friendly tool for requirements specification and traceability. This fully customizable software includes a Word-like interface for document generation.
Visual Trace Spec is equally effective for developing software systems, embedded systems,
medical devices, apps, and more.
Our Score: 8.5 out of 10
Price: You can get a quote for pricing details. It offers a SaaS Solution and a Native Client Edition. Visual Trace Spec offers a free trial for 30 days.

10) IBM Rational DOORS


IBM Rational DOORS is a leading requirements tool. It is a client-server, multi-platform application designed to manage, capture, track, analyze, and link requirements as well as standards. It also provides a unique graphical view of the hierarchy of the data. Rational DOORS Next Generation is available in the market at no cost to users with an active subscription and support.

UML Diagram Tools

UML (Unified Modeling Language) diagrams are very important in the field of software engineering. They allow us to visualize and analyze systems, and they are efficient because, as they say, “one picture is worth a thousand words”. It is easier to explain a system to clients using diagrams. In software engineering, UML diagrams are used to visualize the project before it is done and to document the project after it is done, and a lot of time is often saved with their help. There are also many good UML modeling tools available that help us draw UML diagrams, so we’re going to explore various such tools along with their specifications. Let’s get started:

1. Draw.io: Draw.io is a free, open-source collaborative workspace for drawing UML diagrams. It also contains predefined templates for drawing any UML diagram, creating wireframes, business charts, etc. It is available both as software and as an online tool, and it is used by many enterprises. Draw.io supports enterprise browsers. It is linked with Google Drive, so it automatically saves our work. It has a beginner-friendly interface and is mostly used for professional diagrams. It was founded by Gaudenz Alder in 2000. It supports file formats such as PNG, JPEG, SVG, and PDF. It is fully free and does not contain any paid service. It is supported in browsers like Chrome, Microsoft Edge, and Mozilla Firefox. The supported operating systems are Windows, Linux, and macOS.

2. Lucidchart: Lucidchart is a tool where users draw diagrams and charts. It provides a collaborative platform for drawing diagrams. It is an online tool and a paid service. People can use it with a single sign-up, and it is user-friendly. It contains all the shapes needed for drawing any UML diagram and provides basic templates for all diagrams, so we can draw any diagram easily with their help. It also helps business people plan their events, so it is very helpful for data visualization, diagramming, etc. It offers a free trial option, but allows only a few frames and shapes for free trial users: free users may use only 60 shapes per diagram. It does not support enterprise browsers. This tool is helpful not only for drawing UML diagrams but also for any project design and project management.

3. Visual Paradigm: Visual Paradigm is a diagramming tool used by business organizations for planning and modeling. It is available both as an online tool and as software; the online tool requires a single sign-up. It contains predefined layouts. It is a paid platform and provides a free trial of 30 days. The purpose of Visual Paradigm is not limited to drawing UML diagrams; it can also be used for many purposes like creating business cards, brochures, book covers, gift cards, etc., and as an image editing tool. It was released in 2002. For premium users, it provides several license categories: Enterprise, Professional, Standard, and Modeler. The cost varies depending on the category.

4. Edraw Max: Edraw Max is developed by the Edrawsoft company and was released in March 2019. Edraw is a 2D diagramming tool that is also used to create flowcharts, workflow diagrams, mind maps, network diagrams, etc. It is a fully featured drawing app known for its extensive compatibility, because diagrams can be exported in various formats like images, documents, and HTML. It is available on various platforms like Windows, macOS, and Linux. It is very easy to use because it contains a lot of templates, and it also comes with additional image editing features that can help enhance the images we are drawing. It is available only as software, which requires system storage and compatibility to deal with. Edraw Max can also be used for project planning, drawing electrical circuits, and building planning.

5. StarUML: StarUML is developed by MKLab and is proprietary commercial software. It is a tool specifically used for drawing UML diagrams, mainly for agile and concise modeling. It can also take the source code of programming languages and convert it into models. StarUML is available only as software, so it requires system storage and compatibility to download. It is available on platforms like macOS, Windows, and Ubuntu. Diagrams can be published as HTML docs, and StarUML also allows PDF export for clean printing. It supports markdown syntax and allows code generation for various programming languages like C, C++, Java, and Python. It also allows users to write their own extensions using HTML5, CSS3, JavaScript, and Node.js, and it supports asynchronous model validation.

6. Gliffy: Gliffy was founded by Chris Kohlhardt and Clint Dickson in 2005. Gliffy is an online diagramming tool that allows any team to visually share ideas. Its main advantage is the drag-and-drop feature, which makes drawing very easy: we can effortlessly drag and drop shapes and use the available templates. Using Gliffy, we can draw wireframes for apps and design projects, draw diagrams with ease, share documents with anyone we want, collaborate instantly, and import and export documents easily. Another advantage of Gliffy is that it is cloud-based and does not require users to download it. The downloadable formats in Gliffy are PNG, JPG, SVG, etc., and we can also choose the size: fit to max size, screen size, medium, large, or extra-large.

7. Cacoo: Cacoo was founded by Masanori Hashimoto in 2009 and is located in Japan. Cacoo is online software for drawing UML diagrams and creating wireframes and flowcharts. It is available both as software and as an online tool, and it provides a free trial of 2 months as well as free templates for making any design. The advantage of Cacoo is that we can edit together, sharing the same diagram to edit with our friends, and we can track and review changes periodically. We can converse and comment about the diagrams. Cacoo also has drawing templates in various fields like development, product/design project management, marketing, business, general, and custom templates. The export options available are PNG, SVG, PDF, PS, PPT, etc.
ACTIVITY: 07

Explore Agile Estimation Techniques and Prepare a Report

Estimating work effort in agile projects is fundamentally different from traditional methods
of estimation. The traditional approach is to estimate using a “bottom-up” technique: detail
out all requirements and estimate each task to complete those requirements in hours/days, and
then use this data to develop the project schedule. Agile projects, by contrast, use a “top-
down” approach, using gross-level estimation techniques on feature sets, then employing
progressive elaboration and rolling-wave planning methods to drill down to the task level on
a just-in-time basis, iteratively uncovering more and more detail each level down. This paper
will elaborate on two common techniques for agile estimation (planning poker and affinity
grouping), as well as touch on how the results of these exercises provide input into
forecasting schedule and budget.

Top-down vs. Bottom-up

The traditional method for estimating projects is to spend several weeks or months at the
beginning of a project defining the detailed requirements for the product being built. Once all
the known requirements have been elicited and documented, a Gantt chart can be produced
showing all the tasks needed to complete the requirements, along with each task estimate.
Resources can then be assigned to tasks, and actions such as loading and leveling help to
determine the final delivery date and budget. This process is known as a bottom-up method,
as all detail regarding the product must be defined before project schedule and cost can be
estimated.
In the software industry, the use of the bottom-up method has severe drawbacks due to
today's speed of change. Speed of change means that the speed of new development tools and
the speed of access to new knowledge is so great that any delay in delivery leaves one open to
competitive alternatives and in danger of delivering an obsolete product (Sliger, 2010).
The top-down method addresses this key issue, by using the information currently available
to provide gross-level estimates. Rolling-wave planning is then used to incorporate new
information as it's learned, further refining estimates and iteratively elaborating with more
detail as the project progresses. This method of learning just enough to get started, with a
plan to incorporate more knowledge as work outputs evolve, allows the project team to react
quickly to adversity and changing market demand.

Estimation Techniques

Gross-level estimation techniques are in use by teams using agile approaches such as Scrum
and Extreme Programming, and this paper will cover two of the most popular techniques:
Planning Poker and Affinity Grouping. Estimation units used will also be examined, as these
units should be such that they cannot be confused with time.
Planning Poker

The most popular technique of gross level estimation is Planning Poker, or the use of the
Fibonacci sequence to assign a point value to a feature or item (Grenning, 2002). The
Fibonacci sequence is a mathematical series of numbers that was introduced in the 13th
century and used to explain certain formative aspects of nature, such as the branching of
trees. The series is generated by adding the two previous numbers together to get the next
value in the sequence: 0, 1, 1, 2, 3, 5, 8, 13, 21, and so on. For agile estimation purposes,
some of the numbers have been changed, resulting in the following series: 1, 2, 3, 5, 8, 13,
20, 40, and 100.
These numbers are represented in a set of playing cards (see Exhibit 1). Team members play
“Planning Poker” (Exhibit 2) to provide an estimate in the form of a point value for each
item. Here are the steps (a toy simulation sketch follows the list):
 Each team member gets a set of cards.
 The business owner (who does NOT get to estimate) presents the item to be estimated.
 The item is discussed.
 Each team member privately selects a card representing his/her estimate.
 When everyone is ready, all selected cards are revealed at the same time.
 If all team members selected the same card, then that point value is the estimate.
 If the cards are not the same, the team discusses the estimate with emphasis placed on the
outlying values:
 The member who selected the lowest value explains why he/she selected the value.
 The member who selected the highest value explains why he/she selected the value.
 Select again until estimates converge.
 Should lengthy or “in-the-weeds” conversations result, team members may use a two-minute timer to time box the discussion, selecting again each time the timer runs out, until convergence.
 Repeat for each item (Cohn, 2006, pp. 56–57).
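
Below is a toy simulation of a single voting round under these rules. The function and votes are illustrative, not from the cited sources:

```python
# Toy simulation of one Planning Poker voting round.
POKER_DECK = [1, 2, 3, 5, 8, 13, 20, 40, 100]  # the modified Fibonacci values

def poker_round(votes):
    """Return the agreed estimate, or None if the team must discuss and re-vote."""
    assert all(v in POKER_DECK for v in votes), "every vote must come from the deck"
    if len(set(votes)) == 1:
        return votes[0]  # everyone selected the same card: consensus
    # Otherwise the low and high voters explain their reasoning, then re-vote.
    print(f"Discuss: low voter chose {min(votes)}, high voter chose {max(votes)}")
    return None

print(poker_round([5, 5, 5, 5]))   # 5 -> estimate agreed
print(poker_round([3, 5, 5, 13]))  # prints the discussion prompt, returns None
```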

Exhibit 1. Planning Poker Cards

Exhibit 2. Playing Planning Poker (Photo courtesy of Museums and the Web. All Rights
Reserved)
There are several reasons Fibonacci numbers are used, and used in this format. First is the
notion that once teams eliminate time as the estimate base, they are less likely to demand
more detail and pad estimates. These numbers instead represent relative size, not time. As a
result, the estimation exercise goes quite quickly. Teams generally spend roughly two
minutes on each item, allowing a backlog of 30 items to be estimated in an hour. The fact that
teams are limited to only 9 choices (i.e., point values or cards) also helps speed up the
process.
The sequence also provides the right level of detail for smaller and better-understood
features, while avoiding a false sense of accuracy for higher estimates. For example, an item
with a high estimate (20 or higher) means the item is large and not yet well understood.
Debating whether the item was a 20 or a 19 or a 22 would be a waste of time as there simply
isn't enough data available. Once the item gets closer to the iteration in which the item will be
worked, it can be broken down into smaller pieces and estimated in more granular numbers
(1–13). Items with point estimates from 1–13 can generally be completed within a single
iteration (1–4 weeks).
It is important to note that points do not have the same meaning across teams; for example,
one team's “five” does not equal another team's “five.” Thus team velocity, which is derived
from points, should not be used to compare productivity across teams.

Affinity Grouping

An even faster way to estimate, and one used when the number of items to estimate is large,
is affinity grouping. Team members simply group items together that are like-sized, resulting in a configuration similar to the one in Exhibit 3. The method is simple and fast:
 The first item is read to the team members and placed on the wall.
 The second item is read and the team is asked if it is smaller or larger than the first item;
placement on the wall corresponds to the team's response (larger is to the right, smaller is
to the left).
 The third item is read and the team is asked if it is smaller or larger than the first and/or
second items; the item is placed on the wall accordingly.
 Control is then turned over to the team to finish the affinity grouping for the remainder of
the items.
Teams may choose to continue in the same fashion, placing one item at a time on the wall
after group discussion. However, a faster way is to have each team member select an item
and place it based on their own best understanding. This is done with all team members
working in parallel until all items have been assessed and placed on the wall. Several hundred
items can be estimated in a relatively short time. Once all items are on the wall, the team
reviews the groupings. Items that a team member believes to be in the wrong group are
discussed and moved if appropriate.
Once affinity grouping is complete, estimation unit values such as points can be assigned. In
Exhibit 3, the first set on the far left would be labeled as having a value of 1 point, the second
set would be 2 points, the third set 3 points, the fourth set 5 points, and the last set 8 points.
Affinity grouping can also be done for other estimation units, such as T-shirt sizes. Exhibit 4
shows an example of affinity grouped items labeled with T-shirt sizes instead of points.

Exhibit 3. Affinity Grouping Example

Exhibit 4. Affinity Grouping Using T-shirt Sizes (Graphic courtesy of Chris Sterling.
All Rights Reserved.)

Estimation Units

The use of T-shirt sizes (Extra Small [XS], Small [S], Medium [M], Large [L], Extra Large
[XL]) is another way to think of relative sizes of features. This is an even greater departure
from the numeric system, and like all good gross-level estimation units can in no way be
associated with a specific length of time.
Other arbitrary tokens of measurement include Gummi Bears, NUTS (Nebulous Units of
Time), and foot-pounds. Teams may create their own estimation units, and as you can see,
they often have a bit of fun in doing so.
This paper does not cover the use of time-based units such as ideal development days and/or
hours. These are already common and well understood, so their explanations were not
included. It is worth noting however that gross-level estimating has the potential to be more
successful when decoupled from the notion of time. Because time estimates are often turned
into commitments by management and business, team members feel more pressure to be as
accurate as possible. As a result they request more and more detail about the item being
estimated. This turns gross-level estimation into the more time-consuming detail-level
estimation and defeats the original intent and purpose.

Forecasting Schedule and Budget

Once gross-level estimates and team velocity are determined, schedule and budget can be
forecast. Teams determine their velocity by adding up the total number of points for all the
items they completed in an iteration. For example, a team may have selected five items with a
total point value of 23 points (see Exhibit 5). At the end of their two-week iteration, they
were only able to complete four of the five items. Their velocity is 15, or the sum of the point
values of items 1–4. Teams do not get “partial credit” for completing portions of an item, so
even if they had started on item 5, it would not count, as it was not completed.

Exhibit 5. Determining Team Velocity
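
A small sketch of the velocity calculation just described; the per-item point values are assumed, since the text gives only the totals (23 points selected, velocity 15):

```python
# Exhibit 5 example: five items totalling 23 points were selected, but only
# items 1-4 were finished. The individual point values below are assumed.
selected = {"item1": 5, "item2": 3, "item3": 5, "item4": 2, "item5": 8}
completed = ["item1", "item2", "item3", "item4"]

# No partial credit: started-but-unfinished items contribute nothing.
velocity = sum(points for item, points in selected.items() if item in completed)
print(velocity)  # 15
```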


Determining the Schedule

A team's average velocity is used in forecasting a long-term schedule. Average velocity is
calculated by summing the velocity measurements from the team's last three iterations and
dividing that total by three. So if a team completed 15 points in its first iteration, and 20
points in each of two subsequent iterations, the team's average velocity is roughly 18
((15 + 20 + 20) / 3 ≈ 18.3).
If a team can do 18 points in one iteration on average, and there are 144 points worth of work
to be completed in the project, it will take the team eight iterations to complete the work (144
/ 18). If each iteration is two weeks, then the forecast completion is 16 weeks. This method
allows us to answer the question, “When will we be done with all this work?”
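This arithmetic is simple enough to capture in a short script. A minimal sketch in Python, reusing the example's numbers (the round-up rule for partial iterations is an assumption):

import math

def average_velocity(last_three_velocities):
    # Average of the team's last three iteration velocities.
    return sum(last_three_velocities) / len(last_three_velocities)

def forecast_schedule(total_points, avg_velocity, weeks_per_iteration=2):
    # Round up: a partially filled iteration still takes a full iteration.
    iterations = math.ceil(total_points / avg_velocity)
    return iterations, iterations * weeks_per_iteration

print(average_velocity([15, 20, 20]))   # ~18.3, which the text rounds to 18
print(forecast_schedule(144, 18))       # (8, 16): 8 iterations, 16 weeks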
If the team has a track record of velocity data, it is possible to determine the most optimistic
completion date, the most pessimistic, and the most likely. The team's average velocity
number is used to calculate the most likely scenario, while velocity numbers from the team's
worst-performing iterations are used to calculate the most pessimistic forecast completion
date. Using velocity from iterations where the team was able to complete more than expected
provides the most optimistic forecast.
We can also use these numbers to answer the question, “We must deliver something by this
date—of these features, how many will we have done by then?” See Exhibit 6 for an example
of the most likely amount forecast to be complete, the pessimistic forecast, and the optimistic
forecast. This example is for a team whose average velocity is 20, and which has a worst-
performance velocity of 12 and a best-performance velocity of 25. Given this and only six
weeks (three iterations), how much can be completed? The pessimistic forecast is that only
items 1–8 will be done in six weeks. The optimistic forecast is that items 1–18 will be
completed. And the most likely forecast, based on the team's average velocity of 20, is that
items 1–13 will be completed in six weeks.
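As a rough sketch, the three forecasts can be generated by walking the ranked backlog with each velocity. The item sizes below are hypothetical, while the velocities (12, 20, 25) are the ones from this example:

def items_completed(ranked_item_points, velocity, iterations):
    # Walk the backlog in priority order; no partial credit for unfinished items.
    capacity = velocity * iterations
    done = 0
    for points in ranked_item_points:
        if points > capacity:
            break
        capacity -= points
        done += 1
    return done

backlog = [5, 3, 8, 5, 2, 8, 5, 3, 5, 8, 2, 5, 3, 5, 8, 5, 3, 2]  # hypothetical sizes
for label, velocity in (("pessimistic", 12), ("most likely", 20), ("optimistic", 25)):
    print(label, items_completed(backlog, velocity, iterations=3))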
If teams are using non-numeric estimation units such as T-shirt sizes, the algorithms for
forecasting become more complex. It is recommended that the sizes be converted to a numeric
system to more easily generate similar data. For example, a Small could be converted to a
NUT of 3, a Medium to a NUT of 5, and so on. These may also be converted to time ranges
(a Small could be 1–3 days, for example) but this is inherently risky, due to issues already
cited in the Estimation Units section.

Exhibit 6. Example Forecast for Item Completion

Determining Budget

In this section we look at answering, “We only have this much money—how long will it last
and how much will we have done before we run out?” First, a simple formula is used to
determine the cost per point:
Σ (loaded team salaries for period n) / points completed in period n
Take the sum of the team's salaries (loaded) for a period of time, say three two-week
iterations, and divide that by the number of points the team completed in the same time
frame. So a team whose total loaded salaries are $240,000 over six weeks, and completed 60
points of work in those three iterations, would have a cost per point of $4,000. Now use the
following formula to determine budget:
(Cost per point x total point value of items to be completed) + other expenses = forecast
budget
Quite often not all features for a product are defined at the outset of a project, which is as
expected for agile projects. So budget estimates are based on what we know today, plus a
forecast algorithm that is based on historic data or expert guidance. For example, say there
are only 20 features listed so far, but the business won't be able to provide any additional
feature requests or refinements until after seeing how the first release is received by the
customer. The budget for the project, which is slated for three releases, would only have
forecast data available for the first release and not the entire project. The team could use the
algorithm above to forecast budget for the first release, then assume an additional 20% for the
second release and an additional 5% for the last release, based on past experience.
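A minimal sketch of this budget arithmetic in Python; the 100-point first-release scope and the reading of the 20% and 5% uplifts as fractions of the first release are illustrative assumptions:

def cost_per_point(loaded_salaries_total, points_completed):
    # e.g. $240,000 over three iterations / 60 points = $4,000 per point
    return loaded_salaries_total / points_completed

def forecast_budget(cpp, total_points, other_expenses=0):
    return cpp * total_points + other_expenses

cpp = cost_per_point(240_000, 60)                   # 4000.0
release_1 = forecast_budget(cpp, total_points=100)  # hypothetical first-release scope
total = release_1 * (1 + 0.20 + 0.05)               # releases 2 and 3 assumed at +20% and +5%
print(cpp, release_1, total)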
ACTIVITY: 08

Study boilerplate and present necessary characteristics of boilerplate for
large and small projects

The term boilerplate refers to standardized text, copy, documents, methods, or procedures
that may be used over again without making major changes to the original. A boilerplate is
commonly used for efficiency and to increase standardization in the structure and language
of written or digital documents. This includes contracts, investment prospectuses, and bond
indentures. In the field of contract law, documents contain boilerplate language – language
that is considered generic or standard in contracts.

How Boilerplates Work

Boilerplate is any text, documentation, or procedures that can be reused more than once in a
new context without any substantial changes to the original. Boilerplates are commonly used
online and in written documents by a variety of entities including corporations, legal firms,
and medical facilities. Users can make slight changes to the language or certain portions of
the text to tailor a document for different uses. For instance, a media release contains
boilerplate language at the bottom, which is usually information about the company or
product, and can be updated for different situations before being disseminated to the public.
The term is also commonly used in the information technology industry, referring to coding
that can be created and reused over and over again. In this case, the IT specialist only has to
rework some of the code to fit into the current need, without making major changes to the
original structure.
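For instance, a team might keep a small entry-point template and rework only the marked parts for each new project. A minimal, hypothetical illustration in Python (not taken from any specific framework):

# boilerplate_cli.py – a reusable command-line entry-point template.
# Only the parts marked TODO change from project to project.
import argparse
import logging
import sys

def main(argv=None):
    parser = argparse.ArgumentParser(description="TODO: project description")
    parser.add_argument("--verbose", action="store_true", help="enable debug logging")
    args = parser.parse_args(argv)
    logging.basicConfig(level=logging.DEBUG if args.verbose else logging.INFO)
    # TODO: project-specific logic goes here
    return 0

if __name__ == "__main__":
    sys.exit(main())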

History of Boilerplate

In the 19th century, a boilerplate referred to a plate of steel used as a template in the
construction of steam boilers. These standardized metal plates reminded editors of the often
trite and unoriginal work that ad writers and others sometimes submitted for publication.
The legal profession began using the term boilerplate as early as the mid-1950s, when an
article in the Bedford Gazette criticized boilerplates because they often included fine
print designed to skirt the law. Businesses now use boilerplate clauses in contracts, purchase
agreements, and other formal documents. Boilerplate clauses are designed to protect
businesses from making errors or legal mistakes in the language. The wording of these
passages is generally not up for negotiation with customers, who will often sign boilerplate
documents without actually reading or understanding them. This type of boilerplate, written
by a party with superior bargaining power and presented to a weaker party, is often called
an adhesion contract in the legal profession. Courts may set aside provisions of such
contracts if they find them coercive or unfair.

Necessary characteristics of boilerplate for large projects (production ready)

• Good and readable documentation
• Code structure with a deeper abstraction level
• Follows a proper coding standard
• Has a CLI tool (for rapid prototyping and setup)
• Scalable
• Easy testing tools
• Necessary API modules
• Support for internationalization and localization
• Code splitting
• Server and client code for setup
• Proper navigation and routing structure

After all these minimum specs, you can start editing and altering the code in order to build
your project. Some big tech companies even build their own boilerplates and reuse them
across similar projects over time.

Boilerplate for smaller projects (Scaffolding)

These types of boilerplates are generally “starter kits” or, in professional terms,
“scaffolding”. Their main target users are novice developers or new early adopters.
They focus on fast prototyping by creating only the elements necessary for new
projects. They offer less functionality and are not meant to scale over time or across projects.
Their code structure is not much expanded and doesn’t involve a deep abstraction layer, as
users only need to build core features. This eliminates the need for extra utilities.
ACTIVITY: 09
1. Identify different DevOps tools and list features
2. Study and report OWASP coding guidelines
3. Learn and report Twelve Factor App methodologies
4. Identify different version control and configuration management tools
and report their market share

DevOps Tools - An Overview


Numerous statistics from studies conducted over the years strongly support the growing
adoption of DevOps tools. In 2019, the IT & telecom sector in the U.S. held around 30%
market share due to the increasing usage of DevOps tools to supplement Network Function
Virtualization (NFV) technologies and manage container-based networks. Also, by the
end of 2019, the worldwide revenue of the DevOps software tools market totaled USD 8.5
billion. Even in the current pandemic-induced slowdown, IDC found that the DevOps
software tools market has shown positive single-digit growth in 2020 and 2021 so far.
DevOps tools ensure transparency, automation, and collaboration stay at the forefront of your
value stream. These tools facilitate ways for effective sharing and exchange of information
and technical know-how between all stakeholders be it development, operations, security or
business teams for effective product output.

List of DevOps Tools


DevOps tools help firms to resolve some of the challenges that come with the implementation
of DevOps practices. However, there is no “one-size-fits-all” solution available out there. As
a result, there is a wide variety of DevOps tools for every requirement.
We have already discussed how DevOps benefits your business and listed popular tools in
our previous article – "A Guide to DevOps". In this article, we have developed
a comprehensive list of DevOps tools and divided them into different categories. Let's have a
look:

1. Version Control tools


GitHub: GitHub is considered one of the largest and most advanced development
platforms in the world. Millions of developers and companies build, ship, and maintain their
software on GitHub. Some of its salient features are:
 Collaborative Coding
 Automation / CI & CD
 Security including additional features for enterprise customers
 Project Management
Bitbucket: Bitbucket is a very popular platform, with over 10 million registered users.
Although it is a platform for hosting code, it goes beyond just code management. Teams can
plan projects, collaborate on code, test, and deploy from a single platform. Some of its
features are:
• Tighter Jira and Trello integration.
• Integrated CI/CD to build, test, and deploy.
• Create pull requests and approve code reviews more efficiently.
• Keep your code secured in the cloud with IP whitelisting and 2-step verification.
GitLab: It is an all-in-one DevOps tool for rapid software delivery. It enables teams to
perform all tasks, from planning to SCM to delivery to monitoring and security.
Following are a few of its features:
• Single interface, single conversation thread, and single data store to manage projects
effectively – a single source of truth.
• CI/CD for robust, scalable, end-to-end automation so teams can work together
efficiently – continuous everything.
• Built-in functionality for automated security, code quality, and vulnerability
management; with tight governance, your DevOps speed never slows down.

2. Container Management tools


Docker: Docker is a lightweight tool that aims to simplify and accelerate various workflows
in your SDLC with an integrated approach. A Docker container image is a standalone,
executable package that includes everything you need to run an application. Some of the
primary features that helped it become indispensable among DevOps tools are:
• Standardized packaging format for diverse applications.
• Container runtime that runs on various Linux and Windows Server OSs.
• Developers use Docker to build, test, and collaborate.
• Docker Hub to explore millions of images from community and verified publishers.
• Package, execute, and manage distributed applications with Docker App.
Kubernetes: Kubernetes is an open-source DevOps tool used to automate deployment and
management of containerized applications, and perhaps the most popular container
orchestration tool. Features that differentiate it from other DevOps tools include:
• Make changes to your application or its configuration while monitoring application
health simultaneously – automated rollouts & rollbacks.
• It offers its own IP addresses and a single DNS name for a set of Pods – service
discovery and load balancing.
• Automatically mount the storage system of your choice.
• Self-healing capability.
Mesos: Apache Mesos is a DevOps tool to manage computer clusters. It is a distributed
systems kernel for resource management and scheduling across entire datacenter and cloud
environments. Its features include:
• Offers native support to launch containers with Docker and AppC images.
• Supports cloud-native and legacy applications running in the same cluster with
pluggable scheduling policies.
• Runs cross-platform on Linux, OSX, and Windows.
• Scales easily to tens of thousands of nodes.

3. Application Performance Monitoring tools


Prometheus: Prometheus is an open-source, community-driven performance monitoring
solution. It also supports container monitoring and creates alerts based on time series data.
The solution includes the following features:
 Scaling with the help of functional sharding and federation.
 Numerous client libraries allow easy service instrumentation.
 Powerful reporting capabilities through PromQL.
Dynatrace: Covers all monitoring needs such as application performance, digital experience,
business analytics, AIOps, and infrastructure monitoring. Its features are:
• Automate orchestration with open APIs.
• Provides extensive cloud support and is compatible with all major database technologies.
• The Dynatrace APM solution provides automatic quality checks and KPIs.
• AI-driven problem detection and resolution.
AppDynamics: AppDynamics facilitates real-time insights into application performance.
This DevOps tool monitors and reports on the performance of all transactions flowing
through your application. Its features are:
• Agents are intelligent and know when to capture the details of transactions.
• Solves performance problems through an analytics-driven approach.
• Automatically baselines normal performance and stops false alarms.
• Smart Analytics enables teams to find and fix issues from the very beginning.
• Enables full system-wide data recording.

4. Deployment & Server Monitoring tools


Splunk: Splunk is a monitoring and exploring tool that is used on SaaS and on-premises. It
has features like:
 Monitor and troubleshoot across your infrastructure, including physical, virtual, or in
the cloud.
 Modernize applications for better customer experiences through accelerated
innovation.
 AIOps with Machine Learning for predictive alerting and auto-remediation.
 Improved Efficiency in MTTA with mobile-first, automated incident response.
Datadog: Datadog is a SaaS-based DevOps tool for server and app monitoring in hybrid
cloud environments. It facilitates monitoring of Docker containers as well. Some of its salient
features:
 Seamlessly aggregates metrics and events across the full DevOps stack.
 Offers end-to-end user experience visibility in a single platform.
 Prioritizes business and engineering decisions with user experience metrics.
 Built to give visibility across teams.
Sensu: Sensu is an open-source devops tool for monitoring cloud environments. It is easily
deployable through Puppet and Chef. Following are its features:
 The Sensu Observability Pipeline is integrated, secure and scalable. Collaboration
between development and operations relies on self-service workflows with integrated
authentication solutions.
 Declarative configurations and a service-based approach to monitoring let you define
the monitoring insights that matter most, automating your workflows so you can focus
on what matters.

5. Configuration Management tools


Chef: Chef is an open-source DevOps tool for automation and configuration management,
built in Erlang and Ruby. Its features are:
• "Cookbooks," which facilitate infrastructure coding in domain-specific languages.
• Easily integrated with cloud platforms like Amazon AWS, MS Azure, GCP, etc.
• Configuration as code.
Puppet: Puppet is responsible for managing and automating your infrastructure and complex
workflows in a simplistic manner. Features of this DevOps tool are:
• Automate and simplify critical manual tasks by extracting configuration details across
various operating systems and platforms.
• When you have hundreds or thousands of servers, a mixed environment, or plans to
scale your infrastructure, it becomes difficult to maintain all servers in a certain
state – Puppet helps you save time and money by scaling effectively.
Ansible: Ansible delivers simple IT automation that ends repetitive tasks and frees up teams
for more strategic work, focusing on two key use cases:
• Configuration management – aims to be the simplest solution, designed to be
minimal in nature, consistent, secure, and highly reliable, with a focus on getting
started quickly for administrators, developers, and IT managers.
• Orchestration – Ansible's library of modules and easy extensibility make it simple to
orchestrate different components in different environments, all using one simple
language.

6. CI / Deployment Automation tools


Bamboo: It is a DevOps tool to help you practice Continuous Delivery, from code to
deployment. It gives the ability to tie automated builds, tests, and releases together in a single
workflow. Some of its salient features are:
 Allows users to create multi-stage build plans and set up triggers to start builds upon
commits.
 Parallel automated tests unleash the power of Agile Development and make catching
bugs easier and faster.
 Tighter integration with Jira, Bitbucket.
Jenkins: Written in Java, Jenkins is an open-source platform for continuous integration and
continuous delivery that is used to automate your end-to-end release management lifecycle.
Jenkins has emerged as one of the essential DevOps tools because of its features:
 Used as a simple CI server or turned into the CD hub for any project.
 Easily set up and configured via its web interface, which includes on-the-fly error
checks and built-in help.
 Easily distribute work across multiple machines, helping drive builds, tests and
deployments across multiple platforms faster.
IBM UrbanCode: As a deployment automation and release management solution, IBM
UrbanCode enables uninterrupted delivery for any combination of on-premises, mainframe
and cloud applications. Some of its features are:
 Uses an enterprise-optimized solution along with development, testing and release
tools to enhance build management.
 Automates application development, middleware configuration, and database
changes.

7. Test Automation tools


Test.ai: It is an AI-powered automation testing tool to release apps faster and with better
quality. Its AI-Bots:
 Build tests without coding or scripting.
 Accelerate testing to the speed of DevOps.
 Scale testing to any platform, any app.
 Maintain tests automatically and improve quality everywhere.
Ranorex: An all-in-one solution for any type of automated testing, whether it is cross-
browser testing or cross-device testing. Its features include:
• All the tools you need for test automation come in a single license.
• Test on real devices or simulators/emulators.
• Allows simple integration with CI servers, issue tracking tools, and more.
Selenium: Primarily used to automate web applications for testing purposes, but it can also
be used to automate other web-based admin tasks. It has three components:
• Selenium WebDriver allows you to create robust, browser-based regression
automation suites and tests, and helps you scale and distribute scripts across
many environments.
• Selenium IDE is a Chrome and Firefox add-on that enables simple record-and-
playback of interactions with the browser.
• Selenium Grid for scaling your testing efforts by running tests on several machines
and managing multiple environments from a central point.

8. Artifact Management tools


Sonatype NEXUS: Claiming to be the world's #1 repository manager, Sonatype efficiently
distributes parts and containers to developers, acting as a single source of truth for all of your
components, binaries, and build artifacts. Its features are:
 Offers universal support for all popular build tools.
 Efficiency and flexibility to empower development teams.
JFrog Artifactory: Functions as the single source of truth for all container images,
packages, and Helm charts as they move across the entire DevOps pipeline. Its features are:
 Scale with active/active clustering and multi-site replication for your DevOps setup.
 Allows you to choose the tool stack and integrates with your environment.
 Release faster and automate your pipeline via powerful REST APIs.
CloudRepo: Used for managing, sharing and distributing private Maven and Python
repositories.
 Stores the repositories across multiple servers to ensure high availability.
 Easily provide or restrict access to your clients.
 Integrates with all major CI tools.

9. Codeless Test Automation tools


AccelQ: AccelQ leads the codeless test automation space among DevOps tools. It is a
powerful codeless test automation tool that allows testers to freely develop test logic
without worrying about programming syntax:
• Follows a design-first approach and enforces modularity and reusability in the
development of test assets effortlessly.
• Handles iframes and other dynamic controls.
• Supports advanced interactions and logic development capabilities.
Appvance: This AI- and ML-powered autonomous testing platform performs end-to-end
testing along with ML-assisted codeless scripting. Its features are:
 A test automation system with level-5 autonomy.
 Self-healing scripts & AI-generated tests for complete application coverage and
validations with 90% less effort.
 Powers continuous testing in your DevOps environment.
Testim.io: AI-based UI Testing to help you run tests that deliver super-fast authoring that
increases coverage and quality. It helps in your DevOps Journey by:
 Integrating with tools like Saucelabs, Jira, and Github.
 Eliminating flaky tests and reduces maintenance.
 Pinpointing root cause to fix bugs and release faster.
 Efficiently expanding testing operations with control, management, and insights.

The OWASP Security Knowledge Framework is an open source web application
that explains secure coding principles in multiple programming languages. The goal of
OWASP-SKF is to help you learn and integrate security by design in your software
development and build applications that are secure by design.

OWASP provides a secure coding practices checklist that includes 14 areas to consider in
your software development life cycle. Of those secure coding practices, we’re going to focus
on the top eight secure programming best practices to help you protect against vulnerabilities.
1. Security by Design
2. Password Management
3. Access Control
4. Error Handling and Logging
5. System Configuration
6. Threat Modeling
7. Cryptographic Practices
8. Input Validation and Output Encoding

Security by Design

Security needs to be a priority as you develop code, not an afterthought. Organizations may
have competing priorities where software engineering and coding are concerned. Following
software security best practices can conflict with optimizing for development speed.
However, a “security by design” approach that puts security first tends to pay off in the long
run, reducing the future cost of technical debt and risk mitigation. An analysis of your source
code should be conducted throughout your software development life cycle (SDLC), and
security automation should be implemented.

Password Management

Passwords are a weak point in many software systems, which is why multi-factor
authentication has become so widespread. Nevertheless, passwords are the most common
security credential, and following secure coding practices limits risk. You should require all
passwords to be of adequate length and complexity to withstand any typical or common
attacks. OWASP suggests several coding best practices for passwords, including:
• Storing only salted cryptographic hashes of passwords and never storing plain-text
passwords.
• Enforcing password length and complexity requirements.
• Disabling password entry after multiple incorrect login attempts.
We have also written about password expiration policies and whether they are a security best
practice in a modern business environment.
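As an illustration of the first two suggestions, here is a minimal sketch using only Python's standard library; the PBKDF2 parameters and the 12-character minimum are indicative assumptions, not a vetted policy:

import hashlib
import hmac
import os

MIN_LENGTH = 12  # enforce length/complexity rules before hashing

def hash_password(password):
    # Store only the salt and the derived hash, never the plain-text password.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison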

Access Control

Take a “default deny” approach to sensitive data. Limit privileges and restrict access to
secure data to only users who need it. Deny access to any user that cannot demonstrate
authorization.
Ensure that requests for sensitive information are checked to verify that the user is authorized
to access it.
Learn more about access controls for remote employees and cloud access management.
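A default-deny check can be sketched in a few lines; the resource and role names below are hypothetical:

# Sketch of a "default deny" authorization check.
ALLOWED_ROLES = {"payroll_report": {"hr_admin", "finance_admin"}}  # hypothetical

def can_access(user_roles, resource):
    allowed = ALLOWED_ROLES.get(resource, set())  # unknown resource -> empty set
    return bool(user_roles & allowed)             # deny unless explicitly allowed

print(can_access({"hr_admin"}, "payroll_report"))  # True
print(can_access({"engineer"}, "payroll_report"))  # False: denied by default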

Error Handling and Logging

Software errors are often indicative of bugs, many of which cause vulnerabilities. Error
handling and logging are two of the most useful techniques for minimizing their impact.
Error handling attempts to catch errors in the code before they result in a catastrophic failure.
Logging documents errors so that developers can diagnose and mitigate their cause.
Documentation and logging of all failures, exceptions, and errors should be implemented on a
trusted system to comply with secure coding standards.
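A minimal sketch of this pattern with Python's standard logging module (the operation inside the try block is a hypothetical placeholder):

import logging

logging.basicConfig(level=logging.INFO)  # in production, send logs to a trusted system
log = logging.getLogger(__name__)

def transfer_funds(account_id, amount):
    try:
        ...  # hypothetical operation that may raise an exception
        return True
    except Exception:
        # Catch the error before it becomes a catastrophic failure, and record
        # enough detail (without sensitive data) for developers to diagnose it.
        log.exception("transfer failed: account=%s amount=%s", account_id, amount)
        return False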

System Configuration

Clear your system of any unnecessary components and ensure all working software is
updated with current versions and patches. If you work in multiple environments, make sure
you’re managing your development and production environments securely.
Outdated software is a major source of vulnerabilities and security breaches. Software
updates include patches that fix vulnerabilities, making regular updates one of the most vital,
secure coding practices. A patch management system may help your business to keep on top
of updates.

Threat Modeling

Document, locate, address, and validate are the four steps to threat modeling. To securely
code, you need to examine your software for areas susceptible to increased threats of attack.
Threat modeling is a multi-stage process that should be integrated into the software lifecycle
across development, testing, and production.

Cryptographic Practices

Encrypting data with modern cryptographic algorithms and following secure key
management best practices increases the security of your code in the event of a breach.

Input Validation and Output Encoding

These secure coding standards are self-explanatory in that you need to identify all data inputs
and sources and validate those classified as untrusted. You should utilize a standard routine
for output encoding and input validation.
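Both practices can be sketched with the standard library alone; the whitelist pattern below is an illustrative choice, not a universal rule:

import html
import re

USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,32}$")  # whitelist validation

def validate_username(raw):
    # Reject bad input outright rather than trying to "fix" it.
    if not USERNAME_RE.fullmatch(raw):
        raise ValueError("invalid username")
    return raw

def render_comment(comment):
    # Encode output for the HTML context so user input can't inject markup.
    return "<p>" + html.escape(comment) + "</p>"

print(render_comment('<script>alert("x")</script>'))  # tags are safely escaped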

1) SolarWinds Server Configuration Monitor

SolarWinds provides a Server Configuration Monitor to detect unauthorized configuration
changes to your servers and applications. It will help you to baseline server and application
configurations on Windows and Linux. It will improve visibility & team accountability and
decrease the troubleshooting time.

Developed by: Network & system engineers.


Type: Licensed Tool
Headquarters: Austin, Texas
Initial Release: 2018
Stable Release: 2019.4
Operating System: Windows
Price: Starts at $1803
Annual Revenue: $833.1M
Employees: 1001 to 5000 employees

The solution is for multiple projects, easy to understand, and offers affordable licensing.
Prominent Features:
 SolarWinds Server Configuration Monitor provides alerts and reports for deviations
from the baseline in almost real-time.
 It can track server and application changes.
 It has features to spot the differences between configs.
 It has enhanced change auditing capabilities by monitoring the script outputs.
Pros:
 The tool provides the features to help you decrease the troubleshooting time.
 It provides the facility of hardware and software inventory tracking and hence you
will have an up-to-date list of hardware and software assets.
Cons:
• As per reviews, it takes some time to get the hang of the tool.

2) Auvik
Auvik is the provider of cloud-based network management tools. These tools offer true
network visibility and control. It provides real-time network mapping & inventory, automated
config backup & restore on network devices, deep insights of network traffic, and automated
network monitoring. It helps with managing the network from anywhere you are.

Developed By: Auvik Networks Inc.


Type: Licensed tool
Headquarters: Waterloo, Ontario
Initial Release: 2014
Operating System: Web-based
Price:
 Get a quote for Essentials and Performance plans.
 As per reviews, the price starts at $150 per month.
 Free trial available.
Annual Revenue: $25 Million
Employees: 51-200 employees
Users: Fortinet, Dell Technologies, Palo Alto Networks, SonicWall, etc.
Features of Auvik:
 Configuration management
 Automated network discovery, mapping, and inventory.
 Network monitoring & alerting.
 Application visibility powered by machine learning.
 Syslog search, filter, export capabilities, etc.
Pros:
 Auvik is a cloud-based solution.
 It offers the functionalities for automating the configuration backup & recovery.
 It provides AES 256 encryption to network data.
 It is easy to use.
Cons:
 No such cons to mention.
ACTIVITY: 10

Compare and contrast containerization and virtualization and identify the
importance of these in software development.
Identify container providers.

Virtualization and containerization are the two most frequently used mechanisms to host
applications in a computer system. Virtualization uses the notion of a virtual machine as the
fundamental unit. Containerization, on the other hand, uses the concept of a container. Both
of these technologies play a crucial role and have their merits and demerits.

Virtualization: Virtualization helps us to create software-based or virtual versions of a
computer resource. These computer resources can include computing devices, storage,
networks, servers, or even applications. It allows organizations to partition a single physical
computer or server into several virtual machines (VM). Each VM can then interact
independently and run different operating systems or applications while sharing the resources
of a single computer.

How Does Virtualization Work?


Hypervisor software facilitates virtualization. A hypervisor sits on top of an operating system.
But, we can also have hypervisors that are installed directly onto the hardware. Hypervisors
take physical resources and divide them up so that virtual environments can use them. When
a user or program issues an instruction to the VM that requires additional resources from the
physical environment, the hypervisor relays the request to the physical system and caches the
changes. There are two types of hypervisors, Type 1 (Bare Metal) and Type 2 (Hosted).
The main feature of virtualization is that it lets you run different operating systems on the
same hardware. Each virtual machine’s operating system (guest OS) does all the necessary
start-up activities such as bootstrapping, loading the kernel, and so on. However, each guest
OS is controlled through elevated security measures so that they don’t acquire full access to
the underlying OS.

Containerization: Containerization is a lightweight alternative to virtualization. This
involves encapsulating an application in a container with its own operating environment.
Thus, instead of installing an OS for each virtual machine, containers use the host OS.

How Does Containerization Work?


Each container is an executable package of software that runs on top of a host OS. A host can
support many containers concurrently. For example, this setup suits a microservice
architecture, where each container runs as a minimal, resource-isolated process
that other processes can't access.

The preceding diagram demonstrates the layout of containerized architecture. We can
consider a container as the top layer of a multilayered cake:

1. At the bottom of the layer, there are physical infrastructures such as CPU, disk
storage, and network interfaces.
2. Above that, there is the host OS and its kernel. The kernel acts as the bridge between
the software of the OS and the hardware resources.
3. The container engine and its minimal guest OS sit on top of the host OS.
4. At the very top, there are binaries and libraries for each application, and the apps that
run in their isolated user spaces.
Comparison of virtualization and containerization by area:

Isolation
- Virtualization: Provides complete isolation from the host operating system and the other VMs.
- Containerization: Typically provides lightweight isolation from the host and other containers, but doesn't provide as strong a security boundary as a VM.

Operating System
- Virtualization: Runs a complete operating system including the kernel, thus requiring more system resources such as CPU, memory, and storage.
- Containerization: Runs the user-mode portion of an operating system, and can be tailored to contain just the needed services for your app, using fewer system resources.

Guest compatibility
- Virtualization: Runs just about any operating system inside the virtual machine.
- Containerization: Runs on the same operating system version as the host.

Deployment
- Virtualization: Deploy individual VMs by using hypervisor software.
- Containerization: Deploy individual containers by using Docker, or deploy multiple containers by using an orchestrator such as Kubernetes.

Persistent storage
- Virtualization: Use a Virtual Hard Disk (VHD) for local storage for a single VM, or a Server Message Block (SMB) file share for storage shared by multiple servers.
- Containerization: Use local disks for local storage for a single node, or SMB for storage shared by multiple nodes or servers.

Load balancing
- Virtualization: Virtual machine load balancing is done by running VMs in other servers in a failover cluster.
- Containerization: An orchestrator can automatically start or stop containers on cluster nodes to manage changes in load and availability.

Networking
- Virtualization: Uses virtual network adapters.
- Containerization: Uses an isolated view of a virtual network adapter, thus providing a little less virtualization.

Top 10 Container Management Software:

1) Docker: Docker is a containerization software that performs operating-system-level
virtualization. The developer of this software is Docker, Inc. The initial release of this
software happened in the year 2013. It is written in the ‘Go’ programming language. It is
freemium software as a service and has Apache License 2.0 as the source code license.
2) AWS Fargate: AWS Fargate happens to be a compute engine for Amazon ECS and
EKS which lets you execute containers without any need to manage the servers or clusters.
Using AWS Fargate, you now don’t need to provision, configure, and scale cluster virtual
machines to execute containers. This, in turn, eliminates the requirement to select server
types, determine at what time to scale your clusters or optimize cluster packing.

3) Google Kubernetes Engine: Google Kubernetes Engine is a managed, production-ready
infrastructure for implementing containerized applications. This tool was launched in
the year 2015. It totally removes the need to install, handle, and operate your own Kubernetes
clusters.

4) Amazon ECS: Amazon ECS (an acronym for Elastic Container Service) is an
orchestration service that supports Docker containers and permits you to effortlessly execute
and scale containerized applications on Amazon AWS.
This service is highly scalable and high performing. It eradicates the requirement to install
and manage your own container orchestration software and to manage and scale clusters of
virtual machines.
5) LXC: LXC is the acronym for Linux Containers, which is a type of OS-level virtualization
method for executing numerous isolated Linux systems (containers) on a control host
employing a single Linux kernel. This is an open source tool under the GNU LGPL License.
It is available on the GitHub Repository. This software is written in C, Python, Shell, and
Lua.

6) Container Linux by CoreOS: CoreOS Container Linux is an open source and
lightweight operating system based on the Linux kernel, designed to containerize
your apps. It offers an infrastructure for easy clustered deployments while concentrating on
automation, security, reliability, and scalability. It comes under Apache License 2.0 and is
available on GitHub (CoreOS).

7) Microsoft Azure: Microsoft Azure offers different container services for your various
container needs.

8) Google Cloud Platform: Google cloud provides you with different options to choose
from for running the containers. These are Google Kubernetes Engine (for container cluster
management), Google Compute Engine (for Virtual Machines and CI/CD pipeline) and
Google App Engine Flexible Environment (for containers on fully-managed PaaS). We have
already discussed the Google Kubernetes Engine earlier in this article. We will now discuss
the Google Compute Engine and Google App Engine Flexible Environment.

9) Portainer: Portainer is an open source, lightweight container management user interface
that permits you to effortlessly handle your Docker hosts or Swarm clusters. It supports
Linux, Windows, and OSX platforms. It comprises a single container that can be executed on
any Docker engine.
10) Apache Mesos: Developed by Apache Software Foundation, Apache Mesos is an open
source project to handle computer clusters.
Version 1 of this software was released in 2016. It is written in C++ programming language
and has Apache License 2.0. It employs Linux Cgroups technology in order to facilitate
isolation for CPU, memory, I/O and file system.
ACTIVITY: 11

Study and prepare report on testing tools.


Compare manual and automation testing.

Software testing tools are tools used for testing software. They are often used to assure
firmness, thoroughness, and performance in testing software products. Unit testing and
subsequent integration testing can be performed with software testing tools. These tools are
used to fulfill all the requirements of planned testing activities, and many are available as
commercial products. The quality of the software is evaluated by software testers with the
help of various testing tools.

Types of Testing Tools:


Software testing is of two types: static testing and dynamic testing. The tools used
during these kinds of testing are named accordingly. Testing tools can be categorized
into two types, which are as follows:
1. Static Test Tools
2. Dynamic Test Tools

These are explained in detail below:

1. Static Test Tools: Static test tools are used to work on static testing processes. In
testing through these tools, a typical approach is taken. These tools do not test the real
execution of the software, and no particular input or output is required. Static test
tools consist of the following:
• Flow analyzers: Flow analyzers ensure flexibility in data flow from input to output.
• Path tests: These find unused code and code with inconsistencies in the software.
• Coverage analyzers: Coverage analyzers ensure that all logic paths in the software
are covered.
• Interface analyzers: These check out the consequences of passing variables and data
between the modules.

2. Dynamic Test Tools: The dynamic testing process is performed by dynamic test tools.
These tools test the software with existing or current data. Dynamic test tools comprise
the following:
• Test driver: A test driver provides the input data to a module-under-test (MUT).
• Test beds: These display the source code along with the program under execution at
the same time.
• Emulators: Emulators provide response facilities that are used to imitate parts
of the system not yet developed.
• Mutation analyzers: These are used for testing the fault tolerance of the system by
deliberately introducing errors into the code of the software.
What Is Manual Testing?
Manual testing is the process in which QA analysts execute tests one-by-one in an individual
manner. The purpose of manual testing is to catch bugs and feature issues before a software
application goes live. When manually testing, the tester validates the key features of a
software application. Analysts execute test cases and develop summary error reports without
specialized automation tools.

What Is Automation Testing?


Automation testing is the process in which testers utilize tools and scripts to automate testing
efforts. Automation testing helps testers execute more test cases and improve test
coverage. When comparing manual vs. automation testing, manual takes longer; automated
testing is more efficient.

How Manual Testing Works


Manual testing is very hands-on. It requires analysts and QA engineers to be highly involved
in everything from test case creation to actual test execution.

How Automated Testing Works


Automation testing involves testers writing test scripts that automate test execution. (A test
script is a set of instructions to be performed on target platforms to validate a feature or
expected outcome.)
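For example, a minimal automated test script using Python's built-in unittest framework (the discount function is a hypothetical feature under test):

import unittest

def apply_discount(price, percent):
    # Hypothetical feature under test.
    return round(price * (1 - percent / 100), 2)

class DiscountTests(unittest.TestCase):
    def test_ten_percent_discount(self):
        self.assertEqual(apply_discount(100.0, 10), 90.0)

    def test_no_discount(self):
        self.assertEqual(apply_discount(50.0, 0), 50.0)

if __name__ == "__main__":
    unittest.main()  # a CI server can run this on every commit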

Manual vs. automation testing by aspect:

Test execution
- Manual: Done manually by QA testers.
- Automation: Done automatically using automation tools and scripts.

Test efficiency
- Manual: Time-consuming and less efficient.
- Automation: More testing in less time and greater efficiency.

Types of tasks
- Manual: Entirely manual tasks.
- Automation: Most tasks can be automated, including real user simulations.

Test coverage
- Manual: Difficult to ensure sufficient test coverage.
- Automation: Easy to ensure greater test coverage.
ACTIVITY: 12

Study and prepare report on widely used software metrics.

A software metric is a measure of software characteristics that are quantifiable or countable.


Software metrics are important for many reasons, including measuring software performance,
planning work items, measuring productivity, and many other uses.
Within the software development process, there are many metrics that are all related to each
other. Software metrics are related to the four functions of management: Planning,
Organization, Control, or Improvement.

This report covers the following:


 Benefits of Software Metrics
 How Software Metrics Lack Clarity
 How to Track Software Metrics
 Examples of Software Metrics

Benefits of Software Metrics:


The goal of tracking and analyzing software metrics is to determine the quality of the current
product or process, improve that quality and predict the quality once the software
development project is complete. On a more granular level, software development managers
are trying to:
 Increase return on investment (ROI)
 Identify areas of improvement
 Manage workloads
 Reduce overtime
 Reduce costs

These goals can be achieved by providing information and clarity throughout the organization
about complex software development projects. Metrics are an important component of quality
assurance, management, debugging, performance, and estimating costs, and they’re valuable
for both developers and development team leaders:
 Managers can use software metrics to identify, prioritize, track and communicate any
issues to foster better team productivity. This enables effective management and
allows assessment and prioritization of problems within software development
projects. The sooner managers can detect software problems, the easier and less
expensive the troubleshooting process.
 Software development teams can use software metrics to communicate the status of
software development projects, pinpoint and address issues, and monitor, improve on,
and better manage their workflow.

Software metrics offer an assessment of the impact of decisions made during software
development projects. This helps managers assess and prioritize objectives and performance
goals.
How Software Metrics Lack Clarity
Terms used to describe software metrics often have multiple definitions and ways to count or
measure characteristics. For example, lines of code (LOC) is a common measure of software
development. But there are two ways to count each line of code:
 One is to count each physical line that ends with a return. But some software
developers don’t accept this count because it may include lines of “dead code” or
comments.
 To get around those shortfalls and others, each logical statement could be considered
a line of code.
Thus, a single software package could have two very different LOC counts depending on
which counting method is used. That makes it difficult to compare software simply by lines
of code or any other metric without a standard definition, which is why establishing a
measurement method and consistent units of measurement to be used throughout the life of
the project is crucial.
There is also an issue with how software metrics are used. If an organization uses
productivity metrics that emphasize volume of code and errors, software developers could
avoid tackling tricky problems to keep their LOC up and error counts down. Software
developers who write a large amount of simple code may have great productivity numbers
but not great software development skills. Additionally, software metrics shouldn’t be
monitored simply because they’re easy to obtain and display – only metrics that add value to
the project and process should be tracked.

How to Track Software Metrics


Software metrics are great for management teams because they offer a quick way to track
software development, set goals and measure performance. But oversimplifying software
development can distract software developers from goals such as delivering useful software
and increasing customer satisfaction. Of course, none of this matters if the measurements that
are used in software metrics are not collected or the data is not analyzed. The first problem is
that software development teams may consider it more important to actually do the work than
to measure it. It becomes imperative to make measurement easy to collect or it will not be
done. Make the software metrics work for the software development team so that it can work
better. Measuring and analyzing doesn’t have to be burdensome or something that gets in the
way of creating code. Software metrics should have several important characteristics. They
should be:
 Simple and computable
 Consistent and unambiguous (objective)
 Use consistent units of measurement
 Independent of programming languages
 Easy to calibrate and adaptable
 Easy and cost-effective to obtain
 Able to be validated for accuracy and reliability
 Relevant to the development of high-quality software products

This is why software development platforms that automatically measure and track metrics are
important. But software development teams and management run the risk of having too much
data and not enough emphasis on the software metrics that help deliver useful software to
customers.
The technical question of how software metrics are collected, calculated and reported are not
as important as deciding how to use software metrics. Patrick Kua outlines four guidelines for
an appropriate use of software metrics:

1. Link software metrics to goals.


Often sets of software metrics are communicated to software development teams as goals. So
the focus becomes:
 Reducing the lines of codes
 Reducing the number of bugs reported
 Increasing the number of software iterations
 Speeding up the completion of tasks
But focusing on those metrics as targets does not necessarily help software developers reach
more important goals, such as improving software usefulness and user experience.

For example, size-based software metrics often measure lines of code to indicate coding
complexity or software efficiency. In an effort to reduce the code’s complexity, management
may place restrictions on how many lines of code are to written to complete functions. In an
effort to simplify functions, software developers could write more functions that have fewer
lines of code to reach their target but do not reduce overall code complexity or improve
software efficiency. When developing goals, management needs to involve the software
development teams in establishing goals, choosing software metrics that measure progress
toward those goals and align metrics with those goals.

2. Track trends, not numbers.


Software metrics are very seductive to management because complex processes are
represented as simple numbers. And those numbers are easy to compare to other numbers. So
when a software metric target is met, it is easy to declare success. Not reaching that number
lets software development teams know they need to work more on reaching that target.
These simple targets do not offer as much information on how the software metrics are
trending. Any single data point is not as significant as the trend it is part of. Analysis of why
the trend line is moving in a certain direction or at what rate it is moving will say more about
the process. Trends also will show what effect any process changes have on progress. The
psychological effects of observing a trend – similar to the Hawthorne Effect, or changes in
behavior resulting from awareness of being observed – can be greater than focusing on a
single measurement. If the target is not met, that, unfortunately, can be seen as a failure. But a
trend line showing progress toward a target offers incentive and insight into how to reach that
target.

3. Set shorter measurement periods.


Software development teams want to spend their time getting the work done not checking if
they are reaching management established targets. So a hands-off approach might be to set
the target sometime in the future and not bother the software team until it is time to tell them
they succeeded or failed to reach the target. By breaking the measurement periods
into smaller time frames, the software development team can check the software metrics —
and the trend line — to determine how well they are progressing. Yes, that is an interruption,
but giving software development teams more time to analyze their progress and change
tactics when something is not working is very productive. The shorter periods of
measurement offer more data points that can be useful in reaching goals, not just software
metric targets.

4. Stop using software metrics that do not lead to change.


We all know that repeating actions without change while expecting
different results is the definition of insanity. But repeating the same work, without
adjustment, when it does not achieve goals is the definition of managing by metrics.
software developers keep doing something that is not getting them closer to goals such as
better software experiences? Because they are focusing on software metrics that do not
measure progress toward that goal. Some software metrics have no value when it comes to
indicating software quality or team workflow. Management and software development teams
need to work on software metrics that drive progress towards goals and provide verifiable,
consistent indicators of progress.

Examples of Software Metrics


There is no standard or definition of software metrics that have value to software
development teams. And software metrics have different value to different teams. It depends
on what are the goals for the software development teams.
As a starting point, here are some software metrics that can help developers track their
progress.
Agile process metrics: Agile process metrics focus on how agile teams make decisions and
plan. These metrics do not describe the software, but they can be used to improve the
software development process.
Lead time: Lead time quantifies how long it takes for ideas to be developed and delivered as
software. Lowering lead time is a way to improve how responsive software developers are to
customers.
Cycle time: Cycle time describes how long it takes to change the software system and
implement that change in production.
Team velocity: Team velocity measures how many software units a team completes in an
iteration or sprint. This is an internal metric that should not be used to compare software
development teams. The definition of deliverables changes for individual software
development teams over time and the definitions are different for different teams.
Open/close rates: Open/close rates are calculated by tracking production issues reported in a
specific time period. It is important to pay attention to how this software metric trends.
Production: Production metrics attempt to measure how much work is done and determine
the efficiency of software development teams. The software metrics that use speed as a factor
are important to managers who want software delivered as fast as possible.
Active days: Active days is a measure of how much time a software developer contributes
code to the software development project. This does not include planning and administrative
tasks. The purpose of this software metric is to assess the hidden costs of interruptions.
Assignment scope: Assignment scope is the amount of code that a programmer can maintain
and support in a year. This software metric can be used to plan how many people are needed
to support a software system and compare teams.
Efficiency: Efficiency attempts to measure the amount of productive code contributed by a
software developer. The amount of churn shows the lack of productive code. Thus a software
developer with a low churn could have highly efficient code.
Code churn: Code churn represents the number of lines of code that were modified, added or
deleted in a specified period of time. If code churn increases, then it could be a sign that the
software development project needs attention.
Impact: Impact measures the effect of any code change on the software development project.
A code change that affects multiple files could have more impact than a code change
affecting a single file.
Mean time between failures (MTBF) and mean time to recover/repair (MTTR)
Both metrics measure how the software performs in the production environment. Since
software failures are almost unavoidable, these software metrics attempt to quantify how well
the software recovers and preserves data.

Application crash rate (ACR): Application crash rate is calculated by dividing how many
times an application fails (F) by how many times it is used (U).
ACR = F/U
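In code, the calculation is a simple ratio with a guard against division by zero; the counts below are hypothetical:

def application_crash_rate(failures, uses):
    return failures / uses if uses else 0.0

print(application_crash_rate(3, 1200))  # 0.0025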
Security metrics: Security metrics reflect a measure of software quality. These metrics need
to be tracked over time to show how software development teams are developing security
responses.
Endpoint incidents: Endpoint incidents are how many devices have been infected by a virus
in a given period of time.
Mean time to repair (MTTR): Mean time to repair in this context measures the time from
the security breach discovery to when a working remedy is deployed.
Size-oriented metrics: Size-oriented metrics focus on the size of the software and are
usually expressed as kilo lines of code (KLOC). It is a fairly easy software metric to collect
once decisions are made about what constitutes a line of code. Unfortunately, it is not useful
for comparing software projects written in different languages. Some examples include:
 Errors per KLOC
 Defects per KLOC
 Cost per KLOC
Function-oriented metrics: Function-oriented metrics focus on how much functionality
software offers. But functionality cannot be measured directly. So function-oriented software
metrics rely on calculating the function point (FP) — a unit of measurement that quantifies
the business functionality provided by the product. Function points are also useful for
comparing software projects written in different languages. Function points are not an easy
concept to master and methods vary. This is why many software development managers and
teams skip function points altogether. They do not perceive function points as worth the time.
Errors per FP or Defects per FP: These software metrics are used as indicators of an
information system’s quality. Software development teams can use these software metrics to
reduce miscommunications and introduce new control measures.
Defect Removal Efficiency (DRE): The Defect Removal Efficiency is used to quantify how
many defects were found by the end user after product delivery (D) in relation to the errors
found before product delivery (E). The formula is:
DRE = E / (E+D)
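The ratio-style metrics in this section (errors, defects, or cost per KLOC, and DRE) are easy to compute once the counting method is fixed; a sketch with hypothetical figures:

def per_kloc(count, lines_of_code):
    # e.g. errors per KLOC, defects per KLOC, cost per KLOC
    return count / (lines_of_code / 1000)

def defect_removal_efficiency(e, d):
    # e = errors found before delivery, d = defects found after delivery
    return e / (e + d) if (e + d) else 1.0

loc = 48_000                              # hypothetical project size
print(per_kloc(96, loc))                  # errors per KLOC -> 2.0
print(per_kloc(240_000, loc))             # cost per KLOC -> 5000.0
print(defect_removal_efficiency(90, 10))  # DRE -> 0.9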
ACTIVITY: 13

PROBLEM DEFINITION: Identify different quality tools and report
their features and usage

Most organizations use quality tools for various purposes related to controlling and assuring
quality. Although a good number of quality tools are specific to certain domains, fields, and
practices, some quality tools can be used across such domains. These quality tools are quite
generic and can be applied to any condition. There are seven basic quality tools used in
organizations. These tools can provide much information about problems in the organization,
assisting in deriving solutions for them. A number of these quality tools come with a price
tag, but a brief training, mostly self-training, is sufficient for someone to start using the
tools.
Let us have a look at the seven basic quality tools in brief.
1. Flow Charts: This is one of the basic quality tools that can be used for analyzing a
sequence of events. The tool maps out a sequence of events that take place
sequentially or in parallel. A flow chart can be used to understand a complex
process in order to find the relationships and dependencies between events. You can
also get a brief idea about the critical path of the process and the events involved in
the critical path. Flow charts can be used in any field to illustrate complex processes
in a simple way. There are specific software tools developed for drawing flow charts,
such as MS Visio, and you can also download some of the open-source flow chart
tools developed by the open-source community.
2. Histogram: A histogram is used for illustrating the frequency distribution of a
variable. It is a chart with columns that shows how the data are distributed around
the mean. If the distribution is normal, the graph takes the shape of a bell curve; if
not, it may take different shapes depending on the condition of the distribution. A
histogram always relates two quantities: the variable being measured and its
frequency. Consider the following example: a histogram of morning attendance of a
class would show the time of day on the X-axis and the number of students arriving
in each interval on the Y-axis. A minimal plotting sketch follows.
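This sketch uses matplotlib with hypothetical arrival times; the bin count is arbitrary.

```python
import matplotlib.pyplot as plt

# Hypothetical arrival times (hour of day) for students in a morning class.
arrival_hours = [8.0, 8.1, 8.2, 8.2, 8.3, 8.4, 8.5, 8.5, 8.6,
                 8.7, 8.8, 8.9, 9.0, 9.1, 9.3, 9.5]

plt.hist(arrival_hours, bins=6, edgecolor="black")
plt.xlabel("Time of day (hour)")   # value bins
plt.ylabel("Number of students")   # frequency
plt.title("Morning attendance")
plt.show()
```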
3. Cause and Effect Diagram: Cause and effect diagrams (Ishikawa or fishbone
diagrams) are used for understanding the causes of organizational or business
problems. Organizations face problems every day, and it is necessary to understand
the causes of these problems in order to solve them effectively. Creating a cause and
effect diagram is usually a team exercise; a brainstorming session is required in
order to come up with an effective diagram. All the main components of a problem
area are listed, and possible causes from each area are listed under them. Then the
most likely causes of the problems are identified for further analysis.
4. Check Sheet: A check sheet can be introduced as the most basic quality tool. A
check sheet is basically used for gathering and organizing data. When this is done
with the help of software packages such as Microsoft Excel, you can derive further
analysis graphs and automate the work through the available macros. Therefore, it is
always a good idea to use a software check sheet for information gathering and
organizing needs. A paper-based check sheet is appropriate only when the gathered
information is used for backup or storage purposes rather than further processing. A
minimal tally sketch follows.
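This sketch uses Python's collections.Counter, with hypothetical defect observations standing in for check-sheet entries:

```python
from collections import Counter

# Hypothetical defect observations recorded on a check sheet over one week.
observations = ["scratch", "dent", "scratch", "misalignment",
                "scratch", "dent", "crack", "scratch"]

tally = Counter(observations)
for defect, count in tally.most_common():
    # Print a simple tally-mark view, most frequent defect first.
    print(f"{defect:<14} {'|' * count}  ({count})")
```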
5. Scatter Diagram: When it comes to presenting the values of two variables, scatter
diagrams are the best choice. A scatter diagram presents the relationship between
two variables by illustrating the results on a Cartesian plane, with one variable on
each axis. Further analysis, such as trend analysis, can then be performed on the
values. A minimal plotting sketch follows.
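This sketch uses matplotlib with hypothetical testing-effort and defect data:

```python
import matplotlib.pyplot as plt

# Hypothetical data: hours of testing vs. defects found per module.
testing_hours = [2, 4, 5, 7, 8, 10, 12, 14]
defects_found = [1, 3, 3, 5, 6, 7, 9, 10]

plt.scatter(testing_hours, defects_found)
plt.xlabel("Testing hours per module")
plt.ylabel("Defects found")
plt.title("Scatter diagram: testing effort vs. defects")
plt.show()
```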
6. Control Charts: The control chart is the best tool for monitoring the performance of
a process. These charts can be used for monitoring any process related to the
functioning of the organization. They allow you to identify the following conditions
of the monitored process (a minimal plotting sketch follows the list):
 Stability of the process
 Predictability of the process
 Identification of common causes of variation
 Special conditions where the monitoring party needs to react
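This sketch draws an individuals-style control chart with 3-sigma limits from hypothetical daily measurements; real control charts choose their limits according to the chart type and subgroup size.

```python
import statistics
import matplotlib.pyplot as plt

# Hypothetical daily measurements of a monitored process (e.g. build time).
samples = [41, 43, 40, 44, 42, 45, 41, 43, 44, 42, 46, 40]

mean = statistics.mean(samples)
sigma = statistics.stdev(samples)
ucl, lcl = mean + 3 * sigma, mean - 3 * sigma  # 3-sigma control limits

plt.plot(samples, marker="o")
plt.axhline(mean, linestyle="-", label="mean")
plt.axhline(ucl, linestyle="--", label="UCL (+3 sigma)")
plt.axhline(lcl, linestyle="--", label="LCL (-3 sigma)")
plt.legend()
plt.title("Control chart")
plt.show()
```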
7. Pareto Charts: Pareto charts are used for identifying and prioritizing a set of issues.
You can chart any number of issues/variables related to a specific concern and
record the number of occurrences. This way you can figure out the parameters that
have the highest impact on the specific concern, which helps you to work on the
priority issues first in order to get the condition under control. A minimal plotting
sketch follows.
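This sketch uses matplotlib with hypothetical complaint categories; the cumulative-percentage line is drawn on a secondary axis.

```python
import matplotlib.pyplot as plt

# Hypothetical complaint categories and their occurrence counts.
issues = {"late delivery": 52, "wrong item": 23, "damaged item": 14,
          "billing error": 7, "other": 4}

labels = list(issues)   # already sorted by frequency, highest first
counts = list(issues.values())
total = sum(counts)
cumulative = [sum(counts[:i + 1]) / total * 100 for i in range(len(counts))]

fig, ax1 = plt.subplots()
ax1.bar(labels, counts)
ax1.set_ylabel("Occurrences")
ax2 = ax1.twinx()       # cumulative-% line on a secondary axis
ax2.plot(labels, cumulative, marker="o", color="tab:red")
ax2.set_ylabel("Cumulative %")
plt.title("Pareto chart")
plt.show()
```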