Study the traffic signal and the importance of rules and processes.
Traffic signal:
Fixed Time
Under fixed-time operation, the traffic signals display green to each approach for the same amount of time every cycle, regardless of the traffic conditions. This may be adequate in heavily congested areas, but where a lightly trafficked side road is included within the sequence it can be very wasteful: in some cycles there are no vehicles waiting, and the time could be better allocated to a busier approach.
Vehicle actuation is one of the most common modes of operation for traffic signals and, as the name suggests, it takes into account variations in vehicle demand on all approaches and adjusts the green time accordingly. Traffic demands are registered through detectors installed either in the carriageway or above the signal heads. The controller then processes these demands and allocates the green time in the most appropriate way. Minimum and maximum green times are specified in the controller and cannot be violated.
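As a rough illustration of how a vehicle-actuated controller might split green time by detected demand while respecting the minimum and maximum green times, here is a minimal Python sketch; the timing values and the proportional rule are hypothetical, not taken from any real controller:

    # A minimal sketch (hypothetical values and rule, not a real controller):
    # green time is allocated to each approach in proportion to detected
    # demand, then clamped to the configured minimum and maximum green times.
    MIN_GREEN = 7            # seconds, illustrative
    MAX_GREEN = 40           # seconds, illustrative
    CYCLE_GREEN_BUDGET = 90  # total green seconds available per cycle

    def allocate_green(demands: dict[str, int]) -> dict[str, float]:
        """Split the cycle's green budget across approaches by demand share."""
        total = sum(demands.values()) or 1  # avoid division by zero
        greens = {}
        for approach, vehicles in demands.items():
            share = CYCLE_GREEN_BUDGET * vehicles / total
            # The controller never violates the configured bounds.
            greens[approach] = max(MIN_GREEN, min(MAX_GREEN, share))
        return greens

    print(allocate_green({"north": 12, "south": 9, "side_road": 0}))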
1. TraMM:
TraMM is a software tool for monitoring and managing the WiTraC traffic signal controller remotely from the Command and Control Centre. TraMM has dedicated Human Machine Interface graphics software to configure junctions, visualize real traffic patterns, and control the traffic signals remotely. It provides options to monitor the data in different visual formats such as trends, charts, reports, and remote animation screens. A dedicated user management system is part of TraMM and embeds application links for users to navigate the overall system. TraMM's centralized junction configuration tool facilitates plan download and upload functionality for configuring junctions remotely. It receives the online junction pattern periodically from the different junction controllers and distributes it to the associated modules for display and reporting.
2. KeySIGNALS:
The UK's favourite traffic signal design software, KeySIGNALS enables engineers to produce traffic signal design drawings in AutoCAD. Utilising a large library of symbols, you can draw, label, and configure detailed traffic signal design plans quickly, ready to be printed at any scale. It also quantifies costs.
What is Flipkart?
Flipkart is an India-based e-commerce company founded in 2007 by Sachin Bansal and Binny Bansal. In August 2018, Walmart acquired a 77% controlling stake. Flipkart deals with a wide range of product categories such as electronics, fashion, and lifestyle products. It also offers attractive discounts to grab customers' attention, and it supports multiple modes of payment, allowing customers to pay according to their convenience.
Step 1: First of all, visit the Flipkart website using the URL www.flipkart.com.
Step 2: At the top of the screen there is a search bar. Type the name of the product you are interested in and click the search button. For example, when I typed 'Thermo steel water bottle', related searches started appearing below the search bar; you can refine your search using these.
Step 3: The search results will appear on your screen. Scroll down to go through the displayed results. At the bottom of the page you can see further result pages for your search, so you can browse through them and select (click) a product you like.
Step 4: When you select a product, it appears with a detailed description along with ADD TO CART and BUY NOW buttons. Scroll down the screen to see the Specifications and Description of the product, and the Ratings & Reviews section to see buyers' reactions to the product. There is also a question-and-answer section where you can raise a query about the product. If you are satisfied with the product details, click the BUY NOW button; if not, go back and search for another product.
Step 5: After you click the BUY NOW button, you have to work through these four sections:
1. Login or SIGNUP
2. Delivery Address
3. Order Summary
4. Payment Option
To log in, enter your mobile number and password and click the CONTINUE button.
Step 6: Enter the address where you want the product to be delivered in the DELIVERY ADDRESS section, then click SAVE AND DELIVER HERE.
Step 7: You will see your order summary. To confirm the order, enter a valid email id and click CONTINUE.
Step 8: Last is the PAYMENT OPTION section: select your mode of payment and pay accordingly.
ACTIVITY: 03
Document the roles and responsibilities of different agile ceremonies.
Product owner
The product owner represents the stakeholders of the project. The role is primarily responsible for setting the direction of product development or project progress. The Product Owner understands the requirements of the project from a stakeholder perspective and has the necessary soft skills to communicate those requirements to the product development team. The Product Owner also understands the long-term business vision and aligns the project with the needs and expectations of all stakeholders. End-user feedback is taken into account to determine the most appropriate next action plans for development throughout the project cycle.
The key responsibilities of a Product Owner include:
Scrum backlog management
Release management
Stakeholder management
Stakeholders
The Stakeholder position may not be directly involved in the product development process, but it represents a range of key roles that influence the decisions and work of the Scrum team.
The stakeholder may be:
The end user of the product
Business executives
Production support staff
Investors
External auditors
Scrum team members from associated projects and teams
ACTIVITY: 04
Cost of Risk:
Cost of risk is the cost of managing risks and incurring losses. The total cost of risk is the sum of all aspects of an organization's operations that relate to risk, including retained (uninsured) losses and related loss adjustment expenses, risk control costs, transfer costs, and administrative costs.
Cost changes that are out of your PMO’s hands are called external cost risks. Although there
is little that can be done about them on a practical level, they still need to be taken into
account. External cost risks on a project could include:
Change in price of materials needed
Regulatory changes requiring extra work
Exchange rate fluctuations
Banking fees and charges being amended
Which costs might increase during a project lifecycle?
Nearly everything that could change or go wrong on your project will have a cost associated
with it. Those costs can come from one or more of the following areas:
Labour – when more work is needed to make changes or to do extra tasks that get added on – known as scope creep – your project will need to pay for overtime, or for new staff or freelancers, to get the job done.
Materials – new requirements can require new software or other project inputs. If an area of the project fails, more materials may also need to be purchased, possibly at a higher cost than the originals.
Equipment – hardware needs may expand to fulfil new demands on a project, requiring
capital investment. Similarly, a broken-down tool in a factory or a worn-out delivery
vehicle increases equipment costs.
Administration – as a project expands, morphs, and changes, it will take more administration from your project manager and likely within your PMO. New risk assessments need to be done, as well as cost and time projections, and stakeholders need to be kept informed.
2. Root Cause Analysis: This is a technique to help project members identify all the risks
that are embedded in the project itself. Conducting a root cause analysis shows the
responsiveness of the team members in risk management. It is normally used once a
problem arises so that the project members can address the root cause of the issue and
resolve it instead of just treating its symptom.
4. Risk Assessment Template for IT: A risk assessment template is usually made for IT
processes in an organization, but it can be implemented in any project in the company. This
assessment gives a list of risks in an orderly fashion. It is a space where all the risks can be
collected in one place. This is helpful when it comes to project execution and tracking risks
that become crises.
5. Probability and Impact Matrix: Project managers can also use the probability and impact
matrix to help in prioritizing risks based on the impact they will have. It helps with resource
allocation for risk management. This technique is a combination of the probability scores and
impact scores of individual risks. After all the calculations are over, the risks are ranked
based on how serious they are. This technique helps put the risk in context with the project
and helps in creating plans for mitigating it.
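As a rough sketch of how the matrix combines the two scores, here is a minimal Python example; the 1-5 scales and the sample risks are hypothetical:

    # A minimal sketch (hypothetical 1-5 scales): each risk gets a
    # probability score and an impact score; their product ranks the risks.
    risks = [
        {"name": "Key supplier delay",  "probability": 4, "impact": 3},
        {"name": "Exchange rate swing", "probability": 2, "impact": 4},
        {"name": "Scope creep",         "probability": 5, "impact": 2},
    ]

    for risk in risks:
        risk["score"] = risk["probability"] * risk["impact"]

    # Highest-scoring risks get mitigation attention and resources first.
    for risk in sorted(risks, key=lambda r: r["score"], reverse=True):
        print(risk["name"], risk["score"])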
6. Risk Data Quality Assessment: When project managers use the risk data quality assessment
method, they utilize all the collected data for identified risks and find details about the risks
that could impact the project. This helps project managers and team members understand the
accuracy and quality of the risk based on the data collected.
7. Variance and Trend Analysis: Just like other control processes in the project, it helps when project managers look for variances between the project's planned schedule and cost and the actual results, to see whether they are aligned. If the variances grow, uncertainty and risk grow with them. This is a good way of monitoring risks while the project is underway, and it becomes easier to tackle problems if project members watch trends regularly to look for variances.
8. Reserve Analysis: While planning the budget for the project, contingency measures and
some reserves should be in place as a part of the budget. This is to keep a safeguard if risks
occur while the project is ongoing. These financial reserves are a backup that can be used to
mitigate risks during the project.
ACTIVITY: 05
The term "Design Thinking" dates back to the 1987 book by Peter Rowe, "Design Thinking." In that book he describes the way that architects and urban planners approach design problems. However, the idea that there is a specific pattern of problem solving in "design thought" came much earlier, in Herbert A. Simon's book "The Sciences of the Artificial," published in 1969. The concept was popularized in the early 1990s by Richard Buchanan in his article "Wicked Problems in Design Thinking."
Design thinking is concerned with solving problems through design. The idea is that the future output of the process will provide a better answer than the one already available or, if nothing is available, something entirely new.
It is an unconstrained methodology where it is possible that the designer (or design team) will
work on many possible solutions at once. It enables designers to consider the problem in
many different ways and speculate on both the past and future of the problem too.
This is in contrast to the scientific method of problem solving which requires a highly-
defined problem which focuses on delivering a single solution.
This difference was first noted by Brian Lawson, a psychologist, in 1972. He conducted an
experiment in which scientists and architects were asked to build a structure from colored
blocks. He provided some basic rules for the project and observed how they approached it.
The scientists looked to move through a simple series of solutions based on the outcome and
entire rule set. The architects, in contrast, simply focused on the desired end-state and then
tested to see if the solution they had found met the rules.
This led to the idea that scientists solve problems by a process of analysis, whilst designers
solve problems by synthesis. However, later evidence suggests that designers apply both
forms of problem solving to attain “design thinking”.
They do this via a process of divergent thinking. A designer will examine as many possible
solutions at the beginning of a process as they can think of – then they will apply the
scientific side (convergent thinking) to narrow these solutions down to the best output.
Design thinking can be as simple or as complex as the business and users require. The IDEO process, for example, can be seen as a 3-part or a 9-part process.
There are many other design thinking processes outlined in literature – most of which are a
truncated version of the above process combining or skipping stages.
In the book "Design Thinking: Understand, Improve, Apply," Plattner and Meinel offer four underlying principles for design thinking:
Human – all design is of a social nature
Ambiguity – design thinking preserves and embraces ambiguity
Re-design – all design processes are in fact re-design of existing processes
Tangibility – the design process to make something tangible will facilitate
communication of that design
1. Recommend accessories and add-ons | B&H: At first glance, B&H's "Add to Cart"
screen looks like any other site that sells electronics. But when you click the “Add to Cart”
button, there’s a staggering amount of actions a customer can take within its simple design.
For example, the page displays accessories that pair well with the chosen product—giving
customers a reason to add more items to their cart. In addition, the page prompts customers to
think about warranties for their products. Clicking the shopping cart icon redirects customers
to a more standard page. And it reminds them about those related product
recommendations. This gives them another chance to add the items to their cart in case
they’ve forgotten about them or changed their mind.
2. Use a mini cart to showcase items in the bag | Tilly’s : Tilly’s online clothing shop is
simple and easy-to-use. The page shows only the most necessary information along with
plenty of eye-catching pictures to engage visitors. Their clever “Add to Cart” design
improves their
customers’ shopping experience. When a customer puts a product into their shopping cart, a
“mini cart” expands on the right side. It lets a user keep track of their items at a glance. It
also has a “Checkout” button for when they’re done. So customers can go directly to
checkout with no effort. This “Add to Cart” example shows a great way to encourage impulse
purchases.
4. Show customers how much they need to spend to earn free shipping | Forever
21: Forever 21’s “Add to Cart” is transparent with their free shipping criteria. However, they
require shoppers to spend a certain amount to qualify for free shipping. You can see that this
cart is only $10.01 away from qualifying for free shipping. They also have a large coupon
code field—making it easy to add a discount. These two money-saving features cultivate a
positive emotion, even before customers get to the checkout page. The results: higher sales.
5. Increase urgency on the cart page | Nike: Shoppers get a small notification at the top
right corner of their screen when they click the “Add to Cart” button on Nike’s
website. There are two options in the mini cart window: view your bag or proceed directly to
checkout. It’s a choice that encourages customers to buy now or keep browsing the site. Nike
also places messages like “Just a Few Left, Order Now” to guide shoppers to complete
their checkout process. Catching a customer’s attention in this way can lead to more sales
because no one wants to lose out on a great pair of shoes. This is called the FOMO or the
Fear Of Missing Out strategy. Another nice touch on Nike’s shopping cart page is the
estimated shipping date. It gives shoppers an opportunity to imagine themselves in their new
shoes by a particular date. Details like these matter!
7. Offer free samples | Rare Beauty: Rare Beauty is another makeup brand with a heavy
online presence. Like Sephora, they offer customers free items in the same box as their
order. They also show how much more money needs to be added to the order, so the shopper knows when they qualify for free shipping. Rare Beauty unconventionally uses large Xs so
customers can easily remove items from their shopping carts. This makes it easy to edit their
orders. You can see why their customers love to purchase online—it’s a quick and painless
checkout.
8. Compliment your customers | Lululemon: The first thing you notice in Lululemon's
shopping cart is the large message at the top: “You’ve Got Great Taste.” A message like that
makes users want to click the cart icon! Complimenting your customers is a great way to
thank them for adding something to their shopping cart, and it shows goodwill. This is
especially effective for Lululemon because of their positive and affirmational
branding. Lululemon’s checkout button is large and brightly colored. It draws the users’ eyes
and they recommend complementary products to encourage customers to “Continue
Shopping.”
ACTIVITY: 06
Requirement management tools give organizations an edge by delivering increased business value, reducing budget issues, and so on. In today's world of development, it is absolutely necessary to track changes to requirements at every step, automatically. The audit trail of requirements is maintained more efficiently with the help of tools.
1) Visure
Price: Visure Solutions offers a free 30-day trial that can be downloaded from their website.
Perpetual and subscription licenses are available and can be used on-premises or cloud-based.
Detailed pricing and demo can be found on the Visure Solutions website.
2) SpiraTeam by Inflectra
SpiraTeam is a powerful application lifecycle management platform with robust and fully
integrated requirements management functionality. Voted the Quadrant Leader in
Requirements Management by SoftwareReviews.com in 2021, SpiraTeam is ideal for agile
teams who need to manage requirements with ease and confidence.
Plan, create, edit, and manage requirements using planning boards, Gantt charts, and customizable workflows, while linking requirements to tests and other artifacts in SpiraTeam.
SpiraTeam's requirements matrix allows users to drill down from each captured requirement to determine how many test cases have validated the functionality and the status of each defect logged.
With SpiraTeam, users can quickly and easily organize requirements in a hierarchical
structure, viewed as a kanban board or on a mind map. Requirements can be
prioritized, estimated, and associated with specific releases and artifacts.
In SpiraTeam, each requirement is displayed with its associated test coverage.
Requirements can be moved, copied, and filtered according to a variety of criteria.
SpiraTeam’s requirements management module is designed with end-to-end
traceability at its core, which makes it ideal for regulated industries.
3) Jama Software
Price: You can get a quote for pricing details. It offers a free trial for the product.
Jama Software provides the leading platform for Requirements, Risk, and Test management.
With Jama Connect and industry-focused services, teams building complex products,
systems, and software improve cycle times, increase quality, reduce rework, and minimize
effort proving compliance.
Jama Software’s growing customer base of more than 600 organizations includes companies
representing the forefront of modern development in Autonomous Vehicles, Healthcare,
Financial Services, Industrial Manufacturing, Aerospace, and Defense.
Jama Connect was rated as the top Application Lifecycle Management (ALM) tool for 2019
by Trust Radius. In particular, the reviewers praise the product’s purposeful collaboration,
ease of adaptability, and live traceability.
Our Score: 8.5 out of 10
4) Katalon TestOps
Katalon TestOps is a free, robust orchestration tool that manages all of your testing activities effectively and efficiently. TestOps centralizes every requirement in one place and gives your teams full visibility of their tests, resources, and environments by integrating with any testing framework you love.
Deployable on Cloud or Desktop: Windows and Linux systems.
Compatible with almost all testing frameworks available: Jasmine, JUnit, Pytest,
Mocha, etc; CI/CD tools: Jenkins, CircleCI, and management platforms: Jira, Slack.
Plan efficiently with Smart Scheduling to optimize the test cycle while maintaining
high quality.
5) ReqSuite® RM
Price: Osseno offers a free trial for the ReqSuite RM. There are three pricing plans for the
product, Basic (Starts from $143 per 3 users per month), Standard (Starts from $276 per 5
users per month), and Enterprise (Starts from $664 per 10 users per month).
ReqSuite® RM is a very intuitive yet powerful solution for requirements management, or
the management of other project-relevant information (e.g., solution concepts, test cases, etc.)
as well. Due to its easy and extensive configurability, ReqSuite® RM can be quickly and
completely adapted to the individual customer situation, which is why companies from
different industries rely on ReqSuite® RM.
Besides the mentioned configurator, the unique selling propositions include AI-supported
assistance functions, such as for guiding the analysis and approval workflow, checking the
quality and completeness of requirements, automatic linking, reuse recommendations, etc.
ReqSuite® RM is a fully web-based application and can be operated on the Cloud or on-
premise. The smallest license package (3 users) starts with 129€ per month. A free trial is
available.
Our Score: 9 out of 10
6) Xebrio
Xebrio tracks individual requirements throughout the lifecycle of your project with multiple
levels of stakeholder approvals & collaboration capabilities, the ability to associate
requirements to tasks, milestones, and test cases, and a transparent yet detailed process for
requirement change management, thereby guaranteeing requirement traceability.
Not just limited to requirements management, Xebrio is a complete project management tool with extensive task management, collaboration, communication, test management, bug tracking, asset management, release management, and comprehensive reporting capabilities, all under the same roof, with no add-ons or plugins required.
It also sports a comprehensive dashboard and offers detailed data-driven insights with easy-
to-grasp reports.
Being very intuitive and user-friendly, Xebrio also offers you an extended free trial and great
support.
Our Score: 9 out of 10.
8) Process Street
Process Street is one of the most user-friendly requirements management tools for managing processes, team workflows, checklists, and operating procedures. It is easy and time-saving to interact with a client via this tool. Using Process Street, one can design one's own processes without being an expert in them.
This tool is available to use free of cost for 30 days, which includes all features.
Process Street is a workflow and process management tool that provides an option to
manage recurring checklists and procedures in the business.
Some of the best features that Process Street offers are Regular workflow scheduling,
Activity feed, tasks assignment, Instant visibility, Run processes as collaborative
workflows, etc.
UML (Unified Modeling Language) diagrams are very important in the field of software engineering. They allow us to visualize and analyze systems, and they are efficient because, as they say, "one picture is worth a thousand words." It is easier to explain a system to clients using diagrams. In software engineering, UML diagrams are used to visualize the project before it is built and to document the project after it is built. A lot of time is often saved with the help of these UML diagrams. There are also excellent UML modeling tools available that help us draw UML diagrams, so we are going to explore various such tools with their specifications. Let's get started:
2. Lucidchart: Lucidchart is a tool for drawing diagrams and charts. It provides a collaborative platform for diagramming and is an online, paid service. People can use it with a single sign-up, and it is user-friendly. It contains all the shapes needed for drawing any UML diagram and provides basic templates for all diagrams, so we can draw any diagram easily with the help of those templates. It also helps business people plan their events, making it very useful for data visualization, diagramming, etc. It offers a free trial option but allows trial users only a limited set of frames and shapes: free users may use only 60 shapes per diagram. It does not support enterprise browsers. This tool is helpful not only for drawing UML diagrams but also for any project design and project management.
6. Gliffy: Gliffy was founded by Chris Kohlhardt and Clint Dickson in 2005. Gliffy is an online diagramming tool that allows any team to visually share ideas. The main advantage of Gliffy is the drag-and-drop feature, which makes drawing easy: we can effortlessly drag and drop shapes and use the available templates. Using it, we can draw wireframes for apps and design projects. In Gliffy we can draw diagrams with ease, share documents with anyone we want, collaborate instantly, and import and export documents easily. Another advantage of Gliffy is that it is cloud-based and does not require users to download it. The allowed download formats in Gliffy are PNG, JPG, SVG, etc. We can also choose the size, such as fit to max size, screen size, medium, large, or extra-large.
Estimating work effort in agile projects is fundamentally different from traditional methods
of estimation. The traditional approach is to estimate using a “bottom-up” technique: detail
out all requirements and estimate each task to complete those requirements in hours/days, and
then use this data to develop the project schedule. Agile projects, by contrast, use a “top-
down” approach, using gross-level estimation techniques on feature sets, then employing
progressive elaboration and rolling-wave planning methods to drill down to the task level on
a just-in-time basis, iteratively uncovering more and more detail each level down. This paper
will elaborate on two common techniques for agile estimation (planning poker and affinity
grouping), as well as touch on how the results of these exercises provide input into
forecasting schedule and budget.
The traditional method for estimating projects is to spend several weeks or months at the
beginning of a project defining the detailed requirements for the product being built. Once all
the known requirements have been elicited and documented, a Gantt chart can be produced
showing all the tasks needed to complete the requirements, along with each task estimate.
Resources can then be assigned to tasks, and actions such as loading and leveling help to
determine the final delivery date and budget. This process is known as a bottom-up method,
as all detail regarding the product must be defined before project schedule and cost can be
estimated.
In the software industry, the use of the bottom-up method has severe drawbacks due to
today's speed of change. Speed of change means that the speed of new development tools and
the speed of access to new knowledge is so great that any delay in delivery leaves one open to
competitive alternatives and in danger of delivering an obsolete product (Sliger, 2010).
The top-down method addresses this key issue, by using the information currently available
to provide gross-level estimates. Rolling-wave planning is then used to incorporate new
information as it's learned, further refining estimates and iteratively elaborating with more
detail as the project progresses. This method of learning just enough to get started, with a
plan to incorporate more knowledge as work outputs evolve, allows the project team to react
quickly to adversity and changing market demand.
Estimation Techniques
Gross-level estimation techniques are in use by teams using agile approaches such as Scrum
and Extreme Programming, and this paper will cover two of the most popular techniques:
Planning Poker and Affinity Grouping. Estimation units used will also be examined, as these
units should be such that they cannot be confused with time.
Planning Poker
The most popular technique of gross level estimation is Planning Poker, or the use of the
Fibonacci sequence to assign a point value to a feature or item (Grenning, 2002). The
Fibonacci sequence is a mathematical series of numbers that was introduced in the 13th
century and used to explain certain formative aspects of nature, such as the branching of
trees. The series is generated by adding the two previous numbers together to get the next
value in the sequence: 0, 1, 1, 2, 3, 5, 8, 13, 21, and so on. For agile estimation purposes,
some of the numbers have been changed, resulting in the following series: 1, 2, 3, 5, 8, 13,
20, 40, and 100.
These numbers are represented in a set of playing cards (see Exhibit 1). Team members play
“Planning Poker” (Exhibit 2) to provide an estimate in the form of a point value for each
item. Here are the steps:
Each team member gets a set of cards.
The business owner (who does NOT get to estimate) presents the item to be estimated.
The item is discussed.
Each team member privately selects a card representing his/her estimate.
When everyone is ready, all selected cards are revealed at the same time.
If all team members selected the same card, then that point value is the estimate.
If the cards are not the same, the team discusses the estimate with emphasis placed on the
outlying values:
The member who selected the lowest value explains why he/she selected the value.
The member who selected the highest value explains why he/she selected the value.
Select again until estimates converge.
Should lengthy or "in-the-weeds" conversations result, team members may use a two-minute timer to time-box the discussion, selecting again each time the timer runs out, until convergence.
Repeat for each item (Cohn, 2006, pp. 56–57).
Exhibit 2. Playing Planning Poker (Photo courtesy of Museums and the Web. All Rights
Reserved)
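To make the reveal-and-converge loop concrete, here is a minimal Python sketch of a single estimation round; the team names, votes, and helper function are hypothetical, not from the paper:

    # A minimal sketch of one planning poker round: cards are revealed
    # simultaneously; on disagreement, the outliers explain and the team
    # selects again until the estimates converge.
    CARDS = [1, 2, 3, 5, 8, 13, 20, 40, 100]  # modified Fibonacci deck

    def poker_round(votes: dict[str, int]) -> int | None:
        """Return the agreed estimate, or None if another round is needed."""
        if len(set(votes.values())) == 1:
            return next(iter(votes.values()))  # consensus reached
        low = min(votes, key=votes.get)
        high = max(votes, key=votes.get)
        print(f"Discuss: {low} voted {votes[low]}, {high} voted {votes[high]}")
        return None  # select again until estimates converge

    print(poker_round({"Ana": 5, "Ben": 5, "Carol": 5}))  # -> 5
    print(poker_round({"Ana": 3, "Ben": 8, "Carol": 5}))  # -> None, discuss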
There are several reasons Fibonacci numbers are used, and used in this format. First is the
notion that once teams eliminate time as the estimate base, they are less likely to demand
more detail and pad estimates. These numbers instead represent relative size, not time. As a
result, the estimation exercise goes quite quickly. Teams generally spend roughly two
minutes on each item, allowing a backlog of 30 items to be estimated in an hour. The fact that
teams are limited to only 9 choices (i.e., point values or cards) also helps speed up the
process.
The sequence also provides the right level of detail for smaller and better-understood
features, while avoiding a false sense of accuracy for higher estimates. For example, an item
with a high estimate (20 or higher) means the item is large and not yet well understood.
Debating whether the item was a 20 or a 19 or a 22 would be a waste of time as there simply
isn't enough data available. Once the item gets closer to the iteration in which the item will be
worked, it can be broken down into smaller pieces and estimated in more granular numbers
(1–13). Items with point estimates from 1–13 can generally be completed within a single
iteration (1–4 weeks).
It is important to note that points do not have the same meaning across teams; for example,
one team's “five” does not equal another team's “five.” Thus team velocity, which is derived
from points, should not be used to compare productivity across teams.
Affinity Grouping
An even faster way to estimate, and one used when the number of items to estimate is large,
is affinity grouping. Team members simply group items together that are like-sized, resulting in a configuration similar to the one in Exhibit 3. The method is simple and fast:
The first item is read to the team members and placed on the wall.
The second item is read and the team is asked if it is smaller or larger than the first item;
placement on the wall corresponds to the team's response (larger is to the right, smaller is
to the left).
The third item is read and the team is asked if it is smaller or larger than the first and/or
second items; the item is placed on the wall accordingly.
Control is then turned over to the team to finish the affinity grouping for the remainder of
the items.
Teams may choose to continue in the same fashion, placing one item at a time on the wall
after group discussion. However, a faster way is to have each team member select an item
and place it based on their own best understanding. This is done with all team members
working in parallel until all items have been assessed and placed on the wall. Several hundred
items can be estimated in a relatively short time. Once all items are on the wall, the team
reviews the groupings. Items that a team member believes to be in the wrong group are
discussed and moved if appropriate.
Once affinity grouping is complete, estimation unit values such as points can be assigned. In
Exhibit 3, the first set on the far left would be labeled as having a value of 1 point, the second
set would be 2 points, the third set 3 points, the fourth set 5 points, and the last set 8 points.
Affinity grouping can also be done for other estimation units, such as T-shirt sizes. Exhibit 4
shows an example of affinity grouped items labeled with T-shirt sizes instead of points.
Exhibit 4. Affinity Grouping Using T-shirt Sizes (Graphic courtesy of Chris Sterling.
All Rights Reserved.)
Estimation Units
The use of T-shirt sizes (Extra Small [XS], Small [S], Medium [M], Large [L], Extra Large
[XL]) is another way to think of relative sizes of features. This is an even greater departure
from the numeric system, and like all good gross-level estimation units can in no way be
associated with a specific length of time.
Other arbitrary tokens of measurement include Gummi Bears, NUTS (Nebulous Units of
Time), and foot-pounds. Teams may create their own estimation units, and as you can see,
they often have a bit of fun in doing so.
This paper does not cover the use of time-based units such as ideal development days and/or
hours. These are already common and well understood, so their explanations were not
included. It is worth noting however that gross-level estimating has the potential to be more
successful when decoupled from the notion of time. Because time estimates are often turned
into commitments by management and business, team members feel more pressure to be as
accurate as possible. As a result they request more and more detail about the item being
estimated. This turns gross-level estimation into the more time-consuming detail-level
estimation and defeats the original intent and purpose.
Once gross-level estimates and team velocity are determined, schedule and budget can be
forecast. Teams determine their velocity by adding up the total number of points for all the
items they completed in an iteration. For example, a team may have selected five items with a
total point value of 23 points (see Exhibit 5). At the end of their two-week iteration, they
were only able to complete four of the five items. Their velocity is 15, or the sum of the point
values of items 1–4. Teams do not get “partial credit” for completing portions of an item, so
even if they had started on item 5, it would not count, as it was not completed.
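Using the example figures above (five items totaling 23 points, item 5 unfinished), here is a minimal Python sketch of the velocity rule; the per-item point values are illustrative, only the totals come from the text:

    # Only fully completed items count toward velocity; no partial credit.
    items = [
        {"id": 1, "points": 8, "done": True},
        {"id": 2, "points": 5, "done": True},
        {"id": 3, "points": 1, "done": True},
        {"id": 4, "points": 1, "done": True},
        {"id": 5, "points": 8, "done": False},  # started but not completed
    ]

    velocity = sum(item["points"] for item in items if item["done"])
    print(velocity)  # 15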
Determining Budget
In this section we look at answering, “We only have this much money—how long will it last
and how much will we have done before we run out?” First, a simple formula is used to
determine the cost per point:
Σ (loaded team salaries for period n) / points completed in period n
Take the sum of the team's salaries (loaded) for a period of time, say three two-week
iterations, and divide that by the number of points the team completed in the same time
frame. So a team whose total loaded salaries are $240,000 over six weeks, and completed 60
points of work in those three iterations, would have a cost per point of $4,000. Now use the
following formula to determine budget:
(Cost per point x total point value of items to be completed) + other expenses = forecast
budget
Quite often not all features for a product are defined at the outset of a project, which is as
expected for agile projects. So budget estimates are based on what we know today, plus a
forecast algorithm that is based on historic data or expert guidance. For example, say there
are only 20 features listed so far, but the business won't be able to provide any additional
feature requests or refinements until after seeing how the first release is received by the
customer. The budget for the project, which is slated for three releases, would only have
forecast data available for the first release and not the entire project. The team could use the
algorithm above to forecast budget for the first release, then assume an additional 20% for the
second release and an additional 5% for the last release, based on past experience.
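A minimal Python sketch of the two formulas above, using the text's example figures ($240,000 over three iterations, 60 points completed); the backlog size, other expenses, and the reading of the release uplifts are hypothetical:

    # Cost per point = loaded salaries for the period / points completed.
    loaded_salaries = 240_000      # six weeks (three 2-week iterations)
    points_completed = 60
    cost_per_point = loaded_salaries / points_completed  # $4,000

    # Forecast budget = cost per point x points to complete + other expenses.
    release_1_points = 120         # hypothetical total for the known features
    other_expenses = 25_000        # hypothetical
    release_1 = cost_per_point * release_1_points + other_expenses

    # Later releases forecast from past experience, as described above.
    release_2 = release_1 * 1.20   # assume an additional 20%
    release_3 = release_1 * 1.05   # assume an additional 5%
    print(release_1, release_2, release_3)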
ACTIVITY: 08
Study boiler plate and present necessary characteristics of boiler plate for
a large and small project
The term boilerplate refers to standardized text, copy, documents, methods, or procedures
that may be used over again without making major changes to the original. A boilerplate is
commonly used for efficiency and to increase standardization in the structure and language
of written or digital documents. This includes contracts, investment prospectuses, and bond
indentures. In the field of contract law, documents contain boilerplate language, which is a
language that is considered generic or standard in contracts.
How Boilerplates Work
Boilerplate is any text, documentation, or procedures that can be reused more than once in a
new context without any substantial changes to the original. Boilerplates are commonly used
online and in written documents by a variety of entities including corporations, legal firms,
and medical facilities. Users can make slight changes to the language or certain portions of
the text to tailor a document for different uses. For instance, a media release contains
boilerplate language at the bottom, which is usually information about the company or
product, and can be updated for different situations before being disseminated to the public.
The term is also commonly used in the information technology industry, referring to coding
that can be created and reused over and over again. In this case, the IT specialist only has to
rework some of the code to fit into the current need, without making major changes to the
original structure.
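As a small illustration of code boilerplate, here is a generic, hypothetical Python script skeleton of the kind a developer might copy for each new command-line tool, reworking only the marked parts:

    # A reusable script skeleton: copy it, then change only the TODOs.
    import argparse

    def main() -> None:
        parser = argparse.ArgumentParser(description="TODO: describe tool")
        parser.add_argument("input", help="TODO: describe the input")
        args = parser.parse_args()
        # TODO: replace with the tool's actual logic.
        print(f"Processing {args.input}...")

    if __name__ == "__main__":
        main()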
History of Boilerplate
In the 19th century, a boilerplate referred to a plate of steel used as a template in the
construction of steam boilers. These standardized metal plates reminded editors of the often trite and unoriginal work that ad writers and others sometimes submitted for publication.
The legal profession began using the term boilerplate as early as the mid-1950s, when an
article in the Bedford Gazette criticized boilerplates because they often included fine
print designed to skirt the law. Businesses now use boilerplate clauses in contracts, purchase
agreements, and other formal documents. Boilerplate clauses are designed to protect
businesses from making errors or legal mistakes in the language. The wording of these
passages is generally not up for negotiation with customers, who will often sign boilerplate
documents without actually reading or understanding them. This type of boilerplate, written
by a party with superior bargaining power and presented to a weaker party, is often called
an adhesion contract in the legal profession. Courts may set aside provisions of such
contracts if they find them coercive or unfair.
After all these minimum specs, you should start editing and altering the code in order to build your project. Some big tech companies even build their own boilerplates and use them for similar projects over time.
These types of boilerplates are generally "starter kits" or, in professional terms, "scaffolding." Their main target users are novice developers or new early adopters. They focus on fast prototyping by creating only the elements that are necessary for new projects. They require less functionality and are not scalable over time and across projects. Their code structure is not greatly expanded and does not involve deep abstraction layers, as users only need to build core features. This eliminates the need for extra utilities.
ACTIVITY: 09
1. Identify different DevOps tools and list features
2. Study and report OWASP coding guidelines
3. Learn and report Twelve Factor App methodologies
4. Identify different version control and configuration management tools
and report their market share
OWASP provides a secure coding practices checklist that includes 14 areas to consider in
your software development life cycle. Of those secure coding practices, we’re going to focus
on the top eight secure programming best practices to help you protect against vulnerabilities.
1. Security by Design
2. Password Management
3. Access Control
4. Error Handling and Logging
5. System Configuration
6. Threat Modeling
7. Cryptographic Practices
8. Input Validation and Output Encoding
Security by Design
Security needs to be a priority as you develop code, not an afterthought. Organizations may
have competing priorities where software engineering and coding are concerned. Following
software security best practices can conflict with optimizing for development speed.
However, a “security by design” approach that puts security first tends to pay off in the long
run, reducing the future cost of technical debt and risk mitigation. An analysis of your source
code should be conducted throughout your software development life cycle (SDLC), and
security automation should be implemented.
Password Management
Passwords are a weak point in many software systems, which is why multi-factor
authentication has become so widespread. Nevertheless, passwords are the most common
security credential, and following secure coding practices limits risk. You should require all
passwords to be of adequate length and complexity to withstand any typical or common
attacks. OWASP suggests several coding best practices for passwords, including:
Storing only salted cryptographic hashes of passwords and never storing plain-text
passwords.
Enforcing password length and complexity requirements.
Disabling password entry after multiple incorrect login attempts.
We have also written about password expiration policies and whether they are a security best
practice in a modern business environment.
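A minimal sketch of the "salted hashes only" practice using Python's standard library; the PBKDF2 parameters are illustrative, not a recommended policy:

    # Store only (salt, hash); never the plain-text password.
    import hashlib
    import hmac
    import os

    def hash_password(password: str) -> tuple[bytes, bytes]:
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
        return salt, digest

    def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
        return hmac.compare_digest(candidate, digest)  # constant-time compare

    salt, digest = hash_password("correct horse battery staple")
    print(verify_password("correct horse battery staple", salt, digest))  # True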
Access Control
Take a “default deny” approach to sensitive data. Limit privileges and restrict access to
secure data to only users who need it. Deny access to any user that cannot demonstrate
authorization.
Ensure that requests for sensitive information are checked to verify that the user is authorized
to access it.
Learn more about access controls for remote employees and cloud access management.
Error Handling and Logging
Software errors are often indicative of bugs, many of which cause vulnerabilities. Error handling and logging are two of the most useful techniques for minimizing their impact. Error handling attempts to catch errors in the code before they result in a catastrophic failure, while logging documents errors so that developers can diagnose and mitigate their causes. Documentation and logging of all failures, exceptions, and errors should be implemented on a trusted system to comply with secure coding standards.
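A minimal sketch of the pattern, catch the failure, log the detail for developers, and fail safely, using Python's standard logging module; the function and file name are hypothetical:

    import logging

    logging.basicConfig(filename="app.log", level=logging.INFO)
    logger = logging.getLogger(__name__)

    def parse_quantity(raw: str) -> int:
        try:
            return int(raw)
        except ValueError:
            # Log the full traceback for diagnosis; return a safe default.
            logger.exception("Invalid quantity input: %r", raw)
            return 0

    print(parse_quantity("3"))    # 3
    print(parse_quantity("abc"))  # 0, with the exception logged to app.log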
System Configuration
Clear your system of any unnecessary components and ensure all working software is
updated with current versions and patches. If you work in multiple environments, make sure
you’re managing your development and production environments securely.
Outdated software is a major source of vulnerabilities and security breaches. Software
updates include patches that fix vulnerabilities, making regular updates one of the most vital,
secure coding practices. A patch management system may help your business to keep on top
of updates.
Threat Modeling
Document, locate, address, and validate are the four steps to threat modeling. To securely
code, you need to examine your software for areas susceptible to increased threats of attack.
Threat modeling is a multi-stage process that should be integrated into the software lifecycle, from development through testing and production.
Cryptographic Practices
Encrypting data with modern cryptographic algorithms and following secure key
management best practices increases the security of your code in the event of a breach.
Input Validation and Output Encoding
These secure coding standards are self-explanatory: you need to identify all data inputs and sources and validate those classified as untrusted. You should use a standard routine for output encoding and input validation.
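A minimal sketch of allow-list input validation plus HTML output encoding with Python's standard library; the username pattern and function are hypothetical:

    import html
    import re

    USERNAME_RE = re.compile(r"[A-Za-z0-9_]{3,20}")  # illustrative allow-list

    def render_greeting(username: str) -> str:
        # Validate untrusted input first; reject anything off the allow-list.
        if not USERNAME_RE.fullmatch(username):
            raise ValueError("invalid username")
        # Encode on output so the value cannot inject markup.
        return f"<p>Hello, {html.escape(username)}!</p>"

    print(render_greeting("alice_01"))
    # render_greeting("<script>alert(1)</script>") raises ValueError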
1) SolarWinds Server Configuration Monitor
The solution works for multiple projects, is easy to understand, and offers affordable licensing.
Prominent Features:
SolarWinds Server Configuration Monitor provides alerts and reports for deviations
from the baseline in almost real-time.
It can track server and application changes.
It has features to spot the differences between configs.
It has enhanced change auditing capabilities by monitoring the script outputs.
Pros:
The tool provides the features to help you decrease the troubleshooting time.
It provides the facility of hardware and software inventory tracking and hence you
will have an up-to-date list of hardware and software assets.
Cons:
As per reviews, it takes some time to get a hand on the tool.
2) Auvik
Auvik is the provider of cloud-based network management tools. These tools offer true
network visibility and control. It provides real-time network mapping & inventory, automated
config backup & restore on network devices, deep insights of network traffic, and automated
network monitoring. It helps with managing the network from anywhere you are.
Virtualization and containerization are the two most frequently used mechanisms to host
applications in a computer system. Virtualization uses the notion of a virtual machine as the
fundamental unit. Containerization, on the other hand, uses the concept of a container. Both
of these technologies play a crucial role and have their merits and demerits.
1. At the bottom of the layer, there are physical infrastructures such as CPU, disk
storage, and network interfaces
2. Above that, there is the host OS and its kernel. The kernel acts as the bridge between the software of the OS and the hardware resources
3. The container engine and its minimal guest OS sits on top of the host OS
4. At the very top, there are binaries, libraries for each application and the apps that run
on their isolated user spaces
Area: Isolation
Virtualization: Provides complete isolation from the host operating system and the other VMs.
Containerization: Typically provides lightweight isolation from the host and other containers, but doesn't provide as strong a security boundary as a VM.

Area: Persistent storage
Virtualization: Uses a Virtual Hard Disk (VHD) for local storage for a single VM, or a Server Message Block (SMB) file share for storage shared by multiple servers.
Containerization: Uses local disks for local storage for a single node, or SMB for storage shared by multiple nodes or servers.

Area: Load balancing
Virtualization: Virtual machine load balancing is done by running VMs in other servers in a failover cluster.
Containerization: An orchestrator can automatically start or stop containers on cluster nodes to manage changes in load and availability.
4) Amazon ECS: Amazon ECS (an acronym for Elastic Container Service) is an
orchestration service that supports Docker containers and permits you to effortlessly execute
and scale containerized applications on Amazon AWS.
This service is highly scalable and high performing. It eliminates the need to install and manage your own container orchestration software and to manage a cluster of virtual machines.
5) LXC: LXC is the acronym for Linux Containers, a type of OS-level virtualization method for executing numerous isolated Linux systems (containers) on a control host using a single Linux kernel. It is an open-source tool under the GNU LGPL license, available on its GitHub repository, and is written in C, Python, Shell, and Lua.
7) Microsoft Azure: Microsoft Azure offers different container services for your various
container needs.
8) Google Cloud Platform: Google cloud provides you with different options to choose
from for running the containers. These are Google Kubernetes Engine (for container cluster
management), Google Compute Engine (for Virtual Machines and CI/CD pipeline) and
Google App Engine Flexible Environment (for containers on fully-managed PaaS). We have
already discussed the Google Kubernetes Engine earlier in this article. We will now discuss
the Google Compute Engine and Google App Engine Flexible Environment.
Software testing tools are tools used for testing software. They are often used to assure firmness, thoroughness, and performance in testing software products. Unit testing and subsequent integration testing can be performed with software testing tools. These tools are used to fulfill all the requirements of planned testing activities, and many are also available as commercial software testing tools. Software testers evaluate the quality of the software with the help of various testing tools.
1. Static Test Tools: Static test tools are used for static testing processes. Testing with these tools follows a typical approach: they do not test the real execution of the software, and specific inputs and outputs are not required. Static test tools consist of the following:
Flow analyzers: provide flexibility in data flow from input to output.
Path tests: find unused code and inconsistent code in the software.
Coverage analyzers: ensure that all logic paths in the software are covered.
Interface analyzers: check the consequences of passing variables and data between modules.
2. Dynamic Test Tools: Dynamic testing processes are performed with dynamic test tools. These tools test the software with existing or current data. Dynamic test tools comprise the following:
Test drivers: provide the input data to a module under test (MUT).
Test beds: display the source code along with the program under execution at the same time.
Emulators: provide response facilities used to imitate parts of the system not yet developed.
Mutation analyzers: used for testing the fault tolerance of the system by deliberately introducing errors into the software's code.
What Is Manual Testing?
Manual testing is the process in which QA analysts execute tests one-by-one in an individual
manner. The purpose of manual testing is to catch bugs and feature issues before a software
application goes live. When manually testing, the tester validates the key features of a
software application. Analysts execute test cases and develop summary error reports without
specialized automation tools.
Criterion | Manual Testing | Automated Testing
Test efficiency | Time-consuming and less efficient | More testing in less time and greater efficiency
Types of tasks | Entirely manual tasks | Most tasks can be automated, including real user simulations
Test coverage | Difficult to ensure sufficient test coverage | Easy to ensure greater test coverage
ACTIVITY: 12
These goals can be achieved by providing information and clarity throughout the organization
about complex software development projects. Metrics are an important component of quality
assurance, management, debugging, performance, and estimating costs, and they’re valuable
for both developers and development team leaders:
Managers can use software metrics to identify, prioritize, track and communicate any
issues to foster better team productivity. This enables effective management and
allows assessment and prioritization of problems within software development
projects. The sooner managers can detect software problems, the easier and less expensive the troubleshooting process becomes.
Software development teams can use software metrics to communicate the status of
software development projects, pinpoint and address issues, and monitor, improve on,
and better manage their workflow.
Software metrics offer an assessment of the impact of decisions made during software
development projects. This helps managers assess and prioritize objectives and performance
goals.
How Software Metrics Lack Clarity
Terms used to describe software metrics often have multiple definitions and ways to count or
measure characteristics. For example, lines of code (LOC) is a common measure of software
development. But there are two ways to count each line of code:
One is to count each physical line that ends with a return. But some software
developers don’t accept this count because it may include lines of “dead code” or
comments.
To get around those shortfalls and others, each logical statement could be considered
a line of code.
Thus, a single software package could have two very different LOC counts depending on
which counting method is used. That makes it difficult to compare software simply by lines
of code or any other metric without a standard definition, which is why establishing a
measurement method and consistent units of measurement to be used throughout the life of
the project is crucial.
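To make the two counting conventions concrete, here is a minimal Python sketch; the "logical" count below is a crude illustration (non-blank, non-comment lines), whereas real logical-LOC counters parse statements properly:

    # Two ways to count LOC for the same snippet: physical lines vs. a
    # crude "logical" count (non-blank, non-comment lines).
    source_lines = [
        "# compute total",
        "total = 0",
        "for x in [1, 2, 3]:",
        "    total += x  # accumulate",
        "",
        "print(total)",
    ]

    physical = len(source_lines)  # each line ending with a return
    logical = sum(
        1 for line in source_lines
        if line.strip() and not line.strip().startswith("#")
    )
    print(physical, logical)  # 6 physical, 4 logical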
There is also an issue with how software metrics are used. If an organization uses
productivity metrics that emphasize volume of code and errors, software developers could
avoid tackling tricky problems to keep their LOC up and error counts down. Software
developers who write a large amount of simple code may have great productivity numbers
but not great software development skills. Additionally, software metrics shouldn’t be
monitored simply because they’re easy to obtain and display – only metrics that add value to
the project and process should be tracked.
This is why software development platforms that automatically measure and track metrics are
important. But software development teams and management run the risk of having too much
data and not enough emphasis on the software metrics that help deliver useful software to
customers.
The technical question of how software metrics are collected, calculated, and reported is not as important as deciding how to use them. Patrick Kua outlines four guidelines for an appropriate use of software metrics:
For example, size-based software metrics often measure lines of code to indicate coding complexity or software efficiency. In an effort to reduce the code's complexity, management may place restrictions on how many lines of code are to be written to complete functions. In an effort to simplify functions, software developers could write more functions that have fewer lines of code to reach their target but do not reduce overall code complexity or improve software efficiency. When developing goals, management needs to involve the software development teams in establishing goals, choosing software metrics that measure progress toward those goals, and aligning metrics with those goals.
Application crash rate (ACR): Application crash rate is calculated by dividing how many
times an application fails (F) by how many times it is used (U).
ACR = F/U
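A minimal sketch of the formula with made-up counts:

    failures = 12        # F: observed application crashes
    uses = 4_800         # U: times the application was used

    acr = failures / uses
    print(f"ACR = {acr:.4f}")  # 0.0025, i.e. 0.25% of uses end in a crash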
Security metrics: Security metrics reflect a measure of software quality. These metrics need
to be tracked over time to show how software development teams are developing security
responses.
Endpoint incidents: Endpoint incidents are how many devices have been infected by a virus
in a given period of time.
Mean time to repair (MTTR): Mean time to repair in this context measures the time from
the security breach discovery to when a working remedy is deployed.
Size-oriented metrics: Size-oriented metrics focus on the size of the software and are
usually expressed as kilo lines of code (KLOC). It is a fairly easy software metric to collect
once decisions are made about what constitutes a line of code. Unfortunately, it is not useful
for comparing software projects written in different languages. Some examples include:
Errors per KLOC
Defects per KLOC
Cost per KLOC
Function-oriented metrics: Function-oriented metrics focus on how much functionality
software offers. But functionality cannot be measured directly. So function-oriented software
metrics rely on calculating the function point (FP) — a unit of measurement that quantifies
the business functionality provided by the product. Function points are also useful for
comparing software projects written in different languages. Function points are not an easy
concept to master and methods vary. This is why many software development managers and
teams skip function points altogether. They do not perceive function points as worth the time.
Errors per FP or Defects per FP: These software metrics are used as indicators of an
information system’s quality. Software development teams can use these software metrics to
reduce miscommunications and introduce new control measures.
Defect Removal Efficiency (DRE): The Defect Removal Efficiency is used to quantify how
many defects were found by the end user after product delivery (D) in relation to the errors
found before product delivery (E). The formula is:
DRE = E / (E+D)
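A minimal sketch of the formula with made-up counts:

    errors_before = 90   # E: errors found before product delivery
    defects_after = 10   # D: defects found by end users after delivery

    dre = errors_before / (errors_before + defects_after)
    print(f"DRE = {dre:.2f}")  # 0.90 -> 90% of defects removed before release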
ACTIVITY: 13
Most organizations use quality tools for various purposes related to controlling and assuring quality. Although a good number of quality tools are specific to certain domains, fields, and practices, some quality tools can be used across such domains. These quality tools are quite generic and can be applied to any condition. There are seven basic quality tools used in organizations. These tools can provide much information about problems in the organization, assisting in deriving solutions for them. A number of these quality tools come with a price tag, but a brief training, mostly self-training, is sufficient for someone to start using them.
1. Flow Charts: This is one of the basic quality tools that can be used for analyzing a sequence of events. The tool maps out a sequence of events that take place sequentially or in parallel. A flow chart can be used to understand a complex process and to find the relationships and dependencies between events. You can also get a brief idea about the critical path of the process and the events involved in it. Flow charts can be used in any field to illustrate complex processes in a simple way. There are specific software tools developed for drawing flow charts, such as MS Visio, and you can also download flow chart tools developed by the open-source community.
2. Histogram: A histogram is used for illustrating frequency and extent in the context of two variables. A histogram is a chart with columns that represents a distribution. If the distribution is normal, the graph takes the shape of a bell curve; if not, it may take different shapes depending on the condition of the distribution. A histogram measures one thing against another, and it should always involve two variables. Consider the following example: a histogram showing the morning attendance of a class, where the X-axis is the number of students and the Y-axis is the time of day.
3. Cause and Effect Diagram: Cause and effect diagrams (Ishikawa diagrams) are used for understanding the causes of organizational or business problems. Organizations face problems every day, and it is necessary to understand their causes in order to solve them effectively. Building a cause and effect diagram is usually a team exercise, and a brainstorming session is required to come up with an effective one. All the main components of a problem area are listed, possible causes from each area are listed, and the most likely causes of the problems are then identified for further analysis.
4. Check Sheet: A check sheet can be introduced as the most basic tool for quality. A check sheet is basically used for gathering and organizing data. When this is done with the help of software packages such as Microsoft Excel, you can derive further analysis graphs and automate the process through the available macros. Therefore, it is always a good idea to use a software check sheet for information gathering and organizing. A paper-based check sheet is appropriate when the information gathered is only used for backup or storage purposes rather than further processing.
5. Scatter Diagram: When it comes to the values of two variables, scatter diagrams are the best way to present them. Scatter diagrams show the relationship between two variables and illustrate the results on a Cartesian plane. Further analysis, such as trend analysis, can then be performed on the values. In these diagrams, one variable denotes one axis and the other variable denotes the other axis.
6. Control Charts: The control chart is the best tool for monitoring the performance of a process. These charts can be used for monitoring any process related to the functioning of the organization, and they allow you to identify the following conditions in the process being monitored:
Stability of the process
Predictability of the process
Identification of common cause of variation
Special conditions where the monitoring party needs to react
7. Pareto Charts: Pareto charts are used for identifying a set of priorities. You can chart any number of issues/variables related to a specific concern and record the number of occurrences. This way you can figure out the parameters that have the highest impact on the specific concern, which helps you work on the priority issues in order to get the condition under control.
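To show how the occurrence counts turn into a priority ranking, here is a minimal Python sketch of preparing Pareto-chart data; the defect categories and counts are made up, and matplotlib is assumed to be installed:

    from collections import Counter

    import matplotlib.pyplot as plt

    occurrences = Counter(
        {"Late delivery": 42, "Wrong item": 17, "Damaged": 9, "Other": 4}
    )
    labels, counts = zip(*occurrences.most_common())  # sort by frequency

    total = sum(counts)
    cumulative = [sum(counts[: i + 1]) / total * 100 for i in range(len(counts))]

    fig, ax = plt.subplots()
    ax.bar(labels, counts)                    # frequency bars, descending
    ax2 = ax.twinx()
    ax2.plot(labels, cumulative, marker="o")  # cumulative-percentage line
    ax2.set_ylabel("Cumulative %")
    plt.show()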