AGILE Refresher - Part 2

Infosys Global Agile Developer

Certification Refresher
Part – 2
Sections Covered (Agile Generic)

[Legend from the original slide: some sections have their content covered in this deck; others need to be read from the study material in Learning Hub and have no content covered here.]
Requirements, Architecture and Design

Requirement Categorization: Theme, Epic, User Story

A widely used technique for classifying requirements is size-based SMC (Small, Medium, Complex), or classification based on factors such as dependency, unknowns, risk and uncertainty.

In Agile, requirements are primarily defined by their value to the business (ROI, Return on Investment) and by how they evolve over the project's execution timespan.

Theme:
• A Theme is a very large requirement and sometimes encompasses the entire product vision
• A set of interrelated requirements which can be considered for release planning
• A large theme can be subdivided into multiple sub-themes
• Development of a theme can span from a few months to a year

Example of a Theme: 'Xyz Bank wants to allow customers to transact online and be able to do fund transfers, open fixed deposits, and request check books online.'

Epic:
• A very large requirement that can't be completed in a short iteration (Sprint)
• Needs to be broken down into user stories
• Can be considered a small theme by virtue of its magnitude

Example of an Epic: 'Xyz Bank wants to allow funds transfer between own accounts in different branches, other accounts in the same bank, and preferably, accounts in other banks.'

User Story:
• A feature small enough to be delivered in an iteration, representing unique business value
• An epic is divided into multiple user stories
• Mostly a vertical slice cutting across all phases of the software development life cycle, with tangible business value
• May represent technical debt, which doesn't deliver business value directly but is required for better, more maintainable code quality

Example of a User Story: 'As a customer, I want to schedule automatic fund transfers among my accounts.'
Effective Requirement Definition

Requirement Workshop
• Carried out over a few hours to a few days, depending on the size of the release
• The outcome of these workshops is epics and stories on which stakeholders have enough clarity
• Includes stakeholders from even the extended teams, such as Tech Architects, Solution Designers, IT Managers, Capacity Managers and BAs
• Requirements are prioritized, detailed out and estimated

Requirement Accuracy
• Requirements taken up for development should have enough clarity for the team to estimate them to a satisfactory and agreeable level
• A complex epic should be broken down into sub-epics until multiple short features/stories can be defined for each one
• A requirement becomes more accurate as it gets smaller, since it has fewer unknowns, dependencies and associated risks
• Adding testable acceptance criteria to each requirement helps increase requirement clarity

User Story
• Written from the perspective of the user and how it would help the business
• Always captures Acceptance Tests (ATs), which should be testable
• Each story should have a DONE criterion against which the story is considered for acceptance
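The idea that every story carries a testable DONE criterion can be sketched as a small data structure. This is a hypothetical illustration, not from the study material: each acceptance criterion is an executable check, and the story counts as done only when every check passes.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class UserStory:
    """A user story with executable acceptance tests (illustrative sketch)."""
    title: str
    acceptance_tests: List[Callable[[], bool]] = field(default_factory=list)

    def is_done(self) -> bool:
        # DONE criterion: at least one acceptance test exists and all pass
        return bool(self.acceptance_tests) and all(t() for t in self.acceptance_tests)

# Hypothetical story and check, matching the fund-transfer example above
def transfer_is_scheduled() -> bool:
    return True  # stand-in for a real check against the system

story = UserStory(
    title="As a customer, I want to schedule automatic fund transfers",
    acceptance_tests=[transfer_is_scheduled],
)
print(story.is_done())  # → True
```

A story with no acceptance tests is deliberately never "done" in this sketch, mirroring the rule that acceptance criteria are mandatory.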
User Story

Three C's for writing good user stories:
• Card: the term comes from the XP world, where the practice was to write requirements on index cards; requirements are written in short form
• Conversations: bring out the required details; the conversation summary should be recorded on the back of the card along with the date
• Confirmations: acceptance tests provide ways to confirm that the story card's needs are met, and can serve as excellent artifacts to be considered part of the documentation deliverables

INVEST method for defining a user story: Independent, Negotiable, Valuable, Estimable, Small, Testable
Prioritization Techniques (cont.)

• Prioritizing by Value: value stream visualization; cost to build the feature vs. effort; feasibility; infrastructure cost (if any); risk; skills; segment/audience. Also include requirements that exist essentially to improve operations or for statutory compliance reasons: they do not add business value directly; rather, they are cost realization
• MMF (Minimum Marketable Feature): focused on the set of features that need to be clubbed and released together to gain business value. The team focuses on completing the smallest functionality of real importance to the business, rather than releasing a theme or epic which may not directly deliver the most important need
• MoSCoW: Must Have, Should Have, Could Have, Won't Have
• Cost of Delay (CoD): the impact to the business of not completing a user story or feature. Used with the Kanban method to effectively prioritize requirements that have more value than others because of the cost of non-completion
• Certainty: a certain requirement has more weightage while prioritizing; this helps fast-track the decision process of business stakeholders
• Feasibility: can be gauged based on cost/benefit analysis (CBA)

No matter which technique is used, the list of project requirements must be sorted from most to least valuable.
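As a hedged illustration (not from the study material), Cost of Delay is often combined with job duration — "CD3", Cost of Delay Divided by Duration — to order a backlog. The sketch below sorts hypothetical features that way; the feature names and numbers are invented.

```python
# Illustrative sketch: ordering a backlog by Cost of Delay Divided by
# Duration (CD3). Names and figures are hypothetical.
features = [
    {"name": "funds-transfer", "cost_of_delay": 10_000, "duration_weeks": 4},
    {"name": "check-book-request", "cost_of_delay": 2_000, "duration_weeks": 1},
    {"name": "fixed-deposits", "cost_of_delay": 9_000, "duration_weeks": 3},
]

def cd3(feature):
    """Cost of Delay per week of work: higher means schedule sooner."""
    return feature["cost_of_delay"] / feature["duration_weeks"]

backlog = sorted(features, key=cd3, reverse=True)
print([f["name"] for f in backlog])
# → ['fixed-deposits', 'funds-transfer', 'check-book-request']
```

Whatever scoring function is used, the end state is the same as the slide's rule: a single list sorted from most to least valuable.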
Architecture and Design

• High-level architecture generally involves:
  - Decomposing the system into modules
  - Representing the interfaces and the interactions between them
• Detail design outlines the design internals for each of the modules
• For an OOAD (Object Oriented Analysis and Design) project, high-level design includes the system architecture with details about interfaces, components, networks, data flows and interactions between component systems
• Detail design further divides the high-level design into class diagrams with all the methods and the relations between the classes

[Sidebar in the original slide: Factors for Bad Design; its content is not in this text.]

Agile projects believe in You Aren't Gonna Need It (YAGNI) and no Big Design Up Front (no BDUF):
• YAGNI is an engineering principle of Extreme Programming (XP)
• Focuses on working only on the features which are deemed necessary
• Used in combination with the other XP practices of refactoring and TDD

Rationale:
1. Any code or design which is not required for the current set of features might never be required in the future
2. More time is spent designing/developing a feature which does not add any value to the current set of features required by the customer
3. Unrequired additional features can make the software larger and more complicated, which in turn can impose constraints on adding new features
Define System Architecture

• Create a visual Architecture and Infrastructure Planning (AIP) diagram defining the components, interfaces and other characteristics of the system or its components
• Capture security and other NFRs in the visual AIP
• Decided in a workshop or during the project initiation (Sprint 0) phase

Guidelines:
• Create a system architecture diagram showing the implementation in different layers and the main components
• Decide on platform-level details (programming language, database)
• Identify risk areas that need a POC (Proof of Concept)

Techniques:
• Expert Judgment
• Whiteboard, AIP diagram
• Existing Enterprise Guidelines & Constraints
Define Architecture Stories

• Software architecture is the structure which comprises the software components, the externally visible properties of those components, and the relationships among the components
• Represents the structure of the data and program components that are required to build a computer-based system
• Refined at the initiation of the project

An architectural style describes a system category that encompasses:
• A set of component types that perform a function required by the system
• A set of connectors (subroutine call, remote procedure call, data stream, socket) that enable communication, coordination and cooperation among components
• Constraints that define how components can be integrated to form the system
• A topological layout of the components indicating their runtime interrelationships
• Semantic models that enable a designer to understand the overall properties of a system

• Risk areas are identified as part of the 'Define System Architecture' phase (i.e. Sprint 0 or the Discovery phase). These are defined as architecture stories and taken up as POCs (Proofs of Concept) during this phase, in line with the Agile principle of failing fast
• Any other POC required for identified risk areas or new technology implementations should be covered during this phase and has to be agreed with all stakeholders
• Some of the high-priority stories can be identified for implementation during the first 2-3 Sprints (assuming 2-week iterations for a project duration of more than 6 months), which can set up the technical foundation (e.g. data access framework, error logging framework, common utilities) for the system being developed

Objectives:
• Reduce project risk due to technology by validating assumptions
• Provide a foundation for future Sprints

Techniques:
• Expert Judgment
• Whiteboard
• Team Discussions
Just-in-Time Design

• Focus is on feature implementation; design attention should be given at the start of each story
• Design evolves iteratively along the Sprints
• Sets the direction for the team for implementing the stories in conjunction with the defined architecture
• TDD is to be followed to ensure easier code refactoring
• Pair programming for code

Visual artifacts for the current story:
• Interaction diagram
• ER (Entity Relationship) diagram
• Sequence diagram

Techniques:
• Expert Judgment
• Whiteboard
Design Principles

The five design principles covered here: Keep It Simple · Continuous Code Refactoring · Test Driven Development (TDD) · Apply SOLID Principles · Implement Design Patterns

Keep It Simple

Self-explanatory code:
• Code should be self-documenting
• Use unabbreviated words, i.e. full names
• Follow a standard naming convention
• Use comments or assertions for better understanding of the code

Remove superfluous content:
• Code should not have any unnecessary content
• Support only the required functionality
• Any functionality which is not required should not be coded (YAGNI principle)

No duplication:
• Each declaration of behavior should appear OnceAndOnlyOnce
• Developers should not implement unrelated ideas in the same method
• Code without duplication is easier to maintain and change

Cover all test scenarios:
• Write code that satisfies unit tests covering all functional areas
• TDD helps the developer with this
• Minimal coding is done to keep the code simple and flexible to maintain
Design Principles: Apply SOLID Principles

• Single Responsibility Principle: the programmer should maximize cohesion so that each class does only one thing. This in turn ensures that the behavior of a class is not accidentally changed
• Open/Closed Principle: "software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification"; that is, such an entity can allow its behavior to be modified without altering its source code
• Liskov Substitution Principle: if a system is using a base class, the reference to that base class can be replaced with a derived class without changing the functionality of the system
• Interface Segregation Principle: each interface should have a single responsibility and should not be overloaded. Interfaces should be kept small and cohesive and exposed only when required
• Dependency Inversion Principle: the idea is to isolate a class behind a boundary formed by the abstractions it depends upon. If all the details behind those abstractions change, the class is still safe. This keeps coupling low and makes the design easier to change

Please refer to pages 11 to 16 of "Requirements, Architecture and Design" for code examples.
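As a hedged sketch (not the study material's own example), the Open/Closed and Dependency Inversion principles can be illustrated together: the high-level `Report` class depends on a `Formatter` abstraction, and new output formats are added by extension rather than by modifying existing code. All class and method names here are invented.

```python
from abc import ABC, abstractmethod

class Formatter(ABC):
    """Abstraction the high-level code depends on (Dependency Inversion)."""
    @abstractmethod
    def render(self, text: str) -> str: ...

class PlainFormatter(Formatter):
    def render(self, text: str) -> str:
        return text

class HtmlFormatter(Formatter):
    # New behavior added by extension, not by modifying Report (Open/Closed)
    def render(self, text: str) -> str:
        return f"<p>{text}</p>"

class Report:
    """High-level class: closed for modification, open for extension."""
    def __init__(self, formatter: Formatter):
        self._formatter = formatter

    def publish(self, text: str) -> str:
        return self._formatter.render(text)

print(Report(HtmlFormatter()).publish("sprint done"))  # → <p>sprint done</p>
```

Adding, say, a `MarkdownFormatter` later would require no change to `Report` at all, which is the point of both principles.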
Design Principles: Implement Design Patterns (to be read from the study material)

• Experienced Object Oriented (OO) developers build up a collection of both general principles and idiomatic solutions that guide them in the creation of software
• These principles and idioms, codified in a structured format describing the problem and solution, are called patterns
• A design pattern names, abstracts, and identifies the key aspects of a common design structure that make it useful for creating a reusable object-oriented design
• A pattern describes a design structure that solves a particular design problem within a specific context, amid forces that may have an impact on the manner in which the pattern is applied and used

Patterns help to:
• Document design decisions and rationale
• Reuse the wisdom and experience of master practitioners
• Reuse successful designs
• Weigh design alternatives
• Get to the right design faster
Agile Testing Techniques

Testing: Overview

• Testers' involvement is necessary right from the initiation of the project, so that they can actively participate in defining the Definition of Done and in the exploration of user stories
• Testers follow up on validation criteria in order to know what's expected of a new feature or of a change being made in the system
• Both testing and development are carried out in the same Sprint
• Testers work out how the system should behave after the new changes are incorporated and whether it satisfies the business needs or acceptance criteria
• While the development team works on the design and code changes, the testing team works on the testing approach and comes up with the test scenarios and test scripts
• The testing team is expected to work on smaller modules and test them as and when they are provided, rather than waiting for the whole functionality to be developed completely
Testing: Concepts

Test Strategy and Planning
• Have a structured and definite testing approach and strategy in the project
• It is recommended to update the test strategy at the beginning of each Sprint
• Documentation of the test strategy starts from the release initiation phase; it acts as a living document throughout the release
• The test plan has to be updated in each Sprint and is continued until the release
• Testers may test one functionality while writing a test plan for another and reviewing the design for a third story, carrying out many similar activities at the same point in time
• Testers participate in design discussions in Agile projects
• In the Sprint review, testers prepare the test data sheets and run the tests in front of the business stakeholders and Product Owner, showing them how the product is working and whether or not it is meeting their acceptance criteria

Test Coverage in Agile
• Inadequate test coverage might result in missing critical tests for some requirements
• Test coverage for a user story is usually discussed and finalized during the backlog grooming sessions and later during the Sprint planning meetings
• Creating a traceability matrix between requirements, user stories and test cases avoids missing test scenarios. A link can also be created between test cases and user stories
• Analyzing the impact, reviewing the source code to identify modules that are to be changed, and ensuring that all the changed code has been properly tested ensures test coverage
• Code coverage tools should be leveraged for verifiable thoroughness as part of the automation effort
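The traceability-matrix idea can be sketched in a few lines of Python (a hypothetical illustration; the story and test-case IDs are invented): map each user story to the test cases that cover it, and report any story with no coverage.

```python
# Hypothetical traceability matrix: user story ID -> covering test cases
traceability = {
    "US-101": ["TC-1", "TC-2"],
    "US-102": ["TC-3"],
    "US-103": [],          # no tests yet: a coverage gap
}

def uncovered_stories(matrix):
    """Return story IDs that no test case traces back to."""
    return sorted(s for s, tests in matrix.items() if not tests)

print(uncovered_stories(traceability))  # → ['US-103']
```

In practice the same mapping lives in a test management tool; the point is simply that an explicit link per story makes gaps mechanically detectable.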
Testing: Concepts

Test Data Management in Agile
• Agile testing in Sprints gets difficult when the system depends on another team for test data; Sprint timelines do not give enough room for delays in data setup
• Test data setup is done along the Sprint so that the data is available at least 'just in time' (JIT), if not earlier, by the time the developer checks in code to claim it 'as done'
• Developers usually practice test automation for unit and integration testing and use a TDD approach to complete user stories, so a synchronized supply of test data by the testing team is essential
• Analysis of the data needs is done during the backlog grooming sessions and the Sprint planning meetings to ensure the complete data setup needs are known
• If the data is complex and huge, the team owning the data may not provide adequate support. Hence the discussion should ideally start with the business at least a Sprint early, or as per the applicable SLA, to get the data at the appropriate time
• Members (from the business) who are responsible for the data should be included as part of the Scrum team for the Sprints where their support is required
• Test data created in earlier Sprints can either be reused directly or used with minor modifications, avoiding redundant data requests/setup and reducing the turnaround time for test data creation and manipulation

Impact Analysis in Agile
• The testing team plays a significant role in impact analysis and design reviews, as they are involved in the design discussions happening for a story
• This helps the developer with impact analysis and with debugging issues
• Testing team members go down to the code level to analyze or debug an issue along with the developer
Importance of Automation for Testing in Agile

• Agile needs continuous testing of the functionalities developed in past Sprints; automation helps complete this regression testing on time
• Helps minimize the cost of the project in the Release Sprint (the last Sprint in the release), as the majority of the testing can be handled through the existing automation
• Minimizes the rework that can happen due to error-prone manual processes
• Testers can put their effort into testing areas of new functionality where manual intervention is really needed
• Brings predictability to the tests run as part of Sprint testing
• Helps cover more regression tests with less effort than could be done manually, given the higher number of builds and frequent changes
Testing: Types

Unit Testing
• Part of every Sprint, for every story being developed
• Allows developers to identify as many problems as possible at an early stage
• Doing it in a repeatable and automated fashion helps them make more frequent and stable code changes
• JUnit, NUnit, JMock etc. are some tools to effectively and efficiently test the code being developed

Integration Testing
• Iterative and continuous in nature
• After a few units or functions are developed and tested by the team, if systems have to be integrated, that story is planned for the Sprint on priority and integration testing is performed by running tests that were previously done in isolation
• A project team should have outlined the capacity for integration testing based on the current scenario of the project, to seize any opportunity for integration
• The team should create stories and functions to integrate, and plan tasks to carry out the sequence of integration of user stories, so as to complete integration testing in every Sprint

For example, imagine a new data feed has to be sent from System A and consumed by System B. There will be stories to create the feed from System A and stories to consume the feed in System B; both pieces of work happen in isolation. After these are tested by the team, there should be one more story to actually integrate the two systems for the new feed and test it. This helps address any integration issues between the two systems. To summarize, in Agile it is not expected to wait until the end of the development phase to carry out integration testing.

Smoke Testing
• Done to confirm that whatever is dropped to QA for testing is good enough to move ahead with system testing
• A subset of system test cases is used, targeting some of the critical features
• Happens once in a release path prior to the start of system testing
• Important when there are multiple release paths catering to different releases
• Happens after every code backfill into the current test environment from the release path
• Carried out after a code drop by all team members, irrespective of whether they are developing or testing
• For better use of time, since code drops happen very frequently in Agile, there should be a baselined set of test scripts for smoke testing in a particular Sprint. This set of test scripts/cases should be distributed to all team members irrespective of their work, so that when a code drop happens, the team can complete the smoke testing quickly
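The JUnit/NUnit-style unit testing described above can be sketched with Python's built-in unittest module. This is a hedged illustration: the function under test, `transfer_fee`, and its fee rule are invented for the example.

```python
import unittest

def transfer_fee(amount: float) -> float:
    """Hypothetical business rule: transfers of 1000 or more are free,
    smaller transfers pay a flat fee of 5."""
    return 0.0 if amount >= 1000 else 5.0

class TransferFeeTest(unittest.TestCase):
    # Small, automated, repeatable checks that run in every Sprint
    def test_large_transfer_is_free(self):
        self.assertEqual(transfer_fee(1500), 0.0)

    def test_small_transfer_pays_flat_fee(self):
        self.assertEqual(transfer_fee(200), 5.0)
```

Such tests are typically run with `python -m unittest` on every build, which is what makes the frequent, stable code changes mentioned above practical.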
Testing: Types

System Testing
• In scenarios where system testing of a story needs the completion of another story, prioritizing the stories with an eye to the end product and the testing needs is critical
• Since it is iterative in nature, the same scripts have to be executed as and when different interlinked stories are developed
• The system test scripts have to be designed in line with the story under test, not the complete feature
• Ensure that the scripts are designed in such a way that they require minimum modification when the end product/feature is ready for testing

Regression Testing
• Done prior to the release of new changes into production, basically towards the end of system testing
• Done in every Sprint to ensure that all the functionalities developed up to the previous Sprint are still working fine
• Having an automated suite of test cases is important to help the team with regression testing. Without automation, the regression objective of covering all prior Sprint functionalities in a given Sprint cannot be realized
• For maintenance-type projects, regression testing of all the existing functionalities is carried out in the last Sprint prior to the release (also known as the Hardening Sprint or Release Sprint)
• Based on the volume of changes, the team should plan to execute the existing regression suite in small portions in every Sprint before the Release or Hardening Sprint. This way, the team avoids most issues in the Release or Hardening Sprint and minimizes the risk to the release

Production Implementation Testing
• When the product is released to production or to the end users, the development and testing teams are involved in the implementation
• The testing team prepares a checkout test plan and lists the cases to be tested once the code is implemented in production
• During this phase, if the team finds something which is not working as expected, they fix it immediately, thereby ensuring the quality of the product
• In the last Sprint prior to the release, the team's capacity should have factored in the activities that have to be done for the implementation, from deployment to testing the changes in production and carrying out the required preparedness
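The advice to run the regression suite "in small portions in every sprint" can be sketched as a simple round-robin split. This is an invented illustration; the test-case IDs are hypothetical and real teams would weight the split by risk or execution time.

```python
# Split a regression suite across the sprints before the Hardening Sprint,
# so each sprint executes a portion instead of leaving it all to the end.
regression_suite = [f"RT-{i}" for i in range(1, 11)]  # RT-1 .. RT-10

def portion_for_sprint(suite, sprint_index, sprints_before_release):
    """Round-robin assignment of regression cases to a sprint (0-based index)."""
    return [case for i, case in enumerate(suite)
            if i % sprints_before_release == sprint_index]

print(portion_for_sprint(regression_suite, 0, 3))
# → ['RT-1', 'RT-4', 'RT-7', 'RT-10']
```

Across all sprint indices the portions cover the whole suite exactly once, so nothing is silently dropped from regression.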
Testing: Types

Performance Testing
• Determines the behavior of the system at a particular load, and the breakpoint of the system
• Done in the Release Sprint after the functional testing is completed
• However, if performance is a critical aspect of the project and there is a high risk that a performance issue could lead to significant changes in the application, the team is advised to run performance testing after a few iterations
• The team needs to plan and work on the performance testing a couple of Sprints prior to the Sprint where it is planned; this helps set up the environment for the performance tests
• The team must be aware of the 'Done' criteria of performance testing upfront. This applies to many NFR (Non-Functional Requirement) tests such as availability, security and scalability, and not just to performance testing

Exploratory Testing
• An unorthodox means of testing with no specific plan or testing approach
• Exploring and identifying what is working and what is not in the software; time is a key factor in this testing
• Most of the standard test cases for the Sprint stories are automated. Even though automation captures the majority of the functionality, it cannot identify usability, reliability, compatibility and other quality issues; manual verification needs to happen to ensure the product is behaving as expected
• Testers identify areas that can have issues or impacts, design manual test cases, run the tests, analyze the results and decide on the next set of test cases to run
• During story development, asking the right set of questions can bring more clarity to the acceptance criteria and help developers design the product the right way

For example: a web page is being developed, and a story lists all the fields that need to be present on the page. The story normally won't have details of the upper and lower limits of the data entry fields, the browsers supported, or the load the page can take. All those questions bring the right kind of clarity to the story, so it can be developed the right way from day one.

Client/User Acceptance Testing
• There are regular Sprint reviews (at the end of each Sprint) where the team gets input from the business stakeholders on the product being developed
• Formal User Acceptance Testing is planned in the Release/Hardening Sprint, when all functionalities are fully developed
• The chance of identifying defects during this phase is low because of the regular feedback received as part of Sprint reviews
Testing: Defect Management

• Whenever stories are available to test, the testing team starts testing and finds bugs
• The usual practice: if the bug or issue can be fixed in a day, no defect is raised and the development team is asked to work on the issue. If the issue is not fixed in a day, a defect is raised to make the team and the relevant stakeholders aware of it
• This has to be tracked formally in a defect management tool
• Critical defects are discussed with the team along with the Product Owner, who finalizes the criticality of the defect and of the story under which the defect has been logged

Defects are prioritized on the following basis:
1. Impact on the test coverage
2. Impact on meeting the acceptance criteria of the story

What if a defect comes up late during the Sprint, or a defect is not directly related to any of the stories taken up in the Sprint?
• In such cases, a defect is raised and it is discussed whether it can be fixed in the same Sprint, and how much effort is required to fix and retest it
• If the effort is too high and it is forecast that the fix cannot be delivered as part of the current Sprint, the defect is either moved to a related story that is then planned in a future Sprint, or it is converted into a new story and assigned to a future Sprint
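The triage rule above (fix quickly without a defect, raise a defect otherwise, defer when the effort exceeds what is left of the Sprint) can be sketched as a small decision function. This is an invented illustration of the slide's rule of thumb, not a prescribed process.

```python
def triage(estimated_fix_days: float, remaining_sprint_days: float) -> str:
    """Decide how to handle a newly found issue (illustrative rule of thumb)."""
    if estimated_fix_days <= 1:
        return "fix-now"            # quick fix: no formal defect raised
    if estimated_fix_days <= remaining_sprint_days:
        return "raise-defect"       # track formally, fix within this sprint
    return "move-to-future-sprint"  # convert to/attach to a story for later

print(triage(0.5, 5))  # → fix-now
print(triage(3, 5))    # → raise-defect
print(triage(4, 2))    # → move-to-future-sprint
```

In a real team the Product Owner makes the final call on criticality; the function only encodes the effort-based part of the decision.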
Testing: Metrics

Defect Removal Effectiveness (DRE)
A metric that tells how many defects the testers identified in their testing and how many slipped to the next stage, i.e. UAT, production or the warranty period. DRE is expressed as a percentage.

Automation Test Coverage
Provides insight into the percentage of test cases being automated as part of the project. Each Sprint covers certain test cases to be automated, which can then be run automatically in future Sprints. In this way the number of automated test cases increases Sprint by Sprint, reducing the number of manual test cases and providing a higher percentage of automation that can cover system cases during release-level testing instead of manual execution.

Automation Coverage Effectiveness
Defines how effective the automated test cases are, based on the number of defects caught through automation testing. The more defects caught by running automated test cases, the higher the effectiveness. With each Sprint, the team can work on the defects caught manually in the earlier Sprint and incorporate the changes into the automation, so that the effectiveness of the automated test cases increases. If the effectiveness stays the same or decreases, the team needs to make changes to the automation testing strategy.
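These metrics reduce to simple ratios. A sketch, assuming the common DRE formula (defects found before the next stage divided by the total found before and after, as a percentage) and invented counts:

```python
def dre(found_in_testing: int, slipped_later: int) -> float:
    """Defect Removal Effectiveness as a percentage."""
    return 100.0 * found_in_testing / (found_in_testing + slipped_later)

def automation_coverage(automated_cases: int, total_cases: int) -> float:
    """Percentage of the project's test cases that are automated."""
    return 100.0 * automated_cases / total_cases

# Hypothetical release numbers: 45 defects found in testing, 5 slipped to UAT;
# 60 of 80 test cases automated so far
print(dre(45, 5))                   # → 90.0
print(automation_coverage(60, 80))  # → 75.0
```

Tracked Sprint over Sprint, the coverage figure should trend upward, matching the slide's expectation that automation grows incrementally.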
Testing: Metrics

Test Coverage (using code coverage)
• Answers the question: "How much of the source code is actually exercised by my test system?"
• Testers can look inside the black-box code and analyze how much of the code was actually exercised by the test set
• Tools like the Semantic Designs test coverage tool or BullseyeCoverage can be used

Unexercised code can mean any of the following:
• The test system is deficient
• The source has "dead code" (code that cannot be reached by any path)
• Extra code is in the codebase that is not necessary (perhaps a feature was removed but not all of the associated code was deleted)

Workflow:
1. Compile the code with the test coverage tool integrated
2. Use that code to run your test cases
3. After testing is complete, check the tool report to see the code coverage achieved by the tests that were just executed
4. Based on the report and discussion with developers, testers decide whether additional or updated tests should be created, so as to achieve better code coverage and assurance that every corner of the code is tested
5. Testers should add to or update the test suite to cover code the tests did not even reach; the tool report shows which paths/classes/branches/files were covered when the software was executed to run the test suite

Many times these tools provide details beyond the percentage of code coverage: product quality metrics (PQM) such as memory issues, dead code and clone identification, which eventually help developers clean and optimize the code further.
Testing: Challenges

Challenge 1: Code broken accidentally due to frequent builds
• Description: since code gets changed and compiled daily, the likelihood of code breaking existing features is much higher
• Solution: have a smoke test bed created, to be executed after every code build. As resource constraints prevail, it is not practical to have team members do this daily; instead, have efficient automated test cases built to confirm environment stability after code drops

Challenge 2: Performance testing bottlenecks
• Description: Agile involves regular builds and subsequent testing, so environment availability becomes difficult for the performance testing activities
• Solution: performance testing is mandatory for projects undergoing changes with architectural impacts, and it is essential for Agile projects to carry it out. To overcome the challenge, the team should always have a separate environment for performance testing. The setup should be backfilled with the correct code base and then a round of smoke testing carried out on it. There should be separate stories to track the performance testing progress

Challenge 3: Wastage of automation scripts after a couple of Sprints
• Description: there is always a risk of a lot of rework and wastage of automation scripts if they are not developed properly. Since a change in project course is quite normal in Agile, automation scripts need modifications, which may impact their ROI (Return on Investment)
• Solution: automation scripts should have a high reusability factor, with data sheets driving the automation execution. This way the team can reduce code changes to the automation scripts for any change in the business functionality. This is an industry-wide best practice, but is needed even more in Agile, which is prone to more frequent changes
Testing: Challenges

Challenge 1: Wrong selection of the Sprint to start automation of test cases
• Description: it is always a challenge to decide on the correct Sprint to start automating manual test cases. If automation is planned in the early stages of the release (i.e. from the initial Sprints onwards), it leads to a lot of wasted effort: the team is not yet settled and the output is too meagre to automate, so the automation work gets redone in subsequent Sprints. In case of a late start (i.e. during the last one or two Sprints of the release), the team will have more manual test cases than automated ones, which significantly impacts the team's velocity and delivery capabilities
• Solution: although it varies from project to project, planning the automation ideally after 3 Sprints is a good idea, since by then there will have been significant application development and stability to carry out the automation work

Challenge 2: Wrong selection of automation tool
• Description: in Agile, user stories get reprioritized and nothing is constant on a day-to-day basis. Hence, if the automation tools are not selected properly, the project team risks running into higher costs and lower ROI for the automation
• Solution: for Agile teams, it is always recommended to go for open-source tools without licensing costs. This gives automation access to all team members, ensuring better/higher output at low cost. However, for projects with members dedicated to automation work, tools should be selected based on the kind of project being developed. E.g. a tool with command-line support is perfect for projects that involve testing backend systems rather than a user interface
Testing: Best Practices

28
Automation

29
Automation: Overview
 Automating the repetitive procedures can provide real value to the projects
 Continuous integration/builds, unit, functional & integration test execution and continuous/automated deployment are common examples of applying automation

Benefits:
 Automation in software design and testing eliminates risks associated with human error
 Spares developers and testers from repetitive activities and allows them to focus on the more critical aspects of system quality, thus improving build and testing effectiveness as well as operational efficiency
 Ensures reusability and scalability, which in turn translates into significant cost savings

Team needs to:
 Investigate how to automate the process and identify the tools that could assist with automation
 Estimate the level of effort required to implement the automation against the total cost and risk of manual procedures

Areas where automation can play an important role:
• Build and deployment of the system under design
• Code deployment
• Unit test execution
• Code coverage report generation
• Functional & performance test execution
• Code quality metrics report generation
• Coding conventions report generation

30
Automation: Concepts
Test Data Management in Agile

• Agile testing in Sprints gets difficult when the system depends on another team for test data; Sprint timelines do not give enough room for any delays in the data setup
• Test data setup is done along the Sprint so that data is available at least 'just in time' (JIT), if not earlier, by the time the developer checks in code to claim it as 'done'
• Since developers usually automate unit and integration testing and use a TDD approach to complete user stories, a synchronized supply of test data by the testing team is essential
• Analysis of the data needs is done during the backlog grooming sessions and the Sprint planning meetings to ensure the complete data setup needs are covered
• If the data is complex and huge, the team owning the data may not be able to provide adequate support. Hence the discussion should ideally start with the business at least a Sprint early, or as per the applicable SLA, to get the data at the appropriate time
• Members (from business) who are responsible for the data should be included as part of the Scrum team for the Sprint where their support is required
• Test data created in earlier Sprints can either be reused directly or used with minor modifications, avoiding redundant data requests/setup and reducing the turnaround time for test data creation and manipulation

31
Automation in Development

Unit Test suite
•Developers can have a series of their tests automated as simple unit tests and make them available along with their versioned codebase
•These tests should be living test suites, i.e. they should run to success at any point of time
•Whenever a code module is changed, the corresponding unit tests should also be updated, ensuring that the test cases pass
•Automated unit tests are the first line of defense and the key step in the "Fail Fast" approach
•Unit testing frameworks available include JUnit (for Java), NUnit (for .NET), and utPlSql (for Oracle PL/SQL)
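A minimal sketch of such a "living" unit test follows; the Calculator class and the check() helper are made-up illustrations, and a real project would typically use JUnit's @Test annotations and assertion methods instead of a hand-rolled main method.

```java
// Illustrative "living" unit test. Calculator and the check() helper
// are made up for this sketch; a real project would use JUnit's
// @Test annotations and assertion methods instead.
class Calculator {
    int add(int a, int b) { return a + b; }
}

public class CalculatorTest {
    static void check(boolean condition, String name) {
        if (!condition) throw new AssertionError("FAILED: " + name);
        System.out.println("PASSED: " + name);
    }

    public static void main(String[] args) {
        Calculator calc = new Calculator();
        // These must pass after every change to Calculator --
        // the suite "runs to success" at any point in time
        check(calc.add(2, 3) == 5, "adds positive numbers");
        check(calc.add(-2, 2) == 0, "handles negative operands");
    }
}
```

Because the suite is versioned with the code, any change that breaks `add` fails the build immediately, which is the "Fail Fast" behavior described above.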

Code Quality Analysis
•Static code analysis needs to be performed to identify and quantify code quality metrics such as coding standards, security risks, and adherence to best practices
•As part of the development practice, the code quality checks should be run
•Only after attaining satisfactory quality metrics should the code be checked into the version control tool
•Some of the key tools for static code analysis are CAST and SONAR
•Developers can make use of the IDE (Integrated Development Environment) integration capabilities of these tools to integrate static code analysis into their development process

32
Automation in Build
Continuous Integration (CI)
The aim of practicing Continuous Integration is to maintain a build environment which is able to generate "Production Ready" builds.
Following is the process:

analyze the code for code quality → execute unit test cases → build → upload the build artifacts → publish the build and test results → alert the team with results

To aid in this practice, the following support systems are required:

• Source Version Control tools – The version control tool should be able to determine and communicate any new code changes to other tools, either through "hooking" (proactively letting the other tools know when new code changes are made) or "polling" (other tools poll the version control tool to determine whether new code changes were made). Example – Git, ClearCase, SVN

• Build tool – A robust tool/framework that helps in maintaining and building code more easily. Example – Maven, ANT

• Static code analysis tool – As described in above section, these tools are used to check the code quality. Example – CAST, SONAR

• Build repositories – A robust repository to maintain the builds that can be uploaded/downloaded/searched with ease. Example – Nexus (for
maven builds)

• CI server – Interacts with all the above components and perform the actual build of the application along with execution of tests either
automatically or just by click of a button. Example – Jenkins, Hudson
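Wired together, these support systems usually live in the CI server's job definition. The sketch below is a hypothetical Jenkins declarative pipeline for a Maven project; the stage names, polling schedule, and the SonarQube/Nexus steps are illustrative assumptions, not a prescribed setup.

```groovy
// Hypothetical Jenkins declarative pipeline for a Maven project.
// Stage names, schedule, and tool steps are illustrative assumptions.
pipeline {
    agent any
    triggers {
        pollSCM('H/15 * * * *')              // poll the version control tool every ~15 min
    }
    stages {
        stage('Checkout')          { steps { checkout scm } }
        stage('Build & Unit Test') {
            steps { sh 'mvn clean verify' }  // compile, run unit tests, package
        }
        stage('Static Analysis')   {
            steps { sh 'mvn sonar:sonar' }   // assumes a configured SonarQube server
        }
        stage('Publish Artifacts') {
            steps { sh 'mvn deploy' }        // upload the build to a repository such as Nexus
        }
    }
    post {
        failure {
            mail to: 'team@example.com',     // placeholder address
                 subject: 'Build failed',
                 body: 'See the Jenkins console output.'
        }
    }
}
```

Each commit thus flows through build, test, analysis, and artifact upload automatically, with the team alerted on failure.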
33
Automation in Deployment
Continuous Deployment (CD)
A mature Agile team extends the Continuous Integration practice so that the application can also be automatically and regularly deployed into multiple environments (realistically, lower environments like DEV and QA). This is needed for maintaining "Production Deployable" builds. Any deployment issues are also validated as part of this process.

A typical CD process includes:
• All Continuous Integration steps
• Deployment into the needed environment
• Execution of smoke tests
• Publishing of the deployment results
• Alerting the team with the results

• Usually an enterprise application follows a multi-tier architecture that includes multiple technologies (Example – a J2EE application with an Oracle backend). So, to achieve deployment automation in a multi-tier architecture, there should be automated orchestration.
• For automating the deployments, the delivery team can use custom scripts or an automation framework like uDeploy. CI build servers like Jenkins/Hudson can be configured to perform the CD activities as well.

34
Automation in Testing
It is recommended to create the automated test cases right from the beginning of Sprint execution, in parallel with the User Story development. Once the test cases are automated, they can be scheduled on the build servers to be triggered automatically.

•Few sanity test cases can be handpicked and a suite of tests can be created
Smoke testing •Test suites will be executed from the build server after every deployment tasks
•Ensures that the basic features are validated automatically

•Test cases are created in the needed automation framework


•Though it might take some effort initially, it will ensure that there is no need for automating the tests in future
System Testing •The Rule of Thumb should be whatever test cases that can be automated, should be automated
•Ensure test case library is always updated
•Future maintenance/iterative work on the same module will not need excessive testing effort

•To keep the regression testing effort to a minimum, automation of test cases is mandatory
•Within the automated test cases, required regression test cases can be identified and run whenever required
Regression Testing •For new features, since the test cases are getting automated simultaneously during the System Integration testing, the
regression testing cases are also getting refreshed as part of each release
•With the minimal regression testing effort, more time can be spent on building new features

•Aim of defining the acceptance criteria is to automate them easily


•Multiple frameworks/tools exists to support these acceptance testing including Cucumber, FitNesse etc.
Acceptance Testing •This method of defining the acceptance test cases initially and then developing the code based on it is known as Acceptance
Test Driven Development (ATDD)
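For instance, an acceptance criterion written first as an executable specification (the ATDD style) might look like the hypothetical Cucumber feature below; the account types and amounts are invented for illustration.

```gherkin
# Hypothetical Cucumber feature file; account types and amounts
# are invented for illustration.
Feature: Funds transfer between own accounts

  Scenario: Transfer succeeds when the balance is sufficient
    Given a savings account with a balance of 500
    And a current account with a balance of 100
    When the customer transfers 200 from savings to current
    Then the savings account balance should be 300
    And the current account balance should be 300
```

Step definitions are then implemented, and the code is developed until the scenario passes.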
35
Automation: Emerging trends
LiquiBase:
• A tool that tracks the changes that are made to the database
• Changes to the database are maintained in an XML
• Helps in subsequent Rollback of changes or applying a change in one environment to another
• Can be used to create/rollback test data for unit test cases or to move the changes from DEV environment to higher environments
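A minimal sketch of such an XML changelog, using Liquibase's standard changeSet syntax; the table, column, and author names are placeholders.

```xml
<!-- Hypothetical Liquibase changelog; table/author names are placeholders -->
<databaseChangeLog
    xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
        http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.8.xsd">

    <changeSet id="1" author="dev-team">
        <createTable tableName="customer">
            <column name="id" type="int"/>
            <column name="name" type="varchar(100)"/>
        </createTable>
        <!-- Liquibase can auto-generate the rollback for createTable,
             so the change can be reverted or replayed per environment -->
    </changeSet>
</databaseChangeLog>
```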

Chef: A tool that automates deployment and management of infrastructure. Infrastructure is treated as versionable, testable, and repeatable similar to
the application code.

36
Automation: Challenges

Pressure from Product Owners
• Product Owners would like to get their working product out and may not be willing to let the team spend time on automation frameworks that might benefit the long term
• Stakeholders must be taken into confidence on the benefits of automation
• Return on Investment (ROI) can be calculated for the adoption of automation and used for articulation

Maintenance of automated scripts
• Automation scripts have to be well maintained along with the production code
• Enough support should be provided to the team to ensure that changes to the automation scripts are taken up along with business priority
• As the total number of automated tests increases, the time taken for building and executing the tests tends to increase
• The team should decide on the priority of the tests executed along with the build and ensure that needed refactoring of automation scripts is performed wherever possible

Lack of follow through
• The team might lose sight of the vision and may compromise on the automation activities
• The delivery lead or any member of the team should constantly remind the team in case of any deviations

Agile Maturity
• The development team might have the mindset of giving preference to the code over the automation scripts or frameworks
• The team should be trained appropriately to instill the practice of creating automated tests along with development, for sustainable maintenance of the application

Timing of Automation
• Automating the build process when the project is nearly over provides very little benefit
• It can save dozens, if not hundreds, of hours when automated as soon as development/testing begins

37
Case Study 1 - Deployment Automation

Since Jenkins supports the execution of UNIX scripts, a new Deploy job is created in Jenkins. As part of this deploy job, a series of UNIX commands is executed to get the application and DB components from the last successful build and deploy them simultaneously to the needed servers using the above-mentioned toolkits. With this approach, the developers can deploy the components to the needed Tomcat server and DB in the right order through a simple click of a button in Jenkins.

38
Case Study 2 - Code Generator

39
Continuous Integration and Associated Tools

40
Concepts
Continuous Integration is the practice of continuously integrating source code into a central repository, wherein each instance of integration is verified by the automated build process to ensure the stability of the project (working software) at all times.

• Used effectively, Continuous Integration delivers software much faster and with fewer bugs
• Helps to identify integration defects much earlier in the software development lifecycle
• Reduces risk by providing faster feedback
• Facilitates coordination between team members and encourages a collaborative approach to problem solving and process improvement in the project

 Requires that every time somebody commits any change, the entire application is built and a comprehensive set of automated tests is run against it
 If the build or test process fails, the development team stops whatever they are doing and fixes the problem immediately
 Goal is to have software in a working state all the time
 Continuous Integration was first described in the book 'Extreme Programming Explained'

SCM Checkout → Static Code Analysis and FOSS Compliance check → Unit Testing, Code Coverage, Build & Packaging → Obfuscation → Deployment → Functional / Performance testing → Email notification

41
Continuous Integration Process
Things needed before getting started with CI:

Software Configuration Management
•Everything in the project must be checked in to a single version control repository, i.e. code, tests, database scripts, build and deployment scripts, and anything else needed to create, install, run, and test the application
•Some projects don't use any form of version control on the assumption that the project is not big enough to warrant it, which is incorrect

Automated Build
•The project build, test and deployment process should be run in an automated fashion
•The team must be able to run the build process in an automated way from the continuous integration environment

Continuous Integration Server (two components)
•First: a long-running process which can execute a simple workflow at regular intervals
•Second: provides a view of the results of the processes that have been run, notifies the team about the success or failure of the build and test runs, and provides access to the test reports, installers and so on

Agreement of the Team
•Requires a degree of commitment and discipline from the development team
•The entire team must follow the process of checking in small incremental changes frequently to mainline, and must agree that the highest priority task on the project is to fix any change that breaks the application
•If any of the team members don't adopt the discipline necessary for it to work, the attempts at Continuous Integration will not lead to an improvement in quality
42
Continuous Integration Flow Diagram

43
Stages in Continuous Integration

44
Stages in Continuous Integration
Stage for Build, Unit and Integration Testing and Quality Check

The build gets initiated when the developers commit their individual work/code to the version control repository. The CI server, which is installed as part of the project environment, continually polls this repository for changes made by the development team (the polling frequency could be, for example, every few hours based on the needs of the project).

The CI server then detects changes, if any, in the version control repository, retrieves the latest copy of the source code from the repository, and executes the build script using build automation tools to perform integration of the software.

Once a successful build is generated, the CI server triggers the validation of test cases/scenarios and code quality inspection. The process of validation and inspection is automated using tools for unit and integration testing and inspection.

Though these steps are automated, some manual intervention may be required in these stages. After this activity, feedback is generated by the CI server and communicated to the team members through various mechanisms such as email and text message.
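The poll-detect-build cycle described above can be sketched as follows; VersionControl and CiServer here are hypothetical stand-ins for the real SCM and CI server integrations, not any actual tool's API.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a CI server's poll-detect-build cycle. VersionControl and
// CiServer are hypothetical stand-ins, not a real CI server's API.
class VersionControl {
    private int latestRevision = 0;
    void commit() { latestRevision++; }            // a developer checks in code
    int latestRevision() { return latestRevision; }
}

class CiServer {
    private int lastBuiltRevision = 0;
    final List<String> buildLog = new ArrayList<>();

    // One polling pass: trigger a build only when a new revision exists
    void poll(VersionControl vcs) {
        int latest = vcs.latestRevision();
        if (latest > lastBuiltRevision) {
            buildLog.add("built revision " + latest);  // retrieve source, run build script
            lastBuiltRevision = latest;                // tests and inspection would follow
        }
    }
}
```

A real server would run poll() on a configured schedule and follow each successful build with automated test execution, inspection, and team notification.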

45
Stages in Continuous Integration
Stage for System/Functional/Non-functional Requirement Testing
• Component Tests: Verify portions of a system such as database, file systems, or network end points
• System Tests: Exercise a complete software system including external interfaces
• Functional Tests: Executed to test the functionality of the application or product from the client perspective

Component tests take a bit more time to run when compared with Unit testing because of multiple dependencies whereas the System
and Functional tests take the longest time to run

 The CI system would be configured to automate and run all these types of tests
 Tests can be categorized into distinct buckets, running the slower tests (for example, Component and System) at different intervals than the faster tests (for example, Unit)
 The CI system configuration would run each of these test categories with different staged builds
 Slower-running tests (for example, System and Functional) can be automated through tools interfaced with the CI server

Stage for Automated Deployment


• Continuous Deployment enables the team to deliver working, deployable software at any point in time
• By running a fully automated build including compilation, all tests, inspections, packaging, and deployment, the team acquires the capability to release working software at any time and in any known environment
• CD is the culmination of all these engineering practices, enabling the release of working software at any time and in any place with minimal effort
• CD should include the capability to automatically roll back all changes applied in the deployment

46
Practices

Requires a significant degree of discipline from the development team as a whole

Projects must have a repeatable, reliable, and automated build process without any human intervention

Broken builds should be fixed immediately, before checking in further changes and triggering new builds; otherwise failures will compound into more problems

Deployment process of the application should be automated

Good infrastructure for automated build and test process

• The most basic functionality of CI server is to poll the version control system for new commits, running build scripts
for compilation, running the tests and notifying the results to the team
• Teams commonly determine test coverage, code duplication, adherence to coding standards, and other indicators of
health, and have the results displayed in the CI server’s summary page for each build
47
Step-by-step Process of Implementing Continuous Integration
Use Version Control Repository → Build Script for Automation → Configure the CI Server → Use Code Quality Analysis → Configure CI with Automated Deployment

Use Version Control Repository
• Manage all the source code changes and other software artifacts
• Uses a controlled-access repository
• "Single source point": all source code is available from one primary location

Build Script for Automation
• Developed and implemented (could be a single script or a set of scripts)
• Automates the software build cycle, which includes the compiling, testing, inspecting, and deployment processes
• Developers run Unit, Component, System, Functional, and Regression tests to speed up the build process

Configure the CI Server
• Configured within the project environment
• Continuously checks for any changes in the version control repository
• Runs an integration build by retrieving the source files whenever a change is committed to the version control repository

Use Code Quality Analysis
• Helps in stabilizing coding conventions
• Constantly monitors code for: 1. Average code complexity  2. Coverage of code  3. Duplication of code  4. Performance

Configure CI with Automated Deployment
• After the creation of each build, the new version of binaries should be automatically deployed to test servers
• Helps in improving integration with customer systems
• After each fix, the customer may verify new versions

48
ICIP & its Workflow
Infosys Continuous Integration (CI) Platform is a centralized hosted platform which enables project teams to leverage a predefined continuous integration workflow for build and deployment. It is a web-based solution designed to automate builds for Java/J2EE and .NET based application projects.

• Enables teams to schedule the builds to run at regular intervals or based on source control action performed by the developer
• Leverages the plugins such as Custom Tools, Source control management (SCM) plugins, Build tool plugins, Code analysis plugins,
unit testing plugin, Code coverage plugins, and Authentication and notification plugins
• Infosys internal team has developed custom plugins such as createuser plugin, copy-slave plugin, save-job plugin to define the
continuous integration workflow

• Leverages batch script execution to ensure activities such as Free and Open Source Software (FOSS) compliance and securing the intellectual property of the source code through obfuscation
• Identifies cross-project dependencies for Java/J2EE projects, builds multiple projects in one go, and supports parallel deployment on multiple web servers

49
Architecture Diagram of ICIP

50
Infosys Continuous Integration Platform (ICIP): Benefits

• A centralized platform available as-a-Service that reduces project teams' effort for setup & administration
• Significant reduction in cycle time through automated build & deployment
• Implementation of a standard CI process that leverages industry best practices & tools
• Improved adoption of the CI system due to deskilling of administration and build script creation
• Enhanced compliance & automation of several validations – code analysis, unit testing, code coverage, FOSS compliance, obfuscation, packaging, deployment, etc.
51
Tools Associated with Continuous Integration

52
CI Server Tools
Cruise Control:
• This server automates the process of integration by monitoring the team's source code repository directly
• Every time a developer commits a new set of modifications, the server automatically launches an integration build to incorporate the changes
Key features:
• Allows remote management as well as shell scripting
• Reports using email, exe, RSS
• Builds multiple projects on a single server
• Integrates with other external tools such as Ant, NAnt, MSBuild

Jenkins:
• Java-written open source CI tool
• Server-based system running in a servlet container such as Apache Tomcat
• Supports Software Configuration Management (SCM) tools that include CVS and ClearCase
• Can execute Apache Ant and Apache Maven-based projects

Rational Team Concert:
• Built on the Jazz platform by the 'Rational' brand of IBM
• Available in both a client version and a web version
• Licensed according to the number of users working with the software as well as the actions these users are allowed to execute with the system

TFS Team Build:
• Provides the functionality of a public build lab and is part of Visual Studio Team Foundation Server
• With Team Foundation Build, enterprise build managers can synchronize the sources, compile the application, run associated unit tests, perform code analysis, release builds on a file server, and publish build reports
• Build result data is propagated to the warehouse for historical reporting
53
Code Quality Tools

SonarQube
•Web-based application that can be used to manage code quality
•Efficiently balances a high-level view (dashboard, TimeMachine) with defect-hunting tools
•This makes it easy to uncover projects and/or components that are in technical debt, so that action plans can be established effectively

•Integrated development testing tool that automates various practices that have proven to improve productivity
and software quality
Parasoft® •Used for static code analysis, data flow static analysis, and metrics analysis
•Facilitates automated peer reviews, Cactus test creation, execution, optimization and maintenance, along with
runtime error detection

•Open source tool for Java that can calculate the percentage of code accessed by various tests
Cobertura •Identifies those parts of the Java program that lack test coverage
•Based on jcoverage tool

54
Build Tools

•For Java Project


ANT •Uses XML for defining the build process
•Open Source software released under the Apache Software License

•Open Source software tool


•Similar to ANT but needs .NET environment to work rather than Java
NANT
•The name NANT came from the fact that the tool is NOT ANT
•Released under GPL License

Maven
•Build automation tool, usually used for Java
•Can also be used to build and manage projects written in C#, Ruby, Scala
•Uses XML to define the software project being built
•Downloads Java libraries and plug-ins dynamically from one or more repositories
•Released under Apache License 2.0
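A minimal sketch of the XML project definition Maven uses; the groupId/artifactId coordinates and the JUnit dependency are placeholders for illustration.

```xml
<!-- Minimal illustrative pom.xml; coordinates are placeholders -->
<project xmlns="http://maven.apache.org/POM/4.0.0">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.example</groupId>
    <artifactId>sample-app</artifactId>
    <version>1.0-SNAPSHOT</version>
    <packaging>jar</packaging>

    <dependencies>
        <!-- Maven downloads libraries such as this dynamically from repositories -->
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.13.2</version>
            <scope>test</scope>
        </dependency>
    </dependencies>
</project>
```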

55
Unit Testing Tools

JUnit
• Unit testing framework for the Java programming language
• Important in the development of test-driven development
• One of a family of unit testing frameworks collectively known as xUnit that originated with SUnit

NUnit
• Open source unit testing framework for Microsoft .NET
• Serves the same purpose as JUnit does in the Java world
• One of many programs in the xUnit family

Parasoft Jtest
• Automated Java software testing and static analysis product
• Includes technology for unit test-case generation and execution, static analysis, regression testing, runtime error detection, and Design by Contract tests

Quest Code Tester for Oracle
• Component of the Toad Development Suite for Oracle
• First and only automated PL/SQL code-testing tool available
• Provides an efficient and reliable way to perform thorough PL/SQL code tests

56
Integration and Functional Testing Tools

Rational Functional Tester
•Tool for automated testing of software applications from the Rational Software division of IBM
•Allows users to create tests that mimic the actions and assessments of a human tester

HP Unified Functional Testing
•Formerly known as HP QuickTest Professional (QTP)
•Provides functional and regression test automation for software applications and environments
•Can be used for enterprise quality assurance

Parasoft SOA Test (Service-Oriented Architecture)
•Testing and analysis tool suite for testing and validating APIs and API-driven applications (e.g., cloud, mobile apps, SOA)
•Basic testing functionality includes functional unit testing, integration testing, regression testing, system testing, security testing, simulation and mocking, and load testing

Rational AppScan
•IBM Security AppScan, previously known as IBM Rational AppScan, is a family of web security testing and monitoring tools from the Rational Software division of IBM
•Tests web applications for security vulnerabilities during the development process, when it is least expensive to fix such problems

Microsoft VSTS Visual Studio Test Professional
•Integrated testing toolset developed by Microsoft to facilitate a plan-test-track workflow for collaboration between testers and developers

Selenium
•Set of different software tools, each with a different approach to supporting test automation
•Most Selenium QA engineers focus on the one or two tools that best meet the needs of their project; however, learning all the tools provides many different options for approaching different test automation problems
•The entire suite of tools results in a rich set of testing functions specifically geared to the needs of testing web applications of all types
•These operations are highly flexible, allowing many options for locating UI elements and comparing expected test results against actual application behavior
•Supports executing one's tests on multiple browser platforms
57
Performance Testing Tools

HP LoadRunner (HP Performance Center)
• Automated performance and test automation product from Hewlett-Packard for application load testing, examining system behavior and performance while generating actual load

Apache JMeter
• Apache project that can be used as a load testing tool for analyzing and measuring the performance of a variety of services, with a focus on web applications

Rational Performance Tester
• Tool for automated performance testing of web and server-based applications from the Rational Software division of IBM
• Allows users to create tests that mimic user transactions between an application client and server

58
Case Study Cont…

59
Case Study

60
THANK YOU

© 2018 Infosys Limited, Bengaluru, India. All Rights Reserved. Infosys believes the information in this document is accurate as of its publication date; such information is subject to change without notice. Infosys
acknowledges the proprietary rights of other companies to the trademarks, product names and such other intellectual property rights mentioned in this document. Except as expressly permitted, neither this
documentation nor any part of it may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, printing, photocopying, recording or otherwise, without the
prior permission of Infosys Limited and/ or any named intellectual property rights holders under this document.
