SW Testing Interview Questions Collection2

This document discusses key concepts in software testing through a series of interview questions and answers. It covers topics like the relationship between testing and quality assurance, the objectives and purposes of testing at different stages, and different testing techniques like static testing, reviews, and static analysis. The document provides concise definitions and explanations of fundamental testing terminology.

Uploaded by

Ahmed Swedan

Interview Questions Edited and uploaded by

Interview Questions Collections

Ch 1 Fundamentals of testing

1- How can a software defect cause harm to a person, the environment or a company?
Lots of possible answers.

2- What causes a defect?


An error

3- What is the effect of a defect?


A failure

4- What is the difference between error, defect and failure?


Error = mistake = human action
Defect = fault = bug = a flaw in a component or system
Failure = deviation from expected delivery or service
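The chain can be sketched with a hypothetical snippet: a programmer's mistake (error) leaves a flaw in the code (defect), which only becomes visible as a wrong result when the code is executed (failure).

```python
# Hypothetical example of the error -> defect -> failure chain.

def average(values):
    # Error: the programmer mistakenly divides by a hard-coded 2
    # instead of len(values), leaving a defect (fault/bug) in the code.
    return sum(values) / 2

# The defect only becomes a failure when the code is executed
# with input that exposes it:
result = average([10, 20, 30])   # expected 20.0, actual 30.0
print(result)                    # deviation from expected result = failure
```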

5- What is the relationship between testing and quality assurance?


Testing is a QA activity.

6- How can testing contribute to higher quality?


Testing measures the quality.

7- How much testing is enough?


Depends on the risk.

8- What are the common objectives of testing?


Find defects.
Gain confidence.
Prevent defects.

9- What is the benefit of designing test early?


Find defects early, prevent them from multiplying.

10- What are the different purposes of testing?


Development testing - To find as many failures as possible so that defects can be fixed.

Acceptance testing - To confirm that the system works as expected & to gain confidence that it has met the requirements; to assess the quality and give information to the stakeholders about the risk of releasing the software.

Maintenance testing - To ensure that no new defects have been introduced during development of changes.

Operational testing - To assess system characteristics such as reliability or availability.

11- What is the difference between debugging and testing?


Testing can show failures that are caused by defects.
Debugging identifies cause of a defect and repairs the code.

12- Describe each of the General testing principles


Testing shows the presence of defects
Can show defects are present, but cannot prove that there are no defects.

Exhaustive testing is impossible


We can’t test everything – use risk analysis and priorities to focus testing.

Early testing
Start as early as possible in development lifecycle and should be focused on defined objectives.

Defect clustering
A small number of modules contain most of the defects.

Pesticide paradox
Repeat the same tests and eventually they will no longer find any new bugs.

Testing is context dependent


Testing is done differently in different contexts.

Absence-of-errors fallacy
Finding and fixing defects does not help if the system built is unusable and does not fulfil the users’ needs and expectations.

13- What are the key activities in the test process and describe each…

Planning and control


Planning – verifying the mission of testing, defining the objectives
Control – measuring and analysing results

Analysis and design


Identifying test conditions and designing test cases

Implementation and execution


Prioritising, creating test suites, verifying test environment and executing tests

Evaluating exit criteria and reporting


Checking test logs against exit criteria
Writing a test summary report for stakeholders

Test closure activities


Collecting and consolidating data on the testing project.
Archiving testware and handing it over to maintenance.
Lessons learned.

14- Why is it important that a testing project has clear objectives?


People and projects are driven by objectives.

15- What is the difference (in mindset) between a tester and a developer?
Developer – constructive mindset (make it work). Tester – critical mindset (find where it fails).

16- What is meant by independent testing?


Separating testing from development.

17- Why must testers be able to communicate well?

Avoid bad feelings, enough info for developers to find and fix, state of project to stakeholders.

Ch 2 Testing throughout the software lifecycle

1- Name the four steps in software development.


Requirements, analysis, design, code.

2- What are the benefits of early test design?


Test preparation done in parallel with development, find faults early.

3- Identify the Characteristics of good testing.


For every development activity there is a corresponding test activity.
Each test level has test objectives specific to that level.
Analysis & design of tests for a given level should begin during the corresponding development activity.
Testers should be involved in reviewing documents.

4- What are test levels?


A group of test activities

5- What can we identify for each test level?


Their generic objectives.
The work products being referenced for deriving test cases (the test basis).
The test object (what is being tested).
Typical defects to be found.
Harness and tool requirements.
Specific approaches and responsibilities.

6- Name the 4 levels on the right of our V model.


Component testing
Integration testing
System testing
Acceptance testing

7- Name the two types of integration testing.


Component integration testing
System integration testing

8- What are the 4 types of acceptance testing?


User
Operational
Contract and regulation
Alpha and beta (or field)

9- What is a test type?


A group of test activities aimed at verifying the system based on a specific reason or target for testing.
A test type is focused on a particular test objective.

10- Can you name the four test types?


Testing of function (Functional testing)
Testing of non-functional software characteristics (Non-functional testing)
Testing of structure/architecture (Structural testing)
Testing related to changes (confirmation testing (retesting) and regression testing)

11- What is functional testing?


Testing of the functions that the system is to perform.
“What” the system does

12- What is non-functional testing?


“How” the system works – the quality characteristics.

13- Name 7 non-functional testing types.


Performance testing, load testing, stress testing, usability testing, maintainability testing, reliability testing,
portability testing.

14- What is structural testing?


Testing of the internal structure of the system.

15- What is meant by coverage?


% of the system exercised by tests (code, functions etc)

16- Name the two types of testing related to change and explain them.
Confirmation testing – re-testing following a defect fix.
Regression testing – re-testing to discover defects in unchanged areas.
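A minimal sketch of the distinction, using a hypothetical `discount` function in which a defect ("members were charged full price") has just been fixed:

```python
# Hypothetical module under test.
def discount(price, is_member):
    # Suppose the defect "members were charged full price" was just fixed here.
    return price * 0.9 if is_member else price

# Confirmation test (re-test): re-runs the exact scenario that failed,
# to confirm the defect fix works.
assert discount(100, is_member=True) == 90.0

# Regression tests: re-run existing tests for unchanged behaviour,
# to check the fix did not break anything else.
assert discount(100, is_member=False) == 100
assert discount(0, is_member=True) == 0.0
```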

17- When should we regression test?


Whenever there’s a change to the software (bug fix and / or new functionality) or the environment.

18- Maintenance testing


What is maintenance testing?
Testing done on an existing operational system, triggered by modifications, migration or retirement of the system.

19- How do we determine how the existing system may be affected by changes?
Impact analysis

Ch 3 Static techniques

1.What is meant by static testing?


Testing of system deliverables without execution of code.

2.How is static testing done?


Manually – reviews.
Automated – static analysis.

3.What are the benefits of reviews?


Find faults early.
Development improvement.
Find omissions.
Review process

4.Name the steps in a review process:


Planning
Kick-off
Individual preparation
Review meeting
Rework
Follow-up

5.Name the typical roles in a review:


Manager
Moderator
Author
Reviewers
Scribe (or recorder)

6.Name the 4 review types and describe their key points:

Informal
No formal process
Inexpensive
May (optionally) be documented
Vary in usefulness

Walkthrough
Step-by-step presentation
Gather information & establish a common understanding
Main purpose: learning, gaining understanding & defect finding
May vary from quite informal to very formal

Technical (peer)
Peer group discussion (no management participation)
Discuss, make decisions, evaluate alternatives, find defects, check conformance, achieve consensus
Documented, defined defect detection
May vary from quite informal to very formal
Ideally led by trained moderator (not author)

Inspections
Led by trained moderator (not author)
Main purpose: find defects
Most formal review type
Always based on a documented procedure
Entry & exit criteria
Formal follow-up process
Defined roles
Process improvement is often a part

7.What is static analysis?


Analysis of code (without execution) by a tool.


8.What are the benefits of static analysis?


Early detection of defects
Early warning about weak / too complex code
Improve maintainability of code
Code metrics

9.What types of defect will static analysis find?


Variable issues
Inconsistent interface between modules
Dead code
Standards violation
Security vulnerabilities
Syntax violations
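A hypothetical Python fragment containing two of these defect types; a static analysis tool such as pylint can report both without ever executing the code:

```python
# Hypothetical fragment: defects a static analysis tool can report
# without running the code.

def process(order):
    tax = 0.2                 # variable issue: assigned but never used
    total = 0
    for item in order:
        total += item
    return total
    print("done")             # dead code: unreachable after 'return'
```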

Ch 4 Test design techniques

1.What is a test condition?


An item or event that could be verified by one or more test cases

2.What is the document associated with a test condition?


Test Design Specification.

3.What is a test case?


A set of inputs, execution preconditions, expected outcomes and execution post-conditions developed for a particular
objective / to cover certain test conditions.
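As an illustration, a test case can be sketched as a simple record; the field names here are illustrative, not taken from any standard template:

```python
# A test case sketched as a data record (field names are illustrative).
test_case = {
    "id": "TC-042",
    "objective": "Verify login rejects a wrong password",
    "preconditions": ["user 'alice' exists", "system is reachable"],
    "inputs": {"username": "alice", "password": "wrong"},
    "expected_outcome": "login refused with an error message",
    "postconditions": ["account is not locked after one attempt"],
}
```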

4.What is the document associated with a test case?


Test Case Specification.

5.What is a test procedure?


Detailed instructions for the execution of one or more test cases
The test procedure (or manual test script) specifies the sequence of actions for the execution of a test.
If tests are run using a test execution tool, the sequence of actions is specified in a test script (which is an automated
test procedure).

6.What is the document associated with a test procedure?


Test Procedure Specification.

7.Why is traceability important?


It enables Impact Analysis when requirements change and requirements coverage to be determined.

8.Why are expected results important?


If expected results have not been defined then a plausible, but erroneous, result may be interpreted as the correct one.
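A miniature illustration, assuming a hypothetical (and defective) temperature conversion: without a defined expected result, the output looks plausible and might be accepted; with the expected result stated up front, the failure is caught.

```python
def fahrenheit(celsius):
    return celsius * 9 / 5 + 30   # defect: should add 32, not 30

value = fahrenheit(100)   # returns 210.0 – plausible, but wrong

# With the expected result defined in advance, the defect is detected:
expected = 212.0
print(value == expected)  # False: the plausible result is exposed as erroneous
```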

9.What is a test execution schedule?


It defines the order in which the various test procedures, and possibly automated test scripts, are executed, when they
are to be carried out and by whom.

10.Describe Black Box techniques?


Black-box techniques (also called specification-based techniques) derive and select test conditions or test cases based on
an analysis of the documentation, whether functional or non-functional, for a component or system without reference
to its internal structure.

11.Name the Black Box techniques discussed on the course?


Equivalence Partitioning (EP), Boundary Value Analysis (BVA), Decision Tables, State Transition, Use Case
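EP and BVA can be sketched for a hypothetical rule that ages 18 to 65 inclusive are valid:

```python
# Hypothetical rule under test: ages 18..65 are valid.
def is_valid_age(age):
    return 18 <= age <= 65

# Equivalence Partitioning: one representative value per partition.
ep_tests = {30: True,    # valid partition
            10: False,   # invalid partition: below range
            70: False}   # invalid partition: above range

# Boundary Value Analysis: values on and either side of each boundary.
bva_tests = {17: False, 18: True, 19: True,
             64: True, 65: True, 66: False}

for age, expected in {**ep_tests, **bva_tests}.items():
    assert is_valid_age(age) == expected
```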

12.What are the two Experience based techniques discussed on the course?
Error guessing & Exploratory Testing

13.Describe White Box techniques?

White-box techniques (also called structural or structure-based techniques) are based on an analysis of the internal
structure of the component or system.

14.What is meant by Coverage?


Coverage – the amount of a given item (statements, decisions, requirements, etc.) that has been exercised by the tests.

15.Name the White Box techniques discussed in the course?


Statement and Decision testing
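The difference between the two can be shown on a hypothetical one-decision function: a single test can execute every statement, yet decision coverage still requires a second test for the untaken branch.

```python
def apply_bonus(score, passed):
    if passed:
        score = score + 10
    return score

# One test executes every statement (100% statement coverage):
assert apply_bonus(50, True) == 60

# But the False outcome of the decision was never exercised.
# Decision coverage needs a second test for the other branch:
assert apply_bonus(50, False) == 50
```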

Ch 5 Test management

1.What is the benefit of independent testing?


Independent testers see other and different defects and are unbiased.
Can verify assumptions made during system development.

2.What are the drawbacks of independent testers?


Isolated from development
May be bottleneck
Developers lose sense of responsibility for code quality

3.Describe the difference in role between a test leader and a tester?


The test leader is the test manager; the tester is the test analyst / test executor.

4.What is meant by test planning?


The activity of verifying the mission of testing,
Defining the objectives of testing
Specifying test activities

5.What is IEEE 829?
Standard for Software Test Documentation.

6.What is the difference between metrics-based estimation and expert-based estimation?


Metrics – based on information from previous projects or based on similar values
Experts – SME or task owner as to how long things will take to do

7.What testing tasks (preparation & execution) need planning?


Preparation
Identifying test conditions
Identifying and creating tests, test cases and test procedures
Specification of test environment
Specification and creation of test data

Execution
Preparation of test data
Preparation / verification of test environment
Execution of test cases
Recording of test results

Reporting incidents
Analysing incidents and test results
Creating post execution reports

8.Describe some useful exit criteria for testing?


Coverage
Defect density
Reliability measures
Cost
Residual risks (defects not fixed, lack of coverage)
Schedules (time to market)

9.What is a test strategy / test approach?


Defines the overall approach to testing

10.Name some test strategies?


Point in time – preventative or reactive
Analytical (risk based)
Model-based (inc. usage)
Methodical (error guessing, quality characteristics)
Process or standard compliant
Dynamic and heuristic (exploratory testing)
Consultative (coverage is driven by advice & guidance from SMEs)
Regression-averse

11.What is the difference between a preventative approach and a reactive approach?


Preventative – tests are designed as early as possible
Reactive – test design comes after the system has been produced

12.What should we consider when selecting a test approach?


Risk of failure
Skills & experience
Objective of the testing endeavour
Mission of the test team
Regulatory aspects
Nature of the product and business

13.What is test progress monitoring?


Provides feedback and visibility about test activities.
Measure status against exit criteria.

14.Identify common metrics for monitoring test preparation and execution?


Percentage of work done – test case preparation, test data preparation, test environment preparation.
Test case execution
Defect information (density, found & fixed, failure rate, re-test results)
Coverage of requirements, risks, code.
Confidence of testers in the product
Dates of test milestones
Test cost (inc. cost of finding next defect and benefit of finding next defect)

15.What is the purpose of a Test Summary Report?


Summarise the results of the designated testing activities and to provide evaluations based on these results.

16.What is test control and give examples?


Any guiding or corrective actions taken as a result of information and metrics gathered and reported.
Re-prioritise tests, change the test schedule, set an entry criterion (developers must re-test fixes before test accepts
them).

Configuration management

17.What is configuration management?


Control of all items relating to a system throughout its lifecycle. Seeks to establish and maintain the integrity of products.

18.How can CM affect testing?


All testware is identified, version controlled, tracked for changes, and related to each other (and to development items) for traceability.
All identified documents and software items are referenced unambiguously in test documentation.

19.Define Risk?
The chance of an event, hazard, threat or situation occurring and its undesirable consequences, a potential problem.

The level of risk will be determined by the likelihood of an adverse event happening and the impact (the harm resulting
from that event).

20.How can we calculate Risk?


Risk = Impact x Probability
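Using hypothetical 1–5 scales for impact and probability, the formula gives a simple way to rank product risks for test prioritisation:

```python
# Hypothetical 1..5 scales for impact and probability.
def risk_level(impact, probability):
    return impact * probability

risks = {
    "payment fails":  risk_level(impact=5, probability=2),  # 10
    "typo in footer": risk_level(impact=1, probability=4),  # 4
}

# Higher score = higher risk = test first.
ordered = sorted(risks, key=risks.get, reverse=True)
print(ordered)   # ['payment fails', 'typo in footer']
```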

21.Define Project Risks?
Project risks are the risks that surround the project’s capability to deliver its objectives, such as:
Organisational factors:
skill and staff shortages;
personnel and training issues;
political issues, such as problems with testers communicating their needs and test results, and failure to follow up on information found in testing and reviews (e.g. not improving development and testing practices);
improper attitude toward or expectations of testing (e.g. not appreciating the value of finding defects during testing).
Technical issues:
problems in defining the right requirements;
the extent to which requirements can be met given existing constraints;
the quality of the design, code and tests.
Supplier issues:
failure of a third party;
contractual issues.

22.Define Product risks?


Potential failure areas (adverse future events or hazards) in the software or system are known as product risks, as they
are a risk to the quality of the product, such as:
Failure-prone software delivered.
The potential that the software/hardware could cause harm to an individual or company.
Poor software characteristics (e.g. functionality, reliability, usability and performance).
Software that does not perform its intended functions.

23.What is Incident management?

The recording and control of incidents – anything observed during testing that requires investigation, or that stops or slows testing down.

24.How should we track incidents?


From discovery and classification to correction and confirmation of the solution. In order to manage all incidents to
completion, an organisation should establish a process and rules for classification.


Ch 6 Tools

1-Name some characteristics of test management tools?


Support for the management of tests and the testing activities carried out.
Interfaces to other testing tools.
May offer independent version control.
Support for traceability.
Logging of test results and generation of progress reports.
Metrics.

2-What are the main considerations in selecting a tool for an organization?


Assessment of organisational maturity.
Evaluation against clear requirements and objective criteria.
A proof-of-concept to test the required functionality and determine whether the product meets its objectives.
Evaluation of the vendor (including training, support and commercial aspects).
Identification of internal requirements for coaching and mentoring in the use of the tool.

3-What are the objectives for a proof of concept?


Learn more detail about the tool.
Evaluate how the tool fits with existing processes and practices, and determine what would need to change.
Decide on standard ways of using, managing, storing and maintaining the tool and the test assets (e.g. deciding on naming conventions for files and tests, creating libraries and defining the modularity of test suites).
Assess whether the benefits will be achieved at reasonable cost.

4-Name the main types of testing tool?


Test management tools
Requirements management tools
Incident management tools (also known as defect tracking tools)
Configuration management tools
Review tools
Static analysis tools (D)
Modelling tools (D)
Test design tools
Test data preparation tools
Test execution tools
Test harness / unit test framework tools (D)
Test comparators
Coverage measurement tools (D)
Security tools
Dynamic analysis tools (D)
Performance testing / load testing / stress testing tools
Monitoring tools
