Module 1 - Part B1
Software testing is performed to check whether the actual software product matches the expected requirements and to ensure that the software product is defect-free.
Black Box Testing
In black box testing, the functionality of a software application is tested without any knowledge of its internal code structure, implementation details, or internal paths.
• The tester gives input values to exercise a functionality and checks whether the function produces the expected output or not.
• If the function produces the correct output, it passes the test; otherwise it fails.
The main types of black box testing are:
1. Functional Testing
2. Regression Testing
3. Nonfunctional Testing
Functional Testing
• Functional testing is a type of testing that verifies that each function of the software application works in conformance with the requirements and specifications.
• This testing is not concerned with the source code of the application. Each functionality of the
software application is tested by providing appropriate test input, expecting the output, and
comparing the actual output with the expected output.
• This testing focuses on checking the user interface, APIs, database, security, client and server applications, and the overall functionality of the Application Under Test. Functional testing can be manual or automated, and it verifies the system against its functional requirements (a minimal code sketch follows below).
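As an illustration of the input/expected-output comparison described above, here is a minimal sketch using Python's unittest framework. The transfer_funds function is a hypothetical stand-in for a real application function; the assertions compare the actual output with the expected output.

```python
import unittest

def transfer_funds(balance, amount):
    """Hypothetical application function: deduct amount from balance."""
    if amount <= 0:
        raise ValueError("Transfer amount must be positive")
    if amount > balance:
        raise ValueError("Insufficient funds")
    return balance - amount

class FunctionalTests(unittest.TestCase):
    def test_valid_transfer(self):
        # Provide a test input and compare the actual output with the expected output.
        self.assertEqual(transfer_funds(100.0, 40.0), 60.0)

    def test_insufficient_funds(self):
        # The function is expected to reject a transfer larger than the balance.
        with self.assertRaises(ValueError):
            transfer_funds(50.0, 75.0)

if __name__ == "__main__":
    unittest.main()
```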
Regression Testing
• Regression Testing is the process of testing the modified parts of the code and the parts that
might get affected due to the modifications to ensure that no new errors have been introduced
in the software after the modifications have been made.
• Regression means the return of something and in the software field, it refers to the return of
a bug. It ensures that the newly added code is compatible with the existing code.
• In other words, it verifies that a new software update has no adverse impact on the existing functionality of the software. It is carried out after system maintenance operations and upgrades.
Nonfunctional Testing
• It is designed to test the readiness of a system as per nonfunctional parameters which are
never addressed by functional testing.
• Non-functional testing is also known as NFT. This testing is not functional testing of software.
It focuses on the software’s performance, usability, and scalability.
Advantages and Limitations of Black Box Testing
• The tester does not need functional knowledge of the implementation or programming skills to perform black box testing.
• There is a possibility of repeating the same tests while implementing the testing process.
• It can be difficult to execute test cases because of complex inputs at different stages of testing.
• Working with a large sample space of inputs can be exhaustive and consume a lot of time.
Black Box Techniques
Random Testing
Random testing is a black box testing technique in which the system is tested by generating random, independent inputs and test cases. Random testing is also called monkey testing. Tests are chosen at random, and the results are compared against the software specification to determine whether each output is correct or incorrect.
Working of Random Testing (a minimal code sketch of these steps is given after the lists below):
1. The input domain of the system is identified.
2. Test inputs are chosen independently from that domain.
3. The system is tested on these inputs, which together form a random test set.
4. The outcomes are compared with the system specification. The test fails if any input produces an outcome that does not match the specification; otherwise it passes.
Advantages:
• It is less costly and does not require extra knowledge of the program being tested.
• It does not need any special intelligence to assess the program during the tests.
• Errors can be traced easily; bugs can be detected throughout the testing process.
• It is free of bias: inputs are grouped evenly, and the same errors are not checked repeatedly, since the code may change during the testing process.
• It saves time and does not need any extra effort.
Disadvantages:
• Some tests are not practical and may remain unused for a long time.
• New tests cannot be formed if the required data is not available during testing.
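The sketch below illustrates the four steps with a deliberately simple, hypothetical system under test (an addition function) and its specification used as the oracle; in practice the system would be the real application and the oracle its requirements.

```python
import random

def system_under_test(a, b):
    """Hypothetical function being tested (treated as a black box)."""
    return a + b

def specification(a, b):
    """Oracle: the expected behaviour according to the specification."""
    return a + b

# Step 1: identify the input domain (here, integers from -1000 to 1000).
DOMAIN = range(-1000, 1001)

# Step 2: choose test inputs independently at random from the domain.
random_test_set = [(random.choice(DOMAIN), random.choice(DOMAIN)) for _ in range(100)]

# Steps 3 and 4: run the system on each input and compare the outcome with the specification.
failures = []
for a, b in random_test_set:
    if system_under_test(a, b) != specification(a, b):
        failures.append((a, b))

print("FAIL" if failures else "PASS", f"({len(random_test_set)} random inputs)")
```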
Example:
Scenario:
You are testing a new mobile banking application. The app includes several features such as viewing
account balances, transferring funds, paying bills, and checking transaction history. To ensure
robustness, you decide to use random testing to identify any potential issues that might not be
captured by other testing methods.
Question:
Design test cases using Random Testing to validate the functionality and stability of the mobile
banking application. Specify the expected results for each test case.
Answer:
Test Cases:
1. Test Case 1: Random Transfer Amount
o Input: Transfer a randomly selected amount (e.g., $42.73) from one account to
another.
o Expected Result: The transfer should be processed correctly, with the appropriate
amount deducted from the source account and added to the destination account.
The transaction should appear in both accounts' transaction history.
2. Test Case 2: Account Balance Inquiry
o Input: Check the balance of a randomly selected account (e.g., Account #2315).
o Expected Result: The balance displayed should be accurate and match the expected
amount based on recent transactions. The app should display the balance correctly
without any discrepancies.
3. Test Case 3: Bill Payment
o Input: Pay a randomly selected bill (e.g., electricity bill) for a randomly selected
amount (e.g., $75.50).
o Expected Result: The bill payment should be processed successfully, with the
amount deducted from the account. The payment confirmation should be displayed,
and the bill should be marked as paid in the app's bill payment section.
4. Test Case 4: Random Transaction History Check
o Input: View transaction history for a randomly selected date range (e.g., from March
1 to March 7).
o Expected Result: The transaction history should display all transactions accurately
for the selected date range. The details of each transaction should be correct and
reflect the activity within that period.
5. Test Case 5: Random App Navigation
o Input: Navigate randomly through different sections of the app (e.g., from account
overview to transaction history to bill payments).
o Expected Result: The app should navigate smoothly between sections without errors
or crashes. All features and options should function correctly as the user navigates
through the app.
6. Test Case 6: Random User Interaction
o Input: Perform a series of random actions within the app (e.g., login, logout, change
account settings, and log in again).
o Expected Result: The app should handle each action correctly, without any
unexpected behavior or errors. The login and logout processes should work
smoothly, and changes in account settings should be saved and reflected.
7. Test Case 7: Random Error Handling
o Input: Simulate random error scenarios (e.g., network failure during a fund transfer).
o Expected Result: The app should display appropriate error messages and handle the
errors gracefully. It should provide options to retry or cancel the operation and
ensure that no inconsistent state is left.
Explanation:
Random Testing involves executing random actions or inputs to discover potential issues that might
not be identified through structured testing methods. In this scenario:
• Random Transfer Amount, Account Balance Inquiry, Bill Payment: Tests various functions
with randomly chosen inputs to ensure correct processing and accuracy.
• Random Transaction History Check: Ensures that historical data is accurately displayed for a
random date range.
• Random App Navigation and User Interaction: Validates the app’s stability and correct
behavior through random navigation and actions.
• Random Error Handling: Checks how the app handles unexpected error scenarios.
By performing these random tests, you can uncover issues that might not be revealed through other
testing strategies, contributing to a more robust and reliable application.
Equivalence Class Partitioning
Instead of testing each and every input value, any one representative value from each group (equivalence class) is used to test the outcome.
Scenario:
You are testing an online registration form that includes a field for users to enter their age. The age
must be between 18 and 65 inclusive. You need to design test cases to validate this age input field
using Equivalence Class Partitioning.
Question:
Design test cases using Equivalence Class Partitioning to ensure that the age input field is correctly
validated. Specify the expected results for each test case.
Answer:
Equivalence Classes:
1. Valid age class: 18 to 65 (inclusive).
2. Invalid age class: below minimum (less than 18).
3. Invalid age class: above maximum (greater than 65).
Test Cases:
1. Test Case 1: Valid Age Within Range
o Input: 25
o Expected Result: The system should accept the input and proceed with the
registration.
2. Test Case 2: Age Just Below Minimum
o Input: 17
o Expected Result: The system should reject the input and display an error message
indicating that the age must be between 18 and 65.
3. Test Case 3: Age Just Above Maximum
o Input: 66
o Expected Result: The system should reject the input and display an error message
indicating that the age must be between 18 and 65.
Explanation:
Equivalence Class Partitioning divides the input data into classes where all values within a class are
expected to be treated the same way. For this scenario:
• Valid Age Class (18-65): Includes any age that falls within this range. Only one test case is
needed from this class to confirm that the system accepts valid inputs.
• Invalid Age Class - Below Minimum (<18): Includes all ages less than 18. Testing with a value
just below the minimum boundary (17) is sufficient to ensure that inputs in this range are
rejected.
• Invalid Age Class - Above Maximum (>65): Includes all ages greater than 65. Testing with a
value just above the maximum boundary (66) is sufficient to ensure that inputs in this range
are rejected.
By selecting representative test cases from each class, you effectively validate that the application
handles all possible input scenarios within the defined boundaries.
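A minimal sketch of this approach, assuming a hypothetical validate_age function that implements the 18-65 rule; one representative value is tested from each equivalence class.

```python
def validate_age(age):
    """Hypothetical form validator: accept ages 18 to 65 inclusive."""
    return 18 <= age <= 65

# One representative value per equivalence class.
equivalence_classes = [
    (25, True),   # valid class: 18-65
    (17, False),  # invalid class: below minimum (< 18)
    (66, False),  # invalid class: above maximum (> 65)
]

for age, expected in equivalence_classes:
    actual = validate_age(age)
    print(f"{'PASS' if actual == expected else 'FAIL'}: age={age}, expected accept={expected}")
```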
Boundary Value Analysis
Boundary value analysis tests whether the software produces the correct output when boundary values are entered.
Scenario:
You are testing an online registration form for a website. The form includes a field for users to enter
their age. The age must be between 18 and 65 inclusive. You need to design test cases to validate
this age input field using Boundary Value Analysis.
Question:
Design test cases using Boundary Value Analysis to ensure that the age input field is correctly
validated. Specify the expected results for each test case.
Answer:
Test Cases:
1. Test Case 1: Just Below Lower Boundary
o Input: 17
o Expected Result: The system should reject the input and display an error message
indicating that the age must be between 18 and 65.
2. Test Case 2: At Lower Boundary
o Input: 18
o Expected Result: The system should accept the input and proceed with the
registration.
3. Test Case 3: Just Above Lower Boundary
o Input: 19
o Expected Result: The system should accept the input and proceed with the
registration.
4. Test Case 4: Just Below Upper Boundary
o Input: 64
o Expected Result: The system should accept the input and proceed with the
registration.
5. Test Case 5: At Upper Boundary
o Input: 65
o Expected Result: The system should accept the input and proceed with the
registration.
6. Test Case 6: Just Above Upper Boundary
o Input: 66
o Expected Result: The system should reject the input and display an error message
indicating that the age must be between 18 and 65.
Explanation:
Boundary Value Analysis focuses on testing values at the edges of input ranges. In this case, the
boundaries are 18 and 65. The test cases include values just below the lower boundary (17), at the
lower boundary (18), just above the lower boundary (19), just below the upper boundary (64), at
the upper boundary (65), and just above the upper boundary (66). This approach helps ensure that
the age validation logic handles boundary conditions correctly.
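A minimal sketch of the six boundary-value test cases above, reusing the same hypothetical validate_age rule (18-65 inclusive):

```python
def validate_age(age):
    """Hypothetical form validator: accept ages 18 to 65 inclusive."""
    return 18 <= age <= 65

# Values just below, at, and just above each boundary of the valid range.
boundary_cases = [
    (17, False),  # just below the lower boundary
    (18, True),   # at the lower boundary
    (19, True),   # just above the lower boundary
    (64, True),   # just below the upper boundary
    (65, True),   # at the upper boundary
    (66, False),  # just above the upper boundary
]

for age, expected in boundary_cases:
    actual = validate_age(age)
    print(f"{'PASS' if actual == expected else 'FAIL'}: age={age}")
```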
Decision Table Testing
Various input combinations and their respective system behaviours are captured in tabular form.
It checks the logical relationship between two or more inputs (e.g., a Gmail account login).
Scenario-Based Question: Decision Table Testing for Gmail Login and Password
Scenario:
You are testing the Gmail login system. The system behavior is based on two main conditions:
If both the email and password are correct, login should be successful.
If either the email or the password is incorrect, login should fail with an appropriate error message.
Question: Create a decision table based on the login behavior for Gmail and derive test cases to ensure
the system works as expected. Specify the expected results for each test case.
Decision Table:
Rule | Email correct? | Password correct? | Expected outcome
R1   | Yes            | Yes               | Login successful
R2   | Yes            | No                | Login fails with a wrong-password error
R3   | No             | Yes               | Login fails with an account-not-found error
R4   | No             | No                | Login fails with an account-not-found error
Test Cases:
Test Case 1: Valid Email and Correct Password
Input: Enter correct email (e.g., "[email protected]") and correct password (e.g., "Password123").
Expected Result: The login should be successful. The user should be taken to the Gmail inbox.
Test Case 2: Valid Email and Incorrect Password
Input: Enter correct email (e.g., "[email protected]") and incorrect password (e.g.,
"WrongPass123").
Expected Result: The login should fail with an error message stating, "Wrong password. Try again or
click 'Forgot password' to reset it."
Test Case 3: Invalid Email and Correct Password
Input: Enter incorrect email (e.g., "[email protected]") and correct password (e.g., "Password123").
Expected Result: The login should fail with an error message stating, "Couldn't find your Google
Account."
Test Case 4: Invalid Email and Incorrect Password
Input: Enter incorrect email (e.g., "[email protected]") and incorrect password (e.g.,
"FakePass123").
Expected Result: The login should fail with an error message stating, "Couldn't find your Google
Account." The system should not distinguish between incorrect email and password for security
reasons.
Explanation:
Decision Table Testing helps to identify all possible combinations of conditions (in this case, correct or
incorrect email and password) and their expected outcomes.
Each row in the decision table represents a unique combination of the conditions (correct or incorrect
email and password) and their respective results (successful or failed login).
The test cases ensure that the system handles each scenario correctly, covering both successful login
and various failure scenarios with appropriate error messages.
By using decision table testing, you can comprehensively verify the Gmail login functionality, ensuring
that the system behaves as expected for all possible input combinations.
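A minimal table-driven sketch of the four rules above. The login function, the stored credentials, and the example.com addresses are hypothetical stand-ins for the real authentication service; only the structure of the decision table is the point here.

```python
# Hypothetical stored credentials for the account under test.
REGISTERED = {"user@example.com": "Password123"}

def login(email, password):
    """Hypothetical login check returning a (success, message) pair."""
    if email not in REGISTERED:
        return False, "Couldn't find your Google Account."
    if REGISTERED[email] != password:
        return False, "Wrong password. Try again or click 'Forgot password' to reset it."
    return True, "Login successful."

# Decision table rules: (email, password, expected success flag).
decision_table = [
    ("user@example.com",  "Password123",  True),   # rule 1: correct email, correct password
    ("user@example.com",  "WrongPass123", False),  # rule 2: correct email, wrong password
    ("wrong@example.com", "Password123",  False),  # rule 3: wrong email, correct password
    ("wrong@example.com", "FakePass123",  False),  # rule 4: wrong email, wrong password
]

for email, password, expected in decision_table:
    success, message = login(email, password)
    print(f"{'PASS' if success == expected else 'FAIL'}: {email} / {password} -> {message}")
```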
Error Guessing
• Error guessing is based on the experience of the tester, who uses that experience to guess the problematic areas of the software.
• Examples: division by zero, handling null values in text fields, pressing the submit button without entering any value, file upload without an attachment, and file upload with a size less than or greater than the allowed limit.
Scenario:
You are testing an online form for a travel booking website where users can book flights. The
form has the following fields:
Full Name: Must contain only alphabetic characters.
Email Address: Must be a valid email address.
Travel Dates: Must be a future date in the format MM/DD/YYYY.
Number of Passengers: Must be a positive integer, with a maximum of 9 passengers.
Credit Card Number: Required and must be exactly 16 digits long.
You are tasked with using error guessing to identify potential errors users might make when
filling out this form, based on common mistakes you anticipate.
Question: Identify possible test cases using the Error Guessing technique to test the online
form. List the errors you would guess and the expected outcomes for each test case.
Test Case 1: Invalid Characters in Full Name
Input: Enter a name containing digits (e.g., "John123") in the "Full Name" field.
Expected Result: The system should display an error message stating that the name must
contain only alphabetic characters. The form should prevent submission.
Test Case 2: Invalid Email Format
Input: Enter an email address without an "@" symbol (e.g., "johndoe.com") in the "Email Address" field.
Expected Result: The system should display an error message indicating that the email
address is invalid. The form should prevent submission.
Test Case 3: Travel Date in the Past
Input: Enter a travel date of "January 15, 2023" (assuming today’s date is September 2024).
Expected Result: The system should display an error message stating that the travel date
must be in the future. The form should prevent submission.
Test Case 4: Negative Number of Passengers
Input: Enter "-2" in the "Number of Passengers" field.
Expected Result: The system should display an error message stating that the number of
passengers must be a positive integer. The form should prevent submission.
Test Case 5: Number of Passengers Above the Limit
Input: Enter a value greater than the allowed maximum (e.g., 10) in the "Number of Passengers" field.
Expected Result: The system should display an error message indicating the maximum
allowed number of passengers is 9. The form should prevent submission.
Test Case 6: Missing Credit Card Number
Input: Leave the "Credit Card Number" field blank and attempt to submit the form.
Expected Result: The system should display an error message indicating that the credit card
number is required. The form should prevent submission.
Input: Enter "12345678901234" (14 digits) in the "Credit Card Number" field.
Expected Result: The system should display an error message indicating that the credit card
number must be exactly 16 digits long. The form should prevent submission.
Input: Enter "15/2024/09" (incorrect date format) in the "Travel Dates" field.
Expected Result: The system should display an error message indicating the correct date
format (e.g., MM/DD/YYYY). The form should prevent submission.
Input: Enter "John123" in the "Full Name" field, "johndoe.com" as the email address, and "-
2" as the number of passengers.
Expected Result: The system should display all relevant error messages (for the invalid name,
email format, and negative passenger count) and prevent the form from being submitted.
Explanation:
Error Guessing relies on the tester’s experience and intuition to guess common mistakes that
users might make. In this case, the errors include invalid input formats, missing mandatory
fields, and values that violate the constraints.
The test cases ensure that the system properly handles invalid inputs and provides useful
error messages, preventing incorrect data from being submitted.
By anticipating common mistakes and testing these potential errors, you can identify
weaknesses in the form validation logic and improve the overall robustness of the system.
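A minimal sketch of how a few error-guessed inputs could be run against the form logic, assuming a hypothetical validate_form function that covers the name, email, and passenger-count rules described above.

```python
import re

def validate_form(full_name, email, passengers):
    """Hypothetical validator for a subset of the booking-form fields."""
    errors = []
    if not full_name.isalpha():
        errors.append("Full name must contain only alphabetic characters.")
    if not re.match(r"^[^@\s]+@[^@\s]+\.[^@\s]+$", email):
        errors.append("Email address is invalid.")
    if not isinstance(passengers, int) or passengers < 1:
        errors.append("Number of passengers must be a positive integer.")
    elif passengers > 9:
        errors.append("Maximum allowed number of passengers is 9.")
    return errors

# Error-guessed inputs: values a tester anticipates real users will get wrong.
guessed_cases = [
    ("John123", "john@example.com", 2),   # digits in the name
    ("John",    "johndoe.com",      2),   # missing '@' in the email address
    ("John",    "john@example.com", -2),  # negative passenger count
    ("John",    "john@example.com", 10),  # more passengers than allowed
]

for name, email, passengers in guessed_cases:
    errors = validate_form(name, email, passengers)
    print(f"{name!r}, {email!r}, {passengers}: {errors or 'no errors'}")
```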
State Transition Testing
• State transition testing verifies that the system moves between its defined states correctly in response to inputs or events.
• It applies, for example, to applications that allow only a specific number of attempts to access the application.
Scenario-Based Question: State Transition Testing
Scenario:
You are testing the order management system for an e-commerce platform. The system has
an order lifecycle with the following states:
1. State 1: Order Placed
2. State 2: Payment Processed
3. State 3: Shipped
4. State 4: Delivered
5. State 5: Canceled
The allowed transitions between states are:
1. From Order Placed to Payment Processed (after the payment is made).
2. From Payment Processed to Shipped (after the order is packed and shipped).
3. From Shipped to Delivered (after the order reaches the customer).
4. From Order Placed to Canceled (if the order is canceled before payment).
5. From Payment Processed to Canceled (if the order is canceled after payment but before
shipping).
6. Orders cannot be transitioned back from Shipped, Delivered, or Canceled to any previous
state.
Question:
Design test cases using State Transition Testing to validate the transitions between these
states. Specify the expected results for each test case.
Test Cases:
1. Test Case 1: Order Placed → Payment Processed
o Input: Place an order and complete the payment.
o Expected Result: The order state should transition to Payment Processed. Verify that
the payment is recorded, and the system reflects the new state.
2. Test Case 2: Payment Processed → Shipped
o Input: After the payment is processed, pack and ship the order.
o Expected Result: The order state should transition to Shipped. Verify that the
shipping information is recorded, and the system reflects the new state.
3. Test Case 3: Shipped → Delivered
o Input: After the order is shipped, mark it as delivered.
o Expected Result: The order state should transition to Delivered. Verify that the
delivery information is recorded, and the system reflects the new state.
4. Test Case 4: Order Placed → Canceled
o Input: Place an order and then cancel it before processing the payment.
o Expected Result: The order state should transition to Canceled. Verify that the
cancellation is recorded, and no further actions (like payment or shipping) are
possible.
5. Test Case 5: Payment Processed → Canceled
o Input: After processing the payment, cancel the order before shipping.
o Expected Result: The order state should transition to Canceled. Verify that the
cancellation is recorded and no further actions (like shipping or delivery) are
possible.
6. Test Case 6: Invalid Transition From Canceled
o Input: Attempt to move a canceled order back to a previous state (for example, try to ship an order that has already been canceled).
o Expected Result: The system should prevent this invalid transition and display an
appropriate error message indicating that transitions from Canceled to previous
states are not allowed.
7. Test Case 7: Full Order Lifecycle
o Input: Place an order, process payment, mark it as shipped, and finally, deliver it.
o Expected Result: The order should transition through each state (Order Placed →
Payment Processed → Shipped → Delivered) correctly, and each state change should
be reflected in the system.
Explanation:
• State Transition Testing involves validating that the system correctly transitions between
states based on predefined rules.
• Each test case ensures that the system correctly handles valid transitions and prevents
invalid ones.
• Testing transitions helps verify that the state management logic in the system is functioning
correctly and that invalid state changes are appropriately handled.
By using state transition testing, you can ensure that the order management system adheres
to the defined state transitions and accurately reflects the order lifecycle.
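A minimal sketch of the order lifecycle as a state machine with the allowed transitions listed above; the Order class and the way transitions are triggered are hypothetical, but the transition rules follow the scenario.

```python
# Allowed transitions: current state -> states it may move to.
ALLOWED_TRANSITIONS = {
    "Order Placed":      {"Payment Processed", "Canceled"},
    "Payment Processed": {"Shipped", "Canceled"},
    "Shipped":           {"Delivered"},
    "Delivered":         set(),  # terminal state
    "Canceled":          set(),  # terminal state
}

class Order:
    """Hypothetical order whose state may only change along allowed transitions."""
    def __init__(self):
        self.state = "Order Placed"

    def transition(self, new_state):
        if new_state not in ALLOWED_TRANSITIONS[self.state]:
            raise ValueError(f"Invalid transition: {self.state} -> {new_state}")
        self.state = new_state

# Test Case 7 (valid path): Order Placed -> Payment Processed -> Shipped -> Delivered.
order = Order()
for state in ["Payment Processed", "Shipped", "Delivered"]:
    order.transition(state)
print("Full lifecycle reached:", order.state)

# Test Case 6 (invalid transition): a canceled order cannot be shipped.
canceled = Order()
canceled.transition("Canceled")
try:
    canceled.transition("Shipped")
except ValueError as err:
    print("Rejected as expected:", err)
```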