🔹 1. What is a Test Plan?
“A Test Plan is a document that describes the overall strategy for testing. It defines
what features will be tested, testing types, test environment, timelines, tools, and roles
involved. I’ve seen it prepared by Test Leads to align QA with project goals.”
🔹 2. Can you explain the Bug Life Cycle?
“Sure. Once a tester finds a bug, it goes through these stages:
New → Assigned → Open
Fixed → Retest
If not fixed: Reopen
If fixed: Verified → Closed
We use tools like JIRA for this. In my project, we also attached screenshots, logs, and
test data.”
🔹 3. Smoke vs Sanity Testing?
“Smoke testing checks whether the build is stable. It's broad and shallow — like
login, homepage load, etc.
Sanity is narrow and deep — we do it after a bug fix to ensure specific functionality
works before full testing.”
🔹 4. Regression vs Retesting?
“Retesting means testing a defect that was fixed.
Regression ensures that fixing one part hasn’t broken other parts. Retesting is
focused; regression is wide.”
🔹 5. Functional vs Regression Testing?
“Functional testing verifies that a feature works as per the requirement.
Regression testing ensures existing functionalities remain unaffected after changes.
Functional testing is feature-based; regression is change-impact-based.”
🔹 6. Severity vs Priority?
“Severity is the technical impact; priority is how urgently it should be fixed.
High severity + low priority: an app crash in a rarely used module.
Low severity + high priority: a spelling mistake on the home page.”
🔹 7. Difference between Test Plan and Test Strategy?
“Test Plan is project-specific and dynamic, created by QA Leads.
Test Strategy is at the org-level, static, and defines overall test approach and
standards.”
🔹 8. Boundary Value Analysis vs Equivalence Partitioning
“BVA tests at the edges of input range (like age = 18, 60).
Equivalence Partitioning divides input into valid and invalid groups and picks one
value from each group.”
🔹 9. White Box vs Black Box Testing?
“White box is code-level testing, done by developers.
Black box is functionality-based, done by testers without seeing the code.”
🔹 10. You have 1000 test cases. Which do you automate?
“I would automate high-priority, frequently executed, stable test cases — like login,
checkout, core API flows. We focus on regression and smoke for automation.”
🔹 11. Can a method return 2+ values?
“Yes. In Java, we can return multiple values by wrapping them in a POJO, a Map, or an array.”
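As a minimal sketch of both options (class, field, and credential values here are illustrative, not from any real project):

```java
import java.util.HashMap;
import java.util.Map;

public class MultiReturnDemo {

    // Option 1: a small POJO that bundles two values into one return object
    static class LoginResult {
        final boolean success;
        final String message;

        LoginResult(boolean success, String message) {
            this.success = success;
            this.message = message;
        }
    }

    static LoginResult login(String user, String password) {
        boolean ok = "admin".equals(user) && "secret".equals(password);
        return new LoginResult(ok, ok ? "Welcome" : "Invalid credentials");
    }

    // Option 2: a Map keyed by field name (loses compile-time type safety)
    static Map<String, Object> loginAsMap(String user, String password) {
        Map<String, Object> result = new HashMap<>();
        boolean ok = "admin".equals(user) && "secret".equals(password);
        result.put("success", ok);
        result.put("message", ok ? "Welcome" : "Invalid credentials");
        return result;
    }

    public static void main(String[] args) {
        LoginResult r = login("admin", "secret");
        System.out.println(r.success + " / " + r.message);
        System.out.println(loginAsMap("guest", "x").get("success"));
    }
}
```

The POJO is usually preferred because the compiler checks the field types, while the Map approach trades that safety for flexibility.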
🔹 12. What is Bug Triage?
“Bug triage is a meeting where QA, Dev, and PM review all open bugs, set
priorities/severities, assign owners, and decide fix timelines.”
Exploratory Testing – Detailed Explanation
✅ Definition:
Exploratory Testing is an unscripted, simultaneous process of:
Learning the application,
Designing tests on the fly,
Executing them immediately.
It relies on the tester’s domain knowledge, intuition, creativity, and experience
instead of pre-written test cases.
How to explain in interviews:
“Exploratory Testing is a hands-on, informal testing approach where I simultaneously
learn the system, think about what to test, and execute tests on the fly. There are no
predefined test cases. It helps uncover edge cases and real-world bugs that often go
unnoticed during formal testing.”
Ad-hoc Testing – Detailed Explanation
✅ Definition:
Ad-hoc Testing is an unstructured, informal testing technique performed without
any test design techniques or documentation.
How to explain in interviews:
“Ad-hoc Testing is quick and informal. I perform it without test cases, usually based
on my understanding and instincts. It’s useful when time is limited or to validate
small changes quickly. We often do this after formal testing to see if any unexpected
behavior remains.”
When should you say “I use Exploratory / Ad-hoc testing”?
👉 When:
Requirements are not fully available
Time is limited before release
Testing newly integrated or unstable features
UI/UX changes are frequent
15. What is Build Acceptance Testing?
“It’s also called Smoke Testing. It’s done to check if the new build is stable enough
for further testing — like login, page navigation, and basic workflows.”
7. Priority Execution Order −1, 0, 1, 2
“Lower number = higher priority. Execution will be: −1 → 0 → 1 → 2.”
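The ordering rule can be sketched in plain Java (test names are illustrative) by sorting tests on their priority number, the way TestNG orders `@Test(priority = ...)` methods:

```java
import java.util.*;

public class PriorityOrderDemo {

    // Lower priority number runs first, like TestNG's @Test(priority = ...)
    static List<String> executionOrder(Map<String, Integer> tests) {
        List<String> order = new ArrayList<>(tests.keySet());
        order.sort(Comparator.comparingInt(tests::get));
        return order;
    }

    public static void main(String[] args) {
        // test name -> priority, as if each were annotated with @Test(priority = ...)
        Map<String, Integer> tests = new LinkedHashMap<>();
        tests.put("checkout", 2);
        tests.put("login", -1);
        tests.put("search", 1);
        tests.put("homepage", 0);

        System.out.println(executionOrder(tests)); // [login, homepage, search, checkout]
    }
}
```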
21. What is a Deferred Bug?
“A valid bug, but postponed to future release due to low priority or lack of time.”
🔹 22. How do you decide what to automate?
*“I choose tests that are:
Frequently executed
Time-consuming manually
Stable and repeatable
Critical to business”*
🔹 23. When do you stop testing?
*“When:
Test coverage is met
No major bugs
Deadlines reached
Stakeholders give sign-off”*
🔹 24. Suite Takes 1.5 Hrs — How to Reduce?
“I’ll implement parallel execution, tag-based runs (like smoke-only), data-driven
optimization, and CI/CD scheduling via Jenkins.”
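As a hedged sketch, the parallel and tag-based runs mentioned above are typically switched on in TestNG's testng.xml; the suite, group, and class names below are placeholders:

```xml
<!DOCTYPE suite SYSTEM "https://testng.org/testng-1.0.dtd">
<!-- Runs test methods in parallel across 4 threads, restricted to the "smoke" group.
     Class names are placeholders, not from any real project. -->
<suite name="RegressionSuite" parallel="methods" thread-count="4">
  <test name="SmokeOnly">
    <groups>
      <run>
        <include name="smoke"/>
      </run>
    </groups>
    <classes>
      <class name="com.example.tests.LoginTests"/>
      <class name="com.example.tests.CheckoutTests"/>
    </classes>
  </test>
</suite>
```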
🔹 25. Regression Test Suite Execution Time?
“In my last project, we had ~300 automated test cases. Full regression took 60–90
mins. We also had smoke runs taking ~10–15 mins.”
🔹 26. Keyboard Testing via Context Menu (Selenium)?
“Using the Actions class to open the context menu and drive it with keyboard keys:

Actions a = new Actions(driver);
a.contextClick(element).sendKeys(Keys.ARROW_DOWN).sendKeys(Keys.ENTER).perform();”
🔹 27. What do you log in a Bug?
*“I include:
Steps to reproduce
Expected vs actual result
Screenshots or logs
Severity/Priority
Environment info”*
🔹 28. Achievement in Automation?
“Reduced regression execution time from 2 hrs to 30 mins by creating modular
reusable components and running tests in parallel on Jenkins.”
🔹 29. Difference between BDD and TDD?
BDD: focuses on behavior; written in Gherkin syntax; tools like Cucumber, SpecFlow.
TDD: focuses on implementation; unit tests are written first; tools like JUnit, TestNG.
🔹 30. Steps to Create Test Plan?
“Identify scope → Define test strategy → Estimate effort → Assign roles → Set
entry/exit criteria → List deliverables → Approve plan”
🔹 31. Inbound vs Outbound Testing
Inbound: data/API entering the app. Example: payment received.
Outbound: data/API leaving the app. Example: notification sent.
🔹 32. What is the next step if dev rejects your defect?
“Reproduce it with proper steps, attach evidence (logs/screenshots), and if still
rejected — escalate to QA lead or BA.”
🔹 33. What are Test Metrics?
*“I tracked:
Test execution %
Pass/fail ratio
Defect leakage
Automation coverage
Test case effectiveness”*
🔹 34. How do you prioritize tests?
*“Based on:
Critical user flows
Business impact
Historical defect areas
Usage frequency”*
🔹 35. How do you do automation code review?
*“We use Git for pull requests. I check for:
Hardcoded values
Wait usage
Proper exception handling
Reusable methods
Logging and assertions”*
🔹 36. 250 Manual Test Cases – How to Group Suites?
*“I separate:
20–30 Smoke (core flows)
50–60 Sanity (bug fix validation)
200+ Regression (all major flows + edge cases)”*
🔹 37. When are Smoke/Sanity/Regression Tests Executed?
Smoke: on every new build.
Sanity: after bug fixes.
Regression: before a release or after major merges.
🔹 38. How to check if a test case is a good candidate for automation?
“Stable, repetitive, not UI-fluctuating, data-driven, business-critical tests are good
candidates. One-time or unstable tests are avoided.”
🎯 Real-Time Project Example (for interviews):
“In my project, verification happened early when the team reviewed BRD and
SRS documents to ensure clarity and correct understanding before development.
Later, once the login module was developed, we validated it by testing whether
users could log in with valid/invalid credentials, checking for real-time behavior
and bugs.”
✅ Simple Analogy to Remember:
📋 Verification is like reviewing the recipe before cooking.
🍲 Validation is tasting the dish after cooking to see if it meets expectations.
“INNER JOIN returns only the records that have matching values in both tables.
OUTER JOIN returns all records from one or both tables, even if there’s no match. It's
categorized as LEFT OUTER, RIGHT OUTER, and FULL OUTER join.”
✅ Types of Joins Explained:
INNER JOIN: returns only matching rows between Table A and Table B. Example: employees with matching department IDs.
LEFT OUTER JOIN: returns all rows from the left table and matching rows from the right table; NULL if no match. Example: all employees, even those without a department.
RIGHT OUTER JOIN: returns all rows from the right table and matching rows from the left; NULL if no match. Example: all departments, even those without employees.
FULL OUTER JOIN: returns all rows from both tables; NULL where there's no match. Example: all employees and all departments.
🧪 Example (Interview):
SELECT E.name, D.dept_name
FROM Employees E
INNER JOIN Departments D
ON E.dept_id = D.id;
🔹 Returns only those employees who belong to a department.
🧹 2. DELETE vs TRUNCATE vs DROP – Detailed
DELETE: deletes specific rows; rollback possible; table structure unaffected; slower; supports a WHERE clause.
TRUNCATE: deletes all rows; no rollback; structure unaffected; faster; no WHERE clause.
DROP: deletes the entire table; no rollback; structure removed; fastest; no WHERE clause.
Interview Tip:
“I use DELETE when I need to remove specific records and might need rollback.
TRUNCATE when I want to clear all data quickly without rollback.
DROP when I want to remove the entire table structure.”
🧪 Example:
DELETE FROM employees WHERE emp_id = 101;
TRUNCATE TABLE employees;
DROP TABLE employees;
Test Design Techniques – In Detail
These techniques help you write effective and optimized test cases for maximum
coverage with minimal effort.
✅ 1. Boundary Value Analysis (BVA)
📌 Test edges or boundaries of input ranges.
“If the input range is 18–60, I test: 17, 18, 19, 59, 60, 61.”
✅ Finds off-by-one errors.
✅ 2. Equivalence Partitioning (EP)
📌 Divide inputs into valid and invalid groups and test one from each.
“If the input range is 1–100, valid = [1–100], invalid = <1, >100. I test 50, 0, 101.”
✅ Reduces test count but keeps good coverage.
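Both techniques can be sketched together for an age field with a valid range of 18–60; the validator below is a hypothetical system under test, not from the source:

```java
import java.util.Arrays;
import java.util.List;

public class AgeValidatorTest {

    // Hypothetical system under test: valid age range is 18-60 inclusive
    static boolean isValidAge(int age) {
        return age >= 18 && age <= 60;
    }

    public static void main(String[] args) {
        // Boundary Value Analysis: values at and just around both edges
        List<Integer> bvaInputs = Arrays.asList(17, 18, 19, 59, 60, 61);
        for (int age : bvaInputs) {
            System.out.println("BVA age=" + age + " valid=" + isValidAge(age));
        }

        // Equivalence Partitioning: one representative value per partition
        System.out.println("EP valid partition (50): " + isValidAge(50));
        System.out.println("EP invalid partition (<18, e.g. 0): " + isValidAge(0));
        System.out.println("EP invalid partition (>60, e.g. 101): " + isValidAge(101));
    }
}
```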
✅ 3. Decision Table Testing
📌 Used when there are multiple conditions and resulting actions.
“Like if user has balance and OTP is correct, allow withdrawal.”
OTP Correct = Yes, Balance Available = Yes → Action: Allow
OTP Correct = No, Balance Available = Yes → Action: Reject
✅ Helps handle combinations of conditions.
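The withdrawal rule above can be coded directly from the decision table (method and label names are illustrative):

```java
public class WithdrawalRule {

    // Decision table: allow withdrawal only when OTP is correct AND balance is available
    static String decide(boolean otpCorrect, boolean balanceAvailable) {
        if (otpCorrect && balanceAvailable) {
            return "Allow";
        }
        return "Reject";
    }

    public static void main(String[] args) {
        System.out.println(decide(true, true));   // Allow
        System.out.println(decide(false, true));  // Reject
        System.out.println(decide(true, false));  // Reject
    }
}
```

Each row of the decision table becomes one test case against this method, which is exactly how the technique keeps condition combinations covered.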
✅ 4. Use Case Testing
📌 Based on real-world business scenarios.
“Login, Add to Cart, Place Order — these are use cases. I test as per user flows.”
✅ Helps test end-to-end flows.
✅ 5. State Transition Testing (Bonus!)
📌 Used when the system changes state based on input.
“Ex: ATM – insert card → enter PIN → select amount → dispense cash.”
✅ Great for workflow-based applications.
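The ATM flow above can be sketched as a tiny state machine; the states, events, and transitions are simplified assumptions for illustration:

```java
import java.util.HashMap;
import java.util.Map;

public class AtmStateMachine {

    enum State { IDLE, CARD_INSERTED, PIN_ENTERED, AMOUNT_SELECTED, CASH_DISPENSED }

    // Valid transitions: event -> (from-state -> to-state)
    static final Map<String, Map<State, State>> TRANSITIONS = new HashMap<>();
    static {
        TRANSITIONS.put("insertCard", Map.of(State.IDLE, State.CARD_INSERTED));
        TRANSITIONS.put("enterPin", Map.of(State.CARD_INSERTED, State.PIN_ENTERED));
        TRANSITIONS.put("selectAmount", Map.of(State.PIN_ENTERED, State.AMOUNT_SELECTED));
        TRANSITIONS.put("dispenseCash", Map.of(State.AMOUNT_SELECTED, State.CASH_DISPENSED));
    }

    // Returns the next state, or null for an invalid transition (a prime test target)
    static State next(State current, String event) {
        Map<State, State> byState = TRANSITIONS.get(event);
        return byState == null ? null : byState.get(current);
    }

    public static void main(String[] args) {
        State s = State.IDLE;
        for (String event : new String[]{"insertCard", "enterPin", "selectAmount", "dispenseCash"}) {
            s = next(s, event);
            System.out.println(event + " -> " + s);
        }
        // Invalid transition: trying to dispense cash right after inserting the card
        System.out.println(next(State.CARD_INSERTED, "dispenseCash")); // null
    }
}
```

State transition testing then means covering every valid transition once, plus the invalid ones (the nulls), which is where workflow bugs usually hide.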
✅ 6. Error Guessing
📌 Based on intuition and past experience.
“I try entering special characters in name fields or uploading large files.”
✅ Often catches hidden bugs.
✅ Different Types of Testing and Real-Time Usage
1. Smoke Testing — When: after every new build deployment. Example: check if the application launches, login works, and the homepage loads; if this fails, no further testing.
2. Sanity Testing — When: after a small bug fix or change to a module. Example: after fixing an “Add to Cart” bug, the tester checks only that feature quickly.
3. Functional Testing — When: after understanding requirements, during the initial QA cycle. Example: check login, registration, payment — feature-by-feature testing against requirement documents.
4. Regression Testing — When: before release or after any code change. Example: after a major bug fix or feature update, run the full suite to ensure no old feature is broken.
5. Retesting — When: after a bug is marked as “Fixed” by the developer. Example: re-run the same test case to confirm the bug is really fixed.
6. Exploratory Testing — When: at any stage, especially early builds or tight deadlines. Example: tester tries unexpected inputs and flows to break the app without test cases.
7. Ad-hoc Testing — When: informally after formal testing is done. Example: exploring the profile page with invalid data, refreshing flows, or multi-tab actions.
8. UI Testing — When: alongside functional testing or after a UI change. Example: check layout, button alignment, colors, responsive design.
9. Compatibility Testing — When: once the app is stable, usually once per release. Example: test the app on Chrome, Firefox, Safari, and mobile — does it behave the same?
10. Integration Testing — When: once modules are built and integrated. Example: test how the "Order" module works with the "Payment" module.
11. System Testing — When: the entire application is developed. Example: end-to-end testing of the entire system before UAT.
12. User Acceptance Testing (UAT) — When: before release, done by business/client. Example: final testing to ensure it works as expected in real-world usage.
13. Performance Testing — When: after dev freeze or before production. Example: check if the app loads under 100 users with response time < 2s.
14. Security Testing — When: before go-live or when handling sensitive data. Example: check SQL injection, URL manipulation, user access levels.
15. Usability Testing — When: during the design or UAT phase. Example: is it easy to navigate? Is the UI intuitive for users?
🔁 Real-Time Usage Flow (Agile Sprint)
🧩 Week 1-2:
Functional Testing (Test Case design + execution)
Integration Testing
Ad-hoc/Exploratory if needed
🧩 Week 2:
Smoke Test on each new build
Retesting of fixed bugs
Sanity Test after small fixes
🧩 Sprint End:
Regression Testing (stable features)
UAT (by Product Owner)
Performance/Security Testing if needed
✅ Interview Pro Tip – What to Say:
“In my project, I start with Smoke Testing once the build is deployed. Then I do
Functional Testing for all the planned stories. If any bugs are fixed, I do Retesting
and then Regression Testing before the sprint ends. I also perform Ad-hoc and
Exploratory Testing to catch edge cases. Before UAT, we perform full System
Testing.”
✅ Real-Time Testing Flow: From Dev to QA to Sign-off
🧱 1. Requirement Phase
BA/Product Owner creates BRD/SRS
Dev and QA attend refinement/grooming meetings
QA starts understanding and preparing test scenarios
2. Development Phase
Dev starts coding based on user stories
Unit Testing is done by developers
Once a small module or feature is ready and stable...
🔥 3. Build Deployment to QA
Dev releases a build into QA environment (via CI/CD tools like Jenkins)
QA is notified via email, JIRA, or Slack
✅ 4. Smoke Testing by QA
To ensure major modules like login, navigation, and dashboard are working.
If Smoke Passes → Proceed with Functional Testing
If Smoke Fails → Blocker bug raised, dev investigates
🧪 5. Functional Testing
QA executes test cases for the new features/stories
Bugs are logged in JIRA or defect tracker
Test data is prepared (sometimes with Dev/DB support)
Use positive & negative test cases
🔁 6. Retesting + Sanity Testing
After dev fixes bugs:
Retesting = QA verifies fixed bugs
Sanity = QA ensures nothing else broke during the fix (mini smoke test)
♻️ 7. Regression Testing
Run full regression to ensure new changes didn’t break existing features.
Usually executed before sprint end or before release
Run manually or via automation suite
🧭 8. Ad-hoc / Exploratory Testing
QA randomly explores app to catch unexpected issues not covered in test cases.
Done toward end of sprint or post-regression
Helps identify UI or real-time bugs
🔏 9. Bug Closure and QA Sign-Off
All test cases pass ✅
Critical/High severity bugs are fixed or deferred with approval
QA prepares a Sign-off Mail or Report (Test Summary, Defect status,
Blockers if any)
QA gives formal sign-off to Product Owner/Client
10. UAT (Optional)
Product Owner or Client performs User Acceptance Testing
Based on real-world scenarios
Once UAT passes, the product is ready for production release
🚀 11. Production Deployment
QA may do Post-Deployment Smoke Testing in production
Ensure key workflows work for real users
🔄 Visual Summary:
Requirements ➜ Dev Coding ➜ Unit Testing
↓
Build Release ➜ Smoke Testing
↓
Functional Testing ➜ Bug Reporting
↓
Retesting + Sanity
↓
Regression Testing + Ad-hoc
↓
QA Sign-off ➜ UAT ➜ Release to Prod
✅ Interview Answer Sample:
“Once developers complete a feature, they release a build. I start with Smoke Testing
to validate core modules. If it passes, I proceed with Functional Testing and log any
bugs. Once those are fixed, I perform Retesting, followed by Sanity Testing. Toward
the end of the sprint, I run full Regression Testing. I also perform Ad-hoc Testing to
catch missed edge cases. Finally, I prepare a QA sign-off report, and if needed,
support during UAT.”