
STQA IMP

1) Explain the different types of computer-aided software testing (CAST) tools. What are the benefits of using testing tools?

Types of CAST Tools:

1. Test Management Tools: e.g., JIRA, TestLink – manage test plans and cases.

2. Functional Testing Tools: e.g., Selenium, QTP – verify functionality.

3. Performance Testing Tools: e.g., JMeter, LoadRunner – assess performance.

4. Defect Tracking Tools: e.g., Bugzilla – track bugs.

5. Automation Tools: e.g., Appium – automate repetitive tests.

6. Security Testing Tools: e.g., Burp Suite, OWASP ZAP – probe applications for security vulnerabilities.

7. Static Analysis Tools: e.g., SonarQube – analyze code without execution.

8. Continuous Testing Tools: e.g., Jenkins – integrate testing in CI/CD.

9. Cross-Browser Testing Tools: e.g., BrowserStack – test across browsers.

10. API Testing Tools: e.g., Postman – validate APIs.

Benefits of Using Testing Tools:

1. Increase test accuracy by reducing human error.

2. Execute tests faster than manual testing.

3. Facilitate regression testing.

4. Support large-scale testing.

5. Improve team collaboration via centralized tools.

6. Provide detailed analytics and reports.

7. Reduce costs in the long run.

8. Help detect defects early.

9. Ensure tests are reproducible.

10. Enable continuous testing in DevOps pipelines.


2) What are the levels of testing? Explain in detail.


1. Unit Testing: Tests individual components or units of code.

2. Integration Testing: Tests data flow between integrated modules.

3. System Testing: Verifies the complete system’s functionality.

4. Acceptance Testing: Ensures software meets user requirements.


5. Regression Testing: Checks if changes caused unexpected issues.

6. Smoke Testing: Quick validation of major functionalities.

7. Sanity Testing: Focused testing after minor updates.

8. Alpha Testing: Internal testing by developers.

9. Beta Testing: External testing by end-users.

10. Exploratory Testing: Unscripted testing based on intuition.

3) Write a minimum of ten sample test cases.


1. Login validation with valid credentials.

2. Login failure with invalid credentials.

3. Password reset functionality.

4. Adding items to a shopping cart.

5. Removing items from the cart.

6. Searching for products in the store.

7. Validating API response status codes.

8. Checking form field validation.

9. Ensuring error messages appear for invalid input.

10. Verifying UI elements load correctly on mobile devices.
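
To show how a test case like number 1 might be automated, here is a minimal Selenium WebDriver sketch in Java. The URL and the element ids (username, password, loginButton) are hypothetical placeholders, not taken from any real application.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class LoginTest {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver(); // requires a chromedriver binary on the PATH
        driver.get("https://example.com/login"); // hypothetical URL
        driver.findElement(By.id("username")).sendKeys("validUser");   // hypothetical ids
        driver.findElement(By.id("password")).sendKeys("validPass123");
        driver.findElement(By.id("loginButton")).click();
        // Expected result: the user lands on the dashboard page
        boolean passed = driver.getCurrentUrl().contains("/dashboard");
        System.out.println("Test case 1 (valid login): " + (passed ? "PASS" : "FAIL"));
        driver.quit();
    }
}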

4) What is usability testing?


1. Evaluates how easy and intuitive software is for users.

2. Focuses on user satisfaction and effectiveness.

3. Involves real users performing tasks.

4. Checks navigation flow and UI clarity.

5. Identifies bottlenecks or usability issues.

6. Assesses accessibility for differently-abled users.

7. Measures time taken to complete tasks.

8. Determines if error messages are clear and actionable.

9. Helps improve retention and user engagement.

10. Ensures the software aligns with user expectations.


5) Explain the bug life cycle (defect life cycle).
1. New: Bug is identified and logged.

2. Assigned: Assigned to a developer for fixing.

3. Open: Developer begins work.

4. Fixed: Bug is resolved in code.

5. Retest: Tester verifies the fix.

6. Reopened: If the bug persists after the fix.

7. Deferred: Fix is postponed for later.

8. Rejected: Bug is not valid or reproducible.

9. Verified: Bug fix is confirmed.

10. Closed: Bug is no longer an issue.
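
The life cycle is essentially a state machine, so it can be sketched as a Java enum. The states and transitions below are an illustrative model mirroring the list above, not any real defect tracker's API.

import java.util.EnumSet;
import java.util.Set;

public enum BugState {
    NEW, ASSIGNED, OPEN, FIXED, RETEST, REOPENED, DEFERRED, REJECTED, VERIFIED, CLOSED;

    // Legal next states for each state, mirroring the cycle above
    static Set<BugState> nextStates(BugState s) {
        switch (s) {
            case NEW:      return EnumSet.of(ASSIGNED, REJECTED, DEFERRED);
            case ASSIGNED: return EnumSet.of(OPEN);
            case OPEN:     return EnumSet.of(FIXED, DEFERRED, REJECTED);
            case FIXED:    return EnumSet.of(RETEST);
            case RETEST:   return EnumSet.of(VERIFIED, REOPENED);
            case REOPENED: return EnumSet.of(ASSIGNED);
            case VERIFIED: return EnumSet.of(CLOSED);
            default:       return EnumSet.noneOf(BugState.class); // terminal states
        }
    }
}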

6) Difference Between Functional and Non-Functional Testing

Functional Testing:

1. Focuses on the business logic and functionality.

2. Ensures each feature works as expected.

3. Includes tests like login, form validation, and CRUD operations.

4. Based on requirements or specifications.

5. Includes Unit, Integration, and System Testing.

6. Tools: Selenium, QTP.

7. Example: Verifying the login process works correctly.

8. Ensures software meets user needs.

9. Output-oriented testing.

10. Performed manually or automated.

Non-Functional Testing:

1. Focuses on performance, usability, and reliability.

2. Ensures the system performs under varying conditions.

3. Includes performance, security, and scalability testing.

4. Tests beyond functional requirements.

5. Evaluates how the system behaves.

6. Tools: JMeter, LoadRunner.

7. Example: Checking if the app handles 1,000 users concurrently.


8. Ensures software delivers high quality.

9. Environment and architecture testing.

10. Usually automated.

7) Phases of Software Testing Life Cycle (STLC)

1. Requirement Analysis: Understand testable requirements.

2. Test Planning: Define scope, objectives, and strategies.

3. Test Case Development: Write detailed test cases.

4. Environment Setup: Prepare test environments (hardware/software).

5. Test Execution: Run tests based on cases.

6. Defect Reporting: Identify and document bugs.

7. Test Closure: Analyze results, and prepare closure reports.

8. Ensures a systematic approach to thorough testing.

9. Improves efficiency and reduces time-to-market.

10. Encourages collaboration between QA and development teams.

8) Regression Testing vs. Confirmation Testing

Regression Testing:

1. Ensures new changes don’t affect existing functionality.

2. Performed after modifications or bug fixes.

3. Focuses on entire software functionality.

4. Automated testing is commonly used.

5. Example: Verifying all login-related features after changing the UI.

Confirmation Testing:

1. Verifies that specific defects have been fixed.

2. Focuses on the particular defect area.

3. A type of re-testing.

4. Manual testing often used.

5. Example: Ensuring the password reset functionality now works as intended.

9) Boundary Value Analysis (BVA)


1. A black-box testing technique.

2. Focuses on testing the boundaries of input values.

3. Ensures system handles minimum, maximum, and edge cases.

4. Example: If input range is 1–10, test 0, 1, 10, and 11.

5. Identifies defects in boundary-related conditions.

6. Reduces the number of test cases while ensuring coverage.

7. Effective for numeric and range-based inputs.

8. Common in financial and numeric applications.

9. Helps ensure robustness.

10. Often complemented by equivalence partitioning (see the sketch below).
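
A minimal JUnit 5 sketch of point 4, assuming a hypothetical validator isInRange that accepts values from 1 to 10:

import static org.junit.jupiter.api.Assertions.*;
import org.junit.jupiter.api.Test;

public class BoundaryValueTest {

    // Hypothetical system under test: accepts 1..10 inclusive
    static boolean isInRange(int value) {
        return value >= 1 && value <= 10;
    }

    @Test
    public void testBoundaries() {
        assertFalse(isInRange(0));   // just below the lower boundary
        assertTrue(isInRange(1));    // lower boundary
        assertTrue(isInRange(10));   // upper boundary
        assertFalse(isInRange(11));  // just above the upper boundary
    }
}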

10) Configuration Management in Testing

1. Tracks and manages changes to software.

2. Maintains version control for test artifacts (test cases, scripts).

3. Ensures consistency between test environments.

4. Identifies which configurations were tested.

5. Tools: Git, Jenkins, SVN.

6. Helps manage dependencies across environments.

7. Facilitates rollback in case of errors.

8. Avoids conflicts in team collaboration.

9. Ensures traceability of changes.

10. Important in agile and CI/CD workflows.

11) Dynamic Testing

1. Involves executing the code.

2. Focuses on functional behavior and performance.

3. Examples: Unit, Integration, System Testing.

4. Detects runtime errors.

5. Performed after static testing.

6. Verifies the system meets user requirements.

7. Includes black-box and white-box testing.


8. Supports early bug identification during development.

9. Requires test cases and scripts.

10. Ensures a working application for end-users.

12) Difference Between QA and QC

Quality Assurance (QA):

1. Process-oriented.

2. Focuses on preventing defects.

3. Conducted during development.

4. Examples: Reviews, audits, process improvement.

5. Aims at building the right product.

6. Proactive in nature.

7. Ensures proper process adherence.

8. Involves management and testing teams.

9. Continuous activity.

10. Includes creating checklists and process documents.

Quality Control (QC):

1. Product-oriented.

2. Focuses on identifying defects.

3. Conducted after development.

4. Examples: Testing, inspections, defect reporting.

5. Aims at delivering a defect-free product.

6. Reactive in nature.

7. Ensures product quality.

8. Involves the testing team.

9. Periodic or scheduled activity.

10. Includes execution of test cases.

13) What is Decision Coverage?


1. A white-box testing technique.

2. Ensures all decision points (if/else, loops) in the code are tested.

3. Verifies each branch of a decision is executed at least once.

4. Provides insight into untested paths.

5. Example: For an if (x > 0) condition, test cases must include x > 0 and x <= 0 (see the sketch after this list).

6. Helps identify logical errors and unreachable code.

7. Achieves higher code coverage than statement coverage.

8. Requires detailed test case design.

9. Tools: Code coverage tools like JaCoCo or Cobertura.

10. Improves reliability of the software.
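
To make point 5 concrete, here is a minimal sketch: two JUnit 5 tests that together achieve full decision coverage of a single if/else. The classify method is a made-up example.

import static org.junit.jupiter.api.Assertions.*;
import org.junit.jupiter.api.Test;

public class DecisionCoverageTest {

    // Example code under test with one decision point
    static String classify(int x) {
        if (x > 0) {
            return "positive";
        } else {
            return "non-positive";
        }
    }

    @Test
    public void testTrueBranch() {
        assertEquals("positive", classify(5));      // exercises x > 0
    }

    @Test
    public void testFalseBranch() {
        assertEquals("non-positive", classify(-3)); // exercises x <= 0
    }
}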

14) Test Planning and Product Work in Testing

Test Planning:

1. A document defining test scope, objectives, and resources.

2. Created during the project planning phase.

3. Identifies required tools and environments.

4. Defines roles and responsibilities of team members.

5. Includes timelines, test case design, and execution plans.

6. Outlines risk management and contingency plans.

7. Helps streamline the testing process.

8. A living document updated as requirements change.

9. Ensures alignment with business goals.

10. Facilitates smooth communication within the team.

Product Work in Testing:

1. Analyzing product requirements.

2. Designing test scenarios and cases.

3. Configuring the test environment.

4. Executing functional and non-functional tests.

5. Logging and managing defects.

6. Verifying fixes through regression testing.

7. Creating test reports and metrics.


8. Validating the product against user expectations.

9. Conducting usability and exploratory tests.

10. Delivering a reliable and high-quality product.

15) What is TestNG? How to Set Priority in Testing?

TestNG Overview:

1. A testing framework for Java applications.

2. Supports annotations for test execution.

3. Enables parallel test execution.

4. Generates detailed reports.

5. Facilitates dependency-based and group testing.

6. Compatible with Selenium for automation.

7. Includes features like data-driven testing.

8. Allows easy integration into CI/CD pipelines.

9. Helps organize and execute test cases effectively.

10. Widely used for functional and regression testing.

Setting Priority in TestNG:

1. Use the priority attribute in the @Test annotation.

2. Lower priority value indicates higher execution precedence.

3. Default priority is 0 if not specified.

4. Example:

@Test(priority = 1)
public void testLogin() { }

@Test(priority = 2)
public void testLogout() { }

5. Helps define test execution order.

6. Avoids dependency conflicts in execution.

7. Useful for smoke and regression test suites.

8. Enables efficient test management.

9. Ensures critical tests are executed first.

10. Customizes test flows in complex scenarios.
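
A self-contained version of the snippet in point 4, runnable with TestNG on the classpath; with these priorities, testLogin always runs before testLogout:

import org.testng.annotations.Test;

public class PriorityDemo {

    @Test(priority = 1)
    public void testLogin() {
        System.out.println("Running login test first");
    }

    @Test(priority = 2)
    public void testLogout() {
        System.out.println("Running logout test second");
    }
}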


16) Verification vs. Validation in Testing

Verification:

1. Ensures the product is built correctly.

2. Process-oriented.

3. Conducted during development.

4. Examples: Reviews, inspections, walkthroughs.

5. Focuses on conformance to specifications.

6. Preventive in nature.

7. Answers: “Are we building the product right?”

8. Involves QA activities.

9. Does not involve code execution.

10. Improves process efficiency.

Validation:

1. Ensures the correct product is built.

2. Product-oriented.

3. Conducted after development.

4. Examples: Functional and system testing.

5. Focuses on meeting user requirements.

6. Detective in nature.

7. Answers: “Are we building the right product?”

8. Involves QC activities.

9. Includes code execution.

10. Ensures product reliability and usability.

17) Role and Responsibilities of a Test Leader

1. Define testing strategies and plans.

2. Allocate tasks to the testing team.

3. Review and approve test cases.

4. Ensure testing aligns with business requirements.

5. Facilitate communication between teams.


6. Monitor test execution and progress.

7. Manage risks and resolve testing bottlenecks.

8. Ensure proper defect reporting and tracking.

9. Generate test metrics and reports.

10. Promote quality standards within the team.

18) Black-box vs. White-box Testing

Black-box Testing:

1. Focuses on system behavior without internal knowledge.

2. Tests based on requirements and inputs.

3. Examples: Functional, usability, and regression testing.

4. Performed by testers or end-users.

5. Uses techniques like equivalence partitioning and BVA.

6. Simple and quick to perform.

7. Does not involve code-level testing.

8. Ensures feature completeness.

9. Tool example: Selenium.

10. Effective for acceptance testing.

White-box Testing:

1. Focuses on internal structure and code logic.

2. Tests paths, loops, and decision points.

3. Examples: Unit, integration, and decision coverage testing.

4. Performed by developers or technical testers.

5. Uses techniques like path testing and cyclomatic complexity.

6. Requires programming knowledge.

7. Ensures code robustness and reliability.

8. Tool example: JUnit.

9. Improves code maintainability.

10. Effective for performance optimization.
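
As a small white-box sketch of path testing (point 5), the loop below has three interesting paths: zero, one, and many iterations; one JUnit 5 test exercises each. The sum method is a made-up example.

import static org.junit.jupiter.api.Assertions.*;
import org.junit.jupiter.api.Test;

public class PathCoverageTest {

    // Code under test: sums the first n positive integers
    static int sum(int n) {
        int total = 0;
        for (int i = 1; i <= n; i++) {
            total += i;
        }
        return total;
    }

    @Test
    public void testLoopPaths() {
        assertEquals(0, sum(0));   // loop body never executes
        assertEquals(1, sum(1));   // loop executes exactly once
        assertEquals(15, sum(5));  // loop executes many times
    }
}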

19) Static Testing


1. Testing without executing the code.

2. Conducted in early stages of SDLC.

3. Examples: Reviews, walkthroughs, and inspections.

4. Identifies syntax errors, design flaws, and requirement gaps.

5. Saves cost by detecting defects early.

6. Involves stakeholders like developers, QA, and analysts.

7. Improves code quality and maintainability.

8. Facilitates better understanding of requirements.

9. Tools: SonarQube, Checkstyle.

10. Complements dynamic testing.

20) Seven Testing Principles

1. Defect Clustering: Most bugs are in a few modules.

2. Pesticide Paradox: Repeating tests reduces effectiveness.

3. Testing Shows Presence of Defects: Can reveal issues but not prove absence.

4. Exhaustive Testing Is Impossible: Testing all input combinations is impractical.

5. Early Testing Saves Costs: Start testing in early SDLC phases.

6. Context Dependent Testing: Approach varies by project.

7. Absence of Errors Fallacy: No bugs ≠ correct product if requirements are wrong.

21) VV&T in SDLC (Verification, Validation, and Testing)

1. Verification: Ensures processes and requirements are correctly implemented.

2. Validation: Ensures the final product meets user expectations.

3. Testing: Detects defects by executing the system.

4. Conducted at every phase of SDLC (e.g., requirement, design, coding).

5. Verification techniques include reviews and inspections.

6. Validation techniques include functional and system testing.

7. Testing ensures both verification and validation goals are met.

8. Involves tools like TestNG, Selenium, and JIRA for automation and tracking.

9. Early VV&T reduces cost and improves product quality.

10. Helps in delivering defect-free and reliable software.


22) Software Quality Assurance (SQA) and Activities

1. Definition: SQA ensures software meets quality standards and best practices.

2. Focuses on preventing defects during the development lifecycle.

3. Activities:

- Requirement analysis.

- Test planning and execution.

- Process improvement (e.g., CMMI, Six Sigma).

- Configuration management.

- Quality audits and metrics analysis.

- Risk management.

- Training and mentoring QA teams.

4. Ensures compliance with industry standards.

5. Involves tools like TestLink, Quality Center.

6. Bridges the gap between development and testing teams.

7. Helps maintain customer satisfaction.

8. Drives continuous process improvement.

9. Plays a proactive role in agile and DevOps workflows.

10. Supports overall software lifecycle management.

23) Types of Testing: Benefits and Risks

Types:

1. Unit Testing.

2. Integration Testing.

3. System Testing.

4. Acceptance Testing.

5. Regression Testing.

6. Performance Testing.

7. Security Testing.

8. Usability Testing.

9. Compatibility Testing.
10. Exploratory Testing.

Benefits:

1. Early defect identification.

2. Improved software quality.

3. Enhanced customer satisfaction.

4. Compliance with standards.

5. Reduced maintenance cost.

6. Better performance under stress.

7. Security against vulnerabilities.

8. Ensures functionality across platforms.

9. Supports scalability.

10. Facilitates continuous integration and delivery.

Risks:

1. Time and cost constraints.

2. Insufficient coverage in large systems.

3. Over-dependence on automation tools.

4. Lack of skilled resources.

5. Risk of missing edge cases.

6. Poorly maintained test environments.

7. Misaligned testing with requirements.

8. Communication gaps between teams.

9. Testing fatigue in repetitive tasks.

10. Unaddressed compatibility issues.

24) What is Test Automation? Explain CAST

Test Automation:

1. The process of using tools to execute tests automatically.

2. Reduces manual effort in repetitive tasks.

3. Tools: Selenium, Appium, TestComplete.

4. Supports faster regression testing.

5. Improves test accuracy and coverage.


6. Facilitates testing in DevOps pipelines.

7. Enables large-scale performance testing.

8. Reduces time-to-market.

9. Requires initial investment in tools and scripting.

10. Ideal for stable and repetitive test cases.

CAST (Computer-Aided Software Testing):

1. Encompasses tools that aid in testing processes.

2. Includes test management, automation, and defect tracking.

3. Examples: JIRA, Jenkins, Postman, BrowserStack.

4. Reduces manual dependency.

5. Improves efficiency and consistency.

6. Supports continuous integration and delivery.

7. Facilitates API, security, and performance testing.

8. Encourages collaboration and transparency.

9. Offers real-time reporting and analytics.

10. Enhances overall software quality assurance.

25) STLC Life Cycle

1. Requirement Analysis: Understand test requirements and objectives.

2. Test Planning: Create a comprehensive test strategy and plan.

3. Test Case Design: Write test cases and scenarios based on requirements.

4. Environment Setup: Configure test environments and tools.

5. Test Execution: Run tests and log results.

6. Defect Reporting: Identify, report, and track defects.

7. Regression Testing: Verify fixes don’t break existing functionality.

8. Test Closure: Analyze and document testing outcomes.

9. Deliverables: Test summary reports, metrics, and defect logs.

10. Ensures thorough and systematic testing.

26) Reliability Measurement Factors

1. Failure Rate: Frequency of failures over time.


2. MTBF (Mean Time Between Failures): Time between successive failures.

3. MTTR (Mean Time to Repair): Time taken to resolve issues.

4. Availability: Percentage of uptime vs. downtime.

5. Fault Tolerance: System’s ability to function despite faults.

6. Recovery Time: Time needed to restore operations post-failure.

7. Test Coverage: Extent of functionality tested.

8. Defect Density: Number of defects per module/lines of code.

9. User Feedback: Insights from end-users on reliability.

10. Environment Suitability: Performance in different environments.
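
A quick worked example tying points 2-4 together: availability is conventionally computed as MTBF / (MTBF + MTTR). The figures below are invented for illustration.

public class AvailabilityExample {
    public static void main(String[] args) {
        double mtbfHours = 500.0;  // mean time between failures (assumed)
        double mttrHours = 2.0;    // mean time to repair (assumed)
        double availability = mtbfHours / (mtbfHours + mttrHours);
        System.out.printf("Availability: %.2f%%%n", availability * 100); // prints ~99.60%
    }
}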

27) Short Notes

1 Product & Project Risk:

Product Risk:

1. Defects that degrade software quality.

2. Causes user dissatisfaction or financial loss.

3. Examples: Poor performance, security vulnerabilities.

4. Leads to high defect density or frequent failures.

5. Affects usability and functionality.

6. Requires mitigation through thorough testing.

7. Can damage the organization’s reputation.

8. Measured through metrics like MTTR and failure rate.

Project Risk:

1. Impacts delivery schedules and resource allocation.

2. Examples: Scope creep, team skill gaps, or budget overruns.

3. Delays due to unforeseen technical challenges.

4. Risk of team attrition or resource unavailability.

5. Misaligned goals between stakeholders.

6. Addressed using risk management frameworks.

7. Requires contingency planning.

8. Impacts both short-term and long-term project outcomes.


2 V-Model of Testing:

1. Emphasizes early defect detection.

2. Verification activities correspond to each SDLC phase (e.g., requirement review aligns with
acceptance testing).

3. Validation focuses on actual functionality testing.

4. Follows a strict sequential development/testing approach.

5. Minimizes the cost of late-phase defect fixes.

6. Promotes a structured and disciplined process.

7. Challenges include less flexibility for requirement changes.

8. Well-suited for small or medium-sized projects with stable requirements.

3 Experience-Based Testing:

1. Uses the tester's domain knowledge and past experience.

2. Fills gaps where formal requirements are incomplete or unclear.

3. Helps uncover edge-case scenarios not covered in test cases.

4. Quick to set up with minimal preparation.

5. Supports exploratory and ad-hoc testing methods.

6. Requires skilled testers familiar with the product or domain.

7. Less structured but highly adaptive to real-world scenarios.

8. Can complement formal testing strategies for comprehensive coverage.

4 Walkthrough:

1. A collaborative activity involving stakeholders.

2. Focuses on understanding requirements or design logic.

3. Typically involves the author presenting the work to a group.

4. Helps identify early defects in documentation or code.

5. Encourages feedback and constructive suggestions.

6. No formal documentation or approval required.

7. Useful in agile environments for continuous feedback.

8. Builds team consensus and shared understanding.


5 Equivalence Partitioning:

1. Divides inputs into valid and invalid equivalence classes.

2. Reduces the number of test cases without losing coverage.

3. Ensures all possible input scenarios are tested.

4. Identifies boundary values for additional focus.

5. Example: Testing for age input (valid: 1-120; invalid: <1 or >120); see the sketch after this list.

6. Enhances efficiency in functional and black-box testing.

7. Minimizes redundant test cases.

8. Ensures critical scenarios are prioritized.
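
A minimal JUnit 5 sketch of the age example in point 5, with one representative value per equivalence class, assuming a hypothetical isValidAge check:

import static org.junit.jupiter.api.Assertions.*;
import org.junit.jupiter.api.Test;

public class EquivalencePartitionTest {

    // Hypothetical validator: valid ages are 1..120
    static boolean isValidAge(int age) {
        return age >= 1 && age <= 120;
    }

    @Test
    public void testOneValuePerPartition() {
        assertFalse(isValidAge(-5));  // invalid class: below 1
        assertTrue(isValidAge(35));   // valid class: 1..120
        assertFalse(isValidAge(150)); // invalid class: above 120
    }
}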

6 Test Process Monitoring:

1. Tracks real-time progress of test activities.

2. Measures KPIs like defect density, pass/fail ratios.

3. Provides insights into test coverage gaps.

4. Identifies bottlenecks in execution phases.

5. Utilizes dashboards or reports for visibility.

6. Ensures test alignment with deadlines and scope.

7. Facilitates decision-making on resource reallocation.

8. Aids in evaluating the overall effectiveness of testing strategies.

7 Software Quality Metrics:

1. Quantifies aspects of software quality (e.g., reliability, usability).

2. Examples: Code churn, test coverage, defect escape rate.

3. Helps in tracking improvement trends.

4. Assesses maintainability and reusability of code.

5. Reduces subjectivity in quality evaluation.

6. Facilitates performance benchmarking across releases.

7. Informs decision-making for process improvements.

8. Tools: SonarQube, Jenkins, and custom dashboards.
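
For example, a module with 15 defects found in 5,000 lines of code has a defect density of 15 / 5 = 3 defects per KLOC; tracking this figure across releases shows whether quality is trending up or down.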

8 Cyclomatic Complexity:
1. Measures decision paths in the code.

2. Helps identify overly complex or risky code areas.

3. Formula: V(G) = E − N + 2 for a connected control-flow graph, where E is the number of edges and N the number of nodes (a worked example follows this list).

4. High complexity indicates the need for refactoring.

5. Used in white-box testing for path coverage.

6. Guides unit testing by defining minimum test cases.

7. Tools: Code analysis tools like Checkstyle.

8. Enhances code readability and maintainability.
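
A worked example of the formula in point 3: the method below has two decision points, so V(G) = 2 + 1 = 3, meaning at least three test cases are needed to cover every independent path. The grading thresholds are invented for illustration.

public class GradeExample {
    // Two decisions (two ifs) => cyclomatic complexity V(G) = 3
    static String grade(int score) {
        if (score >= 90) return "A";
        if (score >= 60) return "Pass";
        return "Fail";
    }

    public static void main(String[] args) {
        // Minimum test set: one case per independent path
        System.out.println(grade(95)); // A
        System.out.println(grade(70)); // Pass
        System.out.println(grade(40)); // Fail
    }
}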

9 Decision Table Testing:

1. Maps input conditions to expected outputs in a table format.

2. Ensures all combinations of conditions are tested.

3. Helps uncover business logic defects.

4. Simplifies complex decision-making processes.

5. Example: Loan approval with criteria like income and credit score (see the sketch after this list).

6. Supports requirement-driven and functional testing.

7. Enhances clarity and test case design efficiency.

8. Reduces ambiguity in testing complex rules.
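
A sketch of the loan example in point 5, with the decision table written out as comments and applied in code; the income and credit-score thresholds are invented.

public class LoanDecision {
    // Decision table (invented rules):
    // Rule | income >= 50k | creditScore >= 700 | Outcome
    //  R1  |     yes       |        yes         | Approve
    //  R2  |     yes       |        no          | Manual review
    //  R3  |     no        |        yes         | Manual review
    //  R4  |     no        |        no          | Reject
    static String decide(double income, int creditScore) {
        boolean goodIncome = income >= 50_000;
        boolean goodCredit = creditScore >= 700;
        if (goodIncome && goodCredit) return "Approve";  // R1
        if (!goodIncome && !goodCredit) return "Reject"; // R4
        return "Manual review";                          // R2, R3
    }

    public static void main(String[] args) {
        System.out.println(decide(60_000, 720)); // Approve
        System.out.println(decide(60_000, 650)); // Manual review
        System.out.println(decide(40_000, 650)); // Reject
    }
}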

10 Alpha & Beta Testing:

Alpha Testing:

1. Conducted by internal developers or testers.

2. Performed in a controlled environment.

3. Detects defects before external exposure.

4. Simulates real-world scenarios to some extent.

5. Ensures software stability before beta release.

6. Includes functional, usability, and performance testing.

7. Feedback loops are quicker due to proximity to the development team.

8. May involve tools for monitoring and debugging.

Beta Testing:

1. Conducted by real users in production-like environments.


2. Aims to gather user feedback and uncover unanticipated issues.

3. Last stage before full production release.

4. Enhances software reliability through diverse user environments.

5. Helps evaluate user satisfaction and feature effectiveness.

6. Feedback is used to fine-tune the product.

7. Requires clear guidelines and communication with beta users.

8. Risk of exposing minor defects to end-users.
