Sco203 PP Answers

The document outlines various aspects of software testing, including types of testing, test case design, and the importance of software quality assurance (SQA). It discusses the benefits and limitations of automated testing, key roles of testers, and the significance of adherence to quality standards. Additionally, it covers integration testing strategies, test planning, and the importance of verification and validation in software development.


QUESTION ONE (30 MARKS)

a. Four instances in which automated testing might be used (4 marks)

1. Regression Testing – When new features are added, automated tests ensure existing
functionality remains intact.

2. Performance Testing – Automated tools simulate multiple users to test system performance
under load.

3. Repetitive Tasks – Running the same tests frequently (e.g., nightly builds) to ensure stability.

4. Large-Scale Data Validation – Manually testing databases or APIs with thousands of records is inefficient; automation handles the volume.

b. Four key items in a test case design document (4 marks)

1. Test Case ID – Unique identifier for tracking.

2. Test Steps – Detailed actions to execute the test.

3. Expected Result – What should happen if the test passes.

4. Actual Result & Status – Records whether the test passed or failed.
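
For illustration, a minimal filled-in test case (all values hypothetical):

```
Test Case ID:    TC-LOGIN-01
Test Steps:      1. Open the login page. 2. Enter a valid username and password. 3. Click "Login".
Expected Result: User is redirected to the dashboard.
Actual Result:   Redirected to dashboard. Status: PASS
```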

c. Acceptance testing and three expected outcomes (6 marks)

- Definition: Final testing phase where stakeholders verify if the software meets business needs.

- Expected Outcomes:

1. Confirms Business Requirements – Ensures software aligns with user needs.

2. Identifies Last-Minute Issues – Catches defects before release.

3. Sign-Off Approval – Stakeholders approve deployment.

d. Three benefits of integration testing in time-critical projects (3 marks)

1. Early Defect Detection – Finds interface issues between modules early.

2. Reduces System Failures – Ensures combined components work as intended.

3. Saves Time – Prevents late-stage integration problems that delay projects.


e. Three restrictions of testing tools (3 marks)

1. High Initial Cost – Commercial tools require licensing, and even free tools like Selenium require training and setup time.

2. False Positives/Negatives – Automated scripts may misinterpret results.

3. Limited to Scriptable Tests – Creativity-based testing (e.g., UX) still needs manual effort.

f. Three reasons SQA is important, and how it is achieved (5 marks)

1. Reduces Costs – Early bug detection lowers the cost of fixes.

2. Enhances Reliability – Ensures software performs as expected.

3. Customer Satisfaction – Delivers a high-quality product.

- Achieved via: Standards (ISO), reviews, and continuous testing.

g. Five measurable aspects of software quality (5 marks)

1. Functionality – Does it work as intended?

2. Performance – Speed and responsiveness.

3. Usability – User-friendliness (e.g., navigation).

4. Reliability – System uptime and error rates.

5. Maintainability – Ease of updates and debugging.

QUESTION TWO (20 MARKS)

a. Five reasons for SQA standards (5 marks)

1. Consistency – Uniform processes across teams.

2. Compliance – Meets legal/industry regulations.

3. Risk Mitigation – Reduces project failures.

4. Improves Efficiency – Streamlines development.

5. Global Compatibility – Aligns with international best practices.


b. Main elements of ISO 9000-3 (5 marks)

1. Quality Management System – Framework for processes.

2. Management Responsibility – Leadership commitment.

3. Resource Management – Ensures adequate tools/skills.

4. Product Realization – Development lifecycle controls.

5. Measurement & Improvement – Continuous feedback loops.

c. Four key roles of a software tester (4 marks)

1. Test Planning – Designs test strategies.

2. Defect Reporting – Logs and tracks bugs.

3. Automation – Implements scripted tests.

4. Collaboration – Works with devs to resolve issues.

d. Examples of Failure, Fault, Error (6 marks)

- Failure: System crash during login (visible issue).

- Fault: Incorrect code logic in authentication (defect).

- Error: Typo in code (e.g., `if (passowrd == ...)`).
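
A minimal Java sketch of the chain, assuming a simple password check (all names hypothetical): the typo is the error, the wrong comparison it produces is the fault, and the rejected valid login at runtime is the failure.

```
public class Login {
    // Error: the developer typed "passowrd" instead of "password".
    // Fault: as a result, the method compares against an empty string.
    public static boolean authenticate(String password, String stored) {
        String passowrd = ""; // the typo (human error) left in the code
        return stored.equals(passowrd); // defective logic (fault)
    }

    public static void main(String[] args) {
        // Failure: a valid login is rejected when the code runs.
        System.out.println(authenticate("secret", "secret")); // prints false
    }
}
```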

QUESTION THREE (20 MARKS)

a. Login page test cases (6 marks)

- Requirements:

1. Valid credentials grant access.

2. Invalid credentials show error.

3. Password masking (security).

- Test Cases:

- Input correct username/password → Redirect to dashboard.


- Input wrong password → Display "Invalid credentials."
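
A minimal automated version of the second test case, sketched with Selenium WebDriver in Java (the URL and the element IDs username, password, loginBtn, and error are assumptions, not a real application):

```
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class LoginErrorTest {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://example.com/login"); // hypothetical login page
            driver.findElement(By.id("username")).sendKeys("validUser");
            driver.findElement(By.id("password")).sendKeys("wrongPass");
            driver.findElement(By.id("loginBtn")).click();
            // Verify the expected error message is displayed.
            String message = driver.findElement(By.id("error")).getText();
            System.out.println(message.equals("Invalid credentials.") ? "PASS" : "FAIL");
        } finally {
            driver.quit(); // always release the browser
        }
    }
}
```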

b. SQA activities for a new manager (6 marks)

1. Audit Processes – Review existing workflows.

2. Implement Standards – Adopt ISO/CMMI.

3. Team Training – Ensure skill alignment.

4. Metrics Tracking – Defect rates, test coverage.

c. Four SQA pitfalls (4 marks)

1. Overlooking Documentation – Leads to miscommunication.

2. Late Testing – Increases cost of fixes.

3. Tool Dependency – Neglecting manual checks.

4. Resistance to Change – Teams avoiding new practices.

d. Bug cycle (4 marks)

1. Discovery – Tester identifies bug.

2. Reporting – Logged in tracking system.

3. Fix & Retest – Dev resolves; tester verifies.

4. Closure – Bug marked as resolved.
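
The "structured tracking" idea can be made concrete as a small state machine; a Java sketch with hypothetical states mirroring the four steps above:

```
// Bug life cycle as a simple state machine (states are illustrative).
enum BugState { DISCOVERED, REPORTED, FIXED, RETESTED, CLOSED }

class Bug {
    private BugState state = BugState.DISCOVERED; // 1. Discovery
    void report() { state = BugState.REPORTED; }  // 2. Reporting
    void fix()    { state = BugState.FIXED; }     // 3a. Dev resolves
    void retest() { state = BugState.RETESTED; }  // 3b. Tester verifies
    void close()  { state = BugState.CLOSED; }    // 4. Closure
    BugState current() { return state; }
}
```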

Key Takeaways:

- Automation saves time but has limitations.

- SQA standards prevent costly errors.

- Acceptance testing ensures stakeholder satisfaction.

- Bug cycles require structured tracking.

QUESTION FOUR (20 MARKS)

a. Verification vs. Validation with examples (5 marks)

- Verification: Checks if the product is built correctly (process-focused).


- Example: Code reviews to ensure adherence to coding standards.

- Validation: Checks if the right product is built (user-focused).

- Example: User testing a login feature to confirm it meets requirements.

b. Four solutions to software failures (4 marks)

1. Root Cause Analysis (RCA) – Identify and fix underlying issues (e.g., logging errors to trace
bugs).

2. Automated Monitoring – Tools like New Relic detect failures in real time.

3. Rollback Mechanisms – Revert to a stable version if updates fail.

4. User Training – Reduce human error (e.g., guiding users on input formats).

c. Three origins of software defects (6 marks)

1. Requirement Gaps – Ambiguous specs lead to mismatched features.

- Example: Unclear password complexity rules.

2. Coding Errors – Logical mistakes (e.g., infinite loops).

3. Environmental Issues – OS/dependency conflicts.

- Example: App crashes on older Android versions.

d. Performance/Stress testing for a banking app (5 marks)

- Performance Testing:

- Simulate 1,000 users logging in simultaneously to measure response time (<2 seconds).

- Stress Testing:

- Overload the server (e.g., 10,000 transactions/minute) to identify breaking points.

- Tools: JMeter or LoadRunner.
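
A minimal Java sketch of the performance-test idea, simulating concurrent logins with a thread pool and checking the worst response time against the 2-second target (simulateLogin is a placeholder, not a real banking API):

```
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class LoginLoadTest {
    public static void main(String[] args) throws Exception {
        int users = 1000; // simulated simultaneous logins
        ExecutorService pool = Executors.newFixedThreadPool(100);
        List<Future<Long>> timings = new ArrayList<>();
        for (int i = 0; i < users; i++) {
            timings.add(pool.submit(() -> {
                long start = System.nanoTime();
                simulateLogin(); // placeholder for a real HTTP login request
                return (System.nanoTime() - start) / 1_000_000; // milliseconds
            }));
        }
        long worst = 0;
        for (Future<Long> t : timings) worst = Math.max(worst, t.get());
        pool.shutdown();
        System.out.println("Worst response: " + worst + " ms -> "
                + (worst < 2000 ? "PASS" : "FAIL"));
    }

    static void simulateLogin() throws InterruptedException {
        Thread.sleep(50); // stands in for server processing time
    }
}
```

In practice the same load profile is usually expressed in a JMeter test plan and run in non-GUI mode, e.g. `jmeter -n -t plan.jmx -l results.jtl` (file names hypothetical).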

QUESTION FIVE (20 MARKS)


a. Testing scenario vs. test case (4 marks)

- Testing Scenario: A high-level description of what to test (e.g., "Test login functionality").

- Test Case: A detailed specification of how to test it (e.g., "Enter valid credentials → Verify dashboard loads").

b. Purpose of a test plan (3 marks)

- Defines scope, resources, schedule, and deliverables for testing.

- Ensures alignment with project goals (e.g., "Complete 95% test coverage by Sprint 3").

c. Role of alpha/beta testing in acceptance (3 marks)

- Alpha Testing: Internal team tests in a lab (e.g., developers validate core features).

- Beta Testing: Real users test a pre-release version in their own environment (e.g., customers report UX issues before launch).

d. Four automation testing challenges (4 marks)

1. High Maintenance – Scripts break with UI changes.

2. Skill Dependency – Requires programming expertise.

3. False Results – Misleading pass/fail due to flaky tests.

4. Initial Cost – Tools/licenses are expensive.

e. Kinds of software testing (6 marks)

1. Unit Testing – Tests individual functions (e.g., JUnit for Java).

2. Integration Testing – Checks module interactions (e.g., API testing).

3. System Testing – End-to-end validation (e.g., Selenium for web apps).

4. Regression Testing – Ensures new code doesn’t break old features.

5. Security Testing – Identifies vulnerabilities (e.g., OWASP ZAP).

6. Usability Testing – Evaluates user experience (e.g., A/B testing layouts).
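
A minimal JUnit 5 sketch of the first kind, testing a single function in isolation (the add method is a hypothetical unit under test):

```
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class CalculatorTest {
    // Hypothetical unit under test.
    int add(int a, int b) { return a + b; }

    @Test
    void addReturnsSumOfOperands() {
        assertEquals(5, add(2, 3)); // expected result vs. actual result
    }
}
```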


Key Takeaways for All Questions

- Verification ≠ Validation – One checks the process, the other checks the product.

- Automation Trade-offs – Speed vs. maintenance costs.

- Testing Hierarchy – Unit → Integration → System → Acceptance.

- SQA Pitfalls – Late testing and poor documentation are major risks.

Answers to QUESTION ONE (30 MARKS)

a) Five conditions that result in a bug in software (5 marks)

1. Incorrect requirements interpretation - When developers misunderstand or misinterpret the software requirements.

2. Programming errors - Mistakes in coding logic, syntax errors, or algorithmic flaws.

3. Inadequate testing - Failure to test all possible scenarios or edge cases.

4. Integration issues - Problems that arise when different modules or components interact.

5. Environmental differences - Discrepancies between development, testing, and production environments.

b) Illustration of TDD phases in XP software development life cycle (5 marks)

```
[Write Test] → [Run Test (Fail)] → [Write Code] → [Run Test (Pass)] → [Refactor Code]
      ↑                                                                      |
      └──────────────────────────────────────────────────────────────────────┘
```

1. Write Test: Create a test for a small piece of functionality before writing the code.

2. Run Test (Fail): Verify the test fails (since the code doesn't exist yet).

3. Write Code: Implement just enough code to make the test pass.

4. Run Test (Pass): Verify the code now passes the test.

5. Refactor Code: Improve the code structure while maintaining functionality.
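
A minimal Java/JUnit sketch of one TDD cycle, assuming a hypothetical password-strength rule: the test is written first and fails (steps 1-2) until the implementation below it is added (steps 3-4), after which the code can be refactored safely (step 5).

```
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;
import org.junit.jupiter.api.Test;

class PasswordRuleTest {
    // Steps 1-2: written before PasswordRule existed, so it initially failed.
    @Test
    void acceptsLongAndRejectsShortPasswords() {
        assertTrue(PasswordRule.isStrong("longEnough1"));
        assertFalse(PasswordRule.isStrong("short"));
    }
}

class PasswordRule {
    // Steps 3-4: just enough code to make the test pass.
    static boolean isStrong(String password) {
        return password != null && password.length() >= 8;
    }
}
```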


c) Five critical manual test tasks for a login screen (5 marks)

1. Verify correct credentials - Test that valid username/password combinations grant access.

2. Test invalid credentials - Verify the system rejects incorrect username/password combinations.

3. Check password masking - Ensure password fields obscure input characters.

4. Validate error messages - Confirm appropriate error messages display for failed attempts.

5. Test empty submissions - Verify the system handles blank username or password fields
properly.

d) Five ways software inspections improve software quality (5 marks)

1. Early defect detection - Finds errors before they become costly to fix.

2. Knowledge sharing - Team members learn from each other's expertise.

3. Standard adherence - Ensures compliance with coding standards and best practices.

4. Improved design - Identifies architectural flaws early in development.

5. Documentation quality - Verifies that documentation matches the actual implementation.

e) Six indirectly measured factors affecting software quality (6 marks)

1. Maintainability - How easily the software can be modified or updated.

2. Portability - The ease with which software can be transferred between environments.

3. Reusability - The extent to which code can be reused in other projects.

4. Testability - How easily the software can be tested.

5. Understandability - How easily the code can be comprehended by developers.

6. Interoperability - The ability to interact with other systems or components.

f) Four major QA tasks for quality software production (4 marks)

1. Requirement analysis and validation - Thoroughly review and validate all requirements.

2. Comprehensive test planning - Develop detailed test plans covering all aspects.

3. Continuous integration and testing - Implement automated builds and frequent testing.

4. Process audits and improvement - Regularly review and improve development processes.

Answers to Software Engineering Questions

QUESTION THREE (20 MARKS)

a) Importance of Quality Standards in Software Development (10 marks)

Quality standards in software development ensure consistency, reliability, and efficiency. Their
importance includes:

1. Consistency – Ensures uniformity in development processes and outputs.

2. Reliability – Reduces defects, leading to more stable and dependable software.

3. Compliance – Helps meet regulatory and industry requirements (e.g., ISO 9001, IEEE
standards).

4. Customer Satisfaction – Delivers software that meets user expectations and reduces failures.

5. Cost Efficiency – Minimizes rework, debugging, and maintenance costs by catching issues
early.

6. Maintainability – Ensures code is well-documented and structured for future updates.

7. Interoperability – Facilitates compatibility with other systems and technologies.

8. Security – Adherence to security standards (e.g., OWASP) reduces vulnerabilities.

9. Competitive Advantage – High-quality software enhances market reputation.

10. Process Improvement – Encourages continuous refinement of development practices.

b) Five Attributes Low-Level Specification Tests Focus on in Static Black Box Testing (10 marks)

Static black box testing examines requirements and design documents without executing code.
Key attributes include:

1. Completeness – Checks if all necessary requirements are documented.

2. Consistency – Ensures no contradictory requirements exist.

3. Correctness – Verifies that specifications align with business needs.


4. Clarity – Assesses whether requirements are unambiguous and understandable.

5. Testability – Determines if requirements can be effectively tested.

QUESTION FOUR (20 MARKS)

a) Steps in Software Test Design Specifications (IEEE 829 Standard) (10 marks)

The IEEE 829 standard outlines the following steps:

1. Test Plan Identification – Unique identifier for the test design.

2. Features to be Tested – Lists functionalities under test.

3. Test Approach – Describes testing techniques (e.g., unit, integration, system testing).

4. Test Case Design – Specifies test inputs, procedures, and expected results.

5. Pass/Fail Criteria – Defines conditions for test success or failure.

6. Test Deliverables – Lists documents, logs, and reports to be produced.

7. Environmental Needs – Specifies hardware, software, and tools required.

8. Schedule & Responsibilities – Assigns tasks and timelines.

9. Risks & Contingencies – Identifies potential issues and mitigation plans.

10. Approval & Review – Ensures stakeholders validate the test design.

b) Five Hardware Configuration Elements to Test When Buying a New Computer (10 marks)

Configuration testing ensures compatibility across different hardware setups. Key elements
include:

1. Processor (CPU) – Different models (Intel, AMD) and speeds.

2. Memory (RAM) – Various capacities (8GB, 16GB, 32GB).

3. Storage (HDD/SSD/NVMe) – Different types and sizes.

4. Graphics Card (GPU) – Integrated vs. dedicated GPUs.

5. Operating System (OS) – Windows, macOS, Linux versions.


QUESTION FIVE (20 MARKS)

a) 10 Quality Indicators for Addressing Management Concerns in Software Projects (10 marks)

Key indicators to measure and ensure software quality:

1. Defect Density – Number of defects per thousand lines of code (KLOC); e.g., 15 defects in 5,000 LOC gives 3 defects/KLOC.

2. Test Coverage – Percentage of code tested.

3. Requirement Traceability – Ensures all requirements are tested.

4. Code Review Findings – Number of issues found in peer reviews.

5. Mean Time to Repair (MTTR) – Average time to fix defects.

6. Customer Reported Bugs – Post-release defects reported by users.

7. Build Stability – Frequency of failed builds in CI/CD.

8. Performance Metrics – Response time, throughput, scalability.

9. Security Vulnerabilities – Number of security flaws detected.

10. User Satisfaction (NPS/CSAT) – Feedback from end-users.

b) Three Integration Testing Strategies (10 marks)

Integration testing ensures modules work together correctly. The three main strategies are:

1. Big Bang Integration

- All modules are combined and tested at once.

- Pros: Simple, quick for small projects.

- Cons: Difficult to isolate failures; high risk in complex systems.

2. Incremental Integration

- Modules are added and tested step-by-step.

- Types:

- Top-down – High-level modules tested first, stubs simulate lower modules.

- Bottom-up – Low-level modules tested first, drivers simulate higher modules.

- Sandwich/Hybrid – Combines top-down and bottom-up.


- Pros: Easier defect isolation, better control.

- Cons: Requires stubs/drivers; more planning needed.

3. Continuous Integration (CI)

- Frequent code integrations with automated testing.

- Pros: Early bug detection, smoother collaboration.

- Cons: Requires robust automation and DevOps setup.
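
A minimal Java sketch of the top-down idea from strategy 2: a high-level module is exercised while a stub stands in for a lower module that is not yet integrated (all class names are hypothetical):

```
// Lower-level module's interface; the real implementation is not yet integrated.
interface PaymentGateway {
    boolean charge(double amount);
}

// Stub: returns a canned response so the high-level module can be tested now.
class PaymentGatewayStub implements PaymentGateway {
    public boolean charge(double amount) { return amount > 0; }
}

// High-level module under test.
class OrderService {
    private final PaymentGateway gateway;
    OrderService(PaymentGateway gateway) { this.gateway = gateway; }
    boolean placeOrder(double total) { return gateway.charge(total); }
}

public class TopDownIntegrationSketch {
    public static void main(String[] args) {
        OrderService service = new OrderService(new PaymentGatewayStub());
        System.out.println(service.placeOrder(25.0) ? "PASS" : "FAIL"); // prints PASS
    }
}
```

In a bottom-up pass the roles reverse: a driver calls the real low-level module directly instead of a stub standing in for it.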
