Se Ut2
2) In software engineering, cohesion and coupling are two important concepts used to
measure the quality of a module or component in a system, specifically how well the modules are
designed and how they interact with each other.
Cohesion
Cohesion refers to how closely related and focused the responsibilities of a single module are. A
module with high cohesion has a single, well-defined purpose and contains only elements that
contribute directly to that purpose. High cohesion is desirable because it makes the module easier to
understand, maintain, and reuse.
Types of Cohesion:
1. Functional Cohesion (Strongest): When every element of the module contributes to a
single, well-defined task. This is the most desirable form.
2. Sequential Cohesion: When the output of one element serves as the input for the next
element.
3. Communicational Cohesion: When elements in a module operate on the same data set.
4. Procedural Cohesion: When elements are grouped because they always follow a specific
sequence of execution.
5. Temporal Cohesion: When elements are grouped because they are executed at the same
time (e.g., initialization routines).
6. Logical Cohesion: When elements perform similar functions but are not necessarily related
(e.g., multiple unrelated tasks within a module).
7. Coincidental Cohesion: The worst form of cohesion, where elements have no meaningful
relationship and are grouped arbitrarily.
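The contrast between strong and weak cohesion can be sketched in Python (the class and method names below are illustrative, not taken from any real codebase):

```python
# Functional cohesion: every element serves one well-defined purpose.
class InvoiceCalculator:
    """All methods contribute to a single task: pricing an invoice."""

    def __init__(self, tax_rate):
        self.tax_rate = tax_rate

    def subtotal(self, items):
        # items is a list of (price, quantity) pairs
        return sum(price * qty for price, qty in items)

    def total(self, items):
        return self.subtotal(items) * (1 + self.tax_rate)


# Coincidental cohesion: unrelated tasks grouped arbitrarily.
class MiscUtils:
    """Elements share nothing but the module they live in."""

    def parse_date(self, text): ...
    def send_email(self, to, body): ...
    def compress_image(self, path): ...


calc = InvoiceCalculator(tax_rate=0.5)
print(calc.total([(100.0, 2), (50.0, 1)]))  # 375.0
```

InvoiceCalculator is functionally cohesive: removing any method would break its single purpose. MiscUtils could be split apart with no loss, which is the hallmark of coincidental cohesion.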
Coupling
Coupling refers to the degree of interdependence between software modules. Lower coupling is
generally preferred, as it indicates that modules can function independently and can be modified or
replaced with minimal impact on other modules. Higher coupling means that changes in one module
are likely to affect others, making the system harder to maintain.
Types of Coupling:
1. Content Coupling (Strongest): One module directly modifies or relies on the internal data or
workings of another. This is highly undesirable.
2. Common Coupling: Modules share global data. This can lead to unforeseen side effects and
is considered poor practice in terms of modularity.
3. External Coupling: Modules share an external resource (e.g., file system, database). Changes
in the external resource affect all the modules that depend on it.
4. Control Coupling: One module controls the behavior of another by passing control
information (e.g., flags, commands). This creates a dependency between the modules.
5. Stamp Coupling (Data-structured Coupling): Modules share a composite data structure, but
each module uses only a part of it. This form of coupling can lead to unnecessary
dependencies between the modules.
6. Data Coupling: Modules share only the data that is necessary for them to interact. This is the
lowest form of coupling and is considered desirable.
7. Message Coupling (Weakest): Modules interact by passing messages (e.g., method calls,
function invocations) without sharing any internal details. This is the most desirable form of
coupling in modern architectures.
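The two ends of the coupling spectrum can be illustrated with a small Python sketch (the function and variable names are hypothetical):

```python
# Common coupling (undesirable): modules communicate through global state.
CONFIG = {"discount": 0.1}

def price_with_global(price):
    # Hidden dependency: behavior changes if any other module edits CONFIG.
    return price * (1 - CONFIG["discount"])

# Data coupling (desirable): only the data needed is passed explicitly.
def price_with_param(price, discount):
    return price * (1 - discount)

print(price_with_param(200.0, 0.25))  # 150.0
```

With data coupling, the dependency is visible in the function signature, so callers can be understood and changed in isolation; with common coupling, every reader of the global is silently affected by every writer.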
Summary
• Cohesion focuses on how well elements within a single module relate to each other (higher
cohesion is better).
• Coupling refers to how dependent modules are on one another (lower coupling is better).
Q no 4) What is Testing?
Testing is the process of executing software with the intent of finding defects and verifying
that it meets its specified requirements.
Types of Testing:
1. Manual Testing: The testing process is carried out manually by testers who explore the
application and validate its functionality.
2. Automated Testing: Automated tools are used to execute predefined tests and validate the
software.
Q no 5)In software engineering, testing levels are stages in the software testing process,
each aimed at validating different aspects of a system to ensure quality. These levels help
detect errors in a structured manner, from the smallest components to the fully integrated
system.
1. Unit Testing
• Scope: Each unit (typically a function, method, or class) is tested in isolation to ensure that it
works correctly.
• Automation: Often automated using unit testing frameworks (e.g., JUnit for Java, NUnit for
.NET).
• Goal: Detect bugs early in the development phase by verifying that individual functions or
methods work as intended.
Example: Testing a function that calculates a discount based on input price and discount rate.
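That discount example can be written as an automated unit test using Python's built-in unittest framework; calculate_discount is a hypothetical function standing in for the unit under test:

```python
import unittest

def calculate_discount(price, rate):
    """Return the discounted price; rate is a fraction between 0 and 1."""
    if not 0 <= rate <= 1:
        raise ValueError("rate must be between 0 and 1")
    return price * (1 - rate)

class TestCalculateDiscount(unittest.TestCase):
    def test_normal_discount(self):
        self.assertEqual(calculate_discount(100.0, 0.25), 75.0)

    def test_zero_discount(self):
        self.assertEqual(calculate_discount(80.0, 0.0), 80.0)

    def test_invalid_rate_rejected(self):
        with self.assertRaises(ValueError):
            calculate_discount(100.0, 1.5)
```

Run the file with `python -m unittest` to execute the tests; each test exercises the unit in isolation, which is exactly the scope of unit testing described above.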
2. Integration Testing
• Scope: After unit testing, different components are integrated and tested together to ensure
they work as a group.
• Types:
o Big Bang: All modules are integrated at once and tested together.
o Top-Down: Integration starts from the top-level modules, using stubs for lower ones.
o Bottom-Up: Integration starts from the lowest-level modules, using drivers for
higher ones.
• Goal: Detect issues in module interactions, such as data flow errors, communication issues,
or unexpected behavior.
Example: Testing the interaction between the login module and the database module to
ensure that valid credentials can authenticate users.
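A minimal sketch of that login-and-database integration test in Python, using an in-memory stand-in for the database module (all class names are illustrative):

```python
# Hypothetical "database module": an in-memory credential store.
class UserDatabase:
    def __init__(self):
        self._users = {}

    def add_user(self, username, password):
        self._users[username] = password

    def get_password(self, username):
        return self._users.get(username)

# Hypothetical "login module" that depends on the database module.
class LoginService:
    def __init__(self, db):
        self.db = db  # dependency injected, keeping coupling low

    def authenticate(self, username, password):
        stored = self.db.get_password(username)
        return stored is not None and stored == password

# Integration test: both modules exercised together.
db = UserDatabase()
db.add_user("alice", "s3cret")
service = LoginService(db)
assert service.authenticate("alice", "s3cret") is True
assert service.authenticate("alice", "wrong") is False
assert service.authenticate("bob", "s3cret") is False
print("integration checks passed")
```

Injecting the database into LoginService keeps the two modules data-coupled, so a real database module could later replace the in-memory one without restructuring the test.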
3. System Testing
• Scope: The complete, integrated system is tested as a whole.
• Types:
o Functional Testing: Verifies that the system performs its intended functions.
o Non-Functional Testing: Checks attributes such as performance, security, and
usability.
• Goal: Ensure that the system meets the specified requirements and behaves correctly in a
complete, real-world environment.
4. Acceptance Testing
• Objective: To ensure the system meets business requirements and is ready for deployment.
• Scope: Performed from the end-user’s perspective to verify that the system satisfies their
needs.
• Types:
o User Acceptance Testing (UAT): Performed by the client to validate that the system
meets their business needs.
o Operational Acceptance Testing (OAT): Verifies that the system meets operational
requirements (e.g., backups, recovery).
o Beta Testing: Conducted by real users in a real environment before the official
release.
• Goal: Ensure the software is ready for deployment and meets the business objectives, user
expectations, and other acceptance criteria.
Example: Before launching a new mobile app, users test the app to verify if it meets their
expectations and all business requirements.
Summary of Levels of Testing:

Testing Level       | Scope                            | Who Performs        | Focus
--------------------|----------------------------------|---------------------|--------------------------------------------------
Integration Testing | Integrated components or modules | Developers/Testers  | Ensure that interactions between components work as expected.
System Testing      | Complete system as a whole       | Independent testers | Validate the system's functionality, performance, and security.
Benefits of Testing at Multiple Levels:
• Early bug detection: Unit and integration testing help catch bugs early in the development
cycle, reducing the cost of fixing issues.
• Improved quality: Testing at multiple levels ensures comprehensive coverage, reducing the
risk of bugs in the final product.
Software Maintenance is the modification of software after delivery to correct faults,
improve performance, or adapt it to a changed environment. It has four main types:
1. Corrective Maintenance:
o Purpose: To fix defects and errors discovered after the software is in use.
2. Adaptive Maintenance:
o Purpose: To adapt the software to changes in its environment, such as new operating
systems, hardware, or regulations.
3. Perfective Maintenance:
o Purpose: To improve the software's functionality and performance based on user
feedback and new requirements.
4. Preventive Maintenance:
o Purpose: To make changes that prevent future issues by improving the software’s
maintainability and reliability.
Importance of Software Maintenance:
1. Longevity and Relevance: Ensures the software continues to meet current business
requirements and remains compatible with technological changes.
2. Improved User Experience: Enhances functionality and performance based on user needs
and feedback, ensuring higher user satisfaction.
3. Cost Efficiency: Regular maintenance prevents costly major overhauls or complete system
replacements.
4. Security: Continuously updates the software to patch vulnerabilities and stay ahead of
potential threats.
5. Compliance: Helps maintain compliance with evolving regulatory standards and industry
norms.
Challenges of Software Maintenance:
• High Cost: Maintenance often accounts for a large portion of software lifecycle costs due to
ongoing updates and support.
• Dependency Management: Changes in one part of the software can affect other modules,
leading to integration issues.
Overall, software maintenance is essential for ensuring that the software remains functional,
secure, and aligned with business goals, adapting to both internal and external changes.
Q no 6. Black Box Testing is a software testing method in which the tester evaluates the
functionality of the software without having any knowledge of the internal structure, design,
or implementation. The tester focuses on the input and output of the system to validate
whether the system behaves as expected. The internal workings (such as the source code) are
completely hidden from the tester, making it a high-level approach that tests the system from
a user’s perspective.
Types of Black Box Testing:
1. Functional Testing:
o Verifies that the system behaves according to its functional specifications
(e.g., testing if login works, verifying calculations).
2. Non-Functional Testing:
o Tests attributes like performance, usability, reliability, scalability, and
security, which are not directly related to specific functions but are crucial for
system quality.
3. Regression Testing:
o Ensures that previously developed and tested software continues to function
correctly after code modifications, updates, or enhancements.
4. Acceptance Testing:
o Determines whether the software meets the business requirements and if it is
ready for deployment (often carried out by the end-user or client).
5. System Testing:
o Involves testing the entire integrated system to validate that it works as
expected.
6. Smoke Testing:
o A high-level, shallow test that checks whether the most essential functions of
the software work without going deep into finer details.
Black Box Testing Techniques:
1. Equivalence Partitioning:
o This technique divides the input data into different classes (equivalence
classes) where test cases are created to cover one input from each class. This
reduces the number of test cases while still covering key scenarios.
o Example: If an input field accepts values between 1 and 100, you can divide
the data into valid and invalid partitions:
▪ Valid partition: 1 to 100 (test a number like 50).
▪ Invalid partitions: values less than 1 and greater than 100 (test 0 and
101).
2. Boundary Value Analysis (BVA):
o Focuses on testing the boundaries between different input classes. Defects
often occur at the boundaries, so inputs at the edges are carefully tested.
o Example: If an input range is 1 to 100, you would test the values at the
boundaries like 1, 100, 0, and 101.
3. Decision Table Testing:
o This technique uses a decision table to represent combinations of inputs and
their associated outcomes. It ensures that all possible combinations of inputs
are tested.
o Example: For a login system, combinations of valid/invalid usernames and
passwords can be tested.
4. State Transition Testing:
o Validates the system’s behavior as it transitions from one state to another. It is
especially useful for systems with finite states (e.g., a login process where the
states could be logged in, logged out, or locked).
5. Error Guessing:
o In this technique, experienced testers use their intuition and experience to
guess potential areas of the software that could fail. This is not a formal
technique but can often lead to the discovery of bugs that are not covered by
more structured testing.
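The first two techniques can be expressed as a table-driven test in Python; validate_input is a hypothetical function under test that accepts integers from 1 to 100:

```python
def validate_input(value):
    """Hypothetical function under test: accepts integers from 1 to 100."""
    return 1 <= value <= 100

# Equivalence partitioning: one representative value per class.
partitions = [
    (50, True),    # valid partition: 1..100
    (-10, False),  # invalid partition: below 1
    (150, False),  # invalid partition: above 100
]

# Boundary value analysis: values at and just outside the edges.
boundaries = [
    (1, True), (100, True),    # on the boundary
    (0, False), (101, False),  # just outside the boundary
]

for value, expected in partitions + boundaries:
    assert validate_input(value) == expected, f"failed for {value}"
print("all partition and boundary cases passed")
```

Seven inputs cover both techniques; without partitioning, testing every integer in and around the range would require far more cases for no additional defect-finding power.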
Example: Consider the login function of an application, where:
• If a valid username and password are provided, the user is logged in.
• If an invalid username or password is provided, the system shows an error message.
Black box test cases for this function:
1. Valid login: Provide a valid username and password, expect to be logged in.
2. Invalid username: Provide an invalid username with a valid password, expect an
error message.
3. Invalid password: Provide a valid username but an incorrect password, expect an
error message.
4. Empty fields: Leave the username or password field empty, expect an error message.
In black box testing, the tester does not know how the login function is implemented, but they
validate the functionality based on the input-output relationship.
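Those four test cases can be sketched in Python. The login function below is a hypothetical stand-in for the system under test; a real black box tester would see only its inputs and outputs:

```python
# Hypothetical system under test; from the tester's point of view this is a
# black box -- only its input/output behavior matters.
VALID_USERS = {"alice": "s3cret"}

def login(username, password):
    if not username or not password:
        return "error: empty field"
    if VALID_USERS.get(username) == password:
        return "logged in"
    return "error: invalid credentials"

# Black box test cases drawn from the specification above.
assert login("alice", "s3cret") == "logged in"          # 1. valid login
assert login("mallory", "s3cret").startswith("error")   # 2. invalid username
assert login("alice", "wrong").startswith("error")      # 3. invalid password
assert login("", "s3cret") == "error: empty field"      # 4. empty field
print("all black box login cases passed")
```

Note that the assertions reference only the stated input-output behavior; they would remain valid even if the login implementation were completely rewritten.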
Reverse Engineering is the process of analyzing an existing software system to identify its
components, their relationships, and its underlying design, often without access to the
original source documentation. Its benefits include:
1. System Understanding:
o Reverse engineering helps in understanding complex or undocumented
systems, which is especially useful when original developers or documentation
are unavailable.
2. Improvement and Optimization:
o It allows systems to be improved, optimized, or modernized by providing
insights into how they were originally designed.
3. Interoperability:
o Reverse engineering helps in ensuring compatibility with other systems,
creating adapters, or integrating legacy systems with newer technologies.
4. Security Enhancements:
o Identifying vulnerabilities or security flaws in existing systems allows
organizations to improve the security of their software or hardware.
5. Cost-Effective Solutions:
o It allows companies to replicate or create alternatives to existing products
without needing original blueprints, which can reduce development costs.
Qno.10 In Software Quality Assurance (SQA), FTR stands for Formal Technical Review.
It is a structured process used to evaluate the quality of software products, documents, or
code by reviewing them with a group of peers. The purpose of an FTR is to find defects,
ensure compliance with standards, and improve the overall quality of the software being
developed.
Types of Formal Technical Reviews:
1. Walkthroughs:
o An informal review where the author of the document or code leads the team
through it to gather feedback, but without a formal structure or checklist.
2. Inspections:
o A more formal and rigorous review where a trained moderator leads a team
through the document or code, following a structured process to identify
defects.
3. Technical Reviews:
o A semi-formal review aimed at evaluating technical content (e.g., design
documents, code) by peers who discuss issues, improvements, and alternative
approaches.
4. Audits:
o A systematic review conducted to assess if the software product or process
complies with defined standards, guidelines, or regulatory requirements.
Steps in the FTR Process:
1. Planning:
o The review process is planned, including selecting participants, scheduling
meetings, and identifying the materials to be reviewed (e.g., code, design
documents).
2. Preparation:
o Reviewers are provided with the materials in advance, allowing them time to
individually inspect the documents and identify potential defects before the
review meeting.
3. Conducting the Review:
o The team meets to discuss the identified issues, defects, and any necessary
improvements. The focus is on finding problems, not fixing them during the
review.
4. Documentation of Findings:
o Defects or improvement suggestions are documented, typically using a defect
log or review form.
5. Follow-Up:
o After the review, the author of the document or code addresses the identified
defects. A follow-up review may be conducted to ensure all issues have been
resolved.
Limitations of FTR:
1. Time-Consuming:
o Formal reviews can be time-intensive, requiring the involvement of multiple
team members and significant preparation.
2. Requires Expertise:
o Successful FTRs require experienced reviewers who understand the product
and the review process. Inexperienced reviewers may miss critical issues.
3. Can Be Costly:
o The review process can incur additional costs in terms of resources and time,
especially if conducted frequently.
Types of Software Risks:
1. Project Risks:
o Definition: Risks that directly impact the project plan and may affect the
schedule, resources, and cost.
o Examples:
▪ Poor project planning or scheduling.
▪ Lack of experienced team members.
▪ Unrealistic timelines or deadlines.
▪ Inadequate resource allocation.
▪ Scope creep (expansion of project requirements).
2. Technical Risks:
o Definition: Risks associated with the technology or technical aspects of the
project. These risks can lead to project failure if the chosen technologies or
design decisions don’t work as expected.
o Examples:
▪ New, untested technologies that may fail or be hard to implement.
▪ Integration challenges with other systems.
▪ Changing technology during the project lifecycle.
▪ Complex software functionality or technical architecture.
▪ Performance or scalability issues.
3. Business Risks:
o Definition: Risks that can affect the business side of the project. These can
cause financial losses, missed business opportunities, or affect the customer.
o Examples:
▪ Changes in market demand or business strategy.
▪ Failure to meet customer needs.
▪ Poor stakeholder management.
▪ Misalignment between project goals and business objectives.
▪ Legal or regulatory risks.
4. Operational Risks:
o Definition: Risks that affect the daily operational processes and could disrupt
project execution.
o Examples:
▪ Communication breakdown between team members.
▪ Inefficient resource utilization.
▪ Delays due to external factors (supplier delays, infrastructure issues).
▪ Poor project governance.
5. Resource Risks:
o Definition: Risks related to the availability and performance of project resources,
including team members, infrastructure, and materials.
o Examples:
▪ Unavailability of critical team members or expertise.
▪ Budget cuts or resource constraints.
▪ Dependency on third-party vendors or suppliers.
The RMMM plan (Risk Mitigation, Monitoring, and Management) is a structured approach to
handling risks in software engineering projects. It involves three key activities:
1. Risk Mitigation: Proactive steps taken to avoid a risk or reduce its probability and
impact before it occurs.
2. Risk Monitoring: Tracking identified risks throughout the project to detect early warning
signs that a risk is becoming real.
3. Risk Management: Executing the contingency plan when a risk does occur, in order to
control its impact on the project.
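A minimal risk-register sketch tying the three RMMM activities to data (the field names are illustrative, not from any standard):

```python
from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    probability: float   # estimated likelihood, 0.0 to 1.0
    impact: int          # cost if the risk occurs (e.g., person-days)
    mitigation: str      # step taken now to reduce probability or impact
    contingency: str     # management plan if the risk becomes reality

    def exposure(self):
        """Risk exposure = probability x impact, used to rank risks."""
        return self.probability * self.impact

register = [
    Risk("Key developer leaves mid-project", 0.3, 40,
         "Cross-train a second developer", "Hire a contractor"),
    Risk("Third-party API changes its interface", 0.1, 15,
         "Wrap the API behind an adapter", "Pin to the old API version"),
]

# Monitoring: periodically re-rank the register by exposure.
for risk in sorted(register, key=Risk.exposure, reverse=True):
    print(f"{risk.exposure():5.1f}  {risk.description}")
```

Each entry records a mitigation step (taken in advance) and a contingency (executed only if monitoring shows the risk has materialized), mirroring the three activities of the plan.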