
Module 5: Software Modelling and Design

Translating a Requirement Model into a Design Model


Translating a Requirement Model into a Design Model, particularly focusing on data modeling in software
engineering, involves several key steps and considerations. Below are the general steps involved in this
process:
1. Requirement Analysis: Understand the requirements thoroughly. This involves gathering requirements
from stakeholders, analyzing them, and documenting them in a clear and concise manner. It's crucial
to have a deep understanding of what the software system is supposed to do and how it should
behave.
2. Identify Entities and Relationships: From the requirements, identify the key entities (objects or
concepts) and their relationships. Entities are the real-world objects or concepts about which data is
to be collected, stored, and processed. Relationships define how entities interact with each other.
3. Conceptual Data Model: Create a conceptual data model, also known as an Entity-Relationship (ER)
diagram. This model represents the high-level entities, their attributes, and the relationships among
them. It's a visual representation that helps stakeholders understand the structure of the data and the
relationships between different entities.
4. Refine the Model: Refine the conceptual data model based on feedback from stakeholders and further
analysis. This may involve adding more detail to the model, identifying additional entities or
relationships, or clarifying ambiguities in the requirements.
5. Normalization: Normalize the data model to reduce redundancy and improve data integrity.
Normalization involves organizing the data into tables (or entities) and ensuring that each
table represents a single subject and that there are no repeating groups or partial dependencies.
6. Physical Data Model: Translate the conceptual data model into a physical data model that defines
how the data will be stored in the underlying database management system (DBMS). This involves
specifying the data types, primary keys, foreign keys, indexes, and other database-specific details
(see the sketch after this list).
7. Optimization: Optimize the data model for performance, scalability, and efficiency. This may involve
denormalization (introducing redundancy to improve performance), partitioning large tables, creating
indexes on frequently queried columns, and other techniques.
8. Validate the Model: Validate the data model to ensure that it meets the requirements and performs
as expected. This may involve testing the data model against sample data, conducting performance
tests, and verifying that it can handle the expected workload.
9. Documentation: Document the data model thoroughly, including its purpose, structure, relationships,
and any assumptions or constraints. This documentation is important for future reference and for
communicating the design decisions to other stakeholders.
10. Iterate: Data modeling is an iterative process, and it's common to refine the data model based on
feedback, changes in requirements, or new insights gained during implementation. Iterate on
the design as needed to ensure that it continues to meet the needs of the stakeholders.
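As an illustration of step 6, here is a minimal sketch of how a conceptual Customer-Order relationship might be expressed as a physical data model using SQLAlchemy's declarative syntax. The table names, columns, and types are hypothetical, chosen only to show the kind of database-specific detail this step adds.

# A minimal sketch of a physical data model, assuming a hypothetical
# Customer-Order schema; all names and types are illustrative.
from sqlalchemy import Column, ForeignKey, Integer, Numeric, String
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

class Customer(Base):
    __tablename__ = "customers"
    id = Column(Integer, primary_key=True)          # surrogate primary key
    name = Column(String(100), nullable=False)
    email = Column(String(255), unique=True)        # integrity constraint
    orders = relationship("Order", back_populates="customer")

class Order(Base):
    __tablename__ = "orders"
    id = Column(Integer, primary_key=True)
    customer_id = Column(Integer, ForeignKey("customers.id"), index=True)
    total = Column(Numeric(10, 2), nullable=False)  # explicit data type
    customer = relationship("Customer", back_populates="orders")

Indexing the foreign key column also reflects step 7: columns that are frequently joined or filtered on are good index candidates.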
By following these steps, you can effectively translate a requirement model into a design model for data
modeling in software engineering, ensuring that the resulting system accurately reflects the requirements and
effectively manages the data it needs to operate.
Analysis Modelling: Elements of the Analysis Model
In software engineering, the analysis model serves as a bridge between the requirements specification and
the design model. It captures the essential elements of the system requirements and represents them in a
form that can be used to guide the design process. Here are the key elements of an analysis model:
1. Use Cases: Use cases represent interactions between the system and external actors (users, other
systems, etc.). They describe the functionality of the system from the perspective of its users. Each use
case typically consists of a sequence of steps that describe how a particular goal or task is
accomplished within the system.
2. Functional Requirements: Functional requirements specify the behavior of the system in response to
various inputs or stimuli. These requirements describe what the system should do in terms of its
functionality, including any inputs it accepts, outputs it produces, and processing it performs.
3. Non-Functional Requirements: Non-functional requirements specify constraints on the system or
quality attributes that it must satisfy. These requirements address aspects such as
performance, scalability, reliability, usability, security, and maintainability.
4. Domain Model: The domain model represents the key concepts, entities, and relationships within the
problem domain. It identifies the relevant objects or entities and their attributes, as well as the
associations and dependencies between them. The domain model helps to establish a common
vocabulary and understanding of the problem domain among stakeholders.
5. Behavioral Models: Behavioral models describe the dynamic behavior of the system, including how it
responds to events and stimuli over time. This may include state diagrams, activity diagrams, or
sequence diagrams that illustrate the sequence of interactions between objects or components within
the system.
6. Data Dictionary: The data dictionary provides a detailed description of the data elements used within
the system, including their names, definitions, data types, and constraints. It serves as a reference for
developers and designers when working with data structures and ensures consistency in the
representation of data throughout the system (a small sketch follows this list).
7. User Interface Prototypes: User interface prototypes or mockups provide visual representations of the
system's user interface, including the layout, navigation, and interaction elements. These prototypes
help to validate the user interface design and gather feedback from stakeholders early in the
development process.
8. Constraints and Assumptions: Constraints and assumptions capture any limitations or assumptions
that affect the design and implementation of the system. This may include constraints imposed by the
environment, technology choices, regulatory requirements, or business rules.
9. Traceability Matrix: A traceability matrix establishes links between various elements of the analysis
model, such as requirements, use cases, and design artifacts. It helps to ensure that all requirements
are addressed by the design and provides traceability throughout the software development lifecycle.
10. Rationale and Decisions: The analysis model may also include documentation of design decisions,
rationale, and trade-offs made during the analysis process. This helps to provide context for the design
choices and facilitates communication among stakeholders.
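As a small illustration of the data dictionary element above, entries can be captured in a structured form. The fields and the sample element below are a plausible minimal set, not a prescribed standard.

# A minimal sketch of a data dictionary entry; the fields and the sample
# "customer_email" element are illustrative, not a prescribed standard.
from dataclasses import dataclass

@dataclass
class DataElement:
    name: str         # canonical name used throughout the system
    definition: str   # what the element means in the problem domain
    data_type: str    # logical type, e.g. "string" or "integer"
    constraints: str  # validation rules or allowed ranges

email_entry = DataElement(
    name="customer_email",
    definition="Primary contact address for a customer account",
    data_type="string",
    constraints="Valid email format, unique per customer, max 255 chars",
)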
By developing a comprehensive analysis model that encompasses these elements, software engineers can
ensure that they have a clear understanding of the system requirements and can effectively translate them
into a design that meets the needs of stakeholders.
Design Modelling: Fundamental Design Concepts
1. Abstraction: Abstraction involves simplifying complex systems by focusing on the essential aspects
while hiding unnecessary details. It allows developers to manage complexity by defining clear
boundaries between different components or layers of the system. Abstraction is achieved through
techniques such as encapsulation and modeling, which help in creating models that capture the
essential features of the system without getting bogged down in implementation specifics.
2. Information Hiding: Information hiding is a design principle that involves encapsulating the
implementation details of a module or component and exposing only the necessary interfaces
or abstractions to other parts of the system. By hiding implementation details, developers can reduce
dependencies between components, promote modularity, and improve maintainability. This concept
is closely related to encapsulation and helps in building robust and scalable software systems (a short
sketch follows this list).
3. Structure: Designing the structure of a software system involves organizing its components, modules,
and relationships in a coherent and understandable way. A well-structured design makes it easier to
understand, maintain, and evolve the system over time. This involves defining clear architectural
patterns, such as client-server architecture, layered architecture, or microservices architecture, that
guide the organization of the system and its components.
4. Modularity: Modularity is the principle of dividing a system into smaller, independent components or
modules that can be developed, tested, and maintained separately. Each module should encapsulate a
specific set of functionality and have well-defined interfaces for communication with other modules.
Modularity promotes reusability, maintainability, and scalability by allowing developers to manage
complexity and change more effectively.
5. Concurrency: Concurrency is the ability of a software system to execute multiple tasks or processes
simultaneously. Designing for concurrency involves identifying opportunities to parallelize tasks and
managing shared resources to ensure consistency and avoid race conditions. Techniques such as
threading, synchronization mechanisms, and message passing are used to enable concurrency in
software systems while maintaining correctness and performance.
6. Verification: Verification is the process of ensuring that a software system meets its
specified requirements and behaves as intended. Designing for verification involves defining clear
and testable requirements, designing robust and fault-tolerant components, and
implementing comprehensive testing strategies. This includes unit testing, integration testing, system
testing, and other validation techniques to ensure the correctness and reliability of the software
system.
7. Aesthetics: Aesthetics in software engineering refers to the design principles and guidelines that
govern the visual appearance and user experience of software systems. Aesthetic design considers
factors such as usability, simplicity, consistency, and visual appeal to create software that is intuitive,
engaging, and enjoyable to use. This involves designing user interfaces, graphics, and interactions that
enhance usability and user satisfaction while reflecting the brand identity and design ethos of the
software product.
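To make abstraction and information hiding concrete, here is a short sketch in Python. The BankAccount class is a hypothetical example, not drawn from the text: callers use the public deposit/withdraw interface, while the internal balance representation stays hidden and free to change.

# A minimal sketch of abstraction and information hiding; the
# BankAccount example is hypothetical.
class BankAccount:
    def __init__(self, opening_balance: float = 0.0):
        self._balance = opening_balance  # leading underscore: internal detail

    def deposit(self, amount: float) -> None:
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount

    def withdraw(self, amount: float) -> None:
        if amount > self._balance:
            raise ValueError("insufficient funds")
        self._balance -= amount

    @property
    def balance(self) -> float:
        # read-only view: the internal representation can change
        # without affecting callers
        return self._balance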
By considering these fundamental design concepts, software engineers can create software systems that are
well-structured, modular, maintainable, concurrent, verifiable, and aesthetically pleasing, ultimately
delivering value to users and stakeholders.
What is a data flow diagram?
A data flow diagram (DFD) maps out the flow of information for any process or system. It uses defined
symbols like rectangles, circles and arrows, plus short text labels, to show data inputs, outputs, storage points
and the routes between each destination. Data flowcharts can range from simple, even hand-drawn process
overviews, to in-depth, multi-level DFDs that dig progressively deeper into how the data is handled. They can
be used to analyze an existing system or model a new one. Like all the best diagrams and charts, a DFD can
often visually “say” things that would be hard to explain in words, and they work for both technical and
nontechnical audiences, from developer to CEO. That’s why DFDs remain so popular after all these years.
While they work well for data flow software and systems, they are less applicable nowadays to visualizing
interactive, real-time or database-oriented software or systems.
Using any convention’s DFD rules or guidelines, the symbols depict the four components of data flow diagrams.

External entity: an outside system that sends or receives data, communicating with the system being diagrammed. External entities are the sources and destinations of information entering or leaving the system. They might be an outside organization or person, a computer system or a business system. They are also known as terminators, sources and sinks, or actors, and are typically drawn on the edges of the diagram.

Process: any process that changes the data, producing an output. It might perform computations, sort data based on logic, or direct the data flow based on business rules. A short label is used to describe the process, such as “Submit payment.”

Data store: files or repositories that hold information for later use, such as a database table or a membership form. Each data store receives a simple label, such as “Orders.”

Data flow: the route that data takes between the external entities, processes and data stores. It portrays the interface between the other components and is shown with arrows, typically labeled with a short data name, like “Billing details.”
DFD rules and tips

 Each process should have at least one input and one output.
 Each data store should have at least one data flow in and one data flow out.
 Data stored in a system must go through a process.
 All processes in a DFD go to another process or a data store.
DFD levels and layers: From context diagrams to pseudocode
A data flow diagram can dive into progressively more detail by using levels and layers, zeroing in on a
particular piece. DFD levels are numbered 0, 1 or 2, and occasionally go to even Level 3 or beyond. The
necessary level of detail depends on the scope of what you are trying to accomplish.

 DFD Level 0 is also called a Context Diagram. It’s a basic overview of the whole system or process being
analyzed or modeled. It’s designed to be an at-a-glance view, showing the system as a single high-level
process, with its relationship to external entities. It should be easily understood by a wide audience,
including stakeholders, business analysts, data analysts and developers.
 DFD Level 1 provides a more detailed breakout of pieces of the Context Level Diagram. You will
highlight the main functions carried out by the system, as you break down the high-level process of the
Context Diagram into its subprocesses.
 DFD Level 2 then goes one step deeper into parts of Level 1. It may require more text to reach the
necessary level of detail about the system’s functioning.

Progression to Levels 3, 4 and beyond is possible, but going beyond Level 3 is uncommon. Doing so can
create complexity that makes it difficult to communicate, compare or model effectively.

Using DFD layers, the cascading levels can be nested directly in the diagram, providing a cleaner look with easy
access to the deeper dive.

By making the DFD sufficiently detailed, developers and designers can use it to write pseudocode, which
is a combination of English and a programming language. Pseudocode facilitates the development of the
actual code.
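For example, a sufficiently detailed process might be sketched as Python-style pseudocode. The function and field names below are hypothetical, reusing the “Submit payment,” “Orders,” and “Billing details” labels from the DFD discussion above.

# Python-style pseudocode for a hypothetical "Submit payment" process:
# the "Billing details" data flow arrives as input, the process applies
# business rules, and the result flows into the "Orders" data store.
def submit_payment(billing_details, orders_store):
    if not billing_details.get("card_number"):
        return {"status": "rejected", "reason": "missing card number"}
    order = {
        "order_id": billing_details["order_id"],
        "amount": billing_details["amount"],
        "status": "paid",
    }
    orders_store.append(order)  # data flow into the "Orders" data store
    return {"status": "accepted"}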
Structure Charts in Software Engineering

 Structure charts in software engineering are fundamental to visually representing a system’s
components and interactions.
 They are a crucial tool for developers and project managers, aiding the design, development, and
maintenance of software systems.
 This comprehensive guide will delve into the various types of structure charts in software engineering.
 A structure chart is a diagrammatic representation of a software system’s components, showcasing the
hierarchical relationship between modules.
 It is a static system representation, focusing on the structure rather than the process.
 Structure charts are primarily used in top-down modular design and structured programming, where
the system is broken down into manageable modules.
 They help visualise the system’s complexity, making it easier for developers to understand and
manage.
 They also assist in identifying potential issues or bottlenecks in the system’s design.
Types of structure charts in software engineering
1. High-level structure charts
2. Detailed structure charts
3. Transaction structure charts

Structured flowcharts build upon basic flowcharts with a focus on well-defined structures. Here are their
key components:

 Processing Steps: These boxes represent actions or calculations performed within the
program. They contain a brief description of the task being executed.
 Decision Points: These diamond-shaped symbols depict points where the program needs to
make a choice based on a condition. Typically, two arrows emerge from a decision point,
representing the possible paths based on the condition being true or false.
 Connectors: Arrows show the flow of control between different steps in the
flowchart. They indicate the sequence of execution.
 Input/Output: These parallelograms represent points where data enters or exits the program. Data
entering the flowchart (input) is displayed on the left side of the parallelogram, while data leaving
(output) is shown on the right side.
 Annotation: You can add brief text descriptions within any component or near arrows for improved
clarity, especially for complex logic.

Structured flowcharts emphasize these core elements to promote a clear and readable visualization of the
program's logic (a short sketch follows this list), focusing on:

 Sequence: Highlighting the sequential execution of steps.
 Selection: Clearly showing decision points and branching paths.
 Iteration: Representing loops and repetitive execution of a block of code.
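The three structures map directly onto ordinary code. The short sketch below is a hypothetical example showing all three at once.

# A minimal, hypothetical example of the three structured-flowchart
# elements: sequence (steps run in order), iteration (the for loop),
# and selection (the if/else branch).
def grade_scores(scores):
    total = 0                      # sequence: statements execute in order
    for score in scores:           # iteration: repeat for each input
        total += score
    average = total / len(scores)  # assumes a non-empty list
    if average >= 50:              # selection: branch on a condition
        return "pass"
    return "fail"

print(grade_scores([40, 70, 65]))  # -> "pass"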
What are Decision Tables?
A decision table can make the act of choosing much easier. It is a collection of rules laid out in rows and
columns: the conditions or circumstances that influence the best course of action are represented in the
columns, and their permutations are shown in the rows.
Decision tables are meant to help programmers make difficult choices in a systematic and precise
manner. By decomposing a choice into its component pieces, decision tables give a systematic framework
for examining and optimising the decision-making process.

Use Cases for Decision Tables: Business rule management, risk management, quality assurance, and project
management are just a few of the many software development activities that benefit greatly from the use
of decision tables. They are also applicable in fields such as finance and medicine, where nuanced
judgements are often called for.
Elements of a Decision Table
1. Conditions
Conditions are the circumstances that determine what should be done. In the decision table, each
condition column takes a yes or no answer.
2. Actions
The results of a particular set of circumstances are actions. In the decision table, they are also shown as
columns.
3. Rules
Rules are sets of criteria that, when met, dictate what must be done. They are generated by examining the
many permutations of the circumstances and are shown as rows in the decision table.
4. Rows and columns
A decision table consists of columns and rows: the conditions and actions are shown in columns,
while the alternative combinations of condition values (the rules) are shown in rows. A small sketch
follows below.
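A decision table translates naturally into a lookup structure in code. The sketch below encodes a hypothetical two-condition table (is the customer registered? is the order large?) as a Python dictionary, with one action per rule.

# A hypothetical decision table with two yes/no conditions and one
# action per rule: keys are (registered, large_order) combinations,
# values are the resulting actions.
DISCOUNT_RULES = {
    (True, True): "apply 10% discount",
    (True, False): "apply 5% discount",
    (False, True): "offer registration prompt",
    (False, False): "no discount",
}

def decide(registered: bool, large_order: bool) -> str:
    return DISCOUNT_RULES[(registered, large_order)]

print(decide(True, False))  # -> "apply 5% discount"

Because every permutation of the conditions appears as a key, the table makes it obvious when a combination has been left without a rule.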
Benefits of Using Decision Tables
1. Increased Lucidity
2. Productivity Boost
3. Enhanced Precision
4. Greater Accuracy
5. Enhanced Communication
What Is Testing?
Testing can be defined as a process of analyzing a software item to detect the differences between existing
and required conditions and to evaluate the features of the software item. In this process, we validate and
verify that a software product or application does what it’s supposed to do. The system or its components are
tested to ensure the software satisfies all specified requirements.
By executing the systems, we can identify any gaps, errors, or missing requirements in contrast with the actual
requirements. No one wants the headaches of bug fixes, late deliveries, defects, or serious malfunctions
resulting in damage or death.
The following are important reasons why software testing techniques should be incorporated into application
development:

 Identifies defects early. Developing complex applications can leave room for errors. Software testing is
imperative, as it identifies any issues and defects with the written code so they can be fixed before the
software product is delivered.
 Improves product quality. When it comes to customer appeal, delivering a quality product is an
important metric to consider. An exceptional product can only be delivered if it's tested effectively
before launch. Software testing helps the product pass quality assurance (QA) and meet the criteria
and specifications defined by the users.
 Increases customer trust and satisfaction. Testing a product throughout its development lifecycle
builds customer trust and satisfaction, as it provides visibility into the product's strong and weak
points. By the time customers receive the product, it has been tried and tested multiple times and
delivers on quality.
 Detects security vulnerabilities. Insecure application code can leave vulnerabilities that attackers can
exploit. Since most applications are online today, they can be a leading vector for cyber attacks and
should be tested thoroughly during various stages of application development. For example, a web
application published without proper software testing can easily fall victim to a cross-site scripting
attack where the attackers try to inject malicious code into the user's web browser by gaining access
through the vulnerable web application. The untested application thus becomes the vehicle for
delivering the malicious code, which could have been prevented with proper software testing.
 Helps with scalability. A type of nonfunctional software testing process, scalability testing is done to
gauge how well an application scales with increasing workloads, such as user traffic, data volume and
transaction counts. It can also identify the point where an application might stop functioning and the
reasons behind it, which may include meeting or exceeding a certain threshold, such as the total
number of concurrent app users.
 Saves money. Software development issues that go unnoticed due to a lack of software testing can
haunt organizations later with a bigger price tag. After the application launches, it can be more difficult
to trace and resolve the issues, as software patching is generally more expensive than testing during
the development stages.
What Is Black Box Testing?
Black box testing, or functional testing, is a method used to examine software functionality without
knowing its internal code structure. It can be applied at all software testing levels but is mostly employed
at the higher levels, such as acceptance and system testing.
To elaborate, a professional using this method to test an application’s functionality will only know about the
input and expected output but not about the program which helps the application reach the desired output.
The professional will only enter valid and invalid inputs and determine the expected outputs without having
any in-depth knowledge of the internal structure.
Black Box Testing Techniques: Test cases in the black box testing method are built around the specifications,
requirements, and design parameters of the software. Some reliable techniques applied to create those test
cases are:

 Boundary Value Analysis: The most commonly used black box testing technique, Boundary Value
Analysis or BVA is used to find errors at the boundaries of input values rather than at the center (see
the sketch after this list).
 Equivalence Class Partitioning: This technique is used to reduce the number of possible inputs to a
small yet effective set. Used to test an application exhaustively while avoiding redundant inputs, it is
done by dividing the inputs into classes and taking a value from each class.
 Decision Table Based Testing: This approach is the most rigorous one and is ideally implemented when
the system takes different combinations of actions under varying conditions.
 Cause-Effect Graphing Technique: This technique considers a system’s desired external behavior only.
It helps in selecting test cases by relating Causes to Effects: a Cause is a distinct input condition that
results in an internal change in the system, while an Effect is an output condition brought about by a
combination of causes.
 Error Guessing: The success of this technique is solely dependent on the experience of the tester.
There are no tools and techniques as such, but one can write test cases either while reading the
document or while encountering an undocumented error during the testing.
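As a concrete illustration of the first two techniques, the sketch below tests a hypothetical is_valid_age function that should accept ages 18 to 65 inclusive. The boundary and class-representative values follow Boundary Value Analysis and Equivalence Class Partitioning, respectively.

# Black box test sketch for a hypothetical is_valid_age(age) that should
# accept 18..65 inclusive; the tester needs no knowledge of its internals.
def is_valid_age(age: int) -> bool:
    return 18 <= age <= 65

def test_boundary_values():
    # BVA: values on and just outside the boundaries
    assert is_valid_age(18) and is_valid_age(65)
    assert not is_valid_age(17) and not is_valid_age(66)

def test_equivalence_classes():
    # ECP: one representative from each input class
    assert not is_valid_age(10)  # "too young" class
    assert is_valid_age(40)      # valid class
    assert not is_valid_age(80)  # "too old" class

test_boundary_values()
test_equivalence_classes()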

Advantages / Pros of Black Box Testing

 Unbiased tests because the designer and tester work independently.


 Tester is free from any pressure of knowledge of specific programming languages to test the reliability
and functionality of an application / software.
 Facilitates identification of contradictions and vagueness in functional specifications.
 Test is performed from a user’s point-of-view and not of the designer’s.
 Test cases can be designed immediately after the completion of specifications.
Disadvantages / Cons of Black Box Testing

 Tests can be redundant if already run by the software designer.


 Test cases are extremely difficult to be designed without clear and concise specifications.
 Testing every possible input stream is not possible because it is time-consuming and this would
eventually leave many program paths untested.
 Results might be overestimated at times.
 Cannot be used for testing complex segments of code.
What Is White Box Testing?
Applicable at the unit, integration, and system levels of a software testing phase, the method of white box
software testing tests an application at the source code level. The test cases generated as a result of this
testing method are based on design techniques like control flow testing, branch testing, path testing,
statement coverage, and decision coverage. This method of testing is one of the best methods to find the
errors in the early stages of software development. By following this method, one can test paths within and
between units, and between sub-systems when the system-level test is being pursued.

White Box Testing Techniques: The most important part of the white box testing method is code coverage
analysis, which empowers a software engineering team to find the areas of code left unexecuted by a
given set of test cases, thereby helping to improve the software application’s quality. There are different
techniques that can be used to perform code coverage analysis. Some of these are:

 Statement Coverage: This technique is used to test every possible statement at least once. Cantata++
is the preferred tool when using this technique.
 Decision Coverage: This includes testing every possible decision condition and other conditional loops
at least once. TCAT-PATH, supporting C, C++, and Java applications, is the go-to tool when this
technique is followed.
 Condition Coverage: This requires each individual condition within a decision to be evaluated to both
true and false at least once while the code is executed.
 Decision/Condition Coverage: This is a mixed technique which is implemented to satisfy both decision
coverage and condition coverage at least once while the code is executed.
 Multiple Condition Coverage: In this type of white box testing technique, every combination of
condition outcomes within a decision has to be exercised at least once (see the sketch after this list).
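To illustrate the difference between these coverage criteria, consider a hypothetical function with one decision made of two conditions. The two assertions below achieve decision coverage; exercising every true/false combination of the two conditions would be needed for multiple condition coverage.

# White box sketch: free_shipping contains one decision with two
# conditions. The two assertions make the overall decision evaluate
# both True and False (decision coverage); all four (total, member)
# combinations would be needed for multiple condition coverage.
def free_shipping(total: float, member: bool) -> bool:
    if total >= 50 or member:
        return True
    return False

assert free_shipping(60.0, False) is True    # decision is True
assert free_shipping(10.0, False) is False   # decision is False
# For multiple condition coverage, also exercise:
#   free_shipping(60.0, True) and free_shipping(10.0, True)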
Advantages / Pros of White Box Testing

 Code optimization by revealing hidden errors


 Transparency of the internal coding structure which is helpful in deriving the type of input data needed
to test an application effectively
 Covers all possible paths of a code thereby, empowering a software engineering team to conduct
thorough application testing
 Enables programmer to introspect because developers can carefully describe any new implementation
 Test cases can be easily automated
 Gives engineering-based rules to stop testing an application
Disadvantages / Cons of White Box Testing

 A complex and expensive procedure which requires the adroitness of a seasoned professional,
expertise in programming, and an understanding of the internal structure of the code
 Test scripts must be updated whenever the implementation changes, which is costly if changes
happen often
 Exhaustive testing becomes even more complex using the white box method if the application is large
 Some conditions might go untested, as it is not realistic to test every single one
 The need to create a full range of inputs to test each path and condition makes the white box testing
method time-consuming
 Defects in the code may be missed, or even introduced, despite the ground rule of analyzing the code
line by line or path by path
Unit Testing
Unit testing is one of the core functional testing types that provides the foundation for verifying software
behavior. To elaborate, unit testing focuses on testing the functionality of individual units or components of
code in isolation, to verify that each part operates correctly on its own. Compared to other types of testing,
the scope of unit tests is quite small and focuses mostly on validating things like:

 Correctness of a single function or method
 Individual classes meeting the requirements
 Logic within a specific module
Developers usually do unit testing by writing various test cases to test their code. It helps them catch bugs
early during coding before issues can multiply, saving the time and effort of dedicated software testers.
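A minimal sketch using Python's built-in unittest module; the unit under test, apply_discount, is a hypothetical example.

# A unit test sketch: apply_discount is tested in isolation, covering a
# typical case, an edge case, and an error case.
import unittest

def apply_discount(price: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount_returns_price(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_raises(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()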
Advantages of Unit Testing

 Unit testing aids in the early detection of flaws or defects in the software code, helping to
prevent them from developing into larger problems or spreading to later stages of the software
development cycle.
 Helps engineers find and quickly fix errors, which speeds up software development.
 Helps guarantee that code is of a high standard and complies with the specifications of the
software.
 Gives team members a clear and concise way to discuss the code, which enhances team
communication.
 Can assist in locating code that is reusable across different areas of the programme, helping
developers increase the code's modularity.
 Unit tests act as documentation showing how the code is supposed to operate. Developers can
use these tests as a guide for comprehending the code, which helps prevent misunderstandings
and confusion.

Disadvantages of Unit testing

 Unit testing can take a lot of time, particularly in complicated, large-scale projects.
 Unit testing might result in increased code complexity since developers must add more code to
support test scenarios.
 Passing unit tests simply validates the functionality of the tested unit; it does not take into account
how the tested unit interacts with other components of the system. An issue in production may arise if
a unit passes all tests but fails in the larger system.
 Maintaining unit tests can be difficult, particularly when code modifications happen often.
 It could be challenging to obtain 100% test coverage, particularly in complex systems with lots of
interdependent components.
 Putting in place a thorough unit testing approach may call for more resources and raise the price of
software development.
Integration Testing
Unlike unit testing, integration testing helps testers determine whether different modules or services work
together harmoniously as intended.
That means after developers have conducted and verified that individual units are working properly, software
testers combine those units and perform integration tests to test them as a group.
Specifically, integration testing helps check if the interface connections between units are working and verify
that the integrated system meets the requirements.
This process helps to confirm that the units tested by developers independently are operating together when
integrated.
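A small, hypothetical sketch of the idea: two units that each pass their own unit tests are combined, and the integration test exercises the interface between them.

# Integration test sketch: CheckoutService depends on a repository, and
# the test verifies that the two units work together through their
# interface. All names are hypothetical.
class InMemoryProductRepo:
    def __init__(self):
        self._prices = {"book": 12.0, "pen": 2.0}

    def price_of(self, product: str) -> float:
        return self._prices[product]

class CheckoutService:
    def __init__(self, repo):
        self.repo = repo  # the interface under integration test

    def total(self, items) -> float:
        return sum(self.repo.price_of(item) for item in items)

def test_checkout_integrates_with_repo():
    service = CheckoutService(InMemoryProductRepo())
    assert service.total(["book", "pen", "pen"]) == 16.0

test_checkout_integrates_with_repo()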

What is UAT?
User Acceptance Testing (UAT) is a type of testing performed by the end user or the client to verify/accept the
software system before moving the software application to the production environment. UAT is done in the
final phase of testing after functional, integration and system testing is done.
The main Purpose of UAT is to validate end to end business flow. It does not focus on cosmetic errors, spelling
mistakes or system testing. User Acceptance Testing is carried out in a separate testing environment with
production-like data setup. It is a kind of black box testing in which two or more end users are involved.
UAT is performed by –

 Client
 End users
What is Test Documentation?
Test documentation is the documentation of artifacts created before or during the
testing of software. It helps the testing team estimate the testing effort needed and
track test coverage, resources, and execution progress. It is a complete suite of
documents that allows you to describe and document test planning, test design,
test execution, and the test results drawn from the testing activity.
