Software Engineering
Version Control →Version control is the management of multiple revisions of the same item during the software
development process. The initial version of the item is given version number Ver 1.0. Subsequent changes to the item, which are
mostly bug fixes or minor additions of functionality, are given the numbers Ver 1.1 and Ver 1.2. After that, a major modification to Ver 1.2 is
given the number Ver 2.0; at the same time, a parallel version of the same item without the major modification is maintained and
given the version number Ver 1.3.
Software engineers use this version control mechanism to track the source code, documentation and other configuration items.
Commercial tools are available for version control; these perform one or more of the following tasks:
• Source code control • Revision control • Concurrent version control
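The numbering scheme above can be sketched in a few lines of code. This is purely illustrative; the class name and methods are invented for this sketch and are not part of any real version control tool.

```python
# A minimal sketch of the version-numbering scheme described above.
class Version:
    def __init__(self, major, minor):
        self.major, self.minor = major, minor

    def next_minor(self):
        """Bug fix or minor functionality: 1.0 -> 1.1 -> 1.2 ..."""
        return Version(self.major, self.minor + 1)

    def next_major(self):
        """Major modification: 1.2 -> 2.0."""
        return Version(self.major + 1, 0)

    def __str__(self):
        return f"Ver {self.major}.{self.minor}"

v10 = Version(1, 0)          # initial version
v11 = v10.next_minor()       # bug fix
v12 = v11.next_minor()       # minor functionality
v20 = v12.next_major()       # major modification
v13 = v12.next_minor()       # parallel version without the major change
```

Note how Ver 1.3 and Ver 2.0 both descend from Ver 1.2, mirroring the parallel versions described in the text.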
Software Engineering Notes READ & PASS Aney Academy
Change Control→Change control is a management process, automated to some extent, that provides a systematic mechanism for
controlling change. Changes can be initiated by the user or another stakeholder during the maintenance phase, although a change
request may even come up during the development phase of the software. The adoption and evolution of changes are carried out in
a disciplined manner. The real challenge for the change manager and project leader is to accept and accommodate all justifiable changes
without affecting the integrity of the product and without any side effects. A change control report is generated by the technical team
listing the extent of the changes and their potential side effects. A designated team called the change control authority makes the final decision,
based on the change control report, whether to accept or reject the change request. The role of the change control authority is vital
for any item which has become a baseline item. All changes to a baseline item must follow a formal change control process.
Explain the concept of cleanroom software engineering.
Cleanroom software engineering is an engineering and managerial process for the development of high-quality software with
certified reliability. Cleanroom was originally developed by Dr. Harlan Mills. The name “Cleanroom” was taken from the electronics
industry, where a physical clean room exists to prevent introduction of defects during hardware fabrication. It reflects the same
emphasis on defect prevention rather than defect removal, as well as certification of reliability for the intended environment of use.
The focus of Cleanroom involves moving from traditional software development practices to rigorous, engineering-based practices.
This software development is based on mathematical principles. It follows the box principle for specification and design. Formal
verification is used to confirm correctness of implementation of specification. Testing is based on statistical principles.
The following principles are the foundation for the Cleanroom-based software development:
• Incremental development under statistical quality control (SQC):→ Incremental development as practiced in Cleanroom provides
a basis for statistical quality control of the development process.
• Software development based on mathematical principles:→ In Cleanroom software engineering development, the key principle is
that a computer program is an expression of a mathematical function. The Box Structure Method is used for specification and
design, and functional verification is used to confirm that the design is a correct implementation of the specification.
• Software testing based on statistical principles:→ In Cleanroom, software testing is viewed as a statistical experiment. A
representative subset of all possible uses of the software is generated, and performance of the subset is used as a basis for
conclusions about general operational performance.
The following is the phase-wise strategy followed for Cleanroom software development.
• Increment planning:→ The project plan is built around the incremental strategy.
• Requirements gathering:→ Customer requirements are elicited and refined for each increment using traditional methods.
• Box structure specification:→ Box structures isolate and separate the definition of behaviour, data, and procedures at each level
of refinement.
• Formal design:→ Specifications (black-boxes) are iteratively refined to become architectural designs (state-boxes) and
component-level designs (clear boxes).
• Correctness verification:→ Correctness questions are asked and answered; formal mathematical verification is used as required.
• Code generation, inspection, verification:→ Box structures are translated into program language; inspections are used to ensure
conformance of code and boxes, as well as syntactic correctness of code; followed by correctness verification of the code.
• Statistical test planning:→ A suite of test cases is created to match the probability distribution of the projected product usage
pattern.
• Statistical use testing:→ A statistical sample of all possible test cases is used rather than exhaustive testing.
• Certification:→ Once verification, inspection, and usage testing are complete and all defects removed, the increment is certified
as ready for integration.
The project is split into the requirements and analysis, design, coding, testing, and maintenance phases. Further, it will be split into
multiple submodules.
• Flow Graph: Various modules are represented as nodes, with edges connecting the nodes. Dependency between nodes is shown by the
flow of data between them. Nodes indicate milestones and deliverables, with the corresponding module implemented. Cycles are
not allowed in the graph. Start and end nodes indicate the source and terminating nodes of the flow.
In the example, M1 is the starting module and the data flows to M2 and M3. The combined data from M2 and M3 flow to M4, and finally the project
terminates. The arrows indicate the flow of information between modules.
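The example flow graph can be represented as an adjacency list and checked for the no-cycles property with a topological sort. This is a sketch; the module names M1 through M4 come from the example above, and Kahn's algorithm is one standard way to order a DAG.

```python
# The flow graph above: M1 -> M2, M1 -> M3, M2 -> M4, M3 -> M4.
from collections import deque

edges = {"M1": ["M2", "M3"], "M2": ["M4"], "M3": ["M4"], "M4": []}

def topological_order(graph):
    """Kahn's algorithm: returns a valid order, or None if a cycle exists."""
    indegree = {n: 0 for n in graph}
    for succs in graph.values():
        for s in succs:
            indegree[s] += 1
    queue = deque(n for n, d in indegree.items() if d == 0)
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for s in graph[n]:
            indegree[s] -= 1
            if indegree[s] == 0:
                queue.append(s)
    # If some node was never reached, the graph contains a cycle.
    return order if len(order) == len(graph) else None

order = topological_order(edges)
```

A `None` result would indicate a cycle, which the flow-graph rules above forbid.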
GANTT CHART:-- A Gantt chart is a type of bar chart, first developed by Karol Adamiecki in 1896, and independently by Henry Gantt
in the 1910s, that illustrates a project schedule. Gantt charts illustrate the start and finish dates of the terminal elements and
summary elements of a project. Terminal elements and summary elements comprise the work breakdown structure of the project.
Modern Gantt charts also show the dependency (i.e., precedence network) relationships between activities. Gantt charts can be
used to show current schedule status using percent-complete shadings.
SOFTWARE RELIABILITY→Software reliability is defined as the probability that software will provide failure-free operation in a fixed
environment for a fixed interval of time. Software reliability is typically measured per unit of time, whereas probability of failure is
generally time independent.
Software reliability models commonly make the following assumptions:
• The failures are independent of each other.
• The inputs are random samples.
• Failure intervals are independent and all software failures are observed.
• Time between failures is exponentially distributed.
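The last assumption above (exponentially distributed time between failures) gives the simplest reliability model: with a constant failure rate λ, the probability of failure-free operation up to time t is R(t) = e^(−λt), and MTTF = 1/λ. The failure rate below is an assumed example value.

```python
# Sketch of the exponential reliability model implied by the
# exponential time-between-failures assumption above.
import math

def reliability(lam, t):
    """Probability of failure-free operation up to time t: R(t) = exp(-lam*t)."""
    return math.exp(-lam * t)

def mttf(lam):
    """Mean time to failure for a constant failure rate lam."""
    return 1.0 / lam

lam = 0.01                      # assumed: 0.01 failures per hour
r100 = reliability(lam, 100)    # chance of surviving 100 hours
```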
PERT (Program Evaluation and Review Technique)→PERT charts consist of a network of boxes and arrows. The boxes represent activities
and the arrows represent task dependencies. PERT is organized by events and activities or tasks. PERT charts have more advantages and
are likely to be used for more complex projects. Through a PERT chart the various task paths are defined. PERT enables the
calculation of the critical path. Each path consists of a combination of tasks which must be completed. The time and cost associated
with each task along a path are calculated, and the path that requires the greatest amount of elapsed time is the critical path.
Calculation of the critical path enables the project manager to monitor this series of tasks more closely than others and to shift resources to
it if it begins to fall behind schedule. PERT controls time and cost during the project and also facilitates finding the right balance
between completing a project on time and completing it within budget. There are thus not one but many critical paths,
depending on the permutations of the estimates for each task. This makes analysis of the critical path in PERT charts very complex. The
PERT chart representation of this site is given below:
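The critical-path calculation described above (the path with the greatest elapsed time) can be sketched as a longest-path computation over the dependency graph. Task names and durations here are invented for illustration; a real PERT analysis would derive each duration from optimistic, most likely, and pessimistic estimates.

```python
# Illustrative task network: D depends on B and C, which both depend on A.
from functools import lru_cache

durations = {"A": 3, "B": 5, "C": 2, "D": 4}
depends_on = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

@lru_cache(maxsize=None)
def earliest_finish(task):
    """Longest cumulative duration from project start to the end of `task`."""
    start = max((earliest_finish(p) for p in depends_on[task]), default=0)
    return start + durations[task]

# The project length is set by the critical path: A -> B -> D (3 + 5 + 4).
project_length = max(earliest_finish(t) for t in durations)
```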
Problems of Prototyping
1. A common problem with this approach is that people expect much from insufficient effort. As the requirements are loosely
defined, the prototype sometimes gives misleading results about the working of the software.
2. The approach of providing early feedback to the user may create a lasting impression on the user, who may then carry some negative bias
toward the completely developed software as well.
Advantages of Prototyping→Developing a prototype is a beneficial approach: the end user cannot demand fulfilment of
incomplete and ambiguous software needs from the developer.
Disadvantage of Prototyping→The disadvantage of adopting this approach is the large investment that exists in software system maintenance. It
requires additional planning for the re-engineering of the software.
How is software configuration management done in software development process? Explain.
Software Configuration Management (SCM) is extremely important from the view of deployment of software applications. SCM
controls deployment of new software versions. Software configuration management can be integrated with an automated solution
that manages distributed deployment. This helps companies bring out new releases much more efficiently and effectively. It also
reduces cost and risk, and accelerates delivery. The IT department of a present-day organisation has complex applications to manage. These
applications may be deployed at many locations and are critical systems. Thus, these systems must be maintained with very high
efficiency and at low cost and time.
Suppose IGNOU has data entry software version 1.0 for entering assignment marks, which is deployed at all the RCs. In case
its version 1.1 is to be deployed, if the rebuilt software needs to be sent and deployed manually, it would be quite troublesome.
Thus, an automatic deployment tool will be of great use under the control of SCM.
We need an effective SCM with facilities of automatic version control, access control, automatic re-building of software,
build audit, maintenance and deployment. Thus, SCM should have the following facilities:
1. Creation of a configuration, which documents a software build and enables versions to be reproduced on demand.
2. A configuration lookup scheme that enables only the changed files to be rebuilt; thus the entire application need not be rebuilt.
3. Dependency detection features, covering even hidden dependencies, thus ensuring correct behaviour of the software during partial
rebuilding.
4. The ability for team members to share existing objects, thus saving the team members' time.
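The configuration-lookup idea above (rebuild only changed files) can be sketched with content fingerprints: record a hash of each source file at the last build, and rebuild only files whose hash differs. File names and contents below are invented for illustration.

```python
# Sketch: detect which files changed since the last recorded build.
import hashlib

def fingerprint(content: str) -> str:
    """Content hash used to detect changes between builds."""
    return hashlib.sha256(content.encode()).hexdigest()

# Fingerprints recorded at the previous build.
last_build = {"main.c": fingerprint("int main(){}"),
              "util.c": fingerprint("void util(){}")}

# Current working-tree contents.
current = {"main.c": "int main(){}",             # unchanged
           "util.c": "void util(){ /*fix*/ }"}   # changed

# Only files whose fingerprint differs need rebuilding.
to_rebuild = [f for f, src in current.items()
              if last_build.get(f) != fingerprint(src)]
```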
McCall’s Quality Factor→A quality factor represents a behavioral characteristic of a system; a quality criterion is an attribute of a quality factor that is related to software development. For example,
modularity is an attribute of the architecture of a software system.
List of McCall’s Quality Criteria/factors are:
1.Correctness→•A software system is expected to meet the explicitly specified functional requirements and the implicitly expected
non-functional requirements. •If a software system satisfies all the functional requirements, the system is said to be correct.
2.Reliability→•Customers may still consider an incorrect system to be reliable if the failure rate is very small and it does not
adversely affect their mission objectives. •Reliability is a customer perception, and an incorrect software can still be considered to
be reliable.
3.Efficiency→•Efficiency concerns to what extent a software system utilizes resources, such as computing power, memory, disk
space, communication bandwidth, and energy. •A software system must utilize as little resources as possible to perform its
functionalities.
4.Integrity:→•A system’s integrity refers to its ability to withstand attacks to its security. •In other words, integrity refers to the
extent to which access to software or data by unauthorized persons or programs can be controlled.
5.Usability→•Software is considered to be usable if human users find it easy to use. •Without a good user interface a software
system may fizzle out even if it possesses many desired qualities.
6.Maintainability→•Maintenance refers to the upkeep of products in response to deterioration of their components due to
continuous use of the products. •Maintenance refers to how easily and inexpensively the maintenance tasks can be performed.
•For software products, there are three categories of maintenance activities: corrective, adaptive and perfective maintenance.
7.Testability→•Testability means the ability to verify requirements. At every stage of software development, it is necessary to
consider the testability aspect of a product. •To make a product testable, designers may have to instrument a design with
functionalities not available to the customer.
8.Flexibility→•Flexibility is reflected in the cost of modifying an operational system. •In order to measure the flexibility of a system,
one has to find an answer to the question: How easily can one add a new feature to a system.
9.Portability→•Portability of a software system refers to how easily it can be adapted to run in a different execution environment.
•Portability gives customers an option to easily move from one execution environment to another to best utilize emerging
technologies in furthering their business.
10.Reusability→•Reusability means that a significant portion of one product can be reused, perhaps with minor modifications, in
another product. •Reusability saves the cost and time of developing and testing the component being reused.
Explain GSM architecture with the help of a diagram.
Answer-->GSM Architecture--A GSM network comprises many functional units. These functions and interfaces can be broadly
divided into:
• The Mobile Station (MS) •The Base Station Subsystem (BSS) • The Network Switching Subsystem (NSS)
• The Operation Support Subsystem (OSS)
Mobile Station (MS): -- It is the mobile phone, which consists of the transceiver, the display and the processor, and is controlled by a
SIM card operating over the network. A mobile station communicates across the air interface with a base station transceiver in the
same cell in which the mobile subscriber unit is located. The MS communicates the information with the user and adapts it to the
transmission protocols of the air interface to communicate with the BSS. The MS has two elements. The Mobile Equipment (ME) refers
to the physical device, which comprises the transceiver, digital signal processors, and the antenna. The second element of the MS in
GSM is the Subscriber Identity Module (SIM). The SIM card is unique to the GSM system. It has a memory of 32 KB.
Base Station Subsystem (BSS): -- It acts as an interface between the mobile station and the network subsystem. It consists of the
Base Station Controller which controls the Base Transceiver station and acts as an interface between the mobile station and mobile
switching center. A base station subsystem consists of a base station controller and one or more base transceiver stations. Each Base
Transceiver Station defines a single cell. A cell can have a radius of between 100 m and 35 km, depending on the environment.
Network and switching subsystem (NSS): --It provides the basic network connection to the mobile stations. The basic part of the
Network Subsystem is the Mobile Service Switching Centre which provides access to different networks like ISDN, PSTN etc. It also
consists of the Home Location Register and the Visitor Location Register which provides the call routing and roaming capabilities of
GSM.
Operation and Support System (OSS): -- The OSS helps mobile networks monitor and control their complex systems. The basic reason
for developing the operation and support system is to provide customers with cost-effective support and solutions. It helps in managing
and centralizing the local and regional operational activities required for GSM networks.
SOFTWARE QUALITY→ Software quality can be defined as conformance to explicitly stated and implicitly stated functional
requirements. Good quality software satisfies both explicit and implicit requirements. Software quality is a complex mix of
characteristics and varies from application to application and with the customer who requests it. Software quality is a set of
characteristics that can be measured in all phases of software development.
Measurement of Software Quality (Quality metrics):
• Number of design changes required • Number of errors in the code • Number of bugs during different stages of testing
• Reliability metrics, such as the mean time to failure (MTTF): the expected time for which the software operates before the next
failure. This is discussed further under software reliability.
Stress Testing→ Stress testing is a non-functional testing technique that is performed as part of performance testing. During stress
testing, the system is monitored after subjecting the system to overload to ensure that the system can sustain the stress. Stress
testing is a simulation technique often used in the banking industry. It is also used on asset and liability portfolios to determine
their reactions to different financial situations. Additionally, stress tests are used to gauge how certain stressors will affect a
company, industry or specific portfolio. Stress tests are usually computer-generated simulation models that test hypothetical
scenarios; however, highly customized stress testing methodology is also often utilized. Stress testing is a useful method for
determining how a portfolio will fare during a period of financial crisis. Stress testing is most commonly used by financial
professionals for regulatory reporting and also for portfolio risk management. The recovery of the system from such phase (after
stress) is very critical as it is highly likely to happen in production environment.
Reasons for conducting Stress Testing:
• It allows the test team to monitor system performance during failures.
• To verify whether the system has saved its data before crashing.
• To verify whether the system prints meaningful error messages while crashing, or whether it prints random exceptions.
• To verify that unexpected failures do not cause security issues.
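The meaningful-error-message point above can be made concrete with a toy example: a service that, under overload, rejects work with a descriptive error rather than crashing with a random exception. The service, its capacity, and the message format are all invented for this sketch.

```python
# Toy service that degrades gracefully under overload.
class OverloadError(Exception):
    """Raised with a descriptive message instead of an arbitrary crash."""

class Service:
    CAPACITY = 100          # assumed maximum concurrent requests

    def __init__(self):
        self.active = 0

    def handle(self, request):
        if self.active >= self.CAPACITY:
            raise OverloadError(
                f"rejecting request {request!r}: {self.CAPACITY} "
                "requests already in flight")
        self.active += 1
        return "ok"

svc = Service()
results = [svc.handle(i) for i in range(Service.CAPACITY)]
try:
    svc.handle("one too many")
    overload_message = None
except OverloadError as e:
    overload_message = str(e)
```

A stress test would drive the system past `CAPACITY` and assert that the failure mode is this controlled rejection, not data loss or an unhandled exception.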
What is meant by "Software Reengineering"? Explain the phases of Software Reengineering Life cycle.
Reengineering applies reverse engineering to existing system code to extract design and requirements. Reengineering is the
examination, analysis, and alteration of an existing software system to reconstitute it in a new form, and the subsequent
implementation of the new form. The process typically encompasses a combination of reverse engineering, re-documentation,
restructuring, and forward engineering. The goal is to understand the existing software system components (specification, design,
implementation) and then to re-do them to improve the system’s functionality, performance, or implementation. Re-engineering
starts with the code and comprehensively reverse engineers by increasing the level of abstraction as far as needed toward the
conceptual level, rethinking and re-evaluating the engineering and requirements of the current code, then forward engineers using
a waterfall software development life-cycle to the target system.
Computer Aided Software Engineering (CASE) tools automate many software engineering tasks with the help of information created
using computers. CASE tools support software engineering tasks and are available for the different tasks of the Software Development
Life Cycle (SDLC).
SOFTWARE QUALITY ASSURANCE→•Software quality assurance (SQA) is a process that ensures that developed software meets and
complies with defined or standardized quality specifications. •SQA is an ongoing process within the software development life cycle
(SDLC) that routinely checks the developed software to ensure it meets desired quality measures. •SQA helps ensure the development
of high-quality software. •SQA practices are implemented in most types of software development, regardless of the underlying
software development model being used. In a broader sense, SQA incorporates and implements software testing methodologies to
test software. •Rather than checking for quality after completion, SQA processes test for quality in each phase of development until
the software is complete. •With SQA, the software development process moves into the next phase only once the current/previous
phase complies with the required quality standards. •SQA generally works from one or more industry standards that help in building
software quality guidelines and implementation strategies. These standards include ISO 9000 and Capability Maturity Model
Integration (CMMI). •A quality factor represents a behavioral characteristic of a system.
Putnam’s Model→L. H. Putnam developed a dynamic multivariate model of the software development process. It is based on the
assumption that the distribution of effort over the software development life cycle is described by the Rayleigh-Norden curve:
P = (K / T^2) * t * exp(-t^2 / (2 T^2))
P = number of persons on the project at time t
K = the area under the Rayleigh curve, which is equal to the total life-cycle effort
T = development time
The Rayleigh-Norden curve is used to derive an equation that relates lines of code delivered to other parameters like development
time and effort at any time during the project.
S = Ck * K^(1/3) * T^(4/3)
S = number of delivered lines of source code (LOC)
Ck = state-of-technology constant
K = The life cycle effort in man-years
T = Development time.
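The Rayleigh-Norden staffing curve P(t) = (K / T^2) · t · exp(−t^2 / (2T^2)) can be evaluated numerically; a useful property is that manpower peaks exactly at t = T, the development time. The effort and schedule values below are assumed example figures.

```python
# Sketch of the Rayleigh-Norden staffing curve from Putnam's model.
import math

def staffing(t, K, T):
    """P(t) = (K / T^2) * t * exp(-t^2 / (2 T^2))."""
    return (K / T**2) * t * math.exp(-t**2 / (2 * T**2))

K, T = 40.0, 2.0   # assumed: 40 person-years total effort, 2-year schedule

# Sample the curve from t = 0 to t = 8 years in steps of 0.1.
curve = [staffing(i / 10, K, T) for i in range(0, 81)]
peak_t = max(range(len(curve)), key=curve.__getitem__) / 10
```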
Cyclomatic Complexity→It is a white box testing strategy. This strategy is used to find the number of independent paths through a
program. If the Control Flow Graph (CFG) of a program is given, then the cyclomatic complexity V(G) can be computed with the help
of the formula V(G) = E – N + 2, where N = number of nodes of the CFG and E = number of edges in the CFG.
Properties of Cyclomatic complexity are:
•V(G) is the maximum number of independent paths in graph G •Inserting or deleting functional statements in G does not
affect V(G) •G has only one path if and only if V(G) = 1.
Example
int avinash(int a, int b)
{
    while (a != b) {
        if (a > b)
            a = a - b;
        else
            a = b - a;
    }
    return a;
}
In the above program, two control constructs are used, while-loop and if-then-else respectively. A complete CFG for the above
program is
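Applying V(G) = E − N + 2 to one possible drawing of that CFG gives V(G) = 3, matching the rule of thumb "number of decision predicates + 1" (the while test and the if test). The node labels below are illustrative.

```python
# One possible CFG for the program above:
#   n1 = while test, n2 = if test, n3 = a = a-b, n4 = a = b-a, n5 = return.
edges = [("n1", "n2"),   # while condition true
         ("n1", "n5"),   # while condition false -> return
         ("n2", "n3"),   # a > b branch
         ("n2", "n4"),   # else branch
         ("n3", "n1"),   # back to the while test
         ("n4", "n1")]   # back to the while test

nodes = {n for e in edges for n in e}
V = len(edges) - len(nodes) + 2   # V(G) = E - N + 2
```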
Component Based Software Engineering(CBSE)→The goal of component-based software engineering is to increase the
productivity, quality, and decrease time-to-market in software development. CBSE uses Software Engineering principles to apply the
same idea as OOP to the whole process of designing and constructing software systems. It focuses on reusing and adapting existing
components, as opposed to just coding in a particular style. A software component is a nontrivial, independent, and replaceable
part of a system that fulfils a clear function in the context of a well-defined architecture. CBSE is in many ways similar to
conventional or object-oriented software engineering. A software team establishes requirements for the system to be built using
conventional requirements elicitation techniques. The CBSE process identifies not only candidate components but also qualifies
each component’s interface, adapts components to remove architectural mismatches, assembles components into a selected
architectural style, and updates components as requirements for the system change.
Two processes occur in parallel during the CBSE process. These are:
• Domain Engineering
• Component Based Development.
Challenges for CBSE
•Dependable systems and CBSE: The use of CBD in safety-critical domains, real-time systems, and different process-control systems,
in which the reliability requirements are more rigorous, is particularly challenging. A major problem with CBD is the limited
possibility of ensuring the quality and other nonfunctional attributes of the components and thus our inability to guarantee specific
system attributes.
•Tool support: The purpose of Software Engineering is to provide practical solutions to practical problems, and the existence of
appropriate tools is essential for a successful CBSE performance. Development tools, such as Visual Basic, have proved to be
extremely successful, but many other tools are yet to appear – component selection and evaluation tools, component repositories
and tools for managing the repositories, component test tools, component-based design tools, run-time system analysis tools,
component configuration tools, etc.
•Trusted components: Because the trend is to deliver components in binary form and the component development process is
outside the control of component users, questions related to component trustworthiness become of great importance.
•Component certification: One way of classifying components is to certify them. In spite of the common belief that certification
means absolute trustworthiness, it in fact only gives the results of tests performed and a description of the environment in which
the tests were performed. While certification is a standard procedure in many domains, it is not yet established in software in
general, and especially not for software components.
•Composition predictability: Even if we assume that we can specify all the relevant attributes of components, it is not known how
these attributes determine the corresponding attributes of systems of which they are composed. The ideal approach, to derive
system attributes from component attributes is still a subject of research.
•Requirements management and component selection: Requirements management is a complex process. A problem of
requirements management is that requirements in general are incomplete, imprecise and contradictory. The process of engineering
requirements is much more complex as the possible candidate components are usually lacking one or more features which meet
the system requirements exactly.
•Long-term management of component-based systems: As component-based systems include sub-systems and components with
independent lifecycles, the problem of system evolution becomes significantly more complex. CBSE is a new approach and there is
little experience as yet of the maintainability of such systems. There is a risk that many such systems will be troublesome to
maintain.
•Development models: Although existing development models demonstrate powerful technologies, they have many ambiguous
characteristics, they are incomplete, and they are difficult to use.
•Component configurations: Complex systems may include many components which, in turn, include other components. In many
cases compositions of components will be treated as components. As soon as we begin to work with complex structures, the
problems involved with structure configuration pop up.
Similarities and differences between cleanroom and OO Paradigm
The following are the similarities and the differences between the Cleanroom software engineering development and the OO software engineering paradigm.
Similarities
• Lifecycle - both rely on incremental development. • Usage - the cleanroom usage model is similar to the OO use case. • State Machine Use - the cleanroom state box corresponds to the OO transition diagram. • Reuse - an explicit objective in both process models.
Key Differences
• Cleanroom relies on decomposition while OO relies on composition. • Cleanroom relies on formal methods while OO allows informal use case definition and testing. • The OO inheritance hierarchy is a design resource whereas the cleanroom usage hierarchy is the system itself. • OO practitioners prefer graphical representations while cleanroom practitioners prefer tabular representations. • Tool support is good for most OO processes, but tool support is usually found only in cleanroom testing, not design.
E-R diagram→An entity relationship diagram (ERD) shows the relationships of entity sets stored in a database. An entity in this context is a component of data. An ER diagram is a means of visualizing how the information a system produces is related. There are five main components of an ER-Diagram:
• Entities are represented by means of rectangles. Rectangles are named with the entity set they represent (e.g., Student, Teacher, Project).
• Attributes are the properties of entities. Attributes are represented by means of ellipses. Every ellipse represents one attribute and is directly connected to its entity (rectangle).
• Actions, which are represented by diamond shapes, show how two entities share information in the database.
• Connecting lines are solid lines that connect attributes to show the relationships of entities in the diagram.
• Cardinality specifies how many instances of an entity relate to one instance of another entity.
Data Flow diagram→A data flow diagram (DFD) illustrates how data is processed by a system in terms of inputs and outputs. As its
name indicates, its focus is on the flow of information: where data comes from, where it goes and how it gets stored. A data flow
diagram (DFD) maps out the flow of information for any process or system. A DFD shows what kind of information will be input to
and output from the system.
DFD rules
• Each process should have at least one input and an output. • Each data store should have at least one data flow in and one data
flow out. • Data stored in a system must go through a process. • All processes in a DFD go to another process or a data store.
A data flow diagram can dive into progressively more detail by using levels and layers, zeroing in on a particular
piece. DFD levels are numbered 0, 1 or 2, and occasionally go to even Level 3 or beyond. DFD Level 0 is also called a Context
Diagram. It’s a basic overview of the whole system. DFD Level 1 provides a more detailed breakout of pieces of the Context Level
Diagram. DFD Level 2 then goes one step deeper into parts of Level 1. It may require more text to reach the necessary
level of detail about the system’s functioning. Progression to Levels 3, 4 and beyond is possible, but going beyond Level 3 is
uncommon.
Benefits of change control management
•The existence of a formal process of change management helps identify the code for which each developer is responsible. An idea
is gained of the changes that affect the main product. The existence of such a mechanism provides a road map for the development
process and encourages the developers to be more involved in their work.
•Version control mechanism helps the software tester to track the previous version of the product, thereby giving emphasis on
testing of the changes made since the last approved changes. It helps the developer and tester to simultaneously work on multiple
versions of the same product and still avoid any conflict and overlapping of activity.
•The software change management process is used by the managers to keep a control on the changes to the product thereby
tracking and monitoring every change. The existence of a formal process reassures the management. It provides a professional
approach to control software changes.
•It also provides confidence to the customer regarding the quality of the product.
Various levels of testing
To check the functionality of a software application, several techniques of functional testing are used by testers and developers
during the process of software development. The Four Levels of Software Testing are
Unit Testing: -- The first level of functional testing, unit testing, is the most micro-level of testing performed on software. Its purpose is to test individual units of source code, together with associated control data, to determine whether they are fit for use. Unit testing is performed by the team of developers before the build is handed over to the testing team to formally execute the test cases. It helps developers verify the internal design and logic of the written code. One of the biggest benefits of this testing is that it can be run every time a piece of code is modified, allowing issues to be resolved as quickly as possible.
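The idea of unit testing can be sketched in a few lines of Python. The `discount_rate` function below is a hypothetical unit under test, invented for illustration; real projects would normally use a framework such as `unittest` or `pytest` rather than bare assertions.

```python
# Minimal unit-test sketch. The function under test (discount_rate) is a
# hypothetical example; real projects usually use unittest or pytest.

def discount_rate(amount):
    """Return the discount rate for a given order amount."""
    if amount < 0:
        raise ValueError("amount must be non-negative")
    return 0.10 if amount >= 1000 else 0.0

def test_discount_rate():
    # Each assertion exercises one piece of the unit's internal logic.
    assert discount_rate(500) == 0.0      # below threshold: no discount
    assert discount_rate(1000) == 0.10    # at threshold: 10% discount
    try:
        discount_rate(-1)
    except ValueError:
        pass                              # invalid input is rejected
    else:
        raise AssertionError("negative amount should raise ValueError")

test_discount_rate()
print("all unit tests passed")
```

Because the test exercises only one function in isolation, it can be re-run cheaply after every modification to that unit.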
Integration Testing: -- Integration testing combines the units of a program and tests them as a group. This level is designed to find interface defects between the modules/functions. It is particularly beneficial because it determines how efficiently the units run together. Keep in mind that no matter how efficiently each unit runs on its own, if the units aren't properly integrated, the functionality of the software program will suffer. Various testing methods can be used to run these tests, but the specific method chosen depends greatly on the way in which the units are defined.
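The difference from unit testing can be sketched with two hypothetical modules driven through their shared interface; the module and function names here are invented for illustration.

```python
# Integration-test sketch: two hypothetical "modules" tested as a group.
# The defect being hunted is a mismatch in the data one passes to the other.

def parse_order(text):
    """'Parsing' module: turns raw input into a structured order."""
    item, qty = text.split(",")
    return {"item": item.strip(), "qty": int(qty)}

def price_order(order, unit_price):
    """'Pricing' module: consumes the structure produced by parse_order."""
    return order["qty"] * unit_price

def test_parse_and_price_together():
    # Unlike a unit test, this drives both modules across their interface:
    # if parse_order changed its output format, this test would catch it.
    order = parse_order("widget, 3")
    assert price_order(order, 2.5) == 7.5

test_parse_and_price_together()
print("integration test passed")
```

Each function may pass its own unit tests and still fail here if the dictionary keys or types they exchange drift apart, which is exactly the class of interface defect this level targets.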
System Testing: --System testing is the first level in which the complete application is tested as a whole. The goal at this level is to
evaluate whether the system has complied with all of the outlined requirements and to see that it meets Quality Standards. System
testing is undertaken by independent testers who haven’t played a role in developing the program. This testing is performed in an
environment that closely mirrors production. System Testing is very important because it verifies that the application meets the
technical, functional, and business requirements that were set by the customer.
Acceptance Testing: --The final level, Acceptance testing (or User Acceptance Testing), is conducted to determine whether the
system is ready for release. During the Software development life cycle, requirements changes can sometimes be misinterpreted in
a fashion that does not meet the intended needs of the users. During this final phase, the user will test the system to find out
whether the application meets their business’ needs. Once this process has been completed and the software has passed, the
program will then be delivered to production.
Factors affecting the task set for the project
• Technical staff expertise: -- All staff members should have sufficient technical expertise for timely implementation of the project. Weekly meetings have to be conducted and status reports generated.
• Customer satisfaction: -- The customer has to be given timely information regarding the status of the project; otherwise, a communication gap may develop between the customer and the organisation.
• Technology update: -- Latest tools and existing tested modules have to be used for fast and efficient implementation of the
project.
• Full or partial implementation of the project: -- If the project is very large, then to meet market requirements the organisation may have to satisfy the customer with at least a few modules first. The remaining modules can be delivered at a later stage.
• Time allocation: -- The project has to be divided into various phases and time for each phase has to be given in terms of
person-months, module-months, etc.
• Module binding: -- Each module has to be bound to particular technical staff for the design, implementation and testing phases, and the necessary inter-dependencies have to be shown in a flow chart.
• Milestones: -- The outcome for each phase has to be mentioned in terms of quality, specifications implemented, limitations of
the module and latest updates that can be implemented (according to the market strategy).
• Validation and Verification: -- The number of modules verified according to customer specification and the number of modules
validated according to customer’s expectations are to be specified.
The following are the objectives of software change management process:
1. Configuration identification: -- Identifying the configuration items: source code, documents, test plans, etc. The process involves identifying each component, giving it a version name (a unique number for identification) and a configuration identification.
2. Configuration control: -- Controlling changes to a product: controlling the release of the product and changes to it, ensuring that the software remains consistent with respect to a baseline product.
3. Review: Reviewing the process to ensure consistency among different configuration items.
4. Status accounting: -- Recording and reporting the changes and status of the components.
5. Auditing and reporting: Validating the product and maintaining consistency of the product throughout the software life cycle.
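The identification and status-accounting objectives above can be sketched with a small record per configuration item. The class and its fields are purely illustrative, not the data model of any real SCM tool; the version numbers follow the Ver 1.0 / 1.1 / 2.0 scheme described in these notes.

```python
# Sketch of configuration identification and status accounting: each
# configuration item gets a name, a version number, and a change history.
# The data model is illustrative, not that of any real SCM tool.

class ConfigurationItem:
    def __init__(self, name, version="1.0"):
        self.name = name            # configuration identification
        self.version = version      # unique version number
        self.history = []           # status accounting: record of every change

    def change(self, new_version, description):
        """Record a controlled change and move to the new version."""
        self.history.append((self.version, new_version, description))
        self.version = new_version

item = ConfigurationItem("billing-module")        # identified as Ver 1.0
item.change("1.1", "bug fix in tax rounding")     # minor change
item.change("2.0", "major rework of invoicing")   # major modification

print(item.name, "is now Ver", item.version)
print("changes recorded:", len(item.history))
```

The `history` list is what status accounting reports on, while auditing would walk the same records to confirm the product stayed consistent across versions.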
Identification of Software Risks →A risk may be defined as a potential problem. It may or may not occur. But, it should always be
assumed that it may occur and necessary steps are to be taken. Risks can arise from various factors like improper technical
knowledge or lack of communication between team members, lack of knowledge about software products, market status, hardware
resources, competing software companies, etc.
Basis for Different Types of Software risks
• Skills or Knowledge: The persons involved in the activities of problem analysis, design, coding and testing have to be fully aware of the activities and various techniques at each phase of the software development cycle. If they have only partial knowledge or lack adequate skills, the product may face many risks at the current stage of development or at later stages.
• Interface modules: Complete software contains various modules, and each module sends information to and receives information from other modules; the data types they exchange have to match.
• Poor knowledge of tools: If the team or individual members have poor knowledge of tools used in the software product, then the
final product will have many risks, since it is not thoroughly tested.
• Programming Skills: The code developed has to be efficient, occupying less memory space and fewer CPU cycles to compute a given task.
• Management Issues: The management of the organisation should give proper training to the project staff, arrange some recreation activities, give bonuses and promotions, interact with all members of the project and try to meet their needs as best as possible.
• Extra support: The software should be able to support a set of a few extra features in the vicinity of the product to be developed.
• Customer Risks: The customer should have proper knowledge of the product needed, and should not be in a hurry to get the work done.
• External Risks: The software should have backups on CD, tapes, etc., fully encrypted with full licence facilities. Encryption is maintained so that no person outside the team can tap the source code.
Rapid Application Development (RAD)→ This model gives a quick approach to software development and is based on a linear sequential flow of various development processes. The software is constructed on a component basis, so multiple teams are given the task of developing different components. This increases the overall speed of software development and yields a fully functional system within a very short time. It follows a modular approach to development. The problem with this model is that it may not work when technical risks are high.
Iterative Enhancement Model→ This model was developed to remove the shortcomings of the waterfall model. In this model, the phases of software development remain the same, but construction and delivery are done in an iterative mode. In the first iteration, a less capable product is developed and delivered for use; this product satisfies only a subset of the requirements. In the next iteration, a product with incremental features is developed. Every iteration consists of all the phases of the waterfall model. The complete product is divided into releases, and the developer delivers the product release by release. This model is useful when less manpower is available for software development. The main disadvantage of this model is that the iterations may never end, and the user may have to wait endlessly for the final product; cost estimation is also tedious.
What is Software Quality?
Quality software is reasonably bug or defect free, delivered on time and within budget, meets requirements and/or expectations, and is maintainable. Software quality can be defined as conformance to explicitly stated and implicitly stated functional requirements. Good quality software satisfies both explicit and implicit requirements. Software quality is a complex mix of characteristics and varies from application to application and with the customer who requests it.
Causes of error in Software
• Misinterpretation of customers' requirements/communication
• Incomplete/erroneous system specification
• Error in logic
• Not following programming/software standards
• Incomplete testing
• Inaccurate documentation/no documentation
• Deviation from specification
• Error in data modeling and representation.
Attributes of Quality
The following are some of the attributes of quality:
Auditability: -- The ability of software to be tested for conformance to a standard.
Compatibility: -- The ability of two or more systems or components to perform their required functions while sharing the same hardware or software environment.
Completeness: -- The degree to which all of the software's required functions and design constraints are present and fully developed in the requirements specification, design document and code.
Consistency: -- The degree of uniformity, standardization, and freedom from contradiction among the documents or parts of a system or component.
Correctness: -- The degree to which a system or component is free from faults in its specification, design, and implementation; the degree to which software, documentation, or other items meet specified requirements.
Feasibility: -- The degree to which the requirements, design, or plans for a system or component can be implemented under existing constraints.
Modularity: -- The degree to which a system or computer program is composed of discrete components such that a change to one component has minimal impact on other components.
Predictability: -- The degree to which the functionality and performance of the software are determinable for a specified set of inputs.
Robustness: -- The degree to which a system or component can function correctly in the presence of invalid inputs or stressful environmental conditions.
Structuredness: -- The degree to which the SDD (System Design Document) and code possess a definite pattern in their interdependent parts. This implies that the design has proceeded in an orderly and systematic manner (e.g., top-down, bottom-up), the modules are cohesive, and coupling between modules is minimized.
Testability: -- The degree to which a system or component facilitates the establishment of test criteria and the performance of tests to determine whether those criteria have been met.
Traceability: -- The degree to which a relationship can be established between two or more products of the development process; the degree to which each element in a software development product establishes its reason for existing (e.g., the degree to which each element in a bubble chart references the requirement that it satisfies). For example, the system's functionality must be traceable to user requirements.
Understandability: -- The degree to which the meaning of the SRS, SDD, and code is clear and understandable to the reader.
Verifiability: -- The degree to which the SRS, SDD, and code have been written to facilitate verification and testing.
J2ME → Java supports mobile application development through J2ME, which stands for Java 2 Platform, Micro Edition. J2ME provides an environment in which applications can be developed for mobile phones, personal digital assistants and other embedded devices. Like any other Java platform, J2ME includes APIs (Application Programming Interfaces) and Java Virtual Machines. It includes a range of user interfaces, provides security and supports a large number of network protocols. J2ME also supports the write once, run anywhere concept, and is one of the popular platforms used across the world for a large number of mobile and embedded devices.
The architecture of J2ME consists of a number of components that can be used to construct a suitable Java Runtime Environment (JRE) for a set of mobile devices. When the right components are selected, the result is good memory, processing strength and I/O capabilities for the set of devices for which the JRE is being constructed.
There are several reasons for using Java technology for wireless application development. Some of them are given below:
• The Java platform is secure and safe. Code always works within the boundaries of the Java Virtual Machine, so if something goes wrong, only the JVM is affected; the device itself is never damaged.
• Automatic garbage collection is provided by Java. In its absence, the developer would have to search for memory leaks.
• Java offers an exception handling mechanism. Such a mechanism facilitates the creation of robust applications.
• Java is portable. An application developed using MIDP can be executed on any mobile device that implements the MIDP specification. Due to this portability, it is possible to move applications to specific devices over the air.
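The claim that exception handling enables robust applications can be sketched as follows. The pattern is the same in Java; it is shown here in Python for brevity, and the `safe_parse_int` helper is invented for illustration.

```python
# Sketch of how exception handling supports robustness: the parser degrades
# gracefully on invalid input instead of crashing its caller.

def safe_parse_int(text, default=0):
    """Parse an integer, returning a default on malformed input."""
    try:
        return int(text)
    except (ValueError, TypeError):
        return default   # recover instead of propagating the failure

# Valid and invalid inputs are both handled without crashing the caller.
values = [safe_parse_int(s) for s in ["42", "oops", None, "7"]]
print(values)   # [42, 0, 0, 7]
```

This is the "Robustness" quality attribute in miniature: the component keeps functioning correctly in the presence of invalid inputs.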
Waterfall Model→ The waterfall model was the first process model to be introduced. It is very simple to understand and use. In a waterfall model, each phase must be completed before the next phase can begin, and there is no overlapping between phases. The waterfall model is the earliest SDLC approach used for software development. This type of model is basically used for projects which are small and have no uncertain requirements. It was actually the first engineering approach to software development. The waterfall model provides a systematic and sequential approach to software development and is better than the build-and-fix approach, but it does not incorporate any kind of risk assessment. The name comes from the image of a waterfall on the cliff of a steep mountain: once the water has flowed over the edge of the cliff and has begun its journey down the side of the mountain, it cannot turn back. It is the same with waterfall development. Once a phase of development is completed, development proceeds to the next phase and there is no turning back.