
Name- Harpreet Singh

Class – M.Sc (IT) – I

Roll no – 4524

Assignment of Software Engineering-II


Q.1 Discuss the basic COCOMO Model to estimate development effort.

Ans. The COCOMO Model is a procedural cost-estimation model for software projects and is often used to reliably predict the various parameters associated with a project, such as size, effort, cost, time, and quality. It was proposed by Barry Boehm in 1981 and is based on a study of 63 projects, which makes it one of the best-documented models.

The key parameters that define the quality of any software product, which are also an outcome of COCOMO, are primarily effort and schedule:

1. Effort: The amount of labour required to complete a task. It is measured in person-months.

2. Schedule: The amount of time required to complete the job, which is, of course, proportional to the effort put in. It is measured in units of time such as weeks or months.

Types of Projects in the COCOMO Model

In the COCOMO model, software projects are categorised into three types based on their complexity, size, and development environment. These types are:

1. Organic: A software project is said to be organic if the required team size is adequately small, the problem is well understood and has been solved in the past, and the team members have nominal experience with the problem.

2. Semi-detached: A software project is said to be semi-detached if its vital characteristics, such as team size, experience, and knowledge of the various programming environments, lie between organic and embedded. Projects classified as semi-detached are comparatively less familiar and more difficult to develop than organic ones and require more experience, better guidance, and creativity. E.g., compilers or different embedded systems can be considered semi-detached types.

3. Embedded: A software project requiring the highest level of complexity, creativity, and experience falls under this category. Such software requires a larger team than the other two types, and the developers need to be sufficiently experienced and creative to develop such complex systems.

The six phases of detailed COCOMO are:

1. Planning and requirements: This initial phase involves defining the scope, objectives, and constraints of the project. It includes developing a project plan that outlines the schedule, resources, and milestones.

2. System design: In this phase, the high-level architecture of the software system is created. This includes defining the system's overall structure, including major components, their interactions, and the data flow between them.

3. Detailed design: This phase involves creating detailed specifications for each component of the system. It breaks down the system design into detailed descriptions of each module, including data structures, algorithms, and interfaces.

4. Module code and test: This involves writing the actual source code for each module or component as defined in the detailed design. It includes coding the functionalities, implementing algorithms, and developing interfaces.

5. Integration and test: This phase involves combining individual modules into a complete system and ensuring that they work together as intended.

6. Cost Constructive Model: The Constructive Cost Model (COCOMO) is a widely used method for estimating the cost and effort required for software development projects.

Importance of the COCOMO Model

1. Cost Estimation: To help with resource planning and project budgeting, COCOMO offers a methodical approach to software development cost estimation.

2. Resource Management: By taking team experience, project size, and complexity into account, the model helps with efficient resource allocation.

3. Project Planning: COCOMO assists in developing practical project plans that include attainable objectives, due dates, and benchmarks.

4. Risk Management: Early in the development process, COCOMO assists in identifying and mitigating potential hazards by including risk elements.

5. Support for Decisions: During project planning, the model provides a quantitative foundation for choices about scope, priorities, and resource allocation.

6. Benchmarking: COCOMO offers a benchmark for comparing and assessing various software development projects against industry standards.

7. Resource Optimization: The model helps to maximize the use of resources, which raises productivity and lowers costs.

The Basic COCOMO model is a straightforward way to estimate the effort needed for a software development project. It uses a simple mathematical formula to predict how many person-months of work are required based on the size of the project, measured in thousands of lines of code (KLOC).

It estimates the effort and time required for development using the following expressions:

E = a(KLOC)^b PM

Tdev = c(E)^d

Persons required = Effort / Time

Where,

E is the effort applied, in person-months

KLOC is the estimated size of the software product, in kilo lines of code

Tdev is the development time, in months

a, b, c, d are constants determined by the category of software project, given in the table below.

The above formulas are used for the cost estimation of the Basic COCOMO model and are also used in the subsequent models. The values of the constants a, b, c, and d in the Basic Model for the different categories of software projects are:

Software Projects    a     b     c     d
Organic              2.4   1.05  2.5   0.38
Semi-Detached        3.0   1.12  2.5   0.35
Embedded             3.6   1.20  2.5   0.32


1. The effort is measured in person-months and, as evident from the formula, depends on kilo lines of code. The development time is measured in months.

2. These formulas are used as such in the Basic Model calculations; since factors such as reliability and expertise are not taken into account, the estimate is rough.

Example of Basic COCOMO Model

Suppose that a project was estimated to be 400 KLOC (kilo lines of code). Calculate the effort and time for each of the three modes of development, using the constant values given in the table above.

Solution

From the above table we take the values of the constants a, b, c and d.

1. For Organic mode,

• effort = 2.4 × (400)^1.05 ≈ 1295 person-months

• dev. time = 2.5 × (1295)^0.38 ≈ 38 months

2. For Semi-detached mode,

• effort = 3.0 × (400)^1.12 ≈ 2462 person-months

• dev. time = 2.5 × (2462)^0.35 ≈ 38 months

3. For Embedded mode,

• effort = 3.6 × (400)^1.20 ≈ 4772 person-months

• dev. time = 2.5 × (4772)^0.32 ≈ 38 months
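The calculation above can be sketched in a few lines of Python. The constants and formulas come from the Basic COCOMO model described here; the function and variable names are illustrative, not part of any standard library:

```python
# Basic COCOMO: (a, b, c, d) constants per project category, from the table above.
COEFFS = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode):
    """Return (effort in person-months, development time in months)."""
    a, b, c, d = COEFFS[mode]
    effort = a * kloc ** b    # E = a(KLOC)^b
    tdev = c * effort ** d    # Tdev = c(E)^d
    return effort, tdev

for mode in COEFFS:
    effort, tdev = basic_cocomo(400, mode)
    print(f"{mode}: effort ~ {effort:.0f} PM, time ~ {tdev:.0f} months")
```

Running this reproduces the worked example: roughly 1295, 2462, and 4772 person-months for the three modes, each with a development time of about 38 months.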


Q.2 What is the role of metrics and measurement in Software Engineering? Explain People metrics.

Ans. Software metrics are quantitative measures used to assess various aspects of software development and maintenance. These metrics provide insights into the quality, performance, and efficiency of software projects, helping teams improve their processes and outcomes.

Role of Metrics and Measurement


Metrics and measurement play a crucial role in various aspects of
business and organizational management. They are foundational tools
for understanding, controlling, and improving processes, performance,
and outcomes. Here's a breakdown of their roles:

• Track Progress: Keep an eye on how well things are going.

• Improve Processes: Make sure everything works smoothly and efficiently.

• Make Decisions: Use data to make smart choices.

• Align with Goals: Ensure daily work supports big-picture goals.

• Accountability: Hold people responsible for their work.

• Continuous Improvement: Find areas to get better and check if changes help.

• Manage Risks: Spot problems early and fix them.

• Customer Focus: Understand and improve customer satisfaction.

• Control Costs: Keep an eye on spending and profits.

• Encourage Innovation: Support new ideas and learning.

People metrics, also known as human resource (HR) metrics or workforce metrics, are measurements used to assess and manage the performance, effectiveness, and well-being of employees within an organization. These metrics help organizations understand how their workforce contributes to overall business goals and where improvements can be made.

1. Employee Turnover Rate
Measures the percentage of employees who leave the organization over a specific period.
High turnover can indicate issues with job satisfaction, management, or workplace culture.
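As a small illustration, turnover rate is commonly computed against average headcount over the period; a minimal sketch in Python (function name and figures are illustrative):

```python
def turnover_rate(separations, headcount_start, headcount_end):
    """Turnover rate = separations / average headcount, as a percentage."""
    avg_headcount = (headcount_start + headcount_end) / 2
    return 100 * separations / avg_headcount

# e.g. 12 leavers in a year, with headcount moving from 95 to 105:
print(turnover_rate(12, 95, 105))  # 12.0 (percent)
```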

2. Employee Engagement
Assesses how committed and motivated employees are in their roles.
Commonly measured through surveys that ask about job satisfaction,
work environment, and organizational loyalty.

3. Absenteeism Rate
Tracks the frequency and duration of employee absences.
High absenteeism may signal poor job satisfaction, health issues, or
workplace stress.

4. Time to Hire
Measures the average time taken to fill open positions.
A shorter time to hire usually indicates an efficient recruitment process.

5. Employee Productivity
Evaluates the amount of work or output generated by an employee or team over a specific period.
Can be measured by output per hour, sales per employee, or other relevant performance indicators.

6. Training and Development
Tracks the number of hours spent on employee training and the effectiveness of these programs.
Includes metrics like training completion rates and improvements in job performance after training.

7. Employee Retention Rate
The percentage of employees who remain with the organization over a specific period.
High retention rates generally indicate good employee satisfaction and a positive work environment.

8. Cost per Hire
Calculates the total cost involved in hiring a new employee, including recruitment, training, and onboarding costs.
Helps assess the efficiency and cost-effectiveness of the hiring process.

9. Employee Net Promoter Score (eNPS)
Measures how likely employees are to recommend the organization as a good place to work.
A high eNPS suggests strong employee satisfaction and loyalty.
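eNPS is conventionally derived from 0-10 survey scores as the percentage of promoters (9-10) minus the percentage of detractors (0-6); a minimal sketch with made-up survey data:

```python
def enps(scores):
    """Employee Net Promoter Score from a list of 0-10 survey responses."""
    promoters = sum(1 for s in scores if s >= 9)   # scores 9-10
    detractors = sum(1 for s in scores if s <= 6)  # scores 0-6
    return 100 * (promoters - detractors) / len(scores)

print(enps([10, 9, 9, 8, 7, 7, 6, 3]))  # 12.5
```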

10. Employee Wellness
Assesses the physical and mental well-being of employees.
Metrics may include health-related absenteeism, participation in wellness programs, and stress levels.

11. Succession Planning
Evaluates how well-prepared the organization is to fill key roles with internal candidates.
Metrics include the percentage of leadership positions with identified successors.

12. Performance Appraisals
Tracks the outcomes of employee performance reviews.
Includes metrics like the percentage of employees meeting or exceeding performance expectations.

Q.3 What do you understand by Software Maintenance? What are the different types of Maintenance?

Ans. Software maintenance is the process of changing, modifying, and updating software to keep up with customer needs. Software maintenance is done after the product has launched for several reasons, including improving the software overall, correcting issues or bugs, boosting performance, and more.

Software maintenance is a natural part of the SDLC (software development life cycle). Software developers don't have the luxury of launching a product and letting it run; they constantly need to be on the lookout to both correct and improve their software to remain competitive and relevant.

Using the right software maintenance techniques and strategies is a critical part of keeping any software running for a long period of time and keeping customers and users happy.

Why is software maintenance important?

Creating a new piece of software and launching it into the world is an exciting step for any company. A lot goes into creating your software and its launch, including the actual building and coding, licensing models, marketing, and more. However, any great piece of software must be able to adapt to the times.

This means monitoring and maintaining properly. As technology is changing at the speed of light, software must keep up with the market changes and demands.

The four different types of software maintenance are each performed for different reasons and purposes. A given piece of software may have to undergo one, two, or all types of maintenance throughout its lifespan.

The four types are:


Corrective Software Maintenance
Preventative Software Maintenance
Perfective Software Maintenance
Adaptive Software Maintenance

Corrective Software Maintenance

Corrective software maintenance is the typical, classic form of maintenance (for software and anything else for that matter). Corrective software maintenance is necessary when something goes wrong in a piece of software, including faults and errors. These can have a widespread impact on the functionality of the software in general and therefore must be addressed as quickly as possible.

Many times, software vendors can address issues that require corrective maintenance thanks to bug reports that users send in. If a company can recognize and take care of faults before users discover them, this is an added advantage that will make your company seem more reputable and reliable (no one likes an error message after all).

Preventative Software Maintenance

Preventative software maintenance is looking into the future so that your software can keep working as desired for as long as possible.

This includes making necessary changes, upgrades, adaptations and more. Preventative software maintenance may address small issues which at the given time may lack significance but may turn into larger problems in the future. These are called latent faults, which need to be detected and corrected to make sure that they won't turn into effective faults.

Perfective Software Maintenance

As with any product on the market, once the software is released to the
public, new issues and ideas come to the surface. Users may see the
need for new features or requirements that they would like to see in
the software to make it the best tool available for their needs. This is
when perfective software maintenance comes into play.

Perfective software maintenance aims to adjust software by adding new features as necessary and removing features that are irrelevant or not effective in the given software. This process keeps software relevant as the market, and user needs, change.
Adaptive Software Maintenance

Adaptive software maintenance has to do with changing technologies as well as policies and rules regarding your software. These include operating system changes, cloud storage, hardware, etc. When these changes are performed, your software must adapt in order to properly meet new requirements and continue to run well.

The Software Maintenance Process

The software maintenance process involves various software maintenance techniques that can change according to the type of maintenance and the software maintenance plan in place.

Most software maintenance process models include the following steps:

1. Identification & Tracing – The process of determining what part of the software needs to be modified (or maintained). This can be user-generated or identified by the software developers themselves, depending on the situation and the specific fault.
2. Analysis – The process of analyzing the suggested modification, including understanding the potential effects of such a change. This step typically includes cost analysis to understand if the change is financially worthwhile.
3. Design – Designing the new changes using requirement specifications.
4. Implementation – The process of implementing the new modules by programmers.
5. System Testing – Before being launched, the software and system must be tested. This includes the module itself, the system and the module, and the whole system at once.
6. Acceptance Testing – Users test the modification for acceptance. This is an important step, as users can identify ongoing issues and generate recommendations for more effective implementation and changes.
7. Delivery – Software updates or, in some cases, new installation of the software. This is when the changes arrive at the customers.

Q.4 Discuss process and project metrics. How are they useful?
Ans. Process Metrics
Process metrics are key indicators used to measure the
performance, efficiency, and effectiveness of a process. These
metrics provide insights into how well a process is functioning
and where improvements can be made. They are essential for
process management, quality control, and continuous
improvement efforts.

Efficiency Metrics
Cycle Time: The total time taken to complete a process from
start to finish.
Throughput: The amount of work completed or units produced
in a given time period.
Resource Utilization: The percentage of resources (like labor,
machinery, or materials) used in the process.

Effectiveness Metrics
Quality: Measures the quality of the output, often assessed
through defect rates or customer satisfaction scores.
Compliance Rate: The degree to which the process adheres to
regulatory standards or internal guidelines.
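The efficiency metrics above can be computed directly from task timestamps; a small Python sketch with made-up data (the field layout and the 7-day window are illustrative assumptions):

```python
from datetime import datetime

# (started, finished) timestamps for three completed tasks -- illustrative data.
tasks = [
    (datetime(2024, 1, 1), datetime(2024, 1, 5)),
    (datetime(2024, 1, 2), datetime(2024, 1, 8)),
    (datetime(2024, 1, 3), datetime(2024, 1, 6)),
]

# Cycle time: average elapsed time per task, in days.
cycle_time = sum((end - start).days for start, end in tasks) / len(tasks)

# Throughput: tasks completed per unit time (here, per day over a 7-day window).
throughput = len(tasks) / 7

print(cycle_time, throughput)
```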

Project Metrics
In software engineering, project metrics are quantitative
measures that provide insights into the various aspects of
software development processes, product quality, and team
performance. They are essential for managing, improving, and
ensuring the successful delivery of software projects.

Product Metrics

These are used to assess the quality and performance of the software product itself.

- Defect Density: The number of defects per size unit (lines of code, function points, etc.).

Usefulness: Measures the quality of the software, helping teams focus on areas that may require more testing or refactoring.
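Defect density is a simple ratio; a one-line sketch (figures are illustrative):

```python
def defect_density(defects, kloc):
    """Defects per thousand lines of code (KLOC)."""
    return defects / kloc

print(defect_density(45, 30))  # 45 defects in 30 KLOC -> 1.5 defects/KLOC
```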

- Code Coverage: The percentage of code covered by automated tests.

Usefulness: Assesses how thoroughly the code is tested, ensuring the reliability of the software.

- Cyclomatic Complexity: Measures the complexity of a program's control flow.

Usefulness: Helps identify areas of the code that may be error-prone or difficult to maintain and test.

- Mean Time to Failure (MTTF): The average time the software operates before failing.

Usefulness: Useful in predicting the reliability of the software in real-world usage.

Team Performance Metrics

These evaluate the performance and productivity of the development team.

- Commit Frequency: How often team members commit code to the version control system.

Usefulness: A high frequency may indicate a healthy, active development process, while a low frequency may suggest blockers or inefficiencies.

- Pull Request (PR) Cycle Time: The time taken for a pull request to be reviewed and merged.

Usefulness: Short PR cycle times can reflect good collaboration and efficient workflows, while long cycle times might indicate a need for process improvements.

- Work in Progress (WIP): The number of tasks in progress at any given time.

Usefulness: Helps monitor if a team is multitasking too much, which can reduce productivity and increase the risk of incomplete or poor-quality work.

Business-Oriented Metrics

These track the project's alignment with business goals.

- Return on Investment (ROI): The financial return on the project compared to its cost.

Usefulness: Helps determine whether the software development efforts are yielding profitable outcomes.

- Cost Performance Index (CPI): The ratio of earned value to actual costs.

Usefulness: Indicates whether the project is within budget.

- Schedule Performance Index (SPI): The ratio of earned value to planned value.

Usefulness: Shows whether the project is ahead of or behind schedule.

How Project Metrics Are Useful:

- Improved Decision-Making: Metrics provide data-driven insights to guide decisions on resource allocation, process changes, and product improvements.

- Early Detection of Problems: Tracking metrics allows teams to identify and address issues (e.g., quality, performance, delays) early in the development process.

- Better Project Planning: By understanding past performance through metrics like velocity or lead time, teams can better estimate and plan future sprints or releases.

- Enhanced Communication: Metrics serve as a common language for stakeholders (e.g., developers, managers, customers), enabling clearer communication about project progress and challenges.

- Continuous Improvement: Process metrics help teams adopt continuous improvement practices by identifying inefficiencies and tracking the impact of changes over time.

- Increased Transparency and Accountability: Metrics make it easier to assess whether teams are meeting goals and sticking to timelines, fostering accountability across the organization.

In summary, project metrics in software engineering are essential for tracking progress, improving quality, managing risks, and ensuring that the software development process aligns with business objectives.
Q.5 Differentiate between software reliability and hardware reliability. Discuss reliability as well.

Ans. Software reliability means that the software works failure-free for a specified time period in a specified environment. Hardware reliability is the probability of the absence of any hardware-related system malfunction for a given mission; on the other hand, software reliability is the probability that the software will provide failure-free operation in a fixed environment for a fixed interval of time. The answer below discusses the difference between hardware reliability and software reliability.

Hardware Reliability

Hardware reliability is the probability that the hardware performs its function for some period of time. It may change during certain periods, such as initial burn-in or the end of useful life.

• It is expressed as Mean Time Between Failures (MTBF).

• Hardware faults are mostly physical faults.

• Thorough testing of all components cuts down on the number of faults.

• Hardware failures are mostly due to wear and tear.

• It follows the bathtub curve principle for failure rates.
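MTBF can be estimated from observed operating time and the number of failures; a minimal sketch with hypothetical figures:

```python
def mtbf(operating_hours, failures):
    """Mean Time Between Failures = total operating time / number of failures."""
    return operating_hours / failures

# e.g. a device that ran 9000 hours and failed 6 times:
print(mtbf(9000, 6))  # 1500.0 hours between failures, on average
```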

Software Reliability

Software reliability is the probability that the software will operate failure-free for a specific period of time in a specific environment. It is measured per some unit of time.

• Software reliability starts with many faults in the system when first created.

• After testing and debugging, the software enters a useful life phase.

• Useful life includes upgrades made to the system, which bring about new faults.

• The system then needs to be tested to reduce faults.

• Software reliability cannot be predicted from any physical basis, since it depends completely on the human factors in design.

Hardware Reliability vs Software Reliability

Feature                     | Hardware Reliability                                                    | Software Reliability
Source of Failure           | Failures are caused by defects in design, production, and maintenance.  | Failures are caused by defects in design.
Wear and Tear               | Failure occurs due to physical deterioration, i.e. wear and tear.       | There is no wear and tear.
Deterioration Warning       | There is prior deterioration warning before failure.                    | There is no prior deterioration warning before failure.
Failure Curve               | The bathtub curve applies to failure rates.                             | There is no bathtub curve for failure rates.
Is Failure Time-dependent?  | Failures are time-dependent.                                            | Failures are not time-dependent.
Reliability Prediction      | Reliability can be predicted from design.                               | Reliability cannot be predicted from design.
Reliability Complexity      | The complexity of hardware reliability is low.                          | The complexity of software reliability is very high.
External Environment Impact | Hardware reliability is related to environmental conditions.            | External environmental conditions do not affect software reliability.
Reliability Improvement     | Reliability can be improved through redundancy of hardware.             | Reliability cannot be improved through redundancy of software.
Maintenance                 | Repairs can be made that make hardware more reliable through maintenance. | There is no equivalent preventive maintenance for software.

Software reliability means operational reliability. It is described as the ability of a system or component to perform its required functions under stated conditions for a specific period.

Software reliability is also defined as the probability that a software system fulfills its assigned task in a given environment for a predefined number of input cases, assuming that the hardware and the input are free of error.

Software reliability is an essential aspect of software quality, together with functionality, usability, performance, serviceability, capability, installability, maintainability, and documentation. Software reliability is hard to achieve because the complexity of software tends to be high. While any system with a high degree of complexity, including software, will be hard to bring to a certain level of reliability, system developers tend to push complexity into the software layer, with the speedy growth of system size and the ease of doing so by upgrading the software.

For example, large next-generation aircraft will have over 1 million source lines of software on-board; next-generation air traffic control systems will contain between one and two million lines; the upcoming International Space Station will have over two million lines on-board and over 10 million lines of ground support software; and several significant life-critical defense systems will have over 5 million source lines of software. While the complexity of software is inversely associated with software reliability, it is directly related to other vital factors in software quality, especially functionality, capability, etc.

Q.6 What are the various types of software metrics used in software engineering? Explain in detail.

Ans. Software metrics are quantitative measures that help organizations assess various aspects of the software development process, product quality, and team performance. They play a vital role in improving software engineering practices, managing projects effectively, and ensuring that software systems meet user and business requirements.

Types of Software Metrics

Software metrics can be broadly categorized into three main types:

1. Product Metrics

2. Process Metrics

3. Project Metrics

Each category focuses on a different aspect of software engineering.

1. Product Metrics

Product metrics assess the characteristics and quality of the software product itself, such as its design, code, and performance. These metrics help in evaluating the efficiency, complexity, and reliability of the software.

Key Product Metrics:

- Size Metrics:

Measure the size of the software product in terms of lines of code (LOC), function points, or classes.

- Lines of Code (LOC): Measures the number of lines in the software's source code.

- Usefulness: Simple to measure, often used to estimate effort and cost, but doesn't account for code complexity.

- Function Points (FP): Measures functionality provided to the user, based on inputs, outputs, user interactions, and files used.

- Usefulness: More objective than LOC, especially for measuring productivity across different programming languages.

- Complexity Metrics:

Assess the complexity of the software code, which can affect maintainability, testability, and defect likelihood.

- Cyclomatic Complexity: Measures the number of linearly independent paths through the program's source code.

- Usefulness: Higher values indicate more complex code, making it harder to test and maintain, and less reliable.
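For intuition, cyclomatic complexity for a single function can be approximated as the number of decision points plus one (a common simplification of the full control-flow-graph definition). Counting the branches in a small illustrative function:

```python
def grade(score):            # decision points marked below
    if score >= 90:          # decision point 1
        return "A"
    elif score >= 75:        # decision point 2
        return "B"
    elif score >= 60:        # decision point 3
        return "C"
    return "F"

# Cyclomatic complexity = decision points + 1 = 3 + 1 = 4,
# i.e. four linearly independent paths (returning A, B, C, or F).
```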
- Halstead Complexity: Based on the number of operators and operands in the code, it measures program complexity in terms of effort, difficulty, and error proneness.

- Usefulness: Provides insights into the complexity and maintainability of code.

- Code Coverage:

Measures the percentage of code executed by automated tests, providing an indication of test completeness.

- Usefulness: Ensures more of the code is tested, leading to higher quality and fewer defects.

- Defect Density:

The number of defects per unit size (e.g., LOC, FP).

- Usefulness: Helps measure the quality of the code by identifying how many issues are present relative to its size.

- Maintainability Index:

A composite measure that reflects how easy it is to maintain the software.

- Usefulness: Helps in planning for future maintenance tasks, identifying parts of the code that may be more difficult or expensive to update.

2. Process Metrics

Process metrics are used to analyze and improve the software development process. They are aimed at ensuring that the software is developed efficiently, on schedule, and within budget.

Key Process Metrics:

- Defect Removal Efficiency (DRE):

Measures the percentage of defects detected and removed during the development process, compared to those reported by users after release.

- Usefulness: A high DRE indicates an effective testing and quality assurance process, reducing post-release defects.
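DRE as described above is the share of all known defects that were caught before release; a minimal sketch with hypothetical defect counts:

```python
def dre(pre_release_defects, post_release_defects):
    """Defect Removal Efficiency: percentage of all defects caught before release."""
    total = pre_release_defects + post_release_defects
    return 100 * pre_release_defects / total

print(dre(190, 10))  # 95.0 -> 95% of defects were removed before release
```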

- Lead Time:

The total time taken from a task being requested to its completion.

- Usefulness: Measures process efficiency, identifying how quickly work is moving through the system from concept to delivery.

- Cycle Time:

The time required to complete a specific task from the start of development to deployment.

- Usefulness: Shorter cycle times indicate a more efficient development process.

- Process Productivity:

Measures the number of functional units (such as features or function points) produced per unit effort (e.g., developer hours).

- Usefulness: Reflects how productive the development team is, helping to improve resource management.

- Error Discovery Rate:

Tracks how quickly defects are found during the development or testing phases.

- Usefulness: If too many defects are discovered late, it indicates a need to improve earlier quality checks.

- Build Frequency and Stability:

Measures how often builds are created and how stable they are (i.e., how often they pass testing).

- Usefulness: Frequent, stable builds indicate a smooth development process and minimize integration problems.

3. Project Metrics

Project metrics focus on managing the overall project, tracking its progress, schedule, costs, and resource allocation.

Key Project Metrics:

- Velocity:

Measures the amount of work a team completes in a sprint or iteration, typically in terms of story points or tasks.

- Usefulness: Helps teams forecast future work and adjust planning based on past performance.

- Cost Performance Index (CPI):

A ratio of the value of work completed (earned value) to the actual costs incurred.

- Usefulness: Indicates whether the project is within budget. A CPI > 1 indicates under-budget performance, while CPI < 1 indicates cost overruns.

- Schedule Performance Index (SPI):

A ratio of earned value to planned value, measuring the efficiency of schedule performance.

- Usefulness: Helps assess whether the project is ahead of or behind schedule. SPI > 1 means the project is ahead of schedule.
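The two earned-value ratios above are direct divisions; a minimal sketch with hypothetical project figures:

```python
def cpi(earned_value, actual_cost):
    """Cost Performance Index: EV / AC (> 1 means under budget)."""
    return earned_value / actual_cost

def spi(earned_value, planned_value):
    """Schedule Performance Index: EV / PV (> 1 means ahead of schedule)."""
    return earned_value / planned_value

# e.g. $80k of work earned, $100k actually spent, $90k planned by this date:
print(cpi(80_000, 100_000))  # 0.8 -> over budget
print(spi(80_000, 90_000))   # ~0.89 -> behind schedule
```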

- Burn Rate:

The rate at which the project is consuming its allocated budget.

- Usefulness: Tracks how quickly funds are being spent and can indicate potential budget issues early on.

- Defects Reported by Users:

Tracks the number of defects reported by users post-release.

- Usefulness: Provides an indication of software quality after deployment and the effectiveness of the testing process.

- Resource Utilization:

Measures the percentage of available resources (e.g., developers, testers) being utilized on the project.

- Usefulness: Helps project managers ensure resources are efficiently allocated, avoiding overloading or underutilization.

Specialized Metrics

Agile Metrics

Agile development teams often use specific metrics to measure efficiency and improve processes. Key Agile metrics include:

- Lead Time: The time between a task being requested and its completion. Shorter lead times mean faster delivery.

- Cycle Time: Measures how long a task takes from the start of development to its release.

- Team Velocity: Reflects the amount of work completed in a sprint, helping estimate future workloads.

- Burn-Down and Burn-Up Charts: Visual tools to track progress against goals or remaining work in a sprint or release.

DevOps Metrics

In DevOps, continuous integration and continuous delivery (CI/CD) practices are often tracked through the following metrics:

- Deployment Frequency: Measures how often code is deployed to production.

- Change Failure Rate: The percentage of deployments causing failures in production that require remediation.

- Mean Time to Recovery (MTTR): The average time it takes to recover from a failure in production.

Importance of Software Metrics

- Improved Quality: By tracking defect density and testing coverage, teams can improve the overall quality of the product.

- Better Planning and Estimation: Metrics like velocity and cycle time help teams make more accurate project estimates.

- Risk Mitigation: Metrics such as defect removal efficiency and error discovery rates help teams identify risks and resolve issues early in the development process.

- Increased Productivity: Process metrics provide insights into workflow bottlenecks, allowing teams to improve productivity.

- Enhanced Customer Satisfaction: By monitoring user-reported defects and customer satisfaction scores, teams can focus on improving the customer experience.

In conclusion, software metrics are crucial for tracking and improving all aspects of software engineering, from development processes to product quality and project management. They provide the data necessary to make informed decisions, optimize performance, and deliver high-quality software efficiently.
Q.7 Discuss the Software Size metric with the help of a suitable example.

Ans. Size metrics play a fundamental role in software development by measuring and comparing software project sizes based on various factors. The following explains the concept of size-oriented metrics, their advantages and disadvantages, and provides an example of how they are applied in software organizations.

What are Size Metrics?

Size metrics are derived by normalizing quality and productivity measures by the size of the software that has been produced. The organization builds a simple record of size measures for its software projects, based on its past experience. Size is a direct measure of software, and this is one of the simplest and earliest metrics used to measure the size of computer programs. Size-oriented metrics are also used for measuring and comparing the productivity of programmers. The size measurement is based on lines of code (LOC), where a line of code is defined as one line of text in a source file. While counting lines of code, the simplest standard is:

 Don't count blank lines

 Don't count comments

 Count everything else

Note, however, that the size-oriented measure is not a universally accepted method.

A simple set of size measures that can be developed is given below:

Size = Kilo Lines of Code (KLOC)
Effort = Person-months
Productivity = KLOC / person-month
Quality = Number of faults / KLOC
Cost = $ / KLOC
Documentation = Pages of documentation / KLOC
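The measures above can be computed directly once the raw counts are known. The figures below are illustrative, not from a real project:

```python
# Computing the size-oriented measures listed above for one project.
# All input figures are hypothetical.

loc = 10_000                      # lines of code
kloc = loc / 1000                 # size in KLOC
effort = 20                       # person-months
faults = 100                      # faults found
cost = 170_000                    # total cost in dollars
doc_pages = 400                   # pages of documentation

productivity = kloc / effort      # KLOC per person-month
quality = faults / kloc           # faults per KLOC
cost_per_kloc = cost / kloc       # dollars per KLOC
docs_per_kloc = doc_pages / kloc  # pages per KLOC

print(productivity, quality, cost_per_kloc, docs_per_kloc)
# 0.5 KLOC/person-month, 10 faults/KLOC, $17,000/KLOC, 40 pages/KLOC
```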

Disadvantages of Size-Oriented Metrics

 This measure is dependent upon the programming language used.

 It does not accommodate non-procedural languages.

 It is often very difficult to estimate LOC in the early stages of development.

 Though it is simple to measure, it is very hard for users to interpret.

 It cannot measure the size of a specification, as it is defined only on code.

Example of Size Metrics

For size-oriented metrics, a software organization maintains records in tabular form. The typical table entries are: Project Name, LOC, Effort, Cost, Pages of documentation, Errors, Defects, and the total number of people working on the project.

Project Name | LOC    | Effort | Cost ($) | Doc. (pages) | Errors | Defects | People
ABC          | 10,000 | 20     | 170      | 400          | 100    | 12      | 4
PQR          | 20,000 | 60     | 300      | 1000         | 129    | 32      | 6
XYZ          | 20,000 | 65     | 522      | 1290         | 280    | 87      | 7
Conclusion

In conclusion, size-oriented measures are a useful tool in software development because they are simple, standardized, and usable for estimation. They do, however, have drawbacks, such as dependence on the programming language and difficulty with early-stage estimates. Organizations that understand these measures and how to apply them can make better decisions and improve their software development processes.

Q.8 Define DFD.

Ans. A Data Flow Diagram (DFD) is a visual tool used in systems analysis to illustrate how data flows through a system. It represents the movement of data between different processes, data stores, external entities, and the system itself. DFDs are typically used to break down complex processes into simpler parts, making it easier to understand how information is input, processed, stored, and output by the system.

DFDs are structured into different levels:

- Context Diagrams (Level 0): Provide a high-level overview of the system.

- Level 1 DFDs: Break down major processes into sub-processes for more detailed analysis.

Key components include:

- Processes: Transform data within the system.

- Data stores: Where data is held.

- External entities: Sources or destinations of data outside the system.

- Data flows: Represent the movement of data between components.

Q.9 Define ER diagram.

Ans. An ER diagram (Entity-Relationship diagram) is a visual representation used in database design to illustrate the relationships between data entities within a system. It helps in modeling the structure of a database by showing how different entities (e.g., objects, concepts, or events) are related to each other.

Key Components of an ER Diagram:

1. Entities: Objects or concepts that have data stored about them. They are typically represented as rectangles. For example, "Customer" or "Product."

2. Attributes: Properties or characteristics of an entity. They are usually shown as ovals connected to their corresponding entity. For example, "Customer Name" or "Product Price."

3. Relationships: Describe how entities are related to each other, depicted as diamonds or labeled lines between entities. For example, a "buys" relationship between "Customer" and "Product."

4. Primary Key: A unique identifier for each entity, usually underlined in the diagram.

5. Cardinality: Specifies the number of instances of one entity that can be related to another, such as one-to-one (1:1), one-to-many (1:M), or many-to-many (M:N).

ER diagrams are a foundational tool in designing databases and help clarify how data is organized, stored, and retrieved in a system.

Q.10 Define Data Dictionary.

Ans. A Data Dictionary is a centralized repository that contains detailed information about the data used in a system or database. It describes the structure, format, meaning, and relationships of data elements within a system. Essentially, it's a reference guide that helps users and developers understand how data is organized, used, and managed.

Key Elements of a Data Dictionary:

1. Data Elements: Individual pieces of data (e.g., "Customer ID," "Order Date").

2. Data Types: The format or type of data (e.g., integer, string, date).

3. Field Length: Specifies the size or length of the data element (e.g., maximum of 50 characters).

4. Descriptions: Definitions or explanations of the data elements, including their purpose and usage.

5. Relationships: Links between data elements, showing how they relate to other fields in the system.

6. Constraints/Rules: Restrictions or conditions on data entry, such as "must be unique" or "cannot be null."

A data dictionary improves communication between stakeholders, enhances data consistency, and aids in database design, maintenance, and documentation. It ensures that everyone working with the data understands its meaning, format, and constraints.
Q.11 Define CPM (Critical Path Method).

Ans. The Critical Path Method (CPM) is a project management technique used to identify the longest sequence of dependent tasks (the "critical path") required to complete a project. It helps project managers determine the minimum time needed to finish a project and highlights tasks that cannot be delayed without impacting the overall timeline.

Key Components of CPM:

1. Activities/Tasks: The individual work elements or tasks in the project.

2. Dependencies: The relationships between tasks, showing which tasks must be completed before others can start.

3. Duration: The estimated time required to complete each task.

4. Critical Path: The longest sequence of tasks that dictates the project's total duration. Any delay in a task on the critical path will delay the entire project.

5. Slack/Float: The amount of time a task can be delayed without affecting the project's completion date. Tasks not on the critical path have slack.

Steps in CPM:

1. List all activities required to complete the project.

2. Determine dependencies between activities.

3. Estimate the duration of each activity.

4. Draw a network diagram to visualize the sequence of activities.

5. Compute the earliest and latest start and finish times for each activity to identify the critical path.

6. Calculate the slack for each activity; activities with zero slack lie on the critical path.
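The core of CPM, finding the longest-duration path through the dependency network, can be sketched in a few lines. The task names, durations, and dependencies below are hypothetical:

```python
# A minimal critical-path computation over a hypothetical task graph.
# Durations are in days; depends_on maps each task to its prerequisites.
from functools import cache

durations = {"A": 3, "B": 2, "C": 4, "D": 2}
depends_on = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

@cache
def earliest_finish(task):
    """Longest cumulative duration from project start through `task`."""
    prereqs = [earliest_finish(p) for p in depends_on[task]]
    return durations[task] + max(prereqs, default=0)

project_duration = max(earliest_finish(t) for t in durations)
print(project_duration)  # 9 days: the critical path is A -> C -> D
```

Tasks B has slack here (it could slip 2 days without delaying D), whereas A, C, and D have zero slack and form the critical path.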

Q.12 Define Software Metric.

Ans. A software metric is a quantitative measure used to assess various attributes of software development and maintenance processes. These metrics help in evaluating aspects like code quality, performance, productivity, and project management. Common types of software metrics include:

1. Code Metrics: Measure attributes of the code itself, such as lines of code (LOC), cyclomatic complexity, and code churn.

2. Performance Metrics: Assess the performance of software applications, including response time, throughput, and resource usage.

3. Process Metrics: Evaluate the efficiency and effectiveness of the software development process, such as the time taken for development tasks and defect density.

4. Quality Metrics: Focus on the quality of the software, including the number of defects found, customer satisfaction scores, and test coverage.

Software metrics are crucial for decision-making, improving processes, and ensuring high-quality software delivery.

Q.13 Define function point.

Ans. A function point is a standardized unit of measurement used to estimate the size and complexity of a software application based on its functionality. Developed by Alan Albrecht in the late 1970s, function points assess the value delivered to users by counting the various functions provided by the software. These functions typically fall into five categories:

1. External Inputs (EI): User inputs that result in data being processed by the system.

2. External Outputs (EO): Outputs that are generated by the system for users or external systems.

3. External Inquiries (EQ): Interactive inputs that result in an output without any data change.

4. Internal Logical Files (ILF): Data files maintained within the application, which store information.

5. External Interface Files (EIF): Data files used by the application but maintained by other systems.

Function points provide a way to estimate development effort, measure productivity, and evaluate the impact of changes in requirements. They are particularly useful for project management, benchmarking, and process improvement in software development.
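Counting function points over the five categories can be sketched as a weighted sum. The counts below are hypothetical, and the weights are the commonly cited average-complexity weights (assumed here; in practice each function is classified as simple, average, or complex and weighted accordingly, and the unadjusted total is then scaled by a value adjustment factor):

```python
# Unadjusted function point (UFP) count as a weighted sum.
# Counts are hypothetical; weights are the commonly cited averages.

weights = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}
counts  = {"EI": 10, "EO": 6, "EQ": 4, "ILF": 5, "EIF": 2}

ufp = sum(counts[k] * weights[k] for k in weights)
print(ufp)  # 10*4 + 6*5 + 4*4 + 5*10 + 2*7 = 150
```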

Q.14 Define RAD.

Ans. RAD, or Rapid Application Development, is a software development methodology that emphasizes quick and iterative development of applications through user feedback and prototyping. The key characteristics of RAD include:

1. Iterative Development: Instead of a linear approach, RAD involves cycles of development where prototypes are created, reviewed, and refined based on user input.

2. Prototyping: Developers create early versions of the software to gather user feedback and clarify requirements, allowing for adjustments before the final product is developed.

3. User Involvement: Continuous user participation is crucial in the RAD process, ensuring that the final product meets user needs and expectations.

4. Timeboxing: RAD focuses on completing the development within a set timeframe, which helps to keep projects on schedule and encourages rapid progress.

5. Minimal Planning: While some initial planning is necessary, RAD prioritizes flexibility and adaptability over comprehensive upfront design.

RAD is particularly suited for projects where requirements are expected to evolve or where user feedback is essential, making it popular for web applications and other rapidly changing environments.

Q.15 Define Boundary Value Analysis.

Ans. Boundary Value Analysis (BVA) is a testing technique used in software testing to identify errors at the boundaries of input ranges rather than within the ranges themselves. It is based on the premise that errors are more likely to occur at the extreme ends of input values. This technique helps ensure that the software behaves correctly at the limits of its input specifications.

Key Concepts of Boundary Value Analysis:

1. Identify Boundaries: Determine the input ranges and identify the minimum and maximum values, as well as any values just below and above these boundaries.

2. Test Cases: Create test cases that include:

- The lower boundary value

- The lower boundary value minus one

- The lower boundary value plus one

- The upper boundary value

- The upper boundary value minus one

- The upper boundary value plus one

3. Focus on Edge Cases: BVA targets edge cases, which are often prone to errors, helping to uncover potential defects that might not be found through regular testing methods.

Example:

For an input field that accepts integers from 1 to 100, the boundary values to test would include:

- 1 (minimum value)

- 0 (just below minimum)

- 2 (just above minimum)

- 100 (maximum value)

- 99 (just below maximum)

- 101 (just above maximum)

By using Boundary Value Analysis, testers can ensure that the application handles all edge cases appropriately, leading to more robust and reliable software.
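The 1-to-100 example above can be automated: generate the six boundary inputs and run each through the code under test. The `is_valid` function below is a hypothetical validator standing in for the real code under test:

```python
# Generating boundary-value test inputs for an inclusive integer range,
# then exercising a sample validator (hypothetical code under test).

def boundary_values(low, high):
    """The six classic BVA test inputs for an inclusive [low, high] range."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]

def is_valid(value, low=1, high=100):
    """Hypothetical code under test: accepts integers from low to high."""
    return low <= value <= high

for v in boundary_values(1, 100):
    print(v, is_valid(v))
# 0 False, 1 True, 2 True, 99 True, 100 True, 101 False
```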

Q.16 Give two goals of Software Engineering.

Ans. Two primary goals of software engineering are:

1. Deliver High-Quality Software: The foremost goal is to develop software that meets or exceeds user requirements, is reliable, maintainable, and performs well. This involves ensuring that the software is free of defects, functions as intended, and provides a good user experience.

2. Improve Efficiency and Productivity: Software engineering aims to optimize the development process to reduce costs and time while maintaining quality. This includes using effective methodologies, tools, and practices to streamline development, enhance collaboration among team members, and facilitate better project management.

By focusing on these goals, software engineering seeks to create software solutions that are not only effective but also sustainable and manageable over time.

Q.17 What is the difference between ER and DFD?

Ans. Entity-Relationship (ER) diagrams and Data Flow Diagrams (DFD) are both modeling tools used in system analysis and design, but they serve different purposes and represent different aspects of a system. Here are the key differences:

Entity-Relationship (ER) Diagrams:

1. Purpose: ER diagrams are used to model the data entities in a system and the relationships between them. They help in designing the database structure.

2. Components:

- Entities: Objects or concepts (e.g., Customer, Order).

- Attributes: Properties or details of entities (e.g., Customer Name, Order Date).

- Relationships: Connections between entities (e.g., a Customer places an Order).

3. Focus: Emphasizes the data and how different data entities interact with one another.

4. Static View: Represents a static view of the system's data architecture.

Data Flow Diagrams (DFD):

1. Purpose: DFDs are used to model the flow of data within a system, illustrating how data is processed and transferred between different components.

2. Components:

- Processes: Activities that transform input data into output data (e.g., Process Order).

- Data Stores: Repositories where data is stored (e.g., Customer Database).

- External Entities: Sources or destinations of data outside the system (e.g., Customers, Suppliers).

- Data Flows: Arrows indicating the movement of data between processes, data stores, and external entities.

3. Focus: Emphasizes the movement and transformation of data through the system.

4. Dynamic View: Represents a dynamic view of how data flows and is processed in the system over time.

Summary:

- ER Diagrams focus on data structure and relationships, while DFDs focus on data flow and processing.

- ER diagrams are more about the "what" (data entities), whereas DFDs address the "how" (data movement and processes).

Q.18 Name the elements of UI design.

Ans. The elements of user interface (UI) design encompass various components and principles that contribute to creating effective and engaging interfaces. Here are some key elements:

1. Layout: The arrangement of visual elements on a screen, including grids and spacing, which impacts the overall organization and flow of content.

2. Typography: The use of fonts, sizes, spacing, and styles to ensure text is readable and conveys the appropriate tone.

3. Color Scheme: The selection of colors used in the interface, influencing aesthetics, branding, and user emotions.

4. Buttons: Interactive elements that users can click to perform actions, such as submitting forms or navigating between pages.

5. Icons: Visual symbols representing actions, objects, or concepts, aiding in communication and navigation.

6. Images and Graphics: Visual content that enhances understanding, attracts attention, or provides context to the information presented.

7. Navigation: Menus, links, and pathways that guide users through the interface and help them find information or complete tasks.

8. Forms and Input Fields: Areas where users can enter data, including text boxes, checkboxes, and dropdowns, designed for ease of use.

9. Feedback: Visual or auditory responses to user actions, such as alerts, notifications, or loading indicators, which inform users about the system's state.

10. White Space: The empty space around elements that improves readability, reduces clutter, and enhances overall aesthetics.

11. Consistency: Maintaining uniform design patterns and behaviors throughout the interface to help users build familiarity and ease of use.

These elements work together to create intuitive, accessible, and visually appealing interfaces that enhance the user experience.
Q.19 What are software risks? Give examples.

Ans. Software risks are potential issues or uncertainties that can negatively impact the success of a software project. These risks can arise from various sources and may affect project timelines, costs, and quality. Here are some common categories of software risks, along with examples:

1. Technical Risks

- Example: A new technology or tool may not perform as expected, leading to delays. For instance, if a team decides to use a new programming language that none of the developers are familiar with, it could result in unforeseen challenges.

2. Project Management Risks

- Example: Poor project planning can lead to scope creep, where additional features are added without proper evaluation, causing the project to exceed its timeline and budget.

3. Requirements Risks

- Example: Misunderstanding or changing requirements can result in developing a product that doesn't meet user needs. If stakeholders frequently change their requirements, it can lead to confusion and rework.

4. Human Resource Risks

- Example: Key team members leaving the project unexpectedly can lead to knowledge gaps and delays. For example, if a senior developer departs, the team might struggle to maintain the codebase or transfer knowledge effectively.

5. External Risks

- Example: Changes in regulations or market conditions can impact the project. For instance, if new data privacy laws are enacted, a software product may need significant modifications to comply, resulting in additional costs and time.

6. Operational Risks

- Example: Infrastructure failures, such as server outages or network issues, can disrupt development or deployment. If a critical server crashes during testing, it could delay the release.

7. Quality Risks

- Example: Insufficient testing can lead to software defects. For example, if the testing phase is rushed or inadequately executed, critical bugs may be missed, leading to failures in production.

8. Security Risks

- Example: Vulnerabilities in the software can expose it to cyberattacks. For example, if the application does not properly validate user inputs, it may be susceptible to SQL injection attacks.

Summary

Identifying and managing these risks early in the software development lifecycle is essential to minimize their impact and ensure project success. Risk management strategies may include regular assessments, proactive planning, and the implementation of contingency measures.

Q.20 Two characteristics of waterfall model?

Ans. Two key characteristics of the waterfall model are:

1. Sequential Phases: The waterfall model is structured into distinct and sequential phases, such as requirements analysis, design, implementation, testing, deployment, and maintenance. Each phase must be completed before moving on to the next, creating a clear and linear progression through the development process.

2. Documentation-Driven: The waterfall model emphasizes comprehensive documentation at each stage. Detailed specifications, design documents, and test plans are created before development begins, ensuring that all requirements and designs are well understood and agreed upon before implementation. This characteristic facilitates easier tracking of progress and changes throughout the project.

Q.21 Four characteristics of SRS model?

Ans. A Software Requirements Specification (SRS) is a document that clearly defines the expected behavior and constraints of a software system. Here are four key characteristics of an effective SRS:

1. Clarity and Precision: The SRS should clearly articulate requirements in unambiguous terms, avoiding vague language. Each requirement should be understandable and easily interpretable by all stakeholders, including developers, testers, and clients.

2. Completeness: The document must cover all aspects of the software system, including functional and non-functional requirements, interfaces, constraints, and user requirements. A complete SRS ensures that no critical features are overlooked.

3. Consistency: Requirements in the SRS should be consistent with one another, without contradictions. Inconsistencies can lead to confusion and errors in the development process, making it essential to cross-check requirements against each other.

4. Traceability: The SRS should allow for traceability of requirements throughout the software development lifecycle. Each requirement should be uniquely identifiable and traceable to design elements, implementation, and testing, facilitating easier validation and verification.

Q.22 What is coupling? Give an example.

Ans. Coupling in software engineering refers to the degree of interdependence between software modules. It's an important concept in system design, as it affects maintainability, scalability, and the ease of understanding the code.

Types of Coupling:

1. Content Coupling: One module directly accesses the content of another. This is the tightest coupling and is usually discouraged.

- Example: Module A modifies a variable in Module B.

2. Common Coupling: Multiple modules share the same global data.

- Example: Modules A and B both access and modify a global variable.

3. External Coupling: Modules depend on externally imposed data formats or communication protocols.

- Example: Module A relies on the output format of a web service that Module B provides.

4. Control Coupling: One module controls the behavior of another by passing information that influences its execution.

- Example: Module A passes a flag to Module B to dictate how it should execute.

5. Data Coupling: Modules share data through parameters, with no shared state or global variables.

- Example: Module A passes data to Module B via function parameters.

6. Message Coupling: Modules communicate through well-defined interfaces and messages, reducing interdependencies.

- Example: Module A sends a message to Module B without knowing the internal workings of B.

Best Practices:

- Aim for low coupling (e.g., data or message coupling) to enhance modularity and maintainability.

- Use interfaces or abstract classes to define clear contracts between modules, promoting loose coupling.

By understanding and managing coupling, developers can create systems that are easier to maintain and evolve over time.
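The contrast between control coupling and data coupling can be shown in a few lines. The functions below are hypothetical modules for illustration:

```python
# Control coupling vs. data coupling (hypothetical modules).

# Control coupling: the caller passes a flag that steers behavior,
# so the caller must know about the callee's internal branches.
def format_report(data, as_html):
    if as_html:
        return "<p>" + ", ".join(data) + "</p>"
    return ", ".join(data)

# Data coupling: each function does one job and receives only the
# data it needs, so the modules can change independently.
def format_plain(data):
    return ", ".join(data)

def format_html(data):
    return "<p>" + format_plain(data) + "</p>"

print(format_plain(["a", "b"]))  # a, b
print(format_html(["a", "b"]))   # <p>a, b</p>
```

Both versions produce the same output; the data-coupled version is simply easier to test and modify because no flag ties the caller to the callee's internals.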

Q.23 Explain feasibility.

Ans. Feasibility refers to the assessment of the practicality and viability of a proposed project or solution. It evaluates whether the project can be successfully implemented within constraints such as time, budget, resources, and technology. Feasibility studies are conducted to determine the likelihood of success and to identify potential obstacles.

Types of Feasibility:

1. Technical Feasibility: Evaluates whether the technology and resources required for the project are available and if the technical approach is sound.

- Example: Assessing if the existing software infrastructure can support a new application.

2. Economic Feasibility: Analyzes the cost-effectiveness of the project. It includes cost-benefit analysis to ensure that the benefits outweigh the costs.

- Example: Estimating the return on investment (ROI) for a new marketing campaign.

3. Operational Feasibility: Examines whether the organization can implement and sustain the project within its operational framework.

- Example: Determining if the staff has the necessary skills and if existing processes can accommodate new systems.

4. Legal Feasibility: Considers any legal implications of the project, such as regulatory compliance, contracts, and intellectual property rights.

- Example: Ensuring that a new product complies with industry regulations.

5. Schedule Feasibility: Assesses whether the project can be completed within the desired timeframe.

- Example: Evaluating whether a software development project can meet a specific launch date.

Importance of Feasibility Studies:

- Risk Mitigation: Helps identify potential issues early, allowing for better planning and risk management.

- Resource Allocation: Aids in determining whether resources should be committed to a project.

- Informed Decision-Making: Provides stakeholders with the information needed to make sound decisions regarding project approval or rejection.

Q.24 Explain the Decomposition Technique.

Ans. The decomposition technique is a method used in problem-solving and system design where a complex problem or system is broken down into smaller, more manageable components. This approach helps simplify analysis, design, and implementation by focusing on individual parts rather than the entire system at once. Decomposition is widely used in various fields, including software development, project management, and systems engineering.

Key Steps in the Decomposition Technique:

1. Identify the Problem or System: Clearly define the overall problem or system that needs to be addressed.

2. Break Down the System: Divide the system into smaller, manageable components or sub-problems. This can be done in various ways, such as by functionality, processes, or levels of abstraction.

3. Analyze Components: Examine each component or sub-problem individually. This involves understanding the requirements, functionality, and interactions with other components.

4. Define Interfaces: Specify how the components will interact with one another. This includes defining inputs, outputs, and communication protocols.

5. Integrate Components: Once all components have been designed and developed, they can be integrated to form the complete system.

6. Iterate as Needed: Decomposition can be an iterative process. If new complexities arise, further decomposition may be necessary.

Examples of Decomposition:

1. Software Development: In building a software application, you might decompose the system into modules such as user interface, database management, and business logic. Each module can be developed and tested independently.

2. Project Management: A project can be decomposed into tasks and subtasks. For example, a project to build a house might include tasks like design, site preparation, foundation, framing, and finishing.

3. Data Analysis: A complex dataset can be decomposed into smaller subsets for easier analysis, allowing analysts to focus on specific trends or patterns within the data.

Benefits of Decomposition:

- Simplification: Makes complex problems easier to understand and manage.

- Parallel Development: Allows different teams to work on separate components simultaneously, speeding up the development process.

- Improved Quality: Isolating components helps in focused testing and debugging, leading to higher overall quality.

- Flexibility: Easier to modify or replace components without affecting the entire system.
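The steps above can be sketched on a tiny scale: a checkout process decomposed into validation and pricing components, each independently testable, then integrated through a small explicit interface. All names and figures below are illustrative:

```python
# Decomposing a hypothetical checkout process into components with a
# small, explicit interface between them.

def validate_order(items):
    """Sub-problem 1: reject empty orders or non-positive quantities."""
    return bool(items) and all(qty > 0 for _, qty in items)

def price_order(items, prices):
    """Sub-problem 2: pure pricing logic, testable in isolation."""
    return sum(prices[name] * qty for name, qty in items)

def checkout(items, prices):
    """Integration step: compose the components via their interfaces."""
    if not validate_order(items):
        raise ValueError("invalid order")
    return price_order(items, prices)

total = checkout([("pen", 2), ("book", 1)], {"pen": 10, "book": 150})
print(total)  # 170
```

Because each component has a narrow interface, either one can be replaced (say, a new pricing rule) without touching the other.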

Q.25 What do you understand by agile modelling

Ans. Agile modeling is a practice within Agile software


development that emphasizes flexibility, collaboration, and
iterative progress. It focuses on creating a shared understanding
of the system through simple, effective models while promoting
communication among team members and stakeholders. Agile
modeling encourages the use of lightweight, adaptable models
that can evolve as the project progresses.

Key Principles of Agile Modeling:

1. Simplicity: Create only the models necessary to convey


essential information, avoiding unnecessary complexity.

2. Collaboration: Involve team members, stakeholders, and


users in the modeling process to ensure that everyone shares a
common understanding of the system.

3. Iterative Development: Models should evolve over time,


reflecting changes in requirements and understanding as the
project progresses.

4. Feedback-Driven: Seek continuous feedback from


stakeholders to refine models and ensure they align with user
needs and expectations.

5. Just-In-Time Modeling: Develop models as needed, rather than


upfront, to reduce waste and ensure relevance.
Common Types of Models in Agile Modeling:

1. Use Case Diagrams: Visual representations of user


interactions with the system, highlighting the functionalities
from the user’s perspective.

2. Class Diagrams: Illustrations of the system's structure,


showing classes, attributes, and relationships.

3. Activity Diagrams: Flowcharts that depict workflows and processes, illustrating how tasks are carried out within the
system.

4. Sequence Diagrams: Diagrams that show the order of interactions between objects or components over time.

5. Prototypes: Early versions of the product that demonstrate key features and allow for user feedback before full-scale
development.

Benefits of Agile Modeling:

- Enhanced Communication: By using visual models, team
members can better understand requirements and design,
leading to more effective collaboration.

- Adaptability: Models can be quickly adjusted to accommodate changing requirements or new insights, supporting Agile’s
iterative nature.

- Reduced Risk: Continuous feedback and incremental development help identify and address issues early in the
process, minimizing the risk of project failure.

- Increased Engagement: Involving stakeholders in modeling fosters buy-in and ensures that the final product aligns with user needs.

Q.26 State two characteristics of the prototype model.

Ans. The prototype model in software development is characterized by the following two key characteristics:

1. Iterative Development: The prototype model emphasizes creating a working prototype of the software early in the
development process. This prototype is continuously refined
based on user feedback. Developers and stakeholders can
interact with the prototype to understand the requirements
better, allowing for iterative improvements and adjustments.
This process helps in clarifying requirements that may not have
been fully understood at the outset.

2. User Involvement: User feedback is crucial in the prototype model. By involving users throughout the development process,
the team can gather insights and suggestions to shape the final
product. This high level of user engagement ensures that the
end result aligns closely with user needs and expectations,
leading to higher satisfaction and usability.

These characteristics make the prototype model particularly effective for projects where requirements are unclear or likely to
evolve.
Q.27 What is an ER diagram? Explain with an example.

Ans. ER (Entity Relationship) Diagram in DBMS

o ER model stands for Entity-Relationship model. It is a high-level data model. This model is used to define the data
elements and relationship for a specified system.

o It develops a conceptual design for the database. It also provides a very simple and easy-to-design view of data.

o In ER modeling, the database structure is portrayed as a diagram called an entity-relationship diagram.

For example, suppose we design a school database. In this database, the student will be an entity with attributes like address, name, id, age, etc. The address can be another entity with attributes like city, street name, pin code, etc., and there will be a relationship between them.
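As a rough sketch of this school-database example (attribute names and sample values below are invented for illustration), the two entities and the relationship between them might look like this in code:

```python
from dataclasses import dataclass

@dataclass
class Address:           # entity: Address
    city: str            # attribute
    street: str          # attribute
    pin_code: str        # attribute

@dataclass
class Student:           # entity: Student
    student_id: int      # key attribute (uniquely identifies a student)
    name: str            # attribute
    age: int             # attribute
    address: Address     # relationship: Student -- lives at --> Address

home = Address(city="Delhi", street="MG Road", pin_code="110001")
s = Student(student_id=1, name="Asha", age=20, address=home)
print(s.address.city)  # Delhi
```

Here the `address` field plays the role of the relationship line in the diagram, while `student_id` corresponds to the underlined key attribute.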
Components of an ER Diagram

1. Entity:

An entity may be any object, class, person or place. In the ER diagram, an entity is represented as a rectangle.

Consider an organization as an example: manager, product, employee, department, etc. can be taken as entities.
a. Weak Entity

An entity that depends on another entity is called a weak entity. The weak entity doesn't contain any key attribute of its own. The weak entity is represented by a double rectangle.

2. Attribute

The attribute is used to describe a property of an entity. An ellipse is used to represent an attribute.

For example, id, age, contact number, name, etc. can be attributes of a student.

a. Key Attribute
The key attribute is used to represent the main characteristics of
an entity. It represents a primary key. The key attribute is
represented by an ellipse with the text underlined.

b. Composite Attribute

An attribute that is composed of many other attributes is known as a composite attribute. A composite attribute is represented by an ellipse, with its component attributes shown as ellipses connected to it.

c. Multivalued Attribute
An attribute can have more than one value. Such attributes are known as multivalued attributes. A double oval is used to represent a multivalued attribute.

For example, a student can have more than one phone number.

d. Derived Attribute

An attribute that can be derived from another attribute is known as a derived attribute. It is represented by a dashed ellipse.

For example, a person's age changes over time and can be derived from another attribute like date of birth.
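A derived attribute is computed rather than stored. A minimal sketch (the function name here is an assumption) of deriving age from the stored date-of-birth attribute:

```python
from datetime import date

def age_from_dob(dob, today):
    # Age is derived: subtract one year if this year's birthday
    # has not occurred yet (tuple comparison on (month, day)).
    before_birthday = (today.month, today.day) < (dob.month, dob.day)
    return today.year - dob.year - before_birthday

print(age_from_dob(date(2000, 6, 15), date(2024, 6, 14)))  # 23
print(age_from_dob(date(2000, 6, 15), date(2024, 6, 15)))  # 24
```

Storing only date of birth and deriving age on demand avoids the stored value going stale, which is exactly why age is modeled as a derived attribute.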
3. Relationship

A relationship is used to describe the relation between entities. A diamond (rhombus) is used to represent a relationship.

Types of relationships are as follows:

a. One-to-One Relationship

When only one instance of an entity is associated with the relationship, it is known as a one-to-one relationship.

For example, a female can marry one male, and a male can marry one female.

b. One-to-many relationship

When only one instance of the entity on the left and more than one instance of the entity on the right are associated with the relationship, it is known as a one-to-many relationship.

For example, a scientist can invent many inventions, but each invention is made by only one specific scientist.
c. Many-to-one relationship

When more than one instance of the entity on the left and only one instance of the entity on the right are associated with the relationship, it is known as a many-to-one relationship.

For example, a student enrolls in only one course, but a course can have many students.

d. Many-to-many relationship

When more than one instance of the entity on the left and more than one instance of the entity on the right are associated with the relationship, it is known as a many-to-many relationship.

For example, an employee can be assigned to many projects, and a project can have many employees.
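A many-to-many relationship is commonly implemented with an association (junction) structure that pairs instances of the two entities, much like a linking table in a relational database. The sketch below is illustrative, with invented employee and project names:

```python
# Junction structure: each pair records one employee-project assignment.
assignments = [
    ("Ravi", "Billing"),
    ("Ravi", "Payroll"),
    ("Mona", "Billing"),
]

def projects_of(employee):
    # Traverse the relationship from the employee side
    return [p for e, p in assignments if e == employee]

def employees_on(project):
    # Traverse the same relationship from the project side
    return [e for e, p in assignments if p == project]

print(projects_of("Ravi"))      # ['Billing', 'Payroll']
print(employees_on("Billing"))  # ['Ravi', 'Mona']
```

Because the pairs live in a separate structure, neither entity needs to embed the other, and both directions of the many-to-many relationship stay easy to query.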
Q.28 How is a structure chart different from a flowchart?

Ans. 1. Structure Chart:

A structure chart represents the hierarchical structure of modules. It represents the software architecture, that is, the various modules making up the system and the dependencies among them. A structure chart representation can be easily implemented using a common programming language. The main focus in a structure chart is on the module structure of the software.

2. Flow Chart:

A flowchart is a graphical representation of an algorithm. Programmers often use it as a program-planning tool to solve a problem. It makes use of symbols connected to one another to indicate the flow of information and processing. A flowchart is a convenient technique to represent the flow of control in a program.
Difference between Structure Chart and Flow Chart:

1. A structure chart represents the software architecture, whereas a flow chart represents the flow of control in a program.

2. It is easy to identify the different modules of the software from a structure chart, but difficult to identify them from a flow chart.

3. The symbols used in a structure chart are complex; the symbols used in a flow chart are simple.

4. Data interchange among different modules is represented in a structure chart; it is not represented in a flow chart.

5. A structure chart uses different types of arrows to represent data flow and module invocation; a flow chart uses only a single type of arrow to show the flow of control.

6. A structure chart suppresses the sequential ordering of tasks that is inherent in a flow chart; a flow chart demonstrates that sequential ordering.

7. A structure chart is more complex to construct than a flow chart; a flow chart is easier to construct.

8. A structure chart is harder to understand; a flow chart is easier to understand.
