Software Engineering
UNIT – I
UNIT – II
UNIT – III
UNIT – IV
User interface design and real time systems: User interface design - Human factors – Human
computer interaction - Human - Computer Interface design - Interface design - Interface
standards.
UNIT – V
Software quality and testing: Software Quality Assurance - Quality metrics - Software
Reliability - Software testing - Path testing – Control Structures testing - Black Box testing -
Integration, Validation and system testing - Reverse Engineering and Reengineering.
CASE Tools: Project management tools – analysis and design tools – programming tools.
UNIT – I
Software Engineering
Software engineering is a discipline within the field of computer science that
focuses on the systematic design, development, testing, and maintenance of
software. It involves applying engineering principles to software creation, ensuring
that the software is reliable, efficient, and meets user requirements.
Here's an overview of some key concepts and areas within software engineering:
Key Concepts
* Waterfall: A linear and sequential approach where each phase depends on the
deliverables of the previous one.
3. Programming Languages:
* The choice of language often depends on the project requirements and the
development environment.
5. Version Control:
* Systems like Git help manage changes to the source code over time.
6. Quality Assurance:
* Includes various testing methods like unit testing, integration testing, system
testing, and acceptance testing.
7. Project Management:
* Tools like JIRA, Trello, and Asana help manage software projects.
* Quality Assurance: Ensures the software is reliable, efficient, and meets user needs.
* Project Management: Helps manage complex projects with clear goals, timelines,
and deliverables.
* User Satisfaction: Involves users in the development process to ensure the final
product meets their needs.
Software Development Life Cycle (SDLC) is a process used by the software industry to
design, develop and test high-quality software. The SDLC aims to produce high-quality
software that meets or exceeds customer expectations and reaches completion within
time and cost estimates.
Requirement analysis is performed with inputs from the customer, the
sales department, market surveys and domain experts in the industry. This
information is then used to plan the basic project approach and to conduct a product
feasibility study in the economical, operational and technical areas.
Planning for the quality assurance requirements and identification of the risks
associated with the project is also done in the planning stage. The outcome of the
technical feasibility study is to define the various technical approaches that can be
followed to implement the project successfully with minimum risks.
Once the requirement analysis is done, the next step is to clearly define and
document the product requirements and get them approved by the customer or
the market analysts. This is done through an SRS (Software Requirement
Specification) document, which consists of all the product requirements to be
designed and developed during the project life cycle.
SRS is the reference for product architects to come out with the best architecture for
the product to be developed. Based on the requirements specified in SRS, usually
more than one design approach for the product architecture is proposed and
documented in a DDS - Design Document Specification.
This DDS is reviewed by all the important stakeholders and, based on various
parameters such as risk assessment, product robustness, design modularity, and
budget and time constraints, the best design approach is selected for the product.
A design approach clearly defines all the architectural modules of the product along
with their communication and data flow representation with the external and third-
party modules (if any). The internal design of all the modules of the proposed
architecture should be clearly defined with the minutest of details in the DDS.
In this stage of SDLC the actual development starts and the product is built. The
programming code is generated as per DDS during this stage. If the design is
performed in a detailed and organized manner, code generation can be
accomplished without much hassle.
Developers must follow the coding guidelines defined by their organization and
programming tools like compilers, interpreters, debuggers, etc. are used to generate
the code. Different high-level programming languages such as C, C++, Pascal, Java
and PHP are used for coding. The programming language is chosen with respect to
the type of software being developed.
This stage is usually a subset of all the stages as in the modern SDLC models, the
testing activities are mostly involved in all the stages of SDLC. However, this stage
refers to the testing only stage of the product where product defects are reported,
tracked, fixed and retested, until the product reaches the quality standards defined in
the SRS.
Once the product is tested and ready to be deployed it is released formally in the
appropriate market. Sometimes product deployment happens in stages as per the
business strategy of that organization. The product may first be released in a limited
segment and tested in the real business environment (UAT- User acceptance testing).
Then based on the feedback, the product may be released as it is or with suggested
enhancements in the targeted market segment. After the product is released in the
market, its maintenance is done for the existing customer base.
Size estimation in software engineering refers to the process of quantifying the size
of a software project.
By estimating the size of the software project, stakeholders can assess the level of
effort required and make informed decisions regarding budget and staffing.
It also helps in identifying potential risks and challenges that may arise during the
development process.
One of the key benefits of size estimation in project planning is the ability to set
realistic expectations for stakeholders.
By clearly understanding the project’s size, project managers can manage
expectations and avoid misunderstandings or disappointments later.
Several methods are available for software size estimation, each with strengths and
limitations.
These methods can be broadly classified into algorithmic, expert judgement, and
machine learning approaches.
Expert judgement approaches to size estimation
This approach involves gathering input from domain experts, project managers, and
developers to assess the complexity and size of the software project.
Machine learning approaches to size estimation
Machine learning techniques have gained popularity in recent years for software
size estimation.
Machine learning algorithms can provide accurate size estimates based on specific
project attributes by training the models on past projects.
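As an illustrative sketch of this idea (not part of the source material), the Java program below fits a power-law effort model, effort = a × KLOC^b, to a handful of hypothetical past projects by least squares on log-transformed values, and then predicts the effort of a new 80 KLOC project. All data points are invented for the example.

// Minimal sketch: fitting effort = a * size^b to historical project data by
// ordinary least squares in log space (log effort = log a + b * log size).
// The data points below are hypothetical placeholders, not real project history.
public class SizeEffortFit
{
    public static void main(String[] args)
    {
        double[] kloc   = {10, 25, 60, 120, 300};      // past project sizes (KLOC)
        double[] effort = {26, 70, 180, 390, 1100};    // past efforts (person-months)

        // Least-squares fit on the log-transformed data
        int n = kloc.length;
        double sx = 0, sy = 0, sxx = 0, sxy = 0;
        for (int i = 0; i < n; i++)
        {
            double x = Math.log(kloc[i]);
            double y = Math.log(effort[i]);
            sx += x; sy += y; sxx += x * x; sxy += x * y;
        }
        double b = (n * sxy - sx * sy) / (n * sxx - sx * sx);  // fitted exponent
        double a = Math.exp((sy - b * sx) / n);                // fitted multiplier

        System.out.printf("Fitted model: effort = %.2f * KLOC^%.2f%n", a, b);
        System.out.printf("Predicted effort for 80 KLOC: %.0f person-months%n",
                          a * Math.pow(80, b));
    }
}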
Quality Factors
Quality factors are attributes or characteristics that affect the standard of products
or services. High quality ensures customer satisfaction, reduces costs, and enhances
competitiveness. Key quality factors include:
1. Customer Satisfaction:
2. Consistency:
3. Defect Rates:
Low defect rates indicate high quality. Statistical process control (SPC) and Six
Sigma methodologies help in reducing defects.
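As a small illustration of the kind of metric Six Sigma programmes track (all counts here are invented for the example), the sketch below computes defects per million opportunities (DPMO):

// Sketch of the Six Sigma defects-per-million-opportunities (DPMO) metric.
// The counts below are illustrative assumptions.
public class DefectRate
{
    public static void main(String[] args)
    {
        long defects = 38;
        long units = 5_000;
        long opportunitiesPerUnit = 10;  // places a defect could occur in each unit

        double dpmo = (double) defects / (units * opportunitiesPerUnit) * 1_000_000;
        System.out.printf("DPMO = %.0f%n", dpmo);  // 760
    }
}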
7. Continuous Improvement:
1. Workforce Efficiency:
Skilled, motivated, and well-trained employees enhance productivity. Providing
continuous training and incentives boosts efficiency.
Streamlining processes and eliminating waste using Lean, Six Sigma, or Kaizen
improves productivity.
4. Resource Utilization:
Efficient use of materials, time, and equipment ensures higher productivity. This
includes minimizing downtime and optimizing inventory levels.
5. Workflow Management:
6. Innovation:
Tracking key performance indicators (KPIs) and using data analytics helps in
identifying areas for improvement.
Efficient processes and resource utilization can enhance quality by allowing more
focus on quality control and continuous improvement initiatives.
Types of Complexity
Cost Management Complexity: Estimating the total cost of the project is a very
difficult task and another thing is to keep an eye that the project does not overrun
the budget.
Quality Management Complexity: The quality of the project must satisfy the
customer’s requirements. It must assure that the requirements of the customer are
fulfilled.
Risk Management Complexity: Risks are the unanticipated things that may occur
during any phase of the project. Various difficulties may occur to identify these risks
and make amendment plans to reduce the effects of these risks.
Communication Management Complexity: All the members must interact with all
the other members and there must be good communication with the customer.
API complexity: An API should ideally not be any more difficult to use than calling
a function. However, that hardly ever occurs. These calls are inadvertently
complicated due to authentication, rate restrictions, retries, mistakes, and other
factors.
Technical Challenges: Software projects can be complex and difficult due to the
technical challenges involved. This can include complex algorithms, database design,
and system integration, which can be difficult to manage and test effectively.
Schedule Constraints: Software projects are often subject to tight schedules and
deadlines, which can make it difficult to manage the project effectively and ensure
that all tasks are completed on time.
Quality Assurance: Ensuring that software meets the required quality standards is a
critical aspect of software project management. This can be a complex and
time-consuming process, especially when dealing with large, complex systems.
Improved software quality: Software engineering practices can help ensure the
development of high-quality software that meets user requirements and is reliable,
secure, and scalable.
Better maintenance and support: Software engineering practices can help ensure
that software is designed to be maintainable and supportable, making it easier to fix
bugs, add new features, and provide ongoing support to users.
Schedule delays: Technical challenges, scope creep, and other factors can cause
schedule delays, which can impact the project’s success and increase costs.
2. Functionality Requirements
Describe what the system will do & what the solution needs to achieve
The requirements of the system give direction to the project
Requirements are defined as features, properties and behaviours a system
must have to achieve its purpose
3. Compatibility Issues
Software of various types runs on a variety of environments
When designing software, developers must ensure products are able to be
used on multiple devices and conditions
Examples include:
Appearing to be unresponsive when an operation takes too long, and poor response
times in networking operations
Boundaries of the problem define the limits of the problem of the system to
be developed
Anything outside the system is said to be part of the environment
The system connects with the environment through an interface
Determining the boundaries effectively determines the system and how the
environment interacts with it
Software development strategy refers to the comprehensive plan and approach that
guides how software products are conceptualized, developed, and delivered. A
well-defined strategy ensures that software projects are executed efficiently, meet
stakeholder expectations, and achieve business goals.
Here are key aspects typically involved in crafting a software development strategy:
10. Monitor, Evaluate, and Iterate: Implement monitoring and analytics tools to
gather data on software performance, usage patterns, and user feedback. Use this
data to evaluate the effectiveness of features and iterate on the software to
continuously improve its quality and value.
11. Security and Compliance: Integrate security measures throughout the
development process to protect against vulnerabilities and ensure compliance with
industry regulations and standards. Conduct regular security audits and implement
best practices for data protection.
By integrating these elements into your software development strategy, you can
effectively manage the development lifecycle, deliver high-quality software products,
and achieve business objectives while adapting to changing requirements and
market conditions.
Identify Project Phases: Divide the project into distinct phases (e.g., requirements
gathering, design, development, testing, deployment).
Task Identification: List all tasks required to complete each phase. Tasks should be
specific and measurable.
Task Breakdown: Break down complex tasks into smaller, manageable subtasks. This
makes it easier to estimate time and effort accurately.
Define Milestones: Set key milestones for significant achievements (e.g., completion
of design phase, beta release, final deployment).
Develop a Gantt Chart: Use a Gantt chart or similar tool to visualize the project
timeline. Include tasks, dependencies, milestones, and deadlines.
Iterative Planning: For agile projects, create iteration plans (sprints) with specific
goals and tasks for each iteration.
4. Resource Allocation
Assign Responsibilities: Assign tasks to team members based on their skills and
expertise.
5. Risk Management
Identify Risks: Identify potential risks that could affect project timeline, scope, or
quality.
Risk Analysis: Assess the likelihood and impact of each risk. Prioritize risks based on
their severity.
Plan Testing Activities: Define the testing strategy and types of testing (e.g., unit
testing, integration testing, acceptance testing).
Allocate Time for Testing: Ensure adequate time is allocated for testing activities
within the project timeline.
Quality Control: Implement processes to monitor and control the quality of
deliverables throughout the development process.
Track Metrics: Use metrics (e.g., burndown charts, velocity) to measure progress and
identify areas for improvement.
9. Continuous Improvement
Implement Feedback: Incorporate feedback and lessons learned into future projects
to improve development processes and outcomes.
By carefully planning the development process and actively managing its execution,
software projects can minimize risks, optimize resource utilization, and increase the
likelihood of delivering a successful product that meets stakeholder expectations.
Define Project Scope: Clearly define the scope of the software development
project, including goals, timelines, budget, and expected outcomes.
Identify Team Requirements: Determine the skill sets and expertise required to
successfully execute the project. Consider technical skills, domain knowledge, and
experience levels needed.
Identify Key Roles: Define essential roles such as developers, testers, designers,
project managers, product owners, and architects.
Team Size: Determine the optimal team size based on project complexity,
workload, and required expertise.
Authority Levels: Clarify decision-making authority levels for different roles and
teams. Determine who has the final say on technical decisions, scope changes, and
resource allocation.
Recognition and Rewards: Recognize and reward team members for their
contributions and achievements, fostering motivation and retention.
Culture Alignment: Align the organizational structure with the company’s values,
mission, and culture. Ensure that the structure promotes inclusivity, transparency, and
accountability.
Leadership Support: Gain leadership buy-in and support for the chosen
organizational structure to ensure alignment with strategic objectives and long-term
goals.
In a matrix structure, teams are organized by functional expertise (e.g., development,
testing, design) and also by project or product. This structure allows for specialization
within functions while promoting cross-functional collaboration and alignment with
project goals.
Here are some more planning activities that organizations can undertake to
improve their efficiency, sustainability, and overall performance:
1. Knowledge Management
Ethics Framework: Develop and enforce ethical standards and guidelines that
govern the behavior and decision-making of employees and leadership.
Healthcare Benefits: Review and optimize healthcare benefits to ensure they meet
the needs of employees and contribute to a healthy workforce.
DEI Strategy: Develop a strategy to foster diversity, equity, and inclusion within the
organization, promoting a respectful and inclusive workplace culture.
Loyalty Programs: Design and manage customer loyalty programs to reward repeat
customers and encourage brand loyalty.
Compliance: Ensure compliance with data privacy regulations (e.g., GDPR, CCPA)
and industry standards to mitigate legal and reputational risks.
UNIT – II
Estimating the cost of a software product is one of the most difficult and error-prone
tasks in software engineering. It is difficult to make an accurate cost estimate during
the planning phase of software development.
A preliminary estimate is prepared during the planning phase and presented at the
project feasibility review. An improved estimate is presented at the software
requirements review, and the final estimate is presented at the preliminary design
review. Each estimate is a refinement of the previous one, and is based on the
additional information gained as a result of additional work activities.
The factors that influence the cost of a software product are Programmer Ability,
Product Complexity, Product Size, Available Time, Required Reliability, Level of
Technology. Primary among the cost factors are the individual abilities of project
personnel and their familiarity with the application area; the complexity of the
product; the size of the product, the available time, the required level of reliability;
the level of technology utilized, and the availability, familiarity, and stability of the
system used to develop the product.
Programmer Ability
Product Complexity
There are three categories of software product: Application Programs, which include
data processing and scientific programs; Utility Programs, such as compilers, linkage
editors, and inventory systems; and System Programs, such as database management
systems, operating systems, and real-time systems.
Brooks states that utility programs are three times as difficult to write as application
programs, and that system programs are three times as difficult to write as utility
programs. His levels of product complexity are thus 1-3-9 for applications-utility-
systems programs.
Boehm uses three levels of product complexity and provides equations to predict
total programmer- months of effort, PM, in terms of the number of thousands of
delivered source instruction, KDSI, in the product. Programmer cost for a software
project can be obtained by multiplying the effort in programmer-months by the cost
per programmer-month. The equations were derived by examining historical data
from a large number of actual projects.
Product Size
A large software product is more expensive to develop than a small one. Boehm’s
equations indicate that the rate of increase in required effort grows with the number
of source instructions at an exponential rate slightly greater than 1. Some
investigators believe that the rate of increase in effort grows at an exponential rate
slightly less than 1, but most use an exponent in the range of 1.05 to 1.83.
Available Time
Total project effort is sensitive to the calendar time available for project completion.
Several investigators agree that software projects require more total effort if
development time is compressed or expanded from the optimal time. The most
striking feature is the Putnam curve. According to Putnam, project effort is inversely
proportional to the fourth power of development time, E = k / td^4. This curve
indicates an extreme penalty for schedule compression and an extreme reward for
expanding the project schedule.
Putnam also states that the development schedule cannot be compressed below
about 86% of the nominal schedule, regardless of the people or resources utilized.
Software reliability can be defined as the probability that a program will perform a
required function under stated conditions for a stated period of time. Reliability can
be expressed in terms of accuracy, robustness, completeness, and consistency of the
source code. Reliability characteristics can be built into a software product, but there
is a cost associated with the increased level of analysis, design, implementation, and
verification and validation effort that must be exerted to ensure high reliability. The
multipliers range from 0.75 for very low reliability to 1.4 for very high reliability. The
effort ratio is thus 1.87 (1.4/0.75).
Level of Technology
Software tools range from elementary tools, such as assemblers and basic debugging
aids, to compilers and linkage editors, to interactive text editors and database
management systems, to program design language processors and requirements
specification analyzers, to fully integrated development environments that include
configuration management and automated verification tools.
Within most organizations, software cost estimates are based on past performance.
Historical data are used to identify cost factors. Cost and productivity data must be
collected on current projects in order to estimate future ones. It can be done either
top-down or bottom-up.
Bottom-up estimation first estimates the cost of developing each module or subsystem.
Those costs are then combined to arrive at an overall estimate.
Expert Judgement
The most widely used cost estimation technique is expert judgement, which is a
top-down estimation technique. Expert judgement relies on the experience,
background, and business sense of one or more key people in the organization.
An expert might arrive at a cost estimate in the following manner: The system to
be developed is a process control system similar to one that was developed last
year in 10 months at a cost of $1 million. The new system has similar control
functions, but has 25 percent more activities to control; thus, we will increase our
time and cost estimates by 25 percent. We will use the same computer and
external sensing/controlling devices; and many of the same people are available to
develop the new system, so we can reduce our estimate by 20 percent.
We can reuse much of the low-level code from the previous product, which reduces
the time and cost estimates by 25 percent. The net effect of these considerations is
a 20 percent reduction in the time and cost estimates (+25 − 20 − 25 = −20 percent),
which results in an estimate of $800,000 and 8 months development time. The
customer has budgeted $1 million and 1 year delivery time for the system. Therefore,
we add a small margin of safety and bid the system at $850,000 and 9 months
development time.
These estimates reflect a well-balanced approach, ensuring that the offer falls within
the customer's budget while allowing for additional security in the project timeline
and costs. Thus, the bid aligns closely with customer expectations while
incorporating the benefits of prior work and efficiencies gained through code reuse.
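The arithmetic behind the example can be made explicit. The sketch below is only an illustration, treating the three adjustments additively as the text does; the base figures come from the narrative above.

// Sketch of the adjustment arithmetic from the expert-judgement example above.
// The adjustments are treated additively: +25% - 20% - 25% = -20%.
public class ExpertJudgementEstimate
{
    public static void main(String[] args)
    {
        double baseCost = 1_000_000;  // last year's similar system ($)
        double baseTime = 10;         // months

        double adjustment = +0.25 - 0.20 - 0.25;  // more activities, same team, reuse
        double cost = baseCost * (1 + adjustment);
        double time = baseTime * (1 + adjustment);

        System.out.printf("Estimate: $%,.0f and %.0f months%n", cost, time);
        // Prints: Estimate: $800,000 and 8 months.
        // A safety margin is then added before bidding: $850,000 and 9 months.
    }
}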
The major disadvantage of group estimation is the effect that interpersonal group
dynamics may have on individuals in the group.
The Delphi technique was developed by Rand Corporation in 1948 to gain expert
consensus without introducing the adverse side effects of group meetings. The
Delphi technique can be adapted to software cost estimation in the following
manner:
1. A coordinator provides each estimator with the System Definition document and a
form for recording a cost estimate.
2. Estimators study the definition and complete their estimates anonymously. They may
ask questions of the coordinator, but they do not discuss their estimates with one
another.
3. The coordinator prepares and distributes a summary of the estimators' responses.
4. Estimators complete another estimate, again anonymously, using the results from the
previous estimate. Estimators whose estimates differ sharply from the group may be
asked, anonymously, to provide justification for their estimates.
Expert judgment and group consensus are top-down estimation techniques. The
work breakdown structure method is a bottom-up estimation tool. A work
breakdown structure is a hierarchical chart that accounts for the individual parts of a
system. A WBS chart can indicate either product hierarchy or process hierarchy.
Product hierarchy identifies the product components and indicates the manner in
which the components are interconnected. A WBS chart of process hierarchy
identifies the work activities and the relationships among those activities. Using the
WBS technique, costs are estimated by assigning costs to each individual component in
the chart and summing the costs.
Some planners use both product and process WBS chart for cost estimation. The
primary advantages of the WBS technique are in identifying and accounting for
various process and product factors, and in making explicit exactly which costs are
included in the estimate.
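As an illustrative sketch of this cost roll-up (all component names and costs are invented; records require Java 16+), costs assigned to the nodes of the hierarchy are summed to produce the overall estimate:

import java.util.List;

// Minimal sketch of WBS-based cost estimation: costs are assigned to the
// components of a work breakdown structure and summed up the hierarchy.
public class WbsEstimate
{
    record Node(String name, double cost, List<Node> children)
    {
        // Total cost of this component plus everything beneath it
        double total()
        {
            double sum = cost;
            for (Node child : children)
                sum += child.total();
            return sum;
        }
    }

    public static void main(String[] args)
    {
        Node system = new Node("System", 0, List.of(
            new Node("Subsystem A", 0, List.of(
                new Node("Module A1", 12_000, List.of()),
                new Node("Module A2", 18_000, List.of()))),
            new Node("Subsystem B", 0, List.of(
                new Node("Module B1", 25_000, List.of())))));

        System.out.printf("Estimated system cost: $%,.0f%n", system.total());  // $55,000
    }
}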
Algorithmic cost estimators compute the estimated cost of a software system as the
sum of the costs of the modules and subsystems that comprise the system.
Algorithmic models are thus bottom-up estimators.
Given the total programmer-months for a project and the nominal development time
required, the average staffing level can be obtained by simple division. For our 60
KDSI program, we obtain the following results:
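The results themselves are not reproduced in these notes. As a sketch, assuming the organic-mode Basic COCOMO constants introduced later in this unit (a = 2.4, b = 1.05, c = 2.5, d = 0.38), they can be recomputed as follows:

// Sketch: average staffing for a 60 KDSI product, assuming organic-mode constants.
public class AverageStaffing
{
    public static void main(String[] args)
    {
        double kdsi = 60;
        double effort = 2.4 * Math.pow(kdsi, 1.05);   // ~176.7 programmer-months
        double tdev   = 2.5 * Math.pow(effort, 0.38); // ~17.9 months
        System.out.printf("Effort = %.1f PM, Tdev = %.1f months, Staff = %.1f persons%n",
                          effort, tdev, effort / tdev);  // ~9.9 full-time programmers
    }
}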
Effort multipliers are used to adjust the estimate for product attributes, computer
attributes, personnel attributes, and project attributes.
In 1958, Norden observed that research and development projects follow a cycle of
planning, design, prototype, development, and use, with the corresponding
personnel utilization. The sum of the areas under the curves can be approximated by
the Rayleigh equation. Any particular point on the Rayleigh curve represents the number
of full-time equivalent personnel required at that instant in time.
Norden’s Work
Norden studied the staffing patterns of several R & D projects. He found that
the staffing pattern can be approximated by the Rayleigh distribution curve.
Norden represented the Rayleigh curve by the following equation:
E(t) = (K / td^2) × t × e^(−t^2 / (2 td^2))
where E(t) is the staffing level at time t, K is the total project effort, and td is the time
at which staffing peaks.
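As an illustrative evaluation of this equation (the values of K and td are invented), the sketch below prints the staffing level at a few points in time; the value peaks near t = td:

// Sketch evaluating the Rayleigh staffing curve E(t) = (K / td^2) * t * exp(-t^2 / (2 td^2)).
public class RayleighStaffing
{
    public static void main(String[] args)
    {
        double K = 200;   // total effort (person-months), assumed
        double td = 12;   // time at which staffing peaks (months), assumed

        for (int t = 0; t <= 24; t += 6)
        {
            double e = (K / (td * td)) * t * Math.exp(-(double) (t * t) / (2 * td * td));
            System.out.printf("t = %2d months: %.1f full-time staff%n", t, e);
        }
    }
}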
In 1976, Putnam reported that the personnel level of effort required throughout the
life cycle of a software product has a similar envelope. Putnam studied 50 Army
software projects and 150 other projects to determine how the Rayleigh curve can be
applied to software development.
Putnam’s Work
Putnam studied the problem of staffing of software projects and found that the
software development has characteristics very similar to other R & D projects
studied by Norden and that the Rayleigh-Norden curve can be used to relate the
number of delivered lines of code to the effort and the time required to develop
the project. By analyzing a large number of army projects, Putnam derived the
following expression:
L = Ck × K^(1/3) × td^(4/3)
• K is the total effort expended (in PM) in the product development, L is the product size in
KLOC, and td is the development time.
• Ck is the state of technology constant and reflects constraints that impede the
progress of the programmer. Typical values of Ck = 2 for poor development
environment (no methodology, poor documentation, and review, etc.), Ck = 8 for
good software development environment (software engineering principles are
adhered to), Ck = 11 for an excellent environment (in addition to following
software engineering principles, automated tools and techniques are used). The
exact value of Ck for a specific project can be computed from the historical data of
the organization developing it.
Putnam suggested that optimal staff build-up on a project should follow the
Rayleigh curve. Only a small number of engineers are needed at the beginning of a
project to carry out planning and specification tasks. As the project progresses and
more detailed work is required, the number of engineers reaches a peak. After
implementation and unit testing, the number of project staff falls. However, the staff
build-up should not be carried out in large installments. The team size should either
be increased or decreased slowly whenever required to match the Rayleigh-Norden
curve.
K = L^3 / (Ck^3 × td^4)
where K is the total effort expended (in PM) in the product development, Ck is the
state of technology constant that reflects constraints impeding the progress of the
programmer, and td is the development time.
From the above expression, it can be easily observed that when the schedule of a
project is compressed, the required development effort as well as project
development cost increases in proportion to the fourth power of the degree of
compression. It means that a relatively small compression in delivery schedule can
result in a substantial penalty of human effort as well as development cost.
For example, if the estimated development time is 1 year, then to develop the
product in 6 months, the total effort required to develop the product (and hence the
project cost) increases 16 times.
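This 16× factor follows directly from the fourth-power law. The sketch below confirms it; the values chosen for L and Ck are invented and cancel out in the ratio:

// Sketch of the schedule-compression penalty implied by K = L^3 / (Ck^3 * td^4).
public class PutnamCompression
{
    static double effort(double loc, double ck, double td)
    {
        return Math.pow(loc, 3) / (Math.pow(ck, 3) * Math.pow(td, 4));
    }

    public static void main(String[] args)
    {
        double L = 100;   // product size in KLOC (assumed)
        double ck = 8;    // good development environment (assumed)

        double nominal    = effort(L, ck, 1.0);   // schedule of 1 year
        double compressed = effort(L, ck, 0.5);   // schedule compressed to 6 months

        // Halving td multiplies effort by 2^4 = 16
        System.out.printf("Effort ratio = %.0f%n", compressed / nominal);  // 16
    }
}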
Software maintenance typically requires 40-60%, and in some cases 90%, of the total
life-cycle effort devoted to a software product. Maintenance activities include adding
enhancements to the product, adapting the product to new processing
environments, and correcting problems.
Activity        % Effort
Enhancement       51.3
Adaptation        23.6
Corrections       21.7
Others             3.4
Boehm suggests that maintenance effort can be estimated by use of an activity ratio
(ACT), which is the number of source instructions to be added or modified in any given
time period divided by the total number of instructions:
ACT = (DSI added + DSI modified) / DSI total
The annual maintenance effort can then be approximated as ACT multiplied by the
development effort.
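As an illustrative sketch of this calculation (all instruction counts and the development effort are invented), the activity-ratio estimate looks like this:

// Sketch of Boehm's activity-ratio approach to maintenance estimation.
public class MaintenanceEstimate
{
    public static void main(String[] args)
    {
        double added = 4_000;      // source instructions added this year (assumed)
        double modified = 2_000;   // source instructions modified this year (assumed)
        double total = 100_000;    // total source instructions in the product (assumed)

        double act = (added + modified) / total;   // activity ratio = 0.06

        double devEffort = 500;                    // development effort in PM (assumed)
        double annualMaintenance = act * devEffort;
        System.out.printf("ACT = %.2f, annual maintenance effort = %.0f PM%n",
                          act, annualMaintenance);  // ACT = 0.06, 30 PM
    }
}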
Heavy emphasis on reliability and the use of modern programming practices during
development may reduce the amount of effort required for maintenance, while low
emphasis on reliability and modern practices during development may increase the
difficulty of maintenance.
The Constructive Cost Model (COCOMO) is a software cost estimation model that
helps predict the effort, cost, and schedule required for a software development
project. Developed by Barry Boehm in 1981, COCOMO uses a mathematical formula
based on the size of the software project, typically measured in lines of code (LOC).
The key parameters that define the quality of any software product, which are also an
outcome of COCOMO, are primarily effort and schedule:
1. Effort: The amount of labour required to complete a task. It is measured in
person-months.
2. Schedule: This simply means the amount of time required for the completion of
the job, which is, of course, proportional to the effort put in. It is measured in
units of time such as weeks and months.
In the COCOMO model, software projects are categorized into three types based on
their complexity, size, and the development environment. These types are:
1. Organic: A software project is said to be an organic type if the team size required
is adequately small, the problem is well understood and has been solved in the
past and also the team members have a nominal experience regarding the
problem.
1. Cost Estimation: To help with resource planning and project budgeting, COCOMO
offers a methodical approach to software development cost estimation.
5. Support for Decisions: During project planning, the model provides a quantitative
foundation for choices about scope, priorities, and resource allocation.
7. Resource Optimization: The model helps to maximize the use of resources, which
raises productivity and lowers costs.
The Basic COCOMO model is a straightforward way to estimate the effort needed for
a software development project. It uses a simple mathematical formula to predict
how many person-months of work are required based on the size of the project,
measured in thousands of lines of code (KLOC).
It estimates effort and time required for development using the following expression:
E = a × (KLOC)^b PM
Tdev = c × (E)^d months
Persons required = Effort / Tdev
Where,
E is the effort applied in Person-Months,
KLOC is the estimated size of the software product in Kilo Lines of Code,
Tdev is the development time in months, and
a, b, c, d are constants determined by the category of the software project, given in
the table below.
The above formula is used for the cost estimation of the basic COCOMO model and
also is used in the subsequent models. The constant values a, b, c, and d for the Basic
Model for the different categories of the software projects are:
Software Projects    a      b      c      d
Organic             2.4    1.05   2.5    0.38
Semi-Detached       3.0    1.12   2.5    0.35
Embedded            3.6    1.20   2.5    0.32
2. These formulas are used as such in the Basic Model calculations; since factors such
as reliability and expertise are not taken into account, the estimate is rough.
Suppose that a Basic project was estimated to be 400 KLOC (kilo lines of code).
Calculate the effort and time for each of the three modes of development, using the
constant values provided in the table above.
Solution
From the above table we take the values of the constants a, b, c and d.
(i) Organic mode: E = 2.4 × (400)^1.05 ≈ 1295.31 PM; Tdev = 2.5 × (1295.31)^0.38 ≈ 38.07 months
(ii) Semi-Detached mode: E = 3.0 × (400)^1.12 ≈ 2462.79 PM; Tdev = 2.5 × (2462.79)^0.35 ≈ 38.45 months
(iii) Embedded mode: E = 3.6 × (400)^1.20 ≈ 4772.81 PM; Tdev = 2.5 × (4772.81)^0.32 ≈ 37.60 months
public class BasicCOCOMO
{
    // Constants a, b, c, d for the Organic, Semi-Detached and Embedded modes
    private static final double[][] TABLE =
    {
        {2.4, 1.05, 2.5, 0.38},
        {3.0, 1.12, 2.5, 0.35},
        {3.6, 1.20, 2.5, 0.32}
    };
    private static final String[] MODE =
    {
        "Organic", "Semi-Detached", "Embedded"
    };

    public static void calculate(int size)
    {
        // Select the development mode from the project size in KLOC
        // (typical textbook ranges: up to 50 organic, 50-300 semi-detached,
        // above 300 embedded)
        int model;
        if (size <= 50)
            model = 0;
        else if (size <= 300)
            model = 1;
        else
            model = 2;
        System.out.println("The mode is " + MODE[model]);
        // Calculate Effort: E = a * (KLOC)^b
        double effort = TABLE[model][0] * Math.pow(size, TABLE[model][1]);
        // Calculate Time: Tdev = c * (E)^d
        double time = TABLE[model][2] * Math.pow(effort, TABLE[model][3]);
        // Calculate Persons Required: staff = E / Tdev
        double staff = effort / time;
        // Output the values calculated
        System.out.println("Effort = " + Math.round(effort) + " Person-Month");
        System.out.println("Development Time = " + Math.round(time) + " Months");
        System.out.println("Average Staff Required = " + Math.round(staff) + " Persons");
    }

    public static void main(String[] args)
    {
        int size = 400;   // the 400 KLOC project from the example above
        calculate(size);
    }
}
Output
The mode is Embedded
Effort = 4773 Person-Month
Development Time = 38 Months
Average Staff Required = 127 Persons
Examples
1. NASA Space Shuttle Software Development: NASA estimated the time and money
needed to build the software for the Space Shuttle program using the COCOMO
model. NASA was able to make well-informed decisions on resource allocation
and project scheduling by taking into account variables including project size,
complexity, and team experience.
2. Big Business Software Development: The COCOMO model has been widely used
by big businesses to project the time and money needed to construct intricate
business software systems. These organizations were able to better plan and
allocate resources for their software projects by using COCOMO’s estimation
methodology.
Advantages of the COCOMO Model
1. Systematic cost estimation: Provides a systematic way to estimate the cost and
effort of a software project.
2. Helps to estimate cost and effort: This can be used to estimate the cost and effort
of a software project at different stages of the development process.
3. Helps in high-impact factors: Helps in identifying the factors that have the
greatest impact on the cost and effort of a software project.
4. Helps to evaluate the feasibility of a project: This can be used to evaluate the
feasibility of a software project by estimating the cost and effort required to
complete it.
Disadvantages of the COCOMO Model
1. Assumes project size as the main factor: Assumes that the size of the software is
the main factor that determines the cost and effort of a software project, which
may not always be the case.
2. Does not count development team-specific characteristics: Does not take into
account the specific characteristics of the development team, which can have a
significant impact on the cost and effort of a software project.
3. Not enough precise cost and effort estimate: This does not provide a precise
estimate of the cost and effort of a software project, as it is based on assumptions
and averages.
Purpose of this Document – First, the main aim of the document and the purpose it
serves are explained and described.
General description
Functional Requirements
Interface Requirements
In this, the software interfaces, which means how the software program communicates
with other programs or users, whether in the form of any language, code, or message,
are fully described and explained. Examples can be shared memory, data streams, etc.
Performance Requirements
In this, how the software system performs desired functions under specific conditions
is explained. It also specifies the required time, required memory, maximum error rate,
etc. The performance requirements part of an SRS specifies the performance
constraints on the software system. All the requirements relating to the
performance characteristics of the system must be clearly specified. There are two
types of performance requirements: static and dynamic. Static requirements are
those that do not impose constraints on the execution characteristics of the system.
Dynamic requirements specify constraints on the execution behaviour of the system,
such as response times and throughput.
Design Constraints
In this, constraints, which simply mean limitations or restrictions, are specified and
explained for the design team. Examples may include the use of a particular algorithm,
hardware and software limitations, etc. There are a number of factors in the client's
environment that may restrict the choices of a designer, leading to design
constraints. Such factors include standards that must be followed, resource limits,
the operating environment, reliability and security requirements, and policies that may
have an impact on the design of the system. An SRS should identify and specify all
such constraints.
Non-Functional Attributes
In this, non-functional attributes are explained that are required by software system
for better performance. An example may include Security, Portability, Reliability,
Reusability, Application compatibility, Data integrity, Scalability capacity, etc.
In this, the initial version and budget of the project plan are explained, including the
overall time duration and overall cost required for the development of the project.
Appendices
Test plans are generated by the testing group based on the described external
behaviour.
Maintenance and support staff need it to understand what the software product
is supposed to do.
Project managers base their plans and estimates of schedule, effort and
resources on it.
It is also used for documentation purposes.
Concise: The SRS report should be concise and at the same time, unambiguous,
consistent, and complete. Verbose and irrelevant descriptions decrease readability
and also increase error possibilities.
Black-box view: It should only define what the system should do and refrain from
stating how to do these. This means that the SRS document should define the
external behavior of the system and not discuss the implementation issues. The SRS
report should view the system to be developed as a black box and should define the
externally visible behavior of the system. For this reason, the SRS report is also known
as the black-box specification of a system.
Conceptual integrity: It should show conceptual integrity so that the reader can
easily understand it.
Response to undesired events: It should characterize acceptable responses to
unwanted events. These are called system responses to exceptional conditions.
Formal specifications allow for the precise description of requirements and facilitate
the validation and verification of software designs against these specifications.
State-based specifications focus on the states of the system and the transitions
between them, making them useful for reactive systems.
The languages used for formal specifications are rigorously defined to eliminate
ambiguity. Notable examples include Z notation, the Vienna Development
Method (VDM), and the Abstract Machine Notation (AMN). These languages
facilitate the modeling of software systems, including their behaviors and
interfaces, making them effective for both implementation and verification
processes.
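As an informal analogue of the state-based style (this is plain Java, not a formal notation such as Z or VDM, and the turnstile example is invented), the legal states and transition relation of a simple reactive system can be written down explicitly:

// Illustrative analogue of a state-based specification: the states of a simple
// turnstile and the transitions between them, encoded as enums.
public class Turnstile
{
    enum State { LOCKED, UNLOCKED }
    enum Event { COIN, PUSH }

    static State next(State s, Event e)
    {
        // The transition relation: every (state, event) pair maps to one next state
        switch (s)
        {
            case LOCKED:   return (e == Event.COIN) ? State.UNLOCKED : State.LOCKED;
            case UNLOCKED: return (e == Event.PUSH) ? State.LOCKED   : State.UNLOCKED;
            default:       throw new IllegalStateException();
        }
    }

    public static void main(String[] args)
    {
        State s = State.LOCKED;
        s = next(s, Event.COIN);   // UNLOCKED
        s = next(s, Event.PUSH);   // LOCKED again
        System.out.println("Final state: " + s);
    }
}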
Language Processor:
1. We know that a computer understands instructions in machine code, i.e. 0's and
1's, while programs are mostly written in a High Level Language like C, C++,
JAVA, etc.; these are called source code and cannot be executed directly by
the computer.
3. A language processor can translate the source code or program code into
machine code.
4. Source code or program code is written in HLL (High Level Language), which
is easy for humans to understand, and machine code is written in LLL (Low Level
Language), which is easy for machines to understand.
5. We can also say that a language processor converts High Level Language into
Low Level Language with the help of its types:
a. Compiler
b. Assembler
c. Interpreter
2. The ideas expressed by the designer in terms related to the application domain
of the software must be implemented; their description has to be interpreted in
terms related to the execution domain of the computer system.
5. The semantic gap is the gap between the application domain and the execution domain.
6. For a language processor, the input program is termed the source program and
the language used for it is termed the source language, while the output program
is termed the target program.
7. The language processing activities are divided into two groups as follows:
Language specification:
UNIT – III
Introduction of Software Design Process
Software Design is the process of transforming user requirements into a suitable
form, which helps the programmer in software coding and implementation. During
the software design phase, the design document is produced, based on the customer
requirements as documented in the SRS document. Hence, this phase aims to
transform the SRS document into a design document.
There are many concepts of software design and some of them are given below
1. Abstraction (Hide Irrelevant data): Abstraction simply means to hide the details to
reduce complexity and increase efficiency or quality. Different levels of Abstraction
are necessary and must be applied at each stage of the design process so that any
error that is present can be removed to increase the efficiency of the software
solution and to refine the software solution. The solution should be described in
broad ways that cover a wide range of different things at a higher level of
abstraction and a more detailed description of a solution of software should be
given at the lower level of abstraction.
2. Modularity (subdivide the system): Modularity simply means dividing the system or
project into smaller parts to reduce the complexity of the system or project. In the
same way, modularity in design means subdividing a system into smaller parts so
that these parts can be created independently and then use these parts in different
systems to perform different functions. It is necessary to divide the software into
smaller parts, because monolithic software is hard to grasp for software engineers. So,
modularity in design has now become a trend and is also important.
3. Architecture (design a structure of something): Architecture simply means a technique
to design a structure of something. Architecture in designing software is a concept
that focuses on various elements and the data of the structure. These components
interact with each other and use the data of the structure in architecture.
4. Refinement (removes impurities): Refinement simply means to refine something to
remove any impurities if present and increase the quality. The refinement concept
of software design is a process of developing or presenting the software or system
in a detailed manner which means elaborating a system or software. Refinement is
very necessary to find out any error if present and then to reduce it.
5. Pattern (a Repeated form): A pattern simply means a repeated form or design in
which the same shape is repeated several times to form a pattern. The pattern in
the design process means the repetition of a solution to a common recurring
problem within a certain context.
6. Information Hiding (Hide the Information): Information hiding simply means to hide
the information so that it cannot be accessed by an unwanted party. In software
design, information hiding is achieved by designing the modules in a manner that
the information gathered or contained in one module is hidden and can't be
accessed by any other modules.
7. Refactoring (Reconstruct something): Refactoring simply means reconstructing
something in such a way that it does not affect the behavior of any other features.
Refactoring in software design means reconstructing the design to reduce
complexity and simplify it without impacting the behavior or its functions. Fowler
has defined refactoring as “the process of changing a software system in a way that
it won't impact the behavior of the design and improves the internal structure".
Modularization
In this current age of software, you would be hard-pressed to find a program that isn't
continuously growing and evolving. Designing a program all at once, with all required
functions, would be difficult due to its size, complexity and constant changes. This is
where modularization comes in.
Modularization is the process of separating the functionality of a program into
independent, interchangeable modules, such that each contains everything
necessary to execute only one aspect of the desired functionality.
With modularization, we can easily work on adding separate and smaller modules to a
program without being hindered by the complexity of its other functions. In short, it’s
about being flexible and fast in adding more software functions to a program. In a
software engineering team, we could easily work independently on each module
without affecting others’ work.
Modularization also reduces the number of bugs and the duration it takes to test and
release a program.
Benefits of modularization
Why should we be decomposing our projects into modules? As shown in my
experience with a single-file program, programs without proper modularization
would be a nightmare to maintain and extend.
In modularization, the modules have minimal dependency on other modules. So, we
can easily make changes in a module without affecting other parts of the program.
The following are just the gist of how modularization would improve the development
process for a program.
Easier to add and maintain smaller components
Easier to understand each module and their purpose
Easier to reuse and refactor modules
Better abstraction between modules
Saves time needed to develop, debug, test and deploy a program
A module should have only a single responsibility, that is the Single Responsibility
Principle. Thus, it should depend minimally on other modules. The independence of a
module can be measured using coupling and cohesion.
Coupling: Coupling is the measure of the degree of interdependence between the
modules. A good software design will have low coupling.
Cohesion: Cohesion is a measure of the degree to which the elements of the module
are functionally related. It is the degree to which all elements directed towards
performing a single task are contained in the component. Basically, cohesion is the
internal glue that keeps the module together. A good software design will have high
cohesion.
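As an illustrative sketch of these two measures (all class names are invented), each class below achieves high cohesion by doing one job, and low coupling by depending only on a small interface rather than on a concrete implementation:

// Minimal sketch of low coupling and high cohesion. ReportGenerator depends
// only on the Storage interface, not on a concrete database or file class,
// so either implementation can be swapped in without changing the module.
interface Storage
{
    void save(String name, String content);
}

// High cohesion: this class does one thing - persist content to files.
class FileStorage implements Storage
{
    public void save(String name, String content)
    {
        System.out.println("Writing " + name + " to disk: " + content);
    }
}

// High cohesion: this class does one thing - produce reports.
class ReportGenerator
{
    private final Storage storage;   // low coupling: depends on the abstraction only

    ReportGenerator(Storage storage)
    {
        this.storage = storage;
    }

    void generate(String title)
    {
        String content = "Report: " + title;
        storage.save(title + ".txt", content);
    }
}

public class ModularityDemo
{
    public static void main(String[] args)
    {
        new ReportGenerator(new FileStorage()).generate("Q1 Sales");
    }
}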
Design Notations
Design notations are primarily meant to be used during the process of design and
are used to represent design or design decisions. For a function-oriented design, the
design can be represented graphically or mathematically by notations such as those
described below.
Design notations are the various ways in which a system's design is visually
represented or described during the software development process. These notations
help developers, designers, and stakeholders understand and communicate the
structure, behavior, and interactions within a system. Here are some commonly used
design notations
a) Class Diagram
The class diagram is a central modeling technique that runs through nearly all
object-oriented methods. This diagram describes the types of objects in the system
and various kinds of static relationships which exist between them.
b) Use Case Diagram
A use case diagram describes the functional requirements of a system in terms of use
cases and the actors in its environment. Use cases enable you to relate what you need
from a system to how the system delivers on those needs.
c) Activity Diagram
d) Data Flow Diagram (DFD)
A data flow diagram (DFD) maps out how information, actors, and steps flow within a
process or system. It uses symbols to show the people and processes needed to
move data correctly.
DFDs are important because they help you visualize how data moves through your
system, spot inefficiencies, and find opportunities to improve overall functionality.
This leads to more efficient operations, better decision-making, and enhanced
communication among team members
Data flow diagrams are generally used to show how data moves through a system,
emphasizing data flow and processes rather than detailed software behavior.
e) ER Diagram
A good system design organizes the program modules in such a way that they are
easy to develop and change. Structured design techniques help developers to deal
with the size and complexity of programs. Analysts create instructions for the
developers about how code should be written and how pieces of code should fit
together to form a program.
1. Structured Design
Structured design is primarily about breaking problems down into several well-
organised components. The benefit of utilizing this design technique is that it
simplifies the problem, allowing the smaller pieces to be solved so that they fit into
the larger picture. The solution components are organized hierarchically.
Structured design is primarily based on the divide and conquer technique, in which a
large problem is divided into smaller ones, each of which is tackled independently
until the larger problem is solved. Solution modules are used to address the
individual problems. The structured design stresses the importance of these modules'
organization to produce exact results. A good structured design has high cohesion
and low coupling arrangements.
2. Object-Oriented Design
This design approach differs from the other two in that it focuses on objects and
classes. This technique is centred on the system's objects and their attributes.
Furthermore, the characteristics of all these objects' attributes are encapsulated
together, and the data involved is constrained so that polymorphism can be enabled.
The objects are first identified and grouped into classes based on their attributes. The
class hierarchy is then established, and the relationships between these classes are
defined.
A) Top-Down Approach
This design technique is entirely focused on first subdividing the system into
subsystems and components. Rather than constructing from the bottom up, the top-
down approach conceptualizes the entire system first and then divides it into
multiple subsystems. These subsystems are then designed and separated into smaller
subsystems and sets of components that meet the larger system's requirements.
Instead of defining these subsystems as discrete entities, this method considers the
entire system to be a single entity. When the system is finally defined and divided
based on its features, the subsystems are considered separate entities. The
components are then organised in a hierarchical framework until the system's lowest
level is designed.
B) Bottom-Up Approach
This system design technique prioritises the design of subsystems and the lowest-
level components (even sub-components). Higher-level subsystems and larger
components can be produced more readily and efficiently if these components are
designed beforehand. This reduces the amount of time spent on rework and
troubleshooting. The process of assembling lower-level components into larger sets
is repeated until the entire system is composed of a single component. This design
technique also makes generic solutions and low-level implementations more
reusable.
Engineering projects today require meticulous planning and execution across various
phases as they serve as the cornerstones of success. As part of this, one of the most
critical stages is detailed engineering –where the project blueprint takes shape and
becomes a reality. It is, therefore, considered to be the project's backbone. It serves
as the bridge between conceptual design and actual construction while
encompassing a multitude of tasks that are vital for the project's success. Further, it
helps ensure seamless alignment of resources to achieve the desired outcome.
Real-time and distributed systems are those where coordination and/or the timeliness
of the system's actions are of critical importance to the overall functionality and to
the end-users. The group's research spans many areas, including embedded systems,
Internet of Things (IoT), communications, robotics, automotive systems, large scale
process control, avionics, distributed computing, and High-Performance Computing
(HPC).
Real-time systems are those that are required to respond to inputs within a finite and
specified time interval. In some systems, the required response times are measured in
milliseconds, in others it is seconds, minutes, or even hours. Nevertheless, they all
have timing requirements that must be satisfied. In the production of real-time
systems, it is insufficient to use testing of the final system to ensure its compliance
with the requirements (as it is infeasible to test all possible timing interference
patterns in a system of reasonable complexity). A comprehensive and systematic
approach to specification, design, implementation and analysis is required.
Distributed systems are those that divide their workload across networked 'nodes'
(e.g. processors, computers, embedded devices, robots), which coordinate their
actions through message passing. These nodes may be tightly integrated via wired
connections (e.g. High Performance Computing platforms), or loosely connected
through wireless communication (e.g. IoT devices and robot swarms). Nodes can also
be distributed across a variety of spatial scales, from cloud platforms with
computation spread across international data centres, to devices located throughout
a home, or even networked processors within a single silicon chip. This presents
unique challenges in terms of programmability, coordination, communication, and
fault tolerance, each demanding consideration of the distributed nature of the
system.
The research conducted by the Real-Time and Distributed Systems group is unified
around the notions of understanding, modelling, analysing, simulating, optimising,
and predicting the performance and use of systems that are real-time and/or
distributed in nature.
Software testing is integral to ensuring the software meets all the requirements and
works correctly. A test plan is a document explaining how you'll test the software,
what resources you'll need, and when it should be done. Creating a good test plan is
vital to spot any issues. In this guide, we'll give you step-by-step instructions on how
to make a practical test plan so that your software testing process will be successful.
A test plan is a document that outlines the strategy, objectives, resources, and
schedule of a software testing process. The test plan will typically include details such
as the type and number of tests that need to be conducted, the purpose of each test,
the required tools, and how test results will be analyzed and reported. It is regularly
updated and refined as testing progresses.
A test plan is essential for several reasons. Firstly, it is a communication tool between
stakeholders and testing team members. This ensures that everyone understands
what, why, and how to test. It will also outline how to report test findings, what to
consider as a pass or fail, and any other criteria that may be applicable. Besides, it will
outline the expected outcomes and ensure that testing happens according to plan.
This is why it is important to know how to create a test plan.
Objectives of a test plan
The objectives of a test plan are to define the scope, approach, and resources
required for testing. It aims to establish test objectives and deliverables, identify test
tasks and responsibilities, outline the test environment and configuration, and define
the test schedule to ensure efficient and effective testing.
A detailed test plan further assists individuals in working together to complete the
project and maintain consistent and transparent communication throughout the
testing process.
A Test Plan is a document that outlines the strategy, scope, objectives, resources, and
schedule of a testing process. It is an essential part of software development and
testing, as it provides a roadmap for the execution of tests. The components of a Test
Plan include:
1. Test goal: The test plan should explain what the testing is meant to
accomplish, including the features and functions that will be tested and any
requirements that must be met.
2. Scope and approach: It should also outline what will be tested, how it will be
tested, and which testing methods or approaches will be used.
3. Test environment: It should specify the hardware, software, and network
configurations needed for the tests and any third-party tools or systems used.
4. Schedule: The test plan should include details about what you'll be doing, when you'll be doing it, and what resources you need to do it.
5. Deliverables: It should list the deliverables, like test cases, scripts, and reports, that will be created during testing. It should also include a schedule outlining the testing timeline.
6. Resources: The plan should identify the personnel, equipment, and facilities you'll need to complete the tests.
7. Potential problems: The test plan should list any potential issues arising during
the testing process and how they will be dealt with.
8. Approval: The test plan needs to have a clear approval process where all
stakeholders and project team members agree on the goals of the testing
effort and sign off on it.
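As a rough illustration of how these components fit together, the sketch below captures a test plan as a simple Python data structure; the field names and sample values are illustrative, not a standard format:

```python
# Illustrative only: a test plan captured as a simple data structure,
# with one field per component described above.
from dataclasses import dataclass, field

@dataclass
class TestPlan:
    goal: str                      # 1. what the testing is meant to accomplish
    scope_and_approach: str        # 2. what will be tested and how
    environment: str               # 3. hardware/software/network configuration
    schedule: str                  # 4. what happens when
    deliverables: list = field(default_factory=list)  # 5. test cases, scripts, reports
    resources: list = field(default_factory=list)     # 6. people, equipment, facilities
    risks: list = field(default_factory=list)         # 7. potential problems
    approved_by: list = field(default_factory=list)   # 8. sign-off

plan = TestPlan(
    goal="Verify the login feature meets its requirements",
    scope_and_approach="Functional tests of login/logout; black-box approach",
    environment="Ubuntu 22.04, Chrome 120, staging network",
    schedule="Week 1: test design; Week 2: execution",
    deliverables=["test cases", "test scripts", "summary report"],
    resources=["2 testers", "1 staging server"],
    risks=["staging environment may be unavailable"],
)
print(plan.goal)
```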
There are three types of test plans:
Master Test Plan: Contains multiple testing levels and has a comprehensive test
strategy.
Phase Test Plan: Tailored to address a specific phase within the overall testing
strategy.
Specific Test Plan: Explicitly designed for other testing types like performance,
security, and load testing. Simply put, it is a test plan focusing only on the non-
functional aspects.
Making a test plan is the most crucial task of the test management process. The following steps can be used to prepare a test plan.
When a project begins, project-related activities must be initiated. In project planning, a series of milestones must be established. A milestone can be defined as a recognizable endpoint of a software project activity, and at each milestone a report must be generated.
A milestone is a distinct and logical stage of the project. It is used as a signal post for the project's start and end dates, the need for external review or input, checks on the budget, submission of deliverables, and so on. It simply represents a clear sequence of events that are incrementally built up until the project is successfully completed. A milestone is generally treated as a task with zero time duration because it symbolizes an achievement or a point in time in the project; it helps signify a change or stage in development (a small sketch of this idea follows).
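Because a milestone is effectively a zero-duration task marking a point in time, it can be sketched in a simple schedule model; the class, task names, and dates below are illustrative only:

```python
# Illustrative sketch: milestones as zero-duration entries in a project schedule.
from dataclasses import dataclass
from datetime import date

@dataclass
class Task:
    name: str
    start: date
    end: date

    @property
    def is_milestone(self) -> bool:
        # A milestone has zero duration: it marks a point in time, not work.
        return self.start == self.end

schedule = [
    Task("Requirements analysis", date(2024, 1, 1), date(2024, 1, 31)),
    Task("SRS approved",          date(2024, 1, 31), date(2024, 1, 31)),  # milestone
    Task("Design",                date(2024, 2, 1), date(2024, 3, 15)),
]

for t in schedule:
    kind = "MILESTONE" if t.is_milestone else "task"
    print(f"{t.name}: {kind}")
```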
2. Walkthrough
Imagine gathering around a virtual campfire with your fellow developers, each armed
with a flashlight to shine on different corners of the codebase. That's basically what a software walkthrough is: a collective effort to understand, discuss, and review the software.
Unlike some of its more formal siblings, such as code reviews or inspections, a walkthrough is a bit more laid-back. It's an opportunity for developers, testers, and stakeholders to share insights into the software's architecture and behavior.
This section discusses what a software walkthrough is, how it works, and its importance in the software development lifecycle.
In a walkthrough, participants typically go through the various features and functionalities of the software step by step, discussing and analyzing its behavior, design, and functionality.
It can include examining the user interface, testing specific functionalities, and
reviewing the underlying code or architecture.
A software walkthrough is done for several reasons. The main reason is that the
walkthrough process contributes to the overall improvement of the software
development process and the quality of the software product.
Inspections are a formal type of review that involves checking the documents thoroughly before a meeting; they are carried out mostly by moderators. A meeting is then held to discuss the findings.
Inspection meetings can be held both physically and virtually. The purpose of these
meetings is to review the code and the design with everyone and to report any bugs
found.
Benefits
It is easier for people who have not done the implementation themselves, and who have no prior assumptions about its correctness, to find defects.
Knowledge sharing about specific software artifacts and designs.
Knowledge sharing regarding defect detection practices.
Flaws are identified at early stages.
It reduces the rework and testing effort.
Who is involved?
Moderator: The inspector responsible for organizing and reporting on the inspection.
Author: Owner of the report.
Reader: A person who guides the examination of the product.
Recorder: An inspector who notes down all the defects on the defect list.
Inspector: Member of the inspection team.
Steps of inspection
Planning
The planning phase starts when the entry criteria for the inspection state are met. A
moderator verifies that the product entry criteria are met.
Overview
In the overview phase, a presentation is given to the inspector with some
background information needed to review the software product properly.
Preparation
This is considered an individual activity. In this part of the process, the inspector
collects all the materials needed for inspection, reviews that material, and notes any
defects.
Meeting
The moderator conducts the meeting. In the meeting, the defects are collected and
reviewed.
Rework
The author performs this part of the process in response to defect disposition
determined at the meeting.
Follow-up
In follow-up, the moderator makes the corrections and then compiles the inspection
management and defects summary report.
UNIT-4
User interface design
The user interface is the front-end application view with which the user interacts in order to use the software. Users can manipulate and control software as well as hardware by means of the user interface. Today, user interfaces are found almost everywhere digital technology exists: computers, mobile phones, cars, music players, airplanes, ships, and so on.
The user interface is the part of the software that is designed to give the user insight into the software. The UI provides the fundamental platform for human-computer interaction.
A UI can be graphical, text-based, or audio/video-based, depending upon the underlying hardware and software combination. A UI can be hardware, software, or a combination of both.
The software becomes more popular if its user interface is:
Attractive
Simple to use
Responsive in short time
Clear to understand
Consistent on all interfacing screens
UI is broadly divided into two categories:
Command Line Interface
Graphical User Interface
1. Command Line Interface (CLI)
The CLI was a great tool for interacting with computers until video display monitors came into existence. The CLI is still the first choice of many technical users and programmers, and it is the minimum interface a piece of software can provide to its users.
A CLI provides a command prompt, the place where the user types a command and feeds it to the system. The user needs to remember the syntax of each command and its use. Earlier CLIs were not programmed to handle user errors effectively.
A command is a text-based reference to a set of instructions that the system is expected to execute. Mechanisms such as macros and scripts make the CLI easier for the user to operate.
A CLI uses fewer computer resources than a GUI.
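The following minimal sketch in Python illustrates the CLI ideas above: a command prompt, text commands mapped to sets of instructions, and explicit handling of user errors (which early CLIs often lacked). The command names are made up for illustration:

```python
# Minimal illustrative CLI: a prompt, a small command table,
# and explicit handling of unknown commands.
def cmd_hello() -> None:
    print("Hello!")

def cmd_help() -> None:
    print("Available commands:", ", ".join(sorted(COMMANDS)))

COMMANDS = {"hello": cmd_hello, "help": cmd_help}

def main() -> None:
    while True:
        line = input("> ").strip()           # the command prompt
        if line == "quit":
            break
        handler = COMMANDS.get(line)
        if handler is None:                  # handle the user error gracefully
            print(f"Unknown command: {line!r}. Type 'help'.")
        else:
            handler()

if __name__ == "__main__":
    main()
```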
CLI Elements: A text-based command line interface typically provides a command prompt, a cursor, and the command itself, along with its parameters.
2. Graphical User Interface (GUI)
A GUI provides a set of visual elements for the user to interact with the software, for example:
Text-Box - Provides an area for the user to type and enter text-based data.
Buttons - They imitate real-life buttons and are used to submit inputs to the software.
Human factors
The complexity of cockpit controls, for example, increases the cognitive load on pilots, leading to accidents due to human error (that's why it's called 'human' factors). Product creators, similarly, look for ways to reduce cognitive load on users.
The cockpit of an Airbus A380 looks complex to anyone who doesn't have flight experience; similarly, human factors design aims to make things easier when you're using a product or platform.
The goal is to reduce the number of mistakes that users make and produce more
comfortable interactions with a product. Human factors design is about
understanding human capabilities and limitations and then applying this knowledge
to product design. It’s also a combination of many disciplines, including psychology,
sociology, engineering, and industrial design.
Most of the human factors principles listed below come from the ISO 9241 standards
for ergonomics of human-computer interaction. The principles mentioned in this
section have one goal: helping the user engage with a product and get into a state of
‘flow’ when using it.
1. Physical ergonomics
Physical ergonomics is concerned with human physical capabilities and limitations, such as posture, reach, and fine motor control. This information helps human factors specialists design a product or device so that users can complete tasks efficiently and effectively. For example, when we apply human factors design in mobile app design, we size touch controls to minimize the risk of false actions.
Human factors design takes into account how a user interacts with the product, for example by using properly sized buttons rather than buttons that are too small.
2. Consistency
This principle states that a system should look and work the same throughout.
Consistency in design plays a key role in creating comfortable interactions. If a
product uses consistent design, a user can transfer a learned skill to other parts of the product. A small sketch of enforcing consistency in code follows the list below.
● Internal consistency – Apply the same conventions across all elements of the
user interface. For example, when you design a graphical user interface (GUI),
use the same visual appearance of UI elements throughout.
● External consistency – Use the same design across all platforms for the
product, such as desktop, mobile, and so on.
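One common way to support internal consistency in code is to define shared style constants once and reuse them on every screen, as in the illustrative sketch below (the token names and values are made up):

```python
# Illustrative "design tokens": one source of truth for UI conventions,
# reused by every screen to keep the interface internally consistent.
TOKENS = {
    "font": ("Helvetica", 12),
    "primary_color": "#1a73e8",
    "padding": 8,
}

def style_button(button_config: dict) -> dict:
    """Apply the shared conventions to any button, on any screen."""
    button_config.update(font=TOKENS["font"],
                         bg=TOKENS["primary_color"],
                         padx=TOKENS["padding"])
    return button_config

# Two different screens get identical-looking buttons:
print(style_button({"text": "Save"}))
print(style_button({"text": "Delete"}))
```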
3. Familiarity
The principle of familiarity states the importance of using familiar concepts and
metaphors in the design of a human-computer interface. The design industry loves
innovation, and it’s very tempting for designers to create something new and
unexpected. But at the same time, users love familiarity. As they spend time using
products other than ours (Jakob’s Law of Internet User Experience), they become
familiar with standard design conventions and come to expect them.
Designers who reinvent the wheel and introduce unusual concepts increase the
learning curve for their users. When the usage isn’t familiar, users have to spend extra
time learning how to interact with your product. To combat this, strive for
intuitiveness by using patterns that people are already familiar with.
4. Efficiency
Users should be able to complete their tasks in the shortest possible time. As a
designer, it’s your job to reduce the user’s cognitive load; that is, it shouldn’t require a ton of brain power to interact with the product.
● Break down complex tasks into simple steps. By doing that, you can reduce
the complexity and simplify decision-making.
● Reduce the number of operations required to complete the task. Remove all
extra actions and make navigation paths as short as possible. Make sure your
user can dedicate all their time (and brainpower) to the task at hand, not the
interface of a product.
● Guide the user. Guide your user to learn how to use the system by giving them
all information upfront. Anticipate places where users might need extra help.
● Offer shortcuts. For seasoned users, it’s important to offer shortcuts that can
improve their productivity. An example would be keyboard shortcuts that help
users complete certain operations without using a mouse.
5. Error management
To err is human. But that doesn’t mean your users like it! The way a system handles
errors has a tremendous impact on your users. This includes error prevention, error correction, and helping your user get back on track when an error does occur; a minimal sketch follows the list below.
● Prevent errors from occurring whenever possible. Create user journeys and
analyze them to identify places in which users might face troubles.
● Protect users from making fatal errors. Create defensive layers that prevent
users from getting fatal error states. For example, design system dialogs that
ask users to confirm their action (such as deleting files or their entire account).
● When an error does occur, provide messages that help users solve the
problem.
● Never blame users. If you practice user-centered design, you know that it’s not
the user’s fault; instead, it’s your design flaws that lead users to make mistakes.
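The following minimal sketch illustrates the "protect users from fatal errors" and "never blame users" points: a destructive operation guarded by an explicit confirmation, with a helpful message on the non-confirmed path. The function names are illustrative:

```python
# Illustrative defensive layer: a destructive action requires confirmation,
# and a declined confirmation produces a helpful, blame-free message.
def confirm(prompt: str) -> bool:
    answer = input(f"{prompt} Type 'yes' to confirm: ").strip().lower()
    return answer == "yes"

def delete_account(username: str) -> None:
    if not confirm(f"This will permanently delete the account '{username}'."):
        # Never blame the user; explain how to proceed instead.
        print("Nothing was deleted. Run the command again and type 'yes' to confirm.")
        return
    print(f"Account '{username}' deleted.")

delete_account("alice")
```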
Human-Computer Interaction (HCI)
The emergence of HCI dates back to the 1980s, when personal computing was on the
rise. It was when desktop computers started appearing in households and corporate
offices. HCI’s journey began with video games, word processors, and numerical units.
However, with the advent of the internet and the explosion of mobile and diversified
technologies such as voice-based and Internet of Things (IoT), computing became
omnipresent and omnipotent. Technological competence further led to the evolution
of user interactions. Consequently, the need for developing a tool that would make
such man-machine interactions more human-like grew significantly. This established
HCI as a technology, bringing different fields such as cognitive engineering,
linguistics, neuroscience, and others under its realm.
Components of HCI
1. The user
A user operates a computer system with an objective or goal in mind, and the computer provides a digital representation of objects to accomplish this goal. For example, booking an airline ticket could be a task for an aviation website.
3. The interface
The interface is a crucial HCI component that can enhance the overall user
interaction experience. Various interface-related aspects must be considered, such as
interaction type (touch, click, gesture, or voice), screen resolution, display size, or
even color contrast. These can be adjusted depending on the user’s needs and requirements.
For example, consider a user visiting a website on a smartphone. In such a case, the
mobile version of the website should only display important information that allows
the user to navigate through the site easily. Moreover, the text size should be
appropriately adjusted so that the user is in a position to read it on the mobile
device. Such design optimization boosts user experience as it makes them feel
comfortable while accessing the site on a mobile phone.
4. The context
HCI is not only about providing better communication between users and computers
but also about factoring in the context and environment in which the system is
accessed. For example, while designing a smartphone app, designers need to
evaluate how the app will visually appear in different lighting conditions (during day
or night) or how it will perform when there is a poor network connection. Such
aspects can have a significant impact on the end-user experience.
Importance of HCI
HCI is crucial in designing intuitive interfaces that people with different abilities and
expertise usually access. Most importantly, human-computer interaction is helpful for
communities lacking knowledge and formal training on interacting with specific
computing systems.
With efficient HCI designs, users need not consider the intricacies and complexities of
using the computing system. User-friendly interfaces ensure that user interactions are
clear, precise, and natural.
1. HCI in daily lives
Today, technology has penetrated our routine lives and has impacted our daily
activities. To experience HCI technology, one need not own or use a smartphone or
computer. When people use an ATM, food dispensing machine, or snack vending
machine, they inevitably come in contact with HCI. This is because HCI plays a vital
role in designing the interfaces of such systems that make them usable and efficient.
2. Industry
Industries that use computing technology for day-to-day activities tend to consider
HCI a necessary business-driving force. Efficiently designed systems ensure that
employees are comfortable using the systems for their everyday work. With HCI,
systems are easy to handle, even for untrained staff.
HCI is critical for designing safety systems such as those used in air traffic control
(ATC) or power plants. The aim of HCI, in such cases, is to make sure that the system
is accessible to any non-expert individual who can handle safety-critical situations if
the need arises.
3. Accessible to disabled
The primary objective of HCI is to design systems that make them accessible, usable,
efficient, and safe for anyone and everyone. This implies that people with a wide
range of capabilities, expertise, and knowledge can easily use HCI-designed systems.
It also encompasses people with disabilities. HCI tends to rely on user-centered
techniques and methods to make systems usable for people with disabilities.
4. An integral part of software success
HCI is an integral part of software development companies that develop software for
end-users. Such companies use HCI techniques to develop software products to
make them usable. Since the product is finally consumed by the end-user, following
HCI methods is crucial as the product’s sales depend on its usability.
Examples of HCI
Technological development has brought to light several tools, gadgets, and devices
such as wearable systems, voice assistants, health trackers, and smart TVs that have
advanced human-computer interaction technology.
Let’s look at some prominent examples of HCI that have accelerated its evolution.
1. IoT technology
IoT devices and applications have significantly impacted our daily lives. According to a May 2022 report by IoT Analytics, global IoT endpoints are expected to reach 14.4 billion in 2022 and to grow to approximately 27 billion by 2025. As users interact with such devices, the devices collect data that helps in understanding different user interaction patterns. Based on this data, IoT companies can make critical business decisions that can eventually drive their future revenues and profits.
2. Eye-tracking technology
Eye-tracking is about detecting where a person is looking based on the gaze point.
Eye-tracking devices use cameras to capture the user’s gaze along with some
embedded light sources for clarity. Moreover, these devices use machine learning
algorithms and image processing capabilities for accurate gaze detection.
Businesses can use such eye-tracking systems to monitor their personnel’s visual
attention. It can help companies manage distractions that tend to trouble their
employees, enhancing their focus on the task. In this manner, eye-tracking
technology, along with HCI-enabled interactions, can help industries monitor the
daily operations of their employees or workers.
3. Speech recognition technology
Speech recognition technology interprets human language, derives meaning from it,
and performs the task for the user. Recently, this technology has gained significant
popularity with the emergence of chatbots and virtual assistants.
For example, products such as Amazon’s Alexa, Microsoft’s Cortana, Google’s Google
Assistant, and Apple’s Siri employ speech recognition to enable user interaction with
their devices, cars, etc. The combination of HCI and speech recognition further fine-
tune man-machine interactions that allow the devices to interpret and respond to
users’ commands and questions with maximum accuracy. It has various applications,
such as transcribing conference calls, training sessions, and interviews.
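As a hedged illustration, the sketch below uses the third-party SpeechRecognition package for Python (installable with pip install SpeechRecognition) to transcribe a recorded call; the file name meeting.wav is a placeholder, and the default Google Web Speech backend needs network access:

```python
# Illustrative sketch using the third-party SpeechRecognition package
# (pip install SpeechRecognition). "meeting.wav" is a placeholder file name.
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.AudioFile("meeting.wav") as source:      # e.g. a recorded conference call
    audio = recognizer.record(source)            # read the entire file

try:
    text = recognizer.recognize_google(audio)    # Google Web Speech API backend
    print("Transcript:", text)
except sr.UnknownValueError:
    print("Speech was unintelligible.")
except sr.RequestError as e:
    print("Recognition service unavailable:", e)
```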
4. AR/VR technology
AR and VR are immersive technologies that allow humans to interact with the digital
world and increase the productivity of their daily tasks. For example, smart glasses
enable hands-free and seamless user interaction with computing systems. Consider
an example of a chef who intends to learn a new recipe. With smart glass technology,
the chef can learn and prepare the target dish simultaneously.
5. Cloud computing
Today, companies across different fields are embracing remote task forces. According
to a ‘Breaking Barriers 2020’ survey by Fuze (An 8×8 Company), around 83% of
employees feel more productive working remotely. Considering the current trend,
conventional workplaces will witness a massive rejig and transform entirely in a
couple of decades. Thanks to cloud computing and human-computer interaction,
such flexible offices have become a reality.
Goals of HCI
The principal objective of HCI is to develop functional systems that are usable, safe,
and efficient for end-users. The developer community can achieve this goal by
fulfilling the following criteria:
● Design methods, techniques, and tools that allow users to access systems
based on their needs
● Adjust, test, refine, validate, and ensure that users achieve effective
communication or interaction with the systems
● Always give priority to end-users and lay a robust foundation for HCI
1. Usability
Usability is key to HCI as it ensures that users of all types can quickly learn and use
computing systems. A practical and usable HCI system has the following
characteristics:
● Easy to learn and use: The system should be easy for new and infrequent users to learn and remember how to use. For example, operating systems with a user-friendly graphical interface are easier to understand than operating systems, such as DOS, that use a command-line interface.
● Efficient: Efficiency describes how well the system accomplishes the tasks it is supposed to and how well it supports users in completing those tasks.
● Enjoyable: Users find the computing system enjoyable to use when the
interface is less complex to interpret and understand.
2. User experience
User experience is a subjective trait that focuses on how users feel about the
computing system when interacting with it. Here, user feelings are studied
individually so that developers and support teams can target particular users to
evoke positive feelings while using the system.
HCI systems classify user interaction patterns into categories and refine the system further based on the detected patterns.
3. Takeaway
Cleverly designed computer interfaces motivate users to use digital devices in this
modern technological age. HCI enables a two-way dialog between man and machine.
Such effective communication makes users believe they are interacting with human
personas and not any complex computing system. Hence, it is crucial to build a
strong foundation of HCI that can impact future applications such as personalized
marketing, eldercare, and even psychological trauma recovery.
Principles of Human-Computer Interface design
Early Focus on the User and Task: Establish how many users are needed to perform the task and determine who the appropriate users should be (someone who has never used the interface, and will not use it in the future, is most likely not a valid user). In addition, define the task the users will be performing and how often it needs to be performed.
Empirical Measurement: The interface is tested with real users who use the interface on a daily basis. Results can vary with the performance level of the user, and the typical human-computer interaction may not always be represented. Quantitative usability specifics, such as the number of users performing the task, the time to complete the task, and the number of errors made during the task, are determined (a small sketch of such measurements follows the next list).
Iterative Design: After determining the users, tasks, and empirical measurements to include, the following iterative design steps are performed:
1. Design the user interface
2. Test
3. Analyze results
4. Repeat
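The quantitative measurements mentioned under Empirical Measurement reduce to simple arithmetic over logged test sessions, as the illustrative sketch below shows (the session data is invented):

```python
# Illustrative usability metrics from hypothetical test sessions:
# each tuple is (seconds to complete the task, number of errors made).
sessions = [(42.0, 1), (55.5, 0), (38.2, 2), (61.0, 1)]

times = [t for t, _ in sessions]
errors = [e for _, e in sessions]

mean_time = sum(times) / len(times)
error_rate = sum(errors) / len(sessions)          # average errors per session
success_rate = sum(1 for e in errors if e == 0) / len(sessions)

print(f"Users tested:        {len(sessions)}")
print(f"Mean time on task:   {mean_time:.1f} s")
print(f"Errors per session:  {error_rate:.2f}")
print(f"Error-free sessions: {success_rate:.0%}")
```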
Methodologies
Activity Theory: Used in HCI to define and study the context in which human interaction with computers takes place. Activity theory provides a framework for reasoning about actions in these contexts and informs the design of interactions from an activity-driven perspective.
User-Centered Design (UCD): A modern, widely practiced design philosophy rooted in the idea that users must take center stage in the design of any computer system. Users, designers, and technical practitioners work together to specify the needs and limitations of the user and create a system to address these elements. User-centered designs are often informed by ethnographic studies of the environments in which users will interact with the system. This practice is similar to participatory design, which emphasizes the possibility for end-users to contribute actively through shared design sessions and workshops.
Principles of UI Design: These principles may be considered during the design of a user interface: tolerance, simplicity, visibility, affordance, consistency, structure, and feedback.
Value Sensitive Design (VSD): A method for building technology that accounts for the people who use the design directly, as well as those whom the design affects, either directly or indirectly. VSD uses an iterative design process that involves three kinds of investigations: conceptual, empirical, and technical.