HCI Unit 4 NOTES
UNIT 4
1. Discuss HCI in the software process.
Figure 1: The activities in the waterfall model of the software life cycle
A. Requirements Specification:
- Focuses on capturing what the eventual system will provide, not how it will
provide it.
- Involves eliciting information from the customer about the work environment
and functions of the software product.
- Starts at the beginning of product development and requires formulating
customer requirements in a language suitable for implementation.
- Transforms requirements expressed in natural language into more precise, and
eventually executable, languages.
Architectural Design:
- Concentrates on how the system provides expected services.
- Involves high-level decomposition of the system into components and
describing interdependencies between them.
- Utilizes structured techniques to derive architectural descriptions.
- Typically focuses on capturing functional requirements but may not
immediately address non-functional requirements like efficiency, reliability, and
safety.
Detailed Design:
- Refines the component descriptions provided by architectural design.
- Ensures that the behaviour implied by the higher-level description is
preserved.
- Involves selecting the best refinement to satisfy as many non-functional
requirements as possible.
Coding and Unit Testing:
- Involves implementing components based on detailed design in an executable
programming language.
- Each component is tested to verify that it performs correctly according to
predefined test criteria (see the sketch below).
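To make this step concrete, here is a minimal unit-test sketch in Python. The component (compute_discount) and its test criteria are hypothetical, invented for illustration; the point is simply that each predefined criterion becomes an automated check that must pass before integration.

import unittest

def compute_discount(price: float, rate: float) -> float:
    # Hypothetical component from the detailed design: apply a
    # percentage discount to a price.
    if not 0.0 <= rate <= 1.0:
        raise ValueError("rate must be between 0 and 1")
    return round(price * (1.0 - rate), 2)

class TestComputeDiscount(unittest.TestCase):
    # Each test encodes one predefined criterion the component must meet.
    def test_typical_discount(self):
        self.assertEqual(compute_discount(100.0, 0.25), 75.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(compute_discount(80.0, 0.0), 80.0)

    def test_invalid_rate_is_rejected(self):
        with self.assertRaises(ValueError):
            compute_discount(50.0, 1.5)

if __name__ == "__main__":
    unittest.main()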
Integration and Testing:
- Involves integrating implemented components and further testing to ensure
correct behaviour.
- Acceptance testing with customers may be performed to ensure the system
meets its requirements.
- System is released to the customer after acceptance.
Maintenance:
- All work on the system after release is considered maintenance.
- Involves correcting errors discovered after release and revising system
services to meet new requirements.
- Provides feedback to all other activities in the software life cycle.
• Styles of evaluation —
In terms of the style of evaluation there are two main options: studies performed under
laboratory conditions and studies conducted in the work environment, or ‘in the field’.
Laboratory studies : In the first type of evaluation studies, users are taken out of their
normal work environment to take part in controlled tests, often in a specialist usability
laboratory.
Field studies : The second type of evaluation takes the designer or evaluator out into the
user’s work environment in order to observe the system in action.
• Observational techniques —
A popular way to gather information about actual use of a system is to observe users
interacting with it. Usually they are asked to complete a set of predetermined tasks,
although, if observation is being carried out in their place of work, they may be observed
going about their normal duties.
Think aloud and cooperative evaluation :- Think aloud is a form of observation where the
user is asked to talk through what he is doing as he is being observed; for example,
describing what he believes is happening, why he takes an action, what he is trying to do.
Protocol analysis :- Methods for recording user actions so that they can be analysed later.
The following methods can be used to record a user’s activity:
— Paper and pencil
— Audio recording
— Video recording
— Computer logging, etc.
In practice, a combination of these methods is most often used (a minimal computer-logging
sketch is given below).
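As an illustration of computer logging, here is a minimal sketch in Python. The class name, file format and event names are hypothetical; the idea is only that every user action is appended to a file with a timestamp so the protocol can be analysed or replayed later.

import json
import time

class InteractionLogger:
    # Appends one timestamped record per user action to a log file.
    def __init__(self, path: str):
        self.path = path

    def log(self, event: str, **details) -> None:
        record = {"t": time.time(), "event": event, **details}
        with open(self.path, "a") as f:
            f.write(json.dumps(record) + "\n")

# Usage: instrument the interface so each action produces one record.
logger = InteractionLogger("session.log")
logger.log("key_press", key="s")
logger.log("menu_select", menu="File", item="Save")
logger.log("error_dialog", message="Disk full")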
Automatic protocol analysis tools :- Analysing protocols, whether video, audio or system
logs, is time consuming and tedious by hand. It is made harder if there is more than one
stream of data to synchronize. One solution to this problem is to provide automatic analysis
tools to support the task. These offer a means of editing and annotating video, audio and
system logs and synchronizing these for detailed analysis.
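At the core of such tools is the synchronization of separate streams onto a single timeline. The Python sketch below uses hypothetical data and field names: each stream is a list of timestamped annotations, already sorted, and the streams are merged by timestamp for side-by-side inspection.

import heapq

# Three protocol streams, each already sorted by timestamp (seconds).
system_log  = [(2.1, "system", "key press: s"), (5.4, "system", "menu: Save")]
audio_notes = [(3.0, "audio", "user sounds unsure"), (6.2, "audio", "sigh")]
video_marks = [(5.5, "video", "user looks at manual")]

# heapq.merge interleaves the sorted streams into one ordered timeline.
for t, stream, note in heapq.merge(system_log, audio_notes, video_marks):
    print(f"{t:5.1f}s  [{stream}]  {note}")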
Post-task walkthroughs :- Data obtained by direct observation often lack interpretation: we
have the basic actions that were performed, but little knowledge as to why. Even where the
participant has been encouraged to think aloud through the task, the information may be at
the wrong level. In a post-task walkthrough, the transcript or recording is therefore replayed
to the participant, who is asked to comment on and explain his actions retrospectively.
• Query techniques —
Another set of evaluation techniques relies on asking the user about the interface directly.
Query techniques can be useful in eliciting detail of the user’s view of a system. They
embody the philosophy that the best way to find out how a system meets user
requirements is to ‘ask the user’. They can be used in evaluation and more widely to collect
information about user requirements and tasks. There are two main types of query
technique:
Interviews — Interviewing users about their experience with an interactive system provides
a direct and structured way of gathering information.
Questionnaires — An alternative method of querying the user is to administer a
questionnaire. This is clearly less flexible than the interview technique, since questions are
fixed in advance, and it is likely that the questions will be less probing.
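One consequence of fixing the questions in advance is that responses can be tallied automatically across many users. The sketch below shows this in Python; the questions, ratings and the 1-to-5 agreement scale are all assumed for illustration.

from statistics import mean

# Each respondent rates fixed statements from 1 (disagree) to 5 (agree).
responses = [
    {"easy_to_learn": 4, "error_messages_helpful": 2},
    {"easy_to_learn": 5, "error_messages_helpful": 3},
    {"easy_to_learn": 3, "error_messages_helpful": 1},
]

for question in responses[0]:
    scores = [r[question] for r in responses]
    print(f"{question}: mean {mean(scores):.2f} over {len(scores)} respondents")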
The different methods of evaluation fall under two broad categories: evaluation through
expert analysis and evaluation through user participation.
Evaluation through expert analysis
Expert analysis is practical when the designer lacks the resources to involve users. Evaluation
through inspection by experts can be done through the following methods: