6
Task analysis
Ashley French, Leah K. Taylor, Melissa R. Lemke
Agilis Consulting Group, LLC., Cave Creek, AZ, United States
“Task analysis started by understanding the demands of tasks that need to be performed by an
operator, so that for each task the key human capabilities and limitations could be derived.” Bisantz
and Burns (2008)
1. Introduction
A task analysis is a process for identifying a user's goal, the steps that must be completed to
achieve that goal, the experiences (personal, social, and cultural) users bring to the
tasks, and how the environment might influence the user. It is an important design tool that
can be used effectively early in the design process (i.e., before the creation of device
prototypes) to help inform the use-related design of user interface (UI) components.
Throughout the design process, a task analysis can also serve as a fundamental framework
for the development of use-related risk documentation (e.g., uFMEA or Fault Tree Analysis),
human factors protocol development, and usability evaluation. It can highlight elements of
the user-device interactions that could be problematic for users, which provides designers op-
portunities early and throughout the product development process to implement risk mitiga-
tions proactively, ultimately saving time and manufacturing costs.
Additionally, a task analysis is a critical input for instructional designers while creating the
device’s instructional materials (e.g., Instructions for Use (IFU), quick reference guide (QRG),
user manual or training materials). The instructional materials accompanying a device aim to
guide accurate, safe and effective user performance. As the task analysis defines and de-
scribes what constitutes that performance, ultimately, good instructional materials cannot
be developed without a comprehensive task analysis.
As defined by HE75 3.94 (ANSI/AAMI HE75, 2009), a task analysis is, “a set of systematic
methods that produces detailed descriptions of the sequential and simultaneous manual and
intellectual activities of personnel who are operating, maintaining, or controlling devices or
systems.” The FDA recognizes that a task analysis is an important type of preliminary anal-
ysis that can provide insight into the following questions (CDRH guidance, 2016):
• What use errors might users encounter for each task?
• What circumstances might promote users to make use errors on each task?
• What harm might result from each use error?
• How might the occurrence of each use error be prevented or made less frequent?
• How might the severity of the potential harm associated with each use error be reduced?
A complete and accurate task analysis requires a systematic process to break down neces-
sary device operations into a hierarchical sequence of definable tasks that are intended to be
completed by the device user.
This chapter describes the process of conducting a task analysis, provides examples and
promotes its use as a fundamental requirement in the application of human factors in medical
device design. It is one of the most important tools in the Human Factors Toolkit and is rele-
vant to virtually all aspects of the design process including the following:
• Verification that function allocation is appropriate
• As input to the risk analysis process
• Analysis of errors, potential errors and critical incidents as part of post-market
surveillance
• Highlighting potential design problems and crafting solutions
• Developing formative and summative usability evaluations
• Designing instructions for use, quick start guides and training materials
The methods and techniques for conducting task analysis have been devised in almost
every field of endeavor involving human performance. Further, many techniques known
by other names (e.g., Fault Tree Analysis, Link Analysis, Failure Modes and Effects Analysis
(FMEA)) are often considered special types of task analysis by human factors professionals
with the main reference text being Kirwan and Ainsworth's "A Guide to Task Analysis"
(1992).
2. Overall process
Task analysis development is a systematic process that produces a coherent “road map” to
device-user interactions. A use case is the first input necessary for developing a task analysis.
Use cases are representative situations that describe how intended users will use the device in
actual use environments. These use cases provide input to and guide the development of the
task analysis by identifying specific tasks that users need to complete in order to successfully
use the device in those specific situations. The next steps for developing a task analysis
involve task identification, sub-task breakdown and application of the PCA model (i.e., the
perception, cognition, and manual action requirements model). The PCA model is recommended
in CDRH guidance (2016); however, this model can be limited for devices and
systems that have multiple simultaneous users where coordination and communication tasks
are involved (e.g., complex surgical systems or electrophysiology technologies).
Once the tasks and sub-tasks have been accurately described, potential use errors are iden-
tified along with associated harms and severity. Fig. 6.1 illustrates the task analysis process
starting from identifying use cases (Step 1 in the process) and in some cases (uFMEA) ending
with determining potential harms and severity of harm (Step 6 in the process).
The terminology used in a task analysis may vary across device teams. The terminology
used in this chapter includes terms such as "use cases," "tasks," and "sub-tasks." Although
terminology may change slightly or have somewhat different meanings across manufac-
turers, it is important for a manufacturer to use a systematic and consistent hierarchy
when creating and utilizing a task analysis and associated terminology. Table 6.1 helps illus-
trate the differences between use cases, tasks, and sub-tasks for an infusion pump.
TABLE 6.1 Example use case, task identification, and sub-task breakdown for an infusion pump.

Use case: Representative situation that describes how intended users will use the device in the
actual use environment.
Examples:
• Device set-up
• Troubleshooting
• Programming therapy
• Monitoring status
• Maintenance
• Shut-down

Task: Individual task(s) for the high-level functions, or an action or set of actions performed
by a user to achieve a specific goal (CDRH guidance, 2016).
Examples:
• Enter patient data
• Enter dosage concentration
• Enter start time

Sub-task: Interaction steps (i.e., individual steps for carrying out a task in the order of
their performance).
Examples:
• Press power button
• Verify pump on
• Select dosage entry screen
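
For teams that keep the task analysis in a structured, machine-readable form, the use case /
task / sub-task hierarchy in Table 6.1 maps naturally onto nested records. The sketch below is
illustrative only, assuming Python dataclasses; the class and field names, and the grouping of
the example sub-tasks under a single task, are not prescribed by any guidance.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SubTask:
    """Interaction step carried out in the order of its performance."""
    description: str          # e.g., "Press power button"

@dataclass
class Task:
    """Action or set of actions performed by a user to achieve a specific goal."""
    description: str          # e.g., "Enter patient data"
    sub_tasks: List[SubTask] = field(default_factory=list)

@dataclass
class UseCase:
    """Representative situation describing device use in the actual use environment."""
    name: str                 # e.g., "Programming therapy"
    tasks: List[Task] = field(default_factory=list)

# Infusion pump example drawn from Table 6.1 (grouping of sub-tasks is illustrative)
programming_therapy = UseCase(
    name="Programming therapy",
    tasks=[
        Task("Enter patient data"),
        Task("Enter dosage concentration",
             sub_tasks=[SubTask("Press power button"),
                        SubTask("Verify pump on"),
                        SubTask("Select dosage entry screen")]),
        Task("Enter start time"),
    ],
)
```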
Different users of the device may encounter the same or different use cases. Consider the
previous example of an infusion pump. A health care professional may have a use case to
program the pump, and a lay patient may have a use case to initiate a pre-programmed bolus
of medication. When identifying use cases it is important to consider the different users
throughout the life cycle of a device. When user-device interactions differ across intended
user groups, the task analysis should include all tasks and specify the users who complete
specific tasks in the analysis.
When identifying tasks and breaking them down, several questions should be considered:
• Is the order of tasks important?
• If the order of tasks is important, sequencing should be documented in the task
analysis. For example, most devices must be powered on before any other tasks can be
completed.
• What happens if someone skips a task? If the task is skipped, is the task optional?
• There may be a task that is inadvertently missed or intentionally skipped by a user.
Some tasks are critical to the safe and effective use of the device, while others are
included as steps in the logical workflow. Consideration for tasks that fall into these
categories should be documented in the task analysis. For example, an optional task
may be to pinch the skin around an injection site to administer therapy.
• Is timing a critical component?
• Some devices include timing as part of the safe and effective use of the device. The
timing may be related to warm up time of medication, length of holding an injection,
or wait time for a diagnostic result to appear. If timing is determined to be relevant
to the safe and effective use of the device, the time requirements should be integrated
into the task analysis in an observable and measurable way.
• Are there alternative ways/shortcuts to complete a task that are acceptable or
unacceptable?
• For some tasks, there may be multiple ways to accomplish the same end goal. An
ethnographic study may provide insight into these alternative paths or shortcuts. If
alternative methods are acceptable they should be reflected in the task analysis. If a
shortcut is unacceptable, it should be noted as a potential use error and mitigations
incorporated in an effort to eliminate the occurrence.
Additionally, when breaking down tasks into sub-tasks, it is important to integrate order
and timing (when applicable) in an observable and quantifiable way. Verbiage should be
concise, consistent (with no ambiguity) and measurable (which becomes extremely important
when task analysis guides usability testing). For example, terms such as “after” and “before”
should be included when order is important. When timing is critical, measurable metrics
should be included. For example, instead of “hold until the medication has been delivered,”
provide a metric such as “hold for 5 s until the medication is no longer visible in the win-
dow.” While the above provides limited examples, the text “Taxonomies of Human Perfor-
mance” (1984) by Fleishman and Quaintance provides more comprehensive examples of
task descriptions.
The precision of wording included in task analysis can influence the quality of the analysis.
It is important to avoid using words that are ambiguous and will be difficult to observe or
measure (e.g., “adequate,” “well,” or “enough”). Instead define what is meant by ambiguous
words with as much detail as possible that describes the specific UI components and the ex-
pected user-device interactions. The more specific and stand-alone the sub-tasks can be, the
more useful a task analysis will be as an early design tool as well as a framework for subse-
quent human factors data collection and analysis. Table 6.3 provides examples of original
task phrasing and suggested improvements to task phrasing, which also includes the separa-
tion of tasks into more discrete sub-tasks. Note that the specificity and phrasing included in a
task analysis might lead to development of instructional materials that provide the user with
similarly concise instructions, although this is not always the case.
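
Because the value of a task analysis as a testing framework depends on observable, measurable
wording, some teams add a simple automated review of sub-task phrasing. The sketch below is a
minimal, hypothetical check; the word lists and rules are examples only and would need to be
tailored to the device and its task analysis.

```python
import re

# Ambiguous terms that should be replaced with observable criteria (illustrative list)
AMBIGUOUS_TERMS = {"adequate", "adequately", "well", "enough", "properly", "sufficient"}
# Words that imply timing; if present, the phrasing should include a measurable duration
TIMING_CUES = {"hold", "wait", "until"}

def review_subtask_wording(text: str) -> list:
    """Return a list of review comments for one sub-task description."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    comments = []
    for term in sorted(words & AMBIGUOUS_TERMS):
        comments.append(f'Ambiguous term "{term}": define the observable criterion.')
    has_duration = bool(re.search(r"\d+\s*(s|sec|seconds|min|minutes)\b", text.lower()))
    if words & TIMING_CUES and not has_duration:
        comments.append("Timing implied but no measurable duration is given.")
    return comments

# Example from the text: vague phrasing vs. the suggested improvement
print(review_subtask_wording("Hold until the medication has been delivered."))
# -> flags the missing, measurable duration
print(review_subtask_wording(
    "Hold for 5 s until the medication is no longer visible in the window."))
# -> [] (no comments)
```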
TABLE 6.3 Examples of original task phrasing and suggested improvements to task phrasing.

Define sub-tasks:
1. Take a strong, deep breath for 3 seconds.
2. Continue breathing until the indicator changes from green to gray.
Notes: Separated original task into discrete sub-tasks, since some users may have issues with
the first sub-task, which requires the user to perform an action for a specific period of time
(i.e., take a strong, deep breath for 3 seconds), and others may have issues with the second
sub-task, which relies on a specific design element to provide feedback to the user (i.e.,
continue breathing until the indicator changes from green to gray). In addition, breaking the
task into discrete sub-tasks will allow more detailed observations of UI elements and user
performance if usability testing is conducted.

Original task: Open the medicine.
Define sub-tasks:
1. Tear off strip of foil along perforation (bottom of packaging).
2. Tear medication pack along perforation to separate one capsule segment from the rest of the
packaging.
3. Press capsule through foil to remove it from individual packaging.
Notes: Separated original task into discrete sub-tasks that focus on different aspects of the
packaging design that may impact user-device interactions (e.g., perforations in the medication
pack, individual package segment containing the capsule).

Original task: Mix suspension well.
Identify precise wording:
• Mix until no particles are visible in the suspension.
Notes: Updated wording to indicate how "well" is defined in the original task wording.

Original task: Adequately press and hold green button.
Identify precise wording:
• Press and hold green button for 5 seconds until the light stops blinking.
Notes: Updated wording to indicate how "adequately" is defined in original task wording.

Original task: Decompress the guard deep enough.
Identify precise wording:
• Decompress the guard so that the yellow guard is no longer visible.

Identify precise wording:
• Twist until a click is heard and the cap can tighten no more.
Notes: Updated wording to indicate how "closed" is defined in the original task wording.

It is important to capture all tasks for a device, including tasks and sub-tasks that are
associated with observable performance data and those that are not observable (i.e., knowledge
tasks). For example, understanding instructions on proper storage and understanding an
expiration date may not be possible to observe during a simulated use scenario. However, if the
task is included in the task analysis and is identified as critical, it can be assessed through
a knowledge task question. Knowledge tasks are represented in the task analysis and are often
phrased as "Understand [critical information content]" (e.g., "Understand the device should be
stored in the refrigerator" or "Understand the device should be stored in the original
carton"). If these critical non-observable knowledge tasks are omitted during the task analysis
process, gaps will exist with respect to use-related risk and the user interface.
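
If the task analysis is maintained in a structured form, non-observable knowledge tasks can be
carried alongside observable sub-tasks so they are not dropped when planning evaluations. A
minimal sketch with hypothetical field names: each knowledge task records the critical
information content and the knowledge task question used to assess it.

```python
from dataclasses import dataclass

@dataclass
class KnowledgeTask:
    """Critical task assessed through a knowledge task question rather than observation."""
    content: str              # the critical information the user must understand
    assessment_question: str  # how comprehension is probed during usability testing
    critical: bool = True

# Example based on the storage knowledge task described in the text
storage_task = KnowledgeTask(
    content="Understand the device should be stored in the refrigerator",
    assessment_question="Where should the device be stored between uses?",
)
```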
2.4 Step four: apply the perception, cognition, and manual action
(PCA) model
A complete task analysis is fundamental to user interface optimization, use error predic-
tion and prevention, and determination of the user’s interactions with the device relative
to task requirements. These user task requirements include user actions, user perception of
information and user cognitive processes (Sharit, 2006). A Perception, Cognition, and Manual
Action (PCA) model is an FDA recommended strategy for task analysis that is used to iden-
tify user-device interactions and characterize user capabilities. Applying PCA to a task anal-
ysis adds specific user requirements that lend support for identification of potential use errors
and root cause analysis during human factors testing. This model identifies user actions
related to the perceptual inputs, cognitive processing, and physical actions involved in the
task (CDRH guidance, 2016). Table 6.4 highlights task types according to the PCA model; it is
intended to provide examples and is not an exhaustive list of task types.

TABLE 6.4 Task type examples in the PCA model.
In more complex systems, there may be a fourth category of tasks which is Communica-
tion (people to people). For example, this includes the following task descriptors: informing,
requesting, directing, advising, or querying.
Once sub-tasks are identified, the PCA model is used to further break down the sub-tasks
to identify user requirements to complete each sub-task. For each sub-task, user requirements
related to perceptual inputs, cognitive processing, and manual actions necessary to perform
the sub-task should be documented.
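
In a structured task analysis, the PCA breakdown can be recorded as three lists of user
requirements attached to each sub-task. The sketch below uses a sub-task from the infusion pump
example in Table 6.6; the class and field names are illustrative and not part of the CDRH
guidance.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PCABreakdown:
    """User requirements needed to complete one sub-task (perception, cognition, action)."""
    perception: List[str] = field(default_factory=list)  # what the user must see or hear
    cognition: List[str] = field(default_factory=list)   # what the user must know or understand
    action: List[str] = field(default_factory=list)      # what the user must physically do

# Infusion pump sub-task "Select continuous therapy" from Table 6.6
select_continuous_therapy = PCABreakdown(
    perception=["See menu labels"],
    cognition=["Read and understand menu labels"],
    action=["Select menu label to program continuous therapy"],
)
```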
FIG. 6.2 Model of the operational context between a user (PCA components), device, and user
interface. Adapted from FDA HF Guidance (CDRH, 2016).

Fig. 6.2 describes the user-device interface relationship. Risk occurs when any of the
perception, cognition, or action tasks are difficult, confusing, or impossible for the user. For
example:
• If perceptual information, such as warning labels or directions, is not seen or heard by
the user (perceptual requirements), this information will not be available for the user to
understand (cognitive processing).
Consider a wearable cardiovascular device that includes an alarm to warn the user to
change the battery. If the user is unable to hear the alarm sound, he/she will not be alerted
to take the necessary action(s) to change the battery, which could have fatal consequences.
• If a user correctly perceives the information from the device or labeling, but has diffi-
culty understanding what that information means, the necessary action may be missed
or done incorrectly.
For example, if a user with a wearable cardiovascular device hears the alarm but does not
understand that it means to change the battery, the user may take no action or the wrong ac-
tion in response to the alarm.
• If the action step requires users to act more quickly or forcefully than they are physically
capable of doing, this interaction will not occur.
The estimated probability of occurrence of a problem is not always accurate, and many use
errors are not anticipated until device use is simulated and user interaction with the device is
observed, or even later once the product is released and the manufacturer observes post-
market problems. Therefore, severity and potential harm are preferred measures for
determining if user interface modifications are required to reduce or eliminate harm (ANSI/
AAMI 14971, 2010) and should be included in a task analysis.
Severity levels are determined and justified by the manufacturer for a particular medical
device under clearly defined conditions of use (ANSI/AAMI 14971, 2010). An example range
of severity levels provided by ANSI/AAMI 14971 is described in Table 6.5.
Once severity levels are identified for all tasks or sub-tasks in the task analysis, tasks can be
assigned a task category. CDRH defines a critical task as, “A user task which, if performed
incorrectly or not performed at all, would or could cause serious harm to the patient or user,
where harm is defined to include compromised medical care.” (CDRH guidance, 2016).
Although the only category that is defined by CDRH is a critical task, it can be beneficial to
group tasks into a hierarchy to identify critical tasks and non-critical tasks. Task category
should be linked to severity rating, and a definition of how tasks are categorized should be
included in the task analysis. Note: CDER draft guidance defines a critical task differently
than CDRH, specifically “user tasks that, if performed incorrectly or not performed at all,
would or could cause harm to the patient or user, where harm is defined to include compro-
mised medical care.”
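
Because task category should be traceable to the severity rating, the categorization rule itself
can be written down explicitly. The function below is a sketch of one possible rule, using the
three-level severity scale from Table 6.6 and the category labels that appear there ("Critical,"
"Essential"); the thresholds and labels are manufacturer-defined assumptions, not a regulatory
requirement.

```python
def categorize_task(severity: int) -> str:
    """Map a severity rating to a task category.

    Severity scale assumed here: 3 = Significant, 2 = Moderate, 1 = Negligible.
    The thresholds and category names are illustrative; each manufacturer must define
    and justify its own categorization and document it in the task analysis.
    """
    if severity >= 3:
        return "Critical"      # could cause serious harm, incl. compromised medical care
    if severity == 2:
        return "Essential"     # needed for the intended workflow, lower severity of harm
    return "Non-critical"

print(categorize_task(3))  # Critical
print(categorize_task(2))  # Essential
```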
It is key for the entire project team to note that the task analysis is a living document.
Throughout the development, testing, and re-design process, iterations are likely. These
iterations may include added, removed, modified, and/or re-ordered tasks. Additionally,
revisions may involve modifications to the PCA analysis, risk severity, use errors, and task
categorizations.
2.7 Example task analysis with risk and task category delineation
Table 6.6 below presents an example task analysis for an infusion pump including the
sub-tasks, PCA elements, potential use errors, potential harms, severity of harm and task
category.
TABLE 6.6 Task analysis example for an infusion pump.

Columns: # | Device use sub-task | PCA elements | Potential use errors | Potential harm
(outcomes) | Severity of harm | Task category
(Severity scale: 3 = Significant, 2 = Moderate, 1 = Negligible.)

2. Select continuous therapy
PCA elements:
• P: See menu labels
• C: Read and understand menu labels
• A: Select menu label to program continuous therapy
Potential use error: User does not select correct option to program continuous therapy
Potential harm: Inaccurate infusion | Severity: 3 | Task category: Critical

3. Enter prescribed drug concentration
PCA elements:
• P: See editable field
• P: See measurement units
• P: See keypad
• C: Understand where to enter concentration value
• C: Understand to select the correct units of measurement
• A: Select editable field
• A: Use keypad to enter value
Potential use error: User does not enter the prescribed value correctly
Potential harm: Inaccurate drug concentration | Severity: 3 | Task category: Critical

4. Enter prescribed amount to be infused
PCA elements:
• P: See editable field
• P: See measurement units
• P: See keypad
• C: Understand where to enter amount to be infused
• A: Select editable field
• A: Use keypad to enter value
Potential use error: User does not enter the prescribed value correctly
Potential harm: Inaccurate amount to be infused | Severity: 3 | Task category: Critical

5. Enter prescribed rate of infusion
PCA elements:
• P: See editable field
• P: See measurement units
• P: See keypad
• C: Understand where to enter rate
• A: Select editable field
• A: Use keypad to enter value
Potential use error: User does not enter the prescribed value correctly
Potential harm: Inaccurate rate of infusion | Severity: 3 | Task category: Critical

6. Select CONFIRM
PCA elements:
• P: See option to CONFIRM
• C: Read and understand option labels
• A: Select option to CONFIRM
Potential use error: User does not select the correct option to proceed with infusion
Potential harm: Delayed therapy or no therapy delivered | Severity: 3 | Task category: Critical

7. Understand confirmation message
PCA elements:
• P: See confirmation message
• C: Understand confirmation message
Potential use error: User does not recognize or understand confirmation message
Potential harm: Inaccurate infusion | Severity: 3 | Task category: Critical

8. Select RUN to start continuous therapy delivery
PCA elements:
• P: See RUN button
• C: Understand that RUN button will start therapy
• A: Select RUN button
Potential use error: User does not select RUN to start delivering therapy
Potential harm: Delayed therapy or no therapy delivered | Severity: 3 | Task category: Critical

9. Wait for a minimum of 30 s before additional programming
PCA elements:
• P: See Program Pump confirmation screen
• C: Know to wait a minimum of 30 s
• A: Wait 30 s before proceeding with any other programming
Potential use error: User does not wait a minimum of 30 s before proceeding with any additional
programming
Potential harm: System error may appear | Severity: 2 | Task category: Essential
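
When the rows of a table such as Table 6.6 are stored as structured records, the task analysis
can feed later human factors work directly, for example by extracting the critical tasks that a
summative usability evaluation must cover. The sketch below is illustrative; the record keys are
hypothetical.

```python
# A few rows of Table 6.6 as plain records (keys are illustrative)
task_rows = [
    {"id": 2, "sub_task": "Select continuous therapy",
     "severity": 3, "category": "Critical"},
    {"id": 8, "sub_task": "Select RUN to start continuous therapy delivery",
     "severity": 3, "category": "Critical"},
    {"id": 9, "sub_task": "Wait for a minimum of 30 s before additional programming",
     "severity": 2, "category": "Essential"},
]

# Critical tasks become the backbone of the summative evaluation task list.
critical_tasks = [row["sub_task"] for row in task_rows if row["category"] == "Critical"]
print(critical_tasks)
```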
3. Hierarchical task analysis
TABLE 6.7 Hierarchical task analysis for a handheld blood glucose meter.
Hierarchical Task Analysis – Blood Glucose (BG) Meter
0. Take BG reading
Plan 0 – User must complete tasks in the following order to successfully take a BG reading:
1 – 2 – 3 – 4 – 5, or
2 – 1 – 3 – 4 – 5
1. Turn on BG meter
Plan 1 – User must complete tasks in the following order to successfully turn on BG meter:
1.1 – 1.2
1.1 Insert batteries into BG meter
1.2 Press power button
2. Load strip into BG meter
2.1 Insert strip into BG meter slot
3. Load blood sample onto strip
Plan 3 – User must complete tasks in the following order to successfully load blood sample onto strip:
3.1 – 3.2 – 3.3
3.1 Lance finger with a lancing device
3.2 Squeeze finger until a drop of blood appears
3.3 Place drop of blood on sample area of the test strip
4. Analyze blood sample
4.1 Press Test button
5. Interpret results
Plan 5 – User must complete tasks in the following order to successfully interpret results:
5.1 – 5.2
5.1 Read value displayed by BG meter
5.2 Determine if action is necessary
An HTA alone provides less detail than is typically needed for medical device human factors
applications but allows for easy conversion into a full task analysis (including PCA, use error
identification, and determination of severity level).
Using HTA as an analysis method has both advantages and disadvantages. Advantages
include that it is easy to create and implement and is a useful input to other analyses
including task analysis, PCA analysis and use error identification. It provides a high level
overview that can be beneficial as a reference while performing more detailed analyses. There
are also disadvantages to using HTA as the only task analysis because the HTA contains
mostly descriptive rather than analytical information and it might not be suited for complex
systems and tasks (Annett, 2004).
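
The hierarchy and plans of an HTA such as Table 6.7 can likewise be captured in a structured
form, which makes the conversion into a full task analysis (adding PCA elements, use errors, and
severity) largely mechanical. A minimal sketch using the blood glucose meter example; the node
and field names are illustrative.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class HTANode:
    """One goal or operation in a hierarchical task analysis."""
    code: str                 # e.g., "3.1"
    description: str
    plan: str = ""            # ordering rule for the children, if any
    children: List["HTANode"] = field(default_factory=list)

# Blood glucose meter HTA from Table 6.7
take_bg_reading = HTANode(
    code="0", description="Take BG reading",
    plan="1-2-3-4-5 or 2-1-3-4-5",
    children=[
        HTANode("1", "Turn on BG meter", plan="1.1-1.2",
                children=[HTANode("1.1", "Insert batteries into BG meter"),
                          HTANode("1.2", "Press power button")]),
        HTANode("2", "Load strip into BG meter",
                children=[HTANode("2.1", "Insert strip into BG meter slot")]),
        HTANode("3", "Load blood sample onto strip", plan="3.1-3.2-3.3",
                children=[HTANode("3.1", "Lance finger with a lancing device"),
                          HTANode("3.2", "Squeeze finger until a drop of blood appears"),
                          HTANode("3.3", "Place drop of blood on sample area of the test strip")]),
        HTANode("4", "Analyze blood sample",
                children=[HTANode("4.1", "Press Test button")]),
        HTANode("5", "Interpret results", plan="5.1-5.2",
                children=[HTANode("5.1", "Read value displayed by BG meter"),
                          HTANode("5.2", "Determine if action is necessary")]),
    ],
)

def leaf_subtasks(node: HTANode) -> List[str]:
    """Flatten the HTA into the sub-task list that seeds a full task analysis."""
    if not node.children:
        return [f"{node.code} {node.description}"]
    return [item for child in node.children for item in leaf_subtasks(child)]

print(leaf_subtasks(take_bg_reading))
```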
5. Using task analysis for instructional design
Instructional design relies on the task analysis to describe observable performance as well as
knowledge tasks. Additionally, it may require that the task analysis include descriptions of
the tasks in terms of the knowledge, skills and abilities (KSAs) the user must have in order
to achieve successful performance of device use. It is important to note that KSAs are most
helpful when developed by instructional designers or professionals who are familiar with
performance-based training. Otherwise KSAs easily become an exhaustive list of, for
example, required user knowledge that is content heavy and may not necessarily be relevant
to performance descriptions or be useful in the development of instructions. Deficiencies in a
task analysis, such as a lack of clear KSAs, can lead to poorly designed instructions, which
can cause difficulties and use errors, including incorrect performance, lack of understanding,
and inability to find critical information.
Often, when human factors experts are analyzing use errors that have occurred in a usabil-
ity evaluation, the instructional materials may be a potential source of the error. A robust and
thorough task analysis can help pinpoint the actual source of the use error and provide
insight into a potential mitigation.
Additionally, instructional designers may be required to look deeper and in more detail
when developing the task analysis. They may ask questions related to the importance of
timing and the consequences if a task is not performed correctly. These insights impact
how the instructional material is presented including warnings, cautions and format as it
is common for use errors to stem from knowledge comprehension aspects of a task. Much
of the cognitive load on a user comes from the processing of the instructional material.
The task analysis helps establish user performance expectations and clearly communicates
KSAs. Chapter 9 provides more insight into the development and role of instructional design
in overall device usability.
6. Summary
Acknowledgments
Special thanks to Drs. Daryle Gardner-Bonneau, Deborah Billings Broky, and Jessie Huisinga for critical comments
and suggestions. Thank you to Elissa Yancey for editing.
References
Annett, J. (2004). Hierarchical task analysis. In E. Salas, H. W. Hendrick, N. A. Stanton, A. Hedge, & K. Brookhuis
(Eds.), Handbook of human factors and ergonomics methods (pp. 329–337). CRC Press. https://doi.org/10.1201/9780203489925-9.
ANSI/AAMI HE75. (2009). Human factors engineering – Design of medical devices.
ANSI/AAMI 14971. (2010). Medical devices – Application of risk management to medical devices. American National Standard, 2008.
Bisantz, A. M., & Burns, C. M. (2008). Applications of cognitive work analysis. CRC Press.
Fleishman, E. A., & Quaintance, M. K. (1984). Taxonomies of human performance: The description of human tasks. Academic Press.
Kirwan, B., & Ainsworth, L. K. (1992). A guide to task analysis. Taylor & Francis. https://doi.org/10.4324/9780203221457.
Sharit, J. (2006). Handbook of human factors and ergonomics. https://doi.org/10.1002/0470048204.ch27.
Stanton, N. A., Salmon, P. M., et al. (2013). Human factors methods: A practical guide for engineering and design. Ashgate
Pub. Co.
U.S. Department of Health and Human Services Food & Drug Administration Center for Devices and Radiological
Health (2016). Applying human factors and usability engineering to medical devices.