InfoSec Process Action Model (IPAM): Systematically Addressing Individual Security Behavior


Curry, Michael a,c; Marshall, Byron b; Crossler, Robert E. a; Correia, John a

a Carson College of Business, Washington State University
b College of Business, Oregon State University
c Corresponding Author

Recommended citation:
Curry, M., Marshall, B., Crossler, R. E., Correia, J., InfoSec Process Action Model (IPAM): Systematically
Addressing Individual Security Behavior, The DATA BASE for Advances in Information Systems, accepted
– November 14, 2017.

Abstract
While much of the extant InfoSec research relies on single assessment models that predict intent to act,
this article proposes a multi-stage InfoSec Process Action Model (IPAM) that can positively change
individual InfoSec behavior. We believe that this model will allow InfoSec researchers to focus more
directly on the process that leads to action and develop better interventions that address problematic
security behaviors. Building on successful healthcare efforts that resulted in smoking cessation, regular
exercise, and a healthier diet, among others, IPAM is a hybrid predictive process approach to behavioral
InfoSec improvement. IPAM formulates the motivational antecedents of intent as separate from the
volitional drivers of behavior. Singular fear appeals often seen in InfoSec research are replaced by more
nuanced treatments appropriately differentiated to support behavioral change as part of a process;
phase-appropriate measures of self-efficacy are employed to more precisely assess the likelihood that a
participant will act on good intentions; and decisional balance – the assessment of pro and con perceptions –
is monitored over time. These notions better align InfoSec research with both leading security practice
and successful comparators in healthcare. We believe IPAM can both help InfoSec research models
better explain actual behavior and better inform practical security behavior improvement initiatives.

Introduction
Fear appeals are a prominent approach in the information security (InfoSec) literature: they attempt to change security behavior by frightening individuals into performing a protective action in order to prevent a harmful event (Boss et al. 2015). This research is characterized by a singular fear appeal to induce change and focuses on changing users’ intentions rather than actual behavior (Johnston et al. 2015; Johnston and Warkentin 2010), with one known exception (Boss et al. 2015). This is largely because such studies collect data at a single point in time, making effects on changes in actual behavior impossible to measure.

Contrary to the way that IS scholars explore changes in security behaviors with a singular manipulation,
information security professionals recommend that security training be an ongoing process with
multiple interventions over time. Such best practices recommend that organizations design, implement,
and maintain a security awareness and training program. The frequency of exposure to awareness
material is an important consideration. Best practices recommend that security awareness be a
continuous program (Wilson and Hash 2003), for example, conducted frequently, even on a daily basis,
to maintain a high level of security awareness (Security Awareness Program Special Interest Group
2014). For employees who work closely with information technology, exposure to cybersecurity essentials is also recommended (Toth and Klein 2014). Following these suggestions turns security behavior into a habit that individuals perform reflexively, without having to think about it (Vance et al. 2012; Walsh 2017). Thus, cybersecurity training is not a one-time event; instead it should be
continuous and tailored to ensure consistent security behaviors that protect the organization. However,
there is a noticeable gap in the existing literature, which, as discussed above, focuses on a single
manipulation and does not follow the recommendations of security professionals. As such, we draw on
staged theories from the health literature to develop a theoretical foundation that is better able to
follow these recommendations.

There are notable similarities between individual behaviors impacting information system security and
day-to-day behaviors that impact health. For example, just as smoking can impact society, so too may organizations experience negative consequences when individuals are careless with information security, for instance by using insecure mobile devices, storing privileged data in an unauthorized cloud service, or reusing passwords across systems. In view of these similarities, it is not surprising that researchers in both
health and security behavior research share common research methodologies to promote behavior
modification. For example, both health and security behavior studies have applied Protection
Motivation Theory (PMT) (Boss et al. 2015; Maddux and Rogers 1983; Rogers 1975) and the Theory of
Planned Behavior (TPB) (Ajzen 1991; Bulgurcu et al. 2010) to improve behavior. These theories identify
mature drivers of the formulation of intent to act.

While intent is a primary driver of action, healthcare psychology models also formulate stages (or
phases) that both precede and follow the intention phase in a process to achieve a ‘higher’ state of
desired behavior. For example, the Transtheoretical Model of Behavioral Change (TTM) (Prochaska and
Velicer 1997) specifies that behavior change is an “intentional process that unfolds over time and
involves progress through a series of six stages of change” (Prochaska 2013), while the Health Action
Process Approach (HAPA) (Schwarzer 2008) narrowly focuses on only two stages: the formulation of
intent followed by action.

A noticeable gap in the InfoSec research is the examination of stages other than intention in an effort to
move towards compliance. We theorize that many individual security behaviors follow a predictable
process and expect that carefully targeted interventions tailored to specific phases should result in
higher levels of action. Consequently, this paper makes three important theoretical contributions. First,
we propose a formal process approach adapted from healthcare that incorporates two distinct phases of
change that can be employed to study individual security behavior change. Second, we introduce new
constructs to InfoSec research that assess the transition between formulating intent and acting (Ajzen et
al. 2004; Gollwitzer 1999). Third, we provide theoretical justification that a process approach is better
suited to studying InfoSec behavior change since individuals undergo a systematic and predictable
evolution during the transition process.

A process model incorporating motivational and volitional phases can offer greater insights into
behavior change and would be a significant contribution to the InfoSec research literature. Such a model
could be used to assess the effectiveness of various treatments and to identify individuals who may
benefit from additional treatments. It may also prove useful in predicting those who are most likely to
change behavior. Although the literature reflects some interest in investigating process action models
for behavior (Burns et al. 2012), to the best of our knowledge, this is the first attempt in information
systems research to theorize about the application of a process action model for behavioral InfoSec.

The paper is structured as follows. We summarize the major relevant theories in InfoSec research, then
review staged and process models, concluding with a hybrid predictive process model. We then
conceptualize and describe our proposed model and provide an illustrative description of one possible
study procedure. Finally, we conclude by discussing the implications of our model and provide
suggestions for future studies.

Relevant Theories of Behavior and Behavior Change


Understanding why individuals act and what drives their behavior is important for influencing harm
reduction behavior, such as security policy compliance. There are many theories to explain behavior and
behavior change (for reviews, see: Armitage and Conner 2000; Davis et al. 2014; Milne et al. 2006), but
InfoSec research has been dominated by the Theory of Planned Behavior (TPB) (Ajzen 1991) and
Protection Motivation Theory (PMT) (Maddux and Rogers 1983; Rogers 1975), as illustrated in Table 1.
We briefly review both theories because of their relevance to our later formulation, then critique their
application to InfoSec research in light of previously discussed recommendations that individuals need
continuous exposure to security awareness in order to achieve the highest level of security behaviors
and awareness.

Theory of Planned Behavior (TPB)


The Theory of Planned Behavior (TPB) (Ajzen 1991), based on an earlier Theory of Reasoned Action
(TRA) (Ajzen and Fishbein 1975), posits that intent results from attitudinal beliefs toward a behavior, normative beliefs about how valued others expect one to behave, and control beliefs about the ease of behaving. TPB has been employed to examine home computer security (Anderson and Agarwal 2010),
use of anti-spyware software (Dinev and Hu 2007), and security policy compliance (Bulgurcu et al. 2010;
Tejaswini Herath and Rao 2009). It has also been widely used in healthcare to study problem behaviors
such as smoking cessation, alcohol abstinence, exercise, condom use, and more (Albarracin et al. 2001;
Godin and Kok 1996; Sheeran et al. 1999).

Protection Motivation Theory (PMT)


Protection Motivation Theory (PMT) (Maddux and Rogers 1983; Rogers 1975), originally introduced to
explain the effects of fear on health behaviors, has also been widely used in InfoSec (e.g. Crossler and
Belanger 2014; T Herath and Rao 2009; Johnston and Warkentin 2010; Lee and Larsen 2009). Protective
behavior is thought to be motivated by two appraisal processes. The first is a
threat appraisal based on perceptions of severity and vulnerability. If the outcome of the threat
appraisal is sufficiently high, it can stimulate fear. The second is a coping appraisal where potential
responses are evaluated based on self-efficacy, response efficacy, cost, and maladaptive rewards.
Extensive meta-analyses of PMT (Floyd et al. 2000; Milne et al. 2000; Witte and Allen 2000) also attest to
its versatility in understanding how to motivate protective action against various health issues, such as
AIDS, cancer, and heart disease.

Fear Appeals
A prominent stream of InfoSec research seeks to manipulate the threat appraisal process using fear
appeals. PMT posits that an individual’s appraisal of the chance and magnitude of negative outcomes
can produce fear, which influences intent to behave. Fear appeals have been shown to increase compliance with recommended measures, such as using anti-spyware for computer protection (Boss et
al. 2015; Johnston and Warkentin 2010), security policy compliance (Johnston et al. 2015; Vance et al.
2012), and performing data backup (Boss et al. 2015).

Critiquing InfoSec Research


As illustrated in Table 1, the prototypical InfoSec study employs intentions as the research outcome in a
hypothetical scenario (e.g. ‘what are your intentions about making your password unique?’) in order to
estimate the likelihood of action (e.g. actually changing a password to be different from those used on
other systems). Although intent has been shown to be a key driver of actions (Boss et al. 2015; Crossler
et al. 2014), there is a significant problem of ‘inclined abstainers’, that is, those who formulate intent but
subsequently fail to act (Gollwitzer and Sheeran 2006; Sheeran 2002). Meta-analyses suggest that nearly
half of all intenders may ultimately fail to act (Sheeran 2002). The gap is caused, in part, because
individuals tend to estimate intentions in hypothetical conditions differently from how they behave in
real conditions (Ajzen et al. 2004). For example, an individual may intend to comply with a
recommendation to change their password, but circumstances could delay or prevent translating intent
into action. If abstract assessments do not sufficiently account for unanticipated obstacles, they may be
a less reliable indicator of eventual action. For example, it may be easier to continue using a previously used password,
since formulating a new unique password and committing it to memory takes time and cognitive
resources that may be presently unavailable. Consequently, intent can decay over time, making it a
weak predictor of action.

Additionally, studies that employ research models based on TPB and PMT typically rely on a single
observation and cannot capture evolving changes in the attitudes that drive action. Security
professionals advocate continuous exposure to security awareness training, which helps individuals
develop habitual security behaviors, as opposed to behaviors that require them to think about choosing
the proper actions to mitigate security threats (Vance et al. 2012; Walsh 2017). This effort to move
people to a point where they are performing habitual behaviors is consistent with the recommendation
that there are at least two stages of awareness that all individuals involved with information technology
should be exposed to – basic “Security Awareness” and more specialized and targeted “Cybersecurity
Essentials” (Security Awareness Program Special Interest Group 2014). The goal of these multiple stages
of training is to ensure that individuals have the adequate level of knowledge and skills to behave in a
secure manner consistent with the expectations of the organization.

Table 1. Selected Prior Research in Behavioral Information Security

Author (Date) | Description | Principal Theory(s) Applied | Dependent Variable
Dinev & Hu (2007) | Individual adoption of spyware protective technology. | Theory of Planned Behavior, Theory of Reasoned Action, Technology Acceptance Model | Intent
Herath & Rao (2009) | Employee information security policy compliance. | Protection Motivation Theory, Theory of Planned Behavior, Deterrence Theory | Intent
Lee and Larsen (2009) | Small- and medium-sized business executives’ decision to adopt anti-malware software for their organizations. | Protection Motivation Theory | Adoption
Anderson & Agarwal (2010) | Individual home computer security behavioral intentions. | Protection Motivation Theory, Theory of Planned Behavior | Intent
Bulgurcu et al. (2010) | Employee awareness and compliance with information security policy. | Theory of Planned Behavior, Diffusion of Innovation | Intent
Johnston & Warkentin (2010) | Fear appeals to manipulate individual compliance with computer security recommendations. | Protection Motivation Theory, Fear Appeals Model | Intent
Vance et al. (2012) | Habitual IS security compliance strengthens future employee compliance. | Protection Motivation Theory, Habit Theory | Intent
Crossler & Belanger (2014) | Individual compliance with a set of unified security practices. | Protection Motivation Theory | Behavior
Boss et al. (2015) | Fear appeals to manipulate individual use of automated back-ups and anti-malware software. | Protection Motivation Theory | Behavior
Johnston et al. (2015) | Fear appeals to manipulate employee use of strong passwords, encrypting portable data storage devices, and locking workstations. | Protection Motivation Theory, Enhanced Fear Appeal Rhetorical Framework, Deterrence Theory | Intent

Social Cognitive Theory


One way of bridging the intent-to-action gap is to incorporate Social Cognitive Theory (SCT) (Bandura
1977, 1986). SCT emphasizes a triadic reciprocal determinism between behavioral, personal, and
environmental factors (Bandura 1989). Generally speaking, SCT states that individuals formulate goals
patterned after modeled behaviors. In order to attain these desired levels of behavior, they rely on both
their perceived outcomes from attainment (outcome expectancies) and their confidence to achieve the
attainment (self-efficacy). More precisely, Bandura defines self-efficacy as “a belief in one’s capabilities to organize and execute the courses of action required to produce a given attainment” (1997, p. 3), while outcome expectancies are “a person’s estimate that a given behavior will lead to certain outcomes” (1977, p. 193). Self-efficacy is theorized to be influenced by four important sources:
performance accomplishments, vicarious experience, verbal persuasion, and physiological states. The
outcomes of self-efficacy and outcome expectancies are theorized to be behavior choice, persistent
behavior until the goals are attained, anxiety, and task performance.

Self-Efficacy
SCT, and specifically its formulation of self-efficacy, has been shown to be one of the most significant
drivers of intent (Armitage and Conner 2000; Milne et al. 2000) and has been applied to InfoSec research
in various ways, including the theories mentioned above. These studies find consistent support for the
relationship between self-efficacy and intention to perform a number of InfoSec behaviors including use
of anti-spyware software (Lee and Larsen 2009), use of anti-plagiarism software (Lee 2011), and
compliance with security policies (Crossler et al. 2014; T Herath and Rao 2009; Ifinedo 2012). Although
these studies highlight the importance of self-efficacy in InfoSec research, they do not utilize its full
potential since they only capture its state at a single point in time.

Decisional Balance
In addition to different formulations of self-efficacy, outcome expectancies can also be tailored to
specific goals. The traditional approach of InfoSec research models is to measure at a single point in time
an individual’s attitude towards initiating action (e.g. Bulgurcu et al. 2010), or response efficacy (e.g.
Boss et al. 2015). However, TTM studies show support for the assessment of decisional balance of
attitudes captured over time as a weighted scale (pros/cons) of positive outcome expectancies (pros)
and negative outcome expectancies (cons) (Hall and Rossi 2008; Prochaska and Velicer 1997). Individuals
who are able to transition from intent to action typically assess pros higher than cons. Put another way,
for behavior to change, perceived positive expectancies must be higher than the negative expectancies.
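To make the pros/cons notion concrete, here is a minimal sketch in Python of how decisional balance might be computed and tracked across repeated assessment waves. The function name, Likert-scale item scores, and three-wave design are our own illustrative assumptions, not measures specified by TTM or this paper:

```python
def decisional_balance(pro_scores, con_scores):
    """Summarize pro and con outcome expectancies from Likert items (e.g., 1-5).

    Returns mean pros, mean cons, and their difference; a positive balance
    (pros exceeding cons) suggests readiness to move from intent to action."""
    pros = sum(pro_scores) / len(pro_scores)
    cons = sum(con_scores) / len(con_scores)
    return pros, cons, pros - cons

# Hypothetical repeated assessments for one participant over three waves.
waves = [
    ([2, 3, 2], [4, 4, 5]),  # baseline: cons outweigh pros
    ([3, 4, 3], [3, 4, 3]),  # mid-study: roughly balanced
    ([4, 5, 4], [2, 3, 2]),  # final wave: pros outweigh cons
]
for i, (pro, con) in enumerate(waves, start=1):
    p, c, balance = decisional_balance(pro, con)
    print(f"wave {i}: pros={p:.2f} cons={c:.2f} balance={balance:+.2f}")
```

Capturing the same weighted scale at each wave, rather than once, is what lets a study observe the shift from negative to positive balance that TTM associates with progression toward action.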

Adapting InfoSec Research to a Process Approach


We view InfoSec behavior as a process beginning with awareness of a recommended security practice,
to formulating intent, then adopting the recommended action, and culminating in recovery from
noncompliance. Our formulation of changing security behavior also incorporates multiple types of self-efficacy and outcome expectancies in order to capture the progression through various levels of goal attainment, which we describe in more detail below. This transitional formulation of multiple types
of self-efficacy and outcome expectancies is new to the IS field, but different formulations of self-
efficacy were introduced for studying addictive behavior (Diclemente 1986) and later customized based
on health behavior stage attainment (Marlatt et al. 1997). We now review relevant process or stage
models from healthcare used to support our theorized transitional approach to InfoSec behavior
change.

Precaution Adoption Process Model


Healthcare psychology stage models, such as the Precaution Adoption Process Model (PAPM)
(Weinstein et al. 1988; Weinstein and Sandman 1992), seek to identify all the stages involved in
protective behavior change and what leads from one stage to the next. PAPM identifies seven ordered
stages along a path: unaware, unengaged, undecided, decided not to act, decided to act, acting, and
maintenance. PAPM has been applied to different types of health behaviors in a limited number of
studies, which include home Radon testing (Weinstein and Sandman 1992), smoking cessation (Borrelli
et al. 2002), screening for osteoporosis (Blalock et al. 2002), mammograms (Clemow et al. 2000), and
colon cancer (Glanz et al. 2007).

In formulating PAPM, Weinstein et al. (1988) and Weinstein and Sandman (1992) identify four principal
elements and assumptions that define stage models (Weinstein et al. 1998).

1. Stages are theoretical constructs so a clearly defined ideal is needed to categorize each stage.
2. Stages need to be ordered along a common path. However, it is permissible to skip stages, and
some stages may not be applicable (e.g. deciding not to act). Individuals may also go backwards.
3. Individuals face common barriers to change at each stage, which facilitates tailoring
interventions to a stage.
4. Individuals face different barriers to change in different stages. If the same interventions work
for everyone then a stage model may be unnecessary.

Although the stages proposed by PAPM are compelling, to date there have been only a few studies using
this approach, which limits support for its usefulness (Sutton 2005).

Transtheoretical Model of Behavioral Change


The most widely used stage model is the Transtheoretical Model of Behavioral Change (TTM) (Prochaska
2013; Prochaska and Velicer 1997). TTM was initially developed for smoking cessation and has been
adapted to address many other problem behaviors. TTM defines six discrete stages: precontemplation
(not seriously considering a change), contemplation (seriously considering a change), preparation
(making small changes), action (making changes to an appropriate level), maintenance (sustaining the
change over time), and termination (eliminating the risk of relapse).

While PAPM seeks to explain the mental states of each stage, TTM’s main goal is to explain how
behaviors change, and 10 processes of change have been identified that consist of strategies and
techniques individuals use to change their behavior (Prochaska and Velicer 1997). TTM employs self-
efficacy and acknowledges the importance of decisional balance (e.g. pros vs. cons) as key constructs
that explain why behaviors occur. Applying TTM begins by assessing an individual’s current stage and
using interventions to increase self-efficacy and help set goals for progressing to the next stage. A meta-
analysis of 48 TTM studies (Hall and Rossi 2008) suggests that for individuals to progress from
precontemplation to action, pros must increase while cons decrease. Importantly, changes in decisional
balance are not usually apparent at the start, rather it develops as individuals move through a process
designed to influence behavior. TTM, like PAPM, formulates changing behavior as a process divided into
distinct stages. Importantly, TTM also confirms that interventions are more effective when they are
tailored to each stage, as proposed by PAPM. This suggests multiple treatment/assessment points are
more effective than a single point, as exemplified in MIS fear appeals research (Boss et al. 2015;
Johnston et al. 2015; Johnston and Warkentin 2010).

Although TTM provides a useful lens into the process of behavior change, it has been heavily criticized as
flawed and problematic (Sutton 2001, 2005). From the perspective of studying InfoSec behavior, the
most damaging criticism is the lack of formal causal relationships and standardized measures. These limitations make it impossible to accumulate results across studies into a coherent
body of knowledge (Sutton 2001, 2005). In fact, TTM studies of smoking cessation tested a variety of
interventions, but few scientific conclusions could be made, leading some to call for abandoning TTM
entirely (West 2005) and adopting models that are more formally specified. One such promising
candidate is the Health Action Process Approach (HAPA) (Schwarzer 2008), which we briefly introduce
before adapting it to study InfoSec behavior change.

Health Action Process Approach: A Hybrid Predictive Process Model


The HAPA (Schwarzer 2008) is a hybrid model using both a stage layer and a causal layer. Unlike other
stage models, HAPA narrowly focuses on two important stages: “(1) preintentional motivation processes
that lead to a behavioral intention, and (2) postintentional volition processes that lead to the actual
health behavior” (Schwarzer et al. 2011, p. 162). HAPA emphasizes the transition from “wanting” to
“doing," historically a weakness of clinical efforts to move individuals towards “higher” order health
behaviors (Schwarzer and Sniehotta 2003; Sniehotta et al. 2005).
By narrowly focusing on two stages, the model is also able to formulate a causal layer of constructs that
describe each transition. The drivers of intention are risk perceptions, pre-action self-efficacy, and
motivational outcome expectancies. While these are somewhat familiar drivers of intent (as, for
example, in the TPB and PMT), they are different formulations designed to be parsimonious but effective
indicators (Schwarzer 2008). The volitional drivers of behavior are action planning, coping planning,
maintenance self-efficacy, and recovery self-efficacy. All HAPA constructs are described in more detail
below.

One criticism of HAPA is that the use of static variables limits its ability to detect behavior change over
time (Velicer and Prochaska 2008). For example, using multiple assessments of decisional balance
(pros/cons) allows TTM to detect changes in attitude towards a recommended behavior. Incorporating
decisional balance into HAPA may benefit security behavior intervention for two reasons. First, most
failures to comply do not have an immediate negative consequence, which implies attitudes may change
after effects become apparent. Second, pro/con assessments might be considered less intrusive and,
consequently, more transparently answered than explicit behavior questions (e.g. ‘how long is your
password?’), which could provide a roadmap for attack if the survey results were disclosed.

Theory Development
We now introduce the InfoSec Process Action Model (IPAM) for studying InfoSec behavior change, as
depicted in Figure 1, and a set of propositions to guide future experimental studies. This approach is
adapted from, and borrows heavily from, HAPA (Schwarzer 2008) as well as other theories described
above and as depicted in Table 2. IPAM, like HAPA, is a hybrid stage model that conceptualizes InfoSec
behavior improvement in two distinct phases: motivational and volitional. In the motivational phase, the
goal is to formulate intent towards a recommended security behavior. To better address the transition
from intent to action, it also includes an intermediate volitional phase that precedes action (Sniehotta et
al. 2005).

Table 2. Theories Incorporated by the InfoSec Process Action Model

Author (Date) | Theory | Description | Limitation(s) | Elements Adopted by IPAM
Ajzen (1991) | Theory of Planned Behavior | Predicts behavioral intention. | Significant gap between intentions and behavior. | Intent as a predictor of behavior.
Maddux and Rogers (1983) | Protection Motivation Theory | Describes motivation to engage in protective behavior. | Significant gap between intentions and behavior. | Self-efficacy as a key predictor of intent.
Bandura (1977, 1986) | Social Cognitive Theory | Individuals formulate goals patterned after modeled behaviors. | As used by extant InfoSec research, single-time assessments do not fully realize its potential. | Goal setting, multiple formulations of self-efficacy, and outcome expectancies determine behavior.
Weinstein et al. (1988); Weinstein & Sandman (1992) | Precaution Adoption Process Model | Proposes four principal elements and assumptions to define stage models. | Limited direct evidence to support its formulation. | Distinct transitional states of change; similar factors influence transitions between stages.
Prochaska and Velicer (1997); Velicer and Prochaska (2008) | Transtheoretical Model of Behavioral Change | Change is an intentional process that unfolds over time through six stages of change. | No formal causal relationship or standardized measures. | Self-efficacy and decisional balance (e.g. pros vs. cons) as key constructs that explain why behaviors occur.
Schwarzer (2008) | Health Action Process Approach | A hybrid predictive process model focused on two stages: intent and volition. | Limited ability to detect behavior change over time. | Two stages of change: preintentional motivation and postintentional volition. Parsimonious drivers of intent; new volitional constructs as transitional predictors of behavior.

Phase transition models hold the promise of better predictive power. While causation, explanatory
power, and predictive power are distinct notions in theory, they are certainly related. The ability to
identify individuals who are less likely to comply with leading security practices can allow increased
monitoring or intervention. This kind of assessment is implicit in phase transition models that use
operationalized measures of constructs to guide treatment or intervention (Schuz et al. 2007; Schwarzer
2008; Schwarzer et al. 2011; Schwarzer and Sniehotta 2003).
IPAM theorizes that in the motivational phase, intentions are based on self-beliefs, which include risk
perceptions, outcome expectancies, and perceived self-efficacy. It also assumes the transition from
intention to behavior is mediated by volitional factors, such as planning and initiative (Schwarzer 2008),
as well as changes in outcome expectancies. Inspired by TTM’s use of pro/con profiles as a function of change (Prochaska and Velicer 1997), the addition of two new constructs – planning and action outcome expectancies – is the major difference between HAPA and IPAM. As a hybrid predictive process model, IPAM
may prove useful for classifying individuals into different groups and tailoring interventions to help
transition from one stage to the next, while the additional intention-behavior mediators may both
increase the probability of transition and the power to predict that a transition will occur.
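To illustrate how such classification might be operationalized, the following Python sketch assigns participants to phase groups from assessed construct scores so that interventions can be tailored. The field names, threshold, and group labels are hypothetical assumptions for illustration; IPAM does not prescribe specific cutoffs:

```python
from dataclasses import dataclass

@dataclass
class Assessment:
    """Hypothetical per-participant construct scores (e.g., 1-7 Likert means)."""
    intention: float      # security behavior intention
    pre_action_se: float  # pre-action self-efficacy
    planning: float       # action/coping planning
    acted: bool           # whether the target behavior has been observed

def classify_phase(a: Assessment, threshold: float = 4.0) -> str:
    """Assign a participant to a phase group for tailored treatment.

    Non-intenders need motivational treatments (risk perceptions, efficacy,
    outcome expectancies); intenders who have not yet acted need volitional
    treatments (planning support); actors need maintenance/recovery support."""
    if a.acted:
        return "actor"               # volitional phase: sustain the behavior
    if a.intention >= threshold:
        return "inclined intender"   # volitional phase: support planning
    return "non-intender"            # motivational phase: build intent

participants = [
    Assessment(intention=2.5, pre_action_se=3.0, planning=1.0, acted=False),
    Assessment(intention=5.5, pre_action_se=4.5, planning=2.0, acted=False),
    Assessment(intention=6.0, pre_action_se=5.0, planning=5.5, acted=True),
]
for p in participants:
    print(classify_phase(p))
```

Re-running such a classification at each assessment wave would also flag ‘inclined abstainers’ – participants who remain in the intender group without transitioning to action – as candidates for additional volitional treatment.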

[Figure 1 depicts the IPAM diagram. In the motivational phase, pre-action self-efficacy (P1a), risk perceptions (P1b), and motivational outcome expectancies (P1c) drive security behavior intention. In the volitional phase, maintenance and recovery self-efficacy, action and coping planning, and planning and action outcome expectancies, together with decisional balance (propositions P2a–P2h and P3a–P3c), link intention to security action/behavior.]

Figure 1. InfoSec Process Action Model (IPAM) for studying behavior change, adapted from the Health Action Process Approach.

We now describe the major constructs and interactions of this model and introduce the propositions
that can be used to guide future applications of IPAM.

Motivational Phase
The motivational phase is characterized by individuals moving from a desire to realize an abstract notion
(e.g. to feel more secure) to formulating a determined resolve towards a specific goal (e.g. storing
sensitive data only on approved infrastructure). Therefore, the goal of this phase is to formulate an
intent that precedes action (Ajzen and Fishbein 1975). Major contributors to behavioral intention in
IPAM are identical to those used in HAPA: personal evaluation of the chance that a negative outcome
will occur (risk perception), assessments of the costs and benefits (outcome expectancies, pro and con),
and belief in one’s capability (task self-efficacy) (Schwarzer 2008). While TPB and PMT use somewhat
similar constructs, Garcia and Mann (2003) compared TPB, PMT, and HAPA in two studies of diet
avoidance and breast self-examination, and their findings present empirical evidence that HAPA
constructs were better predictors of intent, because they were able to more reliably assess behavior
control and self-efficacy. Operationally, since IPAM involves more constructs and emphasizes the
volitional phase, it uses fewer indicators to assess drivers of intent than other models. The interaction of
these motivational phase constructs is captured in propositions P1a through P1c.

Pre-Action Self-Efficacy: Pre-action self-efficacy occurs in the motivational phase of the process, where individuals do not yet act but develop the goal of attaining improved InfoSec behaviors. Consistent with Bandura's definition
of self-efficacy (1977), we define pre-action self-efficacy as a person’s belief in their capabilities to
organize and execute the courses of action required to improve their InfoSec behaviors. Prior research in
IS has shown self-efficacy to significantly influence behavioral intentions when dealing with technology
(e.g. Chiu and Wang 2008). When applying this to improved InfoSec behavior, the more confident a
person is in their ability to improve their InfoSec behavior, the more likely they will intend to behave
more securely. If an individual lacks this confidence, they will be less likely to start the process toward
improved InfoSec behavior, resulting in the following proposition.

P1a: Security behavior intention is influenced by pre-action self-efficacy.

Risk Perceptions: Risk perceptions specify perceived susceptibility to a threat (Schwarzer and Sniehotta
2003). Although not a strong predictor of intentions (Armitage and Conner 2000; Milne et al. 2000,
2006), the awareness of risks is theorized to promote deliberations about initiating behavior change
(Prentice-Dunn and Rogers 1986; Rogers 1975). Fear appeals, as previously described, make individuals
feel at risk in order to motivate protective behavior. However, fear appeals can backfire and result in
avoidance, denial, or anger (Witte and Allen 2000). Therefore, to set the stage for contemplation and
motivation to change (Schwarzer 1999), assessments of risk are combined with self-efficacy and
outcome expectancies (Bandura 1977, 1997). Risk perceptions are usually assessed in
terms of vulnerability and severity (Schwarzer and Sniehotta 2003). However, because the function is to
promote deliberations instead of fear, as in PMT, operationally, a single assessment of relative
vulnerability (Weinstein 1982) may be sufficient.

P1b: Security behavior intention is influenced by risk perceptions.

Motivational Outcome Expectancies: Perceptions about the consequences of action also influence
intentions to act. Therefore, outcome expectancies play a role in predicting intent (Schwarzer 2008;
Velicer and Prochaska 2008). We define motivational outcome expectancies as a person’s estimate that
improved InfoSec behavior will lead to certain outcomes. Motivational outcome expectancies are
associated with the outcomes from the overall improvement of InfoSec behavior, whereas future
constructs of outcome expectancies refer to specific volitional aspects in the process needed to change
InfoSec behavior. As mentioned earlier, IPAM incorporates decisional balance across the phases of
transitions towards action.

P1c: Security behavior intention is influenced by outcome expectancies.

Security behavior intention is an individual’s perceived likelihood that they will engage in a
recommended security behavior (Ajzen 1991; Ajzen and Fishbein 1975) such as carefully securing mobile
devices that access sensitive information or following strong/unique password recommendations.

Volitional Phase
Volitional phase treatments and assessments assume participants have already formed a behavioral
intent, e.g. to diligently review information access authorizations or systematically employ
strong/unique passwords. Intenders should benefit from treatments that target making plans to
translate their behavior into action (Gollwitzer 1999; Schwarzer 2008). The ability to make plans and
stick to them even in the face of unanticipated obstacles depends, in part, on a broader notion of self-
efficacy (Bandura 1994; Schwarzer and Luszczynska 2008). Decisional balance continues to play an
important role at the volitional phase because initiating action depends on pros being higher than cons
(Hall and Rossi 2008; Prochaska and Velicer 1997). Non-intenders, on the other hand, may benefit from
additional treatments to target outcome expectancies (build pros or reduce cons), risk perceptions
(increase the perceptions of vulnerability), or pre-action self-efficacy (build confidence). Although
differentiated treatments for intenders vs. non-intenders are theoretically desirable, they may be
impractical in organizational settings. IPAM, as a hybrid continuum-stage model, does not require
complete formation of intent prior to entering the volitional phase, and treatments that further boost
intent may be valuable in any case. As noted in previous research (Sheeran 2002), many non-intenders
eventually act. Therefore, a single treatment primarily focused on helping intenders make plans may
also be effective for those for whom intent is not yet clearly formed.

In IPAM, maintenance self-efficacy, recovery self-efficacy, action planning, and coping planning are
proposed as proximal determinants for translating intent into action, while planning outcome
expectancies and action outcome expectancies are functional indicators of change. The interaction of
these volitional phase constructs is captured in propositions P2a through P2h.

Maintenance Self-Efficacy is optimistic self-belief about the overcoming of obstacles and difficulties
when implementing a behavior (Schwarzer 2008). While task self-efficacy at the motivational phase
emphasizes the ability to act, maintenance self-efficacy refocuses on the ability to get oneself to act as
needed when needed. Although one may have intent and pre-action self-efficacy – for example,
intending to review information access needs prior to granting permission – individuals may lack
confidence that they will act on the intent when faced with challenges, such as unfamiliarity with the
information assets and tools to assist in reviewing. Thus, we define maintenance self-efficacy as a
person’s belief in their capabilities to organize and execute the courses of action required to maintain
their improved InfoSec behaviors. While task self-efficacy tends to predict intentions, maintenance self-
efficacy tends to predict behaviors (Schwarzer 2008; Schwarzer et al. 2011). This variant of self-efficacy
reflects changes in context resulting from movement through the change process.

P2a: Security behavior action is influenced by maintenance self-efficacy.

Recovery Self-Efficacy is a person’s belief in their capabilities to organize and execute the courses of
action required to resume improved InfoSec behavior after an interruption. The focus is on lapses and
one’s self-belief about being able to recommit to an action even after a relapse (Schwarzer 2008). Most
people are well aware that it is easy to fall back into problematic habits. This can, but does not have to,
lead to discouragement. Old behaviors are stubbornly persistent, and this variation of self-efficacy
anticipates pauses in the new behavior while addressing the ability to once again recommit and
continue in an action until the new behavior is recovered.

P2b: Security behavior action is influenced by recovery self-efficacy.

The effect of recovery self-efficacy on security behavior is also related to maintenance self-efficacy. That is to say, those who are confident in their ability to maintain a new behavior will tend to also be more confident in their ability to recover from relapses.

P2c. The relationship between maintenance self-efficacy and security behavior action is partially
mediated by recovery self-efficacy.

Whereas intentions are general goals, implementation necessitates making plans to act (Gollwitzer
1999; Parks–Stamm et al. 2007). Planning actions may both demonstrate that an individual will act and
help them build the propensity to act. Two types of planning behaviors are thought to be involved in this
phase. The first is planning to act, while the second is anticipating barriers and planning to overcome
them.

Action Planning: Visualizing oneself engaging in actions is associated with goal attainment (Bandura
1977). Making plans to act, e.g. “Next time I am asked to approve access for a new employee in my
department I will invest the time to carefully review the list of information assets involved,” helps
formulate mental scenarios for translating intent to action (Gollwitzer 1999; Gollwitzer and Sheeran
2006; Parks–Stamm et al. 2007) because it includes specific cues of “when, where and how” (Schwarzer
1999, p. 118), which decreases the likelihood of procrastinating and forgetting intentions.

P2d: Security behavior action is influenced by action planning.

Coping Planning: Anticipating barriers and formulating alternative strategies to overcome them (Schuz
et al. 2007; Schwarzer 2008; Schwarzer et al. 2011) is another form of planning. Assisted by
maintenance self-efficacy and recovery self-efficacy, the ability to cope with unforeseen obstacles aids in
goal attainment (Sniehotta et al. 2005). For example, although an individual makes plans to protect an
account with a strong password, a lost or forgotten password presents an obstacle that may impede
action. Visualizing obstacles and formulating a plan to overcome them, e.g. "I will use the 'forgot my password' link," has been described as mental contrasting (Oettingen and Gollwitzer 2010). Such mental
contrasting aids in overcoming both anticipated and unanticipated obstacles by strengthening an
individual’s commitment and refining their understanding of goal attainability under specific conditions
(Adriaanse et al. 2010).

P2e: Security behavior action is influenced by coping planning.

Planning and Action Outcome Expectancies: TTM studies contrasting individuals who successfully transition from intent to action with those who do not report that, for actors, pros (assessments of benefits) have increased and cons (assessments of negative outcomes) have decreased, while for non-actors cons remain higher than pros (Hall and Rossi 2008; Prochaska and Velicer 1997). Therefore, the purpose of
assessing outcome expectancies is to evaluate intervention effectiveness and progression towards a
decisional profile that would support initiating action. This addition is useful and necessary in the
context of InfoSec since it offers a clear indicator of progress towards action while also suggesting
specific strategies for overcoming resistance to change. For example, outcome expectancies may be
presented as a list of the most common benefits and drawbacks associated with a recommended
security action. Even if motivational, planning, and action outcome assessments use identical measures,
they measure increasingly nuanced beliefs as an individual moves through a process from awareness to
acting. We speculate that patterns of change in decisional balance over time are an amalgam of multiple
psychometric dimensions, such as likelihood, severity, fear, anger, and cost assessments, among others.
For example, an individual responsible for the security of an entire department may be more motivated
for precaution than an employee with limited IT knowledge.

Assessments of outcome expectancies at the planning stage can be useful to identify those who have
already begun to internalize the security ideal, while offering insights into which positive or negative
expectancies should be targeted for additional treatments. Assessing outcome expectancies at the
volitional stage can be used to confirm actors/non-actors and also identify those who may be
susceptible to relapse. These may also be useful to further tailor interventions targeting non-actors or
offering relapse prevention strategies. Put another way, assessing outcome expectancies at multiple
stages provides a clear indicator of changes in decisional profiles that would support initiating action.
The profile of a non-actor in pre-intention typically depicts cons higher than pros, while the profile of an
intender who is initiating action in the volitional stage typically depicts pros higher than cons (Hall and
Rossi 2008; Prochaska and Velicer 1997). Therefore, we define planning outcome expectancies as an
individual’s estimate during the assessment of planning that adopting InfoSec behavior will lead to
certain outcomes. Similarly, action outcome expectancies are an individual’s estimate during the
assessment of actions that adopting InfoSec behavior will lead to certain outcomes.
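To make the pro/con comparison concrete, the decisional balance logic described above can be sketched as a simple scoring routine. This is an illustrative sketch only: the item scores, the 1-5 Likert scale, and the zero threshold are hypothetical assumptions, not part of IPAM's formal specification.

```python
def decisional_balance(pro_scores, con_scores):
    """Compute a decisional balance index: mean(pros) - mean(cons).

    pro_scores / con_scores: hypothetical Likert responses (e.g., 1-5) to
    pro and con items about a recommended security behavior. A positive
    index (pros outweigh cons) is the profile associated with initiating
    action (Hall and Rossi 2008; Prochaska and Velicer 1997).
    """
    pros = sum(pro_scores) / len(pro_scores)
    cons = sum(con_scores) / len(con_scores)
    return pros - cons


def likely_to_act(pro_scores, con_scores):
    """Flag participants whose decisional profile supports initiating action."""
    return decisional_balance(pro_scores, con_scores) > 0


# Example: an intender whose pros now outweigh cons
print(likely_to_act([4, 5, 4], [2, 3, 2]))  # True
```

In practice, researchers would substitute validated pro/con instruments (e.g., items adapted from TTM measures) and might weight items rather than using a simple mean.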

P2f: Security behavior action is influenced by planning outcome expectancies.

P2g: Security behavior action is influenced by action outcome expectancies.

As with self-efficacy, we expect the outcome expectancies constructs in the volitional phase to be closely related, such that the effect of planning outcome expectancies on security behavior operates partly through action outcome expectancies.

P2h: The relationship between planning outcome expectancies and security behavior action is partially
mediated by action outcome expectancies.

Security Behavior Action: Behavior refers to an individual’s actions in using a recommended security
behavior.

Multi-phase Assessments
Our model views InfoSec behavior compliance narrowly by segmenting individuals into three groups:
non-intenders, intenders, and actors. Like HAPA, IPAM does not embrace Weinstein and Sandman's (1988) full definition of a stage model; instead, it conceives of participants as moving through a process
with recognizable stages. Segmenting a process into stages can facilitate improved understanding of
drivers and effects. For example, Liang and Xue (2009) formulate a threat avoidance process, later used
to study safeguards of home computer usage (Liang and Xue 2010), while Cooper and Zmud (1990)
identify pre- and post-implementation phases of IT implementation, and these phases were later used
by Venkatesh and Bala (2008), among others, to study technology acceptance.

While intent is often used to predict action (e.g. TPB and PMT), our third proposition formalizes our
expectation that adding volitional phase constructs to the model will result in greater predictive power
of security actions. If so, IPAM would be an important contribution to closing the intent-action gap. We
formulate propositions P3a through P3c to describe interactions regarding the multi-phase dimension
of IPAM.

P3a: Pre-action self-efficacy will influence maintenance self-efficacy.

P3b: Security behavior intent will influence action planning and coping planning.

P3c: Motivational outcome expectancies will influence planning outcome expectancies.

Table 3. Summary of IPAM Constructs

Motivational Phase: Formulate intent by raising awareness of risks and educating on offsetting behaviors.

Pre-Action Self-Efficacy: The belief in one's capabilities to organize and execute the courses of action required to improve one's InfoSec behaviors.

Risk Perception: One's perceived susceptibility to an InfoSec threat.

Motivational Outcome Expectancies: One's estimate that improved InfoSec behavior will lead to certain outcomes.

Volitional Phase: Encourages volitional engagement in behavior adoption.

Maintenance Self-Efficacy: The belief in one's capabilities to organize and execute the courses of action required to maintain one's improved InfoSec behaviors.

Action Planning: One's making specific plans to act in compliance with a recommended InfoSec behavior.

Coping Planning: One's visualizing barriers to action and formulating strategies to overcome them.

Recovery Self-Efficacy: The belief in one's capabilities to organize and execute the courses of action required to resume improved InfoSec behavior after an interruption.

Planning Outcome Expectancies: One's estimate during the assessment of planning that adopting InfoSec behavior will lead to certain outcomes.

Action Outcome Expectancies: One's estimate during the assessment of actions that adopting InfoSec behavior will lead to certain outcomes.

Example Study Procedure


In this section, we discuss how IPAM can be assessed in an interrupted time-series quasi-experiment
designed to improve a targeted security behavior. One possible experimental design would compare the effectiveness of security training by segmenting participants into different treatment groups and comparing subsequent levels of behavior. Some security measures may be complied with immediately, but individuals take longer to internalize more complex behavioral recommendations and adapt their behavior. Our
multi-phase design is well suited to nudging individuals to comply with more complex and difficult-to-
adopt security behavior. Although other designs are possible, as suggested by Schwarzer (2008), the
approach described here has three interaction points (T1, T2, and T3) with participants. It also fits well
with the recommended best practice of ongoing exposure to security awareness (Security Awareness
Program Special Interest Group 2014; Walsh 2017). Items for assessing the IPAM constructs may be
adapted from HAPA studies (for example, see: Renner and Schwarzer 2005; Schwarzer 2008), while
outcome expectancies may be adapted from TTM (for example, see: Prochaska et al. 1997; Rossi et al.
2001).

In this example, it is assumed that an organization is preparing to adopt more stringent password
requirements, and at the time of implementation they send a notification of this new standard along
with an invitation to participate in T1 “security awareness training”. T1 may consist of a baseline
assessment of current behavior (if required) for later comparison to assess effectiveness. Following the
baseline, various instructive or motivational treatments may be applied. For example, messages that
describe the threat of weak passwords and reusing passwords across systems may target risk
perceptions. Contrasting this with the reduction in risk from strong and unique passwords may
strengthen outcome expectancies. Finally, instructions on better behavior combined with examples and
an opportunity to practice may strengthen pre-action self-efficacy. Following these treatments, the
motivational constructs may be assessed, including pre-action self-efficacy, risk perception, motivational
outcome expectancies, and ending with intent. Alternatively, assessing outcome expectancies before
and after a motivational treatment may offer useful insight into decisional balance.

While some individuals will respond and change behavior immediately, others take longer. A break of
some period is typical (Schwarzer 2008) in order to allow beliefs to change before additional treatments
are administered (e.g. 1-3 weeks). Of course, the time needed to make a change is dependent on many
factors. Although we do not specify a time frame for change, we speculate that the time gaps may allow
a participant to give some consideration to relevant contextual factors but not enough time to
effectively extinguish the impact of the previous interaction. In an ideal staged approach, participant
treatments are initiated as individuals demonstrate that they have reached a certain stage. Our IPAM
approach does not require clear stage definitions or identifiers, but rather is intended for participants to
move through a somewhat predictable process that leads to action. Of course we do not expect that all
participants would successfully transition to action. However, the application of customized
supplemental treatments to target non-intenders may result in higher levels of action. For example, individuals with low risk perceptions may benefit from fear appeals, those with low outcome expectancies may be treated with an emphasis on benefits, and those with low pre-action self-efficacy may be treated with additional training. These supplemental treatments can be administered separately or incorporated into T2 as
needed.
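The routing of non-intenders to supplemental treatments described in this paragraph could be operationalized roughly as follows. The construct keys, the assumed 1-5 scale, and the cutoff of 3.0 are hypothetical placeholders for whatever instruments and norms a study actually adopts.

```python
# Hypothetical cutoff below which a motivational construct counts as "low"
LOW = 3.0  # on an assumed 1-5 Likert scale

# Mapping from low-scoring motivational constructs to supplemental treatments
# (treatment labels paraphrase the text; they are not validated interventions)
SUPPLEMENTAL_TREATMENTS = {
    "risk_perception": "fear appeal",
    "outcome_expectancies": "emphasize benefits of compliance",
    "pre_action_self_efficacy": "additional hands-on training",
}


def supplemental_plan(scores):
    """Return the supplemental treatments suggested by low T1 scores.

    scores: dict mapping construct name -> mean assessment score.
    Constructs missing from the dict are treated as not low.
    """
    return [treatment
            for construct, treatment in SUPPLEMENTAL_TREATMENTS.items()
            if scores.get(construct, LOW) < LOW]


# A non-intender with low risk perception and low pre-action self-efficacy
plan = supplemental_plan({
    "risk_perception": 2.1,
    "outcome_expectancies": 3.8,
    "pre_action_self_efficacy": 2.5,
})
print(plan)  # ['fear appeal', 'additional hands-on training']
```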

In T2, the assumption is that intent has been formed. Participants in T2 may receive treatments to
strengthen their volition to act. For example, encouraging participants to confront obstacles and
respond with effort and persistence may strengthen maintenance self-efficacy. Messages that
encourage making plans to begin changing passwords for one’s most important accounts may promote
action planning. Describing common barriers (e.g. forgetting one's password) and how to overcome them (e.g. the 'forgot password' link) may facilitate anticipating and overcoming barriers and promote coping
planning. Finally, reminder messages of the risk and how the new behavior can offset it may continue to
strengthen outcome expectancies. These treatments can then be followed by assessments of volitional
constructs, for example, maintenance self-efficacy, action planning, coping planning, and planning
outcome expectancies.

At this point, more time is allowed to pass in order for participants to act before a final T3 interaction. At
T3, the assumption is that a majority of participants have initiated the recommended behavior, which is
measured with a post assessment. Next, an assessment of action outcome expectancies may be used to
help confirm self-reported behavior, since participants who adopt a recommended behavior should
assess positive outcome expectancies higher than negative outcome expectancies. Despite successfully
initiating the behavior, relapses are a distinct possibility. Therefore, participants may also benefit from a
final treatment to address and strengthen recovery self-efficacy, for example, encouraging a
recommitment to improved behavior even if they have relapsed (e.g. staying committed to strong
unique passwords, after reusing another password when creating a new account). Following the
treatment, an assessment of recovery self-efficacy may help indicate relapse potential.

IPAM does not specify additional interaction points, but an organization could target those who failed to
initiate the behavior or had higher relapse potential. For example, the next round of “security awareness
training” emphasizing a different recommended behavior could also include supplemental treatments
and assessments from the previous training.
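As a summary of the procedure above, the three interaction points might be encoded as a simple protocol table that an administration script could walk. All labels are illustrative shorthand for the treatments and assessments described in the text, not validated instruments or required components.

```python
# Hypothetical encoding of the example IPAM study protocol (T1-T3)
PROTOCOL = [
    {
        "timepoint": "T1",  # motivational phase
        "treatments": ["risk messages", "benefit contrast", "practice exercise"],
        "assessments": ["pre_action_self_efficacy", "risk_perception",
                        "motivational_outcome_expectancies", "intention"],
    },
    {
        "timepoint": "T2",  # volitional phase, after a 1-3 week break
        "treatments": ["persistence encouragement", "planning prompts",
                       "barrier and coping examples", "risk reminders"],
        "assessments": ["maintenance_self_efficacy", "action_planning",
                        "coping_planning", "planning_outcome_expectancies"],
    },
    {
        "timepoint": "T3",  # post-action follow-up
        "treatments": ["recommitment / recovery self-efficacy message"],
        "assessments": ["behavior", "action_outcome_expectancies",
                        "recovery_self_efficacy"],
    },
]


def assessments_at(timepoint):
    """Look up the constructs assessed at a given interaction point."""
    for phase in PROTOCOL:
        if phase["timepoint"] == timepoint:
            return phase["assessments"]
    raise KeyError(timepoint)


print(assessments_at("T3"))
```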

Discussion
We have presented the theoretical support for a multi-stage process with treatments and assessments
tailored to support an expected InfoSec Behavior progression from motivation through volition to
action, which would result in a) improved behavior, b) additional insight into the drivers of action, and c)
insight to guide appropriate treatments. We proposed the InfoSec Process Action Model (IPAM) adapted
from the HAPA model from the healthcare domain.

Building on insights from the healthcare field, we suggest that better interventions for improving behavioral InfoSec can result in more secure information systems. While PMT and TPB identify mature
drivers of the motivational phase leading to the formulation of intent, we propose a parsimonious set of
drivers, which may be comparable (Garcia and Mann 2003).

Intentions have been the principal focus of behavioral efforts in InfoSec research (Crossler et al. 2013),
and much less is known about the volitional phase where intention is translated into action (Ajzen et al.
2004). However, progress in health behavior change (Schwarzer 2008; Sniehotta et al. 2005) suggests
that planning and more nuanced measures of self-efficacy may help address this gap by assessing
indicators of volition and by initiating actions that exercise volitional thinking. Adopting insights from the
HAPA model, IPAM assesses self-efficacy differently as individuals are transitioning from intent to action.
Because of the many notable similarities between poor health behaviors and poor security behaviors, we believe that focusing on the drivers of volition can help close the gap of translating intent into action (Gollwitzer and
Sheeran 2006; Sheeran 2002).

While IPAM draws heavily from HAPA (Schwarzer 2008), IPAM additionally emphasizes decisional
balance. While HAPA studies have occasionally employed decisional balance elements, they are not part
of its formal specification as used in other efforts. We include this component in IPAM because it seems
especially valuable in determining InfoSec behaviors. Pro/con assessments are less likely to implicitly
compromise security by documenting security weaknesses; may be more honestly answered given that
direct assessments of compliance with security guidelines may be thought to lead to punitive measures;
and may be more powerfully indicative given the indirect or time-delayed correlation between weak
security behavior and negative organizational consequences. Simply put, training focused on different
skills to improve behavior requires time for mastery. When assessments of action immediately follow
training, participants may not yet have had sufficient time to reassess their perceptions, and studies of decisional balance have shown that pros must be higher than cons in order for action to be initiated. Thus,
although the phased assessment is more complex to administer, we contend that for certain
problematic behaviors it will be more effective.

Theoretical Implications
Bridging the gap from a theoretical model of behavior to pragmatic intervention has been posed as a
challenge in both healthcare and MIS research. In 2001, the Institute of Medicine (IOM) (Richardson and
Berwick 2001) identified inadequate translation of theoretical research into pragmatic interventions as a
major contributor to the gap in healthcare. MIS faces similar issues, leading to calls for more
interventions that improve technology acceptance (Venkatesh 2006; Venkatesh and Bala 2008). In
InfoSec research this call has begun to be answered through the research using fear appeals (Boss et al.
2015; Johnston et al. 2015; Johnston and Warkentin 2010).

IPAM makes three important theoretical contributions. First, we advance a formal process approach
suitable for studying some of the thorniest InfoSec behavior problems. IPAM’s origins in healthcare
behavior change offer extensive evidence of its ability to promote improvements in stubborn behaviors.
Second, we introduce new constructs to InfoSec research capable of assessing whether individuals are
successfully translating their good intentions into actions. We speculate that these constructs may also
be suitable to many other IS domains where intentions have been employed, such as technology
acceptance, for example. Third, our adaptation to the InfoSec domain includes justification that a process
approach supports developing habitual behaviors. Habitual security compliance is linked to improved
security behavior (Vance et al. 2012), so using a process model helps promote better security. The
example experiment describes one application to evaluate the effectiveness of such interventions.

Managerial Implications
The multi-phased IPAM approach is better able to satisfy the recommendations of security professionals
to offer continuous exposure to security awareness training and help develop habitual security
behaviors. We encourage organizations to systematically introduce new security recommendations in
the phased approach described above. IPAM is ideally suited to addressing problematic behaviors that are difficult to assess directly. Some examples of potential issues that can be addressed include the following.

Do users reuse passwords for organizational systems on external sites?
Do users copy sensitive organizational data to insecure storage?
Do administrators properly balance risk in authorizing and revoking access to organizational information resources?

Answers to questions such as these have a substantial impact on organizational risk. Users may be
unlikely to self-report risky behavior, and technical solutions that assess or mitigate such behavior are
likely to be impractical or costly. Abstract assessments of pros and cons can provide powerfully
predictive indicators of behavior. This is promising because an organization can easily assess employees’
pro/con profile, as well as maintenance self-efficacy and planning formulation, as part of routine
security compliance training. Such measures may play a useful role in assessing training effectiveness
and identifying individuals in need of treatments, as well as potential deviants.

Conclusion, Limitations, and Future Work


One limitation of IPAM is that it is currently a theoretical model untested in an organizational setting.
We expect that future work could 1) provide operationalized assessments of constructs that can be
evaluated, 2) evaluate its ability to predict those who may be less likely to comply with a recommended
security behavior, 3) evaluate if the volitional constructs are useful to help close the gap between
intention and behavior by comparing IPAM directly against other models, and 4) evaluate the
effectiveness of decisional balance as an indicator of a profile supporting the initiation of action.

Although we have focused heavily on the volitional phase, a limitation of IPAM is that it may be too narrowly focused on a broad process of behavior change. For example, the rich history on drivers of
intent is simplified, while in some circumstances assessing fear, anger, maladaptive rewards, and other
indicators may be valuable. Further, as suggested by TTM, additional stages of behavior change may be preferable for changing some behaviors (Velicer and Prochaska 2008), and if so, this may necessitate a
broader theory of the stages of InfoSec behavior change. For example, certain security procedures, like
problematic health behaviors, are stubborn and difficult to permanently change. To the best of our
knowledge there is scant work on security behavior relapse prevention and recovery, which we believe
may be worthy of additional examination.

One final limitation of IPAM is that it does not address the framing of security recommendations.
Insights from threat avoidance suggest that individuals behave differently when faced with comparable
choices regarding loss or gain (Kahneman and Tversky 1979; Liang and Xue 2009). A process
approach that integrates training and assessment inevitably frames security behaviors. Participants are
primed to consider security recommendations as risk avoidance strategies. Priming has been shown to
significantly influence respondents (Marshall et al. 2015), so the inclusion of training or other
interventions may increase the indicative power of survey items in predicting behavior. Further, the
survey items may also implicitly frame the respondents' thinking as they interact with the interventions.
The framing of questions and training may, therefore, significantly influence both indicative power and
behavior change. This notion is worthy of further exploration.

In conclusion, we are optimistic about the promise of using a process approach to change behavior and
the suitability of IPAM as a hybrid predictive process model. IPAM has the potential to begin a rich
stream of research in understanding the factors that lead to improved InfoSec behavioral decisions.

References
Adriaanse, M., Oettingen, G., Gollwitzer, P. M., Hennes, E. P., Ridder, D. T. D. De, and Wit, J. B. F. De.
2010. “When planning is not enough: Fighting unhealthy snacking habits by mental contrasting
with implementation intentions," European Journal of Social Psychology, (40), pp. 625–634 (doi: 10.1002/ejsp).
Ajzen, I. 1991. “The Theory of Planned Behavior,” Organizational Behavior and Human Decision Processes, (50), pp. 179–211 (doi: 10.1016/0749-5978(91)90020-T).
Ajzen, I., Brown, T. C., and Carvajal, F. 2004. “Explaining the discrepancy between intentions and actions:
The case of hypothetical bias in contingent valuation,” Personality and social psychology (available
at http://psp.sagepub.com/content/30/9/1108.short).
Ajzen, I., and Fishbein, M. 1975. “Belief, Attitude, Intention and Behavior: An Introduction to Theory and Research,” Reading, MA: Addison-Wesley.
Albarracin, D., Johnson, B., and Fishbein, M. 2001. “Theories of reasoned action and planned behavior as
models of condom use: a meta-analysis.,” Psychological (available at
http://psycnet.apa.org/psycinfo/2001-16276-007).
Anderson, C. L., and Agarwal, R. 2010. “Practicing safe computing: a multimedia empirical examination
of home computer user security behavioral intentions,” MIS Quarterly, (34:3), pp. 613–643.
Armitage, C. J., and Conner, M. 2000. “Social cognition models and health behaviour: A structured
review,” Psychology and Health, (15:February), pp. 173–189 (doi: 10.1080/08870440008400299).
Bandura, A. 1977. “Self-efficacy: toward a unifying theory of behavioral change.,” Psychological review,
(84:2), pp. 191–215 (available at http://psycnet.apa.org/journals/rev/84/2/191/).
Bandura, A. 1986. Social foundations of thought and action: A social cognitive theory. (available at
http://psycnet.apa.org/psycinfo/1985-98423-000/).
Bandura, A. 1989. “Human agency in social cognitive theory.,” American psychologist (available at
http://psycnet.apa.org/psycinfo/1990-01275-001).
Bandura, A. 1994. Self‐efficacy (available at
http://onlinelibrary.wiley.com/doi/10.1002/9780470479216.corpsy0836/full).
Bandura, A. 1997. “Theoretical perspectives,” in Self-efficacy: The exercise of control, (Vol. 50), p. 604
(doi: 10.1177/0957154X9400501708).
Bandura, A. 1997. “Self-efficacy: The Exercise of Control,” Encyclopedia of human behavior, (4), pp. 71–
81 (doi: 10.1002/9780470479216.corpsy0836).
Blalock, S., DeVellis, B., and Patterson, C. 2002. “Effects of an osteoporosis prevention program
incorporating tailored educational materials,” American Journal of Health Promotion (available at
http://www.ajhpcontents.org/doi/abs/10.4278/0890-1171-16.3.146).
Borrelli, B., McQuaid, E. L., Becker, B., Hammond, K., Papandonatos, G., Fritz, G., and Abrams, D. 2002.
“Motivating parents of kids with asthma to quit smoking: the PAQS project,” Health Education
Research, (17:5), Oxford University Press, pp. 659–669 (doi: 10.1093/her/17.5.659).
Boss, S. R., Galletta, D. F., Lowry, P. B., Moody, G. D., and Polak, P. 2015. “What Do Systems Users Have
to Fear? Using Fear Appeals To Engender Threats and Fear that Motivate Protective Security
Behaviours,” MIS Quarterly, (39:4), pp. 837–864 (doi: 10.25300/MISQ/2015/39.4.5).
Bulgurcu, B., Cavusoglu, H., and Benbasat, I. 2010. “Information security policy compliance: an
empirical study of rationality-based beliefs and information security awareness,” MIS Quarterly,
(34:3), pp. 523–548.
Burns, M., Durcikova, A., and Jenkins, J. 2012. “On Not Falling for Phish: Examining Multiple Stages of
Protective Behavior of Information System End-Users,” ICIS 2012 Proceedings (available at
http://aisel.aisnet.org/icis2012/proceedings/ResearchInProgress/78).
Chiu, C., and Wang, E. 2008. “Understanding Web-based learning continuance intention: The role of
subjective task value,” Information & Management (available at
http://www.sciencedirect.com/science/article/pii/S0378720608000293).
Clemow, L., Costanza, M., and Haddad, W. 2000. “Underutilizers of mammography screening today:
characteristics of women planning, undecided about, and not planning a mammogram,” Annals of
Behavioral Medicine (available at http://link.springer.com/article/10.1007/BF02895171).
Cooper, R., and Zmud, R. 1990. “Information technology implementation research: a technological
diffusion approach,” Management Science (available at
http://pubsonline.informs.org/doi/abs/10.1287/mnsc.36.2.123).
Crossler, R., and Belanger, F. 2014. “An Extended Perspective on Individual Security Behaviors:
Protection Motivation Theory and a Unified Security Practices (USP) Instrument,” The DATA BASE
for Advances in Information Systems, (45:4), pp. 51–71 (doi: 10.1145/2691517.2691521).
Crossler, R. E., Johnston, A. C., Lowry, P. B., Hu, Q., Warkentin, M., and Baskerville, R. 2013. “Future
directions for behavioral information security research,” Computers & Security, (32), Elsevier Ltd,
pp. 90–101 (doi: 10.1016/j.cose.2012.09.010).
Crossler, R. E., Long, J. H., Loraas, T. M., and Trinkle, B. S. 2014. “Understanding Compliance with Bring
Your Own Device Policies Utilizing Protection Motivation Theory Bridging the Intention-Behavior
Gap.,” Journal of Information Systems, (28:1), pp. 209–226 (doi: 10.2308/isys-50704).
Davis, R., Campbell, R., Hildon, Z., Hobbs, L., and Michie, S. 2014. “Theories of behaviour and behaviour
change across the social and behavioural sciences: a scoping review,” Health Psychology Review
(doi: 10.1080/17437199.2014.941722).
DiClemente, C. C. 1986. “Self-Efficacy and the Addictive Behaviors,” Journal of Social and Clinical
Psychology, (4:3), pp. 302–315 (doi: 10.1521/jscp.1986.4.3.302).
Dinev, T., and Hu, Q. 2007. “The Centrality of awareness in the formation of user behavioral intention
toward protective information technologies,” Journal of the Association for Information Systems,
(8:7), pp. 386–408.
Floyd, D. L., Prentice-Dunn, S., and Rogers, R. W. 2000. “A meta-analysis of research on protection
motivation theory,” Journal of Applied Social Psychology, (30:2), pp. 407–429.
Garcia, K., and Mann, T. 2003. “From ‘I Wish’ to ‘I Will’: social-cognitive predictors of behavioral
intentions.,” Journal of health psychology, (8:3), pp. 347–360 (doi: 10.1177/13591053030083005).
Glanz, K., Steffen, A., and Taglialatela, L. 2007. “Effects of colon cancer risk counseling for first-degree
relatives,” Cancer Epidemiology, Biomarkers & Prevention (available at
http://cebp.aacrjournals.org/content/16/7/1485.short).
Godin, G., and Kok, G. 1996. “The theory of planned behavior: a review of its applications to health-
related behaviors,” American Journal of Health Promotion (available at
http://www.ajhpcontents.org/doi/abs/10.4278/0890-1171-11.2.87).
Gollwitzer, P. 1999. “Implementation intentions: strong effects of simple plans,” American Psychologist
(available at http://psycnet.apa.org/journals/amp/54/7/493/).
Gollwitzer, P., and Sheeran, P. 2006. “Implementation intentions and goal achievement: A meta‐analysis
of effects and processes,” Advances in Experimental Social Psychology (available at
http://www.sciencedirect.com/science/article/pii/S0065260106380021).
Hall, K., and Rossi, J. 2008. “Meta-analytic examination of the strong and weak principles across 48
health behaviors,” Preventive Medicine (available at
http://www.sciencedirect.com/science/article/pii/S0091743507004860).
Herath, T., and Rao, H. R. 2009. “Protection motivation and deterrence: a framework for security policy
compliance in organisations,” European Journal of Information Systems, (18:2), pp. 106–125 (doi:
10.1057/ejis.2009.6).
Ifinedo, P. 2012. “Understanding information systems security policy compliance: An integration of the
theory of planned behavior and the protection motivation theory,” Computers and Security, (31:1),
Elsevier Ltd, pp. 83–95 (doi: 10.1016/j.cose.2011.10.007).
Johnston, A. C., and Warkentin, M. 2010. “Fear Appeals and Information Security Behaviors: an Empirical
Study,” MIS Quarterly, (34:3), pp. 549–566.
Johnston, A., Warkentin, M., and Siponen, M. 2015. “An Enhanced Fear Appeal Rhetorical Framework:
Leveraging Threats to the Human Asset Through Sanctioning Rhetoric,” MIS Quarterly (available at
http://aisel.aisnet.org/cgi/viewcontent.cgi?article=3224&context=misq).
Kahneman, D., and Tversky, A. 1979. “Prospect theory: An analysis of decision under risk,”
Econometrica, (47:2), pp. 263–291.
Lee, Y. 2011. “Understanding anti-plagiarism software adoption: An extended protection motivation
theory perspective,” Decision Support Systems (available at
http://www.sciencedirect.com/science/article/pii/S0167923610001156).
Lee, Y., and Larsen, K. R. 2009. “Threat or coping appraisal: determinants of SMB executives’ decision to
adopt anti-malware software,” European Journal of Information Systems, (18:2), pp. 177–187 (doi:
10.1057/ejis.2009.11).
Liang, H., and Xue, Y. 2009. “Avoidance of Information Technology Threats: a Theoretical Perspective,”
MIS Quarterly, (33:1), pp. 71–90 (available at http://www.jstor.org/stable/20650279).
Liang, H., and Xue, Y. 2010. “Understanding security behaviors in personal computer usage: A threat
avoidance perspective,” Journal of the Association for Information Systems, (11:7), pp. 394–413.
Maddux, J. E., and Rogers, R. W. 1983. “Protection motivation and self-efficacy: A revised theory of fear
appeals and attitude change,” Journal of Experimental Social Psychology, (19:5), pp. 469–479 (doi:
10.1016/0022-1031(83)90023-9).
Marlatt, G., Baer, J., and Quigley, L. 1997. “Self-efficacy and addictive behavior,” in Self-efficacy in
Changing Societies (available at https://books.google.com/books?
hl=en&lr=&id=JbJnOAoLMNEC&oi=fnd&pg=PA289&dq=Marlatt+et+al+1997+-+Self-
efficacy+and+addictive+behavior+&ots=mVc64wELf_&sig=AuLj5VDMN9A0zpQwgS4qUqMhBqI).
Marshall, B., Curry, M., and Kawalek, P. 2015. “Improving IT Assessment with IT Artifact Affordance
Perception Priming,” International Journal of Accounting Information Systems, (19), pp. 17–28 (doi:
10.1016/j.accinf.2015.11.005).
Milne, S., Sheeran, P., and Orbell, S. 2000. “Prediction and Intervention in Health-Related Behavior: A
Meta-Analytic Review of Protection Motivation Theory,” Journal of Applied Social Psychology,
(30:1), pp. 106–143 (doi: 10.1111/j.1559-1816.2000.tb02308.x).
Oettingen, G., and Gollwitzer, P. 2010. “Strategies of Setting and Implementing Goals,” in Social
Psychological Foundations of Clinical Psychology, J. E. Maddux and J. P. Tangney (eds.), New York: The
Guilford Press (available at
https://www.researchgate.net/profile/Peter_Gollwitzer2/publication/50389011_Strategies_of_se
tting_and_implementing_goals_Mental_contrasting_and_implementation_intentions/links/54e12f
960cf296663791f22e.pdf).
Parks–Stamm, E. J., Gollwitzer, P. M., and Oettingen, G. 2007. “Action Control by Implementation
Intentions: Effective Cue Detection and Efficient Response Initiation,” Social Cognition, (25:2), pp.
248–266 (doi: 10.1521/soco.2007.25.2.248).
Prentice-Dunn, S., and Rogers, R. W. 1986. “Protection motivation theory and preventive health: Beyond
the health belief model,” Health Education Research, (1:3), pp. 153–161.
Prochaska, J. O. J. 2013. “Transtheoretical model of behavior change,” Encyclopedia of behavioral
medicine, (251:1700), pp. 1997–2000 (doi: 10.1007/978-1-4419-1005-9).
Prochaska, J. O., and Velicer, W. F. 1997. “The Transtheoretical Model of Health Behavior Change,”
American Journal of Health Promotion, (12:1), pp. 38–48 (doi: 10.4278/0890-1171-12.1.38).
Renner, B., and Schwarzer, R. 2005. “Risk and health behaviors: Documentation of the scales of the
research project ‘Risk Appraisal Consequences in Korea’ (RACK).”
Richardson, W., and Berwick, D. 2001. “Crossing the Quality Chasm: A New Health System for the 21st
Century,” Institute of Medicine (available at
http://nationalacademies.org/hmd/reports/2001/crossing-the-quality-chasm-a-new-health-
system-for-the-21st-century.aspx).
Rogers, R. W. 1975. “A protection motivation theory of fear appeals and attitude change,” The Journal
of Psychology, (91:1), pp. 93–114.
Rossi, S. R., Greene, G. W., Rossi, J. S., Plummer, B. A., Benisovich, S. V., Keller, S., Velicer, W. F., Redding,
C. A., Prochaska, J. O., Pallonen, U. E., and Meier, K. S. 2001. “Validation of decisional balance and
situational temptations measures for dietary fat reduction in a large school-based population of
adolescents,” Eating Behaviors, (2:1), pp. 1–8 (doi: 10.1016/S1471-0153(00)00019-2).
Schuz, B., Sniehotta, F. F., and Schwarzer, R. 2007. “Stage-specific effects of an action control
intervention on dental flossing,” Health Education Research, (22:3), pp. 332–341 (doi:
10.1093/her/cyl084).
Schwarzer, R. 1999. “Self-regulatory Processes in the Adoption and Maintenance of Health Behaviors,”
Journal of Health Psychology, (4:2), pp. 115–127 (doi: 10.1177/135910539900400208).
Schwarzer, R. 2008. “Modeling health behavior change: How to predict and modify the adoption and
maintenance of health behaviors,” Applied Psychology (available at
http://onlinelibrary.wiley.com/doi/10.1111/j.1464-0597.2007.00325.x/full).
Schwarzer, R., Lippke, S., and Luszczynska, A. 2011. “Mechanisms of health behavior change in persons
with chronic illness or disability: the Health Action Process Approach (HAPA),” Rehabilitation
Psychology (available at http://psycnet.apa.org/journals/rep/56/3/161/).
Schwarzer, R., and Luszczynska, A. 2008. “How to overcome health-compromising behaviors: The health
action process approach,” European Psychologist (available at
http://econtent.hogrefe.com/doi/abs/10.1027/1016-9040.13.2.141).
Schwarzer, R., and Sniehotta, F. 2003. “On the assessment and analysis of variables in the health action
process approach: Conducting an investigation,” Berlin: Freie Universität Berlin (available at
http://userpage.fu-berlin.de/~gesund/hapa_web.pdf).
Security Awareness Program Special Interest Group. 2014. “Best Practices for Implementing a Security
Awareness Program: PCI Data Security Standard (PCI DSS),” Security Standards Council (available at
https://www.pcisecuritystandards.org/documents/PCI_DSS_V1.0_Best_Practices_for_Implementin
g_Security_Awareness_Program.pdf).
Sheeran, P. 2002. “Intention-Behaviour Relations: A Conceptual and Empirical Review,” European
Review of Social Psychology, (12:1), pp. 1–26 (doi: 10.1002/0470013478.ch1).
Sheeran, P., Abraham, C., and Orbell, S. 1999. “Psychosocial correlates of heterosexual condom use: a
meta-analysis,” Psychological Bulletin (available at
http://psycnet.apa.org/journals/bul/125/1/90/).
Sniehotta, F. F. F., Scholz, U., and Schwarzer, R. 2005. “Bridging the intention–behaviour gap: Planning,
self-efficacy, and action control in the adoption and maintenance of physical exercise,” Psychology
& Health, (20:2), pp. 143–160 (doi: 10.1080/08870440512331317670).
Sutton, S. 2001. “Back to the drawing board? A review of applications of the transtheoretical model to
substance use,” Addiction, (96:1), pp. 175–186 (doi: 10.1080/09652140020017049).
Sutton, S. 2005. “Stage Theories of Health Behavior,” in Predicting Health Behavior, M. Conner and P.
Norman (eds.), (2nd ed.), Open University Press, p. 402.
Toth, P., and Klein, P. 2014. “A Role-Based Model for Federal Information Technology/ Cybersecurity
Training,” National Institute of Standards and Technology Special Publication 800-16 Revision 1 (3rd
Draft) (available at http://csrc.nist.gov/publications/drafts/800-16-rev1/sp800_16_rev1_3rd-
draft.pdf).
Vance, A., Siponen, M., and Pahnila, S. 2012. “Motivating IS security compliance: insights from habit and
protection motivation theory,” Information & Management (available at
http://www.sciencedirect.com/science/article/pii/S0378720612000328).
Velicer, W. F., and Prochaska, J. O. 2008. “Stage and non-stage theories of behavior and behavior
change: A comment on Schwarzer,” Applied Psychology, pp. 75–83 (doi: 10.1111/j.1464-
0597.2007.00327.x).
Venkatesh, V. 2006. “Where to go from here? Thoughts on future directions for research on individual-
level technology adoption with a focus on decision making,” Decision Sciences, (37:4), pp. 497–518
(doi: 10.1111/j.1540-5414.2006.00136.x).
Venkatesh, V., and Bala, H. 2008. “Technology acceptance model 3 and a research agenda on
interventions,” Decision Sciences, (39:2), pp. 273–315 (doi: 10.1111/j.1540-5915.2008.00192.x).
Walsh, K. 2017. “Security Awareness: 5 Ways to Educate Employees,” Reciprocity (available at
https://reciprocitylabs.com/security-awareness-5-ways-to-educate-employees/; retrieved March
26, 2017).
Weinstein, N. D. 1982. “Unrealistic optimism about susceptibility to health problems,” Journal of
Behavioral Medicine, (5:4), pp. 441–460 (doi: 10.1007/BF00846146).
Weinstein, N. D., Sandman, P. M., and Blalock, S. J. 1988. “The precaution adoption process,” Health
Psychology (available at http://psycnet.apa.org/journals/hea/7/4/355/).
Weinstein, N., Rothman, A., and Sutton, S. 1998. “Stage theories of health behavior: conceptual and
methodological issues,” Health Psychology (available at
http://psycnet.apa.org/journals/hea/17/3/290/).
Weinstein, N., and Sandman, P. 1992. “A model of the precaution adoption process: evidence from
home radon testing,” Health Psychology (available at
http://psycnet.apa.org/journals/hea/11/3/170/).
West, R. 2005. “Time for a change: putting the Transtheoretical (Stages of Change) Model to rest,”
Addiction, (100:8), Blackwell Science Ltd, pp. 1036–1039 (doi: 10.1111/j.1360-0443.2005.01139.x).
Wilson, M., and Hash, J. 2003. “Building an Information Technology Security Awareness and Training
Program,” National Institute of Standards and Technology (doi: 10.6028/NIST.SP.800-50).
Witte, K., and Allen, M. 2000. “A meta-analysis of fear appeals: Implications for effective public health
campaigns,” Health Education & Behavior (available at
http://heb.sagepub.com/content/27/5/591.short).
