
Journal of Organizational Behavior Management

Journal homepage: https://www.tandfonline.com/loi/worg20

Assessment of Employee Performance

Nicole Gravina, Jessica Nastasi & John Austin

To cite this article: Nicole Gravina, Jessica Nastasi & John Austin (2021): Assessment
of Employee Performance, Journal of Organizational Behavior Management, DOI:
10.1080/01608061.2020.1869136

To link to this article: https://doi.org/10.1080/01608061.2020.1869136

Published online: 28 Feb 2021.

Assessment of Employee Performance


Nicole Gravina (a), Jessica Nastasi (a), and John Austin (b)

(a) Department of Psychology, University of Florida, Gainesville, USA; (b) Reaching Results, USA

ABSTRACT
Assessments are commonly used in organizational behavior management (OBM) to identify performance targets, determine environmental variables contributing to poor performance, and devise appropriate interventions. This paper describes the role of assessment at the individual performer level in OBM and the assessment process. It also reviews four common types of OBM assessments: historical assessments, functional assessments, preference assessments, and procedural acceptability assessments, and discusses the research support, weaknesses, and opportunities for future research for each. Finally, we conclude with recommendations for the future of assessment in OBM, including incorporating technology, using ongoing question-asking to informally assess performance and the environment, developing and validating survey instruments and other assessment tools, and attending to cultural variables in assessments.

KEYWORDS
Assessment; performance diagnostic checklist; procedural acceptability; informant assessment; descriptive assessment

In organizations, most results are directly or indirectly a function of what people say and do or their behavior. The body of research in behavior analysis
tells us that behavior is driven by its environment, and most notably, the
consequences in the environment. The environments which drive employee
behavior are mostly created by the nature of the work and the people who
work in the organization, especially the leaders. Arguably, the ultimate goal of
organizational behavior management (OBM) is to scientifically study and
understand the role of various work environments in influencing leader and
employee behavior and the results they produce.
Organizations can be evaluated at three levels: the organization level, the
process level, and the performer level (McGee & Crowley-Koch, 2020;
Rummler & Brache, 1995). While assessing and intervening at the organization and process level can lead to large-scale organizational changes, they still require an understanding of the performer level because performers drive process and organizational change. Without environments that support performers in upholding process improvements and organizational changes, those improvements will fail. Thus, it is critically important to understand how the environment influences employee behavior and how it can be changed to encourage different behaviors. This paper will focus exclusively on performer-level assessments that can help us understand the factors influencing people's behavior in organizations.

CONTACT Nicole Gravina [email protected] School of Psychology, 945 Center Dr., Gainesville, Florida 32611.
Supplemental data for this article can be accessed on the publisher's website.
© 2021 Taylor & Francis
Researchers and practitioners have developed several different performance assessments to help understand and influence organizational behaviors. In this paper, we will review the role of performance assessment, discuss the assessment process, and focus on four broad classes of assessments related to individual employee performance:

1) Historical assessments, which analyze past data to derive optimal performance targets that deliver organizational results.
2) Functional assessments, which analyze the environment to discover the causes of poor performance or barriers to improved performance.
3) Preference assessments, which measure a person's relative preference for various items as potential reinforcers for their own behavior.
4) Procedural acceptability assessments, which measure the extent to which the treatments developed and implemented by researchers or practitioners are preferred by consumers or end-users.

We will describe the role and process of performer-level assessments, review each of these types of assessments, including recent developments, suggest future research in each area, and then look forward to the next 20 years of research and practice in OBM performance assessment.

The role of assessment in OBM


Performance assessments in OBM serve a variety of functions. Practitioners use assessments to identify performance issues in need of intervention (Sulzer-Azaroff & Fellner, 1984), determine environmental factors causing performance issues (Austin, 2000), and select functionally appropriate interventions (Carr, Wilder, Majdalany, Mathisen, & Strain, 2013). Also, as part of the assessment process, the practitioner may engage employees from the outset, build rapport, and learn more about an organization's processes and jargon, improving subsequent intervention implementation (Sigurdsson & Austin, 2006). This initial investment of time pays dividends when an intervention is implemented later. A deep understanding of the work environment can help identify the most critical performance targets.
Furthermore, a functional intervention based on an understanding of the current work environment is likely to be more effective than a generic "off-the-shelf" solution or blindly applying a solution that worked in a different environment. In addition, function-based solutions are potentially less effortful and less costly. Further, employees may be more likely to buy into the solution when they know a manager or consultant spent time learning about their organization and gathering input (McSween, Myers, & Kuchler, 1990). Moreover, an assessment produces a valuable permanent product for the organization, which can provide initial evidence of the practitioner's potential contributions to the organization and result in continued engagement. Therefore, assessments of various types should be standard practice in OBM.
In 2018, Wilder and colleagues reviewed research published in the Journal
of Organizational Behavior Management (JOBM) from 2000 to 2015 and found
that assessments were included in 28% of all OBM field studies and 48% of
a subset of safety-related studies. Ninety percent of the behavioral safety
studies that included assessments used a historical assessment, which often
involved examining the incident and first aid data from the site to select
intervention targets. Taken together, these data suggest that OBM researchers
use assessments, but there is an opportunity to expand their use in OBM field
research and practice. Furthermore, in some cases, it makes sense to combine
assessment procedures, and limited scholarly work guides the selection of
intervention tools as part of the assessment process.

Assessment process

The assessment process usually includes three stages: pre-assessment, assessment, and intervention planning, aptly described by Cunningham and Geller (2012). Assessments aimed at one well-defined and easily observed performance target (e.g., cleaning at the end of shift) will quickly move through this process, but larger-scale assessments (e.g., identifying safety behaviors that will lead to a reduction in injuries) will require more time in each stage. Following this process will help researchers and practitioners select the best assessments, identify the appropriate stakeholders, and reap the most value from the assessment.

Pre-assessment

During the pre-assessment stage, the practitioner and organizational stakeholders must agree on the assessment's goal and scope (Cunningham & Geller, 2012). This is particularly important in large-scale assessments and those aimed at identifying performance issues because a myriad of other problems can surface, and the assessment can grow in scope. For example, a practitioner conducting an assessment to improve safety behaviors performed by workers may learn that the maintenance staff is not performing necessary safety-related repairs in a timely fashion, and this could require a separate analysis. An agreed-upon plan can limit scope creep and keep the team focused. Identifying the scope can also help identify who should be involved in the assessment process. For example, an interview-based assessment could be conducted with both the employees whose performance is being assessed and their supervisor.

There are several other choices for practitioners and stakeholders to make
during pre-assessment planning. For example, the practitioner must select the
appropriate assessments, decide how they will be administered, and identify
resources needed. We will describe some of the assessments available in the
next section. An assessment can be administered by record review, individual
or group interviews, observation, survey, or a combination of these methods.
The administration will be, in part, dependent on who will contribute to the
assessment and what fits best with the chosen assessment and the employees’
jobs. For example, some people may not have a dedicated work computer, and
therefore, they may be more likely to respond to a survey administered using
paper during a meeting. Some employees may also prefer anonymity.
Resources required might include documents or data already available, access
to the work areas for observation, employee time, access to employee e-mail
addresses, access to scheduled meetings, and a space to work. The goal
of the planning stage is to use forethought to design an assessment plan to
gather useful information as efficiently as possible. An inefficient assessment
process will waste valuable time and resources and delay the start of an
intervention. However, proceeding without an assessment could be more
costly if an intervention does not produce the desired results.

Assessment
During the assessment phase, the practitioner uses the assessments to gather information. Consider planning the assessment in a way that leads to optimal information gathering. For example, administering a survey before an interview means that the results can help guide interview questions. Anonymous surveys may also yield different information than group or individual interviews because they gather honest input with less fear of repercussions. Practitioners may also want to create a plan to keep the information collected organized so that it is easy to locate relevant information later.
Although the scope should be identified prior to the assessment, it is wise to allow some flexibility in the process so that the practitioner can gather as much relevant information as possible. For example, when using a structured interview like the Performance Diagnostic Checklist (PDC; Austin, 2000), which will be described later, practitioners can ask follow-up questions to clarify responses and gather more nuanced information. Suppose that during the PDC interview the client responds that the supervisor is not present during task completion (question 5); in that case, the practitioner can directly observe task completion to confirm the response and ask follow-up questions to determine whether supervisor presence would improve the performance. A survey could include open-ended questions so that employees can provide information not stated in the survey, allowing practitioners to learn more about the performance issue and organization.
At the end of the assessment, practitioners often create a report to discuss with the site. For a small-scale assessment that employs one assessment tool, a graphical display and summary are likely sufficient to communicate the results. Larger-scale assessments might include a report with an executive summary that describes the assessment process followed and the results, followed by more details and then recommendations. Reports should consist of both strengths identified by the assessment (e.g., employees have been sufficiently trained and materials are readily available) and opportunities for improvement (e.g., performance feedback is lacking) to improve leader buy-in.

Intervention planning
After the assessment is complete, the findings can be used to select appropriate interventions. Although describing sample interventions is beyond this paper's scope, we would like to offer a few suggestions to consider during intervention planning. First, interventions that are most likely to be implemented effectively and consistently should be selected. A well-selected intervention is not useful if it is not implemented. During the assessment process, practitioners have learned about the organization, the intervention targets that might work best for it, and barriers to implementation. Therefore, they can design an intervention that fits the client's needs and environment. In many OBM studies that employed assessments, researchers started with one intervention component and then added components as needed (e.g., Cruz et al., 2019). This approach allows the practitioner to use the least intrusive intervention necessary to produce the desired results and shape the organizational behaviors required to maintain the solution. It also provides evidence for the organization that all intervention components included must be maintained to sustain the improvements.
Now that we have described the assessment process, we will discuss assessments that researchers and practitioners can use to learn more about the performance issues. Each of these assessments has advantages and disadvantages and is suitable in different contexts, determined during the pre-assessment phase.

Historical assessment

Many organizations already measure relevant behaviors or correlative outcomes (e.g., sales, absenteeism, turnover, product rejects, reported injuries) before the consultation. In some cases, those data may be used to identify performance targets or inform the development of intervention procedures (Bumstead & Boyce, 2005). This method is sometimes referred to as a historical assessment and is similar to a records review, typical in clinical behavior analysis. Historical assessments are one of the most common assessment methods utilized in OBM, perhaps due to the low effort and cost required compared to other methods (Wilder, Lipschultz, King, Driscoll, & Sigurdsson, 2018). Historical assessments have been used in a variety of settings, including manufacturing, retail, human services, sales, public transportation, food services, and construction (Fante, Gravina, & Austin, 2007; Hermann, Ibarra, & Hopkins, 2010; Lebbon, Austin, Rost, & Stanley, 2011; Lee, Shon, & Oah, 2014; Olson & Austin, 2001).
Historical assessments can provide vital information to help select intervention targets and identify conditions under which behaviors might be more or less likely to occur, and they are used most often in behavioral safety (Wilder et al., 2018). Historical assessments may be particularly amenable to behavioral safety because industrial organizations must collect data to comply with Occupational Safety and Health Administration (OSHA) requirements. Therefore, measures such as recordable injuries, compensation claims, and lost-time injuries may be available across several years. Furthermore, historical assessments are considered a best-practice assessment method for instituting behavior-based safety processes (McSween, 2003). In human service settings, researchers and practitioners can use historical data to identify which procedures or programs are consistently followed, billing trends, trends in absences and turnover, arrangements that lead to the best client outcomes, and potential monetary savings in addressing the performance. The utility of historical assessments may hinge on the accuracy and reliability of data collected before intervention procedures; thus, researchers and practitioners should inquire further to evaluate the quality of data to be used for historical assessments.
Although historical assessments can be a useful starting point to narrow the focus onto the most critical performance targets, they are typically combined with other assessment procedures (e.g., direct observation, interviews) to inform intervention selection. For example, a historical assessment might find that injuries to the hand comprise most of the injuries over the past five years at a company, but the most appropriate solution might involve supervisors requiring employees to wear gloves, monitoring the behavior, and praising it when it happens. In this case and many others, the intervention targets are different from the controlling variables, and different types of assessment may be required to understand each of these. Thus, a functional assessment may be needed to devise an effective solution.
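As a minimal illustration of this kind of records review, the sketch below (Python) tallies injuries by body part over the last five years of an incident log; the file name and column names are hypothetical, not taken from any specific OSHA log format.

```python
# Hedged sketch of a historical assessment: summarize incident records
# to narrow intervention targets. File and column names are illustrative.
import pandas as pd

incidents = pd.read_csv("incident_log.csv")  # assumed columns: date, body_part, severity
incidents["year"] = pd.to_datetime(incidents["date"]).dt.year

# Keep the last five years of records.
recent = incidents[incidents["year"] >= incidents["year"].max() - 4]

# Count injuries by body part; a dominant category (e.g., hand injuries)
# suggests a candidate performance target.
print(recent["body_part"].value_counts())

# Cross-tabulate by year to check whether the pattern is stable over time.
print(pd.crosstab(recent["body_part"], recent["year"]))
```

A summary like this identifies what to target; as the paragraph above notes, a functional assessment would still be needed to identify why the behavior occurs and which intervention fits.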

Performance analysis
In 1999, Austin et al. lamented that OBM had not kept pace with other areas of behavior analysis in developing functional assessments to improve the selection of effective interventions. They identified three reasons for the glaring omission. First, OBM interventions appear to be effective without assessment procedures. However, previous research in clinical behavior analysis indicates that functional assessment procedures can enhance interventions and subsequent outcomes compared to default intervention procedures (Wacker, Berg, & Harding, 2014). Second, employees typically possess well-developed language skills and the ability to accurately describe contingencies and other aspects of their work environment, and therefore much of their work behavior will include rule-governed components. Finally, OBM is concerned with increasing behavior, whereas in clinical behavior analysis most assessments were designed to understand harmful behaviors that should be decreased or replaced. These differences do not negate the relevance of assessment procedures, but the assessments employed in OBM are necessarily distinct from assessments used in other areas of behavior analysis. In fact, some argue that the term performance analysis is a better fit for assessments used in organizations because the procedures and outcomes do not directly map onto clinical functional assessments.
Austin and colleagues went on to describe performance analysis tools developed by behavior analysts to guide the examination of variables that could contribute to performance issues, including Gilbert's (1978) vantage analysis and troubleshooting guide, Mager and Pipe's (1997) simple flowchart, and Rummler and Brache's (1995) systems tools. The authors also discussed the ABC Analysis described by Daniels and others for analyzing performance (Daniels & Bailey, 2014). Each of these assessments helps trained professionals analyze performance, but they do not directly suggest interventions. Austin and colleagues suggested that performance analyses should consider four areas: antecedents, equipment and processes, knowledge and skills, and consequences. They also argued that although interviews may not be as effective as more direct assessment procedures such as descriptive or experimental analysis in clinical behavior analysis, they may be sufficient and the most practical for business settings.
In 2000, Austin published a book chapter that included an assessment tool, the Performance Diagnostic Checklist (PDC), that spanned the four areas mentioned above. The PDC could be used to interview employees, conduct observations or collect data, and aid in selecting an effective intervention, all of which will be described in more detail later in this paper. A flurry of studies followed that employed and extended the tool. Following the PDC, other assessment procedures were incorporated into OBM research, including a pre-intervention analysis involving antecedent manipulations (Therrien, Wilder, Rodriguez, & Wine, 2005), a structural analysis (Fante, Gravina, Betz, & Austin, 2010), and a fluency-based skill assessment to identify skill deficits (Pampino, Wilder, & Binder, 2005). Austin's papers, along with many others' work, helped propel OBM toward a more contemporary behavioral approach for addressing performance issues in organizations while simultaneously revealing more research questions. Twenty years have passed since the PDC was published, and since then, researchers have examined and expanded upon the assessment and incorporated other functional assessment methods into research and practice. OBM practitioners and researchers have leaned on behavior-analytic knowledge to further develop assessment methods for organizations. Behavior analysis typically utilizes three types of assessments: indirect assessment, direct assessment, and experimental analysis (Kelley, LaRue, & Roane, 2014), and many OBM assessments use more than one of these methods in concert (Wilder et al., 2018).

Indirect assessments

An indirect assessment involves gathering information to understand variables impacting a performance issue without directly observing those behaviors. Practitioners often use indirect assessments such as surveys, rating scales, and interviews because they are easy and quick to administer, require minimal training, and enable input from various sources. Below, we describe two common indirect assessment methods in OBM: the PDC and its variations, and the ABC Analysis.

Performance diagnostic checklist

Rather than identifying intervention targets, the PDC is a quick, low-effort tool for forming a hypothesis about the function of target behaviors and informing the selection of intervention procedures, and it may be used in a variety of settings. It is one of the most common assessments utilized in OBM, and its usage appears to be on an increasing trend (Wilder et al., 2018). The PDC was created to be a more specific version of the Behavior Engineering Model created by Gilbert (1978); the categories and some of the exact questions were based on a protocol analysis of expert OBM consultants' problem solving (Austin, 1996). An early version of the PDC appeared in other publications such as Mattaini and Thyer (1996). The intent was to create a tool that novices could use to identify effective solutions to organizational problems. While the PDC is a list of questions, researchers and practitioners can collect descriptive data to inform some of the answers.
Research has demonstrated that interventions informed by PDC results yield desirable intervention outcomes (e.g., Amigo, Smith, & Ludwig, 2008; Doll, Livesey, McHaffie, & Ludwig, 2007; Eikenhout & Austin, 2005; Gravina, VanWagner, & Austin, 2008; Pampino, Heering, Wilder, Barton, & Burson, 2004; Rodriguez et al., 2006). For example, Pampino et al. (2004) increased the completion of maintenance tasks in a coffee shop with intervention procedures informed by PDC results. Specifically, PDC results suggested that low task completion may occur because of a lack of adequate antecedent and consequence manipulation instead of training deficits or issues with equipment and processes.

Whereas the four domains of the PDC (i.e., antecedents and information, equipment and processes, knowledge and skills, and consequences) apply to performance in various settings, the original 20-item checklist is not always specific enough to identify contingencies operating in certain domains. Therefore, more precise iterations of the PDC have been developed and applied, including the PDC for human services (PDC-HS; Carr et al., 2013), the PDC for occupational safety (PDC-S; Martinez-Onstott, Wilder, & Sigurdsson, 2016), and the PDC for parents (PDC-P; Hodges, Villacorta, Wilder, Ertel, & Luong, 2020).
The PDC-HS was developed to assess the performance of employees responsible for providing direct care to other individuals (Carr et al., 2013). Carr et al. (2013) posed a few unique considerations for the performance of employees in human service settings, including inadequate treatment integrity, inaccurate data collection, deficits in program development, issues with attendance or tardiness, insufficient reporting, and poor graph construction. The authors administered the PDC in an autism treatment center providing early intervention services, then made revisions according to the conditions specific to human service organizations and added sections for scoring and corresponding intervention recommendations. Modifications included updating the domain titles to a) training, b) task clarification and prompting, c) resources, materials, and processes, and d) performance consequences, effort, and competition. Next, 11 behavior analysts were asked to pilot and assess the PDC-HS, and revisions were made accordingly. Finally, the predictive validity and utility of the final version of the PDC-HS were assessed by comparing the use of indicated and non-indicated interventions as identified by the PDC-HS. Results showed that performance improvements were greater after implementing the PDC-HS-indicated intervention compared to a non-indicated intervention, suggesting that the PDC-HS may be a valuable tool for identifying performance deficits and subsequent intervention recommendations in a human service setting (Carr et al., 2013).
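The published PDC-HS specifies its own items, scoring, and intervention mappings; as a rough sketch of how checklist-style domain scoring can point to an indicated intervention area, the example below (Python, with invented items and a simple count-based rule) tallies problem endorsements per domain and reports the most-endorsed one.

```python
# Hypothetical sketch of domain scoring in the spirit of the PDC-HS.
# The item-to-domain mapping and the scoring rule are invented for illustration.
from collections import Counter

# Each tuple: (domain the item belongs to, whether the answer indicated a problem).
responses = [
    ("training", True),
    ("task clarification and prompting", False),
    ("resources, materials, and processes", False),
    ("performance consequences, effort, and competition", True),
    ("performance consequences, effort, and competition", True),
]

scores = Counter(domain for domain, indicated in responses if indicated)
print(scores)  # problem endorsements per domain
print("Indicated domain:", max(scores, key=scores.get))
```

In the published tool, scoring is paired with corresponding intervention recommendations rather than a bare count as shown here.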
Since its publication, the PDC-HS has been utilized in a variety of settings, including schools (Bowe & Sellers, 2018; Merritt, DiGennaro Reed, & Martinez, 2019), retail stores (e.g., Loughrey, Marshall, Bellizzi, & Wilder, 2013; Smith & Wilder, 2018), and further evaluation in autism treatment clinics (Ditzian, Wilder, King, & Tanz, 2015; Wilder et al., 2018). A review conducted by Wilder, Cymbal, and Villacorta (2020) indicated that the performance consequences, effort, and competition domain was endorsed most often across settings. Future research may bolster support for using the PDC-HS by evaluating the tool compared to other assessment methodologies and assessing its utility in additional human service settings (e.g., residential facilities, clinics for the treatment of substance use, crisis intervention services, geriatric facilities).
Two additional iterations of the PDC include the PDC-Safety (PDC-S; Martinez-Onstott et al., 2016) and the PDC-Parent (PDC-P; Hodges et al., 2020). Martinez-Onstott et al. (2016) modified the PDC to include safety-specific language and added a Likert-scale response measure and formal scoring instructions. The authors used the adapted tool to identify deficits contributing to noncompliance with personal protective equipment (PPE) requirements among three landscaping professionals. Assessment results indicated deficits in performance consequences, and an intervention employing graphic feedback increased proper PPE usage across participants, but there was no comparison between indicated and non-indicated interventions. A study conducted by Cruz et al. (2019) used the PDC-Safety to evaluate variables influencing noncompliance with hand hygiene protocols at a center-based treatment facility for individuals with developmental and intellectual disabilities. Results suggest that the intervention indicated by the PDC-Safety was effective at increasing hand hygiene compliance compared to the non-indicated intervention. Hodges et al. (2020) adapted the PDC to assess barriers to treatment implementation by parents of children with problem behavior and evaluated the effectiveness of indicated versus non-indicated interventions. In experiment one, assessment results indicated a task clarification and prompting intervention, which increased parents' treatment integrity. In experiment two, the authors demonstrated that the indicated intervention was more effective than the non-indicated intervention at improving parent performance (Hodges et al., 2020). Further research is needed to evaluate the effectiveness of indicated versus non-indicated interventions for all PDC iterations.
Some additional recommendations for future research on the efficacy, feasibility, and validity of the PDC and its iterations may be considered. First, although preliminary research suggests acceptable validity and test-retest reliability for the PDC-HS (Cymbal, Wilder, Thomas, & Ertel, 2020; Wilder, Lipschultz, Gehrman, Ertel, & Hodges, 2019), additional research assessing the reliability and validity of the PDC and its iterations with a larger sample size is warranted. Replications of previous findings may be extended to novel settings and compare the use of the PDC to specialized iterations such as the PDC-HS, PDC-Safety, and PDC-Parent, as well as other formal assessment tools. Comparisons could examine the quality of recommendations yielded from the assessment and the time, cost, and resources required. In addition, future research should systematically evaluate the relationship between PDC domain scores and intervention efficacy. Although some early projects using the PDC were carried out by undergraduate students coached by graduate or doctoral-level behavior analysts (e.g., Austin, Weatherly, & Gravina, 2005; Rohn, Austin, & Lutrey, 2002; Shier, Rae, & Austin, 2003), suggesting that the tool does not require extensive experience to use, the level of skill necessary to use the PDC could be further examined. Cymbal et al. (2020) evaluated whether Master's, Bachelor's, and Associate's degree (or high school diploma) level practitioners trained in behavior analysis could use the PDC-HS to accurately identify domains responsible for a performance problem described in three vignettes. The results indicated that Master's and Bachelor's level practitioners were slightly better at accurately identifying the correct domains for the performance problem than Associate's level practitioners, but the difference was small (~5–6%). Researchers should also compare the PDC against a more informal interview across novice practitioners and OBM experts.
Future studies should evaluate whether the type of individual interviewed (e.g., manager or performer, high performer or low performer) differentially impacts the information gathered and the selection of appropriate intervention procedures. For example, low performers may find it more difficult to describe the barriers to performance, while high performers may have identified barriers and employed workarounds to produce good results. Researchers could also further refine the PDC by including a rating of importance for each item. Finally, future iterations of the PDC could incorporate culturally responsive questions into the tool to guide users to be culturally sensitive when asking the questions and selecting interventions (see Appendix A for an updated PDC with many of these considerations embedded).

ABC analysis

An ABC Analysis is an assessment in which practitioners identify antecedents and consequences that support and discourage desired and undesired performances (Connellan, 1978; Daniels & Bailey, 2014). ABC Analyses are typically constructed based on information known about the performance concerns, but interviews could provide additional information. ABC Analyses appear to be common in business because they are easy to teach, help leaders understand variables that may contribute to poor performance, and can be applied to many situations. However, ABC Analysis findings may make intervention development less intuitive than the PDC because the ABC Analysis is framed in terms of antecedents and consequences rather than specific solutions such as feedback, training, equipment, or praise. Despite the seemingly common application of ABC Analyses in organizations, limited research demonstrates their utility for selecting interventions. Researchers could examine whether insight gleaned from ABC Analyses improves intervention selection by practitioners with various experience levels.
Although indirect assessments are easy and quick to administer, previous clinical research indicates that their use alone may be insufficient because they can yield inaccurate or incorrect information compared to direct assessments (Fisher, Piazza, Bowman, & Amari, 1996; Lennox & Miltenberger, 1989; Umbreit, 1996). Although there are concerns about indirect assessments within behavior analysis, OBM research has repeatedly demonstrated the utility of indirect assessments for identifying performance issues and selecting interventions in organizational settings. Indirect assessments may be more informative in OBM because they are typically conducted with adults who have the language skills to describe their current environment. However, OBM researchers have never directly compared indirect assessments with descriptive or experimental assessments. A comparison of assessment methods could help researchers identify boundary conditions for when indirect assessments are suitable and when more rigorous assessment methods are required. Finally, none of the current indirect assessments used in OBM ask questions about culture (within and beyond the organization) or learning history, which could be important factors in intervention design and could make indirect methods more valuable.

Direct assessments

Direct assessments involve the direct observation and recording of behavior without manipulating the environment. Direct assessments yield descriptive data on behavior and the conditions under which it is most and least likely to occur, and they are more rigorous than indirect assessments. Direct assessments are usually informed by indirect assessments conducted beforehand (Thompson & Borrero, 2014). In OBM, direct assessments may involve observing high and low performers to compare differences in how they work and monitoring work performance under naturally occurring conditions (e.g., in the presence and absence of customers or the supervisor). Data can be collected using data sheets or A-B-C (narrative) recording (i.e., recording the antecedents, behaviors, and consequences). Sometimes, data are analyzed visually (e.g., scatterplot), statistically (e.g., correlations), or using a probability analysis or lag sequential analysis (e.g., calculating the probability that a specific consequence is more likely to follow a specific behavior).
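As a small illustration of the probability analysis just mentioned, the sketch below (Python, with invented A-B-C records) estimates how likely a given consequence is to follow a given behavior.

```python
# Minimal sketch: estimate P(consequence | behavior) from A-B-C records.
# The records and labels are invented for the example.
records = [
    {"behavior": "greets customer", "consequence": "praise"},
    {"behavior": "greets customer", "consequence": "none"},
    {"behavior": "greets customer", "consequence": "praise"},
    {"behavior": "stocks shelf", "consequence": "none"},
]

def p_consequence_given_behavior(records, behavior, consequence):
    matching = [r for r in records if r["behavior"] == behavior]
    if not matching:
        return 0.0
    hits = sum(1 for r in matching if r["consequence"] == consequence)
    return hits / len(matching)

# In this toy data set, praise follows greeting on 2 of 3 occasions.
print(p_consequence_given_behavior(records, "greets customer", "praise"))
```

Comparing this conditional probability to the overall base rate of the consequence indicates whether the consequence is differentially likely after the behavior.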

Narrative recording

Narrative recording entails observing and recording antecedents, behaviors, and consequences as they occur in the natural environment. A book chapter published in 1982 described a descriptive assessment procedure designed to identify effective sales behaviors (Crawley, Adler, O'Brien, & Duffy, 1982). The researchers followed top-performing and low-performing salespeople and took detailed data on their behaviors and subsequent sales engagements. When they interviewed top salespeople and asked why they were effective at selling (an indirect assessment), they did not gather much useful information. However, the direct observations yielded information about behaviors top sellers engaged in, and the researchers created a checklist. Later, they taught low-performing salespeople to follow the checklist, and their sales increased dramatically. When the checklist and training were implemented organization-wide, the organization saw a dramatic increase in sales.

Direct observation

OBM researchers collect data under various conditions during baseline to help identify an intervention. For example, Fienup, Luiselli, Joy, Smyth, and Stein (2013) collected data on the start and end times of consecutive meetings and the transition time required between meetings. They found that meetings that started and ended late affected punctuality at the next meeting. Fante et al. (2010) noticed that the high variability in pharmacists' safe performance appeared to be due to the presence and absence of a makeshift wrist support. After a variable baseline phase, the researchers collected descriptive data on the presence of the improvised wrist support and found a strong correlation: the use of wrist supports resulted in safer wrist positioning. These simple observations led to powerful interventions that may not have been obvious without direct observation.

Scatterplot

A scatterplot presents collected data visually so that patterns can be detected. For example, Anbro et al. (2020) used a scatterplot in a study that evaluated a virtual reality (VR) training procedure intended to improve communication and situational awareness among medical and nursing students. The VR technology recorded eye gaze, and observers recorded correct communication steps. A scatterplot revealed that the training improved communication but not looking at the patient. Scatterplots are useful for identifying temporal patterns or relationships between two variables. However, it can sometimes be difficult to detect patterns in responding using a scatterplot, particularly if the measures plotted are not presented in the appropriate unit of analysis. If no patterns emerge, another assessment procedure may be required.
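A scatterplot of this kind is straightforward to produce; the sketch below (Python with matplotlib, invented data points) plots when a target behavior occurred across observation days so that temporal clustering stands out.

```python
# Illustrative scatterplot: hour of day a target behavior occurred,
# plotted against observation day. Data points are invented.
import matplotlib.pyplot as plt

days = [1, 1, 2, 2, 3, 4, 4, 5]       # observation day
hours = [9, 14, 9, 15, 10, 9, 16, 9]  # hour of day the behavior occurred

plt.scatter(days, hours)
plt.xlabel("Observation day")
plt.ylabel("Hour of day")
plt.title("Occurrences of target behavior by day and time")
plt.show()
```

Clustering near a particular hour would suggest a time-linked environmental variable worth examining further.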
Although direct assessments are more rigorous than indirect assessments, there are some disadvantages worth mentioning. Because direct assessments require direct observation, data collection, and analysis, they necessitate more time and training to complete. A culturally competent assessment will require even more training. Also, while direct assessments involve direct observations of behavior and conditions, they do not demonstrate a functional relationship because no environmental variables are manipulated. Therefore, they may require a similar amount of time as an experimental analysis but yield less informative results. Finally, it may be problematic to observe and take data on all the behaviors and conditions of concern to organizations. For example, collecting descriptive data on unsafe work behaviors could be problematic if they occur infrequently and unethical if they are not intervened upon immediately.

Experimental analysis

Experimental analyses are more rigorous than indirect and direct assessments,
and they can also yield more definitive information about causal variables and
lead to optimal interventions. In an experimental analysis, researchers or
practitioners systematically manipulate environmental variables and observe
responses in each condition (Wacker et al., 2014). The variables manipulated
are usually chosen based on the results of an indirect or direct assessment. For
example, if an observer notices that employees behave differently in the
presence and absence of the supervisor, they could manipulate their presence
to see if a functional relationship emerges. By showing that the behavior
changes when the experimenter changes the environmental condition, we
become more confident that the environmental variable is responsible for
behavior changes. The more demonstrations of this relationship, the more
confident we can be in our conclusions. This process is similar to the elements
of prediction, verification, and replication, as seen in design methodology
(Cooper, Heron, & Heward, 2020; Erath et al., 2020). For practical reasons,
when conducted as part of an assessment, these manipulations usually occur in
short segments (e.g., 5 to 30 min), which enables the experimenter to identify
functional relationships quickly.
Safety performance may be amenable to an experimental analysis if the conditions tested do not put employees at prolonged or unnecessary risk. For example, following the direct assessment of pharmacy employees by Fante et al. (2010) mentioned above, the researchers conducted an experimental manipulation. Because they had observed that pharmacy employees' posture appeared to be safer when the makeshift wrist support was in place, they manipulated the presence of the wrist support. They concluded that it was functionally related to safety performance. Low-risk, low-effort experimental analyses like those conducted by Fante et al. (2010) may be modified and applied in other contexts to assess the variables impacting performance and inform the selection of intervention procedures.
Experimental analyses may also involve manipulating aspects of the physical environment like sounds, light, and the presence of others. Therrien et al. (2005) manipulated a series of variables in an alternating fashion to determine which increased the likelihood of employees greeting customers at a sandwich shop. Although they found that the radio did not appear to influence performance, the door chime was most likely to encourage customer greetings, followed by the presence of a manager. The researchers then combined the door chime and manager presence and demonstrated substantial improvement in greetings over baseline conditions. Finally, the experimenters added feedback, which increased performance to 100% for the last two sessions. The experimental analysis employed by Therrien and colleagues demonstrates how stimuli presumed to serve a discriminative function can be verified via experimental analysis.
Finally, Pampino et al. (2005) included a performance analysis in their study to increase the correct entry of job codes by construction supervisors. Job codes were used for billing, and entering incorrect codes could result in over- or under-billing the client. The researchers broke the task into component skills (remembering codes, locating codes, and typing codes) and evaluated the supervisors' performance on each skill. Once they identified deficits, they used fluency training to improve the supervisors' accuracy and speed of performance. Thus, the assessment resulted in more precise and efficient training procedures.
Despite many successful examples of experimental analyses in clinical behavior analysis, limited research using this assessment approach exists in OBM. In some cases, it may be too challenging to manipulate variables while people work, especially if doing so may interfere with job performance or safety. Improving virtual reality and other simulation technology could enhance our ability to conduct experimental analyses without risking harm or lost productivity in the future. However, employees may behave differently under contrived conditions than during their everyday work. Still, an experimental analysis may be more efficient than testing interventions over a more extended period using an experimental design. Furthermore, it may provide more accurate information than traditional surveys employed in Industrial-Organizational Psychology. OBM researchers should continue to develop experimental analysis methods to help identify effective interventions.

Reinforcer and preference assessments

The identification of reinforcers is integral to developing and implementing any effective consequence-based intervention component (Simonian, Brand, Mason, Heinicke, & Luoma, 2020). Presumed reinforcers may not function as such, wasting time and money. Therefore, stimulus preference assessments (identification of putative reinforcers) and reinforcer assessments (identification of stimuli demonstrated to increase behavior) are a cornerstone of behavior-analytic interventions (Fisher et al., 1992; Roscoe, Iwata, & Kahng, 1999). Common preference assessment formats in clinical behavior analysis include forced-choice procedures (i.e., selection between stimuli such that a hierarchy may be established) such as the paired-stimulus preference assessment (Fisher et al., 1992), ranking procedures, and multiple stimulus preference assessments with (MSW; Windsor, Piché, & Locke, 1994) or without replacement (MSWO; DeLeon & Iwata, 1996).
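As a rough sketch of the MSWO logic (selection without replacement), the example below (Python) presents an array of stimuli, removes each chosen item, and re-presents the rest until none remain; selection order gives the preference hierarchy. The items and the scripted chooser are hypothetical stand-ins for real employee selections.

```python
# Hedged sketch of a multiple stimulus without replacement (MSWO) procedure.
def mswo(stimuli, choose):
    """choose(remaining) returns the item selected from those remaining."""
    remaining = list(stimuli)
    hierarchy = []
    while remaining:
        picked = choose(remaining)
        hierarchy.append(picked)
        remaining.remove(picked)  # without replacement
    return hierarchy  # earlier selections = more preferred

# Example with scripted selections standing in for an employee's choices.
items = ["gift card", "preferred parking", "early dismissal", "snacks"]
scripted = iter(["early dismissal", "gift card", "snacks", "preferred parking"])
print(mswo(items, lambda remaining: next(scripted)))
```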
Preference and reinforcer assessments are ubiquitous in applied behavior-analytic research conducted with individuals with developmental disabilities yet are seldom utilized with typically developing adults or employees (Wine, Kelley, & Wilder, 2014). Previous research indicates that managers are poor at predicting employee preferences, and in practice they often request help in identifying reinforcers for their employees; thus, the use of more formal preference and reinforcer assessment methodology with employees may be warranted (Wilder, Harris, Casella, Wine, & Postma, 2011; Wilder, Rost, & McMahon, 2007). Also, employee preferences may change over time; thus, preference should be reassessed over extended periods of employment (Wine, Gilroy, & Hantula, 2012; Wine et al., 2014).
Waldvogel and Dixon (2008) compared the use of a ranked survey and a multiple stimulus preference assessment without replacement with four direct-care staff. They found that assessment ranks correlated across formats for three out of four employees, but no reinforcer assessment was conducted to determine whether preferred stimuli functioned as reinforcers. Wilder et al. (2007) compared the use of stimuli identified by a ranked survey to stimuli identified with a verbal choice format. Then a reinforcer assessment was used to determine whether the identified stimuli functioned as reinforcers. Results indicated that the survey format was more accurate in identifying reinforcers than the verbal choice format. Wine, Reis, and Hantula (2014) compared a ranking procedure, a survey, and an MSWO and conducted a subsequent reinforcer assessment with three direct-care staff members. All preference assessment formats identified reinforcers, but the results of social validity measures indicated that the MSWO was rated as more cumbersome and less preferred, and took more time, than the ranking and survey formats.
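Agreement between formats of the sort Waldvogel and Dixon (2008) examined can be summarized with a rank correlation; the sketch below (Python with SciPy, invented ranks) correlates the ranks two formats assign to the same stimuli.

```python
# Sketch: rank agreement between two preference-assessment formats.
# Ranks are invented for the example.
from scipy.stats import spearmanr

survey_ranks = [1, 2, 3, 4, 5]  # stimulus ranks from the ranked survey
mswo_ranks = [2, 1, 3, 4, 5]    # ranks of the same stimuli from the MSWO

rho, p = spearmanr(survey_ranks, mswo_ranks)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")  # high rho = formats agree
```

Note that agreement in rank order still says nothing about reinforcer efficacy; as noted above, only a reinforcer assessment shows whether preferred stimuli actually function as reinforcers.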
Practitioners can also identify preferred job tasks and working arrangements to improve job assignments and identify strategies to make aversive tasks more palatable. For example, Green, Reid, Passante, and Canipe (2008) created an assessment tool they termed the Task Enjoyment Motivation Protocol (TEMP), which involved supervisors interviewing staff to identify which job tasks were least preferred and the aspects of those tasks that made them less preferred. Supervisors then attempted to remove undesirable properties of tasks reported as less preferred (e.g., eliminating interruptions while reviewing timesheets). One participant disliked conducting staff observations because staff appeared to dislike being observed. The researchers added a performance lottery so that the participant could deliver lottery tickets based on observations, resulting in staff rating being observed as more favorable on a rating scale. The researchers also attempted to increase desirable stimuli associated with the task (e.g., providing snacks during paperwork). Results indicated that the tasks were rated and ranked higher after making changes based on the TEMP assessment.
OBM researchers have also evaluated feedback preferences. For example, Reid and Parsons (1996) demonstrated that staff in a clinical, residential setting preferred immediate versus delayed feedback. Sigurdsson and Ring (2013) found that undergraduate students preferred feedback on correct quiz performance compared to incorrect quiz performance, even though both resulted in similar quiz performance scores. Researchers have also demonstrated that employees prefer different delivery modes of feedback. For example, Hardesty, Orchowitz, and Bowman (2018) found that therapists in a clinical setting preferred to view data in a line graph, while direct-care employees preferred bar graphs.
Research indicates that the most practical and accurate preference assessments in OBM differ from those in clinical behavior analysis, which is to be expected. While research on directly assessing preferences exists in OBM, it is mostly related to tangible reinforcers, which may not be practical in all organizations, and some practitioners (the third author, for example) would argue they are less effective than social reinforcers. Green et al. (2008) demonstrated that changing aspects of the work task could improve the task's rating and ranking, which warrants more research. Other researchers have shown that participants prefer different aspects of feedback delivery (e.g., Sigurdsson & Ring, 2013). OBM researchers have a tremendous opportunity to evaluate practical strategies that allow leaders to identify employee preferences for reinforcers and other work-related factors and improve work productivity and enjoyment. More focus on assessments related to work and preferences could result in designing interventions that employees find acceptable and enjoyable.

Procedural acceptability assessments

The term social validity encompasses the social significance of the goals selected, the acceptability of intervention procedures, and the importance of intervention outcomes (Wolf, 1978). Assessments of social validity generally solicit information from consumers of an intervention to inform the initial development and long-term viability of intervention procedures (Schwartz & Baer, 1991). Specifically, these assessments may indicate the need for further education or training, detect undesirable intervention effects, and allow us to protect the rights of consumers (Hawkins, 1991; Kazdin, 1977). Although the term social validity has been subject to debate, there is immense value in assessing the acceptability of intervention goals, procedures, and outcomes for consumers. Assessments of procedural acceptability may be particularly useful in OBM to predict undesirable intervention outcomes and increase the probability that institutionalization will occur (Nastasi, Simmons, & Gravina, 2020; Sigurdsson & Austin, 2006). Unacceptable management procedures may increase absenteeism, turnover, disputes, and arbitrations between labor and management (Parsons, 1998). Therefore, this section will specifically focus on measures of procedural acceptability (see the pre-assessment section for the selection of targets).

Interviews, questionnaires, rating scales, and choice procedures are the most common methods used to assess procedural acceptability in OBM (Nastasi et al., 2020). It would be valuable if researchers or practitioners developed a validated protocol that uses consumer use or nonuse of behavioral interventions as the dependent variable to predict acceptability and that can be used in practice to increase the adoption of behavioral technology. Due to the subjective nature of these measures, the utility of procedural acceptability assessments may hinge on the conditions under which those assessments are employed (Schwartz & Baer, 1991). Therefore, researchers and practitioners should consider a few variables when conducting procedural acceptability assessments. First, procedural acceptability must be assessed using a representative sample of relevant consumers across an organization. Researchers and consultants should also consider how other variables, such as anonymity or the availability of results to an immediate supervisor, may alter consumer responding. Procedural acceptability should also be assessed at multiple time points across an intervention. The assessment results can then be used to alter the intervention or supplement intervention procedures as needed to maximize outcomes and maintenance. Researchers could use this information to examine how acceptability changes based on how the intervention is introduced and the changes in performance it creates over time. Finally, procedural acceptability could be used as a tool for improving cultural awareness during intervention development and adjusting interventions to be more culturally sensitive.
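One simple way to track acceptability at multiple time points, sketched below in Python with invented components, ratings, and threshold, is to average Likert ratings per intervention component at each time point and flag components that fall below a chosen level.

```python
# Illustrative sketch of monitoring procedural acceptability over time.
# Components, ratings, and the flag threshold are hypothetical.
from statistics import mean

# (component, time point) -> list of 1-5 acceptability ratings from employees
ratings = {
    ("feedback meetings", "month 1"): [4, 5, 4, 3],
    ("feedback meetings", "month 3"): [3, 3, 2, 3],
    ("public posting", "month 1"): [2, 3, 2, 2],
}

THRESHOLD = 3.0  # components averaging below this get flagged for review
for (component, time_point), scores in ratings.items():
    avg = mean(scores)
    flag = " <- consider adjusting this component" if avg < THRESHOLD else ""
    print(f"{component} @ {time_point}: mean = {avg:.1f}{flag}")
```

Declining means across time points would signal that an intervention component needs to be altered or supplemented, as described above.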
Although the benefits of assessing procedural acceptability are numerous, limited research exists on the use of procedural acceptability assessments in OBM. This observation is ironic, since virtually every OBM practitioner encounters challenges when encouraging clients to change their behavior and adopt behavioral recommendations and/or systems. Future research should evaluate the accuracy and reliability of results across different types of procedural acceptability measures. Procedural acceptability may also differ for customers or clients across components of an intervention; thus, further research is needed to determine which aspects of an intervention may be more or less acceptable to those who interact with employees exposed to interventions. Furthermore, researchers should assess the variables impacting the accuracy of subjective measures across organizational settings. Finally, researchers and practitioners should recognize that acceptability is the bare minimum. Organizational leaders should ultimately strive to maximize intervention outcomes and improve job satisfaction among all members of an organization (Hantula, 2015).

Reflections on the current state of assessment in OBM

OBM currently utilizes several performer-level assessments that help identify intervention targets, select interventions likely to improve performance, and learn employee preferences. Each of these assessments improves our understanding of the behaviors of employees and the environment where they work. That understanding can lead to more sophisticated, tailored, and effective interventions, as researchers have demonstrated.
Despite the increase in assessments employed in OBM research, there is still tremendous opportunity for growing and refining the research and application of assessments. Although researchers have studied assessments, there have been only modest attempts to evaluate the validity and reliability of the assessments (e.g., Carr et al., 2013; Cymbal et al., 2020; Johnson, Casella, McGee, & Lee, 2014; Wilder et al., 2019). In other words, we do not know whether the assessment procedures are leading to the best possible behavior targets and interventions or if practitioners would choose the same targets and interventions without the aid of an assessment. Furthermore, we do not know if selecting these target behaviors and interventions leads to optimal organizational outcomes (Daniels & Bailey, 2014). This question could be, at least partially, answered by conducting a meta-analysis to compare effect sizes of applied intervention studies that employed an assessment with those that did not. It might also be useful to compare intervention selection among experts and novices for a variety of performance issues with and without the aid of an assessment tool.
Although practitioners acknowledge that assessments offer additional benefits such as building rapport and learning more about the client, the work, and the organizational jargon, these benefits have not been documented in the research literature. Pre-post surveys could evaluate practitioner-client relationships, and pre-post tests could evaluate the knowledge acquired about the work and organization. Practitioners could also be surveyed to learn more about their experiences with employing assessments in organizations. Although researchers have assessed preferences for reinforcers and tasks, there is an opportunity to expand this area by developing tools to determine preferences for other aspects of work like meeting arrangements, feedback discussions, task assignments, and other aspects of the work experience. Furthermore, researchers could evaluate whether accommodating employee preferences for various task arrangements leads to improved work performance or job satisfaction.
Finally, researchers have used assessments to identify interventions that are likely to be useful, but they have rarely assessed the social validity of interventions before implementation. Doing so could allow researchers to determine whether expected acceptability predicts effectiveness. It could also increase our understanding of intervention "buy-in" if acceptability were assessed periodically throughout the intervention process.

Looking forward: the next 20 years


Given the premise that behavior drives results and the environment drives
behavior, one possible direction for research and practice is to help leaders and
managers understand critical environmental factors to more easily improve
performance and the workplace. For example, it is not hard to imagine digital
tools that require no behavioral training and allow leaders to conduct
assessments (historical, functional, reinforcer preference, or procedural
acceptability-focused) and derive behaviorally sound solutions, as sketched
below. Of course, the problem of
proper deployment would remain, and OBM has always had the challenge of
teaching people to effectively implement behavioral interventions, so that
should also be a focus in the next 20 years.
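The following minimal sketch illustrates what such a tool's decision logic
might look like; the questions, scoring rule, and intervention mapping are
simplified illustrative assumptions inspired by the Performance Diagnostic
Checklist's four domains, not the validated instrument itself:

```python
# An illustrative sketch of a digital assessment tool: a leader answers
# yes/no questions grouped by four PDC-inspired domains, and the tool flags
# the weakest domain and a matching class of interventions.

domains = {
    "Antecedents & Information": [
        "Is there a written job description?",
        "Are priorities communicated clearly?",
    ],
    "Equipment & Processes": [
        "Is the needed equipment reliably available?",
        "Is the process free of obvious bottlenecks?",
    ],
    "Knowledge & Skills": [
        "Can the employee describe the task accurately?",
        "Can the employee demonstrate the task?",
    ],
    "Consequences": [
        "Does the employee receive regular feedback?",
        "Are there consequences for good performance?",
    ],
}

interventions = {
    "Antecedents & Information": "task clarification, prompts, goal setting",
    "Equipment & Processes": "equipment fixes, process redesign",
    "Knowledge & Skills": "training, modeling, behavioral skills training",
    "Consequences": "feedback, reinforcement, incentive systems",
}

def assess(answers):
    """answers maps each question to True ('yes') or False ('no')."""
    scores = {d: sum(answers[q] for q in qs) / len(qs)
              for d, qs in domains.items()}
    weakest = min(scores, key=scores.get)
    return weakest, interventions[weakest]

# Example run with hypothetical answers: all 'yes' except the consequence items.
answers = {q: True for qs in domains.values() for q in qs}
answers["Does the employee receive regular feedback?"] = False
answers["Are there consequences for good performance?"] = False
domain, suggestion = assess(answers)
print(f"Weakest domain: {domain} -> consider: {suggestion}")
```

In a deployed tool, the answers would presumably feed in from a mobile or web
form, and the mapping from domains to interventions would need to be validated
against actual intervention outcomes.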
A low-tech and informal method that has promise for assessing the function
of various environmental influences on target behaviors is question asking. In
recent years, a growing number of articles in prominent outlets such as the
Harvard Business Review have featured question asking. Simple observation in
virtually any organizational setting reveals that we often reinforce the behavior
of people who give the right answer, but rarely of those who ask a good question.
Researchers have found that leaders do not ask enough questions, that asking
more questions improves learning and interpersonal bonds (in some situations,
people like the questioner more), and that certain types of questions
are more effective than others (Brooks & John, 2018).
teach leaders to ask questions that are more effective at uncovering the causes
of poor performance to derive more effective and more acceptable solutions.
Regular question-asking about performance could lead to more agile and
preventative solutions.
Over the next 20 years, OBM has a tremendous opportunity to develop and
validate new assessments, including behavior-based surveys, that could help
identify performance targets and effective interventions. For
example, OBM researchers could create an assessment to evaluate the overall
level of reinforcement in the work environment and examine whether that
predicts good performance. Validated and useful assessments may serve to
disseminate OBM by helping leaders readily understand the work environ­
ment and drivers of behavior.
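A minimal sketch of that validation step might look like the following, where
a hypothetical "level of reinforcement" survey score for each work unit is
correlated with a performance measure; all items, scores, and data are
invented for illustration:

```python
# Score a hypothetical reinforcement survey per work unit, then check whether
# scores covary with a performance measure (here, via Pearson's r computed
# from first principles to keep the sketch dependency-free).

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

# Mean survey score (1-5 Likert, higher = more reinforcement) per team,
# paired with that team's task-completion percentage.
survey_scores = [2.1, 3.4, 3.9, 4.5, 2.8]
performance = [61.0, 74.0, 80.0, 88.0, 69.0]

print(f"r = {pearson_r(survey_scores, performance):.2f}")
```

A usable version of such a survey would, of course, require psychometric
development and replication across organizations before the correlation could
be interpreted with confidence.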
Finally, as behavior analysts become more culturally informed, OBM has an
opportunity to embed culturally sensitive questions and procedures into
assessments. For example, feedback assessments should consider the broader
culture where the organization operates and how feedback typically works in
that culture. Cultural humility is an ongoing process of learning about the
cultures of others while examining one's own beliefs and cultural identities
and how these might influence one's behavior, expectations, and decisions. This
approach is important for the field of OBM, as well as for consultants and
leaders working to improve performance, because assuming one already knows all
of the elements required for performance improvement can be culturally
insensitive, not to mention behaviorally less effective. We argue that a more
effective and culturally humble approach is to collect data, ask questions,
conduct a variety of assessments as described in this paper, and learn
about the people in the work environment and engage them in deciding on
solutions before taking the most appropriate course of action. The last 20 years
of OBM research and practice have seen an increase in the use of assessments.
The next 20 years should focus on expanding and refining them to improve
our impact on employees and organizations.

Disclosure statement
No potential conflict of interest was reported by the authors.

ORCID
Nicole Gravina http://orcid.org/0000-0001-8210-7159

References
Amigo, S., Smith, A., & Ludwig, T. (2008). Using task clarification, goal setting, and feedback to
decrease table busing times in a franchise pizza restaurant. Journal of Organizational
Behavior Management, 28(3), 176–187. doi:10.1080/01608060802251106
Anbro, S. J., Szarko, A. J., Houmanfar, R. A., Maraccini, A. M., Crosswell, L. H., Harris, F. C., . . .
Starmer, L. (2020). Using virtual simulations to assess situational awareness and commu­
nication in medical and nursing education: A technical feasibility study. Journal of
Organizational Behavior Management, 40(1–2), 1–11. doi:10.1080/01608061.2020.1746474
Austin, J. (1996). Organizational troubleshooting in expert management consultants and experienced
managers [Unpublished doctoral dissertation]. Florida State University, Tallahassee, FL.
Austin, J. (2000). Performance analysis and performance diagnostics. In J. Austin & J. E. Carr
(Eds.), Handbook of Applied Behavior Analysis (pp. 321–350). Oakland, CA: Context Press.
Austin, J., Weatherly, N., & Gravina, N. (2005). Using task clarification, graphic feedback, and
verbal feedback to increase closing task completion in a privately owned restaurant. Journal
of Applied Behavior Analysis, 38(1), 117–121. doi:10.1901/jaba.2005.159-03
Bowe, M., & Sellers, T. P. (2018). Evaluating the performance diagnostic checklist-human
services to assess incorrect error-correction procedures by preschool paraprofessionals.
Journal of Applied Behavior Analysis, 51(1), 166–176. doi:10.1002/jaba.428
Brooks, A. W., & John, L. K. (2018 May-June). The surprising power of questions. Harvard
Business Review, 60–67. https://hbr.org/2018/05/the-surprising-power-of-questions
Bumstead, A., & Boyce, T. E. (2005). Exploring the effects of cultural variables in the imple­
mentation of behavior-based safety in two organizations. Journal of Organizational Behavior
Management, 24(4), 43–63. doi:10.1300/J075v24n04_03
Carr, J. E., Wilder, D. A., Majdalany, L., Mathisen, D., & Strain, L. A. (2013). An
assessment-based solution to a human-service employee performance problem. Behavior
Analysis in Practice, 6(1), 16–32. doi:10.1007/bf03391789
Connellan, T. K. (1978). How to improve human performance. New York, NY: Harper and Row.
Cooper, J. O., Heron, T. E., & Heward, W. L. (2020). Applied behavior analysis (3rd ed.). Upper
Saddle River, NJ: Pearson.
Crawley, W. J., Adler, B. S., O’Brien, R. M., & Duffy, E. M. (1982). Making salesmen: Behavioral
assessment and intervention. In Industrial behavior modification: A management handbook
(pp. 184–199). Oxford, United Kingdom: Pergamon Press.
Cruz, N. J., Wilder, D. A., Phillabaum, C., Thomas, R., Cusick, M., & Gravina, N. (2019).
Further evaluation of the performance diagnostic checklist-safety (PDC-Safety). Journal of
Organizational Behavior Management, 39(3–4), 266–279. doi:10.1080/01608061.2019.1666777
Cunningham, T. R., & Geller, E. S. (2012). A comprehensive approach to identifying interven­
tion targets for patient-safety improvement in a hospital setting. Journal of Organizational
Behavior Management, 32(3), 194–220. doi:10.1080/01608061.2012.698114
Cymbal, D., Wilder, D. A., Thomas, R., & Ertel, H. (2020). Further evaluation of the validity
and reliability of the performance diagnostic checklist-human services. Journal of
Organizational Behavior Management, 1–9. doi:10.1080/01608061.2020.1792027
Daniels, A. C., & Bailey, J. S. (2014). Performance management: Changing behavior that drives
organizational effectiveness (5th ed.). Atlanta, Georgia, USA: Aubrey Daniels International,
Inc.
DeLeon, I. G., & Iwata, B. A. (1996). Evaluation of a multiple-stimulus presentation format for
assessing reinforcer preferences. Journal of Applied Behavior Analysis, 29(4), 519–533.
doi:10.1901/jaba.1996.29-519
Ditzian, K., Wilder, D. A., King, A., & Tanz, J. (2015). An evaluation of the performance
diagnostic checklist–human services to assess an employee performance problem in
a center-based autism treatment facility. Journal of Applied Behavior Analysis, 48(1),
199–203. doi:10.1002/jaba.171
Doll, J., Livesey, J., McHaffie, E., & Ludwig, T. D. (2007). Keeping an uphill edge: Managing
cleaning behaviors at a ski shop. Journal of Organizational Behavior Management, 27(3),
41–60. doi:10.1300/j075v27n03_04
Eikenhout, N., & Austin, J. (2005). Using goals, feedback, reinforcement, and a performance
matrix to improve customer service in a large department store. Journal of Organizational
Behavior Management, 24(3), 27–62. doi:10.1300/J075v24n03_02
Erath, T. G., Pellegrino, A. J., DiGennaro Reed, F. D., Ruby, S. A., Blackman, A. L., &
Novak, M. D. (2020). Experimental research methodologies in organizational behavior
management [Manuscript submitted for publication]. Department of Applied Behavior
Science, University of Kansas.
Fante, R., Gravina, N., & Austin, J. (2007). A brief pre-intervention analysis and demonstration
of the effects of a behavioral safety package on postural behaviors of pharmacy employees.
Journal of Organizational Behavior Management, 27(2), 15–25. doi:10.1300/J075v27n02_02
Fante, R., Gravina, N., Betz, A., & Austin, J. (2010). Structural and treatment analyses of safe
and at-risk behaviors and postures performed by pharmacy employees. Journal of
Organizational Behavior Management, 30(4), 325–338. doi:10.1080/01608061.2010.520143
Fienup, D. M., Luiselli, J. K., Joy, M., Smyth, D., & Stein, R. (2013). Functional assessment and
intervention for organizational behavior change: Improving the timeliness of staff meetings
at a human services organization. Journal of Organizational Behavior Management, 33(4),
252–264. doi:10.1080/01608061.2013.843435
Fisher, W. W., Piazza, C. C., Bowman, L. G., & Amari, A. (1996). Integrating caregiver report
with a systematic choice assessment to enhance reinforcer identification. American Journal
on Mental Retardation. https://psycnet.apa.org/record/1996-01619-002
Fisher, W. W., Piazza, C. C., Bowman, L. G., Hagopian, L. P., Owens, J. C., & Slevin, I. (1992).
A comparison of two approaches for identifying reinforcers for persons with severe and
profound disabilities. Journal of Applied Behavior Analysis, 25(2), 491–498. doi:10.1901/
jaba.1992.25-491
Gilbert, T. F. (1978). Human competence. New York, NY: McGraw-Hill.
Gravina, N., VanWagner, M., & Austin, J. (2008). Increasing physical therapy equipment
preparation behaviors using task clarification, graphic feedback and modification of work
environment. Journal of Organizational Behavior Management, 28(2), 110–122. doi:10.1080/
01608060802100931
Green, C. W., Reid, D. H., Passante, S., & Canipe, V. (2008). Changing less-preferred duties to
more-preferred: A potential strategy for improving supervisor work enjoyment. Journal of
Organizational Behavior Management, 28(2), 90–109. doi:10.1080/01608060802100899
Hantula, D. A. (2015). Job satisfaction: The management tool and leadership responsibility.
Journal of Organizational Behavior Management, 35(1–2), 81–94. doi:10.1080/
01608061.2015.1031430
Hardesty, S. L., Orchowitz, P. M., & Bowman, L. G. (2018). An evaluation of staff preference for
graphic characteristics. Journal of Organizational Behavior Management, 38(4), 345–353.
doi:10.1080/01608061.2018.1524338
Hawkins, R. P. (1991). Is social validity what we are interested in? Argument for a functional
approach. Journal of Applied Behavior Analysis, 24(2), 205. doi:10.1901/jaba.1991.24-205
Hermann, J. A., Ibarra, G. V., & Hopkins, B. L. (2010). A safety program that integrated
behavior-based safety and traditional safety methods and its effects on injury rates of
manufacturing workers. Journal of Organizational Behavior Management, 30(1), 6–25.
doi:10.1080/01608060903472445
Hodges, A. C., Villacorta, J., Wilder, D. A., Ertel, H., & Luong, N. (2020). Assessment and
improvement of parent training: An evaluation of the performance diagnostic checklist–
parent. Behavioral Development, 25(1), 1–16. doi:10.1037/bdb0000092
Johnson, D. A., Casella, S. E., McGee, H., & Lee, S. C. (2014). The use and validation of
pre-intervention diagnostic tools in Organizational Behavior Management. Journal of
Organizational Behavior Management, 34(2), 104–121. doi:10.1080/01608061.2014.914009
Kazdin, A. E. (1977). Assessing the clinical or applied importance of behavior change through
social validation. Behavior Modification, 1(4), 427–452. doi:10.1177/014544557714001
Kelley, M. E., LaRue, R. H., & Roane, H. S. (2014). Indirect behavioral assessments: Interviews
and rating scales. In W. W. Fisher, C. C. Piazza, & H. S. Roane (Eds.), Handbook of applied
behavior analysis (pp. 472–488). New York, NY: Guilford Press.
Lebbon, A., Austin, J., Rost, K., & Stanley, L. (2011). Improving safe consumer transfers in
a day treatment setting using training and feedback. Behavior Analysis in Practice, 4(2),
35–43. doi:10.1007/BF03391782
Lee, K., Shon, D., & Oah, S. (2014). The relative effects of global and specific feedback on safety
behaviors. Journal of Organizational Behavior Management, 34(1), 16–28. doi:10.1080/
01608061.2013.878264
Lennox, D. B., & Miltenberger, R. G. (1989). Conducting a functional assessment of problem
behavior in applied settings. Research and Practice for Persons with Severe Disabilities, 14(4),
304–311. doi:10.1177/154079698901400409
Loughrey, T. O., Marshall, G. K., Bellizzi, A., & Wilder, D. A. (2013). The use of video
modeling, prompting, and feedback to increase credit card promotion in a retail setting.
Journal of Organizational Behavior Management, 33(3), 200–208. doi:10.1080/
01608061.2013.815097
Mager, R. F., & Pipe, P. (1997). Analyzing performance problems. Atlanta, GA: Center for
Effective Performance Inc.
Martinez-Onstott, B., Wilder, D., & Sigurdsson, S. (2016). Identifying the variables contribut­
ing to at-risk performance: Initial evaluation of the Performance Diagnostic Checklist–
Safety (PDC-Safety). Journal of Organizational Behavior Management, 36(1), 80–93.
doi:10.1080/01608061.2016.1152209
Mattaini, M. A., & Thyer, B. A. (Eds.). (1996). Finding solutions to social problems: Behavioral
strategies for change. Washington, DC: American Psychological Association Press.
McGee, H. M., & Crowley-Koch, B. J. (2020). Types of organizational assessments and their
applications [Manuscript submitted for publication]. Department of Psychology, Western
Michigan University.
McSween, T. E. (2003). Values-based safety process: Improving your safety culture with beha­
vior-based safety. Hoboken, NJ: John Wiley & Sons.
McSween, T. E., Myers, W., & Kuchler, T. C. (1990). Getting buy-in at the executive level.
Journal of Organizational Behavior Management, 11(1), 207–221. doi:10.1300/J075v11n01_13
Merritt, T. A., DiGennaro Reed, F. D., & Martinez, C. E. (2019). Using the Performance
Diagnostic Checklist–Human Services to identify an indicated intervention to decrease
employee tardiness. Journal of Applied Behavior Analysis, 52(4), 1034–1048. doi:10.1002/
jaba.643
Nastasi, J., Simmons, D., & Gravina, N. (2020). Has OBM found its heart? An assessment of
procedural acceptability trends in the Journal of Organizational Behavior Management.
Manuscript submitted for publication.
Olson, R., & Austin, J. (2001). Behavior-based safety and working alone: The effects of a
self-monitoring package on the safe performance of bus operators. Journal of
Organizational Behavior Management, 21(3), 5–43. doi:10.1300/J075v21n03_02
Pampino, R. N., Jr., Heering, P. W., Wilder, D. A., Barton, C. G., & Burson, L. M. (2004). The
use of the performance diagnostic checklist to guide intervention selection in an indepen­
dently owned coffee shop. Journal of Organizational Behavior Management, 23(2–3), 5–19.
doi:10.1300/J075v23n02_02
Pampino, R. N., Wilder, D. A., & Binder, C. (2005). The use of functional assessment and
frequency building procedures to increase product knowledge and data entry skills among
foremen in a construction organization. Journal of Organizational Behavior Management, 25
(2), 1–36. doi:10.1300/J075v25n02_01
Parsons, M. B. (1998). A review of procedural acceptability in organizational behavior
management. Journal of Organizational Behavior Management, 18(2–3), 173–190.
doi:10.1300/J075v18n02_09
Reid, D. H., & Parsons, M. B. (1996). A comparison of staff acceptability of immediate versus
delayed verbal feedback in staff training. Journal of Organizational Behavior Management,
16(2), 35–47. doi:10.1300/j075v16n02_03
Rodriguez, M., Wilder, D. A., Therrien, K., Wine, B., Miranti, R., Daratany, K., . . .
Rodriguez, M. (2006). Use of the performance diagnostic checklist to select an intervention
designed to increase the offering of promotional stamps at two sites of a restaurant franchise.
Journal of Organizational Behavior Management, 25(3), 17–35. doi:10.1300/J075v25n03_02
Rohn, D., Austin, J., & Lutrey, S. (2002). Decreasing cash shortages using verbal and graphic
feedback. Journal of Organizational Behavior Management, 22(1), 33–46. doi:10.1300/
J075v22n01_03
Roscoe, E. M., Iwata, B. A., & Kahng, S. (1999). Relative versus absolute reinforcement effects:
Implications for preference assessments. Journal of Applied Behavior Analysis, 32(4),
479–493. doi:10.1901/jaba.1999.32-479
Rummler, G., & Brache, A. (1995). Improving performance: How to manage the white space on
the organizational chart. San Francisco, CA: Jossey-Bass.
Schwartz, I. S., & Baer, D. M. (1991). Social validity assessments: Is current practice state of the
art? Journal of Applied Behavior Analysis, 24(2), 189–204. doi:10.1901/jaba.1991.24-189
Shier, L., Rae, C., & Austin, J. (2003). Using task clarification, checklists and performance
feedback to increase tasks contributing to the appearance of a grocery store. Performance
Improvement Quarterly, 16(2), 26–40. doi:10.1111/j.1937-8327.2003.tb00277.x
Sigurdsson, S. O., & Austin, J. (2006). Institutionalization and response maintenance in
organizational behavior management. Journal of Organizational Behavior Management, 26
(4), 41–77. doi:10.1300/J075v26n04_03
Sigurdsson, S. O., & Ring, B. M. (2013). Evaluating preference for graphic feedback on correct
versus incorrect performance. Journal of Organizational Behavior Management, 33(2),
128–136. doi:10.1080/01608061.2013.785889
Simonian, M. J., Brand, D., Mason, M. A., Heinicke, M., & Luoma, S. M. (2020). A systematic
review of research evaluating the use of preference assessment methodology in the
workplace. Journal of Organizational Behavior Management, 1–19. doi:10.1080/
01608061.2020.1819933
Smith, M., & Wilder, D. A. (2018). The use of the performance diagnostic checklist-human
services to assess and improve the job performance of individuals with intellectual
disabilities. Behavior Analysis in Practice, 11(2), 148–153. doi:10.1007/s40617-018-0213-4
Sulzer-Azaroff, B., & Fellner, D. (1984). Searching for performance targets in the behavior
analysis of occupational safety: An assessment strategy. Journal of Organizational Behavior
Management, 6(2), 53–65. doi:10.1300/J075v06n02_09
Therrien, K., Wilder, D. A., Rodriguez, M., & Wine, B. (2005). Preintervention analysis and
improvement of customer greeting in a restaurant. Journal of Applied Behavior Analysis, 38
(3), 411–415. doi:10.1901/jaba.2005.89-04
Thompson, R. H., & Borrero, J. C. (2014). Direct observation. In W. W. Fisher, C. C. Piazza, &
H. S. Roane (Eds.), Handbook of applied behavior analysis (pp. 472–488). New York, NY:
Guilford Press.
Umbreit, J. (1996). Functional analysis of disruptive behavior in an inclusive classroom.
Journal of Early Intervention, 20(10), 18–29. doi:10.1177/105381519602000104
Wacker, D. P., Berg, W. K., & Harding, J. W. (2014). Functional and structural approaches to
behavioral assessment of problem behavior. In W. W. Fisher, C. C. Piazza, & H. S. Roane
(Eds.), Handbook of applied behavior analysis (pp. 472–488). New York, NY: Guilford Press.
Waldvogel, J. M., & Dixon, M. R. (2008). Exploring the utility of preference assessments in
organizational behavior management. Journal of Organizational Behavior Management, 28
(1), 76–87. doi:10.1080/01608060802006831
Wilder, D. A., Cymbal, D., & Villacorta, J. (2020). The performance diagnostic checklist-
human services: A brief review. Journal of Applied Behavior Analysis, 53(2), 1170–1176.
doi:10.1002/jaba.676
Wilder, D. A., Harris, C., Casella, S., Wine, B., & Postma, N. (2011). Further evaluation of the
accuracy of managerial prediction of employee preference. Journal of Organizational
Behavior Management, 31(2), 130–139. doi:10.1080/01608061.2011.569202
Wilder, D. A., Lipschultz, J., Gehrman, C., Ertel, H., & Hodges, A. (2019). A preliminary
assessment of the reliability and validity of the performance diagnostic checklist-human
services. Journal of Organizational Behavior Management, 39(3–4), 194–212. doi:10.1080/
01608061.2019.1666772
Wilder, D. A., Lipschultz, J. L., King, A., Driscoll, S., & Sigurdsson, S. (2018). An analysis of the
commonality and type of preintervention assessment procedures in the Journal of Organiza­
tional Behavior Management (2000–2015). Journal of Organizational Behavior Management,
38(1), 5–17. doi:10.1080/01608061.2017.1325822
Wilder, D. A., Rost, K., & McMahon, M. (2007). The accuracy of managerial prediction of
employee preference: A brief report. Journal of Organizational Behavior Management, 27(2),
1–14. doi:10.1300/J075v27n02_01
Windsor, J., Piché, L. M., & Locke, P. A. (1994). Preference testing: A comparison of two
presentation methods. Research in Developmental Disabilities, 15(6), 439–455. doi:10.1016/
0891-4222(94)90028-0
Wine, B., Gilroy, S., & Hantula, D. A. (2012). Temporal (in)stability of employee preferences
for rewards. Journal of Organizational Behavior Management, 32(1), 58–64. doi:10.1080/
01608061.2012.646854
Wine, B., Kelley, D. P., III, & Wilder, D. A. (2014). An initial assessment of effective preference
assessment intervals among employees. Journal of Organizational Behavior Management, 34
(3), 188–195. doi:10.1080/01608061.2014.944747
Wine, B., Reis, M., & Hantula, D. A. (2014). An evaluation of stimulus preference assessment
methodology in organizational behavior management. Journal of Organizational Behavior
Management, 34(1), 7–15. doi:10.1080/01608061.2013.873379
Wolf, M. M. (1978). Social validity: The case for subjective measurement or how applied
behavior analysis is finding its heart. Journal of Applied Behavior Analysis, 11(2), 203–214.
doi:10.1901/jaba.1978.11-203
