To cite this article: Nicole Gravina, Jessica Nastasi & John Austin (2021): Assessment
of Employee Performance, Journal of Organizational Behavior Management, DOI:
10.1080/01608061.2020.1869136
ABSTRACT

Assessments are commonly used in organizational behavior management (OBM) to identify performance targets, determine environmental variables contributing to poor performance, and devise appropriate interventions. This paper describes the role of assessment at the individual performer level in OBM and the assessment process. It also reviews four common types of OBM assessments: historical assessments, functional assessments, preference assessments, and procedural acceptability, and discusses the research support, weaknesses, and opportunities for future research for each. Finally, we conclude with recommendations for the future of assessment in OBM, including incorporating technology, using ongoing question-asking to informally assess performance and the environment, developing and validating survey instruments and other assessment tools, and attending to cultural variables in assessments.

KEYWORDS: Assessment; performance diagnostic checklist; procedural acceptability; informant assessment; descriptive assessment
CONTACT Nicole Gravina [email protected] School of Psychology, 945 Center Dr., Gainesville, Florida 32611.
Supplemental data for this article can be accessed on the publisher’s website.
© 2021 Taylor & Francis
We will describe the role and process of performer-level assessments, review four common types of assessments (historical, functional, preference, and procedural acceptability), including recent developments, suggest future research in each area, and then look forward to the next 20 years of research and practice in OBM performance assessment.
Assessment process
The assessment process usually includes three stages: pre-assessment, assessment, and intervention planning, aptly described by Cunningham and Geller (2012). Assessments aimed at one well-defined and easily observed performance target (e.g., cleaning at the end of shift) will quickly move through this process, but larger-scale assessments (e.g., identifying safety behaviors that will lead to a reduction in injuries) will require more time in each stage. Following this process will help researchers and practitioners select the best assessments, identify the appropriate stakeholders, and reap the most value from the assessment.
Pre-assessment
There are several choices for practitioners and stakeholders to make
during pre-assessment planning. For example, the practitioner must select the
appropriate assessments, decide how they will be administered, and identify
resources needed. We will describe some of the assessments available in the
next section. An assessment can be administered by record review, individual
or group interviews, observation, survey, or a combination of these methods.
How the assessment is administered will depend, in part, on who will contribute to the
assessment and what fits best with the chosen assessment and the employees’
jobs. For example, some people may not have a dedicated work computer, and
therefore, they may be more likely to respond to a survey administered using
paper during a meeting. Some employees may also prefer anonymity.
Resources required might include documents or data already available, access
to the work areas for observation, employee time, access to employee e-mail
addresses, access to scheduled meetings, and a space to work. The goal
of the planning stage is to use forethought to design an assessment plan to
gather useful information as efficiently as possible. An inefficient assessment
process will waste valuable time and resources and delay the start of an
intervention. However, proceeding without an assessment could be more
costly if an intervention does not produce the desired results.
Assessment
During the assessment phase, the practitioner uses the assessments to gather
information. Consider planning the assessment in a way that leads to optimal
information gathering. For example, administering a survey before an interview means that the results can help guide interview questions. Anonymous surveys may also yield different information than group or individual interviews because they gather honest input with less fear of repercussions.
Practitioners may also want to create a plan to keep the information collected
organized so that it is easy to locate relevant information later.
Although the scope should be identified prior to the assessment, it is wise to
allow some flexibility in the process so that the practitioner can gather as much
relevant information as possible. For example, when using a structured interview like the Performance Diagnostic Checklist (PDC; Austin, 2000), which
will be described later, practitioners can ask follow-up questions to clarify
responses and gather more nuanced information. Suppose that during the PDC interview the client responds that the supervisor is not present during task completion (question 5). The practitioner can then directly observe task completion to confirm the response and ask follow-up questions to determine whether supervisor presence would improve performance. A survey could include
open-ended questions so that employees can provide information not stated in
the survey, allowing practitioners to learn more about the performance issue
and organization.
Intervention planning
After the assessment is complete, the findings can be used to select appropriate
interventions. Although describing sample interventions is beyond this paper's scope, we would like to offer a few suggestions to consider
during intervention planning. First, interventions that are most likely to be
implemented effectively and consistently should be selected. A well-selected
intervention is not useful if it is not implemented. During the assessment
process, practitioners have learned about the organization, the intervention
targets that might work best for them, and barriers to implementation.
Therefore, they can design an intervention that fits the client’s needs and
environment. In many OBM studies that employed assessments, researchers
started with one intervention component and then added components as
needed (e.g., Cruz et al., 2019). This approach allows the practitioner to use
the least intrusive intervention necessary to produce the desired results and
shape the organizational behaviors required to maintain the solution. It also
provides evidence for the organization that all intervention components
included must be maintained to sustain the improvements.
Now that we have described the assessment process, we will discuss assessments that researchers and practitioners can use to learn more about performance issues. Each of these assessments has advantages and disadvantages and is suitable in different contexts, as determined during the pre-assessment phase.
Historical assessment
Many organizations already measure relevant behaviors or correlative outcomes (e.g., sales, absenteeism, turnover, product rejects, reported injuries) before the consultation. In some cases, those data may be used to identify performance targets or inform the development of intervention procedures (Bumstead & Boyce, 2005). This method is sometimes referred to as a historical assessment and is similar to a records review, typical in clinical behavior analysis. Historical assessments are one of the most common
assessment methods utilized in OBM, perhaps due to the low effort and cost
required compared to other methods (Wilder, Lipschultz, King, Driscoll, &
Sigurdsson, 2018). Historical assessments have been used in a variety of settings, including manufacturing, retail, human services, sales, public transportation, food services, and construction (Fante, Gravina, & Austin, 2007; Hermann, Ibarra, & Hopkins, 2010; Lebbon, Austin, Rost, & Stanley, 2011; Lee, Shon, & Oah, 2014; Olson & Austin, 2001).
Historical assessments can provide vital information to help select intervention targets and identify conditions under which behaviors might be more or less likely to occur; they are used most often in behavioral safety (Wilder et al., 2018). Historical assessments may be particularly amenable to behavioral
safety because industrial organizations must collect data to comply with the
Occupational Safety and Health Administration (OSHA) requirements.
Therefore, measures such as recordable injuries, compensation claims, and
lost time injuries may be available across several years. Furthermore, historical
assessments are considered a best-practice assessment method for instituting
behavior-based safety processes (McSween, 2003). In human service settings, researchers and practitioners can use historical data to identify which procedures or programs are consistently followed, billing trends, trends in absences and turnover, arrangements that lead to the best client outcomes, and potential monetary savings from addressing the performance issue. The utility of historical assessments may hinge on the accuracy and reliability of data collected prior to intervention; thus, researchers and practitioners should inquire further to evaluate the quality of data to be used for historical assessments.
Although historical assessments can be a useful starting point to narrow the focus onto the most critical performance targets, they are typically combined with other assessment procedures (e.g., direct observation, interviews) to inform intervention selection. For example, a historical assessment might find that injuries to the hand comprise most of the injuries over the past five years at a company, but the most appropriate solution might involve supervisors requiring employees to wear gloves, monitoring the behavior, and praising it when it occurs. In this case, as in many cases, the intervention targets differ from the controlling variables, and different types of assessment may be required to understand each. Thus, a functional assessment may be needed to devise an effective solution.
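To make the record-review arithmetic concrete, a minimal sketch follows of how injury records might be tallied by body part and year; the data frame and column names are hypothetical stand-ins for whatever an organization's injury logs actually contain, not a prescribed format.

```python
import pandas as pd

# Hypothetical injury log; a real historical assessment would pull these
# fields from the organization's OSHA-required records.
injuries = pd.DataFrame({
    "year":      [2019, 2019, 2020, 2020, 2020, 2021, 2021, 2021],
    "body_part": ["hand", "back", "hand", "hand", "eye", "hand", "back", "hand"],
})

# Which body part accounts for most injuries? A dominant category (here,
# hand injuries) suggests a performance target such as glove use.
print(injuries["body_part"].value_counts())

# Are injuries trending up or down across years?
print(injuries.groupby("year").size())
```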
Performance analysis
In 1999, Austin et al. lamented that OBM had not kept pace with other areas of behavior analysis in developing functional assessments to improve the selection of effective interventions. Among the reasons they identified for this omission was that OBM interventions appear to be effective without assessment procedures, even though previous research in clinical behavior analysis indicates that assessment-informed interventions tend to be more effective. Researchers have since examined and expanded upon functional assessment in OBM and incorporated other functional assessment methods into research and practice. OBM practitioners and researchers have leaned on behavior-analytic knowledge to further develop assessment methods for organizations. Behavior analysis typically utilizes three types of assessments: indirect assessment, direct assessment, and experimental analysis (Kelley, LaRue, & Roane, 2014), and many OBM assessments use more than one of these methods in concert (Wilder et al., 2018).
Indirect assessments
An indirect assessment involves gathering information to understand variables impacting a performance issue without directly observing those behaviors. Practitioners often use indirect assessments such as surveys, rating scales, and interviews because they are easy and quick to administer, require minimal training, and enable input from various sources. Below, we describe two common indirect assessment methods in OBM: the PDC and its variations and the ABC Analysis.
Whereas the four domains of the PDC (i.e., antecedents and information,
equipment and processes, knowledge and skills, and consequences) apply to
performance in various settings, the original 20-item checklist is not always
specific enough to identify contingencies operating in certain domains.
Therefore, more precise iterations of the PDC have been developed and
applied, including the PDC for human services (PDC-HS; Carr et al., 2013),
the PDC for occupational safety (PDC-S; Martinez-Onstott, Wilder, &
Sigurdsson, 2016), and the PDC for parents (PDC-P; Hodges, Villacorta,
Wilder, Ertel, & Luong, 2020).
The PDC-HS was developed to assess the performance of employees responsible for providing direct care to other individuals (Carr et al., 2013). Carr et al. (2013) posed a few unique considerations for the performance of employees in human service settings, including inadequate treatment integrity, inaccurate data collection, deficits in program development, issues with attendance or tardiness, insufficient reporting, and poor graph construction. The authors administered the PDC in an autism treatment center providing early intervention services, then revised it to reflect conditions specific to human service organizations and to include sections for scoring and corresponding intervention recommendations. Modifications included updating the domain titles to a) training, b) task clarification and prompting, c) resources, materials, and processes, and d) performance consequences, effort, and competition. Next, 11 behavior analysts were asked to pilot and assess the PDC-HS, and revisions were made accordingly. Finally, the predictive validity and utility of the final version of the PDC-HS were assessed by comparing the use of indicated and non-indicated interventions as identified by the PDC-HS. Results showed that performance improvements were greater after implementing the PDC-HS-indicated intervention compared to a non-indicated intervention, suggesting that the PDC-HS may be a valuable tool for identifying performance deficits and subsequent intervention recommendations in a human service setting (Carr et al., 2013).
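The published PDC-HS contains its own items, scoring sections, and intervention mappings; the sketch below is only a schematic of the general logic, tallying endorsed items per domain and flagging the domain with the most endorsements. The item counts and responses are invented for illustration.

```python
# Schematic tally of PDC-HS-style interview responses by domain.
# Domain titles follow Carr et al. (2013); the item counts and True/False
# responses are invented, not the instrument's actual items or scoring rules.
responses = {
    "Training": [False, False, True, False],
    "Task clarification and prompting": [False, True, False, False],
    "Resources, materials, and processes": [False, False, False, False],
    "Performance consequences, effort, and competition": [True, True, True, False],
}

# Each endorsed (True) item flags a potential barrier within that domain.
scores = {domain: sum(items) for domain, items in responses.items()}
indicated = max(scores, key=scores.get)

for domain, score in scores.items():
    print(f"{domain}: {score}/{len(responses[domain])} items endorsed")
print(f"Domain indicated for intervention: {indicated}")
```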
Since its publication, the PDC-HS has been utilized in a variety of settings, including schools (Bowe & Sellers, 2018; Merritt, DiGennaro Reed, & Martinez, 2019) and retail stores (e.g., Loughrey, Marshall, Bellizzi, & Wilder, 2013; Smith & Wilder, 2018), and it has received further evaluation in autism treatment clinics (Ditzian, Wilder, King, & Tanz, 2015; Wilder et al., 2018). A review conducted by Wilder, Cymbal, and Villacorta (2020) indicated that the performance consequences, effort, and competition domain was endorsed most often across settings. Future research may bolster support for using the PDC-HS by evaluating the tool compared to other assessment methodologies and assessing its utility in additional human service settings (e.g., residential facilities, clinics for the treatment of substance use, crisis intervention services, geriatric facilities).
Cymbal, Wilder, Thomas, and Ertel (2020) evaluated whether Master's, Bachelor's, and Associate's degree (or high school diploma) level practitioners trained in behavior analysis could use the PDC-HS to
level practitioners trained in behavior analysis could use the PDC-HS to
accurately identify domains responsible for a performance problem described
in three vignettes. The results indicated that Master’s and Bachelor’s level
practitioners were slightly better at accurately identifying the correct domains
for the performance problem than Associate’s level practitioners, but the
difference was small (~5–6%). Researchers should also compare the PDC
against a more informal interview across novice practitioners and OBM
experts.
Future studies should evaluate whether the type of individual interviewed (e.g., manager or performer, high performer or low performer) differentially impacts the information gathered and the selection of appropriate intervention procedures. For example, low performers may find it more difficult to describe the barriers to performance, while high performers may have identified barriers and employed workarounds to produce good results. Researchers could also further refine the PDC by including a rating of importance for each item. Finally, future iterations of the PDC could incorporate culturally responsive questions into the tool to guide users to be culturally sensitive when asking the questions and selecting interventions (see Appendix A for an updated PDC with many of these considerations embedded).
ABC analysis
An ABC Analysis is an assessment in which practitioners identify antecedents and consequences that support and discourage desired and undesired performances (Connellan, 1978; Daniels & Bailey, 2014). ABC Analyses are typically constructed based on information known about the performance concerns, but interviews could provide additional information. ABC Analyses appear to be common in business because they are easy to teach, help leaders understand variables that may contribute to poor performance, and can be applied to many situations. However, the ABC Analysis findings may make intervention development less intuitive than the PDC since the ABC Analysis is framed in terms of antecedents and consequences rather than specific solutions such as feedback, training, equipment, or praise. Despite the seemingly common application of ABC Analyses in organizations, limited research demonstrates their utility for selecting interventions. Researchers could examine whether insight gleaned from ABC Analyses improves intervention selection by practitioners with various experience levels.
Despite being easy and quick to administer, previous clinical research on indirect assessments indicates that their use alone may be insufficient because they can yield inaccurate or incorrect information compared to direct assessments (Fisher, Piazza, Bowman, & Amari, 1996; Lennox & Miltenberger, 1989; Umbreit, 1996). Although there are concerns about indirect assessments within behavior analysis, OBM research has repeatedly demonstrated the utility of tools such as the PDC for selecting effective interventions.
Direct assessments
Direct assessments involve the direct observation and recording of behavior without manipulating the environment. Direct assessments yield descriptive data on behavior and the conditions under which it is most and least likely to occur and are more rigorous than indirect assessments. Direct assessments are usually informed by indirect assessments conducted prior to the direct assessment (Thompson & Borrero, 2014). In OBM, direct assessments may involve observing high and low performers to compare differences in how they work and monitoring work performance under naturally occurring conditions (e.g., in the presence and absence of customers or the supervisor). Data can be collected using data sheets or A-B-C or narrative recording (i.e., recording the antecedents, behaviors, and consequences). Sometimes, data are analyzed visually (e.g., scatterplot), statistically (e.g., correlations), or using a probability analysis or lag sequential analysis (e.g., calculating the probability that a specific consequence follows a specific behavior).
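The probability analysis mentioned above reduces to a simple computation: compare how often a consequence follows the target behavior against how often it occurs overall. The sketch below illustrates one way this might be coded from A-B-C records; the record format and event labels are hypothetical, not a standardized OBM coding scheme.

```python
# Minimal probability analysis over coded A-B-C observation records.
# Each record pairs an observed behavior with the consequence that followed.
records = [
    {"behavior": "greets_customer", "consequence": "manager_praise"},
    {"behavior": "greets_customer", "consequence": "manager_praise"},
    {"behavior": "greets_customer", "consequence": "none"},
    {"behavior": "greets_customer", "consequence": "manager_praise"},
    {"behavior": "stocks_shelf",    "consequence": "none"},
    {"behavior": "stocks_shelf",    "consequence": "none"},
]

def conditional_probability(records, behavior, consequence):
    """Share of the behavior's occurrences that were followed by the consequence."""
    matching = [r for r in records if r["behavior"] == behavior]
    followed = sum(1 for r in matching if r["consequence"] == consequence)
    return followed / len(matching) if matching else 0.0

def base_rate(records, consequence):
    """Unconditional probability of the consequence across all records."""
    return sum(1 for r in records if r["consequence"] == consequence) / len(records)

p_cond = conditional_probability(records, "greets_customer", "manager_praise")
p_base = base_rate(records, "manager_praise")

# A conditional probability well above the base rate (here 0.75 vs. 0.50)
# suggests praise may be contingent on greeting, a hypothesis an
# experimental analysis could then test.
print(f"P(praise | greeting) = {p_cond:.2f}; base rate = {p_base:.2f}")
```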
Narrative recording
Narrative recording entails observing and recording antecedents, behaviors,
and consequences as they occur in the natural environment. A book chapter
published in 1982 described a descriptive assessment procedure designed to
identify effective sales behaviors (Crawley, Adler, O’Brien, & Duffy, 1982). The
researchers followed top-performing salespeople and low-performing salespeople and took detailed data on their behaviors and subsequent sales engagements. When they interviewed top salespeople and asked why they were effective at selling (indirect assessment), they did not gather much useful information. However, the direct observations yielded information about the behaviors top sellers engaged in, from which the researchers created a checklist. Later, the researchers taught low-performing salespeople to follow the checklist, and their sales increased dramatically. When the checklist and training were implemented organization-wide, they saw a dramatic increase in sales.
Direct observation
OBM researchers collect data under various conditions during baseline to help
identify an intervention. For example, Fienup, Luiselli, Joy, Smyth, and Stein
(2013) collected data on the start and end times of consecutive meetings and
the transition time required between meetings. They found that meetings that
started and ended late affected punctuality at the next meeting. Fante et al.
(2010) noticed that the high variability in pharmacists' safe performance appeared to be due to the presence and absence of a makeshift wrist support. After a variable baseline phase, the researchers collected descriptive data on the presence of the improvised wrist support and found a strong correlation; when the support was present, wrist positioning was safer. These simple observations
led to powerful interventions that may not have been obvious without direct
observation.
Scatterplot
A scatterplot presents collected data visually so that patterns can be detected.
For example, Anbro et al. (2020) used a scatterplot in a study that employed virtual reality (VR) to assess a training procedure for improving communication and situational awareness among medical and nursing students. The VR technology recorded eye gaze, and observers recorded correct communication steps. A scatterplot revealed that the training improved communication but not looking at the patient. Scatterplots are useful for identifying temporal patterns or relationships between two variables. However, sometimes it can be difficult to detect patterns in responding using a scatterplot, particularly if the measures plotted are not presented in the appropriate unit of analysis. If no patterns emerge, another assessment procedure may be required.
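A minimal sketch of this kind of scatterplot appears below, plotting an invented performance measure against time of day so that a temporal pattern becomes visible; the measure, observation schedule, and values are all hypothetical.

```python
import random
import matplotlib.pyplot as plt

random.seed(1)

# Hypothetical data: percentage of closing tasks completed, observed three
# times per hour across a 9:00-20:00 workday, with performance drifting
# downward late in the shift.
hours = [h for h in range(9, 21) for _ in range(3)]
completion = [80 - 3 * max(0, h - 16) + random.uniform(-5, 5) for h in hours]

plt.scatter(hours, completion)
plt.xlabel("Hour of day")
plt.ylabel("Closing tasks completed (%)")
plt.title("Task completion by time of day")
plt.show()  # a late-shift drop would indicate when an intervention is needed
```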
Although direct assessments are more rigorous than indirect assessments,
there are some disadvantages worth mentioning. Because direct assessments
require direct observation, data collection, and analysis, they necessitate more
time and training to complete. A culturally competent assessment will require
even more training. Also, while direct assessments involve direct observations
of behavior and conditions, they do not demonstrate a functional relationship
because no environmental variables are manipulated. Therefore, they may
require a similar amount of time as an experimental analysis but yield less
informative results. Finally, it may be problematic to observe and take data on all the behaviors and conditions of concern to an organization. For example, collecting descriptive data on unsafe work behaviors could be problematic if they occur infrequently and unethical if they are not intervened upon immediately.
Experimental analysis
Experimental analyses are more rigorous than indirect and direct assessments,
and they can also yield more definitive information about causal variables and
lead to optimal interventions. In an experimental analysis, researchers or
practitioners systematically manipulate environmental variables and observe
responses in each condition (Wacker et al., 2014). The variables manipulated
are usually chosen based on the results of an indirect or direct assessment. For
example, if an observer notices that employees behave differently in the presence and absence of the supervisor, they could manipulate the supervisor's presence to see if a functional relationship emerges. By showing that the behavior
changes when the experimenter changes the environmental condition, we
become more confident that the environmental variable is responsible for
behavior changes. The more demonstrations of this relationship, the more
confident we can be in our conclusions. This process is similar to the elements
of prediction, verification, and replication, as seen in design methodology
(Cooper, Heron, & Heward, 2020; Erath et al., 2020). For practical reasons,
when conducted as part of an assessment, these manipulations usually occur in
short segments (e.g., 5 to 30 min), which enables the experimenter to identify
functional relationships quickly.
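In analytic terms, the comparison at the heart of a brief experimental analysis is simply performance summarized by manipulated condition. The sketch below assumes session-level percentage data loosely patterned on the Therrien et al. (2005) study discussed below; the condition names and numbers are invented.

```python
from statistics import mean

# Hypothetical percentage of customers greeted in each brief session,
# organized by the condition in effect during that session.
sessions = {
    "baseline":           [20, 25, 15, 30],
    "door_chime":         [70, 65, 75, 60],
    "manager_present":    [50, 55, 45, 60],
    "chime_plus_manager": [85, 90, 80, 88],
}

# Consistent separation between condition means across alternations is the
# descriptive signature of a functional relationship.
for condition, data in sessions.items():
    print(f"{condition:>20}: mean = {mean(data):.1f}%")
```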
Safety performance may be amenable to an experimental analysis if the
conditions tested do not put employees at prolonged or unnecessary risk.
For example, following the direct assessment of pharmacy employees by Fante et al. (2010) mentioned above, the researchers conducted an experimental manipulation. Because they had observed that pharmacy employees' posture appeared to be safer when the makeshift wrist support was in place, they manipulated the presence of the wrist support. They concluded that it was functionally related to safety performance. Low-risk, low-effort experimental analyses like those conducted by Fante et al. (2010) may be modified and applied in other contexts to assess the variables impacting performance and inform the selection of intervention procedures.
Experimental analyses may also involve manipulation of aspects of the physical environment, like sounds, light, and the presence of others. Therrien et al. (2005) manipulated a series of variables in an alternating fashion to determine which increased the likelihood of employees greeting customers at a sandwich shop. They found that having the radio on did not appear to influence performance; the door chime was most likely to encourage customer greetings, followed by the presence of a manager. The researchers then combined the door chime and manager presence and demonstrated substantial improvement in greetings over baseline conditions. Finally, the experimenters added feedback, which increased performance to 100% for the last two sessions. The experimental analysis employed by Therrien and colleagues illustrates how brief, systematic manipulations can identify the variables that influence performance and inform intervention design.
Preference assessments

Previous research indicates managers are poor at
predicting employee preferences, and in practice, they often request help in identifying reinforcers for their employees; thus, the use of more formal preference and reinforcer assessment methodology with employees may be warranted (Wilder, Harris, Casella, Wine, & Postma, 2011; Wilder, Rost, & McMahon, 2007). Also, employee preferences may change over time; thus, preference should be reassessed over extended periods of employment (Wine, Gilroy, & Hantula, 2012; Wine et al., 2014).
Waldvogel and Dixon (2008) compared the use of a ranked survey and a multiple stimulus without replacement (MSWO) preference assessment with four direct-care staff. They found that assessment ranks correlated across formats for three out of four employees, but no reinforcer assessment was conducted to determine whether preferred stimuli functioned as reinforcers. Wilder et al. (2007) compared the use of stimuli identified by a ranked survey to stimuli identified with a verbal choice format; a reinforcer assessment was then used to determine whether the stimuli identified functioned as reinforcers. Results indicated that the survey format was more accurate in identifying reinforcers than the verbal choice format. Wine, Reis, and Hantula (2014) compared a ranking procedure, a survey, and an MSWO and conducted a subsequent reinforcer assessment with three direct-care staff members. All preference assessment formats identified reinforcers, but social validity measures indicated that the MSWO was rated as more cumbersome and less preferred, and took more time, than the ranking and survey formats.
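Agreement between formats, as examined in these comparisons, can be summarized with a rank correlation. The sketch below computes Spearman's rho by hand for hypothetical ranks of five rewards under a survey and an MSWO; the items and ranks are invented.

```python
# Spearman rank correlation between two preference-assessment formats.
# Ranks (1 = most preferred) for five hypothetical rewards.
survey_rank = {"gift card": 1, "extra break": 2, "snacks": 3, "praise note": 4, "mug": 5}
mswo_rank   = {"gift card": 2, "extra break": 1, "snacks": 3, "praise note": 5, "mug": 4}

n = len(survey_rank)
d_squared = sum((survey_rank[item] - mswo_rank[item]) ** 2 for item in survey_rank)

# Spearman's rho for untied ranks: 1 - 6 * sum(d^2) / (n * (n^2 - 1))
rho = 1 - 6 * d_squared / (n * (n ** 2 - 1))

# High agreement (rho = 0.80 here) would support substituting the quicker,
# better-liked survey for the more cumbersome MSWO.
print(f"Spearman's rho = {rho:.2f}")
```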
Practitioners can also identify preferred job tasks and working arrangements to improve job assignments and identify strategies to make aversive tasks more palatable. For example, Green, Reid, Passante, and Canipe (2008) created an assessment tool they termed the Task Enjoyment Motivation Protocol (TEMP), which involved supervisors interviewing staff to identify which job tasks were least preferred and the aspects of those tasks that made them less preferred. Supervisors then attempted to remove undesirable properties of tasks reported as less preferred (e.g., eliminating interruptions while reviewing timesheets). One participant disliked conducting staff observations because staff appeared to dislike being observed. The researchers added a performance lottery so that the participant could deliver lottery tickets based on observations, resulting in staff rating being observed as more favorable on a rating scale. The researchers also attempted to increase desirable stimuli associated with tasks (e.g., providing snacks during paperwork). Results indicated that the tasks were rated and ranked higher after changes were made based on the TEMP assessment.
OBM researchers have also evaluated feedback preferences. For example,
Reid and Parsons (1996) demonstrated that staff in a clinical, residential
setting preferred immediate versus delayed feedback, and Sigurdsson and Ring (2013) evaluated staff preference for graphic feedback on correct versus incorrect performance.

Procedural acceptability
Interviews, questionnaires, rating scales, and choice procedures are the most common methods used to assess procedural acceptability in OBM (Nastasi et al., 2020). It would be valuable for researchers or practitioners to develop a validated protocol that treats consumer use or nonuse of behavioral interventions as the dependent variable for predicting acceptability and that can be used in practice to increase adoption of behavioral technology. Due to the subjective nature of these measures, the utility of procedural acceptability assessments may hinge on the conditions under which those assessments were employed (Schwartz & Baer, 1991). Therefore, researchers and practitioners should consider a few variables when conducting procedural acceptability assessments. First, procedural acceptability must be assessed using a representative sample of relevant consumers across an organization. Researchers and consultants should also consider how other variables, such as anonymity or the availability of results to an immediate supervisor, may alter consumer responding. Procedural acceptability should also be assessed at multiple time points across an intervention. The assessment results can then be used to alter the intervention or supplement intervention procedures as needed to maximize outcomes and maintenance. Researchers could use this information to examine how acceptability changes based on how the intervention is introduced and the changes in performance it produces over time. Finally, procedural acceptability could be used as a tool for improving cultural awareness during intervention development and adjusting interventions to be more culturally sensitive.
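One way to organize repeated acceptability ratings along the lines suggested above is a simple summary by time point and respondent group; the sketch below uses an invented 5-point scale, group labels, and values.

```python
import pandas as pd

# Hypothetical anonymous acceptability ratings (1-5) gathered at two time
# points from two respondent groups.
ratings = pd.DataFrame({
    "time_point": ["month_1"] * 6 + ["month_3"] * 6,
    "role": ["staff", "staff", "staff", "supervisor", "supervisor", "supervisor"] * 2,
    "rating": [4, 3, 4, 5, 4, 5, 3, 2, 3, 5, 5, 4],
})

# Mean rating by group and time point; a decline among staff between time
# points would flag an intervention component to revisit.
summary = ratings.groupby(["time_point", "role"])["rating"].mean().unstack()
print(summary)
```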
Although the benefits of assessing procedural acceptability are numerous, limited research exists on the use of procedural acceptability assessments in OBM. This observation is ironic, since virtually every OBM practitioner encounters challenges when encouraging clients to change their behavior and adopt behavioral recommendations and/or systems. Future research should evaluate the accuracy and reliability of results across different types of procedural acceptability measures. Procedural acceptability by customers or clients may also differ across components of an intervention; thus, further research is needed to determine which aspects of an intervention may be more or less acceptable to those who interact with employees exposed to interventions. Furthermore, researchers should assess the variables impacting the accuracy of subjective measures across organizational settings. Finally, researchers and practitioners should recognize that acceptability should be the bare minimum. Organizational leaders should ultimately strive to maximize intervention outcomes and improve job satisfaction among all members of an organization (Hantula, 2015).
In short, researchers and practitioners should learn about the people in the work environment and engage them in deciding on solutions before taking the most appropriate course of action. The last 20 years of OBM research and practice have seen an increase in the use of assessments.
The next 20 years should focus on expanding and refining them to improve
our impact on employees and organizations.
Disclosure statement
No potential conflict of interest was reported by the authors.
ORCID
Nicole Gravina http://orcid.org/0000-0001-8210-7159
References
Amigo, S., Smith, A., & Ludwig, T. (2008). Using task clarification, goal setting, and feedback to
decrease table busing times in a franchise pizza restaurant. Journal of Organizational
Behavior Management, 28(3), 176–187. doi:10.1080/01608060802251106
Anbro, S. J., Szarko, A. J., Houmanfar, R. A., Maraccini, A. M., Crosswell, L. H., Harris, F. C., . . . Starmer, L. (2020). Using virtual simulations to assess situational awareness and communication in medical and nursing education: A technical feasibility study. Journal of Organizational Behavior Management, 40(1–2), 1–11. doi:10.1080/01608061.2020.1746474
Austin, J. (1996). Organizational troubleshooting in expert management consultants and experienced managers [Unpublished doctoral dissertation]. Tallahassee, FL: Florida State University.
Austin, J. (2000). Performance analysis and performance diagnostics. In J. Austin & J. E. Carr
(Eds.), Handbook of Applied Behavior Analysis (pp. 321–350). Oakland, CA: Context Press.
Austin, J., Weatherly, N., & Gravina, N. (2005). Using task clarification, graphic feedback, and
verbal feedback to increase closing task completion in a privately owned restaurant. Journal
of Applied Behavior Analysis, 38(1), 117–121. doi:10.1901/jaba.2005.159-03
Bowe, M., & Sellers, T. P. (2018). Evaluating the performance diagnostic checklist-human
services to assess incorrect error-correction procedures by preschool paraprofessionals.
Journal of Applied Behavior Analysis, 51(1), 166–176. doi:10.1002/jaba.428
Brooks, A. W., & John, L. K. (2018, May–June). The surprising power of questions. Harvard Business Review, 60–67. https://hbr.org/2018/05/the-surprising-power-of-questions
Bumstead, A., & Boyce, T. E. (2005). Exploring the effects of cultural variables in the implementation of behavior-based safety in two organizations. Journal of Organizational Behavior Management, 24(4), 43–63. doi:10.1300/J075v24n04_03
Carr, J. E., Wilder, D. A., Majdalany, L., Mathisen, D., & Strain, L. A. (2013). An
assessment-based solution to a human-service employee performance problem. Behavior
Analysis in Practice, 6(1), 16–32. doi:10.1007/bf03391789
Connellan, T. K. (1978). How to improve human performance. New York, NY: Harper and Row.
Cooper, J. O., Heron, T. E., & Heward, W. L. (2020). Applied behavior analysis (3rd ed.). Upper
Saddle River, NJ: Pearson.
Crawley, W. J., Adler, B. S., O’Brien, R. M., & Duffy, E. M. (1982). Making salesmen: Behavioral
assessment and intervention. In Industrial behavior modification: A management handbook
(pp. 184–199). Oxford, United Kingdom: Pergamon Press.
Cruz, N. J., Wilder, D. A., Phillabaum, C., Thomas, R., Cusick, M., & Gravina, N. (2019). Further evaluation of the performance diagnostic checklist-safety (PDC-Safety). Journal of Organizational Behavior Management, 39(3–4), 266–279. doi:10.1080/01608061.2019.1666777
Cunningham, T. R., & Geller, E. S. (2012). A comprehensive approach to identifying intervention targets for patient-safety improvement in a hospital setting. Journal of Organizational Behavior Management, 32(3), 194–220. doi:10.1080/01608061.2012.698114
Cymbal, D., Wilder, D. A., Thomas, R., & Ertel, H. (2020). Further evaluation of the validity
and reliability of the performance diagnostic checklist-human services. Journal of
Organizational Behavior Management, 1–9. doi:10.1080/01608061.2020.1792027
Daniels, A. C., & Bailey, J. S. (2014). Performance management: Changing behavior that drives organizational effectiveness (5th ed.). Atlanta, GA: Aubrey Daniels International, Inc.
DeLeon, I. G., & Iwata, B. A. (1996). Evaluation of a multiple-stimulus presentation format for
assessing reinforcer preferences. Journal of Applied Behavior Analysis, 29(4), 519–533.
doi:10.1901/jaba.1996.29-519
Ditzian, K., Wilder, D. A., King, A., & Tanz, J. (2015). An evaluation of the performance
diagnostic checklist–human services to assess an employee performance problem in
a center-based autism treatment facility. Journal of Applied Behavior Analysis, 48(1),
199–203. doi:10.1002/jaba.171
Doll, J., Livesey, J., McHaffie, E., & Ludwig, T. D. (2007). Keeping an uphill edge: Managing
cleaning behaviors at a ski shop. Journal of Organizational Behavior Management, 27(3),
41–60. doi:10.1300/j075v27n03_04
Eikenhout, N., & Austin, J. (2005). Using goals, feedback, reinforcement, and a performance
matrix to improve customer service in a large department store. Journal of Organizational
Behavior Management, 24(3), 27–62. doi:10.1300/J075v24n03_02
Erath, T. G., Pellegrino, A. J., DiGennaro Reed, F. D., Ruby, S. A., Blackman, A. L., & Novak, M. D. (2020). Experimental research methodologies in organizational behavior management [Manuscript submitted for publication]. Department of Applied Behavior Science, University of Kansas.
Fante, R., Gravina, N., & Austin, J. (2007). A brief pre-intervention analysis and demonstration
of the effects of a behavioral safety package on postural behaviors of pharmacy employees.
Journal of Organizational Behavior Management, 27(2), 15–25. doi:10.1300/J075v27n02_02
Fante, R., Gravina, N., Betz, A., & Austin, J. (2010). Structural and treatment analyses of safe
and at-risk behaviors and postures performed by pharmacy employees. Journal of
Organizational Behavior Management, 30(4), 325–338. doi:10.1080/01608061.2010.520143
Fienup, D. M., Luiselli, J. K., Joy, M., Smyth, D., & Stein, R. (2013). Functional assessment and
intervention for organizational behavior change: Improving the timeliness of staff meetings
at a human services organization. Journal of Organizational Behavior Management, 33(4),
252–264. doi:10.1080/01608061.2013.843435
Fisher, W. W., Piazza, C. C., Bowman, L. G., & Amari, A. (1996). Integrating caregiver report
with a systematic choice assessment to enhance reinforcer identification. American Journal
on Mental Retardation. https://psycnet.apa.org/record/1996-01619-002
Fisher, W. W., Piazza, C. C., Bowman, L. G., Hagopian, L. P., Owens, J. C., & Slevin, I. (1992).
A comparison of two approaches for identifying reinforcers for persons with severe and
profound disabilities. Journal of Applied Behavior Analysis, 25(2), 491–498. doi:10.1901/
jaba.1992.25-491
Mattaini, M. A., & Thyer, B. A. (Eds.). (1996). Finding solutions to social problems: Behavioral strategies for change. Washington, DC: American Psychological Association.
McGee, H. M., & Crowley-Koch, B. J. (2020). Types of organizational assessments and their
applications [Manuscript submitted for publication]. Department of Psychology, Western
Michigan University.
McSween, T. E. (2003). Values-based safety process: Improving your safety culture with behavior-based safety. Hoboken, NJ: John Wiley & Sons.
McSween, T. E., Myers, W., & Kuchler, T. C. (1990). Getting buy-in at the executive level. Journal of Organizational Behavior Management, 11(1), 207–221. doi:10.1300/J075v11n01_13
Merritt, T. A., DiGennaro Reed, F. D., & Martinez, C. E. (2019). Using the Performance
Diagnostic Checklist–Human Services to identify an indicated intervention to decrease
employee tardiness. Journal of Applied Behavior Analysis, 52(4), 1034–1048. doi:10.1002/
jaba.643
Nastasi, J., Simmons, D., & Gravina, N. (2020). Has OBM found its heart? An assessment of procedural acceptability trends in the Journal of Organizational Behavior Management. Manuscript submitted for publication.
Olson, R., & Austin, J. (2001). Behavior-based safety and working alone: The effects of a
self-monitoring package on the safe performance of bus operators. Journal of
Organizational Behavior Management, 21(3), 5–43. doi:10.1300/J075v21n03_02
Pampino, R. N., Jr., Heering, P. W., Wilder, D. A., Barton, C. G., & Burson, L. M. (2004). The use of the performance diagnostic checklist to guide intervention selection in an independently owned coffee shop. Journal of Organizational Behavior Management, 23(2–3), 5–19. doi:10.1300/J075v23n02_02
Pampino, R. N., Wilder, D. A., & Binder, C. (2005). The use of functional assessment and
frequency building procedures to increase product knowledge and data entry skills among
foremen in a construction organization. Journal of Organizational Behavior Management, 25
(2), 1–36. doi:10.1300/J075v25n02_01
Parsons, M. B. (1998). A review of procedural acceptability in organizational behavior
management. Journal of Organizational Behavior Management, 18(2–3), 173–190.
doi:10.1300/J075v18n02_09
Reid, D. H., & Parsons, M. B. (1996). A comparison of staff acceptability of immediate versus
delayed verbal feedback in staff training. Journal of Organizational Behavior Management,
16(2), 35–47. doi:10.1300/j075v16n02_03
Rodriguez, M., Wilder, D. A., Therrien, K., Wine, B., Miranti, R., Daratany, K., . . .
Rodriguez, M. (2006). Use of the performance diagnostic checklist to select an intervention
designed to increase the offering of promotional stamps at two sites of a restaurant franchise.
Journal of Organizational Behavior Management, 25(3), 17–35. doi:10.1300/J075v25n03_02
Rohn, D., Austin, J., & Lutrey, S. (2002). Decreasing cash shortages using verbal and graphic
feedback. Journal of Organizational Behavior Management, 22(1), 33–46. doi:10.1300/
J075v22n01_03
Roscoe, E. M., Iwata, B. A., & Kahng, S. (1999). Relative versus absolute reinforcement effects:
Implications for preference assessments. Journal of Applied Behavior Analysis, 32(4),
479–493. doi:10.1901/jaba.1999.32-479
Rummler, G., & Brache, A. (1995). Improving performance: How to manage the white space on the organizational chart. San Francisco, CA: Jossey-Bass.
Schwartz, I. S., & Baer, D. M. (1991). Social validity assessments: Is current practice state of the
art? Journal of Applied Behavior Analysis, 24(2), 189–204. doi:10.1901/jaba.1991.24-189
Shier, L., Rae, C., & Austin, J. (2003). Using task clarification, checklists and performance
feedback to increase tasks contributing to the appearance of a grocery store. Performance
Improvement Quarterly, 16(2), 26–40. doi:10.1111/j.1937-8327.2003.tb00277.x
Sigurdsson, S. O., & Austin, J. (2006). Institutionalization and response maintenance in
organizational behavior management. Journal of Organizational Behavior Management, 26
(4), 41–77. doi:10.1300/J075v26n04_03
Sigurdsson, S. O., & Ring, B. M. (2013). Evaluating preference for graphic feedback on correct
versus incorrect performance. Journal of Organizational Behavior Management, 33(2),
128–136. doi:10.1080/01608061.2013.785889
Simonian, M. J., Brand, D., Mason, M. A., Heinicke, M., & Luoma, S. M. (2020). A systematic review of research evaluating the use of preference assessment methodology in the workplace. Journal of Organizational Behavior Management, 1–19. doi:10.1080/01608061.2020.1819933
Smith, M., & Wilder, D. A. (2018). The use of the performance diagnostic checklist-human services to assess and improve the job performance of individuals with intellectual disabilities. Behavior Analysis in Practice, 11(2), 148–153. doi:10.1007/s40617-018-0213-4
Sulzer-Azaroff, B., & Fellner, D. (1984). Searching for performance targets in the behavior
analysis of occupational safety: An assessment strategy. Journal of Organizational Behavior
Management, 6(2), 53–65. doi:10.1300/J075v06n02_09
Therrien, K., Wilder, D. A., Rodriguez, M., & Wine, B. (2005). Preintervention analysis and
improvement of customer greeting in a restaurant. Journal of Applied Behavior Analysis, 38
(3), 411–415. doi:10.1901/jaba.2005.89-04
Thompson, R. H., & Borrero, J. C. (2014). Direct observation. In W. W. Fisher, C. C. Piazza, &
H. S. Roane (Eds.), Handbook of applied behavior analysis (pp. 472–488). New York, NY:
Guilford Press.
Umbreit, J. (1996). Functional analysis of disruptive behavior in an inclusive classroom.
Journal of Early Intervention, 20(10), 18–29. doi:10.1177/105381519602000104
Wacker, D. P., Berg, W. K., & Harding, J. W. (2014). Functional and structural approaches to
behavioral assessment of problem behavior. In W. W. Fisher, C. C. Piazza, & H. S. Roane
(Eds.), Handbook of applied behavior analysis (pp. 472–488). New York, NY: Guilford Press.
Waldvogel, J. M., & Dixon, M. R. (2008). Exploring the utility of preference assessments in
organizational behavior management. Journal of Organizational Behavior Management, 28
(1), 76–87. doi:10.1080/01608060802006831
Wilder, D. A., Cymbal, D., & Villacorta, J. (2020). The performance diagnostic checklist-
human services: A brief review. Journal of Applied Behavior Analysis, 53(2), 1170–1176.
doi:10.1002/jaba.676
Wilder, D. A., Harris, C., Casella, S., Wine, B., & Postma, N. (2011). Further evaluation of the
accuracy of managerial prediction of employee preference. Journal of Organizational
Behavior Management, 31(2), 130–139. doi:10.1080/01608061.2011.569202
Wilder, D. A., Lipschultz, J., Gehrman, C., Ertel, H., & Hodges, A. (2019). A preliminary
assessment of the reliability and validity of the performance diagnostic checklist-human
services. Journal of Organizational Behavior Management, 39(3–4), 194–212. doi:10.1080/
01608061.2019.1666772
Wilder, D. A., Lipschultz, J. L., King, A., Driscoll, S., & Sigurdsson, S. (2018). An analysis of the commonality and type of preintervention assessment procedures in the Journal of Organizational Behavior Management (2000–2015). Journal of Organizational Behavior Management, 38(1), 5–17. doi:10.1080/01608061.2017.1325822
Wilder, D. A., Rost, K., & McMahon, M. (2007). The accuracy of managerial prediction of
employee preference: A brief report. Journal of Organizational Behavior Management, 27(2),
1–14. doi:10.1300/J075v27n02_01
Windsor, J., Piché, L. M., & Locke, P. A. (1994). Preference testing: A comparison of two
presentation methods. Research in Developmental Disabilities, 15(6), 439–455. doi:10.1016/
0891-4222(94)90028-0
Wine, B., Gilroy, S., & Hantula, D. A. (2012). Temporal (in)stability of employee preferences
for rewards. Journal of Organizational Behavior Management, 32(1), 58–64. doi:10.1080/
01608061.2012.646854
Wine, B., Kelley, D. P., III, & Wilder, D. A. (2014). An initial assessment of effective preference
assessment intervals among employees. Journal of Organizational Behavior Management, 34
(3), 188–195. doi:10.1080/01608061.2014.944747
Wine, B., Reis, M., & Hantula, D. A. (2014). An evaluation of stimulus preference assessment
methodology in organizational behavior management. Journal of Organizational Behavior
Management, 34(1), 7–15. doi:10.1080/01608061.2013.873379
Wolf, M. M. (1978). Social validity: The case for subjective measurement or how applied
behavior analysis is finding its heart. Journal of Applied Behavior Analysis, 11(2), 203–214.
doi:10.1901/jaba.1978.11-203