Laboratory Medicine Quality Indicators
CME/SAM
Upon completion of this activity you will be able to: (1) identify and apply standard evaluation criteria for health-related quality indicators/performance measures; and (2) describe how current laboratory medicine quality indicators/measures meet these criteria and what the main gaps in our knowledge are.
The ASCP is accredited by the Accreditation Council for Continuing Medical Education to provide continuing medical education for physicians. The ASCP designates this educational activity for a maximum of 1 AMA PRA Category 1 Credit per article. This activity qualifies as an American Board of Pathology Maintenance of Certification Part II Self-Assessment Module. The authors of this article and the planning committee members and staff have no relevant financial relationships with commercial interests to disclose. Questions appear on p 443. Exam is located at www.ascp.org/ajcpcme.
Abstract
We summarize information on quality indicators related to laboratory testing from published literature and Internet sources to assess current gaps with respect to stages of the laboratory testing process, the Institute of Medicine (IOM) health care domains, and quality measure evaluation criteria. Our search strategy used various general and specific terms for clinical conditions and laboratory procedures. References related to a potential quality indicator associated with laboratory testing and an IOM health care domain were included. With the exception of disease- and condition-related indicators originating from clinical guidelines, the laboratory medicine quality indicators reviewed did not satisfy minimum standard evaluation criteria for quality or performance measures (ie, importance, scientific acceptability, and feasibility). These findings demonstrate a need, across the total laboratory testing process, for consistently specified, useful, and evidence-based laboratory-related quality and performance measures that are important to health outcomes, meaningful to health care stakeholders, and for which laboratories can be held accountable.
Laboratory testing and services have an important role in the provision of health care and in health care utilization and reimbursement. Assessing the quality of laboratory services using quality indicators or performance measures requires a systematic, transparent, and consistent approach to collecting and analyzing data. A comprehensive approach would address all stages of the laboratory total testing process,1 with a focus on the areas considered most likely to have important consequences for patient care and health outcomes. Quality indicator data should be collected over time to identify, correct, and continuously monitor problems; to improve performance and patient safety by identifying and implementing effective interventions; and to increase the consistency and standardization of key processes among clinical laboratories. Certain laboratory medicine quality indicators have been advocated for use as internal quality assessment tools.2-4
Quality Measures
Based on the Institute of Medicine (IOM) definition of quality of care as the degree to which health care services for individuals and populations increase the likelihood of desired health outcomes and are consistent with current professional knowledge,5 a quality indicator is a tool that enables the user to quantify the quality of a selected aspect of care by comparing it with a criterion.6 A quality indicator may be defined as an objective measure that evaluates critical health care domains as defined by the IOM (patient safety, effectiveness, equity, patient-centeredness, timeliness, and efficiency), is based on evidence associated with those domains, and can be implemented in a consistent and comparable manner across settings and over time.7
More specifically, the Agency for Healthcare Research and Quality (AHRQ) National Quality Measures Clearinghouse (NQMC), a public information database promoting widespread access to specifications and details on approximately 2,700 evidence-based health care quality measures (as of January 2009), identifies desirable attributes of a health care quality measure based on a comprehensive review of existing frameworks from national and international organizations committed to health care quality measurement and improvement.8 These criteria for quality indicators are widely adopted by many health care organizations, do not vary with an indicator's proposed use, and are grouped into 3 conceptual areas: (1) importance, (2) scientific soundness, and (3) feasibility of a measure, each having detailed narrower categories as summarized in the following sections (from the AHRQ desirable measure attributes and based on reviews of quality measure frameworks from the National Committee for Quality Assurance [NCQA], the Joint Commission, Foundation for Accountability, IOM, US Department of Health and Human Services, Performance Measures Coordinating Council, Physician Consortium for Performance Improvement, Australia's National Health Performance Committee, Britain's National Health Service, and German Agency for Quality in Medicine).

Importance
Relevance to stakeholders: topic area is of interest and financially and strategically important to stakeholders (eg, businesses, clinicians, and patients)
Health importance: addresses clinically important aspects of health, defined as high prevalence or incidence and significant effect on disease burden (ie, population morbidity and mortality)
Equitable distribution: can examine whether disparities exist among patients by analysis of subgroups
Potential for improvement: evidence indicates overall poor quality or variations in quality indicating a need for the measure
Health care system influence: results can be improved by feasible actions or interventions under health care system control

Scientific Soundness
Clinical logic: topic area is explicitly and strongly supported by evidence (ie, indicated to be of great importance to improving quality of care)
Measure properties: reliable (results reproducible and the degree to which they are free from random error), valid (associated with what it purports to measure), allow for patient and consumer variables (stratification or case-mix adjustment), and comprehensible (understandable for users who will be acting on the data)
Feasibility
Explicit specification: detailed specifications for the numerator, denominator, and data collection requirements are understandable and implementable
Data availability: needed data source is available, accessible, and timely, and consideration is given to whether the measurement costs are justified by the potential for improvement in care
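To make the explicit-specification attribute concrete, the following sketch (Python, illustrative only) encodes a measure as a named numerator and denominator applied to simple records. The example indicator, field names, and data are assumptions for illustration and are not drawn from the AHRQ framework.

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict, Iterable

Record = Dict[str, Any]

@dataclass
class MeasureSpecification:
    name: str
    denominator: Callable[[Record], bool]  # which records are eligible
    numerator: Callable[[Record], bool]    # which eligible records meet the measure

    def rate(self, records: Iterable[Record]) -> float:
        eligible = [r for r in records if self.denominator(r)]
        if not eligible:
            return float("nan")
        return 100.0 * sum(self.numerator(r) for r in eligible) / len(eligible)

# Hypothetical example: test order appropriateness, assuming each record
# carries "is_lab_order" and "meets_guideline" flags (illustrative fields only).
test_order_appropriateness = MeasureSpecification(
    name="Test order appropriateness",
    denominator=lambda r: r.get("is_lab_order", False),
    numerator=lambda r: r.get("meets_guideline", False),
)

orders = [
    {"is_lab_order": True, "meets_guideline": True},
    {"is_lab_order": True, "meets_guideline": False},
]
print(f"{test_order_appropriateness.rate(orders):.1f}%")  # 50.0%
```

Stating the numerator and denominator as explicit, reusable criteria is what allows the same measure to be computed consistently across settings and over time.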
Table 1
Laboratory Medicine Quality Indicators by Stage of the Total Testing Process

Stage and Indicator: IOM Domains*

Test ordering
  Test order appropriateness: Effectiveness, efficiency, timeliness
Patient identification/specimen collection
  Inpatient wristband identification error: Safety
  Patient satisfaction with phlebotomy: Patient-centeredness
Specimen identification, preparation, and transport
  Specimen inadequacy/rejection: Effectiveness, efficiency, safety, timeliness
  Blood culture contamination: Efficiency, safety
  Specimen container information error: Efficiency, safety
Analysis
  Proficiency testing performance: Safety
  Gynecologic cytology-biopsy discrepancy: Effectiveness, efficiency, safety
Result reporting
  Inpatient laboratory result availability: Patient-centeredness, timeliness
  Corrected laboratory reports: Efficiency, safety
  Critical values reporting: Safety, timeliness
  Turnaround time: Timeliness
  Clinician satisfaction with laboratory services: Effectiveness, timeliness
Result interpretation and ensuing action
  Follow-up of abnormal cervical cytology results: Effectiveness, timeliness

* Descriptions of the Institute of Medicine (IOM) health care domains are as follows: effectiveness, providing care processes and achieving outcomes supported by scientific evidence; efficiency, avoiding waste, including waste of equipment, supplies, ideas, and energy; equity, providing care that does not vary in quality because of personal characteristics such as sex, ethnicity, geographic location, and socioeconomic status; patient-centeredness, meeting patient needs and preferences and providing education and support; safety, preventing or reducing actual or potential bodily harm; and timeliness, obtaining needed care while reducing delays. See Table 2 for selected laboratory tests by disease/condition as noted in the Agency for Healthcare Research and Quality National Quality Measures Clearinghouse.
References were included if they related to a potential quality indicator (1) associated with laboratory testing or services and (2) having the potential to be related to at least 1 IOM health care domain.7 Indicators meeting the inclusion criteria were then categorized according to the following 6 stages of the total laboratory testing process1: (1) test ordering; (2) patient identification and specimen collection; (3) specimen identification, preparation, and transport; (4) analysis; (5) result reporting; and (6) result interpretation and ensuing action.
Few of the reviewed indicators were associated with patient-centeredness, and none were associated with equity. Based on the relatively small number of indicators and their lack of widespread use in practice, the stages of the total testing process and the IOM domains do not seem to be well covered. The AHRQ NQMC categorizes measures into the following 7 primary domains: access, outcome (health state), patient experience, process, structure, use of service, and population health.17 All of the laboratory medicine quality indicators identified except one (patient satisfaction with phlebotomy) are process measures, compared with about half of the NQMC measures. (The NQMC health care measure domains relate to the following descriptions: [1] process: health care service provided to or on behalf of a patient appropriately based on scientific evidence of efficacy or effectiveness; [2] outcome: health state of a patient resulting from health care; [3] access: patient's or enrollee's attainment of timely and appropriate health care; [4] patient experience: patient's or enrollee's report concerning observations of and participation in health care; [5] structure of care: feature of a health care organization or clinician relevant to its capacity to provide health care; [6] use of service: provision of a service to, on behalf of, or by a group of persons defined by nonclinical characteristics without determination of the appropriateness of the service; and [7] population health: state of health of a group of persons defined by nonclinical characteristics.)
Table 2
Selected Quality Measures and Guidelines for Recommended Laboratory Tests by Disease and Condition in the Agency for Healthcare Research and Quality National Quality Measures and Guideline Clearinghouses

Anemia: CMS (2004)
Breast cancer: American Society of Clinical Oncology/College of American Pathologists (2007), Breast Health Global Initiative (2006), ICSI (2005)
Cardiovascular disease: American College of Cardiology (2003), American Heart Association (2003), BMA (2006), CMS (2007), European Society of Cardiology (2005), ICSI (2006), NCQA (2003), PCPI (2003), The Joint Commission (2008), USPSTF (2008), VHA (2002)
Cervical cancer: American Cancer Society (2002), CMS (2007), ICSI (2005), Kaiser Permanente Care Management Institute (2006), NCQA (2003), The Joint Commission (2008), USPSTF (2008), VHA (2002), Wisconsin Department of Health (2006)
Chlamydia infection: NCQA (2005), USPSTF (2008)
Diabetes: BMA (2006), Health Disparities Collaboratives (2006), ICSI (2005), NCQA (2003), National Diabetes Quality Improvement Alliance (2003), VHA (2002), Wisconsin Department of Health (2002)
HIV infection/AIDS: CDC (2006), New York State Department of Health (2005), USPSTF (2008)
Lead poisoning: Wisconsin Department of Health (2006)
Pneumonia: CMS (2007), The Joint Commission (2008)
Prenatal conditions: CDC (2006), PCPI (2002), USPSTF (2008), Wisconsin Department of Health (2006)
Renal disease: CMS (2005), Renal Physicians Association (2002)
Sepsis: CMS (2007), The Joint Commission (2008)
Upper respiratory infection: ICSI (2003), NCQA (2006)
Urinary tract infection: ICSI (2004)
Venous thromboembolism: ICSI (2006)
BMA, British Medical Association; CDC, Centers for Disease Control and Prevention; CMS, Centers for Medicare & Medicaid Services; ICSI, Institute for Clinical Systems Improvement; NCQA, National Committee for Quality Assurance; PCPI, Physician Consortium for Performance Improvement; USPSTF, US Preventive Services Task Force; VHA, Veterans Health Administration.
With the exception of test order appropriateness, none of the quality indicators identified in this review is listed in any form in the AHRQ NQMC and, based on the results of this review, the indicators do not seem to satisfy its inclusion criteria.17 In particular, one NQMC criterion for process measures requires that a current review of the evidence supports that the measured clinical process has led to improved health outcomes. Other potential US sources of quality indicators and guidelines for clinical laboratories (eg, regulatory, standard-setting, and accrediting organizations) were not included in the AHRQ NQMC and NGC clearinghouses.

Summarized information for each of the 14 reviewed laboratory medicine quality indicators is provided in the following format: definition, rationale (brief statement describing supporting health-related reasons), quality gap (AHRQ health importance and potential for improving health), and evidence base (AHRQ scientific soundness-clinical logic criteria associated with quality of care outcomes and interventions).

Test Ordering

Test Order Appropriateness

Definition. Two types of quality indicators were identified. The first measures test order appropriateness, and the second measures inappropriateness: (1) Percentage of laboratory test orders that meet specific testing guidelines18,19: a list of quality measures has been compiled by the AHRQ in its NQMC database; many involve laboratory tests recommended for specific diseases and conditions (see Table 2 for a selected list).8 Unlike this measure, which is based
on laboratory test orders, the denominators for most of the measures in Table 2 are population-based, and they target not only improving health care quality but also public health. (2) Percentage of laboratory test orders duplicated within defined intervals.20-22 There is no standard definition for what constitutes an inappropriate, incorrect, or duplicative test order.

Rationale. (1) Assess appropriateness of laboratory tests ordered for screening, management, diagnosis, and monitoring of various diseases or clinical conditions consistent with guidelines. (2) Reduce wasteful and unnecessary testing.

Quality Gap. Many laboratory test orders are not supported by guidelines18,19 or are unnecessary duplicate tests.20-22 These test orders add unnecessary costs and potentially contribute to delayed, inappropriate, and potentially harmful clinical decisions. On the other hand, evidence-based laboratory testing may be underutilized. Evaluating underutilization requires population-based measures. For many guidelines specifying appropriate use of laboratory tests, including those in the AHRQ NGC, there are no quality indicators, and there is a notable lack of guidelines and indicators related to anatomic pathology.23

Evidence Base. Principal sources of guidelines relating to utilization of laboratory tests are various health care, medical, and condition-specific organizations, many of which are listed in the AHRQ NQMC and NGC databases and identified in Table 2.8 Although a few studies have shown a significant decrease in hospital length of stay (LOS) associated with
greater test order appropriateness,24,25 most studies did not indicate an effect on outcomes.18,19,26-29 Underuse of recommended laboratory tests has been shown to have a negative impact in relation to specific conditions.30-33 Promotion of guidelines,18,34,35 provision of education,18,36,37 periodic feedback,18,35,37-40 reminders,21,41 and electronic decision-support systems42,43 for clinicians, as well as changes in laboratory requisition forms34 and funding policy,34 may decrease the number of inappropriately ordered laboratory tests, resulting in cost savings. Linking clinicians to electronic medical records may decrease errors of omission and improve adherence to practice guidelines.44

Patient Identification and Specimen Collection

Inpatient Wristband Identification Error

Definition. This indicator is the percentage of inpatients with absent or wrong wristbands, multiple wristbands having conflicting data, or wristbands containing erroneous, missing, or illegible data.45-48

Rationale. Inpatient wristband errors may lead to misidentification of a patient, which could result in inappropriate treatment.49 Inpatient wristband errors could be associated with incorrectly performed laboratory tests or mislabeled patient specimens, including blood specimens that could lead to a hemolytic transfusion reaction from an incompatible blood type.46

Quality Gap. Several studies have documented the prevalence of wristband errors or, specifically, absent wristbands to be as high as 2.1% to 5.7%.45-48 However, a recent longitudinal study of wristband errors suggests the rate is close to 1%, with only 0.1% of these errors representing wristband mix-ups involving 2 patients.50 There are multiple published studies identifying some type of patient or specimen identification error as a major contributor to acute hemolytic reactions from infusion of ABO-incompatible blood, indicating that 40% to 50% of transfusion-related deaths result from identification errors49,51-54; however, there is no information specific to wristbands. There are no consistent and reliable data on the frequency with which wristband and patient and specimen identification errors occur, let alone their consequences.

Evidence Base. No published studies were found documenting a relationship between wristband errors and any process or intermediate outcomes of interest, nor were there published controlled studies with results demonstrating the effectiveness of interventions or practices at reducing inpatient wristband identification errors.52 Except for transfusion medicine, no direct evidence was found relating patient misidentification to any adverse impact on clinical, health, or cost outcomes. There is evidence for the effectiveness of wristband monitoring to decrease patient misidentification during phlebotomy.46
Patient Satisfaction With Phlebotomy

Definition. This indicator is the percentage of patients satisfied with phlebotomy services. There is no standard definition of patient satisfaction with phlebotomy, which has been assessed using questionnaires in several hospital-based outpatient55,56 and inpatient57 studies.

Rationale. Specimen collection is one of the few areas of laboratory medicine that involves direct patient contact. As a result, phlebotomy services provide one opportunity to measure patients' perceptions of their experience with laboratory services.

Quality Gap. When asked if they were satisfied with their phlebotomy experience in a survey 2 days after the procedure, 15% of outpatients stated that they were not.58 However, an earlier similar study found patients far less frequently dissatisfied with the overall phlebotomy services.56 The limitations of these data are that they are dated, as no study published after 1996 was identified assessing patients' satisfaction with phlebotomy, and no standard measurement tool has been proposed that would assess patients' satisfaction with specific aspects of the phlebotomy service.

Evidence Base. Patient satisfaction with phlebotomy services has not been related to any other outcomes. No study could be found that demonstrated the effectiveness of any intervention to improve patient satisfaction with phlebotomy services.

Specimen Identification, Preparation, and Transport

Specimen Inadequacy and Rejection

Definition. This indicator is the percentage of specimens rejected.55,59-61 There is no standard definition or specific measure to assess the adequacy of specimens.

Rationale. Specimen adequacy can affect the accuracy and usefulness of laboratory test results. Monitoring specimen acceptability may facilitate identification of quality improvement (QI) opportunities that could reduce rejection rates and improve patient care.

Quality Gap. Programs to track laboratory quality have reported aggregated specimen rejection rates ranging from 0.3% to 0.8%.55,59-61 However, in a single-institution study, the proportion of specimens rejected was up to 2.2% in the emergency department (ED).59

Evidence Base. Although some form of this indicator has been used in several hundred hospital laboratories to estimate specimen adequacy,55,59-61 no systematic study has related it to any other outcomes. The type of specimen collection personnel affected specimen rejection rates; nonlaboratory personnel were 2 to 4 times more likely to be associated with rejected specimens compared with laboratory personnel.47,48 Use of a QI monitor for specimen rejection did not result in better performance.60,61
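As a rough illustration of how the rejection indicator could be stratified by type of collection personnel (the comparison cited above), the following sketch computes rejection rates and a rate ratio from hypothetical counts; the numbers are invented for the example and do not come from the cited studies.

```python
# Illustrative only: hypothetical specimen counts by collector type.
def rejection_rate(rejected: int, received: int) -> float:
    """Percentage of received specimens that were rejected."""
    return 100.0 * rejected / received if received else float("nan")

strata = {
    "laboratory phlebotomists": (30, 20_000),   # (rejected, received)
    "nonlaboratory personnel": (55, 12_000),
}

rates = {group: rejection_rate(*counts) for group, counts in strata.items()}
for group, rate in rates.items():
    print(f"{group}: {rate:.2f}% rejected")

# Rate ratio comparing nonlaboratory collectors with laboratory phlebotomists
ratio = rates["nonlaboratory personnel"] / rates["laboratory phlebotomists"]
print(f"rate ratio: {ratio:.1f}")
```

Stratifying the same measure by subgroup is also how the equitable-distribution attribute described earlier would be examined in practice.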
Blood Culture Contamination

Definition. This indicator is defined as the percentage of positive blood cultures identified as contaminated.62 The term contaminated has not been uniformly defined.

Rationale. Laboratory evaluation and clinical intervention associated with blood culture contamination consume substantial health care resources.63-70 Clinicians rely on blood culture results to diagnose and monitor febrile patients. When acting on a potentially contaminated blood culture, clinicians must choose to ignore a result that could be potentially life-threatening or take a conservative approach of fighting an infection that might not exist.

Quality Gap. False-positive blood cultures lead not only to unnecessary repeated tests, but also to unnecessary drug use with potential harm to patients and significant downstream patient care costs. In 2 separate multi-institutional studies of inpatient blood cultures, one involving more than 600 hospitals and the other more than 300 hospitals, the median estimated blood culture contamination rates were 2.5% and 2.9%, respectively.62,68

Evidence Base. False-positive culture results are costly because they are associated with increased hospital LOS, diagnostic testing, and antibiotic prescriptions.69,70 Patients with contaminated blood cultures, compared with patients with negative blood cultures, have had statistically significantly higher total hospital LOS (13.9 vs 5.5 days), postculture LOS (8.9 vs 4.6 days), postculture number of days of antibiotic therapy (5.9 vs 2.9 days), vancomycin use and postculture cost of antibiotics ($760 vs $120 in 1993 dollars), and postculture hospital cost per patient ($10,500 vs $4,200 in 1993 dollars).70 No evidence was found directly linking a reduction in the percentage of contaminated blood cultures to other clinical or health outcomes. Long-term monitoring and use of dedicated phlebotomy teams are interventions associated with sustained reductions in blood culture contamination rates.62,64,65,67-70

Specimen Container Information Error

Definition. This indicator is the percentage of all specimens sent to the laboratory with inaccurate or inadequate information on the specimen container (eg, no label or illegible or missing patient information, clinical information, or tissue source for surgical specimens).71 There is no standard definition for what constitutes inaccurate or inadequate information.

Rationale. Specimens with inaccurate or inadequate information may adversely impact test result reporting, delay patient diagnosis and treatment, negatively impact patient satisfaction with the health care system, and negatively impact the associated clinical, health, and economic outcomes.72

Quality Gap. Studies have not been done that consistently measure the rate of inaccurate or inadequate specimen (labeling) information; however, rates have been reported
between 0.01% and 0.03% for chemistry and hematology specimens50,55,60,61 and between 0.4% and 2% for surgical pathology specimens.71,73

Evidence Base. Inaccurate or inadequate specimen information may impact clinical processes and/or outcomes50,61; however, no direct evidence was found relating this indicator to any outcome. Aside from whether personnel were from the laboratory or elsewhere,60,61 no interventions were identified that improved performance using this indicator.

Analysis

Proficiency Testing Performance

Definition. This indicator is the percentage of correct proficiency testing (PT) results. Criteria for passing vary by analyte (eg, target value ± a fixed concentration limit, a fixed percentage, or 3 SD of the results of a given laboratory group).74

Rationale. There is some evidence that PT performance relates to performance using actual patient specimens75-78; however, direct evidence is lacking. The Clinical Laboratory Improvement Amendments of 1988 (CLIA) regulations have minimum PT requirements that must be met for US laboratories to be certified.74

Quality Gap. Based on data collected from up to 7,000 physician office, clinic, and small hospital laboratories, PT failure (defined as an unacceptable PT result for an individual sample as determined by CLIA criteria74) rates in 2004 were 1.1% to 5.5% for 8 chemistry and hematology analytes, 2.8% to 7.3% for 3 positive culture tests, and 0.6% to 1.9% for 3 negative culture tests.79 An analysis of PT data from the Centers for Medicare & Medicaid Services for laboratories inspected by the College of American Pathologists (CAP), the Joint Commission, the states, and the Commission on Laboratory Accreditation (COLA) during the 1999-2003 period showed PT failure (defined as unsatisfactory PT performance [<4 of 5 PT samples with an acceptable result in a testing event as determined by CLIA criteria74] on 2 consecutive or 2 of 3 testing events) rates ranging from 4% to 6% (for CAP-inspected laboratories) to 11% to 13% (for COLA-inspected laboratories).80

Evidence Base. Although PT performance has been positively correlated with performance in blind PT75,76 and with routine patient testing,77,78 there is no direct evidence that improved PT performance positively impacts actual test performance or any other outcome. There is evidence that PT failure rates decrease with increased experience performing PT. PT failure (defined as an unacceptable PT result for an individual sample as determined by CLIA criteria74) rates for chemistry and hematology decreased from 1994 to 2004 for the 8 analytes most often tested in physician office and clinical laboratories79: from 18.7% to 3.2% for cholesterol, from 6.3% to 1.1% for potassium, and from 5.7% to 2.4% for creatinine. In addition,
microbiology failure rates decreased for positive and negative cultures between 1994 and 2004. Similar downward trends for PT failure (defined as unsatisfactory PT performance [<4 of 5 PT samples with an acceptable result in a testing event as determined by CLIA criteria74] on 2 consecutive or 2 of 3 testing events) rates were also observed using the 1999-2003 Centers for Medicare & Medicaid Services data for COLA-inspected laboratories (failure rate decreasing from 13% to 11%) and for state-inspected laboratories (from 9% to 8%).80 There is no published evidence for the effectiveness of any intervention to improve PT performance. In one study of PT, despite consistent feedback on PT errors, there was no significant change in participants' subsequent performance over time.81

Gynecologic Cytology-Biopsy Discrepancy

Definition. This indicator is the percentage of patients with discordant cervical cytology and cervical biopsy results82 for whom a Papanicolaou (Pap) smear was submitted within the previous 3 months.83 There are no standardized criteria or practices for measuring this discrepancy rate.84

Rationale. Cytohistologic correlation may be a useful tool to monitor performance and to identify specimen types prone to error.74,85 An annual evaluation of the number of gynecologic cases in which cytologic and histologic results are discrepant is required by CLIA regulations.74 Although sampling variables account for the majority of false-negative results,82,85,86 interpretation variability is substantial for all types of cervical specimens.87

Quality Gap. There seems to be great variability in practices and standards for identifying a discrepant pair of cytologic-biopsy results, and many laboratories have found that most cervical cytology-biopsy noncorrelation is the result of sampling problems.83,84 Institutional gynecologic cytologic-histologic discrepancy rates of 1.8% to 9.4% of all result pairs have been documented,82 and one estimate of Pap smear discrepancies is that they occur in 0.9% of all cytologic specimens.85 One study of cervical cytologic-biopsy specimens revealed a predictive value for a positive cytologic result of 89%.88

Evidence Base. The percentage of cytologic-histologic gynecologic discrepancies that was deemed to result in severe harm (eg, loss of life or limb or long-lasting morbidity secondary to an unnecessary diagnostic test) ranged from 0% to 6% by site in a study done in 4 hospitals.84 Based on aggregate data from these hospitals, the frequency of physician-perceived severity for discrepancies was 46% for no harm to the patient, 8% for any harm avoided by addressing such discrepancies (ie, near misses), and 45% for any patient harm.82 No improvement trend was identified for hospital laboratories participating in a program monitoring cervical cytology-biopsy discrepancy rates.83 No evidence was found
demonstrating that any intervention to reduce gynecologic cytology-histology discrepancy rates is effective or that this indicator is associated with any actual outcomes.

Result Reporting

Inpatient Laboratory Result Availability

Definition. This indicator is the percentage of test results available for morning rounds as stipulated in the institution's policy.89 There are no standard definitions for what constitutes compliance because this indicator is institution-specific.89,90

Rationale. If laboratory results are not available for clinicians' morning rounds, there may be a delay in the diagnosis and treatment of a patient that may unnecessarily prolong the LOS. The objective of this measure is to assess the compliance rate for meeting morning test reporting deadlines, which may identify opportunities for improvement.

Quality Gap. A survey of more than 300 hospitals found 10% of CBC and electrolyte tests were not reported on or before the reporting deadlines that the participating laboratories set for themselves.89 When more than 2,000 physicians from these hospitals were asked how often delayed morning laboratory test results contributed to delays in inpatient treatments or increased hospital LOS, only 1 in 4 indicated that delayed result reporting might contribute. There was no association between physician satisfaction and morning reporting compliance rates.89

Evidence Base. No published evidence was found relating this indicator to any outcomes or for interventions that are effective at improving performance.

Corrected Laboratory Reports

Definition. This indicator is the percentage of specific laboratory reports corrected.91,92 There is no standard definition for the basis of correction of such laboratory reports.

Rationale. This indicator may be used to determine causes of the corrections so that preventive actions can reduce the release of incorrect reports.

Quality Gap. Aggregate mean and median rates of corrected reports were less than 2 per 1,000 cases based on a survey of more than 1.5 million surgical pathology specimens.91

Evidence Base. In one study of microbiology laboratory reports, clinician interviews revealed that 7% of 480 corrected reports were associated with an adverse clinical impact; of these 32 cases, 59% involved delayed therapy, 25% involved unnecessary therapy, and 25% were associated with inappropriate therapy.92 Most of these errors were considered amenable to laboratory-based interventions. No published evidence was found relating this indicator to any actual outcome or for interventions that are effective at improving performance.
Critical Values Reporting

Definition. Critical values reporting is the percentage of all critical laboratory test results reported to a health care provider.93 Critical values are defined as those for which reporting delays can result in serious adverse outcomes for patients.94 There is no standard list of laboratory tests included in this indicator, nor are there standard critical value limits for specific laboratory tests,93,95-97 in part because of variation in test methods, patient populations, and individual patient characteristics. There is also no widely accepted, standard method of reporting or agreement on who should receive these laboratory test results.97

Rationale. Critical values reporting is considered an important laboratory process because it can impact clinical decision making, patient safety, and operational efficiency.94 Critical laboratory test results, by definition, represent potentially life-threatening situations98,99 and require rapid and timely evaluation by clinicians. Reporting of critical values is required by CLIA regulations,74 and the Joint Commission 2009 National Patient Safety Goals for hospitals include multiple requirements related to critical values reporting under the goal of improving the effectiveness of communication among caregivers.4

Quality Gap. Reported occurrences of critical values ranged from 1 in 2,000 to 1 in 100 tests.96,100 In a survey of about 200 hospital laboratories self-reporting their unreported critical values, there was wide variation among hospitals, with unreported critical value rates of 6.6% or more for the 25% worst-performing institutions in 2001.93 The 25% best-performing hospitals had unreported critical value rates of up to 0.9%, and half of the institutions had unreported critical value rates of 2.3% or more.93

Evidence Base. No studies were found relating critical values reporting to any outcomes; however, critical values have been found to influence patient care. In a survey of nursing supervisors and physicians, the majority of medical staff interviews (63%) and reviews of medical records (65%) indicated that critical values resulted in a change in therapy, and 95% of surveyed physicians indicated that critical laboratory results were valuable for patient care.96 No published studies were identified on any interventions that were effective in improving the rate of critical values reporting.

Turnaround Time

Definition. This indicator refers to the percentage of specific laboratory tests that do not meet a reporting deadline.101 There are no widely accepted turnaround time (TAT) goals for specific laboratory tests. Laboratories most commonly (41%) defined TAT as the time from specimen receipt in the laboratory to results reporting.102 However, order-to-reporting TAT is the most common clinician definition of TAT.102-111
Rationale. Timely reporting of laboratory tests may improve patient care efficiency, effectiveness, and satisfaction.111 In particular, the speed of diagnosis of acute myocardial infarction using cardiac troponin tests in the ED may determine the type of therapy and patient outcomes.112

Quality Gap. Of about 500 hospital laboratories returning data on more than 2.2 million stat (results expected to be reported within 1 hour from the time ordered, per CAP definitions used in past Q-Probes studies [1991-2008], http://www.cap.org/apps/docs/q_probes/q-probes_definitions.pdf; updated December 8, 2008; accessed January 24, 2009) tests, TATs in excess of 70 minutes were observed for 11% of such tests.110 In another study using a different definition of unacceptable TAT, of approximately 300 hospitals monitoring the TAT for 225,000 stat ED potassium levels, 15% fell short of the expectations of the ordering clinicians.113

Evidence Base. Many stat tests are not used for urgent clinical decisions; therefore, faster results may not impact outcomes.106 Some studies have shown shorter TATs can shorten LOS in certain ED situations,108,114-118 but the impact on other outcomes is unclear. Except for implementation of point-of-care testing,115,119-121 no published studies were identified on any intervention that was consistently effective in improving laboratory TAT.

Clinician Satisfaction With Laboratory Services

Definition. This indicator is the percentage of clinicians satisfied with various aspects of laboratory services such as TAT, accessibility, and communication.115,122-127 There are no standardized measures.

Rationale. Customer satisfaction is generally considered a quality measure, with clinicians being the immediate customer for most laboratory services.

Quality Gap. The lowest satisfaction scores have been related to poor communication, including timely reporting, communication of relevant information, and notification of significant abnormal results.127 The following dissatisfaction rates have been reported for clinicians: 5% for the overall surgical consultation process,123 10% to 47% for various aspects of reference laboratory telephone services,124 9% to 47% for various aspects of anatomic pathology services,127 22% for a hospital transfusion service,125 and 10% to 21% for chemical pathology services (communication with the laboratory, TAT, and reporting format).122

Evidence Base. Specific aspects of clinician dissatisfaction may be related to diagnostic or treatment errors or delays and inappropriate utilization of laboratory services and their associated costs; however, no evidence was found relating this indicator to any outcomes. Except for indirect evidence that implementation of point-of-care testing reduces TAT,115,119-121 there is no direct evidence for any interventions that would improve clinician satisfaction with laboratory services.
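Because the receipt-to-report (laboratory) and order-to-report (clinician) TAT definitions noted above can classify the same test differently against a deadline, a small sketch may help. The 70-minute threshold echoes the cutoff cited in the TAT quality gap; the timestamps below are illustrative assumptions, not data from any cited study.

```python
from datetime import datetime, timedelta

# Hypothetical stat test records: (ordered, received in laboratory, reported)
orders = [
    (datetime(2009, 1, 5, 7, 0),  datetime(2009, 1, 5, 7, 20), datetime(2009, 1, 5, 8, 5)),
    (datetime(2009, 1, 5, 7, 10), datetime(2009, 1, 5, 7, 45), datetime(2009, 1, 5, 9, 0)),
]

threshold = timedelta(minutes=70)  # echoes the 70-minute cutoff cited above

def pct_exceeding(tats):
    """Percentage of turnaround times longer than the threshold."""
    return 100.0 * sum(t > threshold for t in tats) / len(tats)

# Laboratory definition: receipt-to-report; clinician definition: order-to-report
lab_tat = [reported - received for ordered, received, reported in orders]
clinician_tat = [reported - ordered for ordered, received, reported in orders]

print(f"receipt-to-report TAT > 70 min: {pct_exceeding(lab_tat):.0f}%")
print(f"order-to-report TAT > 70 min: {pct_exceeding(clinician_tat):.0f}%")
```

The same compliance calculation yields different results under the two definitions, which is one reason the indicator is difficult to compare across institutions without an agreed specification.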
Result Interpretation and Ensuing Action

Follow-up of Abnormal Cervical Cytologic Results

Definition. This indicator is the percentage of abnormal cervical cytologic (Pap smear) results that were not followed up within 6 months.128 Follow-up procedures, however, have not been uniformly defined.

Rationale. For Pap smear screening to be effective in preventing cervical cancer, appropriate and timely clinical follow-up for patients with abnormal findings is needed.128

Quality Gap. A survey of more than 300 hospital laboratories reported follow-up information for approximately 16,000 patients with cervical cytologic diagnoses of carcinoma, high-grade squamous intraepithelial lesion (SIL), low-grade SIL, or glandular intraepithelial lesion.128 Within 6 months, the following percentages of patients had not received any follow-up procedures: 18% with carcinoma, 18% with high-grade SIL, 28% with low-grade SIL, and 26% with glandular intraepithelial lesions. More than 12% of patients with cytologic findings of high-grade SIL or carcinoma had no documentation of follow-up within 1 year.128 Similarly, an earlier study found 12% of abnormal cervical cytologic results lacked follow-up.129 Of 60 adolescent patients referred to a colposcopy clinic, 38% did not keep their colposcopy appointment despite outreach, and 13% to 17% of patients had no documented procedural follow-up 1 year later.130

Evidence Base. There is no published, direct evidence that follow-up of women with abnormal cervical cytologic results is related to any clinical, health, or cost outcomes. However, considering the strong evidence supporting Pap smear screening,30 follow-up of abnormal cytologic results can be linked by inference to health outcomes. Involvement of the family physician,129 outreach interventions,131,132 enhancement of teamwork and functional coordination,133 direct-mail communications with and without phone intervention,134 an intensive follow-up protocol,135 and provision of risk communication packages136 and economic vouchers135 have been shown to increase the rate of cervical cytology follow-up.
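A sketch of how this follow-up indicator could be operationalized is shown below; the 6-month window is approximated as 183 days, and the case records, diagnoses, and field names are hypothetical examples rather than data from the cited surveys.

```python
from datetime import date, timedelta
from typing import Optional

def lacks_followup(result_date: date, followup_date: Optional[date],
                   window: timedelta = timedelta(days=183)) -> bool:
    """True if no documented follow-up occurred within the window (about 6 months)."""
    return followup_date is None or followup_date - result_date > window

# Hypothetical abnormal cervical cytology results and any documented follow-up
cases = [
    ("high-grade SIL", date(2008, 3, 1), date(2008, 4, 15)),
    ("low-grade SIL",  date(2008, 3, 1), None),
    ("carcinoma",      date(2008, 3, 1), date(2008, 11, 1)),
]

missed = [dx for dx, result, followup in cases if lacks_followup(result, followup)]
print(f"{100.0 * len(missed) / len(cases):.0f}% without follow-up within 6 months")
```

In practice, the main specification problems are exactly those noted above: which follow-up procedures count and how the window is anchored have not been uniformly defined.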
Discussion
This review summarizes published information on certain laboratory testing-related quality indicators and is, therefore, subject to publication bias. A more detailed evaluation of the indicators reviewed was not completed owing to the paucity of published information; considerable variation and inconsistency in key terms, definitions, implementation, and measurement and reporting practices; and a lack of basic supporting evidence. These problems resulted in a general lack of evidence supporting the importance, scientific soundness,
and usefulness of most of these indicators, particularly those typically used for internal QI, because laboratories do not generally publish their internal monitoring data. For the laboratory indicators reviewed, standardized terminology, measurement specifications, data collection methods, and evidence establishing quality gaps and relationships to process, clinical, health, and economic outcomes are needed. The relevance of the identified quality indicators to various health system stakeholders and their use to positively impact the health care system were typically not addressed in the information that was available, indicating that their selection was not made on the basis of evidence-based evaluation but instead relied on opinion within the laboratory community. Although most of the quality indicators identified may be useful for internal QI, for the many reasons identified, they are not meaningful for external comparisons or public reporting.

One of the limitations of the reviewed indicators is that they do not apply as well to commercial laboratories despite the considerable proportion of testing conducted by these entities. A reason for this is that there has been a lack of effort by commercial laboratories and the broader field of laboratory medicine to develop such indicators and to make them publicly accessible, despite these laboratories' disproportionately large share of laboratory testing volume. Many of these indicators are based primarily on self-reported surveys rather than on scientific study designs and/or adequately specified, standardized, and consistently implemented data collection methods. The general lack of evidence supporting many laboratory medicine indicators results in part from the difficulty inherent in such studies because it is not easy to attribute effects on outcomes to specific laboratory processes given the many other confounding variables. There seems to be a dearth of data even from retrospective, observational studies scientifically validating many of these indicators. Laboratory testing and related process improvements certainly have the potential to improve outcomes of interest and consequences that are also relevant to the IOM domains; however, this has not been demonstrated for the quality indicators identified in this review, with the exception of blood culture contamination and population-based testing measures consistent with guidelines that have originated in the broader health care community (Table 2).

This review highlights the fact that the reviewed laboratory medicine quality indicators do not adequately address the stages of the total laboratory testing process or the IOM domains of health care, most notably equity and patient-centeredness. The most germane elements of patient-centeredness (eg, participatory and shared decision making137) are not usually evaluated in the area of laboratory testing. These include involving patients in the decision to order a test consistent with their values and preferences and understanding
of laboratory results and possible future clinical or preventive actions.138 Other areas that have not been adequately monitored are metrics related to laboratory-driven clinical and preventive actions, in which effective use of health information technology and medical decision-support systems has been shown to improve the provision of service.139,140

Notwithstanding the lack of published evidence-based indicators for laboratory performance, a great deal of collective and individual expert review and effort went into developing laboratory indicators by organizations such as the CAP and the Joint Commission, based on consensus around accreditation standards, best practices, and measures of performance. Until the advent of new evidence-based laboratory medicine guidelines and quality indicators, however, it seems prudent to continue relying on accepted industry and clinical time-tested standards to guide laboratory practice in lieu of other available and reasonable alternatives.

Because there are so many processes involved in laboratory testing, there is considerable challenge in identifying, defining, and, ultimately, implementing indicators that cover the various stages of the total laboratory testing process, in general and specific to different diseases and conditions, and that address the IOM domains, various testing environments, and multiple relevant stakeholders. We did not review quality indicators for some steps in the laboratory testing process, such as specimen receipt, log-in, and processing, even though they are frequent sources of errors, because metrics for assessing these steps have not been well defined and standardized, and the published laboratory literature has not evaluated these steps in multi-institutional settings. Progress requires developing common priorities, standards, and definitions to facilitate meaningful measurement development initiatives and data collection for external comparisons. This requires substantial effort because these basic, necessary requirements have not yet been met for laboratory testing-related indicators. Ultimately, future efforts should be directed to developing a set of laboratory medicine quality indicators that have significant health importance and are scientifically sound, implementable with standardized and available data elements, and useful to multiple stakeholders.23 Such a set of indicators should make it possible to develop meaningful public reporting on the status of laboratory-based health care, with the ultimate goal of improving the provision and utilization of laboratory services and thereby contributing to improved health care quality and population health.
From the Laboratory Practice Evaluation and Genomics Branch, Division of Laboratory Systems, National Center for Preparedness, Detection and Control of Infectious Diseases, Centers for Disease Control and Prevention, Atlanta, GA. Address reprint requests to Dr Shahangian: CDC, 1600 Clifton Rd, NE, Mailstop G-23, Atlanta, GA 30329; [email protected].
References
1. Lundberg GD. Acting on significant laboratory results. JAMA. 1981;245:1762-1763.
2. Clinical and Laboratory Standards Institute. Application of a Quality Management System Model for Laboratory Services. 3rd ed. Wayne, PA: CLSI; 2004. Document GP26-A3.
3. College of American Pathologists. About Q-Tracks. http://www.cap.org/apps/cap.portal?_nfpb=true&cntvwrPtlt_actionOverride=%2Fportlets%2FcontentViewer%2Fshow&_windowLabel=cntvwrPtlt&cntvwrPtlt%7BactionForm.contentReference%7D=q_tracks%2Fq_tracks_desc.html&_state=maximized&_pageLabel=cntvwr. Updated January 2, 2009. Accessed January 24, 2009.
4. The Joint Commission. 2009 national patient safety goals: laboratory services program. http://www.jointcommission.org/patientsafety/nationalpatientsafetygoals/09_lab_npsgs.htm. Accessed January 24, 2009.
5. Institute of Medicine Committee to Design a Strategy for Quality Review and Assurance in Medicare. Medicare: A Strategy for Quality Assurance. Washington, DC: National Academies Press; 1990.
6. Agency for Healthcare Research and Quality. National Quality Measures Clearinghouse: desirable measure attributes. http://www.qualitymeasures.ahrq.gov/resources/measure_use.aspx#attributes. Updated January 19, 2009. Accessed January 24, 2009.
7. Institute of Medicine Committee on Quality of Health Care in America. To Err Is Human: Building a Safer Health System. Washington, DC: National Academies Press; 2000.
8. Agency for Healthcare Research and Quality. National Quality Measures Clearinghouse: measure archive. http://www.qualitymeasures.ahrq.gov/resources/summaryarchive.aspx. Updated January 19, 2009. Accessed January 24, 2009.
9. Agency for Healthcare Research and Quality. National Guideline Clearinghouse. http://www.guideline.gov/browse/browsecondition.aspx. Updated January 19, 2009. Accessed January 24, 2009.
10. Agency for Healthcare Research and Quality. National Healthcare Quality Report. http://www.ahrq.gov/qual/nhqr07/nhqr07.pdf. February 2008. Accessed January 24, 2009.
11. College of American Pathologists. Past Q-Probes studies. http://www.cap.org/apps/cap.portal?_nfpb=true&cntvwrPtlt_actionOverride=%2Fportlets%2FcontentViewer%2Fshow&_windowLabel=cntvwrPtlt&cntvwrPtlt%7BactionForm.contentReference%7D=q_probes%2Fpaststudies.html&_state=maximized&_pageLabel=cntvwr. Updated December 8, 2008. Accessed January 24, 2009.
12. National Committee for Quality Assurance. Measuring quality: improving health. http://web.ncqa.org/tabid/661/default.aspx. Accessed January 24, 2009.
13. Agency for Healthcare Research and Quality. US Preventive Services Task Force (USPSTF). http://www.ahrq.gov/clinic/uspstfix.htm. Accessed January 24, 2009.
14. Centers for Disease Control and Prevention. MMWR Morbidity and Mortality Weekly Report. MMWR recommendations and reports: current volume. http://www.cdc.gov/mmwr/mmwr_rr.html. Accessed January 24, 2009.
15. Centers for Disease Control and Prevention. MMWR Morbidity and Mortality Weekly Report. MMWR recommendations and reports: past volumes. http://www.cdc.gov/mmwr/recreppy.html. Accessed January 24, 2009.
16. Guide to Community Preventive Services. The community guide. http://www.thecommunityguide.org. Updated December 23, 2008. Accessed January 24, 2009.
17. Agency for Healthcare Research and Quality. National Quality Measures Clearinghouse: inclusion criteria. http://www.qualitymeasures.org/about/inclusion.aspx. Updated January 19, 2009. Accessed January 24, 2009.
18. Merlani P, Garnerin P, Diby M, et al. Quality improvement report: linking guideline to regular feedback to increase appropriate requests for clinical tests: blood gas analysis in intensive care. BMJ. 2001;323:620-624.
19. Ozbek OA, Oktem MA, Dogan G, et al. Application of hepatitis serology testing algorithms to assess inappropriate laboratory utilization. J Eval Clin Pract. 2004;10:519-523.
20. Bates DW, Boyle DL, Rittenberg E, et al. What proportion of common diagnostic tests appear redundant? Am J Med. 1998;104:361-368.
21. Bates DW, Kuperman GJ, Rittenberg E, et al. A randomized trial of a computer-based intervention to reduce utilization of redundant laboratory tests. Am J Med. 1999;106:144-150.
22. Valenstein P, Schifman RB. Duplicate laboratory orders: a College of American Pathologists Q-Probes study of thyrotropin requests in 502 institutions. Arch Pathol Lab Med. 1996;120:917-921.
23. Behal R. Identification of performance measures of importance for quality in laboratory medicine. http://qualityforum.org/pdf/projects/lab-med/txlabpaper_behal_final%2005-21-07.pdf. December 2006. Accessed January 24, 2009.
24. Shea S, Sideli RV, DuMouchel W, et al. Computer-generated informational messages directed to physicians: effect on length of hospital stay. J Am Med Inform Assoc. 1995;2:58-64.
25. Tierney WM, Miller ME, Overhage JM, et al. Physician inpatient order writing on microcomputer workstations: effects on resource utilization. JAMA. 1993;269:379-383.
26. Tierney WM, Miller ME, McDonald CJ. The effect on test ordering of informing physicians of the charges for outpatient diagnostic tests. N Engl J Med. 1990;322:1499-1504.
27. Neilson EG, Johnson KB, Rosenbloom ST, et al. The impact of peer management on test-ordering behavior. Ann Intern Med. 2004;141:196-204.
28. Wang TJ, Mort EA, Nordberg P, et al. A utilization management intervention to reduce unnecessary testing in the coronary care unit. Arch Intern Med. 2002;162:1885-1890.
29. Hampers LC, Cha S, Gutglass DJ, et al. The effect of price information on test-ordering behavior and patient outcomes in a pediatric emergency department. Pediatrics. 1999;103:877-882.
30. US Preventive Services Task Force. The guide to clinical preventive services 2008: recommendations of the US Preventive Services Task Force. http://www.ahrq.gov/clinic/pocketgd08/pocketgd08.pdf. September 2008. Accessed January 24, 2009.
31. Morrow DA, Cannon CP, Jesse RL, et al. National Academy of Clinical Biochemistry Laboratory Medicine practice guidelines: clinical characteristics and utilization of biochemical markers in acute coronary syndromes. Circulation. 2007;115:e356-e375. http://circ.ahajournals.org/cgi/reprint/115/13/e356. Accessed January 24, 2009.
32. Tang WH, Francis GS, Morrow DA, et al. National Academy of Clinical Biochemistry Laboratory Medicine practice guidelines: clinical utilization of cardiac biomarker testing in heart failure. Circulation. 2007;116:e99-e109. http://circ.ahajournals.org/cgi/reprint/116/5/e99. Accessed January 24, 2009.
33. Stratton IM, Adler AI, Neil HA, et al. Association of glycaemia with macrovascular and microvascular complications of type 2 diabetes (UKPDS 35): prospective observational study. BMJ. 2000;321:405-412.
34. van Walraven C, Goel V, Chan B. Effect of population-based interventions on laboratory utilization: a time-series analysis. JAMA. 1998;280:2028-2033.
35. Verstappen WHJM, van der Weijden T, Sijbrandij J, et al. Effect of a practice-based strategy on test ordering performance of primary care physicians: a randomized trial. JAMA. 2003;289:2407-2412.
36. Sucov A, Bazarian JJ, de Lahunta EA, et al. Test ordering guidelines can alter ordering patterns in an academic emergency department. J Emerg Med. 1999;17:391-397.
37. Bunting PS, van Walraven C. Effect of a controlled feedback intervention on laboratory test ordering by community physicians. Clin Chem. 2004;50:321-326.
38. Winkens RAG, Pop P, Grol RPTM, et al. Effect of feedback on test ordering behaviour of general practitioners. BMJ. 1992;304:1093-1096.
39. Winkens RAG, Pop P, Bugter-Maessen AMA, et al. Randomised controlled trial of routine individual feedback to improve rationality and reduce numbers of test requests. Lancet. 1995;345:498-502.
40. Winkens RAG, Ament AJHA, Pop P, et al. Routine individual feedback on requests for diagnostic tests: an economic evaluation. Med Decis Making. 1996;16:309-314.
41. Durieux P, Ravaud P, Porcher R, et al. Long-term impact of a restrictive laboratory test ordering form on tumor marker prescriptions. Int J Technol Assess Health Care. 2003;19:106-113.
42. Poley MJ, Edelenbos KI, Mosseveld M, et al. Cost consequences of implementing an electronic decision support system for ordering laboratory tests in primary care: evidence from a controlled prospective study in the Netherlands. Clin Chem. 2007;53:213-219.
43. Smith BJ, McNeely MDD. The influence of an expert system for test ordering and interpretation on laboratory investigations. Clin Chem. 1999;45:1168-1175.
44. Overhage JM, Tierney WM, Zhou XH, et al. A randomized trial of corollary orders to prevent errors of omission. J Am Med Inform Assoc. 1997;4:364-375.
45. Dale JC, Renner SW. Wristband errors in small hospitals: a College of American Pathologists Q-Probes study of quality issues in patient identification. Lab Med. 1997;28:203-207.
46. Howanitz PJ, Renner SW, Walsh MK. Continuous wristband monitoring over 2 years decreases identification errors: a College of American Pathologists Q-Tracks study. Arch Pathol Lab Med. 2002;126:809-815.
47. Novis DA, Miller KA, Howanitz PJ, et al. Audit of transfusion procedures in 660 hospitals: a College of American Pathologists Q-Probes study of patient identification and vital sign monitoring frequencies in 16494 transfusions. Arch Pathol Lab Med. 2003;127:541-548.
48. Renner SW, Howanitz PJ, Bachner P. Wristband identification error reporting in 712 hospitals: a College of American Pathologists Q-Probes study of quality issues in transfusion practice. Arch Pathol Lab Med. 1993;117:573-577.
49. Linden JV, Wagner K, Voytovich AE, et al. Transfusion errors in New York State: an analysis of 10 years' experience. Transfusion. 2000;40:1207-1213.
50. Valenstein PN, Raab SS, Walsh MK. Identification errors involving clinical laboratories: a College of American Pathologists Q-Probes study of patient and specimen identification errors at 120 institutions. Arch Pathol Lab Med. 2006;130:1106-1113.
51. Linden JV, Paul B, Dressler KP. A report of 104 transfusion errors in New York State. Transfusion. 1992;32:601-606.
52. Valenstein PN, Sirota RL. Identification errors in pathology and laboratory medicine. Clin Lab Med. 2004;24:979-996.
53. Williamson LM, Lowe S, Love EM, et al. Serious Hazards of Transfusion (SHOT) initiative: analysis of the first two annual reports. BMJ. 1999;319:16-19.
54. Mercuriali F, Inghilleri G, Colotti MT, et al. Bedside transfusion errors: analysis of 2 years' use of a system to monitor and prevent transfusion errors. Vox Sang. 1996;70:16-20.
55. Dale JC, Novis DA. Outpatient phlebotomy success and reasons for specimen rejection. Arch Pathol Lab Med. 2002;126:416-419.
56. Howanitz PJ, Cembrowski GS, Bachner P. Laboratory phlebotomy: College of American Pathologists Q-Probe study of patient satisfaction and complications in 23783 patients. Arch Pathol Lab Med. 1991;115:867-872.
57. Howanitz PJ, Schifman RB. Inpatient phlebotomy practices: a College of American Pathologists Q-Probes quality improvement study of 2351643 phlebotomy requests. Arch Pathol Lab Med. 1994;118:601-605.
58. Dale JC, Howanitz PJ. Patient satisfaction in phlebotomy: a College of American Pathologists Q-Probes study. Lab Med. 1996;27:188-192.
59. Stark A, Jones BA, Chapman D, et al. Clinical laboratory specimen rejection: association with the site of patient care and patients' characteristics. Arch Pathol Lab Med. 2007;131:588-592.
60. Jones BA, Calam RR, Howanitz PJ. Chemistry specimen acceptability: a College of American Pathologists Q-Probes study of 453 laboratories. Arch Pathol Lab Med. 1997;121:19-26.
61. Jones BA, Meier F, Howanitz PJ. Complete blood count specimen acceptability: a College of American Pathologists Q-Probes study of 703 laboratories. Arch Pathol Lab Med. 1995;119:203-208.
62. Schifman RB, Strand CL, Meier FA, et al. Blood culture contamination: a College of American Pathologists Q-Probes study involving 640 institutions and 497134 specimens from adult patients. Arch Pathol Lab Med. 1998;122:216-221.
63. Bates DW, Goldman L, Lee TH. Contaminant blood cultures and resource utilization: the true consequences of false-positive results. JAMA. 1991;265:365-369.
64. Little JR, Murray PR, Traynor PS, et al. A randomized trial of povidone-iodine compared with iodine tincture for venipuncture site disinfection: effects on rates of blood culture contamination. Am J Med. 1999;107:119-125.
65. Norberg A, Christopher NC, Ramundo ML, et al. Contamination rates of blood cultures obtained by dedicated phlebotomy vs intravenous catheter. JAMA. 2003;289:726-729.
66. Shafazand S, Weinacker AB. Blood cultures in the critical care unit: improving utilization and yield. Chest. 2002;122:1727-1736.
67. Surdulescu S, Utamsingh D, Shekar R. Phlebotomy teams reduce blood-culture contamination rate and save money. Clin Perform Qual Health Care. 1998;6:60-62.
68. Bekeris LG, Tworek JA, Walsh MK, et al. Trends in blood culture contamination: a College of American Pathologists Q-Tracks study of 356 institutions. Arch Pathol Lab Med. 2005;129:1222-1225.
69. Weinbaum FI, Lavie S, Danek M, et al. Doing it right the first time: quality improvement and the contaminant blood culture. J Clin Microbiol. 1997;35:563-565.
70. Weinstein MP. Blood culture contamination: persisting problems and partial progress. J Clin Microbiol. 2003;41:2275-2278.
71. Nakhleh RE, Zarbo RJ. Surgical pathology specimen identification and accessioning: a College of American Pathologists Q-Probes study of 1004115 cases from 417 institutions. Arch Pathol Lab Med. 1996;120:227-233.
72. Howanitz PJ. Errors in laboratory medicine: practical lessons to improve patient safety. Arch Pathol Lab Med. 2005;129:1252-1261.
73. Makary MA, Epstein J, Pronovost PJ, et al. Surgical specimen identification errors: a new measure of quality in surgical care. Surgery. 2007;141:450-455.
74. Laboratory Requirements: Clinical Laboratory Improvement Amendments of 1988. 42 USC (1988). http://wwwn.cdc.gov/clia/pdf/42cfr493_2004.pdf. Accessed January 24, 2009.
75. Parsons PJ, Reilly AA, Esernio-Jenssen D, et al. Evaluation of blood lead proficiency testing: comparison of open and blind paradigms. Clin Chem. 2001;47:322-330.
76. Reilly AA, Salkin IF, McGinnis MR, et al. Evaluation of mycology laboratory proficiency testing. J Clin Microbiol. 1999;37:2297-2305.
77. Jenny RW, Jackson KY. Proficiency test performance as a predictor of accuracy of routine patient testing for theophylline. Clin Chem. 1993;39:76-81.
78. Keenlyside RA, Collins CL, Hancock JS, et al. Do proficiency test results correlate with the work performance of screeners who screen Papanicolaou smears? Am J Clin Pathol. 1999;112:769-776.
79. Edson DC, Massey LD. Proficiency testing performance in physicians' office, clinic and small hospital laboratories, 1994-2004. Lab Med. 2007;38:237-239.
80. Government Accountability Office. Clinical lab quality: CMS and survey organization oversight should be strengthened. http://www.gao.gov/new.items/d06416.pdf. June 2006. Accessed January 24, 2009.
81. Novak RW. Do proficiency testing participants learn from their mistakes? Experience from the EXCEL throat culture module. Arch Pathol Lab Med. 2002;126:147-149.
82. Raab SS. Improving patient safety through quality assurance. Arch Pathol Lab Med. 2006;130:633-637.
83. Zarbo RJ, Jones BA, Friedberg RC, et al. Q-Tracks: a College of American Pathologists program of continuous laboratory monitoring and longitudinal tracking. Arch Pathol Lab Med. 2002;126:1036-1044.
84. Raab SS, Grzybicki DM, Zarbo RJ, et al. Anatomic pathology databases and patient safety. Arch Pathol Lab Med. 2005;129:1246-1251.
85. Clary KM, Silverman JF, Liu Y, et al. Cytohistologic discrepancies: a means to improve pathology practice and patient outcomes. Am J Clin Pathol. 2002;117:567-573.
86. Selvaggi SM. Implications of low diagnostic reproducibility of cervical cytologic and histologic diagnoses. JAMA. 2001;285:1506-1508.
87. Stoler MH, Schiffman M. Interobserver reproducibility of cervical cytologic and histologic interpretations: realistic estimates from the ASCUS-LSIL Triage Study. JAMA. 2001;285:1500-1505.
88. Jones BA, Novis DA. Cervical biopsy-cytology correlation: a College of American Pathologists Q-Probes study of 22439 correlations in 348 laboratories. Arch Pathol Lab Med. 1996;120:523-531.
89. Novis DA, Dale JC. Morning rounds inpatient test availability: a College of American Pathologists Q-Probes study of 79860 morning complete blood cell count and electrolyte test results in 367 institutions. Arch Pathol Lab Med. 2000;124:499-503.
90. Dale JC, Steindel SJ, Walsh M. Early morning blood collections: a College of American Pathologists Q-Probes study of 657 institutions. Arch Pathol Lab Med. 1998;122:865-870.
91. Nakhleh RE, Zarbo RJ. Amended reports in surgical pathology and implications for diagnostic error detection and avoidance: a College of American Pathologists Q-Probes study of 1667547 accessioned cases in 359 laboratories. Arch Pathol Lab Med. 1998;122:303-309.
92. Yuan S, Astion ML, Schapiro J, et al. Clinical impact associated with corrected results in clinical microbiology testing. J Clin Microbiol. 2005;43:2188-2193.
93. Wagar EA, Stankovic AK, Wilkinson DS, et al. Assessment monitoring of laboratory critical values: a College of American Pathologists Q-Tracks study of 180 institutions. Arch Pathol Lab Med. 2007;131:44-49.
94. Dighe AS, Rao A, Coakley AB, et al. Analysis of laboratory critical value reporting at a large academic medical center. Am J Clin Pathol. 2006;125:758-764.
95. Hanna D, Griswold P, Leape LL, et al. Communicating critical test results: safe practice recommendations. Jt Comm J Qual Patient Saf. 2005;31:68-80.
96. Howanitz PJ, Steindel SJ, Heard NV. Laboratory critical values policies and procedures: a College of American Pathologists Q-Probes study in 623 institutions. Arch Pathol Lab Med. 2002;126:663-669.
97. Wagar EA, Friedberg RC, Souers R, et al. Critical values comparison: a College of American Pathologists Q-Probes survey of 163 clinical laboratories. Arch Pathol Lab Med. 2007;131:1769-1775.
98. Tate KE, Gardner RM. Computers, quality, and the clinical laboratory: a look at critical value reporting. Proc Annu Symp Comput Appl Med Care. 1993;193-197.
99. Emancipator K. Critical values: ASCP practice parameter: American Society of Clinical Pathologists. Am J Clin Pathol. 1997;108:247-253.
100. Kuperman GJ, Boyle D, Jha A, et al. How promptly are inpatients treated for critical laboratory results? J Am Med Inform Assoc. 1998;5:112-119.
101. Bonini P, Plebani M, Ceriotti F, et al. Errors in laboratory medicine. Clin Chem. 2002;48:691-698.
102. Steindel SJ, Howanitz PJ. Physician satisfaction and emergency department laboratory test turnaround time. Arch Pathol Lab Med. 2001;125:863-871.
103. Howanitz PJ, Cembrowski GS, Steindel SJ, et al. Physician goals and laboratory test turnaround times: a College of American Pathologists Q-Probes study of 2763 clinicians and 722 institutions. Arch Pathol Lab Med. 1993;117:22-28.
104. Jones BA, Novis DA. Nongynecologic cytology turnaround time: a College of American Pathologists Q-Probes study of 180 laboratories. Arch Pathol Lab Med. 2001;125:1279-1284.
105. Jones BA, Valenstein PN, Steindel SJ. Gynecologic cytology turnaround time: a College of American Pathologists Q-Probes study of 371 laboratories. Arch Pathol Lab Med. 1999;123:682-686.
106. Kilgore ML, Steindel SJ, Smith JA. Evaluating stat testing options in an academic health center: therapeutic turnaround time and staff satisfaction. Clin Chem. 1998;44:1597-1603.
107. Novis DA, Jones BA, Dale JC, et al. Biochemical markers of myocardial injury test turnaround time: a College of American Pathologists Q-Probes study of 7020 troponin and 4368 creatine kinase-MB determinations in 159 institutions. Arch Pathol Lab Med. 2004;128:158-164.
108. Steindel SJ. Timeliness of clinical laboratory tests: a discussion based on five College of American Pathologists Q-Probes studies. Arch Pathol Lab Med. 1995;119:918-923.
109. Steindel SJ, Jones BA. Routine outpatient laboratory test turnaround times and practice patterns: a College of American Pathologists Q-Probes study. Arch Pathol Lab Med. 2002;126:11-18.
110. Steindel SJ, Novis DA. Using outlier events to monitor test turnaround time. Arch Pathol Lab Med. 1999;123:607-614.
111. Valenstein P, Walsh M. Five-year follow-up of routine outpatient test turnaround time: a College of American Pathologists Q-Probes study. Arch Pathol Lab Med. 2003;127:1421-1423.
112. Ryan TJ, Antman EM, Brooks NH, et al. 1999 update: ACC/AHA Guidelines for the Management of Patients With Acute Myocardial Infarction: executive summary and recommendations: a report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines (Committee on Management of Acute Myocardial Infarction). Circulation. 1999;100:1016-1030.
113. Novis DA, Walsh MK, Dale JC, et al. Continuous monitoring of stat and routine outlier turnaround times: two College of American Pathologists Q-Tracks monitors in 291 hospitals. Arch Pathol Lab Med. 2004;128:621-626.
114. Bluth EI, Lambert DJ, Lohmann TP, et al. Improvement in stat laboratory turnaround time: a model continuous quality improvement project. Arch Intern Med. 1992;152:837-840.
115. Lee-Lewandrowski E, Corboy D, Lewandrowski K, et al. Implementation of a point-of-care satellite laboratory in the emergency department of an academic medical center: impact on test turnaround time and patient emergency department length of stay. Arch Pathol Lab Med. 2003;127:456-460.
116. Holland LL, Smith LL, Blick KE. Reducing laboratory turnaround time outliers can reduce emergency department patient length of stay: an 11-hospital study. Am J Clin Pathol. 2005;124:672-674.
117. Singer AJ, Ardise J, Gulla J, et al. Point-of-care testing reduces length of stay in emergency department chest pain patients. Ann Emerg Med. 2005;45:587-591.
118. Holland LL, Smith LL, Blick KE. Total laboratory automation can help eliminate the laboratory as a factor in emergency department length of stay. Am J Clin Pathol. 2006;125:765-770.
119. Fitch JC, Mirto GP, Geary KL, et al. Point-of-care and standard laboratory coagulation testing during cardiovascular surgery: balancing reliability and timeliness. J Clin Monit Comput. 1999;15:197-204.
120. Lewandrowski K. How the clinical laboratory and the emergency department can work together to move patients through quickly. Clin Leadersh Manag Rev. 2004;18:155-159.
121. Lazarenko GC, Dobson C, Enokson R, et al. Accuracy and speed of urine pregnancy tests done in the emergency department: a prospective study. CJEM. 2001;3:292-295.
122. Allen KR, Harris CM. Measure of satisfaction of general practitioners with the chemical pathology services in Leeds Western Health District. Ann Clin Biochem. 1992;29:331-336.
123. Azam M, Nakhleh RE. Surgical pathology extradepartmental consultation practices. Arch Pathol Lab Med. 2002;126:405-412.
124. Dale JC, Novis DA, Meier FA. Reference laboratory telephone service quality. Arch Pathol Lab Med. 2001;125:608-612.
125. Pennington SJ, McClelland DB, Murphy WG. Clinicians' satisfaction with a hospital blood transfusion service: a marketing analysis of a monopoly supplier. Qual Health Care. 1993;2:239-242.
126. Rau J, Cross JL, Hofherr LK, et al. Physician satisfaction with human immunodeficiency virus type 1 and hepatitis B virus testing in San Diego County. Med Care. 1996;34:1-10.
127. Zarbo RJ, Nakhleh RE, Walsh M. Customer satisfaction in anatomic pathology: a College of American Pathologists Q-Probes study of 3065 physician surveys from 94 laboratories. Arch Pathol Lab Med. 2003;127:23-29.
128. Jones BA, Novis DA. Follow-up of abnormal gynecologic cytology: a College of American Pathologists Q-Probes study of 16132 cases from 306 laboratories. Arch Pathol Lab Med. 2000;124:665-671.
129. Palm BTHM, Kant AC, Visser EA, et al. The effect of the family physician on improving follow-up after an abnormal Pap smear. Int J Qual Health Care. 1997;9:277-282.
130. Lavin C, Goodman E, Perlman S, et al. Follow-up of abnormal Papanicolaou smears in a hospital-based adolescent clinic. J Pediatr Adolesc Gynecol. 1997;10:141-145.
131. Wagner TH, Engelstad LP, McPhee SJ, et al. The costs of an outreach intervention for low-income women with abnormal Pap smears. Prev Chronic Dis. 2007;4:1-10.
132. Engelstad LP, Stewart S, Otero-Sabogal R, et al. The effectiveness of a community outreach intervention to improve follow-up among underserved women at highest risk for cervical cancer. Prev Med. 2005;41:741-748.
133. Agurto I, Sandoval J, de la Rosa M, et al. Improving cervical cancer prevention in a developing country. Int J Qual Health Care. 2006;18:81-86.
134. Hou SI. Stage of adoption and impact of direct-mail communications with and without phone intervention on Chinese women's cervical smear screening behavior. Prev Med. 2005;41:749-756.
135. Marcus AC, Kaplan CP, Crane LA, et al. Reducing loss-to-follow-up among women with abnormal Pap smears: results from a randomized trial testing an intensive follow-up protocol and economic incentives. Med Care. 1998;36:397-410.
136. Holloway RM, Wilkinson C, Peters TJ, et al. Cluster-randomised trial of risk communication to enhance informed uptake of cervical screening. Br J Gen Pract. 2003;53:620-625.
137. Sheridan SL, Harris RP, Woolf SH; Shared Decision-Making Workgroup of the US Preventive Services Task Force. Shared decision making about screening and chemoprevention: a suggested approach from the U.S. Preventive Services Task Force. Am J Prev Med. 2004;26:56-66.
138. Shahangian S. Laboratory-based health screening: perception of effectiveness, biases, utility, and informed/shared decision making. Lab Med. 2006;37:210-216.
139. Bu D, Pan E, Walker J, et al. Benefits of information technology-enabled diabetes management. Diabetes Care. 2007;30:1137-1142.
140. Rothschild JM, McGurk S, Honour M, et al. Assessment of education and computerized decision support interventions for improving transfusion practice. Transfusion. 2007;47:228-239.