Pediatric Urology: Surgical Complications and Management
About this ebook
Pediatric Urology: Surgical Complications and Management, 2nd edition focuses on the most common problems that can occur during pediatric urologic surgery and how best to resolve them, ensuring the best possible outcome for the patient.
As well as being thoroughly revised with the latest management guidelines, this edition introduces a host of clinical case studies highlighting real-life problems encountered during urologic surgery and the tips and tricks surgeons use to resolve them. These will be invaluable for urology trainees learning their trade, as well as for those preparing for Board or other specialty exams. Chapters include problem-solving sections and key take-home points.
In addition, high-quality teaching videos showing urologic surgery in action are available via the companion website, an invaluable tool for all those seeking to improve their surgical skills.
The book is edited by an experienced international trio of urologists who have recruited the world’s leading experts, resulting in a uniform, high-quality and evidence-based approach to the topic.
Pediatric Urology: Surgical Complications and Management, 2nd edition is essential reading for all urologists, especially those specializing in pediatric urology and urologic surgery, as well as general surgeons.
Pediatric Urology - Prasad P. Godbole
Contributors
Ardavan Akhavan, MD
Assistant Professor of Pediatric Urology
Director of Pediatric Minimally Invasive and Robotic Surgery
James Buchanan Brady Urological Institute
Johns Hopkins Hospital
Baltimore, MD, USA
Angela M. Arlen, MD
Fellow
Emory University School of Medicine;
Children’s Healthcare of Atlanta
Atlanta, GA, USA
Paul F. Austin, MD, FAAP
Director of Pediatric Urology Research
Associate Professor of Urologic Surgery
St Louis Children’s Hospital
Washington University School of Medicine
St Louis, MO, USA
Ben Bridgewater, MBBS, PhD, FRCS(CTh)
Consultant Cardiac Surgeon
Clinical Director and Director of Clinical Audit
University Hospital of South Manchester NHS Foundation Trust
Manchester, UK
Nicol C. Bush, MD, MSc
Pediatric Urology Section
Children’s Medical Center and University of Texas Southwestern Medical Center
Dallas, TX, USA
Anthony A. Caldamone, MD
Professor of Surgery (Urology) and Pediatrics
Alpert Medical School of Brown University
Hasbro Children’s Hospital
Providence, RI, USA
Job K. Chacko, MD
Clinical Pediatric Urologist
Rocky Mountain Pediatric Urology
Denver, CO, USA
David J. Chalmers, MD
Pediatric Urology Fellow
Children’s Hospital Colorado
Aurora, CO, USA
Sarah M. Creighton, MD, FRCOG
Consultant Gynaecologist
University College London Hospital
London, UK
Ahmed A. Darwish, MD, MS, FRCS, FEBPS
Lecturer in Paediatric Surgery
Ain Shams University, Cairo, Egypt;
Senior Clinical Fellow
Pediatric Surgery and Urology
University Hospital of Wales, Cardiff, UK
W. Robert DeFoor Jr, MD, MPH, FAAP
Director, Clinical Research
Co-Director, Pediatric Urology Fellowship Program
Division of Pediatric Urology
Cincinnati Children’s Hospital
Cincinnati, OH, USA
Divyesh Y. Desai, MB MS, MCh(Urol), FEAPU
Paediatric Urologist and Director
Urodynamics Unit
Great Ormond Street Hospital for Children NHS Foundation Trust
London, UK
Steven G. Docimo, MD
Chief Medical Officer
Children’s Hospital of Pittsburgh of UPMC;
Vice President for Children’s Subspecialty Services
UPMC Physician Services Division;
Professor of Pediatric Urology
University of Pittsburgh School of Medicine
Pittsburgh, PA, USA
Ahmad Elderwy, MD
Assistant Professor
Pediatric Urology Unit
Assiut Urology and Nephrology Hospital
Faculty of Medicine
Assiut University
Assiut, Egypt
Jonathan S. Ellison, MD
Fellow, Department of Urology
University of Michigan Health System and Medical School
Ann Arbor, MI, USA
Kathryn Evans, FRCS(Paed Surg)
Consultant Paediatric Urologist
St George’s Hospital
London, UK
Philippa Evans, BA, MBBS, MRCP, FRCA
Consultant Paediatric Anaesthetist
Great Ormond Street Hospital for Children NHS Foundation Trust
London, UK
Walid A. Farhat, MD, FRCS(C), FAAP
Associate Professor
Paediatric Urology
The Hospital for Sick Children
Toronto, ON, Canada
Marie-Klaire Farrugia, MD, MD(Res), FRCSEd(Paed Surg)
Consultant Paediatric Urologist
Department of Paediatric Surgery
Chelsea and Westminster Hospital
London, UK
Neil C. Featherstone, MBChB, BSc, PhD, FRCS(Paed Surg)
Specialist Registrar Paediatric Urology
London Deanery Training Fellow in Paediatric Urology
Great Ormond Street Hospital for Children NHS Foundation Trust
London, UK
Fernando Ferrer, MD, FAAP, FACS
Vice Chairman, Department of Surgery
Associate Professor Surgery (Urology) and Pediatrics (Oncology)
University of Connecticut School of Medicine;
Executive Vice President, Medical Affairs
Surgeon-in-Chief and Director
Division of Pediatric Urology
Connecticut Children’s Medical Center
University of Connecticut School of Medicine
Hartford, CT, USA
Janelle A. Fox, MD
Fellow, Pediatric Urology
University of Pittsburgh School of Medicine
Pittsburgh, PA, USA
Andrew L. Freedman, MD, FAAP
Director, Pediatric Urology
Vice Chairman for Pediatric Surgical Services
Department of Surgery
Cedars-Sinai Medical Center
Los Angeles, CA, USA
Dominic Frimberger, MD
Professor of Urology
Pediatric Urology
The Children’s Hospital of Oklahoma
Oklahoma City, OK, USA
Joseph M. Gleason, MD
Assistant Professor of Urology
University of Tennessee Health Science Center;
Pediatric Urologist
Le Bonheur Children’s Hospital
Memphis, TN, USA
Prasad P. Godbole, FRCS, FRCS(Paeds), FEAPU
Clinical Director
Division of Surgery and Critical Care;
Consultant Paediatric Urologist
Sheffield Children’s NHS Foundation Trust
Sheffield, UK
Richard Grady, MD
Professor of Urology
University of Washington School of Medicine
Division of Pediatric Urology
Seattle Children’s Hospital
Seattle, WA, USA
Miriam Harel, MD
Pediatric Urology Fellow
Connecticut Children’s Medical Center
University of Connecticut School of Medicine
Hartford, CT, USA
Piet Hoebeke, MD, PhD
Head, Department of Urology
Section of Pediatric Urology and Urogenital Reconstruction
Ghent University Hospital
Gent, Belgium
Amy Hou, MD
Fellow, Pediatric Urology
Children’s Hospital Colorado
Aurora, CO, USA
Richard S. Hurwitz, MD, FAAP
Pediatric Urologist
Department of Urology
Kaiser Permanente Medical Center
Los Angeles, CA, USA
Kim A. R. Hutton, MBChB, FRCS(Paeds), ChM
Consultant Paediatric Surgeon and Urologist
University Hospital of Wales
Cardiff, UK
Martin Kaefer, MD
Professor of Urology
Department of Pediatric Urology
Indiana University School of Medicine
Indianapolis, IN, USA
Chris Kimber, FRACS, FRCS, MAICD
Head of Paediatric Surgery and Urology
Southern Health;
Consultant Paediatric Urologist
Royal Children’s Hospital
Melbourne, Australia
Andrew J. Kirsch, MD
Professor of Urology
Chief, Division of Pediatric Urology
Emory University School of Medicine;
Georgia Urology PA;
Children’s Healthcare of Atlanta
Atlanta, GA, USA
Martin A. Koyle, MD, FAAP, FACS, FRCS(Eng), FRCSC
Professor of Surgery
University of Toronto;
Division Head, Paediatric Urology
Women’s Auxiliary Chair in Urology and Regenerative Medicine
The Hospital for Sick Children
Toronto, ON, Canada
Bradley P. Kropp, MD
Professor
University of Oklahoma Health Sciences Center
Oklahoma City, OK, USA
Nicolaas Lumen, MD
Department of Urology
Section of Oncology and Urogenital Reconstruction
Ghent University Hospital
Gent, Belgium
Elizabeth Malm-Buatsi, MD
Pediatric Urology Fellow
Department of Urology
University of Oklahoma
Oklahoma City, OK, USA
Paul Merguerian, MD, MS, FAAP
Professor of Urology
University of Washington;
Chief, Division of Urology
Seattle Children’s Hospital
Seattle, WA, USA
Lina Michala, MRCOG
Lecturer in Paediatric and Adolescent Gynaecology
University of Athens
Athens, Greece
Eugene Minevich, MD, FAAP, FACS
Professor
Division of Pediatric Urology
Cincinnati Children’s Hospital
Cincinnati, OH, USA
Stan Monstrey, MD, PhD
Head, Department of Plastic Surgery
Ghent University Hospital
Gent, Belgium
Finn Nesbitt, BSc, MBChB, FRCA
Specialist Registrar, Anaesthetics
Great Ormond Street Hospital for Children NHS Foundation Trust
London, UK
Paul H. Noh, MD, FACS, FAAP
Assistant Professor
Director, Minimally Invasive Surgery
Division of Pediatric Urology
Cincinnati Children’s Hospital Medical Center
Cincinnati, OH, USA
Michael C. Ost, MD
Associate Professor of Urology
Division Chief, Pediatric Urology
Vice Chairman, Department of Urology
University of Pittsburgh School of Medicine
Pittsburgh, PA, USA
Maurizio Pacilli, MBBS(Hons), MD, MRCS(Eng)
Specialty Registrar Paediatric Surgery
Oxford University Hospitals NHS Trust
Oxford, UK
Blake W. Palmer, MD
Assistant Professor, Pediatric Urology
University of Oklahoma Health Sciences Center
Oklahoma City, OK, USA
John M. Park, MD
Cheng Yang Chang Professor of Pediatric Urology
Director of Pediatric Urology
Department of Urology
University of Michigan Health System and Medical School
Ann Arbor, MI, USA
Dimitri A. Parra, MD
Pediatric Interventional Radiologist
Department of Diagnostic Imaging
The Hospital for Sick Children;
Assistant Professor
Department of Medical Imaging
University of Toronto
Toronto, ON, Canada
Craig A. Peters, MD
Chief, Division of Surgical Innovation, Technology and Translation
Sheikh Zayed Institute for Pediatric Surgical Innovation
Children’s National Health System;
Professor of Urology and Pediatrics
George Washington University and University of Virginia
Washington, DC, USA
Ashok Rijhwani, MS, MCh, FRCS, DNB
Consultant Pediatric Surgeon and Transplant Surgeon
Columbia Asia Hospital
Bangalore, India
Michael Ritchey, MD
Professor of Urology
Mayo Clinic College of Medicine
Scottsdale, AZ, USA
Rodrigo L. P. Romao, MD
Staff Surgeon
Division of Urology and Division of Pediatric General and Thoracic Surgery
Department of Surgery
IWK Health Centre;
Assistant Professor, Departments of Surgery and Urology
Dalhousie University
Halifax, NS, Canada
Jonathan H. Ross, MD
Chief, Division of Pediatric Urology
University Hospitals Rainbow Babies and Children’s Hospital;
Professor of Urology
Case Western Reserve University School of Medicine
Cleveland, OH, USA
Joao L. Pippi Salle, MD, PhD, FAAP, FRCSC
Head, Division of Urology
Department of Surgery
Sidra Medical and Research Center
Doha, Qatar
Andrew Sinclair, FRCS(Urol)
Consultant Urologist
Stepping Hill Hospital
Stockport, UK
Naima Smeulders
Department of Paediatric Urology
Great Ormond Street Hospital for Children NHS Foundation Trust
London, UK
Warren T. Snodgrass, MD
Professor of Urology
Chief of Pediatric Urology
Department of Urology, Pediatric Urology Section
Children’s Medical Center and University of Texas Southwestern Medical Center
Dallas, TX, USA
Henrik Steinbrecher, BSc(Hons), MBBS, MS, FRCS, FRCS(Paed)
Consultant Pediatric Urologist
Southampton General Hospital
Southampton, UK
Ramnath Subramaniam, MBBS, FRCS(Paed), FEAPU
Consultant Pediatric Urologist
Leeds Teaching Hospitals NHS Trust
Leeds, UK
Kelly A. Swords, MD, MPH
A. Barry Belman Fellow in Pediatric Urology
Children’s National Health System
Washington, DC, USA
Mark Thomas, BSc, MBBChir, FRCA
Consultant Paediatric Anaesthetist
Great Ormond Street Hospital for Children NHS Foundation Trust
London, UK
Mark Tyson, MD
Chief Resident of Urology
Mayo Clinic College of Medicine
Scottsdale, AZ, USA
Shabnam Undre
Department of Paediatric Urology
Great Ormond Street Hospital for Children NHS Foundation Trust
London, UK
Vijaya M. Vemulakonda, MD, JD
Assistant Professor, Pediatric Urology
University of Colorado School of Medicine
Children’s Hospital Colorado
Aurora, CO, USA
Stephanie A. Warne, FRCS(Paed Surg)
Locum Consultant in Paediatric Urology
Addenbrooke’s Hospital
Cambridge, UK
Nathalie Webb, MBBS(Hons), FRACS(Urol)
Head of Paediatric Urology
Monash Children’s Hospital
Clayton, Victoria, Australia
Elias Wehbi, MSc, MD, FRCSC
Assistant Clinical Professor
Department of Urology
University of California, Irvine;
Children’s Hospital of Orange County and UCI Medical Center
Orange, CA, USA
Robert Wheeler, FRCS, MS, FRCPCH, LLB(Hons), LLM
Consultant Neonatal and Paediatric Surgeon
Director, Department of Clinical Law
University Hospital of Southampton
Southampton, UK
Duncan T. Wilcox, MBBS, MD, FEAPU
Ponzio Family Chair in Pediatric Urology;
Professor and Chair
Department of Pediatric Urology
Children’s Hospital Colorado
Aurora, CO, USA
Ian E. Willetts, BSc(Hons), MBChB(Hons), DM(Oxon), FRCS(Eng), FRCSEd, FRCS(Paed Surg)
Consultant Paediatric Surgeon/Urologist
Oxford University Hospitals NHS Trust
Oxford, UK
Alun Williams, FRCS(Paed)
Consultant Paediatric Urologist and Transplant Surgeon
Nottingham University Hospitals NHS Trust
Nottingham, UK
Lynne L. Woo, MD
Assistant Professor of Pediatric Urology
University Hospitals Rainbow Babies and Children’s Hospital;
Case Western Reserve University School of Medicine
Cleveland, OH, USA
Dan Wood, PhD, FRCS(Urol)
Consultant in Adolescent and Reconstructive Urology
University College London Hospitals;
Honorary Consultant Urologist
Great Ormond Street Hospital for Children NHS Foundation Trust
London, UK
Mark Woodward, MD, FRCS(Paed Surg)
Consultant Paediatric Urologist
Bristol Royal Hospital for Children
Bristol, UK
Jenny H. Yiee, MD
Associate
Kaiser Permanente Southern California Medical Group
Los Angeles, CA, USA
Preface
The editors are delighted to present the second edition of Pediatric Urology: Surgical Complications and Management. Since the first edition was published in 2008, pediatric urology has advanced both in our understanding of disease processes and in technology. However, it is incumbent on any clinician undertaking pediatric urology to maintain a safe, high-quality and outcome-oriented practice. While recognizing and managing complications of surgical interventions is important, it is imperative that their prevention is also emphasized. The second edition aims to address these issues.
The general format of the textbook has been changed to include some case-based discussions and a summary ‘dos and don’ts’ take-home message for nearly every chapter. Where applicable, certain ‘tips and tricks’ to maximize efficiency and minimize the risk of complications have been included. In keeping with the IT age, online videos demonstrating the techniques, where appropriate, form an added feature of the online second edition. There have been substantial updates to chapters, and new chapters have been added based on feedback received from the first edition.
The editors believe that the second edition will be useful for practicing pediatric urologists, urologists in training, pediatric surgeons or indeed any surgeon undertaking office or specialist pediatric urology.
We are indebted to our contributors, who have been very supportive in submitting such high-quality chapters within the tight deadlines required. The efforts of Jane Andrew and Rachel Wilkie at Wiley cannot go unrecognized; without their assistance this book would have been a mere pipe dream.
Finally, as always, we are very grateful to our families who have stood by us and supported us by giving us the time to be able to undertake this worthwhile project.
PG, MK, DW
March 2015
PART I
Principles of Surgical Audit
CHAPTER 1
How to set up prospective surgical audit
Andrew Sinclair¹ and Ben Bridgewater²
¹Stepping Hill Hospital, Stockport, UK
²University Hospital of South Manchester NHS Foundation Trust, Manchester, UK
KEY POINTS
Clinical audit is one of the keystones of clinical governance
Audit can be conducted prospectively or retrospectively and robust data collected for patient benefit
A well-performed audit can inform patients about surgical results and drive continuous quality improvement
Data sources range from local hospital statistics to nationally reported outcomes
Paper-based audit is time-consuming and is being replaced by IT-based support to clinical care pathways
Introduction
Clinical audit is one of the ‘keystones’ of clinical governance. A surgical department that subjects itself to regular and comprehensive audit should be able to provide data to current and prospective patients about the quality of the services it provides, as well as reassurance to those who pay for and regulate health care. Well-organized audit should also enable the clinicians providing services to continually improve the quality of care they deliver.
There are many similarities between audit and research but, historically, audit has often been seen as the poor relation. For audit to be meaningful and useful it must, like research, be methodologically robust and have sufficient ‘power’ to make useful observations; it would be easy to gain false reassurance about the quality of care by looking at outcomes in a small or ‘cherry-picked’ group of straightforward cases. Audit can be conducted retrospectively or prospectively and, again like research, prospective audit has the potential to provide the most useful data, and routine prospective audit provides excellent opportunities for patient benefit [1, 2, 3, 4].
Much of the experience we draw on comes from cardiac surgery, where there is a long history of structured data collection, both in the USA and the UK. This was initially driven by clinicians [1, 2, 3, 4, 5, 6, 7], but more recently has been influenced by politicians and the media [7, 8]. Cardiac surgery is regarded as an easy specialty to audit in view of the high volume and proportion of a single operation (coronary artery bypass graft) in most surgeons’ practice set against a small but significant hard measurement endpoint of mortality (which is typically around 2%).
In the UK recently, increasing focus has been placed on national clinical audit. A Public Inquiry into the events at Mid Staffordshire NHS Trust found unsatisfactory care that had gone on for some time, despite the existence of data ‘in the system’ that identified potential problems [9]. The UK Government’s response to these events has been to drive public reporting of outcomes down to the level of individual surgeons for 10 specialties, including gastrointestinal surgery, interventional cardiology and urology. These data were published in 2013, and the process has led to marked improvements in engagement with national clinical audit in the UK and has dramatically increased data quality and the utility of the audits [10, 11].
Why conduct prospective audit?
There are a number of reasons why clinicians might decide to conduct a clinical audit (Box 1.1).
Box 1.1 Possible reasons for conducting clinical audit.
As a result of local clinical interests
As a result of clinical incident reporting
To comply with regional or national initiatives
To inform patients about surgical results and support choice
To drive continuous quality improvement
To comply with health care regulation
To engage patients in decisions about their health care
To provide public reassurance
As a result of local clinical interests
Historically, many audit projects have been undertaken as a result of local clinical interests. This may reflect interest in a particular procedure by an individual or a group, or may reflect concern about specific outcomes for a particular operation.
As a result of clinical incident reporting
The major disciplines that ensure high quality care and patient safety are clinical risk management and audit. Most health care organizations should have sophisticated systems in place to report and learn from adverse incidents and near misses [8]. Reporting is usually voluntary and investigated according to a ‘fair and just culture’, but it is unlikely that all incidents that occur are reported. If an adverse incident is recorded, the record identifies that it has occurred but gives no indication of how often it has happened previously, and only limited indication of the likelihood of recurrence. A mature organization should have clear links between risk reporting and audit, and choose topics for the latter based on data from the former.
To comply with regional or national initiatives
Increasingly, audits are being driven by organizations that exist outside a hospital. These may include audits led by professional societies, regulatory bodies or regional/national quality improvement and transparency initiatives.
To inform patients
Across the world health care is becoming more patient-focused. The modern health care consumer will sometimes want to choose their health care provider on the basis of that hospital or surgeon’s outcomes. Even if patients are not choosing between different hospitals, recent data from the UK suggest that patients are interested in outcomes of surgery by their doctors [13]. Patients’ views should inform decisions about what to audit, and they may be interested in many areas which will be dependent on the planned operation but may include data on mortality, success rates, length of stay, the incidence of postoperative infection and other complications, and patients’ experience data.
To drive continuous quality improvement
It has been shown quite clearly from cardiac surgery that structured data collection, analysis and feedback to clinicians improves the quality of outcomes. This has been detected both when data are anonymous and where named surgeon and hospital outcomes have been published [1, 2, 3, 4]. The magnitude of this effect is large; in the UK, a system of national reporting for surgical outcomes was introduced in 2001 and has led to a 40% reduction in risk adjusted mortality [4]. The introduction of any drug showing a similar benefit would be heralded as a major breakthrough, but routine national audit has not been embraced by most surgical specialties. Simply collecting and reviewing data seems to drive improvement, but it is likely that the magnitude of the benefits derived and the speed at which improvements are seen can be maximized by developing a clear understanding of what data to collect and using optimal managerial structures and techniques to deliver better care. There is some debate about whether publicly disclosing health care outcomes encourages clinicians to avoid taking on high-risk cases [1,4,7,14,15], but recent experience from the UK certainly confirms that public reporting does drive compliance with national audit with all its inherent benefits.
To comply with health care regulation
Healthcare regulators have a responsibility to ensure that hospitals, and the clinicians working in them, are performing to a satisfactory standard. Whilst some assurance can be gained from examining the systems and processes in place within an organization, the ‘proof of the pudding’ is in demonstrating satisfactory clinical results. This proof is important and can only come from analyzing benchmarked outcomes data. Regulators of individual clinicians, such as the American Boards in the USA and the General Medical Council in the UK, are changing their emphasis so that it is becoming more important for clinicians to prove they are doing a good job rather than this being assumed. Routine use of structured outcomes data is now supposed to be in place and is included in the current proposals for professional revalidation in the UK (the process by which doctors now have to prove they are fit to continue to practice) [16, 17].
To engage patients in decisions about their health care
As society becomes supported by better mobile devices and connectivity, people are looking to the internet to support many choices that they make, including choices about health care. It is vital that the medical profession and health care organizations accept this and provide patients and their carers with appropriate information to empower and engage them in the concept of ‘shared decision making’ with their health care advisers.
To provide public reassurance
It is certainly true in the UK, but is possibly true more widely, that the trust that has traditionally been placed in the medical profession – and, indeed, medical professionals – is being eroded by repeated failure of clinical governance and increasing societal expectations. Maintaining a trusting relationship between an informed public and a trustworthy profession is in everyone’s best interests, and this can be supported by transparent clinical audit data.
What data can be used for audit?
Routine hospital data
Most health care systems are rich in data and poor in information. Medicare data in the USA and Hospital Episode Statistics in the UK contain information about patient demographics, diagnoses, procedures, mortality, length of stay, day-case rates and readmissions. These information systems are developed for administrative or financial purposes rather than clinical ones, but may potentially contain much useful clinical data and will often have the capacity to provide some degree of adjustment for case mix. In the UK these data have historically not been trusted by clinicians, but recently there has been increasing engagement between doctors and the data, which is improving clinical data quality and increasing confidence. Many UK hospitals now have systems to benchmark their outcomes against national or other peer groups, to flag up areas of good practice, detect outlying performance and engage in quality improvement [18].
Ideally, hospitals should have clearly defined systems in place to use the data: for example, they should regularly compare their outcomes for chosen procedures against an appropriately selected group of other hospitals. Significant ‘good’ practice should be celebrated and shared with others inside and outside the organization, and bad outcomes should be investigated. It is not uncommon that high mortality or other clinical indicator rates have a clear explanation other than ‘bad’ clinical practice. The data may be incorrect, or there may be issues about classification or attribution that explain away an apparent alert, but structured investigation should improve the knowledge that both the organization and the clinician have of their data systems, and may lead to insights that drive improvements in patient care.
Specialty-specific multi-center data
A number of surgical disciplines in the USA and the UK have embarked upon national programs to collect prospective disease- or operation-specific datasets. These are usually clinically driven and have benefits above routine hospital data in that a more useful dataset can be designed for specific purposes and, in particular, can look in more detail at subtleties of case mix and specific clinical outcomes in a way that is more robust and sensitive than that derived from routine hospital administration systems. Contemporary cardiac surgical datasets collect variables on preoperative patient characteristics, precise operative data and postoperative mortality, ICU stay, hospital stay, re-explorations, infection, renal failure, tracheostomy, blood usage, stroke rate and intra-aortic balloon pump use. The preoperative and operative data allow outcomes to be adjusted for case complexity, preventing comparison of ‘apples and oranges’, by various algorithms such as the EuroSCORE [20]. Data for 10 such audits are now published in the UK down to the level of individual surgeon [10, 11].
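In practical terms, such risk adjustment reduces to comparing the deaths observed in a series with the deaths expected from each patient’s predicted probability of death, usually summarized as an observed-to-expected (O/E) ratio. The short Python sketch below illustrates that calculation; the per-patient risks and outcomes are hypothetical placeholders rather than output from the EuroSCORE or any other published model.

```python
# Illustrative sketch only: compares observed mortality in a series with the
# mortality expected from each patient's predicted risk (for example a
# EuroSCORE-style probability). The risks and outcomes below are hypothetical,
# not real model output or patient data.

def observed_vs_expected(outcomes, predicted_risks):
    """Return observed deaths, expected deaths and the O/E ratio.

    outcomes        -- list of 0/1 values (1 = death)
    predicted_risks -- per-patient predicted probabilities of death
    """
    observed = sum(outcomes)
    expected = sum(predicted_risks)
    ratio = observed / expected if expected else float("nan")
    return observed, expected, ratio

# Hypothetical series of eight patients
outcomes = [0, 0, 1, 0, 0, 0, 0, 1]
risks    = [0.02, 0.05, 0.30, 0.04, 0.08, 0.02, 0.10, 0.25]

obs, exp, oe = observed_vs_expected(outcomes, risks)
print(f"Observed {obs}, expected {exp:.2f}, O/E ratio {oe:.2f}")
# An O/E ratio close to 1 suggests results in line with case mix; a ratio well
# above 1 may warrant investigation, subject to appropriate confidence limits.
```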
Setting up specialty-specific multi-center audit raises a number of challenges including defining clarity of purpose, gaining consensus, agreeing a dataset, securing resource, overcoming information technology and methodology issues, and clarifying ownership of data, information policies and governance arrangements [21]. In cardiac surgery there is now increasing international dialogue between professional organizations, moving towards the collection of standardized data to allow widespread comparisons.
Locally-derived data
Individual hospital departments will often decide to audit a specific theme that may be chosen because of clinical risk management issues, subspecialist interest or other concerns. In the UK National Health Service, dedicated resources for audit were historically ‘top sliced’ from the purchasers of health care to generate a culture of clinical quality improvement, but commentators are divided about whether significant benefits have been realized from this approach [13]. In the early stages, large amounts of audit activity were undertaken, but there were significant failures in subsequently delivering appropriate change. To maximize the chances of improving care as a result of audit the following should be considered. Will the sample size be big enough to be useful? What dataset is needed? Will that data be accessible from existing hospital case notes or will prospective data collection be necessary? Is there an existing robust benchmark to which the results of the audit can be compared? How will the ‘significance’ of the results be analyzed? Does conducting the audit have financial implications? Will the potential results of the audit have financial implications? Are all stakeholders who may need to change their behavior as a result of the audit involved in the process?
Techniques of data collection
Historically, the majority of audit activity was conducted from retrospective examination of case notes, which was labor intensive and relied on the accuracy and completeness of previously recorded data. There has subsequently been increasing use of prospective data collection, much of which has been based on paper forms. This obviously improves the quality of data, but again requires time and effort from clinical or administrative staff for completion. The development of care pathways whereby multidisciplinary teams manage clinical conditions in predefined ways is thought to improve patient outcomes and will generate structured data that are readily amenable to audit. The use of modern information technology to support care pathways is the ‘holy grail’ of effective audit – all data are generated for clinical use and the relevant subset of that data can then be examined for any relevant purpose. The care pathway can be adapted to include new or alternative variables as required. All data collection can be networked and wireless, assuming issues about data access, confidentiality and security are resolved. Maximizing benefits from this approach raises a number of challenges, including implementing major changes in clinical practice and medical culture.
Good practice in audit
A clinical department should benefit from a clear forward plan about its audit activity that should be developed by the multidisciplinary team in conjunction with patients and their carers. The audit activity should include an appropriate mix of national, local and risk management-driven issues and the specifics should depend on the configuration of services and local preferences. The plan should include thoughts about dissemination of results to users and potential users of the services. The multidisciplinary team should include doctors, professions allied to medicine, and administration staff. Adherence to the audit plan should be monitored through the departmental operational management structures. For the department to be successful in improving care as a result of audit there should be clear understanding of effective techniques of change management.
Arguments against audit
In the UK, audit has been an essential part of all doctors’ job plans for a number of years, but audit activity remains sporadic. In some specialties, such as those included in the NHS England transparency agenda, comprehensive audit is being led by clinicians and driven by politicians and the media [10, 11]. In other areas there remains little or no coordinated national audit activity. This may be due to a perceived lack of benefit from audit by clinicians, along with failure to meet challenges in gaining consensus or difficulties in securing adequate resources. The experience from cardiac surgery and many other national audits in the UK is that structured national audit improves the quality of mortality outcomes [1, 2, 3, 4]. It is likely that other issues, such as complication rates, are also reduced, with associated cost savings, and as such effective audit may well pay for itself.
Conclusion
In modern health care, patients are increasingly looking to be reassured about the quality of care they receive, and doctors are being driven towards demonstrating their competence rather than this being assumed. Hospital departments should have a robust clinical governance strategy that should include ‘joined-up’ clinical risk management and audit activity. There are strong arguments that structured audit activity improves the quality of outcomes and, for these benefits to be maximized, there should be involvement of multidisciplinary teams supported by high-quality operational management.
DOS AND DON’TS
Do
Continually work to evaluate the quality of care you deliver for patients
Develop a strategy for clinical audit which incorporates the relevant area of your practice and is methodologically robust
Benchmark your practice against accepted best practice
Develop a link between learning from risk management and your clinical audit program
Develop links between clinical audit and departmental/individual reflective practice
Evaluate your personal surgical audit and know what to do if your results are not ‘as expected’
Be transparent about your audit program and your results of care
Don’t
Undertake an audit that is not methodologically robust
Fail to implement changes resulting from an audit which demonstrates unsatisfactory processes or outcomes
Derive false reassurance from benchmarking against time-expired clinical standards
Assume that patients and the public have no interest in the outcome of care derived from your audit program.
References
Hannan EL, Kilburn H Jr, Racz M, Shields E, Chassin MR. Improving the outcomes of coronary artery bypass surgery in New York State. JAMA 1994;271(10):761–6.
Grover FL, Shroyer LW, Hammermeister K, et al. A decade of experience with quality improvement in cardiac surgery using the Veterans Affairs and Society of Thoracic Surgeons national databases. Ann Surg 2001;234:464–74.
Hammermeister KE, Johnson R, Marshall G, Grover FL. Continuous assessment and improvement in quality of care. A model from the Department of Veterans Affairs Cardiac Surgery. Ann Surg 1994;219:281–90.
Bridgewater B, Grayson AD, Brooks N, et al. Has the publication of cardiac surgery outcome data been associated with changes in practice in Northwest England: an analysis of 25,730 patients undergoing CABG surgery under 30 surgeons over 8 years. Heart 2007;93(6). PMCID: PMC1955202.
Keogh BE, Kinsman R. Fifth National Adult Cardiac Surgical Database Report 2003. London: Society of Cardiothoracic Surgeons, 2004.
Bridgewater B. Society for Cardiothoracic Surgery in GB and Ireland. Heart 2010;96(18):1441–3.
http://society.guardian.co.uk/nhsperformance/story/0,,1439210,00.html
Marshall M, Shekelle P, Brook R, Leatherman S. Dying to Know: Public Release of Information about Quality of Healthcare. London: Nuffield Trust and Rand, 2000.
Francis R, Chair. Mid Staffordshire NHS Foundation Trust Public Inquiry. http://www.midstaffspublicinquiry.com.
NHS England. Everyone Counts. http://www.england.nhs.uk/everyonecounts/.
Bridgewater B, Irvine D, Keogh B. NHS transparency. BMJ 2013;347:f4402.
Department of Health. An Organisation with a Memory. Report of an Expert Group on Learning from Adverse Events in the NHS. London: Department of Health, 2000.
Department of Health. Good Doctors, Safer Patients. Proposals to Strengthen the System to Assure and Improve the Performance of Doctors and to Protect the Safety of Patients. London: Department of Health, 2006.
Chassin MR, Hannan EL, DeBuono BA. Benefits and hazards of reporting medical outcomes publicly. N Engl J Med 1996;334(6):394–8.
Dranove D, Kessler D, McClellan M, Satterthwaite M. Is more information better? The effects of report cards on healthcare providers. J Polit Econ 2003;111:555–88.
The Stationery Office. Trust, Assurance and Safety. The Regulation of Health Professionals in the 21st Century. London: The Stationery Office, 2007.
General Medical Council. Revalidation. http://www.gmc-uk.org/doctors/revalidation.asp.
Dr Foster. www.drfoster.co.uk.
NHS Choices. http://www.nhs.uk/Pages/HomePage.aspx.
Roques F, Nashef SA, Michel P, et al. Risk factors and outcome in European cardiac surgery: analysis of the EuroSCORE multinational database of 19,030 patients. Eur J Cardiothorac Surg 1999;15:816–23.
Hickey GL, Grant SW, Cosgriff R, Dimarakis I, Pagano D, Kappetein AP, Bridgewater B. Clinical registries: governance, management, analysis and applications. Eur J Cardiothorac Surg 2013;44(4):605–14.
(Websites last accessed January 2015)
CHAPTER 2
Evaluating personal surgical audit and what to do if your results are not as expected
Andrew Sinclair¹ and Ben Bridgewater²
¹Stepping Hill Hospital, Stockport, UK
²University Hospital of South Manchester NHS Foundation Trust, Manchester, UK
KEY POINTS
Audit is the comparison of surgical results against a previously defined and accepted standard
Published results may be better than the norm because of submission and publication bias
Complexity-specific audit is important
Dealing with outlying performance can be ‘directive’ or ‘collaborative’ depending on the surgeon
Surgeons are responsible for ensuring satisfactory quality of care
Introduction
Any well-conducted audit should give information about systems and outcomes related to patient care. Data collection that generates new information about patient outcomes should be classified as research; to be regarded as audit, results need to be compared against a previously-defined and accepted standard. Often an audit will demonstrate satisfactory outcomes and this in itself may be a useful finding which should be of interest to patients, clinicians, managers, commissioners and regulators of health care. It is hoped that structured and regular audit data collection will lead to ongoing improvements in quality as described in Chapter 1. On occasions, audit results will be unacceptable and it is essential that this is recognized and acted upon.
Presentation and analysis of data
Effective audit requires clarity of purpose. When an audit is conceived the clinical question should be clearly stated and the data required to generate an answer should be defined. It is also important to be sure about the outcomes with which you will compare yourself, and there may be a number of options. Data on mortality or complication rates may be available from pooled national or regional registries [1, 2, 3, 4]. Results of specific series of cases may be published through peer review journals for individual hospitals or individuals, but these outcomes may often be better than the ‘norm’ because of submission and publication bias. False reassurance might be gained from comparing outcomes with outdated historical results; in cardiac surgery in the UK, a widely-accepted risk adjustment algorithm, the EuroSCORE [5], has been used to benchmark hospitals and surgeons in recent years. This was developed in a multi-center study in Europe in 1997 and improvements in overall quality of care in the UK are such that it no longer reflects current practice [6]. This concept of ‘calibration drift’ for cardiac surgery has been seen in both the UK and the USA and is important to take into account when using benchmarking for audit [7].
It is possible to compare outcomes between units or surgeons simply by using ‘crude’ or non-risk-adjusted data. Cardiac surgeons have focused on mortality as it is a robust primary end point. In pediatric urology, mortality is not frequent enough to provide a meaningful measure; more appropriate endpoints need to be developed and this is a challenge for the profession.
Using non-risk-adjusted data has simplicity and transparency on its side but it is not embraced with enthusiasm by the majority of surgeons. It is clear that there are quite marked differences in patient characteristics between different units in cardiac surgery, and this variability is probably greater between surgeons who have different subspecialist interests [8]. These issues apply to other areas of surgery. Many surgeons are concerned that any attempt to produce comparative performance using non-risk-adjusted data will stimulate a culture whereby higher-risk patients are denied surgery to help maintain good results – so-called risk-averse behavior. In order to make data comparable between individual surgeons and units there have been a number of attempts to adjust for operative risk in cardiac surgery [9, 10, 11, 12]. Other specialties will need to develop appropriate methodology; ideal tools should be accurate numerical predictors of observed risk (i.e. be correctly calibrated) and should discriminate appropriately across the spectrum of risk (i.e. accurately differentiate between lower- and higher-risk patients).
In addition to the appropriate use of risk adjustment, some units have found graphical techniques of presenting outcomes data useful for monitoring performance. Various techniques, such as cumulative summation (CUSUM) or variable life-adjusted display plots, have been used to help analyze results and detect trends or outlying performance at an early stage. These curves may be adapted to include predicted mortality to enable observed and expected mortality to be compared. These techniques are well described by Keogh and Kinsman [2]. More recently, interest is developing in measuring outcomes using statistical process control charts, which are widely used in the manufacturing industry. These charts use units of time, typically months when institutions are under scrutiny and the outcome of interest is mortality, and display actual mortality against expected mortality using control limits to define acceptable and unacceptable performance [12].
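As an illustration of the cumulative techniques mentioned above, the sketch below computes a simple variable life-adjusted display (VLAD) in Python: each case adds the difference between its predicted risk and its observed outcome to a running total, so the curve climbs while patients survive against expectation and falls when deaths outstrip prediction. The case series and predicted risks are hypothetical.

```python
# Minimal sketch of a variable life-adjusted display (VLAD). Each case adds
# (predicted risk - observed outcome) to a running total, so the curve drifts
# upwards when patients survive against expectation and downwards when deaths
# exceed it. The case series below is hypothetical.

def vlad(outcomes, predicted_risks):
    """Cumulative 'lives gained' relative to predicted risk."""
    total, curve = 0.0, []
    for died, risk in zip(outcomes, predicted_risks):
        total += risk - died          # +risk for a survivor, risk - 1 for a death
        curve.append(total)
    return curve

outcomes = [0, 0, 1, 0, 0, 1, 0, 0, 0, 0]           # 1 = death
risks    = [0.03, 0.05, 0.20, 0.02, 0.04, 0.10, 0.06, 0.03, 0.05, 0.02]

for case, value in enumerate(vlad(outcomes, risks), start=1):
    print(f"case {case:2d}: cumulative lives gained = {value:+.2f}")
```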
The use of funnel plots is becoming popular as a way of displaying hospital or individual mortality [13]. These are simply a plot of event rates against volume of surgery, and include exact binomial control limits to allow excessive mortality to be easily detected. They give a strong visual display of divergent performance [14]. They have been used to analyze routine data to define clinical case mix and compare hospital outcomes in urology [15]. These methods have been used in the UK to display mortality rates to patients and the public [16].
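A minimal sketch of how such control limits might be generated is given below, assuming SciPy is available. It takes the limits directly from the binomial quantile function around a benchmark event rate; the published funnel-plot method [14] adds an interpolation step for non-integer counts that this simplified version omits.

```python
# Sketch of funnel-plot control limits around a benchmark event rate. The
# limits are taken straight from the binomial quantile function; the published
# funnel-plot method adds an interpolation step for non-integer counts that
# this simplified version omits. Volumes and the benchmark are hypothetical.
from scipy.stats import binom

def funnel_limits(volumes, benchmark_rate, level=0.95):
    """Return (lower, upper) event-proportion limits for each case volume."""
    alpha = 1 - level
    limits = []
    for n in volumes:
        lower = binom.ppf(alpha / 2, n, benchmark_rate) / n
        upper = binom.ppf(1 - alpha / 2, n, benchmark_rate) / n
        limits.append((lower, upper))
    return limits

benchmark = 0.02                       # e.g. a 2% pooled mortality rate
volumes = [50, 100, 200, 400, 800]
for n, (low, high) in zip(volumes, funnel_limits(volumes, benchmark)):
    print(f"n={n:4d}: 95% limits {low:.3f} to {high:.3f}")
# A unit or surgeon plotted above the upper limit for its volume is a potential
# outlier and should trigger data validation and further review.
```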
Classical statistical techniques may be used to compare individual outcomes with a benchmark. When analyzing data from an individual hospital or surgeon it is probably appropriate to select 95% confidence intervals, such that if significant differences are observed there is a 1 in 20 probability that these are due to chance alone. Things become more difficult when many hospitals or surgeons are compared with a national benchmark; in the UK, for example, there are over 200 cardiac surgeons whose results may be compared against the pooled mortality. Using 95% confidence intervals with this group would raise a high probability of detecting outlying performance due to chance alone because of multiple comparisons, and it is appropriate to adjust for this. The choice of confidence intervals will always end up as a balance between ensuring that true outlying performance is detected without inappropriately creating stigma for surgeons with satisfactory outcomes [17]. It may be useful to select different confidence limits for different purposes. Tight limits may be appropriate for local supportive clinical governance monitoring; one hospital in northwest England launches an internal investigation into practice if a cardiac surgeon’s results fall outside 80% confidence limits, but wider limits of 99% have been used to report those surgeons’ outcomes to the public [18].
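To make these trade-offs concrete, the hypothetical sketch below computes an exact (Clopper–Pearson) confidence interval for a single surgeon’s mortality rate at 80%, 95% and 99% levels, and shows a simple Bonferroni-style widening of the interval when 200 surgeons are compared against the same benchmark at once. The figures are illustrative only, and SciPy is again assumed.

```python
# Sketch: exact (Clopper-Pearson) confidence interval for one surgeon's
# mortality rate, checked against a benchmark at different confidence levels.
# All numbers are hypothetical; the Bonferroni-style correction is one simple
# way of allowing for many simultaneous comparisons.
from scipy.stats import beta

def clopper_pearson(deaths, cases, level=0.95):
    """Exact binomial confidence interval for a proportion."""
    alpha = 1 - level
    lower = beta.ppf(alpha / 2, deaths, cases - deaths + 1) if deaths > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, deaths + 1, cases - deaths) if deaths < cases else 1.0
    return lower, upper

deaths, cases, benchmark = 6, 150, 0.02
for level in (0.80, 0.95, 0.99):
    low, high = clopper_pearson(deaths, cases, level)
    verdict = "above benchmark" if benchmark < low else "consistent with benchmark"
    print(f"{level:.0%} CI: {low:.3f} to {high:.3f} ({verdict})")

# Comparing, say, 200 surgeons against the same benchmark: widening each
# interval in Bonferroni fashion keeps the overall false-alarm rate down.
adjusted_level = 1 - 0.05 / 200
print("Adjusted CI:", clopper_pearson(deaths, cases, adjusted_level))
```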
Dealing with outlying performance
Detecting clinical outcomes that fall outside accepted limits does not necessarily indicate substandard patient care. However, any analysis which causes concern should trigger further validation of the data if appropriate. Then, if indicated, there should be an in-depth evaluation of clinical practice which may include analysis of subspecialty, case mix and an exploration of the exact mechanisms of death or complications. This process may lead to reassurance that practice is satisfactory. Ideally, this should be initiated by the clinician concerned who should be keen to learn from the experience to improve their practice. An excellent example comes from pediatric cardiac surgery: a surgeon had concerns about his mortality outcomes following the arterial switch operation (which is complex, technically challenging, congenital surgery) [19]. He studied his outcomes in detail using CUSUM methodology and determined that things were worse than he would have expected from chance alone. He then underwent retraining with a colleague from another hospital with excellent outcomes, adapted his practice, and subsequently went on to demonstrate good outcomes in a further series of consecutive cases.
On occasions, the process of investigating outlying outcomes may be difficult for the individual hospital or surgeon involved. The investigation may raise significant methodological questions about the techniques of analysis and subsequent examinations. The cause of substandard results may be difficult to detect but may relate to failures in the systems of care in the hospital or department, or failures in the individual [17, 18, 19, 20, 21, 22, 23].
Clinical governance is an individual, departmental and hospital responsibility. Whilst the onus should be on the individual with unsatisfactory outcomes to investigate and change their practice, they may need support, advice and direction from their clinical and managerial colleagues. Over recent years, the roles of different organizations in clinical governance have become clearer. Most hospitals should now have increasingly effective management structures for promoting quality improvement and detecting suboptimal performance.
The investigation of unsatisfactory outcomes can be facilitated by appropriate clinical leadership, and different techniques may be necessary for different circumstances, with the concept of ‘situational leadership’ being useful to match the managerial intervention to the willingness and the readiness of the individual whose practice is being investigated [24]. Two examples make this point. A newly-appointed cardiac surgeon had three adverse outcomes following the same type of operation that, to colleagues, seemed to be due to a similar mechanism. Despite discussions, the surgeon involved had little or no insight into the problem. No confidence intervals for performance were crossed because of the small volume of cases involved but, due to the clinical concerns, the surgeon was subjected to forced but supportive retraining of his intraoperative techniques, which led to the reintroduction of full independent practice within a few months and excellent publicly-reported results for that operation several years later. This would be described in a situational leadership model as a ‘directive’ approach. A second example is that of a senior surgeon with a low-volume mixed cardiothoracic practice who had a ‘bad run’ of cardiac results, which again did not cross the thresholds for statistically significant mortality outcomes. On his own initiative, he involved his clinical managers and launched an in-depth analysis of his practice. He detected that he was conducting very high-predicted-risk surgery despite lower volumes of surgery than some single-specialty colleagues. He was also suspicious of a potential common mechanism of adverse outcomes in several cases of mortality and some cases of morbidity. Along with colleagues, he changed his referral practice to make it more compatible with low-volume mixed cardiothoracic surgery and adapted his technique of surgery to avoid further problems. This again resulted in excellent subsequent outcomes. This would be described in a situational leadership model as a ‘collaborative’ approach. From a managerial perspective, both examples led to satisfactory ends, but adopting the appropriate leadership style was important in reaching the desired conclusions. In addition to having some understanding of leadership intervention models, we would also recommend that clinical managers have expertise in having ‘difficult conversations’, understand some change management theory, and have the ability to use an understanding of their personality characteristics and those of their colleagues to maximize the benefits, and minimize the downsides, of managerial interventions.
In addition to the roles of the individual and the hospital in ensuring satisfactory outcomes, other agencies should be acting to support the process. In the UK, the Chief Medical Officer produced a report, Good Doctors, Safer Patients, about regulation of health care, and now the General Medical Council has responsibility for professional regulation, but passes significant responsibilities down to employers [25, 26, 27, 28]. Professional revalidation for all 230,000 doctors in the UK has now started, and should include data from clinical audit to give positive affirmation that good care is being delivered. It is suggested that professional societies should set clear, unambiguous standards for care, and recertification of doctors should be dependent on achieving those standards. Patient consultation as part of this report has suggested that patients are keen to see that satisfactory outcomes of treatment by their doctors form part of this process, and this is now made available to the public from doctors working in 10 specialties [29, 39]. UK cardiac surgeons have responded to this agenda by articulating clearly their responsibilities to patients and the public in their publication Maintaining Patients’ Trust [33]. These themes have been reiterated clearly by the recent UK public inquiry into the events at Mid Staffordshire NHS Trust [20].
This direction of travel in the UK is a long way from the culture in which most doctors were trained. It will be a challenge for professional societies and the profession to deliver on this agenda.
Conclusion
Most audit projects will deliver results that demonstrate clinical practice is satisfactory. There is some evidence that scrutiny of results alone can contribute to improvements in quality. On occasions, audit will flag up concern about clinical processes or outcome, but it is important that the data and the methods are ‘fit for purpose’. Ensuring that satisfactory quality of care is given and demonstrated is the responsibility of all involved in health care delivery, including individual practitioners, employers, commissioners, professional societies and regulators.
DOS AND DON’TS
Do
Act rapidly if audit data suggest results are not as expected
Ensure that data quality issues are resolved without unnecessary delay
Configure an improvement plan involving colleagues, the wider multidisciplinary team and organizational management
Ensure that patient safety and outcomes remain the primary consideration in all actions
Don’t
Assume that poor outcomes are due to data quality issues
Derive false reassurance on quality by benchmarking against a time-expired clinical standard
Try to act on poor outcomes without engaging colleagues and the organization
Underestimate the importance of excellent clinical leadership in optimizing the quality of clinical outcomes for patients
References
Grover FL, Shroyer LW, Hammermeister K, Edwards FH, Ferguson TB, Dziuban SW, Cleveland JS, Clark RE, McDonald G. A decade of experience with quality improvement in cardiac surgery using the Veterans Affairs and Society of Thoracic Surgeons national databases. Ann Surg 2001;234:464–74.
Keogh BE, Kinsman R. Fifth National Adult Cardiac Surgical Database Report 2003. London: Society of Cardiothoracic Surgeons of Great Britain and Ireland, 2004.
Northern New England Cardiovascular Disease Study Group. http://www.nnecdsg.org/.
Society for Cardiothoracic Surgery in Great Britain and Ireland. www.scts.org.
Roques F, Nashef SA, Michel P, et al. Risk factors and outcome in European cardiac surgery: analysis of the EuroSCORE multinational database of 19,030 patients. Eur J Cardiothorac Surg 1999;15:816–23.
Bhatti F, Grayson AD, Grotte GJ, Fabri BM, Au J, Jones MT, Bridgewater B. The logistic EuroSCORE in cardiac surgery: how well does it predict operative risk? Heart 2006 Dec;92(12):1817–20.
Hickey GL, Grant SW, Caiado C, Kendall S, Dunning J, Poullis M, Buchan I, Bridgewater B. Dynamic risk approaches to cardiac surgery. Circ Cardiovasc Qual Outcomes 2013;6(6):649–58.
Bridgewater B, Grayson AD, Jackson M, et al. Surgeon specific mortality in adult cardiac surgery: comparison between crude and risk stratified data. BMJ 2003;327:13–17.
Parsonnet V, Dean D, Bernstein AD. A method of uniform stratification of risk for evaluating the results of surgery in acquired heart disease. Circulation 1989;79:I3–12.
Roques F, Michel P, Goldstone AR, Nashef SAM. The logistic EuroSCORE. Eur Heart J 2003;24:1–2.
Roques F, Nashef SA, Michel P, et al. Risk factors and outcome in European cardiac surgery: analysis of the EuroSCORE multinational database of 19,030 patients. Eur J Cardiothorac Surg 1999;15:816–23.
Nashef SA, Roques F, Sharples LD, Nilsson J, Smith C, Goldstone AR, Lockowandt U. EuroSCORE II. Eur J Cardiothorac Surg 2012;41(4):734–44; discussion 744–5.
Benneyan JC, Lloyd RC, Plsek PE. Statistical process control as a tool for research and healthcare improvement. Qual Saf Health Care 2003;12(6):458–64.
Spiegelhalter DJ. Funnel plots for comparing institutional performance. Stat Med 2005;24(8):1185–202.
Mason A, Goldacre MJ, Bettley G, Vale J, Joyce A. Using routine data to define clinical case-mix and compare hospital outcomes in urology. BJU Int 2006;97(6):1145–7.
Bridgewater B, on behalf of the Adult Cardiac Surgeons of NW England. Mortality data in adult cardiac surgery for named surgeons: retrospective examination of prospectively collected data on coronary artery surgery and aortic valve replacement. BMJ 2005;330(7490):506–10.
Bridgewater B, Hickey G, Cooper G, Deanfield J, Roxburgh J, on behalf of the Society for Cardiothoracic Surgery in Great Britain and Ireland. Publishing cardiac surgery mortality rates: lessons for other specialties. BMJ 2013;346:f1139.
Bridgewater B, Kinsman R, Walton P, Keogh B. Demonstrating Quality: The Sixth National Adult Cardiac Surgery Database Report. Henley-on-Thames: Dendrite Clinical Systems Ltd, 2009.
de Leval MR, Francois K, Bull C, et al. Analysis of a cluster of surgical failures. Application to a series of neonatal arterial switch operations. J Thorac Cardiovasc Surg 1994;107:914–23.
Stationery Office. Learning from Bristol: The Report of the Public Inquiry into Children’s Heart Surgery at the Bristol Royal Infirmary 1984–1995. London: Stationery Office, 2001.
Francis R (Chair). Mid Staffordshire NHS Foundation Trust Public Inquiry. http://www.midstaffspublicinquiry.com.
Royal College of Surgeons of England. Outline of services. http://www.rcseng.ac.uk/healthcare-bodies/support-services/irm.
Bridgewater B, Cooper G, Livesey S, Kinsman R, on behalf of the Society for Cardiothoracic Surgery in Great Britain and Ireland. Maintaining Patients’ Trust: Modern Medical Professionalism 2011. Henley-on-Thames: Dendrite Clinical Systems Ltd, 2011.
Blanchard KH, Zigarmi D, Zigarmi P. Leadership and the One Minute Manager. New York: William Morrow, 1999.
Department of Health. Good Doctors, Safer Patients. Proposals to Strengthen the System to Assure and Improve the Performance of Doctors and to Protect the Safety of Patients. London: Department of Health, 2006.
Department of Health. An Organisation with a Memory. Report of an Expert Group on Learning from Adverse Events in the NHS. Department of Health, 2000.
Secretary of State. Trust, Assurance and Safety. The Regulation of Health Professionals in the 21st Century. London: Stationery Office, February 2007.
General Medical Council. Revalidation. http://www.gmc-uk.org/doctors/revalidation.asp.
NHS England. Everyone Counts. http://www.england.nhs.uk/everyonecounts/.
Bridgewater B, Irvine D, Keogh B. NHS transparency. BMJ 2013;347:f4402. (Websites last accessed January 2015.)
CHAPTER 3
A critical assessment of surgical outcomes
Paul Merguerian
Seattle Children’s Hospital, Seattle, WA, USA
KEY POINTS
Outcomes between units or surgeons can be compared using non-risk-adjusted data; cardiac surgeons, for example, focus on mortality. While simple, this approach may lead to risk-averse behavior amongst surgeons
In pediatric urology, mortality is not frequent enough to use as a meaningful end point for comparison
Other techniques for analyzing data include cumulative summation (CUSUM) or life-adjusted display plots, statistical process control charts, funnel plots and classical statistical techniques with a benchmark (an illustrative funnel plot sketch follows this list)
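As a purely illustrative aside (not part of the original text), the short Python sketch below shows how a funnel plot of crude, non-risk-adjusted mortality against case volume might be drawn, with approximate 95% and 99.8% control limits around a benchmark rate; the unit data, variable names and limit choices are hypothetical assumptions for demonstration only.

import numpy as np
import matplotlib.pyplot as plt

# Hypothetical units: case volumes and observed deaths (invented data)
cases = np.array([50, 120, 200, 350, 500, 800])
deaths = np.array([2, 4, 5, 12, 14, 20])

p0 = deaths.sum() / cases.sum()        # benchmark: pooled overall mortality rate
n = np.arange(20, cases.max() + 50)    # volumes over which to draw the funnel

# Normal approximation to the binomial for the control limits
se = np.sqrt(p0 * (1 - p0) / n)
plt.plot(n, p0 + 1.96 * se, "b--", label="95% limits")
plt.plot(n, np.clip(p0 - 1.96 * se, 0, None), "b--")
plt.plot(n, p0 + 3.09 * se, "r:", label="99.8% limits")
plt.plot(n, np.clip(p0 - 3.09 * se, 0, None), "r:")
plt.axhline(p0, color="grey", label="benchmark rate")

# Each unit is plotted as its observed mortality rate against its volume;
# points falling outside the funnel warrant closer scrutiny, starting with the data.
plt.scatter(cases, deaths / cases, color="black", zorder=3)
plt.xlabel("Number of cases")
plt.ylabel("Observed mortality rate")
plt.legend()
plt.show()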
Introduction
In 1979, Lewis Thomas said: "There is within medicine, somewhere beneath the pessimism and discouragement resulting from the disarray of the health care system and its stupendous cost, an undercurrent of almost outrageous optimism about what may lie ahead for the treatment of human disease if only we can keep learning" [1]. If he was concerned about the disarray and cost of the health care system in 1979, he would be disappointed today. Since 1979, health care costs have been rising at a much higher rate than the consumer price index, yet biomedical and technological research has been extremely productive [2]. An unfortunate consequence of attempts to control health care costs would be the loss of new knowledge and innovation. Evidence suggests that around 40% of health care cost is due to waste or to activity that adds no value for the patient, and reducing this waste should be the primary goal of health care providers [3].
As health care providers we can apply evidence-based medicine to provide the highest quality care to our patient population at the lowest possible cost.
Evidence-based medicine (EBM), initiated by many in the medical field, is the critical evaluation of the current literature to obtain the best evidence and apply it to clinical practice. It is the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients [4]. We, as health care providers, can apply this evidence to provide value (quality/cost) to the patients we treat.
It is now recognized that surgical outcomes vary by provider [5, 6, 7]. As such, surgeons and hospitals are being asked to provide evidence of the quality of care they deliver [8]. Social media is increasingly used by hospitals and medical professionals to convey general health information and, at times, even personalized advice. Moreover, patients and their families are turning to the Internet and social media to make critical decisions about choice of surgeon and facility [9].
Surgeons have traditionally made therapeutic decisions based on personal experience, recommendations of surgical authorities and thoughtful application of surgical basic sciences. Evidence-based surgery emphasizes the need to evaluate properly the efficacy of diagnostic and therapeutic interventions before accepting them as standard surgical practice [10]. Evidence in clinical surgery, and especially in pediatric urology, varies in its quality.
Published research findings are sometimes refuted by subsequent evidence, with ensuing confusion and disappointment. Refutation and controversy are seen across a range of research designs [11]. There is also concern that false findings may be present in the majority of published articles. In a recent article in the Journal of Urology, Turpen et al. analyzed randomized controlled trials (RCTs) presented as abstracts at the 2002 and 2003 American Urological Association annual meetings and found that the quality of reporting of RCTs at urological meetings is suboptimal, raising concerns about their use to guide clinical decision making [12]. Recently, De Sio et al. assessed the reporting quality of randomized and nonrandomized controlled trials presented in abstract form at the European Association of Urology annual meeting over a 10-year period and determined the impact on subsequent publication. Unfortunately, they found that the reporting quality of European Association of Urology meeting abstracts did not improve over the decade. They stress the importance of improving the quality of abstracts by following currently available guidelines [13].
Pediatric urologists often make substantially different management decisions for similar clinical situations [14, 15, 16]. This variation in practice occurs in geographically close communities and is not always explained by differences in patient characteristics or preferences. More importantly, variation in management can be costly and often includes practices that are inconsistent with good evidence about optimal care [17, 18].
Pediatric urologists should critically examine published evidence and then adjust their practices accordingly. The purpose of this chapter is to assist them in doing so.
The main elements that will be discussed are: (1) asking focused questions; (2) finding the evidence; (3) critical appraisal of the evidence; and (4) making a decision.
Asking focused questions
One of the fundamental skills related to finding the evidence is asking a well-built clinical question. To benefit patients and clinicians, such questions need to be both directly relevant to patients’ problems and phrased in ways that direct your search to relevant and precise answers. In practice, well-built clinical questions usually contain four elements (PICO: Population/Problem, Intervention, Comparison, Outcomes), summarized in Table 3.1. Included are some examples of asking these questions in pediatric urology [19].
Table 3.1 PICO elements in asking clinical questions.
By asking a concise and well-formed question, it becomes straightforward to combine the terms needed to query search services such as PubMed.
Finding the evidence
You can now convert your PICO question into search words for a search engine such as PubMed, as shown in Table 3.2.
Table 3.2 Converting PICO question to search words.
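As an informal illustration of how the conversion in Table 3.2 might be automated, the following Python sketch assembles a Boolean, PubMed-style query string from hypothetical PICO terms; the function name and example terms are assumptions for demonstration and are not taken from the chapter’s tables.

def build_pubmed_query(population, intervention, comparison, outcome):
    # Join the non-empty PICO elements into a single Boolean search string
    parts = [population, intervention, comparison, outcome]
    parts = ["(" + p + ")" for p in parts if p]
    return " AND ".join(parts)

query = build_pubmed_query(
    population="infant OR neonate",
    intervention="circumcision",
    comparison="",                      # no explicit comparator in this example
    outcome="urinary tract infection",
)
print(query)
# (infant OR neonate) AND (circumcision) AND (urinary tract infection)

The resulting string can then be pasted into a search service such as PubMed and refined with filters for study type or age group.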
Critical appraisal of the evidence
Study designs
All study designs [20, 21] have similar components (PICO):
A defined population (P) from which groups of subjects are studied.
Outcomes (O) that are measured.
For experimental and analytical observational studies, an intervention or exposure (I) that is applied, and
Compared to different groups (C) of subjects.
The first distinction is whether the study is analytic or descriptive (Figure 3.1).
Figure 3.1 Summary of study designs.
A descriptive study uses the PO in PICO and therefore does not quantify the relationship. Instead it gives a picture of what is happening in the population, such as the incidence of urinary tract infections in males who are not circumcised in the first year of life. Descriptive studies include case reports, case series, qualitative studies and survey (cross-sectional) studies.
An analytic study attempts to quantify the relationship between the effect of an intervention or exposure (I) on the outcome (O).
Whether a researcher actively changes the intervention determines if the study is observational or experimental (Figure 3.1).
In the experimental study the researcher manipulates the exposure, as in randomized controlled trials. If adequately randomized and blinded, these studies have the ability to control for most biases. This, though, depends on the quality of the study design and its implementation. Therefore, not all randomized trials are of high quality [12].
In the analytical observational study the researcher simply measures the exposure or treatments of the groups. These include case–control studies, cohort studies and some population cross-sectional studies. Such studies make up the bulk of those published in the pediatric urological literature.
The type of study can generally be appreciated by asking three questions [21] (a minimal decision sketch follows this list):
What was the aim of the study?
To describe a population (PO question) – descriptive.
To quantify the relationship between factors – analytic.
If analytic, was the intervention randomly allocated?
Yes – RCT.
No – observational.
If observational, when were the outcomes determined?
After the exposure or intervention – cohort study.
At the same time as the exposure – cross-sectional study or survey.
Before the exposure was determined – case–control study.
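The three questions above amount to a small decision procedure. The Python sketch below is an informal rendering for illustration, not part of the original text; the function and argument names are hypothetical.

def classify_study(aim_is_analytic, randomized=None, outcomes_timing=None):
    # aim_is_analytic: False for a descriptive (PO) study, True for analytic
    # randomized: for analytic studies, was the intervention randomly allocated?
    # outcomes_timing: for observational studies, when the outcomes were
    #   determined relative to the exposure: "after", "same_time" or "before"
    if not aim_is_analytic:
        return "Descriptive study"
    if randomized:
        return "Randomized controlled trial"
    return {
        "after": "Cohort study",
        "same_time": "Cross-sectional study or survey",
        "before": "Case-control study",
    }[outcomes_timing]

print(classify_study(True, randomized=False, outcomes_timing="before"))
# Case-control study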
There are advantages and disadvantages of the designs as shown in Table 3.3 [21].
Table 3.3 Advantages and disadvantages of study designs.
Systematic reviews and meta-analysis
Other study designs that are increasingly being reported in the pediatric urological literature are systematic reviews and meta-analyses. These are specifically designed for situations in which any single study is underpowered to answer a research question. Meta-analysis is a method that pools the available data in the literature in an effort to increase statistical power.
An important consideration in appraising a meta-analysis is the homogeneity of the pooled studies. The quality of a systematic review and meta-analysis is dependent on the quality of the articles used in the review.
Significant heterogeneity, as seen in most meta-analyses published in the pediatric urology literature, represents more than chance variation in study outcomes and is a sign that the results of the included studies may not be compatible and should not have been pooled in the first place. This is particularly true in pediatric urology when observational and retrospective data are utilized. These studies tend to introduce confounding and bias and also allow less control of variation [22].
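One widely used way to quantify such heterogeneity is Cochran’s Q together with the derived I² statistic. The Python sketch below is illustrative only, with invented effect sizes and variances; it computes an inverse-variance (fixed-effect) pooled estimate, Q and I² under those assumptions.

import numpy as np

effects = np.array([0.20, 0.35, 0.10, 0.60])    # per-study effect estimates (invented)
variances = np.array([0.02, 0.05, 0.03, 0.04])  # per-study variances (invented)

weights = 1.0 / variances                        # inverse-variance weights
pooled = np.sum(weights * effects) / np.sum(weights)

q = np.sum(weights * (effects - pooled) ** 2)    # Cochran's Q
df = len(effects) - 1
i_squared = max(0.0, (q - df) / q) * 100         # share of variation beyond chance

print(f"Pooled effect: {pooled:.3f}  Q: {q:.2f}  I^2: {i_squared:.1f}%")

A large I² (often interpreted as substantial when above roughly 50%) suggests that pooling may be inappropriate, echoing the caution above about combining heterogeneous observational series.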
Moreover, diagnostic accuracy may be overestimated when using certain study designs; thus, the inclusion of studies using different designs in meta-analyses may have important effects on their results, and influence clinical decision making [23].
Bias in pediatric studies
The Oxford Dictionary defines bias as: "Prejudice in favor of or against one thing, person, or group compared with another, usually in a way considered to be unfair" [24].
Bias is defined as any process that leads to a systematic deviation of study results from the truth. This generally results from limitations in study design or reporting rather than from personal prejudices. Some forms of bias are inevitable, but they need to be recognized and accounted for by the investigator.
Sackett [25] noted that bias occurs at different stages of the research and includes:
Pre-study literature review.
Selection of study sample.
Administration of intervention.
Measurement of outcome.
Statistical analysis.
Reporting and publishing results.
The Cochrane review database summarizes sources of bias well and can be used as a guide to assessing articles for bias [26]. It describes a useful classification of biases into the following categories: selection bias, performance bias, attrition bias, detection bias and reporting bias [27].
Selection bias
Selection bias refers to systematic differences between baseline characteristics of the groups that