Current issues of ACP Journal Club are published in Annals of Internal Medicine


Editorial

Misunderstandings, misperceptions, and mistakes


ACP J Club. 2007 Jan-Feb;146:A8. doi:10.7326/ACPJC-2007-146-1-A08



Discussions about evidence-based medicine (EBM) have engendered both positive and negative reactions from clinicians, researchers, and policymakers since the term was first coined in the early 1990s (1, 2). These discussions were brought to the forefront again in a recent commentary by Dr. Bernadine Healy, former Director of the National Institutes of Health (NIH), in US News and World Report (3). She raised several issues that practitioners and teachers of EBM face when advocating this model of care. First, she stated that EBM practitioners advocate using the “best” evidence, which is mostly taken from randomized trials and cost–benefit studies. Second, she cited the interpretation of evidence for screening mammography and prostate-specific antigen (PSA) testing as examples in which EBM has failed because its proponents did not advocate for these tests based on the available evidence. Third, she likened the practice of EBM to a “straitjacket” or a cookbook approach in which both clinician judgment and patient values and circumstances are ignored.

All of these criticisms of EBM stem from misperceptions or misunderstandings and can be answered by careful consideration of the definition of EBM. EBM is defined as the integration of the best available evidence with our clinical expertise and our patient's unique values and circumstances (4). Evidence, whether strong or weak, is never sufficient to make clinical decisions. Individual values and preferences must balance this evidence to achieve optimal shared decision making.

Others besides Dr. Healy have expressed concern that only randomized trials or systematic reviews constitute the evidence in EBM (5, 6). Proponents of EBM acknowledge that several sources of evidence inform clinical decision making. The practice of EBM stresses finding the best available evidence to answer a question, and this evidence may come from randomized trials, rigorous observational studies, or even anecdotal reports from experts. Hierarchies of evidence have been developed to describe the quality of evidence available to answer clinical questions. For establishing the effect of an intervention, randomized trials and systematic reviews of randomized trials provide the highest-quality evidence—that is, the lowest likelihood of bias and therefore the lowest likelihood of misleading. However, they are not usually the best sources for answering questions about diagnosis, prognosis, or the harmful effects of potentially noxious exposures. Although this hierarchy has been criticized for devaluing the basic sciences (6), we note that numerous studies have demonstrated the fallibility of extrapolating directly from the bench to the bedside without the intervening step of proving the assumptions valid in humans (7-10).

Dr. Healy's concern about emphasizing randomized trial evidence is intriguing considering that many important randomized trials were conducted on her watch as NIH director, including the landmark Women's Health Initiative study (11), which refuted decades of evidence from observational studies.

Dr. Healy referred to the “mammogram war” over whether women in their 40s should be routinely screened for breast cancer. She described the battle as between EBM advocates who argued against routine screening and radiologists and oncologists who argued in favor of this strategy. Rather than considering it a criticism of EBM, we believe this example highlights the usefulness of the practice of EBM to provide a framework for decision making. The NIH Consensus Conference (12) did not recommend against screening in women 40 to 49 years of age, but simply suggested that women in this age group should be informed of the downsides and the small, uncertain benefit (thousands of women to be screened to delay 1 death many years later). Even when screening is effective (and often it is not), at some point the gain is so marginal that properly informed patients may consider it not worthwhile, highlighting the need for clinicians to be able to understand and appraise evidence and to integrate it with our patients' values and circumstances.

Dr. Healy's second example of how EBM has failed in practice concerns prostate cancer screening. She refers to an unfortunate case involving a junior physician who, based on the available evidence, discussed the risks and benefits of a PSA test with his 53-year-old patient (13). A PSA test was not done during this clinic visit, but the patient was later seen by another physician who performed the test, and a diagnosis of prostate cancer was subsequently made. The patient sued the junior physician, the clinic, and the residency training program. The plaintiff's attorney argued that the junior physician should have done the test because it is the standard of care (that is, usual practice) rather than discuss its risks and benefits with the patient. The jury found the clinic and residency program negligent for operating a substandard system of health maintenance checks (for not having a policy of PSA testing) but exonerated the physician. The jury believed the medical experts, who testified that PSA testing is appropriate for screening all patients. A recent Cochrane systematic review combined the results of 2 randomized trials with a total of 55 512 participants and found no difference in prostate cancer mortality between men randomized to prostate cancer screening and controls (relative risk 1.02, 95% CI 0.80 to 1.29) (14). Screening seems sensible; however, as these results show, not all sensible things work in practice. We have to live with what the evidence shows about effectiveness rather than what we wish it would show. This case highlights that in some countries the courts (and indeed some medical experts) have not kept pace with the need for EBM. Brian Hurwitz wrote that evidence such as practice guidelines can be introduced into courts in many countries, including the United States, Canada, and the United Kingdom, by expert witnesses but cannot as yet be introduced as a substitute for expert testimony (15). In this case, the jury did not seem to believe the U.S. national guidelines, nor did they seem to trust the shared decision-making model.

Dr. Healy's key misperception of EBM is captured in her statement: “By anointing only a small sliver of research as best evidence and discarding or devaluing physician judgment and more than 90% of the medical literature, patients are forced into a one-size-fits-all straitjacket.” This misperception arises from a failure to appreciate that the practice of EBM requires integration of the best available evidence (weak or strong) with clinical expertise and the individual patient's values and preferences. This model of practice is far from a one-size-fits-all strategy. Furthermore, because EBM does not substitute the values of its advocates (such as clinicians and funding bodies) for those of society or the individual patient, it may (and often does) result in policies that increase rather than decrease costs (e.g., the provision of statin drugs for normocholesterolemic patients following myocardial infarction).

We recognize that EBM has limitations, and further innovation is required to resolve some of them, such as the need to enhance the integration of evidence with our patients' values at the bedside and in the clinic. However, EBM should be recognized for its strengths as well. EBM has always been about enhancing the use of sound evidence from health research and ensuring that decisions are consistent with individual patient values and preferences. It represents a framework for people to find, understand, and apply the current best scientific evidence, bearing values and preferences in mind, when making decisions concerning their health or when helping others to do so. We believe Dr. Healy is mistaken in her representation of EBM.

Sharon Straus, MD, MSc
University of Calgary
Calgary, Alberta, Canada

Brian Haynes, MD, PhD
McMaster University
Hamilton, Ontario, Canada

Paul Glasziou, MBBS, PhD
University of Oxford
Oxford, England, UK

Kay Dickersin, PhD
Johns Hopkins University
Baltimore, Maryland, USA

Gordon Guyatt, MD, MSc
McMaster University
Hamilton, Ontario, Canada


References

1. Evidence-based medicine. A new approach to teaching the practice of medicine. Evidence-Based Medicine Working Group. JAMA. 1992;268:2420-5. [PubMed ID: 1404801]

2. Straus SE, McAlister FA. Evidence-based medicine: a commentary on common criticisms. CMAJ. 2000;163:837-41. [PubMed ID: 11033714]

3. Healy B. Who says what's best? US News World Rep. 2006 Sep 3.

4. Sackett DL, Rosenberg WM, Gray JA, Haynes RB, Richardson WS. Evidence based medicine: what it is and what it isn't. BMJ. 1996;312:71-2. [PubMed ID: 8555924]

5. Hampton JR. Evidence-based medicine, practice variations and clinical freedom. J Eval Clin Pract. 1997;3:123-31. [PubMed ID: 9276587]

6. Swales JD. Evidence-based medicine and hypertension. J Hypertens. 1999;17:1511-6. [PubMed ID: 10608462]

7. Cobb LA, Thomas GI, Dillard DH, Merendino KA, Bruce RA. An evaluation of internal-mammary-artery ligation by a double-blind technic. N Engl J Med. 1959;260:1115-8. [PubMed ID: 13657350]

8. Failure of extracranial-intracranial arterial bypass to reduce the risk of ischemic stroke. Results of an international randomized trial. The EC/IC Bypass Study Group. N Engl J Med. 1985;313:1191-200. [PubMed ID: 2865674]

9. Echt DS, Liebson PR, Mitchell LB, et al. Mortality and morbidity in patients receiving encainide, flecainide, or placebo. The Cardiac Arrhythmia Suppression Trial. N Engl J Med. 1991;324:781-8. [PubMed ID: 1900101]

10. Lachetti C, Guyatt G. Surprising results of randomized controlled trials. In: Guyatt G, Rennie D, eds. Users' guides to the medical literature. A manual for evidence-based clinical practice. Chicago: AMA Press; 2002:247-65.

11. Rossouw JE, Anderson GL, Prentice RL, et al. Risks and benefits of estrogen plus progestin in healthy postmenopausal women: principal results from the Women's Health Initiative randomized controlled trial. JAMA. 2002;288:321-33. [PubMed ID: 12117397]

12. NIH Consensus Statement. Breast cancer screening for women ages 40-49. NIH Consens Statement. 1997;15:1-35. [PubMed ID: 9267441]

13. Merenstein D. A piece of my mind. Winners and losers. JAMA. 2004;291:15-6. [PubMed ID: 14709561]

14. Ilic D, O'Connor D, Green S, Wilt T. Screening for prostate cancer. Cochrane Database Syst Rev. 2006;3:CD004720. [PubMed ID: 16856057]

15. Hurwitz B. How does evidence based guidance influence determinations of medical negligence? BMJ. 2004;329:1024-8. [PubMed ID: 15514351]