


Drug dependence in a journal club


ACP J Club. 2000 May-June;132:A21. doi:10.7326/ACPJC-2000-132-3-A21



To the Editor

Redelmeier’s editorial (1) and your reply (2) have raised several issues of concern for residency program directors and those in charge of journal clubs. We are writing to address 2 issues. First, how should we select the clinical issues and the types of evidence to review in journal club? Second, what should these educational sessions be called?

The selection process for a journal club should be influenced by its learning goals. The journal clubs we have run, seen, or heard about usually pursue more than 1 learning goal at a time. Although often implicit, 3 goals we have seen repeatedly are to learn how best to handle patient problems that are common, serious, or vexing; to learn about advances in medical knowledge that should change practice; and to learn the skills of evidence-based practice, such as searching for best evidence or critically appraising new research reports. These goals are not mutually exclusive, but the final selection could well depend on which of them is preeminent.

If the first goal is preeminent, we would probably want to focus on knowledge that would be most useful for solving specific patient problems, regardless of how new it is or what type of evidence it is. Driven by necessity, we would want to find, appraise, and decide how to use the best available evidence, even if it is not of the highest quality, befitting the “problem-solving” mode of evidence-based practice (3). Who should select the topics in this kind of journal club? We suggest that the whole group decide together which patient problems they want to address, which questions are most important to answer, and what kind of evidence they would like to review to find those answers. What proportion of the evidence reviewed should be about therapy? From audits of our own and our learners’ questions (4), we would expect questions about therapy to be frequent, but they probably would not dominate the curriculum to the extent that Dr. Redelmeier describes.

If the second goal is preeminent, we would probably want to focus on recent developments in our field that should change our practice. Ideally, we would select recent developments that are particularly relevant to patients we see, highly likely to be valid, and expected to have a substantial clinical effect if acted on (5). Thus, our selections for journal club would be driven by the evidence that becomes available, fitting the “current awareness” mode of evidence-based learning (3). What proportion of the evidence should be about therapy? As Redelmeier points out, much of the medical “news” is about individual trials of drug treatments. Note also that in most issues of ACP Journal Club, abstracts of treatment studies outnumber those of other types of evidence. It is likely, then, that therapeutics will dominate the curriculum in this kind of journal club. Who selects the topics in this kind of journal club? While group members may exert some influence, the principal selection is done by outsiders, including the investors, investigators, and editors who determine which studies get done and which get published.

If the third goal is preeminent, we would probably start with an assessment of our group members’ current skills for evidence-based practice and then identify the gaps that we want to address. Selection would be made primarily on the basis of the skills we need to develop or refine, regardless of how new the evidence is or how urgent the problems are. Thus, our curriculum for journal club would be mainly “skills driven” rather than “necessity driven” or “evidence driven.” Who should select the topics in this type of journal club? While all group members should have some say, the main selections should be made by those who have identified the learners’ skill needs and who are responsible for helping the learners meet those needs. What proportion of the selections should be individual trials of drug treatment? Different groups of learners have different needs. Many of our learners are relatively comfortable finding and appraising individual trials of treatment; they are often much less comfortable with systematic reviews and with studies of diagnosis, prognosis, etiology, or other kinds of evidence. This relative comfort with drug trials may contribute to the tendency for learners to select treatment trials, thereby reinforcing the “drug dependence” Redelmeier describes. For such groups of learners, we suggest that those in charge guide the group to review more of the other kinds of evidence, building both competence and confidence in doing so and thereby deemphasizing individual trials of treatment.

Should all 3 of these learning sessions go by the same name of “journal club”? Indeed, should any session ever be called journal club (6)? We strongly suggest renaming these sessions, both to more clearly identify the session goals and to avoid the decades of baggage that ride with the label “journal club.” Readers might consider the following options, depending on their main learning goals: “Best Practices Team,” wherein the main goal is to figure out how best to solve actual patient problems, the necessity-driven approach; “New Knowledge Group,” wherein the main goal is to find advances that may change practice, the evidence-driven approach; and “Evidence-Based Practice Seminar,” wherein the main goal is to master the skills of evidence-based practice and lifelong learning, the skills-driven approach.

W. Scott Richardson, MD
Audie L. Murphy Memorial Veterans Administration Hospital
San Antonio, Texas, USA

Mark C. Wilson, MD, MPH
Wake Forest University School of Medicine, Bowman Gray Campus
Winston-Salem, North Carolina, USA

To the Editor

I just read your ACP Journal Club editorial (7) about becoming a drug company shill. For similar reasons (and also because they are usually boring to read and discuss), we banned randomized controlled trials from journal club unless they were controversial or poorly done. But you are absolutely right; the residents gravitate toward them anyway.

Warren S. Browner, MD, MPH
San Francisco Veterans Affairs Medical Center
San Francisco, California, USA

To the Editor

I have decided to take up Redelmeier’s challenge (8): how to prevent evidence-based medicine from becoming a tool of industry. One of the temptations of residents (and more than a few faculty) is to read an article in complete isolation from clinical practice, which gives the illusion that medicine is like a smooth escalator going in only one direction—up. This is far from the truth.

An example comes from the field of HIV infection care, one of the most quickly developing fields in medicine. Early studies with zidovudine (AZT) showed promising effects in late-stage HIV infection. Placebo-controlled studies of AZT in earlier-stage disease were done and widely trumpeted as conclusive when AZT showed advantages. Other studies looking at careful, selective use of AZT showed essentially the same benefit as universal early administration. But the Concorde study (9) showed a leveling off of benefit and ultimately a slightly worse prognosis when monotherapy with AZT was continued past 3 years. Our contemporary understanding of the dynamics of HIV replication in vivo helps us to understand these results, but the purely empirical findings could have been interpreted in several different ways until a greater understanding of pathophysiology kicked in.

What is it about “good” studies that leads experienced clinicians away from them? I think there are several factors. First, the task of recruiting patients into a study has an unpredictable effect on the selection process. Recruited patients may be more or less sick; more or less adherent; or from a different racial, ethnic, or religious background. It can be tough to make the proper extrapolation to your own practice. Second, the design requirements of a study may be strict to achieve scientific rigor, but the resulting protocol may not match the give-and-take of clinical practice. Alternatively, the study may allow considerable flexibility after randomization, but then the difference between “intention to treat” and actual-treatment analyses may weaken the conclusions of the study. Third, practice evolves. Looking at trials of coronary artery stents done several years ago yields a very different sense of their utility than do recent trials. Technologic advances have overcome limitations in early studies. Fourth, trials may resemble “looking for your key under the streetlight when you dropped it half a block away.” Certain things are easy to study, and others are very difficult. Tough questions are often overlooked.

Fundamentally, the goal of teaching residents and colleagues critical reading of the literature is to reach an understanding of the internal workings of study design and the applicability of study findings to individual patients. The art of capturing the true benefits and risks a patient faces with the selection of a treatment, diagnostic test, or preventive measure is only part of the task. The remainder is to understand the interplay of our formal literature with life itself. Reading an essay by Oliver Sacks (10) can give better insight into the problems of a patient with a neurological disorder than reading the most rigorously constructed trial. Talking to patients and understanding their fears, their values, and their hopes can be immeasurably more important than prescribing a pill or performing surgery. Even expert “consumers” of medical care (physicians and their families) feel lost and confused at times. We do not need to take a Luddite approach to evidence-based medicine, but we have good reason to ask it to “talk” to the worldly aspects of the physician and the patient instead of chattering only to itself.

Thomas Fekete, MD
Temple University School of Medicine
Philadelphia, Pennsylvania, USA

To the Editor

You (1) and Redelmeier (2) make critical points about what (lifetime) learners of medicine need to study. As you indicate in your final paragraph, we need to allocate a large fraction of our studying as physicians—whether in training or in practice—to areas other than pharmacotherapeutics. Given the general lament that clinical skills have reached an all-time low among North American physicians, it would make sense to spend time reading works on history taking and physical examination (3-9). As the authors and editors of the series “The Rational Clinical Examination” (10) know only too well, good science on the physical examination is hard to come by, and a plethora of anecdote and opinion exists.

Clinical problems are often messy. Our task in clinical medicine requires that we accept this and work with it. In some domains, we can achieve scientific certainty akin to that in biologic science. We are not apologists for bad science, still less opponents of using best evidence to optimize our practice, but our calling insists that we deal with the whole realm of needy persons who are ill. Most of the problems to which we must respond call for becoming well informed and experienced in arenas where good science does not exist and, in some cases, may never exist.

Henry Schneiderman, MD
Hebrew Home & Hospital
West Hartford, Connecticut, USA
University of Connecticut Health Center
Farmington, Connecticut, USA

To the Editor

I share Redelmeier’s concern that journal clubs may place undue emphasis on trials of drug therapy (1). To address the question you raise in your editorial comment, “What is a fair allocation of time for therapeutics topics in a journal club?” I offer the following: In the community teaching hospital where I work, my department’s weekly medical grand rounds could easily be dominated by speakers sponsored by drug companies who, not surprisingly, always want to talk about diseases for which they have a new drug therapy. Several years ago, I realized this was a real danger and allocated 1 week every month for presentations by these sponsored speakers. The remaining sessions are reserved for case presentations and discussions by residents, presentations by the attending staff, or other nonsponsored events. In these sessions, presentations are on many aspects of medical practice other than drug therapy. Attendance is about the same for sponsored and nonsponsored presentations.

Medical practice requires knowledge of the epidemiology of disease; diagnosis; treatments that include, but are not limited to, drug therapy; quality-improvement activities; and community health needs. It should also include discussion of the social, ethical, economic, and political influences that affect medicine. An agenda for a journal club or a grand rounds program should acknowledge these varied needs.

Jim Cowan, MD, MPH
St. Luke’s Hospital
Bethlehem, Pennsylvania, USA


References

1. Redelmeier DA. Drug dependence in a journal club [Editorial]. ACP J Club. 1999 Nov-Dec;131:A13-4.

2. Haynes RB. Editor’s response. ACP J Club. 1999 Nov-Dec;131:A14-5.

3. Sackett DL, Straus SE, Richardson WS, Rosenberg W, Haynes RB. Searching for best evidence. In: Evidence-Based Medicine: How to Practice and Teach EBM, 2d ed. Edinburgh: Churchill Livingstone; 2000.

4. Richardson WS. Teaching evidence-based medicine in morning report. Clinical Epidemiology Newsletter (McMaster University). 1993;13:9.

5. Lawrence VA, Richardson WS, Henderson ME, et al. The best evidence from ACP Journal Club for general internal medicine [Editorial]. ACP J Club. 1999 Sep-Oct;131:A13-6.

6. Wilson MC. Rename those Journal Clubs! Evidence-Based Health Care Newsletter. 1998;18:8.

7. Redelmeier DA. Drug dependence in a journal club [Editorial]. ACP J Club. 1999 Nov-Dec;131:A13-4.

8. Redelmeier DA. Drug dependence in a journal club [Editorial]. ACP J Club. 1999 Nov-Dec;131:A13-4.

9. Concorde Coordinating Committee. Concorde: MRC/ANRS randomised double-blind controlled trial of immediate and deferred zidovudine in symptom-free HIV infection. Lancet. 1994;343:871-81.

10. Sacks OW. An Anthropologist on Mars: Seven Paradoxical Tales. New York: Knopf; 1995.

1. Haynes RB. Editor’s response. ACP J Club. 1999 Nov-Dec;131:A14-5.

2. Redelmeier DA. Drug dependence in a journal club [Editorial]. ACP J Club. 1999 Nov-Dec;131:A13-4.

3. Schneiderman H, Peixoto AJ. Bedside Diagnosis. 3d ed. Philadelphia: American College of Physicians; 1996.

4. Schneiderman H. Physical diagnosis versus the oppression of medicine. Consultant. 1990;30(1):2, 10.

5. Verghese A, Gallemore G. Kernig’s and Brudzinski’s signs revisited. Rev Infect Dis. 1987;9:1187-92.

6. Williams JW, Simel DL. Does this patient have ascites? How to divine fluid in the abdomen. JAMA. 1992;267:2645-8.

7. Schneiderman H. Pushing off with the arms to arise, and physical evaluation of patients who are unable to cooperate. Consultant. 1997;37:2415-20.

8. Schneiderman H. Coin-rubbing, folk remedies, and physical examination of immigrants. Consultant. 1995;35:1349-52.

9. McGee S, Abernethy W 3d, Simel D. Is this patient hypovolemic? JAMA. 1999;281:1022-9.

10. Sackett DL, Rennie D. The science of the art of the clinical examination. JAMA. 1992;267:2650-2.

1. Redelmeier DA. Drug dependence in a journal club [Editorial]. ACP J Club. 1999 Nov-Dec;131:A13-4.