

Editorial

The rocky road: qualitative research as evidence


ACP J Club. 2001 Jan-Feb;134:A11. doi:10.7326/ACPJC-2001-134-1-A11



Health research grows ever more holistic in its understanding of health and illness, more comprehensive in empirical questions, and more interdisciplinary in approaches. As we investigate social and personal aspects of health, we become drawn to social science knowledge in addition to biomedical and epidemiologic perspectives. With this multidisciplinary basis for clinical knowledge comes “qualitative” research, an empirical method seemingly at odds with traditional rules of evidence and with the hierarchy of research designs propounded by evidence-based medicine (1, 2). The philosophy of evidence-based medicine suggests that as ways of knowing, induction is inferior to deduction, subjective perceptions are inferior to objective quantification, and description is inferior to inferential testing. Qualitative tenets invert these imperatives: Investigators aim for inductive description using subjective interpretation.

New readers of qualitative reports thus confront 3 issues. First, does qualitative inquiry belong at the bottom of evidence-based medicine's traditional research design hierarchy? Second, if familiar rules of evidence do not apply, what features distinguish a noteworthy study? Third, what is the clinical usefulness of qualitative research information compared with that of quantitative information?

“Qualitative” health research is best characterized not by its qualitative data but by several assumptions about what social reality is like (ontology) and how we can best learn the truth about this reality (epistemology). These premises differ from those required to conduct, analyze, and believe in the results of quantitative research, such as a randomized controlled trial.

Quantitative research

Quantitative clinical research typically addresses biomedical questions. It tests hypothesized causal relations between quantified variables. (These include, of course, statistically “qualitative” variables, which are those that can be categorized and counted.) Quantitative research questions require key ingredients. First, they require variables that describe natural phenomena coupled with a belief that these variables exist and can be measured objectively. Second, they require a belief that causal laws govern the behavior of the variables. Third, they need a testable (falsifiable) hypothesis about a statistical relation between the variables. The resulting research question asks whether one variable (e.g., an intervention) quantitatively affects another (e.g., health status) and demands a “yes” or “no” answer. (This glosses over the falsification imperative, or the idea that we can only get “no” versus “maybe” answers by applying deductive logic, as promoted so convincingly by Karl Popper [3]. Converting “maybe” to “yes” relies on logically fallible induction. Despite evidence-based medicine's traditional distaste for induction, this leap is made of necessity by the users of hypothetico-deductive studies.) Critical appraisal addresses confidence in this answer, given researchers' adherence to such standards of rigor as controlling for influences beyond prespecified variables and preventing subjective expectations from distorting objective measurement or analysis (1, 2, 4).

Qualitative research

Qualitative research explores and describes social phenomena about which little is presumed a priori. It interprets and describes these phenomena in terms of their meaning and helps us make sense of these meanings. Qualitative reports offer access and insight into particular social settings, activities, or experiences. In contrast to quantitative approaches, qualitative research neither presumes that predetermined variables or causal relationships exist nor tries to find them. Paradoxically, the very features that strengthen the truthfulness of a quantitative study weaken a qualitative one. Prespecifying variables prohibits exploring and discovering other factors that may be important. Presupposing variables and causal laws precludes other meaningful models of social phenomena. Tacit knowledge, which quantitative researchers eschew as a bias, serves as an interpretation tool, a source of data, and a topic of analysis for qualitative researchers. Methodologic rigor derives from the depth of researchers' engagement with the data, the credibility of their interpretations, and others' agreement with narrated findings (5, 6). No single correct way exists to formulate interpretive conclusions; many different but credible findings could emerge from a given study. However, countless non-credible ones could also emerge: Qualitative research produces appraisable findings, but assessment involves nuanced, topic-specific judgments. Although critical-appraisal guides for qualitative studies vary in style and emphasis, they address similar issues (7-13). Key appraisal considerations are summarized elsewhere for the clinical user (5, 6).

Comparing qualitative and quantitative approaches

To illustrate the distinctive approaches and knowledge contributions of the 2 methods, consider a research program aimed at understanding behavior at traffic lights (14). A quantitative researcher might hypothesize that red lights make cars stop, whereas green ones make them go. Researchers could randomly expose cars to red and green lights and record stopping and going responses. The study might disprove the null hypothesis that light color has no effect on stopping and going; it might also estimate the likelihood of a car running a red light or sitting through a green one. Other quantitative researchers might be interested in drivers' rationales and whether they determine traffic behavior. They might administer a standardized questionnaire that would prespecify all plausible reasons for stopping and going, ask drivers to indicate which ones apply, and help quantify the association between certain rationales and driving behaviors.
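As a purely illustrative aside, the logic of this hypothetical quantitative study (randomize exposure, record responses, estimate probabilities, and test the null hypothesis that light color has no effect) can be sketched in a few lines of Python. The sample size and stopping rates below are invented solely for the sketch; nothing here describes an actual traffic study.

import random
import math

# Minimal sketch of the hypothetical traffic-light experiment.
# All numbers are assumed purely for illustration.
random.seed(42)
N = 1000  # cars observed (assumed)
P_STOP_GIVEN_RED = 0.95    # assumed "true" stopping rate at red
P_STOP_GIVEN_GREEN = 0.02  # assumed "true" stopping rate at green

stops = {"red": 0, "green": 0}
totals = {"red": 0, "green": 0}

for _ in range(N):
    color = random.choice(["red", "green"])        # randomized exposure
    p_stop = P_STOP_GIVEN_RED if color == "red" else P_STOP_GIVEN_GREEN
    stopped = random.random() < p_stop             # recorded response
    totals[color] += 1
    stops[color] += stopped

# Estimated probabilities, e.g., of running a red light or sitting through a green one.
p_stop_red = stops["red"] / totals["red"]
p_stop_green = stops["green"] / totals["green"]
print(f"P(stop | red)   ~ {p_stop_red:.2f}  -> P(run red light) ~ {1 - p_stop_red:.2f}")
print(f"P(stop | green) ~ {p_stop_green:.2f} -> P(sit through green) ~ {p_stop_green:.2f}")

# Two-proportion z-test of the null hypothesis "light color does not affect stopping".
pooled = (stops["red"] + stops["green"]) / N
se = math.sqrt(pooled * (1 - pooled) * (1 / totals["red"] + 1 / totals["green"]))
z = (p_stop_red - p_stop_green) / se
print(f"z = {z:.1f}  (|z| > 1.96 would reject the null hypothesis at the 5% level)")

Note what such a study can and cannot deliver: a yes-or-no verdict on the prespecified hypothesis and numerical estimates of risk, but nothing about what the lights mean to drivers.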

Qualitative researchers would approach traffic behavior as a symbolically mediated social phenomenon. At the outset, the researchers would assume they know little about people's reasons for doing what they do or what their actions mean. Researchers would ask, “What do these lights mean to drivers, and why do they respond the way they do?” They might interview drivers, read traffic law, observe behavior at traffic lights, and try driving. On the basis of various information sources, they would develop a theory of driving behavior and report that green means “go” and red means “stop” (or to some, red means “go if you can get away with it”). The open-ended research question allows the researchers to discover the yellow light and its role.

The labels “qualitative” and “quantitative” are convenient, but they also oversimplify and sometimes mislead. One might just as well call the methods “chocolate” and “vanilla.” (Alternative labels for quantitative versus qualitative research include positivist versus interpretivist, deductive versus inductive, or experimental versus naturalistic; all of these labels tend to oversimplify, and they often lead to unnecessary misunderstandings and philosophical feuds.) As more qualitative research appears alongside quantitative research in the clinical literature, practitioners of evidence-based medicine may find 2 “rocky roads” to travel toward understanding the unique contributions of each research method.

The first road follows an instinctive but philosophically futile desire to reconcile the 2 traditions' methodologic premises or standards of evidence. The best advice regarding this road is simply, “Don't go there” (15-17). Returning to the traffic example, which research provides a better representation of reality? The quantitative study's statistical findings are correct, and a qualitative approach could not generate these probabilities. But the former provides very limited information about what is “really” going on in this case. Although traffic light behavior seems law-like, it is governed not by natural laws but by social rules. We deal not only with brakes, accelerators, and light colors but also with differences of interpretation, lawfulness, and even social gestures. The qualitative study investigates this social meaning, which is at the heart of the action. Methodologic appropriateness depends foremost on the research question. Quantitative methods best answer questions about biomedical or natural causation. Qualitative methods best answer questions about social meanings. Applying either method to the other's domain of “reality” generates inadequate and potentially misleading evidence.

The second rocky road visits the appropriate places of the 2 traditions as contributions to knowledge and informed practice. This road is well worth traveling. Can we use qualitative evidence the same way that we use quantitative evidence? Can we combine quantitative and qualitative methods or use one to inform the other?

Which study yields more useful information depends on what we want to do. If we want to cross the street, the quantitative traffic study allows us to estimate the likelihood of getting run over, and on this basis, we can take an informed chance. However, the implied “law” that traffic light color makes cars go and stop would be useless if we want to reform drivers or simply understand them. The qualitative study gives more insight into why people do what they do. Intervening on the basis of evidence from this study (e.g., promoting the idea that yellow means “slow down”) will change the very patterns that the researchers so painstakingly explored and described. Future researchers (of either approach) will reach different conclusions. Further, evidence is never enough to guide clinical application, not only because values come into play (1) but also because clinical situations, clinicians, and patients always differ from research contexts and participants. Generalizing either type of research requires a rather “unscientific” inductive leap, invoking ideas beyond those provided by the research itself.

Some interdisciplinary health researchers suggest that qualitative and quantitative methods should alternate: The former generates hypotheses, the latter tests them, and so on (18-20). For certain research topics, both types of evidence can contribute to understanding. However, this reciprocal relationship is neither necessary nor always wise. Each study contributes to knowledge on its own (21). Leaping from one tradition to the other is ontologically and epistemologically hazardous. In particular, qualitative findings lose integrity when reduced and operationalized into quantitative variables (for example, such reduction has become routine practice in developing quality-of-life instruments). Practitioners of the 2 research approaches can learn from each other but only by creatively adapting each other's ideas rather than by following a systematic logic.

The 2 health-research traditions are distinctive in what they look at, how they see it, and what they can learn. Contrary to popular misunderstandings, both rely on systematic empirical observation, and both generate empirical evidence. Appealingly, they address essentially different questions about the world, so their findings tend to complement rather than compete as contributions to knowledge. By considering qualitative evidence, clinicians gain new and useful insights about social phenomena in health that are simply not available in any other flavor.

Mita K. Giacomini, PhD
Centre for Health Economics and Policy Analysis, McMaster University
Hamilton, Ontario, Canada


References

1. Guyatt GH, Haynes RB, Jaeschke RZ, et al. Users' guides to the medical literature: XXV. Evidence-based medicine: principles for applying the users' guides to patient care. JAMA. 2000;284:1290-6.

2. Sackett DL, Haynes RB, Guyatt GH, Tugwell P. Deciding on the best therapy. In: Clinical Epidemiology: A Basic Science for Clinical Medicine. London: Little, Brown; 1991:187-248.

3. Popper KR. Science: conjectures and refutations. In: Fetzer JH, ed. Foundations of Philosophy of Science: Recent Developments. New York: Paragon House; 1993:41-60.

4. Greenhalgh T. How to read a paper: assessing the methodological quality of published papers. BMJ. 1997;315:305-8.

5. Giacomini MK, Cook DJ. A user's guide to qualitative research in health care: Part I. Are the results of the study valid? JAMA. 2000;284:357-62.

6. Giacomini MK, Cook DJ. A user's guide to qualitative research in health care: Part II. What are the results and how do they help me care for my patients? JAMA. 2000;284:478-82.

7. Altheide DL, Johnson JM. Criteria for assessing interpretive validity in qualitative research. In: Denzin N, Lincoln Y, eds. Handbook of Qualitative Research. London: Sage Publications; 1994:485-99.

8. Corbin J, Strauss A. Grounded theory research: procedures, canons, and evaluative criteria. Qualitative Sociology. 1990;13:3-23.

9. Devers KJ. How will we know “good” qualitative research when we see it? Beginning the dialogue in health services research. Health Serv Res. 1999;34:1153-88.

10. Elder NC, Miller WL. Reading and evaluating qualitative research studies. J Fam Pract. 1995;41:279-85.

11. Forchuk C, Roberts J. How to critique qualitative research articles. Can J Nurs Res. 1993;25:47-55.

12. Inui TS, Frankel RM. Evaluating the quality of qualitative research: a proposal pro tem. J Gen Intern Med. 1991;6:485-6.

13. Patton MQ. Enhancing the quality and credibility of qualitative analysis. In: Qualitative Evaluation and Research Methods. London: Sage Publications; 1990:460-506.

14. Rosenberg A. Why a philosophy of social science? In: Philosophy of Social Science. Boulder, CO: Westview Press; 1988:1-21.

15. Smith JK. Quantitative versus qualitative research: an attempt to clarify the issue. Educational Researcher. 1983;12:6-13.

16. Smith JK, Heshusius L. Closing down the conversation: the end of the qualitative-quantitative debate among educational inquirers. Educational Researcher. 1986;15:4-12.

17. Hughes J. The interpretive alternative. In: The Philosophy of Social Research. New York: Longman; 1990:89-112.

18. Stange KC, Zyzanski SJ. Integrating qualitative and quantitative research methods. Fam Med. 1989;21:448-51.

19. Goering P, Streiner DL. Reconcilable differences: the marriage of qualitative and quantitative methods. Can J Psychiatry. 1996;41:491-7.

20. Morgan DL. Practical strategies for combining qualitative and quantitative methods: applications to health research. Qual Health Res. 1998;8:362-76.

21. Morse J. Is qualitative research complete? Qual Health Res. 1996;6:3-5.