Evidence-Based Medicine and Practice Guidelines - An Overview

Steven H. Woolf, MD, MPH, Department of Family Practice, Medical College of Virginia - Virginia Commonwealth University, Fairfax, Virginia.

Cancer Control. 2000;7(4) 

Developing Evidence-Based Practice Guidelines

In general, evidence-based practice guidelines emerge from six steps that are conducted with varying intensity and in different sequences, depending on the topic.

The first step is to give precision to the focus of the review, specifying the target condition, the interventions to be reviewed, relevant patient populations and clinical settings, and outcome measures of significance. The boundaries for the search are also determined, such as bibliographic databases and exclusion criteria (eg, studies published before a given date, foreign-language articles, editorials, uncontrolled studies, nonhuman studies). An evidence model often helps to clarify the linkages in the analytic framework for which evidence is sought. [15]
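The focusing step described above is often captured in a written review protocol, commonly organized around the PICO framing (population, intervention, comparator, outcome). As a minimal sketch, the protocol might be represented as a structured record; the field names and example values below are illustrative, not drawn from any particular review:

```python
from dataclasses import dataclass, field

@dataclass
class ReviewProtocol:
    """Hypothetical record of a systematic review's scope and search boundaries."""
    target_condition: str
    interventions: list          # interventions to be reviewed
    population: str              # relevant patient population
    settings: list               # clinical settings of interest
    outcomes: list               # outcome measures of significance
    databases: list = field(default_factory=list)   # bibliographic sources
    exclusions: list = field(default_factory=list)  # exclusion criteria

# Illustrative example only:
protocol = ReviewProtocol(
    target_condition="colorectal cancer screening",
    interventions=["fecal occult blood test", "sigmoidoscopy"],
    population="average-risk adults aged 50-75",
    settings=["primary care"],
    outcomes=["cancer mortality", "adverse events"],
    databases=["MEDLINE", "EMBASE"],
    exclusions=["published before 1990", "uncontrolled studies",
                "nonhuman studies", "editorials"],
)
```

Making these boundaries explicit before searching is what allows later readers to judge whether relevant evidence could have been missed.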

The review of evidence follows procedures that have become standardized in recent years. [1] Three basic steps include (1) a comprehensive literature search, using explicitly documented search terms and other techniques to assure the reviewers and readers that all relevant evidence has been gathered; (2) critical appraisal of individual studies, using explicit analytic criteria to judge internal and external validity and documenting the findings in abstraction forms and evidence tables; and (3) synthesis of results, summarizing them in narrative text, evidence tables, or balance sheets. The last step may involve quantitative pooling of data in meta-analyses to estimate overall effect sizes, especially when individual studies lack statistical power, or the use of decision analytic models that predict outcomes under varied assumptions about determinant variables.
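The quantitative pooling mentioned above can be illustrated with the standard fixed-effect inverse-variance method, in which each study's effect estimate is weighted by the inverse of its variance so that larger, more precise studies dominate the summary. The numbers below are hypothetical:

```python
import math

def pooled_effect(effects, variances):
    """Fixed-effect inverse-variance pooling of study effect sizes.

    Each study's effect is weighted by 1/variance; returns the pooled
    estimate and its 95% confidence interval.
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Three small trials (log odds ratios), none individually conclusive:
effects = [-0.30, -0.10, -0.25]
variances = [0.04, 0.09, 0.06]
est, (low, high) = pooled_effect(effects, variances)
```

This is why meta-analysis is most valuable when individual studies lack statistical power: the pooled confidence interval is narrower than any single study's.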

In many systematic reviews, studies are assigned a "grade," or evidence code, that reflects the position of the study in a hierarchy of evidence quality. Several coding schemes exist. [16] In grading studies of the effectiveness of treatments, a common feature is to place randomized controlled trials (RCTs) at the top of the hierarchy, followed by observational and epidemiologic studies. Other coding schemes are appropriate for studies evaluating diagnostic tests, epidemiologic trends, and natural history. [17]
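As one example of such a hierarchy, the levels of evidence used by the US Preventive Services Task Force can be sketched as a simple lookup from study design to evidence code; this is only one of the several published schemes, and the design labels here are paraphrased:

```python
# Sketch of the USPSTF levels-of-evidence hierarchy for treatment studies.
# RCTs sit at the top (level I); expert opinion sits at the bottom (level III).
EVIDENCE_LEVELS = {
    "randomized controlled trial": "I",
    "controlled trial without randomization": "II-1",
    "cohort or case-control study": "II-2",
    "multiple time series": "II-3",
    "expert opinion or descriptive study": "III",
}

def grade_study(design: str) -> str:
    """Return the evidence level for a study design, or 'ungraded' if unknown."""
    return EVIDENCE_LEVELS.get(design.lower(), "ungraded")
```

A different lookup table would be needed for diagnostic test studies or natural history studies, as the text notes.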

Reviewers examine a variety of factors to assess internal validity (eg, study population, allocation to groups, interventions, outcome measures, attrition rates, statistical measurements). [18] The generalizability of the study population, intervention, and setting is considered to judge external validity. Over 20 instruments are available to grade the quality of RCTs, [19] but the Jadad scale is the best known and most extensively validated. [20]
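To make the idea of a quality instrument concrete, the Jadad scale's commonly described scoring items (randomization, blinding, and accounting for withdrawals, yielding a score of 0 to 5) can be sketched as follows; treat this as an illustration of the scoring logic, not an authoritative implementation:

```python
def jadad_score(randomized, randomization_appropriate,
                double_blind, blinding_appropriate,
                withdrawals_described):
    """Sketch of the 0-5 Jadad score for an RCT report.

    Points: described as randomized (1), appropriate randomization method
    (+1, or -1 if clearly inappropriate), described as double-blind (1),
    appropriate blinding method (+1, or -1 if clearly inappropriate),
    withdrawals and dropouts described (1). For the two "appropriate"
    arguments, None means the method was not described (no adjustment).
    """
    score = 0
    if randomized:
        score += 1
        if randomization_appropriate is True:
            score += 1
        elif randomization_appropriate is False:
            score -= 1
    if double_blind:
        score += 1
        if blinding_appropriate is True:
            score += 1
        elif blinding_appropriate is False:
            score -= 1
    if withdrawals_described:
        score += 1
    return max(score, 0)
```

Note that the scale rewards not just using randomization and blinding but reporting their methods, which is what a reviewer working from the published report can actually verify.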

Expert opinion plays a role in all practice guidelines. Even when evidence is available, subjective judgments are made in assessing the strength or generalizability of the evidence and in weighing the tradeoffs between benefits and harms. When evidence is lacking, groups differ on the extent to which they are willing to make recommendations based on opinion. A hallmark of EBM is to be explicit when opinion is used so that readers understand the basis for the recommendations and can make their own judgment about validity.

In an era of limited health care resources, guideline developers must often consider the cost effectiveness or cost utility of interventions. Other policy considerations, such as access to care, availability of qualified personnel and technology, insurance policies, and medicolegal implications are considered to varying degrees depending on the topic and panel philosophy. It is in this context that conflicts of interest among panel members become especially problematic. [21] Some groups rigidly avoid making opinion-based recommendations, instead offering the neutral conclusion that there is insufficient evidence to make a recommendation.
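The cost-effectiveness comparisons mentioned above typically rest on the incremental cost-effectiveness ratio (ICER): the extra cost of one intervention over another, divided by the extra health benefit gained. A minimal sketch, with entirely hypothetical numbers:

```python
def icer(cost_new, effect_new, cost_old, effect_old):
    """Incremental cost-effectiveness ratio: additional cost per
    additional unit of health effect (eg, dollars per quality-adjusted
    life-year, QALY) of the new intervention over the old one."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Hypothetical example: a new screening strategy costs $12,000 more per
# patient and yields 0.3 additional QALYs compared with current practice.
ratio = icer(cost_new=20000, effect_new=1.5, cost_old=8000, effect_old=1.2)
```

Guideline panels then judge whether the resulting cost per QALY is acceptable, a judgment that inevitably mixes evidence with policy values.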

The wording of recommendations receives great attention in practice guidelines since even the slightest nuances of language can have serious policy implications and affect the quality of patient care. A characteristic of evidence-based guidelines is the use of letter codes (eg, "A" recommendation) or recommendation categories (eg, "standards," "guidelines," "options") to reflect how strongly the intervention is recommended. Almost always, this grading scheme reflects the strength of supporting evidence.

For interventions with complex tradeoffs or equivocal evidence, the best choice may depend on the values patients assign to potential benefits and harms. Increasingly, guidelines on such topics eschew universal recommendations and instead urge physicians to help patients assess and weigh personal preferences in a process of shared decision making. [22] That process involves discussing with patients the potential benefits and harms of interventions and their likelihood, helping them decide how strongly they feel about potential outcomes, and determining from this information which option is best.
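The weighing process described above is, in formal terms, an expected-value comparison: each option's possible outcomes are weighted by their likelihood and by the utility the individual patient assigns to them. A minimal sketch, with hypothetical probabilities and utilities:

```python
def expected_value(option):
    """Weigh an option's outcomes: sum of (probability x patient utility),
    where utility runs from 0 (worst outcome) to 1 (best outcome)."""
    return sum(p * u for p, u in option)

# Hypothetical (probability, patient-assigned utility) pairs for two options:
surgery = [(0.85, 1.0), (0.10, 0.4), (0.05, 0.0)]   # cure, complication, death
watchful_waiting = [(0.60, 0.9), (0.40, 0.3)]       # stable, progression

best = max([("surgery", surgery), ("watchful waiting", watchful_waiting)],
           key=lambda pair: expected_value(pair[1]))
```

The key point of shared decision making is that the utilities belong to the patient: a different patient, assigning different values to the same outcomes, may rationally arrive at a different "best" option.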

As with any scholarly document, evidence-based guidelines are typically circulated in draft form to content experts to obtain feedback on the comprehensiveness of the review and the validity of the critical appraisal. The draft is also sent to stakeholders, such as relevant professional societies and advocacy organizations, for further feedback.