The Didact Is Dead: Long Live CME Interactivity?

Robert M. Centor, MD; Robert W. Donnell, MD; Bradley P. Fox, MD; Désirée Lie, MD; R. Brian Haynes, MD, PhD; Robert W. Morrow, MD; Charles P. Vega, MD


March 26, 2010

Roundtable Question

What is the most effective way to educate? Have we evolved away from butts-in-seats didactic lectures to a more interactive form of education, with changes in practice and outcomes as the measure of success -- not the simple documentation of attendance at a lecture?

Bradley P. Fox, MD

There are many new models of education. One of our panel members, Robert W. Morrow, MD, has been a pioneer of small group learning. I am sure that he will respond to this question with a much better outline of small group learning than I could offer. If you want more information on the changes in continuing medical education (CME), read this Medscape article from December 2007: "Changing the Status Quo in Continuing Medical Education: A Discussion With Experts."

We tried several different forms of education of the masses at the 2009 American Academy of Family Physicians (AAFP) Scientific Assembly in Boston, Massachusetts -- for example, blast presentations, in which 3 presenters gave 15-minute bursts of information on a topic, followed by a 15-minute panel discussion and question and answer session. This concept has been used intermittently for years, but it seems to fit the shorter attention spans of today's physicians. There were also large group sessions that had breakouts within the room. Audience response systems were used to give the attendees instant feedback and a feeling of involvement.

The one innovation that I consistently use, but never report for credit, is point-of-care (POC) learning. POC has been available for AAFP credit for years, but it is not highly utilized. To earn credits, a physician, during a patient visit, needs to access one of several approved resources -- such as UpToDate, Inc. -- to address a problem that the patient has presented. The physician must then use the information gathered to provide treatment or education to the patient. For each instance of POC learning, a physician can claim 0.25 credits of CME with the AAFP. As an example, I had a new patient in the office last week who is 12 years old and has Blackfan-Diamond syndrome. I had never heard of this, so I opened up one of my online resources and was able to learn about the syndrome, print some information for my staff, and make a small change to this young girl's day-to-day and week-to-week treatment plan. To me, this is how education should be: real time; practical; and practice changing.

The physicians coming out of residency and medical school today are very different from those of us who went into practice in the late 1980s and early 1990s, and earlier. Although many of us are technologically savvy, the current group of grads knows no other media. They do not want to sit in 1000-seat lecture halls looking at 1-hour PowerPoint presentations. They tweet in sound bites and want 140 characters or fewer. "Get to the point; give me usable information; and give it to me now." They want their information available wherever they are: during their commute; while they are on Facebook; and while they see patients.

Désirée Lie, MD

I agree with Brad's viewpoint that CME offered as didactic instruction is unlikely to change behavior and practice patterns, and -- as some of the education literature suggests -- interactive, peer-generated discussion has more impact.

POC learning is certainly the way to go, but it is usually limited to individual rather than group learning. We should not underestimate the power of the community in stimulating sustainable learning and the impact of peer pressure on practice behavior. It would be great to see some outcomes studies comparing the differential effect of CME strategies on evidence-based practice.

Whatever model that we use for CME should be introduced early in the medical education pipeline, immediately after medical school, if possible.

Another approach that deserves examination is interprofessional CME/continuing education units (CEUs), in which members of several health professions -- nursing, medicine, and pharmacy, for example -- participate, focusing on case discussion and resolution from a multidisciplinary angle.

Yet another new CME strategy is the use of shared reflection, which involves small groups facilitated by experts to bring out learning. The goal would be to improve communication and time management, to reduce medical errors, or to reduce burnout among physicians. For example, physicians in these groups might participate in a writing exercise, sharing de-identified clinical cases from their practices on common themes from which they learn new clinical pearls.

I would like to see the Accreditation Council for Continuing Medical Education (ACCME) take a much broader approach to CME delivery than it has so far and to permit greater innovation.

Charles P. Vega, MD

I agree with both Desiree and Brad, and certainly appreciate the insight into the current state of affairs in regard to traditional CME. I'll never forget the lecture that I gave during which the audience missed more questions on the postlecture vs prelecture audience-response quiz. (Of course, this had to be the audience's fault, right? Not the lecturer -- no.)

Healthcare is evolving to examine much more critical and pertinent outcomes compared with pre- and post-CME tests. Brad alludes to POC learning in the office. This is an example of how CME could be applied directly to improve patient outcomes, and I think that physicians should receive more credit for their educational efforts that directly benefit patient care. Using education to improve performance in practice jibes well with the current movement away from a more centralized, prescribed format of CME, and improvements in technology could allow physicians to achieve and document this type of education in the course of their everyday practice. Broader measures to improve evidence-based care should also begin with CME, and the evaluation and follow-up of these interventions deserve CME credit as well.

Although physicians may understand the benefit of synergy between CME and performance measures, patients have a lot to gain from this model in terms of receiving higher-quality care. That is the most important outcome of all.

R. Brian Haynes, MD, PhD

In my view, the current didactic system of CME should be trashed because it is demonstrably broken. No one should pay for it or sponsor it because it has no proven value -- or worse still, has been shown to be ineffective in changing knowledge and actions.

This, of course, raises the matter of what should replace didactic CME. Systematic reviews of interventions to enhance physician performance have shown for some time that the following have some predictable effects (Table).

Table. Results From an Overview of Previous Systematic Reviews of Professional Behavior Change


Generally ineffective

  • Dissemination of printed educational materials

  • Didactic educational sessions

Mixed effects

  • Audit and feedback

  • Local opinion leaders

Generally effective

  • Reminders

  • Educational outreach

  • Multifaceted interventions

From Grimshaw J, Eccles M, Tetroe J. Implementing clinical guidelines: current evidence and future implications. J Contin Educ Health Prof. 2004;24(suppl 1):S31-S37.

Bottom line: Banish ineffective CME (printed educational materials and didactic educational sessions) and promote professional behavior change strategies that work.

Robert W. Donnell, MD

Several of my roundtable colleagues are critical of traditional lecture-based CME. Purported advantages of alternative methods include measurable clinical performance change and better user engagement through interactive formats. However, individual learning needs vary. I argue that multiple methods are needed, including traditional didactics. Although case-based methods are important because they seek to apply best evidence to clinical decisions, traditional lecture-based CME is more likely to address cutting-edge advances that have not been incorporated into performance measures. Traditional CME also adds depth by incorporating pathophysiology and background information, areas of learning that are essential but not easily linked to measurable processes. It would be simplistic and inappropriately restrictive to insist that all learning activities be directly translated into performance metrics. Many important dimensions of learning are intangible.

High-quality evidence concerning the effects of CME is almost nonexistent. The notable exception is worth looking at in some detail. It is a grand experiment involving an educational program that contains all the elements that my Roundtable colleagues find desirable: interactive format; performance measurement; immediate feedback; and rigorous adherence to "best practices." I'm referring to advanced cardiac life support. Quality evidence exists for both performance and patient outcomes. According to both levels of evidence, the program has failed. Studies have indicated that learner retention deteriorates rapidly over time.[1] Real-world adherence to the guidelines is as low as 40%.[2] Survival in cardiac arrest has been dismal, with negligible improvement over decades despite multiple evidence-based updates in course content and certification requirements for virtually all providers.[3] Exceptional improvements have been realized by only a handful of communities, which departed from the performance measures to employ methods of resuscitation developed by researchers at the University of Arizona.[4] Although considered new, these methods have been used in select communities for several years; even so, their penetration into CME has been limited to the very activities that many would abandon: the traditional lecture.

Robert W. Morrow, MD

Thanks for your thoughtful presentation, Robert. I think scholarly responses are feasible, and I have a few, but let me strike a middle road with this recent review in Academic Medicine.[5] Speaking for myself, the issue of lecture is intimately knotted with that of sponsorship, which really confuses the issues. If the question is how to improve clinical outcomes, then the issues are clearer.

The weight of evidence is that didactics alone do not cause practice change. Intelligent, experienced adults need to test ideas against their own activities to solve problems, and that testing needs the feedback of external reality, of context. Perhaps a trip to the science of cognitive load theory would help. In this fairly well-thought-out model, advanced learners need to fit ideas from active memory into learned schemas in long-term memory. Modeling of such schemas goes on through mirror neurons, which are highly contextual. This is great stuff for adult educators, but complex. In summary, unreadable slides and "cover it all" lectures leave scant impressions except for their overarching logical schema, insofar as it fits prior knowledge. However, only rehearsal and contemplation can bring new ideas into practice before they are forgotten (20 seconds in some studies).

However, what of the new, "cutting-edge" ideas? Don't they require didactics? For example, cutting-edge robotics only increases the cost of prostate cancer surgery without improving outcomes, despite lectures to the contrary. Lecturers with proprietary funding emphasize their data in systematic ways (framing) to convince learners that the new approach (robotics) fits the learned schema. An interactive discussion quickly tests these cutting-edge concepts against the learners' life experiences (learned schema). Cutting edge needs the lens of practice reality, but didactics rarely go there. Think about stent lectures.

Many useful, cutting-edge concepts do not make it to medical didactics because of silos created by those who control the agenda. One would hope that nationally funded agenda would insert engineering principles of safety and error avoidance into our CME system.

Did you like the cognitive load theory and mirror neuron theory? How about the chaos of large data sets? I'd sit with peers from other science fields to discuss these, but no context, no memory. Sorry, that's how primate brains work.

As an educator, I have no idea what my learners are thinking unless I ask them, and no idea what they will do unless I use a simulated interaction. I do not know their barriers, their biases, or their uptake of information. I do come prepared to trainings with solid teaching points and access to tools, though, including the Internet. Teaching adults is more difficult than PowerPoint, but much richer and challenging, and it allows for the context of the learners to be added.

Let me add a word about advanced life support, especially advanced life support in obstetrics. This brilliant course melds hands-on training with didactics, and has reduced perinatal mortality internationally. It will never have a randomized controlled trial because it works so well, and educational randomization is wickedly tough. Other evaluation methods have demonstrated success.

We are coming to an age when we can receive funding to examine these methods' questions through "implementation science." How do we make science work in the exam room, and in the community? Let's not blow it on didactic lectures.

Robert W. Donnell, MD

There's no simple answer to Bob Morrow's point about whether lecture-based CME changes practice. Maybe it doesn't change practice in a directly measurable way, but does it do so nonetheless, perhaps in intangible ways? Or is the question itself too simple to ask?

If CME content were restricted to performance measures, it would lack background information and be shallow. Let me give you an example from the didactic sessions of the 2009 Society of Hospital Medicine (SHM) meeting.

A speaker discussed the pathogenesis of healthcare-associated pneumonia. He made a very basic science point that affected me and deepened my understanding of the disease. I always had trouble understanding why patients in the intensive care unit are susceptible to gram-negative pneumonia, whereas healthcare workers in the intensive care unit, though exposed to the same bugs, are not. (They never get colonized with gram negatives.) It turns out that severe illness of any kind, or even severe physical trauma, quickly alters mucous membrane fibronectin expression. Because fibronectin regulates bacterial adhesion and colonization, this altered expression results in altered bacterial colonization. I never knew that before, and it was an "aha" moment for me. I'll never forget the pearl. Its impact on my practice will never be "measured," but to me it was valuable.

Lecture-based CME is full of background information of this sort that goes beyond mere facts and performance to build a conceptual framework that is necessary for a deeper understanding, one that results in fewer mistakes when rote memory fails or when patients don't fit the textbooks. The impact of this type of learning, complex and multifaceted, does not lend itself to any type of "metric."

I chose advanced cardiac life support because there's no more robust example of the type of interactive, performance-based learning that you advocate. There's also, to my knowledge, no better studied educational activity in terms of impact on performance and patient outcomes. Also, as I pointed out, the results -- at least as far as anyone has been able to measure -- are poor. Therefore, although I agree that there are no "measurable benefits" of didactic CME, neither are there measurable benefits for any other type of CME. It goes back to my previously mentioned point: We may be asking the wrong questions.

Robert W. Morrow, MD

I think the link to the meeting sessions that you sent is an excellent example of how not to do education because it immediately leads to distraction, overload of information, poor filtering of extraneous ideas, and yucky production values. I can do a review frame by frame, but I would be embarrassed to let this one out of my studio! It puts most learners to sleep in a few minutes, and speaks down to them.

The idea that basic science is excluded from interactive education is at best wrongheaded. One must know more to lead a facilitated discussion than to deliver a structured, one-track slideshow, and one must plan tactically and strategically how to focus on key issues. I frequently reach out to Web-based databases when teaching groups. Slides are forgotten quickly; pithy facts at the time of discussion of a simulated patient are not.

However, we can no more solve the educational issues of CME with a roundtable exchange than we can solve the issues of UME or GME. These are complex areas with varied literature, much of it from the neurocognitive world.

By the way, I'd like to point out a supplement in Chest published last year.[6]

Robert W. Donnell, MD

We disagree concerning the educational quality of the link that I sent, but on what basis? On the basis of the Chest articles that you sent (the evidence in that supplement was all over the place) and my previous comments about advanced cardiac life support training, neither of us can appeal to evidence to support our positions. Didactic presentations such as those to which I have linked have influenced my practice profoundly over the years. Perhaps different learners have different needs? Who am I to say what's best for you or anyone else? How do you know what works best for me?

I never intended to say that basic science content is excluded from interactive education. (Indeed, didactic lecture-based programs often use panels, breakout sessions, and audience-response systems to enhance learning). I did mean to say that restricting content to that which is directly associated with measurable behavioral change does exclude basic science content.

I am not arguing against any particular type of CME. I am asking: although new and interactive methods are useful, why trash didactic-based programs? Didactic CME works for me because I am an active learner in that format. I ask questions, review handouts, watch the archived presentations again, and go to primary sources. Many types of CME can be effective, but it all boils down to the individual participant.

Robert M. Centor, MD

CME is now required, but, amazingly, we do not specify the content of that education. I can receive CME on topics that I know, or topics that I do not need to know.

Pharmaceutical funding for CME has come under attack, and for good reason. When a company has a new drug for heart failure, it will fund heart failure talks, even if those talks do not specifically address the new drug. Companies with useful drugs (or even with minimally useful drugs) champion diseases. They want us thinking about their disease because that gives their representatives a chance to detail their options. Therefore, too often industry has defined the topics for CME.

We physicians need CME, and we need CME targeted to our specialties. We need input from practicing physicians and academicians to pick the topics on which we should focus.

I always thought that I had a handle on hypertension. However, last year I heard that a colleague had given a wonderful talk on resistant hypertension. Thus, I arranged for him to give his talk again. He taught me lessons about hypertension that have greatly improved my care of hypertension. I did not know that I needed that talk. If you had given me a list of 50 potential talks, I would not have selected hypertension in the top 25.

In another example, several years ago we needed to treat cellulitis differently with the emergence of community-acquired methicillin-resistant Staphylococcus aureus (MRSA). That new information should have been placed in every CME curriculum for family medicine physicians, internists, pediatricians, and emergency medicine physicians to speed diffusion of that important information.

Using the term curriculum reluctantly, perhaps we need a CME curriculum that has some required hours and some elective hours. The trick is identifying, in an unbiased way, the "hot topics" -- that is, the topics on which we need updating. We should approach true continuing education through prioritization and needs assessments. Topic selection should follow from an understanding of those issues that are changing and on which physicians need updating.

If I were the czar of ongoing education, I would establish learning communities. Each community would have an experienced clinician educator. That educator would present cases chosen for their educational impact. Communities could meet by phone, by Webinar, or in person. The goal would be for the learners to understand the concepts that are essential to patients.

For example, in 2010 we might have a vigorous discussion of any of the following:

  • Choice of antihypertensives;

  • Management of stage III chronic kidney disease;

  • Soft-tissue and skin infections;

  • Secondary prevention in cirrhosis; and

  • Management of community-acquired pneumonia.

However, I still see a role for excellent lectures. We can absorb important information from excellent lectures, lessons that will help patients in the future.

Many critics of CME come from the outcomes police, who argue that it does not positively change practice, with the assumption that we physicians do such a bad job that we need remediation.

The critics also make a crucial error in their demand for outcomes data. CME should be part of a process. Good CME can help us in subtle ways, including helping our thought processes. We might not easily find these improvements in a research study, but that fact does not invalidate the value of hearing an important talk.

For example, I remember hearing a talk in the early 1990s that totally changed how I thought about congestive heart failure. I have used those lessons in patient care and teaching for the past 20 years. Here's another example: A couple of years ago I went to a talk on hyponatremia that had a positive impact on teaching and practice. This issue is not common enough to show benefit in a study, but I have helped patients because of that talk.

I believe that the goal of CME should be to stimulate how we think about disease and patient care. We should strive to find the best speakers and have them speak on the most important topics. CME talks can have great value. We must find those examples and hold all CME to that higher standard.

Although some CME may lead to major changes in our practice and even our prescribing patterns, other CME may only affect the occasional patient. However, if the CME allows us to help 1 patient every few months, I would argue that that CME has substantial value.

