Addressing COVID-19 Misinformation on Social Media Preemptively and Responsively

Emily K. Vraga; Leticia Bode


Emerging Infectious Diseases. 2021;27(2):396-403. 


Methods

Study Design

In this study we considered the effectiveness of sharing on social media a WHO graphic that debunks 2 related coronavirus myths: that taking a hot bath both raises body temperature and prevents coronavirus infection (Figure). Scientific evidence suggests that hot baths affect body temperature only minimally; studies have found a change of roughly 0.5°C–1.0°C in body temperature.[11,12] Temperatures needed to deactivate coronavirus are typically >56°C,[13–15] a threshold that exceeds safe bath temperatures; scalding is likely within 10 minutes at 48°C.[16] In other words, this graphic explains the science behind why hot baths do not prevent COVID-19 and directly disputes the prevention efficacy of baths. The graphic follows many best practices for combating misinformation: it is fact-based, colorful, simple, and easy to understand; focuses on the fact rather than the myth; and includes a label signaling that it comes from an expert source.[7,9,10] These aspects fulfill many of the 5 Cs of correction: the correction is consensus based, includes corroborating evidence, and is consistent, coherent, and credible.[6] Addressing the science behind why hot baths do not prevent COVID-19 infection also corroborates the argument with a science-based alternative explanation, an approach shown to boost correction effectiveness.[6,7,17] Therefore, we expected that exposure to a post containing this graphic would reduce the 2 misperceptions targeted by the graphic as compared with beliefs among persons who did not see any information on the topic.

Figure.

Original World Health Organization myth buster graphic used in study of addressing COVID-19 misinformation on social media. COVID-19, coronavirus disease.

Such a graphic might be shared in multiple ways, which we also tested. The first factor manipulated whether the graphic was shared preemptively on a social media feed or in response to misinformation on the topic (we refer to this as placement). In the preemptive case, a user shares the graphic as a social media post without addressing the misinformation directly. Shared this way, it might function like a fact check, addressing an inaccurate claim made elsewhere but not directly linking to that claim on the social media platform.[18–20] Alternatively, the graphic could be shared in response to someone posting misinformation. These responsive corrections are a relatively common behavior[21] and reduce belief in misinformation among other social media users who witness the correction.[8,9,22] Given the relative dearth of research in this space, we explored whether preemptive or responsive posting strategies are more effective in reducing misperceptions.

The second factor manipulated who shared the information. Previous research on correction has emphasized the ability of an expert source like WHO to address misinformation[7,22,23] but offers mixed evidence about the effectiveness of a single user in correcting misinformation on social media.[22,24] Therefore, we expected that a graphic shared by WHO would more effectively reduce misperceptions than the same graphic (still with WHO branding) shared by an unknown Facebook user.

In addition, we explored the combination of these 2 elements: who shared a graphic and whether it was shared in response or preemptively. Although it is not clear how these 2 elements interact, several possibilities seem plausible. For instance, it might seem strange to see a powerful organization like WHO responding directly to misinformation, making this form of correction less effective for WHO but not for users. Alternatively, research suggests that a user debunking a myth preemptively using facts might be less effective than when sharing a correction after misinformation,[24] but we do not have research to determine whether this pattern should similarly hold for organizations. Although research does not clearly specify what to expect, the interaction between source and type of sharing is worth exploring.

Finally, too little correction research has investigated the enduring effects of exposure to misinformation and its correction. Some research suggests that corrections fade over time and that the myth could actually be reinforced through an illusory truth effect created by seeing misinformation repeated.[6,7] Alternatively, if the correction follows best practices by emphasizing facts and providing an alternative explanation, as we believe the WHO graphic does, lowered misperceptions may endure over time. Therefore, we tested whether the effects of correction persisted over 1 week.

Experimental Design

An experimental design enabled us to best consider the effects of who corrected and whether the correction was in response to misinformation or independent of it. This experiment received approval from the Institutional Review Board at the University of Minnesota on April 27, 2020.

We fielded a survey experiment to 1,596 participants during May 4–5, 2020 (wave 1) using Amazon's Mechanical Turk service (https://www.mturk.com). Of these, 1,453 were willing to continue participation and 1,419 passed an attention check in the first wave of the study; these participants were contacted 1 week later (on May 12, with a recontact on May 14) for a follow-up survey (wave 2). A total of 1,122 participants (79%) completed wave 2 an average of 7.5 days later (mean 7.54, SD 0.75).

Each participant viewed a screenshot of a Facebook feed and was asked to read it as if it were on their own feed (Appendix 1, https://wwwnc.cdc.gov/EID/article/27/2/20-3139-App1.pdf). The experiment consisted of 6 experimental conditions (Appendix 2, https://wwwnc.cdc.gov/EID/article/27/2/20-3139-App2.pdf): a pure control condition, a misinformation-only condition, and 4 correction conditions manipulated in a crossed factorial design with the 2 factors we described earlier: placement (preemptive versus responsive) and source (WHO versus user).
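To make the resulting design concrete, the following is a minimal sketch (ours, not part of the study materials; all condition labels are hypothetical, not the authors') of how the 6 cells enumerate from the control and misinformation-only conditions plus the 2 crossed factors:

```python
# Hypothetical sketch of the 6-cell design: a pure control, a
# misinformation-only cell, and a 2 x 2 crossing of placement
# (preemptive vs. responsive) and source (WHO vs. user).
# All condition names here are illustrative, not the authors' labels.
import itertools
import random

PLACEMENTS = ["preemptive", "responsive"]
SOURCES = ["who", "user"]

CONDITIONS = ["control", "misinfo_only"] + [
    f"correction_{placement}_{source}"
    for placement, source in itertools.product(PLACEMENTS, SOURCES)
]

def assign_condition(rng: random.Random) -> str:
    """Randomly assign one participant to one of the 6 conditions."""
    return rng.choice(CONDITIONS)

if __name__ == "__main__":
    rng = random.Random(2020)   # seeded only so the sketch is reproducible
    print(len(CONDITIONS))      # 6
    print(assign_condition(rng))
```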

In the pure control condition, participants viewed 5 control posts on the simulated feed. In the misinformation-only condition, they viewed the same 5 posts, with the addition of a misinformation post: a status posted by a user saying "This is such an easy thing to do! Take a hot bath to keep yourself healthy and protect you from coronavirus!" on a bright pink background.

For all correction conditions, participants viewed the same WHO infographic, which prominently labels the source, to isolate the effects of who is sharing the graphic rather than the graphic itself. Those who viewed the preemptive correction saw the correction infographic as the second post in the feed, posted either by WHO or by a social media user but with no misinformation post as part of the feed. Those who viewed the responsive correction saw the misinformation post described earlier, with the corrective graphic posted in response, either by a user or by WHO in the form of a WHO "info bot." Although no such bot exists as far as we know, WHO and Facebook have partnered to offer a Facebook messenger bot to answer user questions about coronavirus,[25] so this sort of correction is plausible, if not currently being deployed. Moreover, a bot offers a scalable and realistic responsive mechanism, rather than assuming that WHO would directly respond to individual Facebook users on their official feeds.

After exposure to the simulated Facebook feed in wave 1, participants answered questions about their beliefs regarding the myths targeted by the WHO graphic to measure misperceptions about body temperature and COVID-19 prevention (Appendix 3, https://wwwnc.cdc.gov/EID/article/27/2/20-3139-App3.pdf). These questions were repeated in wave 2 of the study.

Sample Characteristics

The 1,596 participants who completed our initial survey skewed male (62.9%) and highly educated (72% had a bachelor's degree or higher). Participants averaged 37 years of age (mean 36.94 years, SD 11.31 years), were relatively diverse in terms of race and ethnicity (18.5% African-American, 7.9% Asian-American, 70.6% White; 21.3% considered themselves Hispanic or Latino) and income (median $50,000–$75,000), and leaned Democratic (5-point scale, mean 3.73, SD 2.00) and liberal (5-point scale, mean 3.69, SD 1.93). These characteristics were consistent among participants who completed the second wave of the study (Appendix 2 Table 1).

Statistical Analysis

We performed 2 sets of analyses based on our preregistration.[26] First, we compared each of the experimental conditions with the pure control condition using linear regression to determine whether the corrections reduced misperceptions relative to baseline beliefs (absent any information regarding hot baths or COVID-19). We replicated these analyses for wave 2. Second, we isolated the effects of source and placement using a regression approach (not preregistered) that excluded both the control and misinformation-only conditions and entered the 2 factors (placement and source) as well as their interaction.
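As an illustration only (a sketch under our assumptions, not the authors' preregistered code), the 2 sets of analyses could be expressed as ordinary least squares models in Python's statsmodels; the column names misperception, condition, placement, and source are hypothetical:

```python
# Minimal sketch of the two analyses, assuming a participant-level
# DataFrame with hypothetical columns:
#   misperception -- continuous misperception score (per wave)
#   condition     -- one of the 6 condition labels
#   placement     -- 'preemptive' or 'responsive' (correction cells only)
#   source        -- 'who' or 'user' (correction cells only)
import pandas as pd
import statsmodels.formula.api as smf

def condition_vs_control(df: pd.DataFrame):
    """Analysis 1: compare each condition's misperceptions to pure control.

    Dummy-codes condition with 'control' as the reference level, so each
    coefficient estimates that condition's difference from baseline beliefs.
    """
    model = smf.ols(
        "misperception ~ C(condition, Treatment(reference='control'))",
        data=df,
    )
    return model.fit()

def placement_by_source(df: pd.DataFrame):
    """Analysis 2: 2 x 2 factorial on the 4 correction conditions only.

    Drops the control and misinformation-only cells, then enters placement,
    source, and their interaction ('*' expands to main effects + interaction).
    """
    correction_only = df[~df["condition"].isin(["control", "misinfo_only"])]
    model = smf.ols("misperception ~ C(placement) * C(source)",
                    data=correction_only)
    return model.fit()

# Usage sketch, run once per wave:
# print(condition_vs_control(wave1_df).summary())
# print(placement_by_source(wave1_df).summary())
```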
