An Evaluation of Learning about COVID-19 Misinformation Via an Interactive Multimedia Narrative
As a result of the COVID-19 pandemic, online misinformation has proliferated, requiring innovative interventions to improve the public's information literacy. In this paper we report on an evaluation of an interactive narrative educational intervention developed as part of a design-based research project into microlearning and COVID-19 misinformation. The intervention aimed to enable learners to (1) name the role that fear and anger play in the spread of misinformation, and (2) identify a strategy for interrupting the spread of misinformation driven by fear or anger. Using a pre- and post-test design, we surveyed 195 Canadian women to evaluate whether the intervention achieved its learning outcomes. Results indicate that the intervention was effective in improving understanding of emotionally-driven misinformation, the role of emotions in the circulation of misinformation, and self-efficacy with respect to this type of misinformation. Based on these results, we suggest that short interactive narratives may be useful in education efforts aimed at addressing online misinformation and that this technique may have wide appeal to educators seeking accessible web tools for teaching this type of content.
Introduction
Misinformation, or information that is either intentionally or unintentionally false or misleading, has been a persistent public health problem throughout the COVID-19 pandemic (WHO, 2020). Educators and education researchers have also been concerned with misinformation during this time (e.g., Barzilai & Chinn, 2020; Brodsky et al., 2021; Kendeou, Robinson, & McCrudden, 2019; Rea, 2022), and in response to the rapid spread of COVID-19 misinformation online, many have developed strategies to intervene in the spread of misinformation and improve information literacy. Their tools and approaches are varied, ranging from policy and platform-change recommendations (Yaraghi, 2019), to lifelong learning approaches (Jaeger & Greene Taylor, 2021), to pre-bunking and debunking efforts (Garcia & Shane, 2021). In this paper we report on an interactive multimedia narrative designed as one such educational intervention into online misinformation in the context of the COVID-19 pandemic. The interactive narrative is the culmination of a design-based research (DBR) project that began in the early months of the COVID-19 pandemic in 2020.
As a research methodology, DBR tackles education problems in real-world contexts, which for this project is the realm of online social networks rife with misinformation. DBR is systematic in its approach and relies on iterative analysis, design, development, implementation, and evaluation (Wang & Hannafin, 2005). As part of a larger project on health misinformation education focused on women in online spaces, this project has resulted in the development of design principles and a theoretical framework for educational interventions into health misinformation (Veletsianos et al., 2022b; Houlden et al., 2022), as well as the evaluation of a previous iteration of the intervention (Veletsianos et al., 2022a). In this paper, we report on learning outcomes associated with a short multimedia interactive narrative aimed at educating users about some of the techniques used to spread misinformation and at empowering them with strategies for identifying and disrupting those techniques so that they avoid inadvertently spreading misinformation further. We next discuss literature relevant to this topic and the theoretical framework informing this research, and then present the research methods and findings.
Background
Overview of interactive narratives and games focused on misinformation
Research into misinformation and information literacy has grown substantially in recent years, with pandemic misinformation being an especially rich area of study (Ali, 2020). As indicated above, numerous approaches have been pursued in an effort to mitigate both the spread of misinformation and its impact (Pian et al., 2021). In this research project, an area of interest is interventions in the form of digital learning experiences aimed at responding to and mitigating misinformation. Broadly, we are interested in multimedia games and simulations, and more specifically in interactive narratives, which can be thought of as “stories whose unfolding, pace, and outcome can be influenced by some intervention by a spectator” (Cavazza & Young, 2017, p. 378). While education-focused interactive narratives, games, and simulations have some distinguishing characteristics, in this paper we proceed with the understanding that the main goal of such approaches is not to provide entertainment or fun (Abt, 1970), but instead to situate individuals in an active, decision-making role with the intention of fostering critical thinking, self-efficacy, awareness, and knowledge in the context of real-life applications (Backlund et al., 2008; Weyrich et al., 2021). Earlier research has demonstrated that such approaches, and in particular interactive narratives, may be advantageous for web-based learning interventions given their usefulness as active learning formats (Grasse et al., 2021; Green & Jenkins, 2014).
There are two main types of multimedia games and interactive narratives focused on misinformation. The first focuses on information literacy (i.e., on teaching people “to identify, find, evaluate, and use information effectively” (Information literacy, 2022)), and many such games aim to increase individuals’ ability to identify and debunk misinformation. Some, for example, focus on teaching users how to verify news sources by placing them in the role of a fact-checker, as in Factitious, MathE, and The Fake News Detective, games designed to inform players of the tools available for fact-checking and to increase awareness of the clues that indicate a lack of credibility (Grace & Hone, 2019; Junior, 2020; Katsaounidou et al., 2019; Yang et al., 2021). Others target vaccine education specifically (Ohannessian et al., 2016; Montagni et al., 2020), and several others have addressed COVID-specific misinformation, including COVID Challenge by Doctors Without Borders (Médecins sans Frontières, 2020), Covid-19 Você Sabia? by the Federal University of Minas Gerais (Gaspar et al., 2020), and a COVID-19 version of Factitious (Farley & Hone, 2020).
The second type of misinformation games and interactive narratives relies on inoculation theory, a framework for building resistance to persuasion developed by McGuire and colleagues (McGuire & Papageorgis, 1961, 1962; McGuire, 1964). Drawing on a biological metaphor, inoculation theory proposes that individuals can, through low-dose exposure to misinformation, be better equipped to identify and dismiss it, as though they have been immunized against it. Furthering the immunization analogy, the literature explores both prophylactic and therapeutic approaches (Compton, 2020): the positive impacts of inoculation have been evaluated both through preemptive warnings about misinformation (prophylactic) and through teaching about misinformation after someone may already hold beliefs shaped by it (therapeutic). For instance, Roozenbeek and van der Linden (2020) created Go Viral!, a game that took a prophylactic approach, exposing players to warnings about small pieces of COVID-19 misinformation with the goal of increasing their ability to identify misinformation. Results indicate that the preemptive warnings led to a significant increase in participants’ confidence in assessing COVID-19 misinformation, suggesting that the inoculation framework is a useful approach for educating about misinformation.
A number of these interventions have been developed by Roozenbeek, van der Linden, and their colleagues at the University of Cambridge. They first created Bad News (https://www.getbadnews.com/books/english/; see Figure 1), which placed players in the role of a disinformer and aimed to teach them to recognize six underlying strategies of misinformation: polarization, invoking emotions, spreading conspiracy theories, trolling, deflecting blame, and impersonation (Basol et al., 2020; Roozenbeek & van der Linden, 2019).
Figure 1
Bad News game (from Roozenbeek & van der Linden, 2019) (Credit: Tilt/University of Cambridge)
That framework was later adapted into the COVID-specific Go Viral! (Basol et al., 2021; see https://www.goviralgame.com/). Informed by inoculation theory, both games have focused on three areas: increasing the capability to identify misinformation, increasing confidence in one's ability to identify misinformation, and reducing the likelihood of sharing misinformation. While Go Viral! was assessed using self-reported evaluation data, a follow-up study of the original Bad News intervention found that playing inoculation theory-based games “significantly improves people’s ability to spot misinformation techniques compared to a gamified control group, and crucially, also increases people’s level of confidence in their own judgments” (Basol et al., 2020).
Multimedia approaches to addressing misinformation
Games are among the best examples of multimedia approaches to misinformation because they combine images, text, animations, and interactive storytelling to teach people how misinformation spreads. Bad News was evaluated with 15,000 individuals who, following gameplay, reported less trust in online news content such as tweets. This approach works well in part because of the principle of active refutation, which lends itself to a multimedia format: participants are asked to generate pro and counter arguments as a way to encourage stronger reasoning (Roozenbeek & van der Linden, 2019). Similarly, Harmony Square (https://harmonysquare.game/en) is an online game developed by Roozenbeek and van der Linden (2020) specifically to inoculate against political misinformation. Like Bad News, Harmony Square uses a graphical interface to expose people to common techniques for spreading political misinformation (Figure 2).
Figure 2
Harmony Square start page and gameplay (from Roozenbeek & van der Linden, 2020) (Credit: Tilt/University of Cambridge)
Importantly, multimedia does not have to include fancy graphical interfaces or extravagant animations, and Mayer (2009) has long cautioned against using seductive details in multimedia learning environments. For example, Cook et al. (2022) developed the simple Cranky Uncle game (crankyuncle.com), which used a cartoon interface and “humor based active inoculation” (2022, p. 7) to counter misinformation related to climate change (see Figure 3). This game could be described as a transmedia approach, as it involved web-based content, an online leaderboard, and in-person activities. For the game, researchers set up an installation in a park that taught participants about logical fallacies and then presented them, via cartoon posters, with different arguments that a “cranky uncle” would make about climate change, asking participants to use their newfound rhetorical knowledge to counter those arguments. The website and leaderboard were used to motivate people from different regions to play against one another, inspiring greater social engagement.
Figure 3
Cranky Uncle website showing cartoon images (from Cook et al., 2022) (Credit: John Cook)
Some scholars suggest that just as games can address misinformation, they can also spread it. For example, Davies (2022) described the conspiracy theory QAnon as an alternate reality game: because such games “favour collective and collaborative detective work to progress through the story, each participant contribut[es] with their own skills and expertise” (p. 65), and because play usually starts with clues that are followed to progress through the storyline, QAnon, which leads people through a conspiracy via opaque clues, problem solving, and collaborative detective work, draws people down a misinformation rabbit hole using gamification. Malki and Shaqrah (2019) suggest that social media itself is gamified, with likes, follows, comments, and shares functioning as a points system that rewards some actions over others. Importantly, these authors argue that the actions rewarded on social media tend to be ones that lead to the spread of misinformation, and recent studies have shown a link between social media use and belief in misinformation (e.g., Enders et al., 2021). Of course, social media platforms are inherently multimedia platforms even without the added gamified points system of engagement metrics like likes or shares, which is why organizations like Science Up First! recommend that scientists use social media platforms to debunk misinformation. Initiatives like Science Up First! recommend addressing misinformation with different media across different platforms, including images, video, text, in-person events, and shareable content in multiple languages. Each of these initiatives, whether designed intentionally as a game or unintentionally gamified through social media metrics, benefits from the technological affordance of interactivity. Interactivity guided by design principles, rather than added extraneously, can thus be a useful approach within a toolkit intended to teach about or respond to misinformation.
Study Rationale
Building on the research cited above, we developed a multimedia interactive narrative in the tradition of games like Harmony Square (Roozenbeek & van der Linden, 2020), meaning we used inoculation theory to ground our objectives. However, rather than design a digital game, we chose to develop an interactive narrative to assess the degree to which such a format might also benefit misinformation education, particularly when combined with inoculation theory, a pairing that has yet to be well studied. Moreover, while multimedia interactive narratives and digital games are related, they offer different strengths and weaknesses. For example, one advantage of interactive narratives is that they require less technical skill to build and can be developed using free tools, meaning that the barriers to creating such interventions are greatly reduced.
To develop our interactive narrative, we used microlearning strategies as part of the context of the narrative, including adding explicit information about how misinformation spreads, as described below. This information comprised key aspects of the intervention's learning objectives and was embedded directly into the narrative. Following recommendations in the microlearning literature, the information was conveyed with brevity and convenience, composed of short activities designed for high engagement and limited to one or two learning objectives. Accessibility and flexibility are also key (McLoughlin & Lee, 2011): no external resources or materials were required, and learners were actively engaged in achieving the learning objectives (Defelice & Kapp, 2019; Zhang & West, 2019). We chose this pedagogical strategy as a means to potentially increase the efficiency of, and engagement with, the learning associated with misinformation games, particularly in the context of noisy online environments that compete aggressively for users' attention.
We narrowed our focus to teaching one aspect of misinformation strategy, namely the exploitation of strong emotions to increase the spread of misinformation, and how to disrupt that spread through awareness of emotions. We focused on these learning outcomes for two reasons. First, emotional heuristics, or information-processing shortcuts, play a large role in how people assess whether or not something is true (Brashier & Marsh, 2020), and emotions also drive online engagement (Schreiner et al., 2021). By focusing on emotions, our interactive narrative aims to interrupt heuristic shortcuts through emotional self-awareness, which can potentially be used either to prompt disengagement or to initiate improved information habits based on evidence-based information literacy techniques. The interactive narrative and learning objectives we developed supplement both inoculation and information literacy approaches, since each benefits from a strong emphasis on the first step required for either to work: stopping when confronted with emotionally charged online content.
Second, disrupting the flow of misinformation by understanding how the relationship between anger, fear, and misinformation is strategically exploited does not require learners to have deep knowledge or skills. As noted, assessing the content of misinformation is a different type of engagement with information than connecting with the emotions that information might elicit: one does not have to be able to critically assess whether something is true if one focuses on how it makes one feel and negotiates engagement accordingly. Indeed, it may not even be that one is unable to do so, but that one does not have time to do so effectively. Moreover, this focus on emotions and stopping may not only improve individuals' self-efficacy with respect to misinformation and potentially disrupt the spread of misinformation itself by cutting its flow short when emotions are triggered, but may also make someone more likely to move on to the higher-order strategies used by inoculation theory and the information literacy approach. In other words, a tight focus could have multiple, complementary effects facilitated in a short period of engagement.
Present Study
To test our learning outcomes, we designed an interactive narrative using Twine, “an open-source tool for telling interactive, non-linear stories” (Twine, 2022). Titled Playing with your emotions: A game about how misinformation spreads, the interactive narrative aimed to teach participants about the role emotions play in spreading misinformation and to demonstrate the potential consequences of sharing misinformation. Two learning objectives guided our efforts. We designed the interactive narrative so that, by the end of participating in it, users would be able to:
- Name the role that anger and fear play in the spread of misinformation.
- Identify a strategy for interrupting the spread of misinformation driven by fear or anger.
We chose the first objective based on the growing body of literature that emphasizes the role anger and fear play in the spread of misinformation (Han et al., 2020; Rains et al., 2021), and the understanding that more attention needs to be brought to this aspect of misinformation (Chou & Budenz, 2020; Heffner et al., 2021). We chose the second learning objective to enable learners to tune into the experience of strong emotions in response to online content or information they know or suspect to be untrue because emotions are strong drivers of misinformation. In turn, we aimed to help learners note emotional responses as signals to slow down and pause before reacting through various forms of online engagement such as liking or commenting, even when comments are meant to be corrective in nature, which can inadvertently further the spread of misinformation.
Research Questions
Three questions guide this research. The first research question was intended to measure the effects of the first learning objective described above:
- Does the interactive narrative effectively teach people that fear or anger have a role to play in the spread of misinformation?
The second research question was intended to measure the effects of the second learning objective described above:
- Does the interactive narrative effectively teach people to identify at least one strategy for interrupting the spread of misinformation driven by fear or anger?
The final research question aimed to assess whether players of the interactive narrative increased their own sense of capacity to respond in a healthy way to misinformation that elicits strong emotions:
- Does playing through the interactive narrative result in improved self-efficacy about how to respond to fear- and anger-driven misinformation?
The Interactive Narrative
The interactive narrative begins with a microlearning lesson. In the introduction we explained to participants that researchers have found that strong emotional responses to posts seen on social media can lead people to share, comment on, or like those posts more often. We also explained that misinformation feeds on strong emotions like fear and anger, exploited in the hope that people will share it and spread it further. With this minimal amount of information provided, the narrative begins.
The narrative opens with Lauryn relaxing one day as her children play outside. She grabs her phone to check social media. At this point participants are given the option for Lauryn to open Instagram or Facebook. Once Lauryn opens the platform of the participant's choosing, she sees a post from Lee, a neighborhood parent she knows. To present this information, we used an online post generator to create posts that look like real Instagram and Facebook posts from news networks. The post states that health authorities are requiring children to be double vaccinated before returning to school in January, and warns that children who are not double vaccinated will have to switch to remote learning. The narrative explains that Lauryn is well informed and certain that this information is not true. The participant is told that Lauryn recognizes she feels a bit agitated after reading the post. At this point, participants are presented with two choices: comment on the post or do not comment. This is the main path division in the narrative (Figure 4), and depending on the participant's choice, the story unfolds in two different ways.
Figure 4
Main path division in the narrative
The first path offers participants the option of not commenting on the social media post. Participants who do not comment discover that Lauryn checks in with herself, realizes that she is no longer feeling relaxed, and decides not to act on that feeling. Later that day, Lauryn speaks with someone who is an authority on the topic and confirms that the post was indeed incorrect. Here we learn that Lauryn was exposed to misinformation, identified her emotional response to the information in the post, and helped curb the spread of false and misleading information by taking the time to think about what she should do. The main point of this path is to demonstrate how recognizing one's emotions, naming those emotions, and choosing to step away from content that stokes fear and anger can help stop the spread of misinformation.
If participants choose the second path, the option to comment, their experience is more complex. When Lauryn comments, she does so to point out that the information is incorrect. She exclaims, “Whoa! This isn’t true. Children under 12 are just starting to get the vaccine. There’s no way a timeline like that would be imposed.” Despite her effort to point out that this information is wrong, the post circulates on her social network’s feed. One of her friends, Jamie, sees the post but does not see Lauryn’s comment. Jamie's emotions overtake her, and she creates an online petition calling on the school board to repeal the decision. The petition gains traction and other parents become involved. Some of the parents take things too far and the campaign has unintended consequences (e.g., online abuse directed toward school officials). The main point of this path is to demonstrate how commenting on a post, even when the comment points out potential misinformation, can mislead others and have the unintended consequence of pushing the misinformation further. Thus, there is great benefit in not sharing online content that provokes feelings of anger and fear.
At the end of both paths, the learning outcome is presented again. We reiterated to participants that researchers have found that social media content spreads faster if it makes someone fearful or angry, and that online misinformation benefits from this effect. Based on previous aspects of this research (Veletsianos et al., 2022a), we recommended some simple steps for recognizing content that plays on a person's emotions, with the goal of helping people slow down their response to online information and thus slow the spread of misinformation. These steps include encouraging people to pause for a moment, reassess their emotional state, and then decide if it is worth responding. The false nature of the Instagram and Facebook posts participants saw was also underscored to minimize the likelihood of this effort inadvertently spreading misinformation among participants.
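For illustration, the branching structure described above can be sketched in Twine's Twee notation. The passage names and text below are abbreviated reconstructions for illustration, not the narrative's verbatim source:

```
:: Start
Lauryn is relaxing while her children play outside and grabs her phone.
[[Open Instagram->Post]]
[[Open Facebook->Post]]

:: Post
Lee has shared a post claiming children must be double vaccinated before
returning to school in January. Lauryn is certain this is not true, and
she notices she feels a bit agitated.
[[Comment on the post->Comment]]
[[Do not comment->Pause]]

:: Pause
Lauryn checks in with herself, realizes she is no longer relaxed, and
steps away. Later, an authority on the topic confirms the post was false.
[[Continue->Debrief]]

:: Comment
Lauryn replies that the post is false, but it keeps circulating. Jamie
sees the post (not the comment) and starts a petition that spirals into
unintended consequences.
[[Continue->Debrief]]

:: Debrief
Content that makes you fearful or angry spreads faster online. Pause,
reassess your emotional state, and then decide whether to respond.
```

Because both platform choices lead to the same post, the story has a single genuine branch point, which keeps the experience short while still giving participants a consequential decision.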
Methods
We employed a pre- and post-test design to examine the effectiveness of the interactive narrative. Next, we describe the methods used in this study.
Researcher Positionality
Our interdisciplinary research team was composed of six members, including two professors, two post-doctoral researchers, and two doctoral research assistants. Five members are women, three of whom are mothers, and one is a man, all working at Canadian universities. The team's disciplinary training includes education, media and communication studies, linguistics, and English and cultural studies. The research was funded through a COVID-19 rapid response federal funding opportunity granted in early 2020, with the overarching goal of developing targeted educational interventions into online health misinformation.
Participants
Potential participants had to meet the criteria of residing in Canada, being female, and being at least 18 years of age. This study's participants were women because the broader project focused on women and mothers, given the variety of factors that influence the ways in which people of different positionalities may or may not respond to online misinformation (e.g., Houlden et al., 2022). In total, 195 female participants took part in the study, with most being 30-39 (37.95%) or 40-49 (25.13%) years of age, and more than 70% of them having attended at least some college (see Table 1). Participants resided in Ontario (51.28%), Alberta (15.90%), British Columbia (11.28%), Manitoba (6.15%), Nova Scotia (4.10%), Saskatchewan (3.59%), Newfoundland and Labrador (3.08%), Quebec (2.05%), and New Brunswick (2.05%). One participant did not disclose their province of residence.
Table 1
Participants
| | Total (N=195) |
| --- | --- |
| Age | |
| 18-29 | 10 (5.1%) |
| 30-39 | 74 (37.9%) |
| 40-49 | 49 (25.1%) |
| 50-59 | 37 (19.0%) |
| 60-69 | 23 (11.8%) |
| 70-79 | 1 (0.5%) |
| NA | 1 (0.5%) |
| Education | |
| College | 77 (39.5%) |
| Doctoral degree | 4 (2.1%) |
| High school | 19 (9.7%) |
| MA | 38 (19.5%) |
| NA | 1 (0.5%) |
| Professional | 34 (17.4%) |
| Some college | 22 (11.3%) |
Data Collection
We collected data in December 2021 using the services of Prolific, a platform used by researchers, including misinformation researchers (Basol et al., 2021), to source prospective participants and deliver questionnaires (Palan & Schitter, 2018). In total, 195 Canadian women ages 18+ participated. Prior to engaging with the interactive narrative, we asked participants to consent to the collection of data and complete a pre-test consisting of statements that addressed the research questions and learning outcomes (e.g., the role of fear and emotion in spreading misinformation, strategies to avoid sharing misinformation online). We asked participants to rate each statement on a five-point Likert scale: 1) strongly agree, 2) agree, 3) neither agree nor disagree, 4) disagree, and 5) strongly disagree. Once participants entered all their responses, they were redirected to the interactive narrative.
Upon completing the interactive narrative, participants were presented with the post-test, a questionnaire that included the statements from the pre-test as well as demographic questions (e.g., age, education). Participants were also asked to answer one open-ended question that aimed to elicit what they had learned from the interactive narrative. Finally, because we focus on the spread of online misinformation, we asked participants how frequently they engaged with social media (daily, every few days, weekly, every few weeks, monthly, rarely, or never).
Data Analysis
Once data collection was completed, we imported the results into Microsoft Excel and R for analysis (R Core Team, 2021). As the questions presented in the pre-test were replicated in the post-test, we analyzed changes in the responses to each set of questions using paired t-tests, thus examining the hypothesis that the intervention was effective. We removed all blank responses prior to the analysis. As t-tests indicate whether a difference exists but not the nature of that difference, we also used descriptive statistical analyses to guide the interpretation of the results.
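As an illustration, the core of this analysis can be sketched in R as follows; the file and column names (responses.csv, pre_s1, post_s1) are hypothetical placeholders rather than the study's actual variable names:

```r
# Minimal sketch of the paired analysis in R, assuming responses are coded
# 1 (strongly agree) through 5 (strongly disagree), with one pre/post column
# pair per statement. File and column names are illustrative placeholders.
responses <- read.csv("responses.csv")

# Keep only participants who answered the statement on both administrations
complete <- complete.cases(responses$pre_s1, responses$post_s1)
pre  <- responses$pre_s1[complete]
post <- responses$post_s1[complete]

# Paired t-test of pre- versus post-test responses; under this coding, a
# positive mean difference (pre minus post) corresponds to a shift toward
# stronger agreement after the intervention
t.test(pre, post, paired = TRUE)

# Descriptive counts per response category to guide interpretation
table(factor(pre, levels = 1:5))
table(factor(post, levels = 1:5))
```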
Results
Does the interactive narrative effectively teach people that fear or anger have a role to play in the spread of misinformation?
Table 2 shows results from the analyses of the first four statements, which address research question 1, and reveals statistically significant changes between pre- and post-test for three of the four statements. Results from the analysis of the pre- and post-test responses to statement 1 reveal a significant increase in awareness of the role of anger in the spread of misinformation online (t = 3.32, df = 194, p = .001, CI95 = 0.094 : 0.368). Changes are most noticeable in the number of individuals who strongly agreed with the statement in the post-test when compared to the pre-test (see Table 2). In the case of statement 2, results show statistically significant differences between the responses to the pre- and post-test (t = 2.85, df = 194, p = .005, CI95 = 0.058 : 0.321), with an increase in strong agreement with the statement that misinformation that generates fear is more likely to spread online, a related decrease in agreement, and relatively unchanged remaining categories. This suggests that the interactive narrative was effective in raising awareness of the role of fear in the spread of misinformation online.
We also found statistically significant results in the analysis of pre- and post-test responses to statement 3 (“Misinformation that is controversial is more likely to spread online”) (t = 2.47, df = 194, p = .014, CI95 = 0.037 : 0.332). These results indicate that the interactive narrative was effective in increasing participants’ awareness of the role of controversy in the spread of online misinformation. There was an increase in the number of individuals who strongly agreed with the statement in the post-test when compared to the pre-test, and a decrease in those who agreed, following the same pattern as statement 2. Finally, for statement 4, although there was an increase in the post-test in the raw number of participants who agreed with the statement that “misinformation that uses humor is more likely to spread online” when compared to the pre-test, results were not statistically significant (t = 0.46, df = 194, p = .647), indicating that the interactive narrative did not lead to significant changes in participants’ views on this topic.
Table 2
Responses to statements addressing Research Question 1
Does the interactive narrative effectively teach people to identify at least one strategy for interrupting the spread of misinformation driven by fear or anger?
Results from the analysis of questionnaire statements 5 to 10, which address research question 2, are shown in Table 3 and reveal statistically significant changes between pre- and post-test answers for all statements. For statement 5, the analysis shows statistically significant results (t = 5.52, df = 194, p < .001, CI95 = 0.27 : 0.571), with an increase in strong agreement in the post-test with the statement that pausing when coming across information that makes you angry or afraid is a strategy to slow the spread of misinformation, and a decline in agreement as participants presumably shifted toward stronger agreement. These results suggest that participants became more aware of the importance of pausing when encountering information that makes them angry or afraid as a step to slow the spread of misinformation online. Results from the analysis of responses to statement 6 (“One strategy to slow the spread of misinformation is to identify how it makes you feel”) also revealed significant differences between pre- and post-test (t = 9.39, df = 193, p < .001, CI95 = 0.627 : 0.961): 84% of participants who neither agreed nor disagreed, disagreed, or strongly disagreed in the pre-test shifted to agree or strongly agree with the statement in the post-test. These results suggest the efficacy of the interactive narrative in conveying the value of assessing one’s feelings as a way to slow the spread of online misinformation.
Analysis of the responses to statement 7 (“One strategy to slow the spread of misinformation is to avoid commenting on social media”) reveals significant differences in the level of agreement between the pre- and post-test (t = 6.94, df = 193, p < .001, CI95 = 0.483 : 0.867). Following a similar pattern to statement 6, 84% of those who neither agreed nor disagreed, disagreed, or strongly disagreed in the pre-test appeared to either agree or strongly agree with the statement in the post-test, suggesting that the interactive narrative was effective in raising participants’ awareness of the value of avoiding commenting on social media as a strategy to slow the spread of misinformation. Regarding statement 8 (“One strategy to slow the spread of misinformation is to correct posts you know are false or misleading”), the analysis also reveals statistically significant changes between pre- and post-test responses (t = -11.19, df = 194, p < .001, CI95 = -1.786 : -1.250). Strong agreement and agreement with the statement decreased by 57.1% and 71.4%, respectively, in the post-test, suggesting that after the interactive narrative participants became more aware of the role that correcting misinformation can play in spreading it further. Relatedly, there was a large increase in the number of those who disagreed with the statement in the post-test (n=48, over three times the number of responses from the pre-test, n=14).
Results from the analysis of responses to statement 9 (“One strategy to slow the spread of misinformation is to debunk or fact check people’s posts”) also reveal statistically significant differences between pre- and post-test answers (t = -10.14, df = 194, p < .001, CI95 = -1.538 : -1.037). Notable trends include a decrease of over 52% in the number of participants who strongly agreed with the statement in the post-test when compared to the pre-test, as well as a decrease of 31% in the number of participants who agreed. Conversely, participants were four times more likely to disagree with the statement in the post-test and over 23 times more likely to strongly disagree with it, suggesting the effectiveness of the interactive narrative in raising awareness of how debunking or fact checking can increase the visibility of misinformation and, as a result, help it spread online. Finally, for statement 10 (“One strategy to slow the spread of misinformation is to avoid social media”), the analysis revealed statistically significant changes between pre- and post-test responses (t = 1.97, df = 194, p = .050, CI95 = <0.001 : 0.502). Notable results include the increase in those who strongly agreed with the statement in the post-test (n=80) when compared to the pre-test (n=57), and the decrease in those who initially disagreed with the statement, from 40 in the pre-test to 25 in the post-test, suggesting that, post-intervention, participants perceived avoiding social media as a possible strategy to avoid spreading misinformation.
Table 3
Responses to statements addressing Research Question 2
Does playing through the interactive narrative result in improved self-efficacy about how to respond to fear- and anger-driven misinformation?
Table 4 shows results from the analysis of questionnaire statements 11 to 14, which address research question 3. These results reveal statistically significant changes between pre- and post-test answers for most statements. For statement 11 (“When I am online, I feel confident in dealing with online information that makes me angry”), the analysis reveals statistically significant results when comparing the pre- and post-test (t = 5.08, df = 193, p < .001, CI95 = 0.306 : 0.694). There was a decrease of over 72% in the number of participants who neither agreed nor disagreed (n=33 in the pre-test, n=9 in the post-test) and of over 65% in those who disagreed with the statement (n=23 in the pre-test, n=8 in the post-test); conversely, the share of participants who strongly agreed increased to 52.3% (n=102) from 31.3% (n=61). Results suggest an increase in self-efficacy, with participants indicating higher confidence in their ability to deal with online information that makes them angry.
Regarding statement 12 (“When I am online, I feel confident in dealing with online information that makes me feel afraid”), the analysis also reveals statistically significant changes between pre- and post-test responses (t = 4.35, df = 194, p < .001, CI95 = 0.235 : 0.626). Noteworthy results include the increase in the number of those who strongly agreed with the statement in the post-test (n=97) when compared to the pre-test (n=59) and the decrease in those who initially neither agreed nor disagreed, from 30 in the pre-test to 12 in the post-test. Additionally, there was a decrease of approximately 53% in those who initially disagreed with the statement, suggesting an increase in participants’ confidence in dealing with online information that makes them feel afraid.
Analysis of the responses to statement 13 (“When I am online, I feel confident in my ability to recognize when my feelings of anger or fear might lead me to spread misinformation”) shows statistically significant differences between pre- and post-test responses (t = 3.58, df = 193, p < .001, CI95 = 0.139 : 0.480). Notable here were the increase in those who strongly agreed with the statement, from 80 in the pre-test to 119 in the post-test, and the decrease in those who neither agreed nor disagreed, from 20 in the pre-test to 11 in the post-test. These results suggest that participants felt more confident about their ability to recognize feelings that might result in them sharing misinformation online. Finally, results of the analysis of responses to statement 14 (“When I am online, I feel confident in my ability to avoid spreading misinformation that makes me angry or afraid”) reveal that the changes between pre- and post-test were not statistically significant (t = 0.44, df = 194, p = .659), suggesting no statistically significant change in participants’ confidence in avoiding spreading misinformation that makes them feel afraid or angry.
Table 4
Responses to statements addressing Research Question 3
Discussion & Future Research
Misinformation is a complex problem that requires multiple strategies to effectively mitigate its widespread circulation online as well as the ongoing harmful impacts associated with it. The problem has been studied from multiple related angles by both education and misinformation studies researchers. Contributing to this interdisciplinary work, this research demonstrates that (a) much like digital educational games, multimedia educational interactive narratives have a role to play in mitigation efforts, and (b) even short narratives based on microlearning protocols may offer benefits. Notably, the findings from this study offer a potentially useful starting point for rolling out rapid responses to misinformation, as such interventions are less involved to produce than longer, more complicated ones, and freely available online tools such as Twine enable their development. Although the interactive narrative studied here can be completed in just a couple of minutes, the pre-post assessment indicated growth in key areas: improved understanding of the relationship between strong emotions and misinformation, improved understanding of strategies for disengaging from emotionally-charged content online, and improved self-efficacy for recognizing the experience of emotions that might trigger the spread of misinformation. All of these can potentially support both immediate intervention into misinformation and more complex efforts to educate about misinformation.
While participants reported improvements in confidence in dealing with online information that makes them angry or afraid, as well as in recognizing the emotions that might result in sharing misinformation, we did not observe a substantial pre- to post-test difference in confidence in avoiding spreading such misinformation. Nevertheless, with respect to what to do when one comes across misinformation, the pre-test results suggest that participants thought debunking was a good approach to take. After playing through the interactive narrative, these opinions dispersed, with an increase in disagreement and strong disagreement, which we deem a successful result in our effort to teach people how to slow the spread of misinformation online. Significantly, while debunking can be a useful strategy, engaging with misinformation in any way, even when attempting to debunk it, is risky, as doing so may spread misinformation further. When participants stop and disengage, misinformation flows may be disrupted, thereby reducing the risks still associated with debunking or correcting, such as the lingering consequences of the continued influence effect, in which people’s opinions can still be affected by false information even when they know it to be untrue (Brashier & Marsh, 2020).
Choosing not to correct misinformation may seem counter-intuitive, but given the volume of information people interact with online on a daily basis, understanding when to step back is profoundly relevant, as noted by misinformation scholars such as Caulfield (2019). This is especially true in light of the way social media algorithms are designed to exploit strong emotions in order to sustain engagement, even if that engagement is socially destructive or harmful. Recent research about the limits of fact-checking and the weaponization of facts in service of propaganda echoes this understanding: it is not enough for information to be true or accurate; it must also be understood in context and against other claims and evidence (How to Mislead with Facts, 2022). Moreover, for both inoculation methods (e.g., Basol et al., 2020; Roozenbeek & van der Linden, 2019) and information literacy to be effective, individuals need to be motivated and aware enough to stop before engaging with online information that may in fact be misinformation. In other words, the results underscore the idea that certain forms of inoculation are not the only way to approach the problem of misinformation, and that while literacy that includes understanding source and context is important, simply encouraging mindful disengagement may prove a useful strategy in the fight against misinformation.
The pre-test evaluation allowed us to better understand the degree to which people already knew about the connection between misinformation and emotions. For example, the pre-test results for the first two statements, which assessed whether people had already established an understanding of the relationship between fear, anger, and misinformation, indicated that more than half of all participants strongly agreed that this relationship affects the spread of misinformation online. This suggests that some portion of the population has a previously established sense of the techniques exploited by misinformation, though where this knowledge came from is unclear (i.e., to what extent is this becoming a kind of common knowledge?). Further research into how people understand misinformation in general would be useful, given the sustained efforts of educators and media to improve public understanding of misinformation in general and health misinformation in particular, such as through the many games discussed above (e.g., Basol et al., 2020; Basol et al., 2021; Roozenbeek & van der Linden, 2019). Because of this context, we also do not have a good sense of how this interactive narrative might affect users beginning from a different baseline of little understanding of the techniques used to promote misinformation. Youth or seniors, for example, may have a different understanding of misinformation, suggesting that further research might focus on different age groups to test the effectiveness of this interactive narrative with people beginning from different information literacies. Trans-cultural studies would also be of benefit, to assess which places and communities lack these skills and which education interventions and programs are working in which settings.
The interactive narrative itself was designed with a more prophylactic inoculation approach in mind (Compton, 2020), as the learning outcomes targeted teaching about the nature of misinformation rather than reversing misinformed perspectives. As such, further research ought to apply similar techniques with a therapeutic inoculation approach, to examine whether similar effects can be achieved with people who already hold beliefs shaped by misinformation. For example, researchers could purposively sample for anti-vaccination attitudes to assess the effects of the narrative, not with the intention of changing people's position on a topic, but with that of teaching the meta-skills of information literacy that inoculation approaches foster.
Limitations
This study presents an effective microlearning strategy to help people slow the spread of misinformation. While the findings are promising, there are important factors to consider in thinking about how this work applies to broader contexts. First, we included an immediate assessment of outcomes relying on self-reported measures in the context of an interactive narrative. This means that participants answered the post-test questions directly after they completed the intervention. What remains unclear is whether the information learned will persist or fade over time, and whether behavioral impacts will be observed in real-world settings.
Second, the sample is specific to Canadian women, the majority of whom were college educated and in their 30s and 40s. This means the results are limited to individuals with a similar background, and we cannot confidently generalize them to other groups. The perception of misinformation has cultural and worldview components (Cook, Ecker, & Lewandowsky, 2015), and people from diverse cultural and social backgrounds may rely on a different set of heuristic cues when responding to online information.
Third, we cannot be certain that the statistically significant results were a product of the intervention, nor can we be certain which aspect of the intervention produced them. It is possible that participants had time to reflect on the pre-test questions before revisiting them in the post-test; to account for this possibility, we minimized the time between the pre- and post-tests as a way to limit the acquisition of further information from other sources. With respect to which aspect of the intervention led to statistically significant results, it is worth emphasizing that several dimensions of the intervention may have influenced the results, including the interactive narrative, the authoritative statements, and the reinforcing reminders at the end of the intervention. While we cannot say exactly what led to the changes between the pre- and post-test, and therefore cannot make specific claims about particular dimensions of this intervention, our aim was to evaluate the intervention as a whole. Researchers interested in assessing specific factors or dimensions of similar interventions will need to isolate the information or activities that may influence learning outcomes, an issue raised by multimedia learning researchers aiming to study particular factors in multimedia learning environments (e.g., see Clark & Choi, 2005).
Fourth, this analysis used t-tests, which test a null hypothesis but do not offer specific information about the nature of the differences found in the data. To address this and examine the statistically significant results, we also used descriptive statistical analysis to guide the interpretation of the data. Future studies might benefit from inferential analyses to determine the degree to which other variables (such as socioeconomic status) might influence participants’ responses.
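For instance, one such follow-up could regress post-test responses on participant characteristics using ordinal logistic regression; the sketch below assumes hypothetical predictor variables (ses, age_group) that extend beyond the demographics collected here:

```r
# Hypothetical sketch of an inferential follow-up using ordinal logistic
# regression (MASS is bundled with standard R installations). The variables
# post_s1, ses, and age_group are illustrative assumptions, not measures
# collected in this study.
library(MASS)

responses$post_s1_ord <- factor(responses$post_s1, levels = 1:5, ordered = TRUE)
model <- polr(post_s1_ord ~ ses + age_group, data = responses, Hess = TRUE)
summary(model)  # coefficients indicate how predictors shift response odds
```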
Conclusion
Recent research has demonstrated that educational games and interactive narratives can improve people's understanding of the problem of misinformation while empowering them to better identify and reject it. Results from this interactive narrative similarly suggest the benefit of such interventions: a short narrative with a narrow focus on providing the knowledge and skills necessary to interrupt the exploitation of emotions that accompanies much online misinformation can be a potentially highly useful tool in the work to diminish the reach of online misinformation.
Acknowledgements
This work was supported by the Canadian Institutes of Health Research (CIHR) grant number 170367.
References
Backlund, P., Engstrom, H., Johannesson, M., Lebram, M., & Sjoden, B. (2008, July). Designing for self-efficacy in a game based simulator: An experimental study and its implications for serious games design. In 2008 International Conference Visualisation (pp. 106-113). IEEE. https://ieeexplore.ieee.org/document/4568680
Barzilai, S., & Chinn, C. A. (2020). A review of educational responses to the “post-truth” condition: Four lenses on “post-truth” problems. Educational Psychologist, 55 (3), 107-119. https://doi.org/10.1080/00461520.2020.1786388
Basol, M., Roozenbeek, J., Berriche, M., Uenal, F., McClanahan, W. P., & van der Linden, S. (2021). Towards psychological herd immunity: Cross-cultural evidence for two prebunking interventions against COVID-19 misinformation. Big Data & Society, 8 (1), 1-18. https://doi.org/10.1177/20539517211013868
Basol, M., Roozenbeek, J., & van der Linden, S. (2020). Good news about bad news: Gamified inoculation boosts confidence and cognitive immunity against fake news. Journal of Cognition, 3 (1), 2. https://doi.org/10.5334/joc.91
Brashier, N.M., & Marsh, E.J. (2020). Judging truth. Annual Review of Psychology 71, 499-515. https://doi.org/10.1146/annurev-psych-010419-050807
Brodsky, J. E., Brooks, P. J., Scimeca, D., Galati, P., Todorova, R., & Caulfield, M. (2021). Associations between online instruction in lateral reading strategies and fact-checking COVID-19 news among college students. AERA Open, 7, 23328584211038937
Caulfield, M. (2019, June 19). SIFT (the four moves). Hapgood. Retrieved February 11, 2022, from https://hapgood.us/2019/06/19/sift-the-four-moves/
Cavazza, M. & Young, R. M. (2017). Introduction to interactive storytelling. In R. Nakatsu, M. Rauterberg, & P. Ciancarini (Eds.). Handbook of digital games and entertainment technologies (pp. 361-375). Singapore, Springer Science+Business Media. https://link.springer.com/referencework/10.1007/978-981-4560-50-4
Chou, W.-Y. S., & Budenz, A. (2020). Considering emotion in COVID-19 vaccine communication: Addressing vaccine hesitancy and fostering vaccine confidence. Health Communication, 34 (14), 1718-1722. https://doi.org/10.1080/10410236.2020.1838096
Clark, R. E., & Choi, S. (2005). Five design principles for experiments on the effects of animated pedagogical agents. Journal of Educational Computing Research, 32 (3), 209-225. https://doi.org/10.2190/7LRM-3BR2-44GW-9QQY
Compton, J. (2020). Prophylactic versus therapeutic inoculation treatments for resistance to influence. Communication Theory, 30, 330-343. https://doi.org/10.1093/ct/qtz004
Cook, J., Ecker, U. K., Trecek-King, M., Schade, G., Jeffers-Tracy, K., Fessmann, J., ... & McDowell, J. (2022). The cranky uncle game—Combining humor and gamification to build student resilience against climate misinformation. Environmental Education Research, 1-17. https://doi.org/10.1080/13504622.2022.2085671
Cook, J., Ecker, U., & Lewandowsky, S. (2015). Misinformation and how to correct it. In R. Scott & S. Kosslyn (Eds.). Emerging Trends in the Social and Behavioral Sciences (pp. 1-17). John Wiley & Sons.
Cook, J., Lewandowsky, S., & Ecker, U. K. H. (2017). Neutralizing misinformation through inoculation: Exposing misleading argumentation techniques reduces their influence. PLoS ONE, 12 (5), 1-21. https://doi.org/10.1371/journal.pone.0175799
Davies, H. (2022). The gamification of conspiracy: QAnon as alternate reality game. Acta Ludologica, 5 (1), 60-79. https://www.ceeol.com/search/article-detail?id=1049996
Defelice, R., & Kapp, K. (2019). Microlearning: Short and sweet. Association for Talent Development.
Enders, A. M., Uscinski, J. E., Seelig, M. I., Klofstad, C. A., Wuchty, S., Funchion, J. R., Maohar, N. M., Premaratne, K., & Stoler, J. (2021). The relationship between social media use and beliefs in conspiracy theories and misinformation. Political Behavior, 1-24. https://doi.org/10.1007/s11109-021-09734-6
Farley, M. & Hone, B. (2020). Factitious: Pandemic Edition [Video game]. AU Game Studio.
Garcia, L., & Shane, T. (2021, June 29). A guide to prebunking: A promising way to inoculate against misinformation. First Draft. Retrieved February 4, 2022, from https://firstdraftnews.org/articles/a-guide-to-prebunking-a-promising-way-to-inoculate-against-misinformation/
Gaspar, J. D. S., Lage, E. M., Silva, F. J. D., Mineiro, É., Oliveira, I. J. R. D., Oliveira, I., Souza, G. D., Gusmão, J. R. O., Souza, C. F. D. D., & Reis, Z. S. N. (2020). A mobile serious game about the pandemic (COVID-19—Did You Know?): Design and evaluation study. JMIR Serious Games, 8 (4), e25226. https://doi.org/10.2196/25226
Grace, L., & Hone, B. (2019). Factitious: Large scale computer game to fight fake news and improve news literacy. Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems, 1–8. https://doi.org/10.1145/3290607.3299046
Grasse, K.M., Melcer, E.F., Kreminski, M., Junius, N., & Wardrip-Fruin, N. (2021). Improving undergraduate attitudes towards responsible conduct of research through an interactive storytelling game. CHI ‘21 extended abstracts: CHI conference on human factors in computing systems extended abstracts, Yokohama, Japan. https://doi.org/10.1145/3411763.3451722
Green, M. C., & Jenkins, K. M. (2014). Interactive narratives: Processes and outcomes in user-directed stories. Journal of Communication, 64 (3), 479-500. https://doi.org/10.1111/jcom.12093
Han, J., Cha, M., & Lee, W. (2020, September 17). Anger contributes to the spread of COVID-19 misinformation. Harvard Kennedy School Misinformation Review 1 (3). Retrieved January 20, 2022, from https://misinforeview.hks.harvard.edu/article/anger-contributes-to-the-spread-of-covid-19-misinformation/
Houlden, S., Veletsianos, G., Hodson, J., Reid, D. and Thompson, C.P. (2022). COVID-19 health misinformation: using design-based research to develop a theoretical framework for intervention. Health Education, 122 (5), 506-518. https://doi.org/10.1108/HE-05-2021-0073
How to Mislead with Facts. (2022). The Consilience Project. Retrieved January 31, 2022. https://consilienceproject.org/how-to-mislead-with-facts/
Information literacy. (2022). Common sense education. Retrieved January 20, 2022, from https://www.commonsense.org/education/digital-citizenship/information-literacy
Jaeger, P.T. & Greene Taylor, N. (2021). Arsenals of lifelong information literacy: Educating users to navigate political and current events information in world of ever-evolving misinformation. The Library Quarterly: Information, Community, Policy, 91(1), 19-31. https://doi.org/10.1086/711632
Junior, R. B. (2020). The fake news detective: A game to learn busting fake news as fact checkers using pedagogy for critical thinking. SMARTech: Georgia Tech Library. Retrieved January 20, 2022, from https://smartech.gatech.edu/handle/1853/63023
Katsaounidou, A., Vrysis, L., Kotsakis, R., Dimoulas, C., & Veglis, A. (2019). MathE the game: A serious game for education and training in news verification. Education Sciences, 9 (2), 155. https://doi.org/10.3390/educsci9020155
Kendeou, P., Robinson, D. H., & McCrudden, M. T. (Eds.). (2019). Misinformation and fake news in education. Information Age Publishing.
Malki, M., & Shaqrah, A. (2019). Analysis of gamification elements to explore misinformation sharing based on U&G theory: A software engineering perspective. International Journal of Software Engineering & Applications (IJSEA), 10 (4), 1-8.
Mayer, R. E. (2009). Multimedia learning (2nd ed.). Cambridge University Press.
McGuire, W. J. (1964). Inducing resistance against persuasion: Some contemporary approaches. Advances in Experimental Social Psychology, 1, 191-229. https://doi.org/10.1016/S0065-2601(08)60052-0
McGuire, W. J., & Papageorgis, D. (1961). Resistance to persuasion conferred by active and passive prior refutation of the same and alternative counterarguments. The Journal of Abnormal and Social Psychology, 63 (2), 326-332. https://psycnet.apa.org/doi/10.1037/h0048344
McGuire, W.J., & Papageorgis, D. (1962). Effectiveness of forewarning in developing resistance to persuasion. Public Opinion Quarterly, 26 (1), 24–34. https://doi.org/10.1086/267068
McLoughlin, C., & Lee, M. (2011). Pedagogy 2.0: Critical challenges and responses to Web 2.0 and social software in tertiary teaching. In M. J. W. Lee, & McLoughlin, C. (Eds.). Web 2.0-Based E-Learning: Applying Social Informatics for Tertiary Teaching. IGI Global.
Médecins sans Frontières. (2020). COVID Challenge: Quiz game by MSF [Video game]. Pixel Impact.
Montagni, I., Mabchour, I., & Tzourio, C. (2020). Digital gamification to enhance vaccine knowledge and uptake: Scoping review. JMIR Serious Games, 8 (2), e16983. https://doi.org/10.2196/16983
Ohannessian, R., Yaghobian, S., Verger, P., & Vanhems, P. (2016). A systematic review of serious video games used for vaccination. Vaccine, 34 (38), 4478–4483. https://doi.org/10.1016/j.vaccine.2016.07.048
Palan, S., & Schitter, C. (2018). Prolific. ac—A subject pool for online experiments. Journal of Behavioral and Experimental Finance, 17, 22-27. https://doi.org/10.1016/j.jbef.2017.12.004
Pian, W., Chi, J., & Ma, F. (2021). The causes, impacts and countermeasures of COVID-19 “infodemic”: A systematic review using narrative synthesis. Information Processing & Management, 58, 102713. https://doi.org/10.1016/j.ipm.2021.102713
Rains, S.A., Leroy, G., Echo, W.L., & Harber, P. (2021). Psycholinguistic markers of COVID-19 conspiracy tweets and predictors of tweet dissemination. Health Communication. https://doi.org/10.1080/10410236.2021.1929691
R Core Team (2021). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. https://www.R-project.org/.
Rea, S. C. (2022). Teaching and confronting digital extremism: contexts, challenges and opportunities. Information and Learning Sciences, 123 (1/2), 7-25. https://doi.org/10.1108/ILS-08-2021-0065
Roozenbeek, J., & van der Linden, S. (2019). Fake news game confers psychological resistance against online misinformation. Palgrave Communications, 5 (1), 1–10. https://doi.org/10.1057/s41599-019-0279-9
Roozenbeek, J., & van der Linden, S. (2020). Breaking Harmony Square: A game that “inoculates” against political misinformation. Harvard Kennedy School Misinformation Review. https://doi.org/10.37016/mr-2020-47
Schreiner, M., Fischer, T., & Riedl, R. (2021). Impact of content characteristics and emotion on behavioral engagement in social media: literature review and research agenda. Electronic Commerce Research 21, 329-354. https://doi.org/10.1007/s10660-019-09353-8
Veletsianos, G., Houlden, S., Hodson, J., Thompson, C.P., & Reid, D. (2022a). An evaluation of a microlearning intervention to limit COVID-19 online misinformation. Journal of Formative Design in Learning, 6 (1), 13-24. https://doi.org/10.1007/s41686-022-00067-z
Veletsianos, G., Houlden, S., Reid, D., Hodson, J., & Thompson, C.P. (2022b). Design principles for an educational intervention into online vaccine misinformation. TechTrends, 66 (5), 748-759. https://doi.org/10.1007/s11528-022-00755-4
Wang, F., & Hannafin, M. J. (2005). Design-based research and technology-enhanced learning environments. Educational Technology Research and Development, 53 (4), 5–23. https://doi.org/10.1007/BF02504682
Weyrich, P., Ruin, I., Terti, G., & Scolobig, A. (2021). Using serious games to evaluate the potential of social media information in early warning disaster management. International Journal of Disaster Risk Reduction, 56, 102053.
World Health Organization. (2020, February 2). Novel Coronavirus (2019-nCoV): Situation report - 13. Retrieved January 20, 2022, from https://www.who.int/docs/default-source/coronaviruse/situation-reports/20200202-sitrep-13-ncov-v3.pdf
Yang, S., Lee, J. W., Kim, H.-J., Kang, M., Chong, E., & Kim, E. (2021). Can an online educational game contribute to developing information literate citizens? Computers & Education, 161, 104057. https://doi.org/10.1016/j.compedu.2020.104057
Yaraghi, N. (2019, April 9). How should social media platforms combat misinformation and hate speech? Brookings, Techtank. Retrieved February 4, 2022, from https://www.brookings.edu/blog/techtank/2019/04/09/how-should-social-media-platforms-combat-misinformation-and-hate-speech/
Zhang, J., & West, R.E. (2019). Designing microlearning instruction for professional development through a competency based approach. TechTrends 64, 310-318. https://doi.org/10.1007/s11528-019-00449-4