Content of review 1, reviewed on February 16, 2024

This paper examines public attitudes towards learning techniques that may help people identify misinformation, highlighting that lack of perceived need and lack of trust are key potential barriers to uptake. Examining potential barriers to acceptance of, and voluntary engagement with, misinformation interventions is important, and I believe this paper has the potential to make a strong contribution to the literature. However, there are currently several aspects of the paper that I believe could be improved, along with several limitations/potential issues that should be considered and addressed. I have outlined my specific points in more detail below.

Major points:

1) Quite a few demographic variables were collected but only the results for age are reported in Experiment 1. It is noted that age was the most consistent difference, but it would be good to at least report the other demographic results in supplementary materials or similar. Indeed, given the introduction sets up political affiliation and trust as important moderators, perhaps those results (even if null) should be reported in the main body and other, less theoretically relevant, demographics (including age) could be placed in supplementary materials. I note that the trust in institutions results for Experiment 1 are briefly mentioned in the Experiment 2 results; please move these to the appropriate section.

2) The demographic breakdowns of the samples are very important for understanding and interpreting the results (particularly given the higher mean for watching videos from the Democratic Party than Republican Party). Can the authors please report the demographic details for both experiments within the body of the paper (e.g., mean age, gender breakdown, mean political ideology, vote intentions for Exp. 2).

3) Can the authors please provide more information about the pilot test that was conducted for Experiment 2. Ideally, please report the full results and trust ratings for all potential sources in supplementary materials. Relatedly, given the potential that different sources will be trusted differently by different groups (particularly based on political ideology), did the authors ensure that the sources chosen had similar trust ratings across political ideology or other important demographics?

4) Related to the above, it seems somewhat problematic to switch from “Ivy League University” in the pilot to “Harvard University”. People often have different attitudes/feelings towards abstract categories and specific concrete examples from within that category.

5) I also think the selection of the Russian Government as a low-trust source is problematic given the incredibly low plausibility of the Russian Government actually distributing such inoculation videos, and the fact that it is the only non-US-based source. It would have been much more beneficial to use all US-based sources that differed in their level of trust.

6) The data from Experiment 1 are used as a “no-source” baseline condition but I do not think this is an appropriate comparison.
In Experiment 1, when introducing the techniques to be inoculated against, the survey says “Cambridge researchers have identified 5 persuasion techniques that are associated with misinformation.”
Later when asking about watching the videos it says:
“Researchers have found that you can reduce susceptibility to misinformation by informing people about how they might be misinformed. This works by showing people a series of short training videos explaining common persuasive techniques used to spread misinformation and how to refute those techniques.
In the following, we want you to consider whether you and the average person would benefit from such training.”
Therefore, this setup seems very, very similar to the “Harvard University” source used in Experiment 2. Cambridge and Harvard are both elite academic institutions, and even if Cambridge isn’t re-mentioned in the second set of questions, it’s still comparing a “researchers” source (for which the default assumption is likely academic/university researchers) in the “no-source” condition with a “Harvard University” source in Experiment 2. It would have been much better to have just included a no-source control condition within Experiment 2, which also would have avoided the need to compare the results across different experiments. It is likely preferable to just compare the sources within Experiment 2, rather than using Experiment 1 as a baseline. Either way, this potential limitation should be clearly acknowledged and discussed.

7) Can the authors please make it clear within the text whether source was a between or within-subject variable (seems like within-subject?) and whether the presentation order of sources was randomised or counterbalanced in some way etc. (I see from the pre-reg it was randomised but this should be in the paper).

8) Why is trust in institutions treated as a categorical variable with 7 levels rather than a continuous (or ordinal) variable ranging from extremely untrustworthy to extremely trustworthy? Then the correlations between trust and likelihood of watching/benefit of watching could be calculated and reported for each source, which would be much easier to report and interpret than the multitude of ANOVAs and post hoc comparisons reported on page 8. If kept as categorical, please at least report all of the post hoc comparisons in tables for transparency and to make it easier for readers to follow.
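To illustrate the kind of analysis I have in mind, here is a rough sketch (the data, column names, and source labels are illustrative placeholders, not the authors' actual variables): treat the 1-7 trust rating as ordinal and compute a rank correlation with likelihood of watching, separately per source.

```python
# Rough sketch: Spearman correlation between 1-7 trust ratings and
# likelihood of watching, computed per source. All data and column
# names below are illustrative placeholders.
import pandas as pd
from scipy.stats import spearmanr

toy = pd.DataFrame({
    "source": ["Harvard"] * 5 + ["Meta"] * 5,
    "trust": [7, 6, 5, 3, 2, 6, 5, 4, 2, 1],            # 1-7 trust rating
    "likelihood_watch": [6, 6, 4, 3, 2, 5, 5, 3, 2, 1],  # 1-7 likelihood
})

for source, grp in toy.groupby("source"):
    rho, p = spearmanr(grp["trust"], grp["likelihood_watch"])
    print(f"{source}: Spearman rho = {rho:.2f}, p = {p:.3f}")
```

A table of one correlation per source along these lines would be far easier for readers to digest than the current set of ANOVAs and post hoc comparisons.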

9) I think the discussion would benefit greatly from some additional proofreading and editing. The first few paragraphs are difficult to follow in places. There are some run-on sentences and other errors.

10) I commend the authors for pre-registering the experiments and sharing the materials and data. If possible, it would be greatly beneficial if the authors could also share the analysis code (or syntax if conducted using software with a graphical interface) to enable easy replication of the results by reviewers and/or readers.

11) Finally, I request that the authors add a statement to the paper confirming whether, for all experiments, they have reported all measures, conditions, data exclusions, and how they determined their sample sizes. The authors should, of course, add any additional text to ensure the statement is accurate. This is the standard reviewer disclosure request endorsed by the Center for Open Science [see http://osf.io/hadz3]. I include it in every review.

Minor points:

1) Page 2, paragraphs 2-3 - There is ongoing debate about the efficacy of various inoculation techniques, such as whether they actually improve one’s ability to detect misinformation or instead impact response bias, making people more sceptical of both true and false items (Modirrousta-Galian & Higham, 2023). Additionally, much of the inoculation research has focused on improving detection of specific misleading techniques but whether inoculation actually improves misinformation discernment is less well established (Pennycook et al., 2024). These concerns should at least be noted within the relevant section.

2) Page 3, paragraph 3 – just because some people rate themselves more favorably than the average person, and this is associated with reduced vaccination, doesn’t necessarily mean that they are wrong. Some people are at less risk than others, so this may actually be somewhat rational (e.g., younger people and COVID-19/COVID-19 vaccination).

3) Pages 4 and 6 – please add links to the pre-registrations at the start of each results section. It might also be useful to add them where pre-reg is first mentioned (top of page 4).

4) I found it a little odd that the wording of the question for misinformation exposure was “How often do you think other people encounter information that they later find out is untrue or misleading?” whereas every other item asks about the “average person”. “Other people” could potentially be interpreted as a collective group rather than an individual, in which case it is not surprising that people would respond that others encounter misinformation more than they do as an individual. This isn’t a major problem, but it would be beneficial to note this potential issue somewhere in the manuscript.

5) Why have separate sentences for Cohen’s d? Why not just report it alongside the other results, e.g., “t(151) = −6.19, p < 0.001, d = 0.38”. It would also be good to add in 95% CIs for the reported effect sizes wherever possible.

6) Figures 1-5 – Please add 95% confidence intervals to all graphs (SE or SD could be used instead, but confidence intervals are strongly preferred).

7) Table 1 – It is described as Demographic variables but it is just Age? Better to label it as Age instead (although I recommend moving it to Supp).

8) Page 7, paragraph 3 – the statement “The partisan sources did not fit the same pattern.” doesn’t provide much detail about the actual results. The next section only looks at partisan sources separated out by political affiliation, so please also report the results for the partisan sources for the overall sample.

9) Page 8, paragraph 2 – this paragraph talks about trust in the different institutions but then ends by saying “This suggests that people’s belief in the trustworthiness of government overall moderates their willingness to engage with misinformation training videos across conditions.” This is very confusing. Please make it clear whether this paragraph is just about government trustworthiness or also about the others.

10) Page 8, paragraph 3 – I am pretty sure there is a mistake in this paragraph. It first reports ANOVA results for the high-trust source inoculation but then talks about the Democratic party in the post-hoc comparisons section. I think it was meant to say “Harvard University” but please check and verify everything is reported correctly (although as mentioned above, I think treating institutional trust as continuous or ordinal would be better).

11) Page 9, paragraph 3 – “This has implications for any future campaigns to spread inoculation interventions, because it means that the primary focus should be on younger age groups, though this pattern did not hold true in the second study so further research is needed.” This is a run-on sentence. Additionally, the results for age in Experiment 2 aren’t reported at all in the paper. Can these please be added in, ideally in supplementary materials alongside the Experiment 1 age results and other non-theoretically relevant demographic results.

12) Page 9, paragraph 7 – perhaps worth noting the difference in trust between academic researchers and social media/online platforms here. YouTube and Cambridge, Bristol, etc. are not identical to Meta and Harvard, but the parallels seem strong, so it would be worthwhile mentioning.

Typos and similar:
1) Page 5, paragraph 2 – “registered pool” should probably be “registered participant pool”
2) Page 6, paragraph 6 – “data will be evaluated” should be “data were analysed”
3) Page 7, paragraph 4 – “To test this, we conduct two one-way ANOVA tests were conducted” should be “To test this, two one-way ANOVA tests were conducted”
4) Discussion, sentence 2 – “This takes a step back from most inoculation studies, which explore the effect of inoculation given the intervention and ask if there are barriers to rolling out the interventions in the first place.” Should say “This takes a step back from most inoculation studies, which explore the effect of inoculation given the intervention, and instead asks if there are barriers to rolling out the interventions in the first place.”
5) Page 9, paragraph 8 – “see more accurate levels of uptake.” Should be “more accurately measure uptake”.

References mentioned in the review that are not already in the paper:

Modirrousta-Galian, A., & Higham, P. A. (2023). Gamified inoculation interventions do not improve discrimination between true and fake news: Reanalyzing existing research with receiver operating characteristic analysis. Journal of Experimental Psychology: General, 152(9), 2411–2437. https://doi.org/10.1037/xge0001395

Pennycook, G., Berinsky, A., Bhargava, P., Cole, R., Goldberg, B., Lewandowsky, S., & Rand, D. (2024). Misinformation inoculations must be boosted by accuracy prompts to improve judgments of truth. https://doi.org/10.31234/osf.io/5a9xq

    © 2024 the Reviewer.

Content of review 2, reviewed on April 03, 2024

I would like to thank the authors for being so receptive to my comments. I’d particularly like to thank them for providing the additional demographic and pilot-testing information that was requested (seeing the actual materials and results alleviated my concerns), and for conducting the additional requested analyses. I believe the updated version of the manuscript is greatly improved and makes a strong contribution to the literature. See below for a couple of minor edits that could be made during the proofing process, but I am happy to endorse the manuscript for publication as is. Congratulations on a great piece of work.

Minor/Typos:
Link for Study 1 pre-reg is to the overall OSF rather than pre-reg specifically. Not a big issue but if you want to link to the pre-registration directly the link should be https://osf.io/6d5gr instead.
Page 5, paragraph 2: “demographic effects was explored” should be “demographic effects were explored”
Page 5, paragraph 3: “prolific” should be capitalised (it is at the start of the paragraph but not when mentioned again halfway through).

    © 2024 the Reviewer.

References

    Alexandra, J., Koed, M. J. 2024. Inoculation hesitancy: an exploration of challenges in scaling inoculation theory. Royal Society Open Science.