Content of review 1, reviewed on April 19, 2022
Dear authors,
Below is my review of your work examining the relation between the widely publicized Trump COVID-19 diagnosis and subsequent updates of misperceptions about COVID-19. I hope that my comments are helpful for improving the manuscript.
The manuscript is written far too much in causal terms. This is already reflected in the title, which states that the effect of Trump receiving a COVID-19 diagnosis is examined. Obviously, this is a mere correlation. The introduction does not treat this issue in a nuanced way. It stands to reason that much was unknown during the COVID-19 pandemic, and misinformation or misperceptions may therefore be updated over time even in the absence of any prominent event. This issue applies to the whole manuscript. For instance, in the first paragraph of the general discussion it is again suggested that the effect of a manipulation was examined. This is incorrect. A single sentence at the end of the general discussion is insufficient to alleviate this issue. The correlational nature of the design needs to be clear throughout the manuscript in the language used (e.g., “correlates with”, “relates to”), and alternative explanations for the association need to be mentioned much more clearly, already in the introduction.
It is unclear which data were collected for which purpose. It is mentioned that the data were collected for a different purpose, but it remains very vague what that purpose was.
The sample size justification is not satisfactory. Although it is understandable that the sample size was not determined based on the present research question, and hence no power analysis was conducted for the present RQ, it would nevertheless be informative and important to at least give some indication of whether the sample size is large enough to answer the current research question. What effect size is reasonable to expect in these kinds of studies?
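For illustration, even a simple sensitivity analysis would tell readers what the smallest detectable effect is given the available sample. A minimal sketch along these lines (assuming a simple two-group comparison with hypothetical group sizes, which may well not match the actual design and measures) could look as follows:

    # Illustrative sensitivity analysis (hypothetical numbers, not the authors' design):
    # given the achieved group sizes, what is the smallest standardized effect
    # detectable with 80% power at alpha = .05 in a two-group comparison?
    from statsmodels.stats.power import TTestIndPower

    n_before, n_after = 500, 500  # placeholder Ns; substitute the actual sample sizes
    min_effect = TTestIndPower().solve_power(
        effect_size=None,            # solve for the minimum detectable effect
        nobs1=n_before,
        ratio=n_after / n_before,
        alpha=0.05,
        power=0.80,
        alternative="two-sided",
    )
    print(f"Minimum detectable effect size (Cohen's d): {min_effect:.2f}")

Reporting such a number, together with what effect sizes are typical in this literature, would let readers judge whether the study is adequately powered for the present question.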
Unfortunately, the studies were not preregistered. Therefore, it remains unclear which and how many decisions were made to select the appropriate measures and analyses. This is an important limitation that needs to be addressed.
Some of the measures were single-item. How valid and reliable are these measurements? Is there any documentation on this? This also requires discussion.
Some work suggesting that misinformation among the US general public was not very prominent may be relevant to consider in light of the current findings (Van Stekelenburg, A., Schaap, G., Veling, H., & Buijzen, M. (2021). Investigating and improving the accuracy of US citizens’ beliefs about the COVID-19 pandemic: longitudinal survey study. Journal of Medical Internet Research, 23(1), e24069.)
© 2022 the Reviewer.
Content of review 2, reviewed on June 22, 2022
Dear editor and authors,
In the revised manuscript it appears that the research questions and hypotheses have been adjusted or have been formulated during the revision process. I want to point out that this appears to be an example of HARKing (Kerr, 1998), and that this practice can be very problematic. In order to evaluate the evidential value of research findings based on null hypothesis testing, it is crucial to make a clear distinction between confirmatory and exploratory findings. When a clear, unambiguous hypothesis is formulated before conducting a study, together with a clear plan for the statistical analysis (including inclusion and exclusion criteria, the exact analyses, etcetera), and the hypothesis is confirmed, then the evidential value is much higher compared to a situation where the finding is observed after some exploration. (This is why preregistration is an interesting tool in the toolbox of researchers.) That is because there are many degrees of freedom and decisions that can be made during the processing and analysis of the data when data are explored (e.g., which RQs to examine, inclusion and exclusion criteria, the exact analyses to be performed). Both confirmatory and exploratory findings are of interest, but it should be very clear to readers which is which. As a result, it is very problematic to present findings as fully confirmatory when they are not.
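To illustrate why this distinction matters, consider a minimal simulation sketch (purely illustrative, using made-up null data, not the present study): when several outcomes or specifications are explored and the most favourable result is reported, the chance of at least one "significant" finding far exceeds the nominal 5% level.

    # Illustrative simulation: familywise false-positive rate when 10 outcomes
    # are explored on data with no true effect and the smallest p-value is reported.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(seed=1)
    n_sims, n_outcomes, n_per_group = 5000, 10, 100
    hits = 0
    for _ in range(n_sims):
        a = rng.normal(size=(n_outcomes, n_per_group))  # group 1, no true difference
        b = rng.normal(size=(n_outcomes, n_per_group))  # group 2, no true difference
        p_values = stats.ttest_ind(a, b, axis=1).pvalue
        hits += p_values.min() < 0.05
    print(f"Probability of at least one p < .05: {hits / n_sims:.2f}")  # roughly .40, not .05

This is exactly why findings that emerge from exploration should not be presented as if they were preregistered confirmatory tests.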
It appears that in the present revision hypotheses have been adjusted, and some have even been formulated that were not present in the original version. I do understand the comment by the action editor that the questions were vague. But I do not understand the current solution of turning these vague RQs into clear hypotheses. This may give uninformed readers the impression that the evidential value of the reported findings is much higher than it actually is.
So I hope that the editor and authors are willing to consider these arguments and come up with a solution to avoid possible misunderstandings about the strength of the evidence of the reported findings. Possible solutions that I can see are:
1) Revert back to the research questions, so that it becomes clear that they were of an exploratory nature (or that they were not formulated very clearly) rather than very specific and of a confirmatory nature. Note that it is acceptable to reformulate exploratory questions to make them clearer.
or
2) Acknowledge explicitly that in the previous version of this manuscript the research questions and hypotheses were not clearly formulated and that these were adjusted in the revision based on comments by the action editor (for instance in a footnote). This is transparent and informative to the educated reader.
For the same reason I do think it is good scientific practice to explain the context of the research. In my view, empirical quantitative science using frequentist statistics is not about storytelling but about clear and transparent reporting of findings, so that their credibility and evidential value can be evaluated as well as possible. But I can see that too much elaboration can be distracting, and shortening this in the current version is fine.
Apart from these issues, I think the authors handled my suggestions satisfactorily.
© 2022 the Reviewer.
References
Tanase, L.-M., Kerr, J., Freeman, A. L. J., & Schneider, C. R. (2022). COVID-19 risk perception and hoax beliefs in the US immediately before and after the announcement of President Trump's diagnosis. Royal Society Open Science.
