Abstract

Responding to calls to take a more active role in communicating their research findings, scientists are increasingly using open online platforms, such as Twitter, to engage in science communication or to publicize their work. Given the ease with which misinformation spreads on these platforms, it is important for scientists to present their findings in a manner that appears credible. To examine the extent to which the online presentation of science information relates to its perceived credibility, we designed and conducted two surveys on Amazon's Mechanical Turk. In the first survey, participants rated the credibility of science information presented on Twitter compared with the same information presented in other media; in the second, participants rated the credibility of tweets with modified characteristics: the presence of an image, text sentiment, and the number of likes/retweets. We find that similar information about scientific findings is perceived as less credible when presented on Twitter than on other platforms, and that perceived credibility increases when the information is presented with recognizable features of a scientific article. On a platform as widely distrusted as Twitter, the use of these features may allow researchers who regularly use Twitter for research-related networking and communication to present their findings in the most credible format.


Authors

Boothby, Clara;  Murray, Dakota;  Waggy, Anna Polovick;  Tsou, Andrew;  Sugimoto, Cassidy R.

  • pre-publication peer review (FINAL ROUND)
    Decision Letter
    2021/08/01

    01-Aug-2021

    Dear Ms. Boothby:

    It is a pleasure to accept your manuscript entitled "Credibility of Scientific Information on Social Media: Variation by Platform, Genre and Presence of Formal Credibility Cues" for publication in Quantitative Science Studies. All reviewers are positive about your revised manuscript. One of the reviewers (reviewer 3 of the original version of your manuscript) still has a very small suggestion, which can be found at the bottom of this message.

    I would like to request you to prepare the final version of your manuscript using the checklist available at https://tinyurl.com/qsschecklist. Please also sign the publication agreement, which can be downloaded from https://tinyurl.com/qssagreement. The final version of your manuscript, along with the completed checklist and the signed publication agreement, can be returned to qss@issi-society.org.

    Thank you for your contribution. On behalf of the Editors of Quantitative Science Studies, I look forward to your continued contributions to the journal.

    Best wishes,
    Dr. Ludo Waltman
    Editor, Quantitative Science Studies
    qss@issi-society.org

    Reviewers' Comments to Author:

    Reviewer: 1

    Comments to the Author
    I applaud the authors' efforts to revise this manuscript. I believe this paper now meets the criteria to be published in QSS. My only suggestion is that the Discussion section is quite long, so it might be helpful to divide it into subsections.

    Author Response
    2021/04/28

    Revision Comments and Responses for “Credibility of Scientific Information on Social Media: Variation by Platform, Genre and Presence of Formal Credibility Cues”

    Please note that author responses are preceded by indents for improved legibility.

    Reviewer 1: Comments to the Author
    This paper was interesting in its use of AMT to collect information about the perceived credibility of scientific information on various online platforms, with a specific emphasis on Twitter. I thought the paper was well written, logical, and utilized proper methodology.

    We thank the reviewers for these comments.

    However, the data was collected in 2015 and the newest reference was from 2018. I know that a lot of work on Twitter credibility (in general) has been done during this time frame, and I would urge the authors to take a look at other credibility studies to see if there have been changes in user perception of tweets across time. For instance, the authors cite an article from 2012 at the beginning of the article that suggests information on Twitter is less credible than other platforms... has this attitude changed in 8 years? I could argue that Turkers in 2015 had a different mindset toward tweets than users today, and this isn't discussed enough.

    Thank you for pointing this out. We have conducted another literature search on this topic and have added the following more recent studies of credibility and social media, with a focus on Twitter:
    • Bode, L., Vraga, E. K., & Tully, M. (2020). Correcting Misperceptions About Genetically Modified Food on Social Media: Examining the Impact of Experts, Social Media Heuristics, and the Gateway Belief Model. Science Communication, 1075547020981375. https://doi.org/10.1177/1075547020981375
    • Borah, P., & Xiao, X. (2018). The Importance of ‘Likes’: The Interplay of Message Framing, Source, and Social Endorsement on Credibility Perceptions of Health Information on Facebook. Journal of Health Communication, 23(4), 399–411. https://doi.org/10.1080/10810730.2018.1455770
    • Nadarevic, L., Reber, R., Helmecke, A. J., & Köse, D. (2020). Perceived truth of statements and simulated social media postings: An experimental investigation of source credibility, repeated exposure, and presentation format. Cognitive Research: Principles and Implications, 5(1), 56. https://doi.org/10.1186/s41235-020-00251-4
    • Smith, C. N., & Seitz, H. H. (2019). Correcting Misinformation About Neuroscience via Social Media. Science Communication, 41(6), 790–819. https://doi.org/10.1177/1075547019890073
    • Anderson, A. A., & Huntington, H. E. (2017). Social Media, Science, and Attack Discourse: How Twitter Discussions of Climate Change Use Sarcasm and Incivility. Science Communication, 39(5), 598–620. https://doi.org/10.1177/1075547017735113
    While we could not find many studies directly comparing the credibility of Twitter to other news and social media platforms, we argue that Twitter's poor reputation as a source of reliable information is supported by the extensive attention given to misinformation on the platform in the literature; we have noted this in lines 43-46: "Furthermore, this reputation is often accepted as a premise in the strong body of recent literature devoted to assessing the spread of misinformation, hostility, or bot-like behavior on Twitter (Anderson & Huntington, 2018; Robinson-Garcia et al., 2017; Shao et al., 2018; Vosoughi et al., 2018) and to containing and correcting misinformation (Bode et al., 2020; Smith & Seitz 2019)." We also acknowledge the potential changes in the attitudes of Twitter users between 2015 and the present as a limitation in lines 449-450.

    In addition, the authors seem not to take into account the user profile of the tweets being displayed (they talked about placing a gray box over profile information). I may consider a tweet to be more credible if I see that it comes from Bill Gates than from some tweeter I have never heard of. Also, a tweeter who has 5 followers vs. a tweeter who has 5k followers may cause me to interpret the material as more credible. Does this impact my perceived credibility of the information? I don't believe this makes the data invalid; however, this does need to be discussed a bit more. Not fully discussing the potential impacts a user profile might have on the perceived credibility is a shortcoming of this study, in my opinion. A tweet is not only the language/photos/links shared; it is packaged within a context that contains the user who tweeted it. This may have a real impact on perceived credibility. In addition, there may be a thread of tweets associated with the original being viewed, and this also lends to perceived credibility (either positively or negatively). I just think some more discussion about this is warranted.

    We agree that the identity of the poster will have a sizeable effect on the credibility of information on Twitter; in fact, this understanding is the main reason why we chose to remove the influence of the tweeters' identities from the study. A fascinating and frustrating issue in studying Twitter is that the information readers encounter is often curated through the accounts they follow, whom they may already consider trustworthy. We considered this carefully during the study design and interpretation and ultimately chose to resolve the matter in the following way: since we know nothing about the following behavior of the participants, and because users' estimations of the credibility of statements from people they follow are biased, we chose instead to study how users encounter information from strangers rather than from people they know. We are more concerned with information encountered during random browsing: when a Twitter user encounters science information disconnected from a known source, what formal features do they rely on to determine its trustworthiness? We have added text to better state our focus, and address your concern, in our discussion and limitations sections. Please see lines 59-63, lines 117-120, and lines 443-449.

    Reviewer 2: Comments to the Author
    The manuscript "Credibility of Scientific Information on Social Media: Variation by Platform and Presence of Formal Credibility Cues" presents the results of a survey study regarding the perceived credibility of scientific information that is conveyed via different channels: (i) abstract of a scientific paper, (ii) news article, (iii) blog post, (iv) video, and (v) Twitter. The scientific abstract was found to have the highest perceived credibility, Twitter the lowest, and no significant differences were found between the other channels. Most strikingly, and somewhat disturbingly, a scientific abstract was also perceived as most credible even when the letters were not readable because of poor screenshot quality.

    Overall, the manuscript is well written and should be of interest to the readers of QSS. After a few improvements, the manuscript should be publishable.

    Thank you. We appreciate that you feel this paper will be of interest to QSS readers, and we are committed to making improvements to our research.

    Line 102/103: Minor typo in: "Our sample was drawn from a s single field-specific journal ... ."

    This has been corrected.

    Fig. 2: The retweet and favorite counts are not readable. The numbers should be mentioned if the readability can't be improved.

    Good catch, thank you. We have increased the size of the retweet and favorite counts in the appropriate row, and we have also noted the number of likes and retweets in the caption.

    Lines 112/113: Minor typo in: "Each tweet contained a URL to its corresponding journal article as it common for science tweets ... ."

    This has been corrected.

    Plural/singular mixture below Fig. 3: "Once the respondent accepted a HIT, they were presented ... ."

    We have opted to use the gender-neutral singular “they” in line with best practices for the inclusion of nonbinary individuals.

    Fig. 4: The label "platform" should read "medium" in panel a, analogously to the headings of panels b and c. The same applies to Table S2.

    Thanks for pointing this out; we have resolved the inconsistency. We chose instead to change "Medium" in panels b and c to "Platform," to be consistent with the use of "platform" in the rest of the manuscript.

    Minor typo in line 283: "... associates degrees e or lower (“Associates-“),. A respondent being ... ."

    This has been corrected.

    I appreciate that the authors share their R scripts along with the manuscript. The authors load several packages. The R packages and R itself should be formally cited in the manuscript.

    Thank you for this reminder about best practices. We have cited R and RStudio along with lme4.
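    For reference, this citation information can be generated from within R itself; a minimal sketch (the BibTeX conversion is optional and shown only for illustration):

        # Official citation for R itself
        citation()

        # Citation for an individual package, e.g. lme4
        citation("lme4")

        # Optionally, convert a citation to BibTeX for a reference manager
        toBibtex(citation("lme4"))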

    I think that there is a closing parenthesis too many in the file credibility_survey1_notebook.rmd in line 354.

    Thank you! This was a typo that was somehow introduced before we merged our code. We have fixed the error and updated our repository.

    Reviewer 3: Comments to the Author
    This manuscript, entitled "Credibility of Scientific Information on Social Media: Variation by Platform and Presence of Formal Credibility Cues," offers a quantitative examination of the credibility of scientific information on various social media platforms/mediums. The authors examined the credibility differences across mediums as well as which textual and visual features could influence credibility on Twitter. The topic of this paper is certainly relevant to one of the biggest challenges science is facing: the credibility crisis. I find this paper to be very well designed and certainly well written. However, I have some concerns that I hope the authors can address in the next revision.

    Thank you for your helpful comments. We hope that by addressing them, we have improved this manuscript.

    Relative major issues

    I feel there might be other, simpler explanations for the results due to the way this research is designed. The primary reason is that in the first experiment, the authors only used screenshots for all media/platforms (except for the video), which is a rather poor representation of the platforms the authors claim to study. So there are two issues here: the content and the functionality. In terms of the content, I can only see the title and the first paragraph of the news article from the screenshot, and the screenshot of the blog post is quite similar. For the functionality, I would argue that the media is always beyond the screenshot: videos can be played (and certainly this was treated differently in this study), while articles and blog posts can be read through, and there are links in the item that can further supplement the persuasiveness of the genre. But after such functions are removed from this experiment, I feel the idea of media or platform is reduced to something much simpler, possibly the amount of information available to the participants. This seems to be a rather clear explanation of the results: the more information the audience can get from the screenshot, the more credible the item is. But I don't think this is the conclusion that is supposed to be drawn from this research.

    Thank you for this interesting point. In the previous manuscript, we had recognized the vast differences among the part 1 media types in the paragraph discussing limitations to part one on page 25. In addition, we have now added more targeted language to acknowledge that the different media formats also contain different amounts of information, in lines 425-430.
    We also note that our findings do not necessarily indicate that additional information on its own resulted in increased credibility. The blurred abstract, which actually contained little legible information, was also rated as credible as the non-blurred abstracts, giving credence to the notion that the form, not necessarily the content, is important when judging credibility.

    I am curious why the authors decided to use “platform” in the title? Like the authors acknowledged, what they compared are mediums/genres instead of platforms. Can these two terms be used in the title instead?

    We selected the term "platform" due to our focus on Twitter, which is best described as a platform. We recognize the challenges of conflating media with platforms, particularly as this study looks at variable credibility in the context of both media (a more general form of content that may appear on different hosting sites) and platforms (which are associated with a particular proprietary hosting website, like Twitter). We will include "genre" in the title to better reflect this.

    After reading the article, I have the feeling that there is a lack of theoretical implications in this work, which is unfortunate. One broad area this study could have had more conversations with is the public understanding of science, where issues like the credibility of scientific evidence are a core topic. One review of this field that may be useful to this paper is as follows:
    - Bucchi, M., & Trench, B. (2016). Science communication and science in society: A conceptual review in ten keywords. Tecnoscienza (Italian Journal of Science & Technology Studies), 7(2), 151-168.

    We appreciate the recommendation; Bucchi's work has been very helpful, and we have engaged with this and several other references in the first paragraph of the discussion to better situate our work in the field of science communication and make stronger connections to theoretical implications.

    Another part of the paper that can be supplemented by sociological theories is the general conclusion that visual elements increase the credibility of scientific evidence. This is an important argument in the Latourian tradition of STS, stressing that visuality significantly contributes to knowledge production, including the persuasiveness of the outputs. One good source for this argument is this:
    - Latour, B. (1990). Drawing things together. In M. Lynch & S. Woolgar (Eds.), Representation in scientific practice. Cambridge, Massachusetts: MIT Press.

    We agree that the connection to Latour is appropriate and have included a short discussion of Latour's concept of visual signifiers in the discussion in lines 383-388. We have also drawn in Tal & Wansink's work on the effects of images, as scientific signifiers, on credibility as additional support.

    Minor issues

    Both “general-use social media” and “general use social media” are used.

    Done. We have used the hyphenated version throughout, to indicate that "general-use" is a compound adjective.

    On page 5 line 77, I don’t think the full name + abbreviation needs to be given again, since it was already introduced in the previous paragraph.

    This makes sense, but we personally find it useful to re-introduce the full name + abbreviation after each IMRAD heading, accommodating readers who may jump to the middle of the paper or otherwise skip around.

    In section 2.3, I am wondering if the missing value situation applies to both experiments?

    Thank you for drawing attention to this. Yes, this situation is true for both regressions, though fewer records were removed in the second analysis than in the first. We hope that the edited language makes this clear.
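    To illustrate the filtering step described above, a minimal sketch in R (the data frame and column names here are hypothetical, for illustration only, and are not those of our actual scripts):

        library(lme4)

        # Keep only records with no missing values in the modeled variables;
        # the same step applies before both regressions.
        complete <- na.omit(responses[, c("credibility", "medium", "respondent_id")])

        # Mixed-effects regression with a random intercept per respondent
        model <- lmer(credibility ~ medium + (1 | respondent_id), data = complete)
        summary(model)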

    On page 14 line 215, since you used R, you should cite it: there is an official citation format for R.

    Thank you for this reminder about best practices. We have cited R and RStudio along with lme4.



  • pre-publication peer review (ROUND 1)
    Decision Letter
    2021/01/13

    13-Jan-2021

    Dear Ms. Boothby:

    Your manuscript QSS-2020-0089 entitled "Credibility of Scientific Information on Social Media: Variation by Platform and Presence of Formal Credibility Cues", which you submitted to Quantitative Science Studies, has been reviewed. The comments of the reviewers are included at the bottom of this letter.

    In general the reviewers are positive about your manuscript, but they also identify some issues that require further attention. My editorial decision therefore is to invite you to prepare a revision of your manuscript.

    To revise your manuscript, log into https://mc.manuscriptcentral.com/qss and enter your Author Center, where you will find your manuscript title listed under "Manuscripts with Decisions." Under "Actions," click on "Create a Revision." Your manuscript number has been appended to denote a revision.

    You may also click the below link to start the revision process (or continue the process if you have already started your revision) for your manuscript. If you use the below link you will not be required to login to ScholarOne Manuscripts.

    PLEASE NOTE: This is a two-step process. After clicking on the link, you will be directed to a webpage to confirm.

    https://mc.manuscriptcentral.com/qss?URL_MASK=d5c3db9e3ec84486a7c59be62ce41785

    You will be unable to make your revisions on the originally submitted version of the manuscript. Instead, revise your manuscript using a word processing program and save it on your computer. Please also highlight the changes to your manuscript within the document by using the track changes mode in MS Word or by using bold or colored text.

    Once the revised manuscript is prepared, you can upload it and submit it through your Author Center.

    When submitting your revised manuscript, you will be able to respond to the comments made by the reviewers in the space provided. You can use this space to document any changes you make to the original manuscript. In order to expedite the processing of the revised manuscript, please be as specific as possible in your response to the reviewers.

    IMPORTANT: Your original files are available to you when you upload your revised manuscript. Please delete any redundant files before completing the submission.

    If possible, please try to submit your revised manuscript by 13-May-2021. Let me know if you need more time to revise your work.

    Once again, thank you for submitting your manuscript to Quantitative Science Studies and I look forward to receiving your revision.

    Best wishes,
    Dr. Ludo Waltman
    Editor, Quantitative Science Studies
    qss@issi-society.org

    Reviewers' Comments to Author:

    Reviewer: 1

    Comments to the Author
    This paper was interesting in its use of AMT to collect information about the perceived credibility of scientific information on various online platforms, with a specific emphasis on Twitter. I thought the paper was well written, logical, and utilized proper methodology. However, the data was collected in 2015 and the newest reference was from 2018. I know that a lot of work on Twitter credibility (in general) has been done during this time frame, and I would urge the authors to take a look at other credibility studies to see if there have been changes in user perception of tweets across time. For instance, the authors cite an article from 2012 at the beginning of the article that suggests information on Twitter is less credible than other platforms... has this attitude changed in 8 years? I could argue that Turkers in 2015 had a different mindset toward tweets than users today, and this isn't discussed enough. In addition, the authors seem not to take into account the user profile of the tweets being displayed (they talked about placing a gray box over profile information). I may consider a tweet to be more credible if I see that it comes from Bill Gates than from some tweeter I have never heard of. Also, a tweeter who has 5 followers vs. a tweeter who has 5k followers may cause me to interpret the material as more credible. Does this impact my perceived credibility of the information? I don't believe this makes the data invalid; however, this does need to be discussed a bit more. Not fully discussing the potential impacts a user profile might have on the perceived credibility is a shortcoming of this study, in my opinion. A tweet is not only the language/photos/links shared; it is packaged within a context that contains the user who tweeted it. This may have a real impact on perceived credibility. In addition, there may be a thread of tweets associated with the original being viewed, and this also lends to perceived credibility (either positively or negatively). I just think some more discussion about this is warranted.

    Reviewer: 2

    Comments to the Author
    The manuscript "Credibility of Scientific Information on Social Media: Variation by Platform and Presence of Formal Credibility Cues" presents the results of a survey study regarding the perceived credibility of scientific information that is conveyed via different channels: (i) abstract of a scientific paper, (ii) news article, (iii) blog post, (iv) video, and (v) Twitter. The scientific abstract was found to have the highest perceived credibility, Twitter the lowest, and no significant differences were found between the other channels. Most strikingly, and somewhat disturbingly, a scientific abstract was also perceived as most credible even when the letters were not readable because of poor screenshot quality.

    Overall, the manuscript is well written and should be of interest to the readers of QSS. After a few improvements, the manuscript should be publishable.

    Line 102/103: Minor typo in: "Our sample was drawn from a s single field-specific journal ... ."

    Fig. 2: The retweet and favorite counts are not readable. The numbers should be mentioned if the readability can't be improved.

    Lines 112/113: Minor typo in: "Each tweet contained a URL to its corresponding journal article as it common for science tweets ... ."

    Plural/singular mixture below Fig. 3: "Once the respondent accepted a HIT, they were presented ... ."

    Fig. 4: The label "platform" should read "medium" in panel a analogously to the headings of panels b and c. The same applies to Table S2.

    Minor typo in line 283: "... associates degrees e or lower (“Associates-“),. A respondent being ... ."

    I appreciate that the authors share their R scripts along with the manuscript. The authors load several packages. The R packages and R itself should be formally cited in the manuscript.

    I think that there is a closing parenthesis too many in the file credibility_survey1_notebook.rmd in line 354.

    Reviewer: 3

    Comments to the Author
    This manuscript, entitled "Credibility of Scientific Information on Social Media: Variation by Platform and Presence of Formal Credibility Cues," offers a quantitative examination of the credibility of scientific information on various social media platforms/mediums. The authors examined the credibility differences across mediums as well as which textual and visual features could influence credibility on Twitter. The topic of this paper is certainly relevant to one of the biggest challenges science is facing: the credibility crisis. I find this paper to be very well designed and certainly well written. However, I have some concerns that I hope the authors can address in the next revision.

    Relative major issues

    I feel there might be other, simpler explanations for the results due to the way this research is designed. The primary reason is that in the first experiment, the authors only used screenshots for all media/platforms (except for the video), which is a rather poor representation of the platforms the authors claim to study. So there are two issues here: the content and the functionality. In terms of the content, I can only see the title and the first paragraph of the news article from the screenshot, and the screenshot of the blog post is quite similar. For the functionality, I would argue that the media is always beyond the screenshot: videos can be played (and certainly this was treated differently in this study), while articles and blog posts can be read through, and there are links in the item that can further supplement the persuasiveness of the genre. But after such functions are removed from this experiment, I feel the idea of media or platform is reduced to something much simpler, possibly the amount of information available to the participants. This seems to be a rather clear explanation of the results: the more information the audience can get from the screenshot, the more credible the item is. But I don't think this is the conclusion that is supposed to be drawn from this research.

    I am curious why the authors decided to use “platform” in the title? Like the authors acknowledged, what they compared are mediums/genres instead of platforms. Can these two terms be used in the title instead?

    After reading the article, I have the feeling that there is a lack of theoretical implications in this work, which is unfortunate. One broad area this study could have had more conversations with is the public understanding of science, where issues like the credibility of scientific evidence are a core topic. One review of this field that may be useful to this paper is as follows:
    - Bucchi, M., & Trench, B. (2016). Science communication and science in society: A conceptual review in ten keywords. Tecnoscienza (Italian Journal of Science & Technology Studies), 7(2), 151-168.

    Another part of the paper that can be supplemented by sociological theories is the general conclusion that visual elements increase the credibility of scientific evidence. This is an important argument in the Latourian tradition of STS, stressing that visuality significantly contributes to knowledge production, including the persuasiveness of the outputs. One good source for this argument is this:
    - Latour, B. (1990). Drawing things together. In M. Lynch & S. Woolgar (Eds.), Representation in scientific practice. Cambridge, Massachusetts: MIT Press.

    Minor issues

    Both “general-use social media” and “general use social media” are used.

    On page 5 line 77, I don’t think the full name + abbreviation needs to be given again, since it was already introduced in the previous paragraph.

    In section 2.3, I am wondering if the missing value situation applies to both experiments?

    On page 14 line 215, since you used R, you should cite it: there is an official citation format for R.

All peer review content displayed here is covered by a Creative Commons CC BY 4.0 license.