Abstract

The Matthew effect has become a standard concept in science studies and beyond to describe processes of cumulative advantage. Despite its wide success, a rigorous quantitative analysis of Merton’s original case for Matthew effects – the Nobel Prize – is still missing. This paper aims to fill this gap by exploring the causal effect of Nobel Prizes in Economics. Furthermore, we test another of Merton’s ideas: successful papers can draw attention to cited references, leading to a serial diffusion of ideas. Based on the complete Web of Science 1900–2011, we estimate the causal effects of Nobel Prizes relative to a synthetic control group constructed by combining different matching techniques. We find clear evidence for a Matthew effect on citation impacts, especially for papers published within five years before the award. Further, scholars from the focal field of the award are particularly receptive to the award signal. In contrast, we find no evidence that the Nobel Prize causes a serial diffusion of ideas: papers cited by future Nobel laureates do not gain in citation impact after the award.


Authors

Rudolf Farys;  Tobias Wolbring


Contributors on Publons
  • 2 reviewers
  • pre-publication peer review (FINAL ROUND)
    Decision Letter
    2021/02/15

    15-Feb-2021

    Dear Prof. Wolbring:

    It is a pleasure to accept your manuscript entitled "Matthew Effects in Science and the Serial Diffusion of Ideas: Testing Old Ideas with New Methods" for publication in Quantitative Science Studies. All three reviewers have recommended acceptance of your manuscript.

    I would like to request you to prepare the final version of your manuscript using the checklist available at https://bit.ly/2QW3uV5. Please also sign the publication agreement, which can be downloaded from https://bit.ly/2QYuW4w. The final version of your manuscript, along with the completed checklist and the signed publication agreement, can be returned to qss@issi-society.org.

    Thank you for your contribution. On behalf of the Editors of Quantitative Science Studies, I look forward to your continued contributions to the journal.

    Best wishes,
    Dr. Ludo Waltman
    Editor, Quantitative Science Studies
    qss@issi-society.org

    Reviewers' Comments to Author:

    Reviewer: 1

    Comments to the Author
    (There are no comments.)

    Reviewer: 2

    Comments to the Author
    (There are no comments.)

    Reviewer: 3

    Comments to the Author
    I am happy with the revisions made by the authors. This is a nice piece of work, which I recommend for publication in Quantitative Science Studies.

    Reviewer report
    2021/02/15

    I am happy with the revisions made by the authors. This is a nice piece of work, which I recommend for publication in Quantitative Science Studies.

    Author Response
    2020/12/16

    Revision Report for Manuscript QSS 2020-0076

    Dear Ludo Waltman, dear reviewers,

Thanks a lot for your efforts in improving the quality of the paper. We took all the feedback very seriously, substantially revised the manuscript and hope to have addressed all your concerns and recommendations. Many of your comments turned out to be extremely helpful. Below we explain, for each point, how we addressed it in the course of the revision; in the manuscript itself we tracked all changes.

    All best,
    Rudolf Farys & Tobias Wolbring

    Reviewer 1

    R1: I miss some references throughout the whole manuscript. The authors have included many placeholders.

    Authors: We had originally prepared a blinded manuscript and then forgot to plug these references to our own work back in. Thanks!

    R1: P2, L44: The authors write that “to the best of our knowledge, no study exists which provides a rigorous analysis of the causal effect of Nobel Prize reception on the accumulation of further citations for a group of laureates”. However, there exist studies investigating causal effects with respect to the Matthew effect. The results of these studies should be reported in the manuscript (and discussed against the backdrop of the authors’ own findings).

    Authors: We now report results from these studies in the introduction and discuss our own contribution in the conclusion section against the backdrop of these studies.

    R1: P3, L58: It is no longer Thomson Reuters’ Web of Science, but Clarivate Analytics.

    Authors: Fixed throughout the manuscript

    R1: P4, L26: It is not clear what is meant by “coverage” here. Also, a sample size of 23 seems to be low (and not “sufficient” as the authors write). It should be no problem at all to use WoS data from 1980 onwards. Thus, I encourage the authors to extend the database of the study.

    Authors: Thanks, that passage was indeed confusing. We rewrote it. What we wanted to say is that it is problematic to study earlier Nobel laureates in economics because of coverage problems for their work in the WoS. For the laureates 2000-2010 we use citation data going back to 1900. We also clarified that the sample size of the treatment group is not 23 laureates but 184 publications.

    R1: P6, L38: There are many different matching procedures available (e.g., entropy balancing and inverse probability weighting). Why did the authors decide to use CEM?

    Authors: We now give a more detailed motivation of our matching approach in the methods section. Unlike propensity score matching, CEM ensures that imbalances in covariates between matched observations from the treatment and control group do not exceed a threshold level defined ex ante by the specified coarsening of variables. CEM offers a good trade-off between bias reduction and the curse of dimensionality when matching on variables with numerous values. In our setup, CEM ensures that entropy balancing cannot use papers as controls if they exceed these predefined thresholds: EB tries to balance the pre-Nobel citation path by reweighting control papers, but it may not use papers for this purpose if they are, for instance, from a different publication period or from a different citation percentile.
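    The coarsened exact matching step described above can be sketched in a few lines of Python. This is a minimal illustration under hypothetical toy data and cutpoints, not the authors’ actual pipeline: each covariate is coarsened into bins defined ex ante, and a control paper is retained only if its coarsened covariate profile matches that of at least one treated paper.

    ```python
    def coarsen(value, cutpoints):
        """Map a value to the index of its coarse bin (cutpoints ascending)."""
        for i, cut in enumerate(cutpoints):
            if value < cut:
                return i
        return len(cutpoints)

    def cem_match(treated, controls, cutpoints_by_var):
        """Coarsened exact matching: keep only controls whose coarsened
        covariate profile (stratum) matches at least one treated unit."""
        def stratum(unit):
            return tuple(coarsen(unit[var], cuts)
                         for var, cuts in cutpoints_by_var.items())
        treated_strata = {stratum(u) for u in treated}
        return [u for u in controls if stratum(u) in treated_strata]

    # Hypothetical toy data: papers described by publication year and
    # cumulative pre-award citations (names and numbers are made up).
    treated = [{"year": 1995, "cites": 300}]
    controls = [
        {"year": 1994, "cites": 280},  # same year and citation bin -> kept
        {"year": 1995, "cites": 5},    # citation bin differs -> dropped
        {"year": 1970, "cites": 310},  # year bin differs -> dropped
    ]
    cuts = {"year": [1980, 1990, 2000], "cites": [10, 100, 1000]}
    matched = cem_match(treated, controls, cuts)
    ```

    In the paper’s application the coarsening would instead be defined over publication year, WoS subject category and pre-award citation percentiles; the point here is only the mechanic that controls exceeding the ex-ante thresholds are excluded before any reweighting takes place.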

    R1: P6, L33: The authors decided to use a very limited set of variables for matching (field, publication year, cumulative citations). They could improve the matching by considering further variables, such as the number of co-authors, number of cited references, and number of pages. These are variables which might influence the number of citations a paper receives. In my experience (and it is generally recommended that) matching should include as many variables as possible.

    Authors: Generally, we agree that the matching literature suggests a “more is better” approach. As we explain in the paper, we do not include these additional variables for two reasons. First, we might run into the curse of dimensionality, because Nobel publications are highly selective, leaving only a small set of potential controls to begin with. Second, and more importantly, we match on citation paths before Nobel receipt. Our approach thus captures these and other (sometimes unknown) factors influencing citation impacts (see also Abadie 2020).

    Reviewer 2

    Comments to the Author
    The manuscript examines the so-called “Matthew effect” for a set of papers linked to laureates of the Bank of Sweden Award in Economic Sciences in Honor of Alfred Nobel - often mistakenly referred to as the “Nobel Prize in Economics” - compared to a set of so-called “synthetic controls”. The authors motivate their study by claiming that hitherto no “rigorous quantitative analysis for Merton’s original case for Matthew effects - Nobel Prizes” has been done, and consequently that their study fills this gap by “exploring the causal effect of Nobel Prizes in Economics”. Second, the study also examines claims for so-called “serial diffusion of ideas”, i.e. potential spillover effects of getting a prize on publications cited by the laureates. The study finds “clear evidence for a Matthew effect upon citation impacts” for the papers linked to the laureates, and no evidence for “serial diffusion”. The authors substantiate their findings by arguing that such “boosts” in citations to older papers when linked to a prize are essentially “caused” by ceremonial reference behaviors.

    R2: While I generally don’t think that “novelty” should play a decisive role in reviews, I do think that the present manuscript to some extent propagates what we already know, and unfortunately in a restricted way.
    Authors: The paper makes at least two main contributions to the literature. First, to the best of our knowledge, this is the first study investigating the effects of a Nobel Prize upon citation impacts with modern methods of causal inference. While our findings are not groundbreaking, we still believe that this is a contribution. Second, the matching approaches we use are likely of interest to the bibliometric community.

    R2: Given our knowledge of cumulative effects and reference (citing) behavior, especially in the soft sciences, it is not surprising that a shock effect, such as a prize, results in a sudden increase of citations to presumably linked papers, and that such immediate attention most probably can be ascribed to ceremonial and persuasive citing behavior. Such shock effects, however, are not necessarily equivalent to a Matthew effect, and indeed such cumulative advantages have been a challenge to quantify, though easier to declare in a qualitative sense.
    What is missing is a more proper analysis and discussion of these effects - what are they?
    [….]
    Despite my reservations about the main analyses, I think the results, especially for the first analysis, are quite clear (and not surprising). A suggestion though would be to reframe what is actually examined in relation to the above remarks about the Matthew effect, careers etc.

    Authors: Thanks, we now added a new section discussing the role of citations in science and introducing normative and constructivist theories of citations. We are not certain what is meant by shock effects, Matthew effects and cumulative advantages, but following Zuckerman (2011; “The Matthew Effect Writ Large and Larger”) we interpret Matthew effects as a special form of cumulative advantage, and we think what we empirically show are Matthew effects. The conclusion section now also highlights a related limitation, namely that we cannot distinguish the direct effect of the Nobel Prize on citations from indirect effects, caused by further cumulative advantages such as awards, funding and memberships, on future citations.

    R2: What would also have been interesting to see is to what extent such shock effects were comparable across other areas, e.g., the real Nobel Prizes, the Turing Award, the Fields Medal or whatever. I think it is important because it is questionable whether the prize itself “causes” the presumed Matthew effect - it may already be in play?
    [….]
    It is unfortunate that the study is not comparative with at least one other field, because an obvious question is to what extent the field of economics and its recipients compare to physics, chemistry or physiology or whatever.
    Are recipients generally younger and still pursuing their careers? And would this mean that the shock effect actually turns into a permanent Matthew effect in economics but perhaps not in other fields? Are citing practices comparable? If not, what consequences would that have? The authors acknowledge the very limited generalizability of their findings, albeit not in the abstract, but at the end of the manuscript in typically vague formulations. So be it.
    Nevertheless, the discussion section seems very shallow given the readers of QSS. This might as well have been written in a few motivating sentences at the beginning of the analysis. We do know that a large part of references given are persuasive. We would therefore expect a boost in selective citing when someone is lifted up on a pedestal by being given the prize - the argument here is that it is actually the persons you want to be associated with through the reference to their work. All this seems evident. And so do the consequences. Therefore, this could/should have been predicted to begin with.

    Authors: We agree with you that this would be interesting, and we already say so in the discussion section. However, such a comparative analysis is beyond the scope of this paper, which the editor also considers acceptable.

    R2: I am therefore somewhat uneasy with the way the authors treat the Matthew effect. If my recollection of Merton and Zuckerman is not very wrong, what they derived from Zuckerman’s interviews of real Nobel laureates affiliated to American institutions was that the laureates had very advantageous career opportunities prior to being awarded the Prize. As such, the Prize itself was “merely” the “crowning achievement”. The cumulative advantages or the Matthew effects were already in place and had been for a long time when the prize was received. Initially, real Nobel Prizes were typically given for discoveries made a few years earlier. But steadily the gap between discoveries and receiving the prize grew wider, so that for many years now, recipients - at least for the real Nobel Prizes - are often of considerable age (there are exceptions of course, especially in physics) and practically in retirement.
    So to examine Matthew effects one should take the career perspective and not only or simply citations to papers after a prize is given. While it is most likely that such prizes bring even more attention (but perhaps only temporarily?), one would assume that on average they come on top of an already existing cumulative advantage.
    The questions for the present study are to what extent such effects are already in place, whether the “synthetic control set” takes care of them, and/or to what extent the citation boost is an amplifying (double) effect?

    Authors: Yes, who receives a Nobel Prize is highly selective; we try to take this into account through our matching procedure. As Merton’s (1968) example of the 41st chair in the French Academy shows, despite this selectivity he expects the accumulation of further forms of peer recognition, including a citation boost. The introduction as well as the first paragraph of the new Section 2 hopefully clarify the issue. We do not claim to study all forms of Matthew effects, but focus on citations as one, though important, form of peer recognition. Looking at the citation paths before Nobel receipt for treated and control papers suggests that the “synthetic control set” takes care of selectivity issues with respect to citations.

    R2: This brings me to the “rigorous analysis of the causal effects”. While I agree that receiving a prize can be seen as a natural experiment of some kind, I do wonder to what extent it is linked to a presumed causal Matthew effect in this case. And, even if it is, the resulting “causal effects” still come with uncertainty and are closely tied to the numerous strong assumptions that are present in, for example, the matching and synthetic control techniques, the model specifications, and the outcome transformation, let alone the plenty of hidden researcher degrees of freedom present in the study. I would therefore encourage the authors, as a minimum, to disclose 1) the UT numbers chosen for the studies; 2) if available, all processing code; 3) an elaborate decision tree for options and choices made; and 4) the data matrix or matrices behind the regression models with untransformed citation scores. Such transparency is needed if this study is to move beyond exploration, as it aims to do.

    Authors: Scientific conclusions are always uncertain (see King et al. 1994). We try to minimize uncertainty by using sophisticated methods but never claim that our approach is without assumptions or that our conclusions are certain. We are also very enthusiastic about the open science movement, have provided replication code for previous publications, and had planned to do so for this paper anyway. UT numbers and processing code will be made available via the Harvard Dataverse. We believe the decisions we made when processing the data are sufficiently transparent through the description in the paper and the available code, and hence will not create a decision tree showing every option and choice. We cannot provide the data due to license agreements.

    R2: While technically sophisticated, I am not sure that the “synthetic control” set of papers is meaningful here. Again, since prizes are given to persons and the treatment set of papers are all linked to such persons, it seems to me that the control set should also be made up of “potential” recipients and their papers, and not a mix of presumably comparable papers with numerous different origins. I know the authors dismiss this approach, preferring the “synthetic control”, but to me this means that we are actually not comparing like with like. Does it matter for the eventual results? Probably not in a qualitative sense, but perhaps in relation to estimated effect sizes? Please reflect.

    Authors: We had already explained in the first draft of the paper that finding adequate controls at the author level is close to impossible. We added a few more details to the footnote explaining this. Using such inadequate control cases with very different citation impacts and citation paths would lead to biased estimates. It might make sense for a study of career paths starting in early and mid-career, but not for Nobel laureates.

    R2: The regression results are presented in the usual manner seen in economics journals, with asterisks indicating arbitrary significance thresholds. The regressions capitalize on extremely large sample sizes, and while p-values therefore become very low, the authors do not correct for the obvious multiple testing that is present. While certainly not excessive, the authors do a couple of times seem to care about whether results are “significant” going from one specification to another. Given the non-random nature of the data combined with its considerable size, this is hardly interesting.

    Authors: We acknowledge some more general issues, such as non-randomness, concerning the use of inferential statistics in bibliometrics, which we now discuss in the paper. The issue of multiple testing, however, does not apply well to our case. Actually, the number of significance tests is rather parsimonious compared to the common use of this approach. We also discuss the issue of seemingly “extremely large sample sizes” and clarify that statistical power is not as large as it might appear at first glance.

    R2: I have also noticed what I consider to be biased reference practices in some circumstances. Indeed the authors themselves state “citations are not only building blocks of scientific claims and mark the origin of certain ideas …” (page 16), hence we would expect them to adhere to this. But, for example, on page 5, lines 40-45, the authors state and cite: “[t]he citation rates of most articles typically peak several years after publication and then steady decline in relevance” (Amin and Mabe, 2003; Wang, Song and Barabási, 2013). “Decline in relevance” is probably not a proper formulation; obsolescence is more in line with the concept the authors implicitly point to, namely “cited half-life”, a concept that has been around since the early 1960s due to Burton and Kebler, as well as De Solla Price. Many research papers have addressed this concept in relation to articles and literatures. It is therefore highly surprising and misleading that the authors chose to cite an editorial piece from an Argentinian medical journal containing no references at all to these or other concepts discussed. This is not giving credit where credit is due. Likewise, it would also be misleading to simply cite Wang, Song and Barabási (2013). Their findings are disputed and not really central to the claim in the citing context. What is needed is a proper reference to the fact that most articles, depending on field, have comparable citing histories with a peak 2, 3, 4, 5 or whatever years after publication and then a steadily declining distribution (disregarding Sleeping Beauties or citation classics). I suggest that the authors go through their references one more time, aiming to cite the original ideas, or at least proper substitutes, and at the same time try to disregard where studies are published, focusing only on content and its relevance.

    Authors: Thanks, we checked all the references and changed them where it appeared adequate.

    R2: Finally, while the authors do mention the “Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel” in the final discussion section, they should have done this at the beginning of the manuscript instead. The authors do what most others also do: conflate the memorial prize with the real Nobel Prize, which is a mistake. The aim of Sveriges Riksbank in instigating the former was ideological, to promote independence based on “economics as a science”, and hence the wish to make economics comparable to the existing sciences and Nobel Prizes (something the family was against). The latter certainly succeeded, especially among economists and politicians, not among natural scientists. However, the scientific fields and the basis for the awards are still very different. I agree that a prize is a prize and its potential effects on the social system of science can and should be examined, but the underlying mechanisms and background contexts across fields may differ substantially; hence, explanations and generalizations may differ as well.

    Authors: Good point. When preparing the first draft for QSS, both a footnote discussing this and a terminological clarification got lost. We fixed that problem now. We use more precise wording and added a footnote in the introduction discussing these political dimensions of the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel.

    Reviewer 3

    R3: I find it hard to understand the details of the matching methodology. The methodological explanation provided on p. 6-7 needs to be extended. The explanation of matching based on publication year and WoS subject category is clear, but it is not clear to me how exactly the matching based on number of citations was performed. This requires a more elaborate discussion. Also, in step 3, the authors refer to ‘the weights from the second step’, but no weights are mentioned in the discussion of step 2. Moreover, the method of entropy balancing used in step 3 needs a more detailed discussion. I am not familiar with this method, and this probably applies to most readers. Without a basic understanding of this method, it is hard to understand step 3.

    Authors: Thanks for that helpful feedback. We rewrote the paragraphs introducing our approach and hope that all steps are now easier to understand.
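    For readers unfamiliar with entropy balancing, its core mechanic can be sketched for a single covariate: control units receive exponential-tilting weights, chosen so that the weighted control mean equals the treated-group mean while the weights stay as close to uniform as possible. The following one-covariate sketch with made-up numbers is only meant to convey this idea; it is not the authors’ implementation, which balances entire pre-Nobel citation paths and thus involves many such moment constraints at once.

    ```python
    import math

    def entropy_balance_1d(control_x, target_mean, tol=1e-10):
        """Find weights w_i proportional to exp(lam * x_i) such that the
        weighted mean of the controls equals target_mean (bisection on lam)."""
        def weighted_mean(lam):
            w = [math.exp(lam * x) for x in control_x]
            total = sum(w)
            return sum(wi * xi for wi, xi in zip(w, control_x)) / total

        lo, hi = -50.0, 50.0  # bracket for the tilting parameter lam
        while hi - lo > tol:
            mid = (lo + hi) / 2
            if weighted_mean(mid) < target_mean:  # mean is increasing in lam
                lo = mid
            else:
                hi = mid
        lam = (lo + hi) / 2
        w = [math.exp(lam * x) for x in control_x]
        total = sum(w)
        return [wi / total for wi in w]  # normalized weights summing to 1

    # Made-up example: four control papers with one pre-award covariate,
    # reweighted so their weighted mean matches a treated mean of 3.0.
    controls = [1.0, 2.0, 3.0, 4.0]
    weights = entropy_balance_1d(controls, target_mean=3.0)
    ```

    (In practice one would center and scale the covariate first to avoid numerical overflow, and real implementations solve for several moment constraints jointly rather than by one-dimensional bisection.)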

    R3: In the science studies literature, there have been extensive discussions of the reasons researchers have for choosing to cite particular works and not to cite other works. These discussions are often framed in terms of the normative citation theory and the social constructivist citation theory. The authors need to carefully study this literature and they should make sure their paper is properly embedded in the literature. For instance, on p. 15, the authors conclude that “our findings suggest that science is a social system which is not only driven by meritocratic considerations but also by social expectations and peer pressure”. Given the extensive discussions that have taken place in the literature, I find it quite problematic to present this as a new finding, without providing any literature references.

    Authors: Extremely helpful! To properly embed our study in the literature, we added a new section on the role of citations in science, which introduces the normative citation theory and the social constructivist citation theory and links them to Matthew effects in science.

    R3: Throughout the paper, several references are missing. Instead, a placeholder is presented (‘--------’). This needs to be fixed.

    Authors: We had originally prepared a blinded manuscript and then forgot to plug these references to our own work back in. Thanks!

    R3: Web of Science is no longer owned by Thomson Reuters. It is owned by Clarivate Analytics.

    Authors: Fixed throughout the manuscript

    R3: On p. 4, it is not clear to me why the authors refer explicitly to the Book Citation Index. More recently, the Emerging Sources Citation Index was added to Web of Science, but the authors do not mention this.

    Authors: We now write “we had to exclude other sources such as the Emerging Sources Citation Index and the Book Citation Index from our analysis.”



  • pre-publication peer review (ROUND 1)
    Decision Letter
    2020/11/22

    22-Nov-2020

    Dear Prof. Wolbring:

    Your manuscript QSS-2020-0076 entitled "Matthew Effects and the Serial Diffusion of Ideas in Science: Testing Old Ideas with New Methods", which you submitted to Quantitative Science Studies, has been reviewed. The comments of the reviewers are included at the bottom of this letter.

    Based on the comments of the reviewers as well as my own reading of your manuscript, my editorial decision is to invite you to prepare a major revision of your work. Please carefully consider the comments and suggestions of the reviewers. Reviewer 2 suggests to extend your analysis to other prizes. While this would definitely be of interest, I do not consider this necessary for publication of your work in Quantitative Science Studies.

    To revise your manuscript, log into https://mc.manuscriptcentral.com/qss and enter your Author Center, where you will find your manuscript title listed under "Manuscripts with Decisions." Under "Actions," click on "Create a Revision." Your manuscript number has been appended to denote a revision.

    You may also click the below link to start the revision process (or continue the process if you have already started your revision) for your manuscript. If you use the below link you will not be required to login to ScholarOne Manuscripts.

    PLEASE NOTE: This is a two-step process. After clicking on the link, you will be directed to a webpage to confirm.

    https://mc.manuscriptcentral.com/qss?URL_MASK=0fd7695a74334d9f834e5209a93e868a

    You will be unable to make your revisions on the originally submitted version of the manuscript. Instead, revise your manuscript using a word processing program and save it on your computer. Please also highlight the changes to your manuscript within the document by using the track changes mode in MS Word or by using bold or colored text.

    Once the revised manuscript is prepared, you can upload it and submit it through your Author Center.

    When submitting your revised manuscript, you will be able to respond to the comments made by the reviewers in the space provided. You can use this space to document any changes you make to the original manuscript. In order to expedite the processing of the revised manuscript, please be as specific as possible in your response to the reviewers.

    IMPORTANT: Your original files are available to you when you upload your revised manuscript. Please delete any redundant files before completing the submission.

    If possible, please try to submit your revised manuscript by 22-Mar-2021. Let me know if you need more time to revise your work.

    Once again, thank you for submitting your manuscript to Quantitative Science Studies and I look forward to receiving your revision.

    Best wishes,
    Dr. Ludo Waltman
    Editor, Quantitative Science Studies
    qss@issi-society.org

    Reviewers' Comments to Author:

    Reviewer: 1

    Comments to the Author
    The authors investigated the Matthew effect in the process of being cited. The topic is interesting and the methods applied seem to be sound. However, the manuscript should be revised following these points:

    I miss some references throughout the whole manuscript. The authors have included many placeholders.

    P2, L44: The authors write that “to the best of our knowledge, no study exists which provides a rigorous analysis of the causal effect of Nobel Prize reception on the accumulation of further citations for a group of laureates”. However, there exist studies investigating causal effects with respect to the Matthew effect. The results of these studies should be reported in the manuscript (and discussed against the backdrop of the authors’ own findings).

    P3, L58: It is no longer Thomson Reuters’ Web of Science, but Clarivate Analytics.

    P4, L26: It is not clear what is meant by “coverage” here. Also, a sample size of 23 seems to be low (and not “sufficient” as the authors write). It should be no problem at all to use WoS data from 1980 onwards. Thus, I encourage the authors to extend the database of the study.

    P6, L38: There are many different matching procedures available (e.g., entropy balancing and inverse probability weighting). Why did the authors decide to use CEM?

    P6, L33: The authors decided to use a very limited set of variables for matching (field, publication year, cumulative citations). They could improve the matching by considering further variables, such as the number of co-authors, number of cited references, and number of pages. These are variables which might influence the number of citations a paper receives. In my experience (and it is generally recommended that) matching should include as many variables as possible.

    Reviewer: 2

    Comments to the Author
    The manuscript examines the so-called “Matthew effect” for a set of papers linked to laureates of the Bank of Sweden Award in Economic Sciences in Honor of Alfred Nobel - often mistakenly referred to as the “Nobel Prize in Economics” - compared to a set of so-called “synthetic controls”. The authors motivate their study by claiming that hitherto no “rigorous quantitative analysis for Merton’s original case for Matthew effects - Nobel Prizes” has been done, and consequently that their study fills this gap by “exploring the causal effect of Nobel Prizes in Economics”. Second, the study also examines claims for so-called “serial diffusion of ideas”, i.e. potential spillover effects of getting a prize on publications cited by the laureates. The study finds “clear evidence for a Matthew effect upon citation impacts” for the papers linked to the laureates, and no evidence for “serial diffusion”. The authors substantiate their findings by arguing that such “boosts” in citations to older papers when linked to a prize are essentially “caused” by ceremonial reference behaviors.

    While I generally don’t think that “novelty” should play a decisive role in reviews, I do think that the present manuscript to some extent propagates what we already know, and unfortunately in a restricted way. Given our knowledge of cumulative effects and reference (citing) behavior, especially in the soft sciences, it is not surprising that a shock effect, such as a prize, results in a sudden increase of citations to presumably linked papers, and that such immediate attention most probably can be ascribed to ceremonial and persuasive citing behavior. Such shock effects, however, are not necessarily equivalent to a Matthew effect, and indeed such cumulative advantages have been a challenge to quantify, though easier to declare in a qualitative sense.
    What is missing is a more proper analysis and discussion of these effects - what are they?

    What would also have been interesting to see is to what extent such shock effects are comparable across other areas, e.g., the real Nobel Prizes, the Turing Award, or the Fields Medal. I think this is important because it is questionable whether the prize itself “causes” the presumed Matthew effect - it may already be in play.

    I am therefore somewhat uneasy with the way the authors treat the Matthew effect. If my recollection of Merton and Zuckerman is not very wrong, what they derived from Zuckerman’s interviews with real Nobel laureates affiliated with American institutions was that the laureates had very advantageous career opportunities prior to being awarded the Prize. As such, the Prize itself was “merely” the “crowning achievement”. The cumulative advantages, or Matthew effects, were already in place and had been for a long time when the prize was received. Initially, real Nobel Prizes were typically given for discoveries made a few years earlier. But the gap between discoveries and receiving the prize has steadily grown wider, so that for many years now recipients - at least of the real Nobel Prizes - are often of considerable age (there are exceptions of course, especially in physics) and practically in retirement.

    To examine Matthew effects, one should therefore take the career perspective and not simply look at citations to papers after a prize is given. While it is most likely that such prizes bring even more attention (though perhaps only temporarily?), one would assume that on average they come on top of an already existing cumulative advantage.
    The questions for the present study are to what extent such effects are already in place, whether the “synthetic control” set takes care of them, and to what extent the citation boost is an amplifying (double) effect.

    This brings me to the “rigorous analysis of the causal effects”. While I agree that receiving a prize can be seen as a natural experiment of some kind, I do wonder to what extent it is linked to a presumed causal Matthew effect in this case. And even if it is, the resulting “causal effects” still come with uncertainty and are closely tied to the numerous strong assumptions present in, for example, the matching and synthetic control techniques, the model specifications, and the outcome transformation, let alone the many hidden researcher degrees of freedom present in the study.
    I would therefore encourage the authors, as a minimum, to disclose 1) the UT numbers chosen for the studies; 2) if available, all processing code; 3) an elaborate decision tree for the options and choices made; and 4) the data matrix or matrices behind the regression models, with untransformed citation scores. Such transparency is needed if this study is to move beyond exploration, as it aims to do.
    While technically sophisticated, I am not sure that the “synthetic control” set of papers is meaningful here. Again, since prizes are given to persons and the treatment set of papers are all linked to such persons, it seems to me that the control set should also be made up of “potential” recipients and their papers, and not a mix of presumably comparable papers with numerous different origins. I know the authors dismiss this approach, preferring the “synthetic control”, but to me this means that we are actually not comparing like with like.

    Does it matter for the eventual results? Probably not in a qualitative sense, but perhaps in relation to estimated effect sizes. Please reflect.
    The regression results are presented in the usual manner seen in economics journals, with asterisks indicating arbitrary significance thresholds. The regressions capitalize on extremely large sample sizes, and while p-values therefore become very low, the authors do not correct for the obvious multiple testing that is present. While certainly not excessive, the authors do a couple of times seem to care about whether results remain “significant” going from one specification to another. Given the non-random nature of the data combined with its considerable size, this is hardly interesting.
    Despite my reservations about the main analyses, I think the results, especially for the first analysis, are quite clear (and not surprising). A suggestion, though, would be to reframe what is actually examined in relation to the above remarks about the Matthew effect, careers, etc.
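    To be concrete about what such a correction involves: a Holm-Bonferroni step-down adjustment, for example, can be sketched in a few lines (a generic illustration of the technique, not code from the manuscript under review):

```python
def holm_bonferroni(pvals, alpha=0.05):
    """Holm's step-down procedure, which controls the family-wise error rate.

    Returns a list of booleans (in the original order of `pvals`) indicating
    which hypotheses are still rejected at level `alpha` after adjustment.
    """
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # ascending p-values
    reject = [False] * m
    for rank, i in enumerate(order):
        if pvals[i] <= alpha / (m - rank):  # threshold tightens with m - rank
            reject[i] = True
        else:
            break  # once one test fails, all larger p-values fail too
    return reject

# Example: of three nominally significant tests, only the smallest survives.
print(holm_bonferroni([0.010, 0.040, 0.030]))  # [True, False, False]
```

    With very large samples, even adjusted p-values will often stay tiny, which is exactly why effect sizes matter more than stars here.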

    It is unfortunate that the study is not comparative with at least one other field, because an obvious question is to what extent the field of economics and its recipients compare to physics, chemistry, or physiology.

    Are recipients generally younger and still pursuing their careers? And would this mean that the shock effect actually turns into a permanent Matthew effect in economics but perhaps not in other fields? Are citing practices comparable? If not what consequences would that have? The authors acknowledge the very limited generalizability of their findings, albeit not in the abstract, but at the end of the manuscript in typically vague formulations. So be it.

    Nevertheless, the discussion section seems very shallow for the readers of QSS. This might as well have been written in a few motivating sentences at the beginning of the analysis. We do know that a large part of the references given are persuasive. We would therefore expect a boost in selective citing when someone is lifted onto a pedestal by the prize - the argument here is that it is actually the person you want to be associated with through the reference to her/his work. All this seems evident, and so do the consequences. Therefore, this could and should have been predicted to begin with.

    I have also noticed what I consider to be biased referencing practices in some circumstances. Indeed, the authors themselves state that “citations are not only building blocks of scientific claims and mark the origin of certain ideas …” (page 16), hence we would expect them to adhere to this. But, for example, on page 5, lines 40-45, the authors state and cite: “[t]he citation rates of most articles typically peak several years after publication and then steadily decline in relevance (Amin and Mabe, 2003; Wang, Song and Barabási, 2013)”. “Decline in relevance” is probably not a proper formulation; obsolescence is more in line with the concept the authors implicitly point to, namely “cited half-life”, a concept that has been around since the early 1960s due to Burton and Kebler, as well as De Solla Price. Many research papers have addressed this concept in relation to articles and literatures. It is therefore highly surprising and misleading that the authors chose to cite an editorial piece from an Argentinian medical journal containing no references at all to these or other concepts discussed. This is not giving credit where credit is due. Likewise, it would also be misleading to simply cite Wang, Song and Barabási (2013); their findings are disputed and not really central to the claim in the citing context. What is needed is a proper reference to the fact that most articles, depending on the field, have comparable citation histories, with a peak 2, 3, 4, 5 or however many years after publication and then a steadily declining distribution (disregarding Sleeping Beauties or citation classics). I suggest that the authors go through their references one more time, aiming to cite the original ideas, or at least proper substitutes, and at the same time try to disregard where studies are published, focusing only on content and its relevance.

    Finally, while the authors do mention the “Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel” in the final discussion section, they should have done this at the beginning of the manuscript instead. The authors do what most others also do: conflate the memorial prize with the real Nobel Prize, which is a mistake. The aim of Sveriges Riksbank in instigating the former was ideological: to promote independence based on “economics as a science”, and hence the wish to make economics comparable to the existing sciences and Nobel Prizes (something the Nobel family was against). The latter certainly succeeded, especially among economists and politicians, though not among natural scientists. However, the scientific fields and the bases for the awards are still very different. I agree that a prize is a prize and that its potential effects on the social system of science can and should be examined, but the underlying mechanisms and background contexts across fields may differ substantially; hence, explanations and generalizations may differ as well.

    Reviewer: 3

    Comments to the Author
    This paper seems to present a solid analysis of the Matthew effect caused by the Nobel Prize in economics. I have two major comments and a few minor ones.

    Major comments

    I find it hard to understand the details of the matching methodology. The methodological explanation provided on p. 6-7 needs to be extended. The explanation of matching based on publication year and WoS subject category is clear, but it is not clear to me how exactly the matching based on number of citations was performed. This requires a more elaborate discussion. Also, in step 3, the authors refer to ‘the weights from the second step’, but no weights are mentioned in the discussion of step 2. Moreover, the method of entropy balancing used in step 3 needs a more detailed discussion. I am not familiar with this method, and this probably applies to most readers. Without a basic understanding of this method, it is hard to understand step 3.
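    For orientation, the basic idea of entropy balancing - reweighting the control units so that their covariate means match the treated group’s, while keeping the weights as close to uniform as possible in the Kullback-Leibler sense - can be sketched as follows (a generic illustration with invented variable names, not the authors’ implementation):

```python
import numpy as np
from scipy.optimize import minimize

def entropy_balance(X_control, target_means):
    """Weights for control units such that their weighted covariate means
    equal `target_means` (e.g., the treated group's means). The weights are
    the KL-closest to uniform among all weight vectors satisfying the moment
    constraints; `target_means` must lie inside the convex hull of the
    control covariates for a solution to exist."""
    Xc = np.asarray(X_control) - np.asarray(target_means)  # center on targets
    # Dual problem: optimal weights have the form w_i ∝ exp(-lam @ x_i), and
    # the optimal lam minimizes this smooth convex objective.
    dual = lambda lam: np.log(np.mean(np.exp(-Xc @ lam)))
    lam = minimize(dual, np.zeros(Xc.shape[1]), method="BFGS").x
    w = np.exp(-Xc @ lam)
    return w / w.sum()

# Toy check: the weighted control means reproduce the requested targets.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))   # e.g., standardized citations and paper age
w = entropy_balance(X, [0.3, -0.2])
print(np.round(w @ X, 3))       # ≈ [0.3, -0.2]
```

    A sentence or two along these lines in the manuscript would already make step 3 much easier to follow.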

    In the science studies literature, there have been extensive discussions of the reasons researchers have for choosing to cite particular works and not to cite other works. These discussions are often framed in terms of the normative citation theory and the social constructivist citation theory. The authors need to carefully study this literature and they should make sure their paper is properly embedded in the literature. For instance, on p. 15, the authors conclude that “our findings suggest that science is a social system which is not only driven by meritocratic considerations but also by social expectations and peer pressure”. Given the extensive discussions that have taken place in the literature, I find it quite problematic to present this as a new finding, without providing any literature references.

    Minor comments

    Throughout the paper, several references are missing. Instead, a placeholder is presented (‘--------’). This needs to be fixed.

    Web of Science is no longer owned by Thomson Reuters. It is owned by Clarivate Analytics.

    On p. 4, it is not clear to me why the authors refer explicitly to the Book Citation Index. More recently, the Emerging Sources Citation Index was added to Web of Science, but the authors do not mention this.

    Reviewer report
    2020/11/22

    Reviewer report
    2020/11/20

    Reviewer report
    2020/10/23

    The authors investigated the Matthew effect in the process of being cited. The topic is interesting and the methods applied seem to be sound. However, the manuscript should be revised along the following points:

    Some references are missing throughout the whole manuscript; the authors have included many placeholders.

    P2, L44: The authors write that “to the best of our knowledge, no study exists which provides a rigorous analysis of the causal effect of Nobel Prize reception on the accumulation of further citations for a group of laureates”. However, there exist studies investigating causal effects with respect to the Matthew effect. The results of these studies should be reported in the manuscript (and discussed against the backdrop of the authors’ own findings).

    P3, L58: It is no longer Thomson Reuters’ Web of Science, but Clarivate Analytics.

    P4, L26: It is not clear what is meant by “coverage” here. Also, a sample size of 23 seems low (and not “sufficient”, as the authors write). It should be no problem at all to use WoS data from 1980 onwards. Thus, I encourage the authors to extend the database of the study.

    P6, L38: There are many different matching procedures available (e.g., entropy balancing and inverse probability weighting). Why did the authors decide to use CEM?
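    For comparison, the basic CEM idea - coarsen each covariate into bins, then match treated and control units exactly on the joint bin signature - can be sketched as follows (a generic illustration with invented variable names, not the authors’ implementation):

```python
import pandas as pd

def cem_weights(df, treated_col, covariates, bins=5):
    """Schematic coarsened exact matching (CEM):
    1) coarsen each covariate into `bins` intervals,
    2) form strata from the joint bin signature,
    3) drop strata lacking either treated or control units,
    4) give treated units weight 1 and controls weight n_treated/n_control
       within their stratum (normalization conventions vary across packages)."""
    coarse = df[covariates].apply(lambda col: pd.cut(col, bins, labels=False))
    stratum = coarse.astype(str).agg("-".join, axis=1)  # joint bin signature
    treated = df[treated_col].astype(bool)
    w = pd.Series(0.0, index=df.index)  # unmatched units keep weight 0
    for _, idx in df.groupby(stratum).groups.items():
        is_t = treated.loc[idx].to_numpy()
        n_t, n_c = int(is_t.sum()), int((~is_t).sum())
        if n_t and n_c:                    # keep only matched strata
            w.loc[idx[is_t]] = 1.0         # treated units
            w.loc[idx[~is_t]] = n_t / n_c  # controls mirror treated counts
    return w

# Toy example: two treated and four control papers, one covariate.
papers = pd.DataFrame({"treated":   [1, 1, 0, 0, 0, 0],
                       "log_cites": [0.1, 0.9, 0.15, 0.2, 0.85, 0.5]})
print(cem_weights(papers, "treated", ["log_cites"], bins=2).tolist())
```

    A short motivation of this kind, contrasted with the alternatives above, would make the choice of CEM easier to judge.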

    P6, L33: The authors decided to use a very limited set of variables for matching (field, publication year, cumulative citations). They could improve the matching by considering further variables, such as the number of co-authors, the number of cited references, and the number of pages. These are variables that might influence the number of citations a paper receives. In my experience (and it is generally recommended), matching should include as many variables as possible.

All peer review content displayed here is covered by a Creative Commons CC BY 4.0 license.