Abstract

Information about the relative strengths of scholars is needed for the efficient running of knowledge systems. Since academic research requires a wide range of skills, more experienced researchers might produce better research and attract more citations. This article assesses career citation impact changes during 2001-2016 for the international publications of domestic researchers (beginning and ending in the same country) from the twelve nations with the most Scopus documents. Careers are analysed longitudinally, so that changes are not due to personnel evolution, such as researchers leaving or entering a country. The results show that long-term researchers do not tend to improve their citation impact over time but tend to achieve their average citation impact by their first or second Scopus journal article. In some countries, this citation impact subsequently declines. Longer-term researchers have higher citation impact than the national average in all countries, however, whereas scholars publishing only one journal article have substantially lower citation impact in all countries. The results are consistent with an efficiently functioning researcher selection system but cast slight doubt on the long-term citation impact potential of long-term researchers. Research and funding policies may need to accommodate these patterns when citation impact is a relevant indicator.


Authors

Nabeil Maflahi; Mike Thelwall

  • pre-publication peer review (FINAL ROUND)
    Decision Letter
    2021/04/11

    11-Apr-2021

    Dear Dr. Thelwall:

    It is a pleasure to accept your manuscript entitled "Domestic researchers with longer careers generate higher average citation impact but it does not increase over time" for publication in Quantitative Science Studies. The comments of the reviewers who reviewed your manuscript are included at the foot of this letter. Both reviewers recommend acceptance of your work, but reviewer 2 still has a few small comments that you may want to address in the final version of your manuscript.

    I would like to request you to prepare the final version of your manuscript using the attached checklist. Please also sign the publication agreement, which can be downloaded from https://direct.mit.edu/DocumentLibrary/PubAgreements/QSS_pub_agreement.pdf. The final version of your manuscript, along with the completed checklist and the signed publication agreement, can be returned to qss@issi-society.org.

    Thank you for your contribution. On behalf of the Editors of Quantitative Science Studies, I look forward to your continued contributions to the journal.

    Best wishes,
    Dr. Ludo Waltman
    Editor, Quantitative Science Studies
    qss@issi-society.org

    Reviewers' Comments to Author:

    Reviewer: 1

    Comments to the Author
    The authors have added references and a short discussion about their definition of domestic researchers that enable the reader to better compare and assess the definition. From my point of view, the authors have thereby resolved my concern.

    Reviewer: 2

    Comments to the Author
    The authors have done a good job in revising the manuscript and my major comments and concerns have been properly addressed. In particular, I find it relevant that the exclusion of the 9+ co-authoring researchers does not represent a strong challenge to the main analysis that excludes them.
    Below I list a couple of minor issues that could be addressed before final acceptance for publication:
    - On page 8, it is said that a “researcher with a first publication in 2001 or afterwards was assumed to have started publishing internationally in that year”. What is meant by “internationally” in this case?
    - On page 20, it is not entirely clear what is meant by “the value of the first and last articles is likely to be based on a more publishing sample the other values”. What is a “publishing sample”, and what is meant by “the other values”?

    Author Response
    2021/03/11

    *Thank you very much to the reviewers for reading the paper and for their comments. We really appreciate the detailed advice and insightful suggestions. Please see below for the changes made, marked with *.
    Reviewers' Comments to Author:

    Reviewer: 1

    Comments to the Author
    The authors have revised the manuscript and resolved most of my concerns. However, I am not convinced by the authors' answer to my first major concern regarding the operationalization of domestic researchers. The comparative design, which is the novel part of this study, depends on the validity of the units of comparison. I do not believe that the authors have convincingly shown that their operationalization attains validity, since most of my concerns addressed in my comment still apply. Due to the importance of the operationalization of domestic researchers, I believe that the authors should try to show (by some test or by some descriptive statistics) how well their operationalization captures their idea of domestic researchers that is supposed to enable cross-country comparisons.
    ****This is a category that we have introduced in this paper, so we have nothing to benchmark it against other than our own definition. We have added the following text to explain that it is one possible interpretation of the term: “A domestic researcher is defined here as someone that is affiliated with the same country in their chronologically first and last Scopus-indexed publications, even if they spend part of their time abroad. Since there are other reasonable definitions of domestic researchers, such as never collaborating internationally (Tan et al., 2015), or just being based in a country, however temporarily (Akhmadieva et al., 2020; Ponomariov & Toivanen, 2014), the definition used here is only one way of interpreting domesticity.” We have also emphasised the definition in the abstract and conclusion.
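    As a rough illustration of this operationalisation (a minimal sketch, not the authors' code; the data structure and names are hypothetical), the rule reduces to comparing the first listed affiliation country of the chronologically first and last Scopus-indexed journal articles:

```python
# Minimal sketch of the domestic-researcher rule described above.
# Assumes each researcher's record is a list of (year, first_listed_country)
# pairs for their Scopus-indexed journal articles, sorted by year.

def is_domestic(publications):
    """True if the first and last publications share the same country."""
    if not publications:
        return False
    return publications[0][1] == publications[-1][1]

# Hypothetical example: a spell abroad mid-career does not change the label.
career = [(2001, "United Kingdom"), (2007, "United States"), (2016, "United Kingdom")]
print(is_domestic(career))  # True: first and last affiliations match
```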

    Reviewer: 2

    Comments to the Author
    The authors have made relevant changes that make the text more readable and understandable than the first version. I thank the authors for these efforts.
    However, despite these improvements, there are still important limitations that need to be considered should the manuscript be accepted for publication.

    Major comment.
    Exclusion of highly collaborative researchers. Unfortunately, the reasons given to exclude researchers with ten or more co-authors in one publication are not very convincing. I don't see why it is an analytical advantage to exclude researchers "in large co-authorship lists" or those belonging to "consortia". It is also unclear what the role of this exclusion is in the assessment of "individual contributions". As a matter of fact, the only time the authors mention "individual contributions" is precisely to justify this exclusion, although the role of individual contributions is not discussed anywhere else in the manuscript.
    The exclusion of this set may have severe fundamental effects on the overall conclusions of the manuscript. Essentially, by excluding these authors, the most successful set of researchers (e.g. the more collaborative ones, the more international ones, the ones probably acquiring more funding, etc.) is excluded. If one were to argue that the increase in a researcher's citation impact is probably related to the acquisition of funding, resources and collaboration, then the manuscript could also be interpreted as showing that those domestic researchers who do not secure large collaboration networks will decrease in their impact.
    I think that leaving this issue open calls into question the main conclusions of the manuscript. At a minimum, the potential effect of the exclusion of these collaborative authors should be tested (e.g. by running a test on a random sample including/excluding these collaborative authors and assessing the differences; or simply including them in the analysis to test their effect).
    ****The analysis has been re-run with the collaborative authors and the results added as an Appendix, and the differences commented on at the start of the Discussion, “If researchers ever collaborating with 9+ authors are not excluded, so that all authors with their first and last Scopus journal articles from the same country were analysed, then there are similar trends in the results (Appendix). The main difference is that the average impact of all researchers is higher, due to the inclusion of some higher impact collaborative papers. This similarity suggests that the results of this paper might apply to all domestic researchers although, as argued in the Methods section, the inclusion of highly collaborative papers reduces the validity of the results.”. The following extra text has been included to justify their exclusion: “[It is difficult to evaluate the collaborations of researchers in large co-authorship lists partly because they may be from consortia with publishing agreements] ensuring that people with no connection to a study become co-authors (Thelwall, 2020). For example, one CERN paper had 5,154 co-authors and including this one paper may create thousands of extra authors, altering country profiles. Similarly, many long-term collaborations with almost identical lists of hundreds or thousands of authors for a series of papers (Thelwall, 2020) could substantially influence the results here with large numbers of additional authors for some countries.”. The restriction about collaboration has been mentioned more in the Discussion and Conclusion to emphasise that the results are not about all researchers, and the following added as a limitation, “Finally, the restriction to researchers that never co-author Scopus-indexed articles with 9+ people and the domestic researcher restriction mean that the set analysed is artificial, created with conditions related to indicator validity rather than management decision-making.”. The point above about less successful researchers has also been added, as follows, “An alternative plausible interpretation of the results (suggested by a reviewer) is that domestic researchers who do not secure large collaboration networks tend to have decreasing citation impact.”.
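    The kind of with/without comparison described here can be sketched as a simple filter (hypothetical data structures, not the authors' code):

```python
# Sketch of the robustness check: build the researcher set with and without
# the exclusion of anyone who ever co-authored a Scopus journal article
# having ten or more authors (i.e. 9+ co-authors), then compare MNLCS trends.

def qualifying_researchers(domestic_researchers, exclude_highly_collaborative=True):
    kept = []
    for researcher in domestic_researchers:
        if exclude_highly_collaborative and any(
            paper["n_authors"] >= 10 for paper in researcher["papers"]
        ):
            continue  # dropped from the main analysis
        kept.append(researcher)
    return kept

# main_set = qualifying_researchers(domestic_researchers)          # main results
# appendix_set = qualifying_researchers(domestic_researchers,
#                                       exclude_highly_collaborative=False)  # Appendix re-run
```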

    Other minor issues

    • A domestic researcher is defined as "trained by a country and remains focused on that country". I think the "training" and "focused" ideas are too ambitious for the approach applied in the manuscript. I would suggest rephrasing this to refer to these researchers as those who are "affiliated to the same country in their chronologically first and last publications". Or at least a similar wording.
      ****This change has been made.

    • In Table 1 what are "reference researchers"? It is not totally clear from the descriptions in the text.
      ****This has been changed to “all domestic researchers” with the following added to the table caption, “The set of all domestic researchers (first and last Scopus publication from the country) is used for reference in some of the graphs.”

    • Figures 1 to 5 present different analyses. One wonders why the same analysis is not used for all four samples (long term, medium term, short term, and single paper authors). For example, the MNLCS Diff analysis is not presented for the longer-term set. Overall, it would be nice to provide the same analyses for all the sets, and to clarify the sets in the methodological section.
      ****We tried this first, but the graphs were too messy and confidence intervals too wide. We have a bespoke solution for each one that we think conveys the most information from it. The following has been added to clarify this: “This was calculated only for the 11-year and 6-year researchers because there is only one cohort for the long term researchers (so nothing to average). Although each cohort could be analysed separately, the low numbers per cohort give wide confidence intervals and messy graphs, so the aggregation of cohorts in this way adds precision to the career trends found. In contrast, whilst the single year researchers could be averaged across all years, it is more informative to report values for individual years and the sample sizes are sufficient to not need aggregating.”

    • On page 2, what is meant by "unfortunate limitation of keeping these people is that mid-career citation patterns might be due to periods spent abroad."
      ****The following has been added to illustrate one way in which this is possible, “(e.g., increased citation impact due to working abroad with higher quality infrastructure and support)”

    It is also not clear what is meant by "permanent changes in infrastructure quality". What is meant here?
    ****The following has been added to illustrate this, “(e.g., moving to a richer lab in a wealthier country)”

    • On page 8 it is explained what is done with multi-affiliated authors in a paper. As I understand it, multi-affiliated authors are assigned to the first affiliation to which they are linked in the byline of the paper. Am I correct? If I am correct, this could introduce some noise in the analysis, since in those collaborative papers the multi-affiliated author's links may depend on the order of affiliations introduced by the co-authors (e.g. a researcher is affiliated with A and B, with A being the main affiliation. In a paper with a first co-author from B, the byline would very likely be B, A, with the multi-affiliated author being first affiliated to B). Perhaps a simpler solution: why not simply include all the multi-affiliated researchers in the study as they appear in the publications?
      ****This does not seem to be an improvement. It would create its own problems since it is probably more common for multi-affiliated researchers to have one primary affiliation. It seems likely that the additional affiliation will often be an honorary or minor one, such as a visiting professorship. So we think that the alternative approach will make the data more misleading.

    • Page 9, there is a repetition with “This procedure was used…”
      ****The extra one has been deleted.

    • Page 9, it is still not totally clear how the MNLCS values are calculated for the set of researchers. If my understanding is correct, this is what the authors have done: 1. Calculate NLCS values for all the publications in the set. 2. Average the NLCS values for each researcher-year combination (what is now mentioned as "modified NLCS values"). 3. Average the individual mean values of all researchers (average of all the “modified NLCS values”, with the exception of single-article researchers for whom the modified NLCS coincides with the NLCS), resulting in the final MNLCS. It would be appreciated if the authors could clarify this aspect in the methodological description.
      ****Yes this is right – the text has been expanded to clarify this and give an alternative calculation route that might also help, “Citation impact of a set of researchers (MNLCS): For a set of researchers, the MNLCS was calculated as above except that if a researcher had published multiple articles in the same year, then the average NLCS of those articles was used instead of averaging them separately. Averaging researcher average NLCS for a year instead of all NLCS for all qualifying papers prevents the results from being dominated by prolific researchers because their publications are averaged rather than counted separately. The MNLCS for any set of researchers was then calculated as the arithmetic mean of the modified NLCS values. This is equivalent to calculating the MNLCS for each researcher and year separately, then averaging the researcher MNLCS values for each year (ignoring researchers that did not publish in that year). This procedure was used for domestic short term, medium term and longer term researchers, as defined above, and was calculated separately for each year and country.”
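      A schematic version of this confirmed three-step aggregation, assuming the NLCS values are already computed and with hypothetical field names, might look like:

```python
# Steps as confirmed above: (1) NLCS per article is given; (2) average NLCS
# per researcher-year ("modified NLCS"); (3) arithmetic mean of the modified
# values across the researcher set, per year.

from collections import defaultdict
from statistics import mean

def mnlcs_by_year(articles):
    """`articles`: list of dicts with keys 'researcher', 'year', 'nlcs'."""
    per_researcher_year = defaultdict(list)
    for article in articles:
        per_researcher_year[(article["researcher"], article["year"])].append(article["nlcs"])

    # Step 2: one value per researcher-year, so prolific researchers do not dominate.
    modified_nlcs = {key: mean(values) for key, values in per_researcher_year.items()}

    # Step 3: average the modified values over researchers publishing in each year.
    per_year = defaultdict(list)
    for (_, year), value in modified_nlcs.items():
        per_year[year].append(value)
    return {year: mean(values) for year, values in per_year.items()}
```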

    • In the conclusions section, some of the conclusions sound rough, considering both the limitations of the study and the results presented. For example, the conclusion that “citation impact does not increase during careers” sounds very rough, since, considering all the restrictions and limitations introduced in the study, at best it can be said that in the USA the national normalized impact has decreased over time, which is also reflected in the individual career impact of the researchers from this country. Moreover, the impact of researchers from China has increased, probably because of the overall increase of the national impact in the country (black line in Figure 1). In other words, the two main countries in the study present rather different patterns, and the decrease in the impact of the US may be related to the increase of China’s overall impact, particularly considering that citations are normalized (meaning that a net increase of citations may still be relatively low if other actors in the system increase even more in their net impact). For the other countries, the patterns are very different and variable, without a clear overall pattern (some increase, some decrease, some are stable).
      ****The conclusions have been modified to add caveats about researchers never co-authoring with 9+ co-authors. The following has been added to emphasise that the results are relative to the national average, “The impact of interest here is relative to the national average rather than absolute or relative to the world average, under the assumption that factors outside the control of a researcher, such as economic development and research investment, can have a substantial influence on the national research capacity.” The following extra limitation has been added, mainly for the China case, “Changes in national research infrastructure may affect researchers differently by career stage. For example, substantial increases in research funding and infrastructure over many years (e.g., in China) may help senior researchers (who may win most of the funding) or young researchers (who can more easily learn expensive new technologies), so impact comparisons for long careers may not be fair on some groups.”

    • Another conclusion that could be challenged is that “last articles have lower citation impact than first articles”. This does not seem to be the case in China, Russia, Spain, or Italy (as per Figure 5). And if the national reference is considered, one could argue that in France, Japan and India, researchers have a higher impact than their national benchmark. Is this correct? (If not, please consider explaining the results for these countries better.)
      ****This has been corrected and stated more precisely, as follows, “The above factors may also help to explain the lower citation impact of medium and short term researchers’ final articles for all countries. This pattern cannot be checked for long term researchers because many of their careers may be continuing.”



  • pre-publication peer review (ROUND 2)
    Decision Letter
    2021/02/20

    20-Feb-2021

    Dear Dr. Thelwall:

    Your manuscript QSS-2020-0082.R1 entitled "Domestic researchers with longer careers generate higher average citation impact but it does not increase over time", which you submitted to Quantitative Science Studies, has been reviewed. The comments of the reviewers are included at the bottom of this letter.

    Based on the comments of the reviewers as well as my own reading of your manuscript, my editorial decision is to invite you to prepare a second revision of your manuscript. Reviewer 1 feels that the operationalization of domestic researchers is still not convincing. Reviewer 2 is still concerned about the exclusion of researchers that have publications with a large number of co-authors. These two points are important because they may strongly influence the conclusions of your research. These points therefore need careful attention in the second revision of your manuscript.

    To revise your manuscript, log into https://mc.manuscriptcentral.com/qss and enter your Author Center, where you will find your manuscript title listed under "Manuscripts with Decisions." Under "Actions," click on "Create a Revision." Your manuscript number has been appended to denote a revision.

    You may also click the below link to start the revision process (or continue the process if you have already started your revision) for your manuscript. If you use the below link you will not be required to login to ScholarOne Manuscripts.

    PLEASE NOTE: This is a two-step process. After clicking on the link, you will be directed to a webpage to confirm.

    https://mc.manuscriptcentral.com/qss?URL_MASK=0a0d27cd14ad4a2facb6e7d8cda904f0

    You will be unable to make your revisions on the originally submitted version of the manuscript. Instead, revise your manuscript using a word processing program and save it on your computer. Please also highlight the changes to your manuscript within the document by using the track changes mode in MS Word or by using bold or colored text.

    Once the revised manuscript is prepared, you can upload it and submit it through your Author Center.

    When submitting your revised manuscript, you will be able to respond to the comments made by the reviewers in the space provided. You can use this space to document any changes you make to the original manuscript. In order to expedite the processing of the revised manuscript, please be as specific as possible in your response to the reviewers.

    IMPORTANT: Your original files are available to you when you upload your revised manuscript. Please delete any redundant files before completing the submission.

    If possible, please try to submit your revised manuscript by 20-Jun-2021. Let me know if you need more time to revise your work.

    Once again, thank you for submitting your manuscript to Quantitative Science Studies and I look forward to receiving your revision.

    Best wishes,
    Dr. Ludo Waltman
    Editor, Quantitative Science Studies
    qss@issi-society.org

    Reviewers' Comments to Author:

    Reviewer: 1

    Comments to the Author
    The authors have revised the manuscript and resolved most of my concerns. However, I am not convinced by the authors' answer to my first major concern regarding the operationalization of domestic researchers. The comparative design, which is the novel part of this study, depends on the validity of the units of comparison. I do not believe that the authors have convincingly shown that their operationalization attains validity, since most of my concerns addressed in my comment still apply. Due to the importance of the operationalization of domestic researchers, I believe that the authors should try to show (by some test or by some descriptive statistics) how well their operationalization captures their idea of domestic researchers that is supposed to enable cross-country comparisons.

    Reviewer: 2

    Comments to the Author
    The authors have made relevant changes that make the text more readable and understandable than the first version. I thank the authors for these efforts.
    However, despite these improvements, there are still important limitations that need to be considered should the manuscript be accepted for publication.

    Major comment.

    Exclusion of highly collaborative researchers. Unfortunately, the reasons given to exclude researchers with ten or more co-authors in one publication are not very convincing. I don't see why it is an analytical advantage to exclude researchers "in large co-authorship lists" or those belonging to "consortia". It is also unclear what the role of this exclusion is in the assessment of "individual contributions". As a matter of fact, the only time the authors mention "individual contributions" is precisely to justify this exclusion, although the role of individual contributions is not discussed anywhere else in the manuscript.
    The exclusion of this set may have severe fundamental effects on the overall conclusions of the manuscript. Essentially, by excluding these authors, the most successful set of researchers (e.g. the more collaborative ones, the more international ones, the ones probably acquiring more funding, etc.) is excluded. If one were to argue that the increase in a researcher's citation impact is probably related to the acquisition of funding, resources and collaboration, then the manuscript could also be interpreted as showing that those domestic researchers who do not secure large collaboration networks will decrease in their impact.
    I think that leaving this issue open calls into question the main conclusions of the manuscript. At a minimum, the potential effect of the exclusion of these collaborative authors should be tested (e.g. by running a test on a random sample including/excluding these collaborative authors and assessing the differences; or simply including them in the analysis to test their effect).

    Other minor issues

    • A domestic researcher is defined as "trained by a country and remains focused on that country". I think the "training" and "focused" ideas are too ambitious for the approach applied in the manuscript. I would suggest rephrasing this to refer to these researchers as those who are "affiliated to the same country in their chronologically first and last publications". Or at least a similar wording.

    • In Table 1 what are "reference researchers"? It is not totally clear from the descriptions in the text.

    • Figures 1 to 5 present different analyses. One wonders why the same analysis is not used for all four samples (long term, medium term, short term, and single paper authors). For example, the MNLCS Diff analysis is not presented for the longer-term set. Overall, it would be nice to provide the same analyses for all the sets, and to clarify the sets in the methodological section.

    • On page 2, what is meant by "unfortunate limitation of keeping these people is that mid-career citation patterns might be due to periods spent abroad." It is also not clear what is meant by "permanent changes in infrastructure quality". What is meant here?

    • On page 8 it is explained what is done with multi-affiliated authors in a paper. As I understand it, multi-affiliated authors are assigned to the first affiliation to which they are linked in the byline of the paper. Am I correct? If I am correct, this could introduce some noise in the analysis, since in those collaborative papers the multi-affiliated author's links may depend on the order of affiliations introduced by the co-authors (e.g. a researcher is affiliated with A and B, with A being the main affiliation. In a paper with a first co-author from B, the byline would very likely be B, A, with the multi-affiliated author being first affiliated to B). Perhaps a simpler solution: why not simply include all the multi-affiliated researchers in the study as they appear in the publications?
    • Page 9, there is a repetition with “This procedure was used…”
    • Page 9, it is still not totally clear how the MNLCS values are calculated for the set of researchers. If my understanding is correct, this is what the authors have done: 1. Calculate NLCS values for all the publications in the set. 2. Average the NLCS values for each researcher-year combination (what is now mentioned as "modified NLCS values"). 3. Average the individual mean values of all researchers (average of all the “modified NLCS values”, with the exception of single-article researchers for whom the modified NLCS coincides with the NLCS), resulting in the final MNLCS. It would be appreciated if the authors could clarify this aspect in the methodological description.
    • In the conclusions section, some of the conclusions sound rough, considering both the limitations of the study and the results presented. For example, the conclusion that “citation impact does not increase during careers” sounds very rough, since, considering all the restrictions and limitations introduced in the study, at best it can be said that in the USA the national normalized impact has decreased over time, which is also reflected in the individual career impact of the researchers from this country. Moreover, the impact of researchers from China has increased, probably because of the overall increase of the national impact in the country (black line in Figure 1). In other words, the two main countries in the study present rather different patterns, and the decrease in the impact of the US may be related to the increase of China’s overall impact, particularly considering that citations are normalized (meaning that a net increase of citations may still be relatively low if other actors in the system increase even more in their net impact). For the other countries, the patterns are very different and variable, without a clear overall pattern (some increase, some decrease, some are stable).
    • Another conclusion that could be challenged is that “last articles have lower citation impact than first articles”. This does not seem to be the case in China, Russia, Spain, or Italy (as per Figure 5). And if the national reference is considered, one could argue that in France, Japan and India, researchers have a higher impact than their national benchmark. Is this correct? (If not, please consider explaining the results for these countries better.)

    Author Response
    2021/01/07

    *Thank you very much to the reviewers for reading the paper and for these very helpful comments. Please see below for the changes made, marked with *.

    Reviewer 1:
    The study examines the relation between publication career length and average citation impact of researchers at the country level. The topic of the study fits with the aims and scope of QSS. The study is well written and the methods seem overall proper. With the comparative focus I believe the study will make a relevant contribution. However, I have some concerns about the operationalization of domestic authors, the threshold for exclusion based on co-authorships, and the use of the theoretical framework, which make me recommend major revision. I will outline these concerns below and hopefully my comments and recommendations can help the authors to improve the manuscript.
    Major concerns:
    1. My first major concern is related to the use and operationalization of domestic researchers (Page 2, line 58-82). The authors argue that the choice of using domestic authors is to avoid biases in the average citation impact due to variation between countries (e.g., due to national research infrastructure quality). A researcher is defined as domestic if the first Scopus-indexed and last Scopus-indexed publications of the researcher have the same country affiliation. All researchers that change country affiliation in the address field in a Scopus-indexed publication after their first one and do not change back to this affiliation in the last Scopus-indexed publication are excluded from the study. I’m not sure I understand how this operationalization of domestic researchers counters the variation between countries suggested by the authors. A researcher can have published her first 50 publications with the same country affiliation over 20 years and the 51st with a new country affiliation. This researcher would be excluded. Another researcher could have published her first and last publications in the same country, and the other 50 publications in one or several other countries over her career. This researcher would be included. There is probably a large group of doctoral students that complete their degrees abroad (and potentially publish their first paper abroad) and then return to their home countries. These will be excluded. For me this operationalization seems somewhat arbitrary. It does, for example, not clearly demarcate internationally mobile researchers (i.e., researchers with more than two different country affiliations in the address field) from researchers that publish with the same affiliation during their whole career. It is therefore difficult to know to which degree the observed relationships between publication career age and average citation impact are actually a consequence of publication career age or a consequence of the variation between countries as described by the authors.

    An alternative approach could be to, e.g., categorize a researcher belonging to the country to which the largest share of that researchers’ publications is affiliated. This is not a perfect solution since it would be arbitrary for researchers with high mobility. However, if the purpose of operationalizing domestic researchers is to avoid biases due to country variation, the “largest share” approach would at least assign each researcher to the country where that researcher has published most publications during the career and as such potentially decreasing the between country variation bias to a larger extent than the approach used by the authors.
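    As a rough sketch of this suggested alternative (illustrative only, not part of the manuscript), the "largest share" assignment could look like:

```python
# Sketch of the reviewer's suggested "largest share" assignment: assign each
# researcher to the country appearing most often as the first listed
# affiliation across all of their publications (hypothetical data structure).

from collections import Counter

def largest_share_country(publications):
    """`publications`: list of (year, first_listed_country) pairs."""
    if not publications:
        return None
    counts = Counter(country for _, country in publications)
    country, _ = counts.most_common(1)[0]
    return country
```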

    My specific recommendation for the authors is to either: (1) provide a theoretical definition of a domestic researcher (the authors do not provide a theoretical definition of this concept in the manuscript - currently the theoretical definition and the operationalization are one and the same), and then conduct a test of the validity of this operationalization that provides a clear picture of how well the operationalization captures the theoretical definition. Such a test could, for example, be conducted by comparing the results of the used operationalization with the results of an operationalization that more accurately delineates domestic researchers, e.g., researchers that have all their publications in one single country, or 90% of their publications in a single country, or 75% and first and last publications in the same country. It could also be clarifying to examine how the results change when varying the requirement of which and how many publications researchers should have from one country to be categorized as domestic, e.g., the difference between having 100, 90, 80, 70, and 60%, etcetera, of their publications in a single country; or (2) provide a theoretical definition of a domestic researcher, and a convincing argument in the manuscript for how their current operationalization captures this theoretical definition, where my concerns about the operationalization are met. Maybe the authors could provide some descriptive statistics that could strengthen such an argument.
    ****The following has been added to define domestic researchers in a way that matches the operationalisation and to justify this, “A domestic researcher is conceived here as someone that has been trained by a country and remains focused on that country, even if they spend part of their time abroad. It would also be possible to restrict the focus to domestic-only researchers that never work abroad but this may tend to exclude the best funded researchers that might move abroad temporarily for collaborative projects, or the best overall researchers that attract international sabbaticals or job offers. The unfortunate limitation of keeping these people is that mid-career citation patterns might be due to periods spent abroad. Although from a pragmatic perspective, it would be more useful to study the career trajectories of all researchers working in a country, international moves may be associated with permanent changes in infrastructure quality or may be a mark of success.”
    And after this, the operationalisation is matched with the definition as follows, “The affiliation of the first publication is assumed to be usually the country in which the researcher completed their PhD, since the publication might originate from the PhD. The affiliation of the last publication is assumed to be the country where the researcher completed their career. This is a simplification because the researcher might move abroad afterwards but stop researching or publish different types of document.”

    1. Researchers with publications with ten or more authors are excluded from the study (Page 6, line 250-252). This seems to be an arbitrary threshold. Should not this type of threshold/delineation be based on the actual research fields and an understanding of how scientific communication is practiced in the fields and the meaning of co-authorships? For example, it seems reasonable to exclude authors within high energy physics since some of the basic assumptions of bibliometrics do not seem to align with how scientific communication works in this field (see e.g., Cronin, 2001; Kretschmer & Rousseau, 2001). But why, for example, exclude an author in computer science that has published one article with more than nine other authors? Previous research suggests that collaboration is positively correlated with impact (see e.g., Sonnenwald, 2007). I believe there are two issues here: (1) This threshold does not account for the more common effects of collaboration on citation impact, e.g., international collaboration, the number of co-authors; and (2) Is there not a risk that this threshold imposes an arbitrary and unnecessary bias on the analyses which is difficult to assess in terms of how it may affect the results?

    My recommendation is that the authors strengthen this choice with a reference and an argument for the reasonableness of using this threshold. Or maybe, as an alternative approach, exclude researchers in fields of research that do not align with the traditional definition of scientific authorship and scientific communication and utilize a method that allows controlling for the number of co-authors (or alternatively, apply a fractionalization method to the bibliometric indicators to account for collaboration) in the reduced dataset where problematic research fields are excluded.
    ****We prefer to keep all fields because there are large publishing consortia in quite a lot of different fields, including physics/astronomy, health, biology and psychiatry. The following justification has been added: “It is difficult to evaluate the collaborations of researchers in large co-authorship lists partly because they may be from consortia with publishing agreements (Thelwall, 2020). The ten-author threshold is relatively arbitrary, designed to exclude highly co-authoring researchers without excluding too many others. Whilst the average number of co-authors varies substantially between countries and fields (Thelwall & Maflahi, 2020), the purpose of the threshold is to eliminate the possibility that the results are affected by highly collaborative authors that may have contributed little to their publications. The threshold of ten was used in the similar prior study of the USA (Thelwall & Fairclough, 2020), and accounts for less than 3% of articles in all broad fields (Thelwall & Maflahi, 2020). The results will therefore not be relevant for researchers that routinely collaborate more, such as in high-value large international health-related studies.”.

    1. My third major concern is related to the authors' understanding and use of the theoretical framework suggested by Laudel and Gläser (2008) and the authors' main finding that “for the twelve countries analysed, the linear model of career trajectories (e.g., apprentice; colleague; master; elite: Laudel & Gläser, 2008) does not fit the pattern for citation impact of the careers of academics” (Page 527-529). First, I am not sure that the model is linear in the sense the authors suggest. The actual model in Laudel and Gläser is quite complex, with three different but parallel careers that interact as the career progresses. For example – given the definition of the stages – while masters usually co-author with doctoral students and their average impact might go down (see e.g., Larivière 2012), it may be reasonable to assume that elites have higher average citation impact due to the definition of this stage. However, elites also co-author a lot with doctoral students and may have more administrative duties.

    To my understanding the stages are correlated with academic age, but not dependent on academic age. The criterion for being categorized as belonging to a particular stage is qualitative (e.g., in the case of the elite, one who has attained the experience and skills of the previous stages and also shapes the knowledge production in her field). So, at a particular academic age we could have colleagues, masters and elites. From my point of view it does not seem like the model predicts that academic age is positively correlated with average citation impact (or some dimension of research quality) of individual researchers at the country level. Second, can averages at the country level discern the hypothesized linear progression? Suppose that researchers that have attained the career stage of elite have higher average citation impact than researchers on lower stages and that the average citation impact is higher for each stage. If it were the case that those who have attained the elite stage have higher citation impact on average and each stage below has lower citation impact than the stage above, is it not possible that researchers with lower average citation impacts “even out” the average citation impact for the whole group?

    My recommendation is that the authors elaborate the theoretical model (i.e., the linear model) and their understanding of it further, so that it makes sense to use it as a theoretical framework to generate hypotheses from/compare the results with, or give the model a less prominent role in the study and change/reformulate the main finding (Page 527-529).
    ****The model has been removed from the findings to align with this.

    Minor concerns:
    1. The authors state that only “Scopus source type Journal and document type Article” (Page 6, line 235-236) are included. Should not “proceedings” also be included since in some fields, e.g., computer science, this document type is the main communication channel (see e.g., Lee, 2019)?
    ****The following has been added to justify this exclusion, “Whilst it would be possible, in theory, to add these document types for fields in which they are important, there is no public list of such fields, alternative document types might be relevant for some specialties but not others in a field, and mixing document types would complicate the interpretation of the results.”

    1. This concern is related to section 5.3 Citation impact does not increase during careers (Line 490-494). While this is an interesting discussion and the authors provide several reasons for the observed non-increase over time, I believe that a fifth reason that the authors briefly mention in the conclusions, i.e., “mentoring junior researchers” (Page 18, line 556) could be further elaborated in this section. Senior researchers often have doctoral students and previous research suggests that publications with doctoral students as authors have lower citation impact (see e.g., Larivière 2012). It seems reasonable to mention this as a potential reason since co-authoring with doctoral students might lower the average citation impact for senior researchers.
      ****This extra reason has been added, “Longer term researchers might co-author an increasing fraction of their papers with doctoral students, achieving lower citation impacts with them. In some fields (excluding science and engineering) in Quebec, one study suggests such papers have lower citation impacts (Larivière, 2012).”

    2. The citation impact indicator could be complemented with a more robust measure of impact, e.g., a percentile-based indicator. The aim of the study “is to compare the career-long international citation impact trajectories of domestic researchers, separating them by career length” (Page 58-59). To test yet another citation impact indicator (e.g., top 10%) as a measure of impact trajectories would most certainly make the results even more interesting.
      ****We prefer not to do this because the current indicator is more precise than a threshold indicator and seems likely to complicate the interpretation of the overall pattern of results by introducing a similar set of results on lower quality data (thresholds).

    3. How are double affiliations handled in the operationalization of domestic researchers?
      ****The following has been added to clarify this, “Affiliations after the first for each article were ignored since multiply affiliated researchers seem to record their main affiliation first.”

    References
    Cronin, B. (2001). Hyperauthorship: A postmodern perversion or evidence of a structural shift in scholarly communication practices?. Journal of the American Society for Information Science and Technology, 52(7), 558-569.
    Kretschmer, H., & Rousseau, R. (2001). Author inflation leads to a breakdown of Lotka's law. Journal of the American Society for Information Science and Technology, 52(8), 610-614.
    Larivière, V. (2012). On the shoulders of students? The contribution of PhD students to the advancement of knowledge. Scientometrics, 90(2), 463-481.
    Lee, D. H. (2019). Predicting the research performance of early career scientists. Scientometrics, 121(3), 1481-1504.
    Sonnenwald, D. H. (2007). Scientific collaboration. ARIST, 41(1), 643-681.
    Thelwall, M., & Fairclough, R. (2020). All downhill from the PhD? The typical impact trajectory of US academic careers. Quantitative Science Studies, 1(3), 1334–1348.

    Reviewer: 2

    Comments to the Author
    The manuscript submitted deals with a very relevant topic: the study of academic careers and the scientific workforce. This type of study, related to the careers, mobility and collaboration of individual researchers, is very important for increasing our understanding of how research activities are carried out and how science can be evaluated. This type of novel approach should be welcome in the realm of quantitative science studies.
    However, I have spent several hours reading the manuscript and I still struggle to understand exactly what the authors tried to do. In my view the manuscript cannot be accepted for publication unless three main forms of “unclarities” are resolved, which would require a substantial rewriting of the whole manuscript.

    I present these three main forms of unclarities below:

    1. Unclarities regarding the main questions and justification of the study.
      The manuscript includes a relatively interesting background section describing some of the most relevant topics related to the analysis and study of academic careers from a large-scale perspective, including aspects related to the academic age of researchers, their mobility, their collaboration and their productivity and impact. However, none of these aspects are substantially discussed in the results, and some of them are even excluded from the study (e.g. internationally mobile researchers are partly excluded from the analysis).
      The main aim of the study is “to compare the career-long international citation impact trajectories of domestic researchers, separating them by career length”. The statement of the aim already shows some unclarities: what are “career-long international citation impact trajectories”? If the focus is on “domestic researchers”, what then is the role of “international citation impact trajectories”? Moreover, what is the justification and relevance of just focusing on domestic researchers (operationalized later as researchers who published at least two publications at two different points in time, affiliated with the same country)? The justification that “the average citation impact of nations varies widely, and part of the variation is presumably due to differing national research infrastructure quality” is not very strong, particularly in a globalized context where the impact of nations is determined by mobility flows and collaboration. The overall motivation of the manuscript is therefore not properly justified.
      ****The word “international” should not be there and has been deleted, so the phrase is now “career-long citation impact trajectories”. As mentioned above for the first reviewer, the text has been rewritten to clarify that the focus on domestic researchers is a pragmatic one rather than theory-driven.

    2. Unclarities in the methods and presentation of the results
      The methodological description also exhibits some important unclarities, and some of the choices in the experimental design are fundamental enough to challenge the main results and interpretations of the whole study. For example, the exclusion of “researchers with at least one journal article with ten or more authors” is very arbitrary and may have a strong impact on the final results, since many long-term career researchers may also develop more extensive collaboration networks that can result in highly cited works. By excluding these researchers, the authors may be excluding a very successful cohort of researchers, and the fact that these scholars are excluded may help explain the main results of the manuscript.
      ****This has been added as a limitation, “The career citation impact trajectories of scientists publishing at least one article with more than ten authors might differ from the set analysed here.” The ten-author threshold has been justified as follows, “It is difficult to evaluate the collaborations of researchers in large co-authorship lists partly because they may be from consortia with publishing agreements (Thelwall, 2020). The ten author threshold is relatively arbitrary, designed to exclude highly co-authoring researchers without excluding too many others. Whilst the average number of co-authors varies substantially between countries and fields (Thelwall & Maflahi, 2020), the purpose of the threshold is to eliminate the possibility that the results are affected by highly collaborative authors that may have contributed little to their publications. The threshold of ten was used in the similar prior study of the USA (Thelwall & Fairclough, 2020), and accounts for less than 3% of articles in all broad fields (Thelwall & Maflahi, 2020). The results will therefore not be relevant for researchers that routinely collaborate more, such as in high-value large international health-related studies.”
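
      To illustrate the exclusion rule quoted above, here is a minimal Python sketch (not from the manuscript; the data layout, with one list of per-article author counts per researcher, is an assumption) that drops any researcher with at least one journal article having ten or more authors.

        def filter_hyperauthored(researcher_author_counts, max_authors=9):
            """Keep researchers whose every article has at most max_authors authors.

            researcher_author_counts: dict mapping a researcher ID to the list of
            author counts of their journal articles (a hypothetical layout).
            """
            return {
                researcher: counts
                for researcher, counts in researcher_author_counts.items()
                if all(n <= max_authors for n in counts)
            }

        # Example: researcher "B" is excluded because one article has 12 authors.
        sample = {"A": [1, 3, 4], "B": [2, 12], "C": [5]}
        print(filter_hyperauthored(sample))  # {'A': [1, 3, 4], 'C': [5]}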

    My understanding of the calculation of the MNLCS is that it follows the same approach as the MNCS indicator but calculates ln(1+c) instead of using the non-transformed citations. What remains unclear is how values at the publication level are later aggregated at the individual level, and how the different normalizations are performed (e.g. by country, by year, by career age, etc.; all of these should be clearly explained in the manuscript). Here the methodology becomes very complex. It is unclear to me whether the values are aggregated at the individual level (are they?) or whether they are calculated for sets of publications. To give an example, in Figure 1, are the MNLCS values plotted there the aggregation of all the individual values of all the researchers identified for each country? Is the mean (or median?) value across all the researchers what is represented by the dots? How are the national reference values established? (I assume it is the mean of all the individual MNLCS values of all the researchers identified.)
    ****A new methods section has been added, “Citation impact of a set of researchers (MNLCS): For a set of researchers, the MNLCS was calculated as above except that if a researcher had published multiple articles in the same year, then the average NLCS of those articles was used instead of averaging them separately. This prevents the results from being dominated by prolific researchers. The MNLCS for any set of researchers was then calculated as the arithmetic mean of the modified NLCS values.”
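
      As a concrete reading of the aggregation quoted above, the following Python sketch (not the authors' code; the record layout and the use of field-and-year reference sets of ln(1+c) values for the normalisation are assumptions) first averages the NLCS of each researcher's articles within a year and then takes the arithmetic mean across researchers.

        import math
        from collections import defaultdict
        from statistics import mean

        def mnlcs_per_year(articles, reference_articles):
            """Return {year: MNLCS} for a set of researchers.

            Each article is a (researcher, year, field, citations) tuple (assumed
            layout); reference_articles must cover every (field, year) that appears
            in articles. NLCS = ln(1 + c) divided by the mean ln(1 + c) of the
            field-and-year reference set. Articles by the same researcher in the
            same year are averaged first, so prolific researchers do not dominate.
            """
            # Mean log-citation value for each (field, year) reference set.
            ref = defaultdict(list)
            for _, year, field, c in reference_articles:
                ref[(field, year)].append(math.log(1 + c))
            ref_mean = {key: mean(vals) for key, vals in ref.items()}

            # Collect NLCS values per researcher per year.
            per_researcher_year = defaultdict(list)
            for researcher, year, field, c in articles:
                nlcs = math.log(1 + c) / ref_mean[(field, year)]
                per_researcher_year[(researcher, year)].append(nlcs)

            # Average within each researcher-year, then take the arithmetic mean
            # of those averages for every year.
            by_year = defaultdict(list)
            for (researcher, year), values in per_researcher_year.items():
                by_year[year].append(mean(values))
            return {year: mean(values) for year, values in by_year.items()}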

    In the first line of the Results section, “four groups” are mentioned. Which groups are these?
    ****The following has been added, “: long-term, 11-year, 6-year and single article”.

    It is also not totally clear what the problem is with the end points 2001 and 2016. Why do they need to be ignored?
    ****This sentence has been deleted.

    I don’t understand what is meant by “the sample from 2001 is comprehensive and the 2016 sample is likely to be more comprehensive than average”. What samples are you talking about?
    ****This section has been extended as follows, “The first and last dates (2001, 2016) should be interpreted cautiously since they are based on larger samples. This is because all researchers qualifying as long-term have at least one journal article in 2001 and all have at least one publication 2016-2019, so the sample from 2001 is comprehensive (i.e., including all qualifying researchers, since they must publish in 2001 to qualify) and the 2016 sample is likely to be more comprehensive than average (because every researcher must have a publication in 2016-2019 but not necessarily any publications 2002-2015).”
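
      For illustration, the qualification rule described above can be written as a simple predicate. This is a hypothetical helper, not taken from the manuscript, and it only checks the two conditions mentioned (an article in 2001 and at least one in 2016-2019); the full definition in the paper may include further conditions.

        def qualifies_as_long_term(publication_years):
            """publication_years: years in which the researcher has a Scopus journal article."""
            years = set(publication_years)
            return 2001 in years and any(2016 <= y <= 2019 for y in years)

        # Example: articles in 2001 and 2017, so the 2001 sample includes this researcher.
        print(qualifies_as_long_term([2001, 2005, 2017]))  # True
        print(qualifies_as_long_term([2003, 2017]))        # False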

    Moreover, the expression “the initial increase and final decrease in MNLCS may be due to changes in the nature of the sample rather than changes over time in the average citation impact of long-term researchers” essentially challenges the whole validity of the study.
    ****This problem means that the first and last points need to be interpreted cautiously, so the text now discusses its possible influence.

    There are many more unclarities (e.g. how the “first” and “second” publications of researchers are established, what the exact populations being studied are, the numbers of researchers, years of activity, etc. included in each analysis, and so on). I cannot list all of them. My overall recommendation would be to clarify the methodological section much more, provide descriptive values and, whenever possible, add some graphic support, so that the reader can easily understand what is being done at each step. Also use much more consistent language (e.g. “long-term”, “Scopus publication age” and “international publishing careers” are all expressions without proper definitions, and sometimes even with contradictory meanings – you look at domestic researchers but talk about international publishing careers; what do you mean by this?).
    ****The following has been added, “For researchers publishing multiple articles in the same year with different national affiliations, the first article published was used for their first affiliation and the last article published for their last affiliation. Order of publication within a year was judged by Scopus article ID.” Also, to define groups, “These rules were used to identify long term researchers (16+ year career publishing in Scopus), medium term researchers (11 year career publishing in Scopus) and short term researchers (6 year career publishing in Scopus).” Also, “Scopus journal articles 1996-2019 were used as the data source for this study because of the wide multidisciplinary and international coverage of Scopus. Its coverage expanded in 1996, so earlier data is not comparable. All data presented in this paper is therefore within the scope of this database. For example, a researcher that had one journal article published in Scopus but many publications not indexed by Scopus would be treated as having written one Scopus journal article and nothing else.” And, to give sample size statistics, “The sample sizes and exact values of all data points in the graph are available in the online supplement (https://doi.org/10.6084/m9.figshare.13537178).” Other small changes have also been made.
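
      To make these rules easier to follow, here is a minimal sketch (hypothetical record layout and simplified career-length boundaries; the manuscript's exact definitions may differ) that orders one researcher's articles by year and then by Scopus article ID, reads off the first and last national affiliations, and assigns a career-length group.

        def career_summary(articles):
            """articles: list of dicts with 'year', 'scopus_id' and 'country' keys
            for one researcher's Scopus journal articles (assumed layout)."""
            ordered = sorted(articles, key=lambda a: (a["year"], a["scopus_id"]))
            first_country = ordered[0]["country"]
            last_country = ordered[-1]["country"]
            career_length = ordered[-1]["year"] - ordered[0]["year"] + 1
            if career_length >= 16:
                group = "long-term"
            elif career_length == 11:
                group = "medium-term"
            elif career_length == 6:
                group = "short-term"
            else:
                group = "other"
            # Domestic researchers have the same first and last national affiliation.
            is_domestic = first_country == last_country
            return first_country, last_country, group, is_domestic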

    3. Unclarities in the main results and their translation into discussions/conclusions.
      As described above, it is sometimes hard to understand what the authors tried to do at each stage of the manuscript. The Discussion and Conclusion sections therefore also suffer from these limitations. For example, the fact that international collaboration, team size and mobility have not been properly considered in the study challenges its main conclusions. In general, your conclusions only refer to a subset of researchers (which is hard for the reader to assess), so one wonders how generalizable your results are.
      Based on these issues, one wonders how valid conclusions such as “academics should not expect to increase the average citation impact of their work with age” are, or how valid the conclusion is that the impact of researchers will “decline towards the end of their careers (or after around 10-16 years for ongoing researchers)”. In my view, the study contains too many limitations to be able to make such concluding remarks and derived recommendations.
      ****The following has been added to the paper regarding uncontrolled variables, “This procedure was used for the articles of domestic researchers publishing a single article, and was calculated separately for each year and country.”
      ****The conclusions have been modified to add “domestic” before “researcher” in several places and have been rephrased to be more cautious about recommendations following this change. They now make more appropriately delimited claims rather than the wider claims of the previous version.



    Cite this author response
  • pre-publication peer review (ROUND 1)
    Decision Letter
    2020/12/30

    30-Dec-2020

    Dear Dr. Thelwall:

    Your manuscript QSS-2020-0082 entitled "Domestic researchers with longer careers generate higher average citation impact but it does not increase over time", which you submitted to Quantitative Science Studies, has been reviewed. There are two reviewers. The comments of reviewer 2 are included at the bottom of this letter. The comments of reviewer 1 can be found in the attached PDF file.

    Reviewer 1 is fairly positive about your work. This reviewer recommends a major revision. Reviewer 2 is critical and recommends rejection. Both reviewers indicate that your work requires significant improvements. Based on the comments of the reviewers, I would like to invite you to prepare a revised version of your manuscript. Given the critical comments, especially those of reviewer 2, a major revision is necessary. In the revision, both the design and the presentation of your research need to be carefully reconsidered.

    To revise your manuscript, log into https://mc.manuscriptcentral.com/qss and enter your Author Center, where you will find your manuscript title listed under "Manuscripts with Decisions." Under "Actions," click on "Create a Revision." Your manuscript number has been appended to denote a revision.

    You may also click the link below to start the revision process (or continue the process if you have already started your revision) for your manuscript. If you use the link below you will not be required to log in to ScholarOne Manuscripts.

    PLEASE NOTE: This is a two-step process. After clicking on the link, you will be directed to a webpage to confirm.

    https://mc.manuscriptcentral.com/qss?URL_MASK=9fdd0e7fd0534718b23dde83621f431c

    You will be unable to make your revisions on the originally submitted version of the manuscript. Instead, revise your manuscript using a word processing program and save it on your computer. Please also highlight the changes to your manuscript within the document by using the track changes mode in MS Word or by using bold or colored text.

    Once the revised manuscript is prepared, you can upload it and submit it through your Author Center.

    When submitting your revised manuscript, you will be able to respond to the comments made by the reviewers in the space provided. You can use this space to document any changes you make to the original manuscript. In order to expedite the processing of the revised manuscript, please be as specific as possible in your response to the reviewers.

    IMPORTANT: Your original files are available to you when you upload your revised manuscript. Please delete any redundant files before completing the submission.

    If possible, please try to submit your revised manuscript by 29-Apr-2021. Let me know if you need more time to revise your work.

    Once again, thank you for submitting your manuscript to Quantitative Science Studies and I look forward to receiving your revision.

    Best wishes,
    Dr. Ludo Waltman
    Editor, Quantitative Science Studies
    qss@issi-society.org

    Reviewers' Comments to Author:

    Reviewer: 1

    Comments to the Author
    See the attached PDF file.

    Reviewer: 2

    Comments to the Author
    The manuscript submitted deals with a very relevant topic: the study of academic careers and the scientific workforce. Studies of this type, concerned with the careers, mobility and collaboration of individual researchers, are very important for increasing our understanding of how research activities are carried out and how science can be evaluated. Such novel approaches should be welcomed in the realm of quantitative science studies.
    However, I have spent several hours reading the manuscript and I still struggle to understand exactly what the authors tried to do. In my view the manuscript cannot be accepted for publication unless three main forms of “unclarities” are resolved, which would require a substantial rewriting of the whole manuscript.

    I present these three main forms of unclarities below:

    1. Unclarities regarding the main questions and justification of the study.
      The manuscript includes a relatively interesting background section describing some of the most relevant topics in the large-scale analysis and study of academic careers, including aspects related to the academic age of researchers, their mobility, their collaboration and their productivity and impact. However, none of these aspects are substantially discussed in the results, and some of them are even excluded from the study (e.g. internationally mobile researchers are partly excluded from the analysis).
      The main aim of the study is “to compare the career-long international citation impact trajectories of domestic researchers, separating them by career length”. The statement of the aim already shows some unclarities: what are “career-long international citation impact trajectories”? If the focus is on “domestic researchers”, what then is the role of “international citation impact trajectories”? Moreover, what is the justification and relevance of focusing only on domestic researchers (operationalized later as researchers who published at least two publications at two different points in time, affiliated with the same country)? The justification that “the average citation impact of nations varies widely, and part of the variation is presumably due to differing national research infrastructure quality” is not very strong, particularly in a globalized context where the impact of nations is determined by mobility flows and collaboration. The overall motivation of the manuscript is therefore not properly justified.

    2. Unclarities in the methods and presentation of the results
      The methodological description also exhibits some important unclarities, and some of the choices in the experimental design are fundamental enough to challenge the main results and interpretations of the whole study. For example, the exclusion of “researchers with at least one journal article with ten or more authors” is very arbitrary and may have a strong impact on the final results, since many long-term career researchers may also develop more extensive collaboration networks that can result in highly cited works. By excluding these researchers, the authors may be excluding a very successful cohort of researchers, and the fact that these scholars are excluded may help explain the main results of the manuscript.
      My understanding of the calculation of the MNLCS is that it follows the same approach as the MNCS indicator but calculates ln(1+c) instead of using the non-transformed citations. What remains unclear is how values at the publication level are later aggregated at the individual level, and how the different normalizations are performed (e.g. by country, by year, by career age, etc.; all of these should be clearly explained in the manuscript). Here the methodology becomes very complex. It is unclear to me whether the values are aggregated at the individual level (are they?) or whether they are calculated for sets of publications. To give an example, in Figure 1, are the MNLCS values plotted there the aggregation of all the individual values of all the researchers identified for each country? Is the mean (or median?) value across all the researchers what is represented by the dots? How are the national reference values established? (I assume it is the mean of all the individual MNLCS values of all the researchers identified.)
      In the first line of the Results section, “four groups” are mentioned. Which groups are these? It is also not totally clear what the problem is with the end points 2001 and 2016. Why do they need to be ignored? I don’t understand what is meant by “the sample from 2001 is comprehensive and the 2016 sample is likely to be more comprehensive than average”. What samples are you talking about? Moreover, the expression “the initial increase and final decrease in MNLCS may be due to changes in the nature of the sample rather than changes over time in the average citation impact of long-term researchers” essentially challenges the whole validity of the study.
      There are many more unclarities (e.g. how the “first” and “second” publications of researchers are established, what the exact populations being studied are, the numbers of researchers, years of activity, etc. included in each analysis, and so on). I cannot list all of them. My overall recommendation would be to clarify the methodological section much more, provide descriptive values and, whenever possible, add some graphic support, so that the reader can easily understand what is being done at each step. Also use much more consistent language (e.g. “long-term”, “Scopus publication age” and “international publishing careers” are all expressions without proper definitions, and sometimes even with contradictory meanings – you look at domestic researchers but talk about international publishing careers; what do you mean by this?).

    3. Unclarities in the main results and their translation into discussions/conclusions.
      As described above, it is sometimes hard to understand what the authors tried to do at each stage of the manuscript. The Discussion and Conclusion sections therefore also suffer from these limitations. For example, the fact that international collaboration, team size and mobility have not been properly considered in the study challenges its main conclusions. In general, your conclusions only refer to a subset of researchers (which is hard for the reader to assess), so one wonders how generalizable your results are.
      Based on these issues, one wonders how valid conclusions such as “academics should not expect to increase the average citation impact of their work with age” are, or how valid the conclusion is that the impact of researchers will “decline towards the end of their careers (or after around 10-16 years for ongoing researchers)”. In my view, the study contains too many limitations to be able to make such concluding remarks and derived recommendations.

    Decision letter by
    Cite this decision letter
    Reviewer report
    2020/12/29

    The manuscript submitted deals with a very relevant topic: the study of academic careers and the scientific workforce. Studies of this type, concerned with the careers, mobility and collaboration of individual researchers, are very important for increasing our understanding of how research activities are carried out and how science can be evaluated. Such novel approaches should be welcomed in the realm of quantitative science studies.
    However, I have spent several hours reading the manuscript and I still struggle to understand exactly what the authors tried to do. In my view the manuscript cannot be accepted for publication unless three main forms of “unclarities” are resolved, which would require a substantial rewriting of the whole manuscript.

    I present these three main forms of unclarities below:

    1. Unclarities regarding the main questions and justification of the study.
      The manuscript includes a relatively interesting background section describing some of the most relevant topics in the large-scale analysis and study of academic careers, including aspects related to the academic age of researchers, their mobility, their collaboration and their productivity and impact. However, none of these aspects are substantially discussed in the results, and some of them are even excluded from the study (e.g. internationally mobile researchers are partly excluded from the analysis).
      The main aim of the study is “to compare the career-long international citation impact trajectories of domestic researchers, separating them by career length”. The statement of the aim already shows some unclarities: what are “career-long international citation impact trajectories”? If the focus is on “domestic researchers”, what then is the role of “international citation impact trajectories”? Moreover, what is the justification and relevance of focusing only on domestic researchers (operationalized later as researchers who published at least two publications at two different points in time, affiliated with the same country)? The justification that “the average citation impact of nations varies widely, and part of the variation is presumably due to differing national research infrastructure quality” is not very strong, particularly in a globalized context where the impact of nations is determined by mobility flows and collaboration. The overall motivation of the manuscript is therefore not properly justified.

    2. Unclarities in the methods and presentation of the results
      The methodological description also exhibits some important unclarities, and some of the choices in the experimental design are fundamental enough to challenge the main results and interpretations of the whole study. For example, the exclusion of “researchers with at least one journal article with ten or more authors” is very arbitrary and may have a strong impact on the final results, since many long-term career researchers may also develop more extensive collaboration networks that can result in highly cited works. By excluding these researchers, the authors may be excluding a very successful cohort of researchers, and the fact that these scholars are excluded may help explain the main results of the manuscript.
      My understanding of the calculation of the MNLCS is that it follows the same approach as the MNCS indicator but calculates ln(1+c) instead of using the non-transformed citations. What remains unclear is how values at the publication level are later aggregated at the individual level, and how the different normalizations are performed (e.g. by country, by year, by career age, etc.; all of these should be clearly explained in the manuscript). Here the methodology becomes very complex. It is unclear to me whether the values are aggregated at the individual level (are they?) or whether they are calculated for sets of publications. To give an example, in Figure 1, are the MNLCS values plotted there the aggregation of all the individual values of all the researchers identified for each country? Is the mean (or median?) value across all the researchers what is represented by the dots? How are the national reference values established? (I assume it is the mean of all the individual MNLCS values of all the researchers identified.)
      In the first line of the Results section, “four groups” are mentioned. Which groups are these? It is also not totally clear what the problem is with the end points 2001 and 2016. Why do they need to be ignored? I don’t understand what is meant by “the sample from 2001 is comprehensive and the 2016 sample is likely to be more comprehensive than average”. What samples are you talking about? Moreover, the expression “the initial increase and final decrease in MNLCS may be due to changes in the nature of the sample rather than changes over time in the average citation impact of long-term researchers” essentially challenges the whole validity of the study.
      There are many more unclarities (e.g. how the “first” and “second” publications of researchers are established, what the exact populations being studied are, the numbers of researchers, years of activity, etc. included in each analysis, and so on). I cannot list all of them. My overall recommendation would be to clarify the methodological section much more, provide descriptive values and, whenever possible, add some graphic support, so that the reader can easily understand what is being done at each step. Also use much more consistent language (e.g. “long-term”, “Scopus publication age” and “international publishing careers” are all expressions without proper definitions, and sometimes even with contradictory meanings – you look at domestic researchers but talk about international publishing careers; what do you mean by this?).

    3. Unclarities in the main results and their translation into discussions/conclusions.
      As described above, it is sometimes hard to understand what the authors tried to do at each stage of the manuscript. The Discussion and Conclusion sections therefore also suffer from these limitations. For example, the fact that international collaboration, team size and mobility have not been properly considered in the study challenges its main conclusions. In general, your conclusions only refer to a subset of researchers (which is hard for the reader to assess), so one wonders how generalizable your results are.
      Based on these issues, one wonders how valid conclusions such as “academics should not expect to increase the average citation impact of their work with age” are, or how valid the conclusion is that the impact of researchers will “decline towards the end of their careers (or after around 10-16 years for ongoing researchers)”. In my view, the study contains too many limitations to be able to make such concluding remarks and derived recommendations.

    Reviewed by
    Cite this review
    Reviewer report
    2020/12/23

    This reviewer report was submitted to the journal in an attached file. Its contents are not displayed directly.

    Reviewed by
    Cite this review
All peer review content displayed here is covered by a Creative Commons CC BY 4.0 license.