
Why would we want another way of evaluating scientific research? (Part II)

This is the second post in the evaluating research series from the Publons R&D team. In the first post we showed that across six scientific disciplines there is a negative correlation between cited half-life and aggregate impact factor. Our interpretation of this is that impact factor measures more of the context of a paper than its quality.

That post generated a fair bit of comment in the twitterverse. Some of the discussion (like Pep Pàmies' plot of impact factor vs. cited half-life for Nature Publishing Group (NPG) journals) really got us thinking. Here we (i) extend our analysis of aggregate impact factors and cited half-life to all disciplines covered in the 2011 JCR, and (ii) examine the correlation between individual journal impact factors and cited half-life.

I An amusing interlude

Let us begin with a fun correlation. XKCD gave us the lowdown on scientific disciplines arranged by 'purity'. As with most good humor, it turns out this classification has a basis in reality: as you can see in the figure below, there is a clear negative correlation between 'purity' and impact factor.

[Figure: impact factor vs. XKCD 'purity' ranking of scientific disciplines]


II Examining citation lifetime and impact factor by subject area

In our previous blog post, we looked at the trend between aggregate impact factor and cited half-life for six disciplines (48 subjects) and found a negative correlation between the two. Here we extend that analysis.

But first, we'll show exactly how we get the aggregate impact factor and cited half-life.

The figure below shows the distributions of journal impact factor and cited half-life within one particular JCR subject category, 'Biochemistry and Molecular Biology'. The impact factors follow a positively skewed distribution, while the cited half-lives appear to follow a normal distribution with a spike at 10 years. This spike is an artefact of the JCR data: all journals with cited half-lives greater than 10 years are reported as '>10', and we have represented these non-numerical values as the numerical value 10.

The aggregate impact factor and cited half-life are the average values of these distributions (dashed red lines in the figure).

[Figure: distributions of impact factor and cited half-life for journals in 'Biochemistry and Molecular Biology', with aggregate values shown as dashed red lines]
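For the curious, here is a minimal sketch of how the '>10' capping and the averaging can be done. It assumes a hypothetical CSV export of the JCR category data; the file name and column names are our own, not the JCR's:

```python
import pandas as pd

# Load a (hypothetical) JCR export for one subject category.
df = pd.read_csv("jcr_biochem_molbio.csv")

# The JCR reports cited half-lives above ten years as the string '>10';
# we cap these at the numerical value 10 before averaging.
df["cited_half_life"] = (
    df["cited_half_life"]
    .replace(">10", 10)
    .astype(float)
)

# The aggregate values for the category are the means of the two distributions.
aggregate_if = df["impact_factor"].mean()
aggregate_chl = df["cited_half_life"].mean()
print(f"aggregate IF = {aggregate_if:.2f}, "
      f"aggregate cited half-life = {aggregate_chl:.2f} yr")
```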

We have extracted these values for each of the 176 subject areas available in the JCR. These data are plotted below, along with the subset we reported in our last post. The negative correlation between aggregate impact factor and cited half-life remains, and the slope of the linear trend is similar.

The filled blue region around each line of best fit is the 95% confidence interval. The fit statistics for the population are 8.64 ± 0.19 years (intercept) and −0.51 ± 0.07 years per impact factor point (slope). The statistics for the subset are 9.25 ± 0.30 (intercept) and −0.96 ± 0.14 (slope).
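Written as equations, with $t_{1/2}$ the aggregate cited half-life in years and $\mathrm{IF}$ the aggregate impact factor, the two fitted lines are:

```latex
\begin{aligned}
\text{population:} \quad & t_{1/2} = (8.64 \pm 0.19) - (0.51 \pm 0.07)\,\mathrm{IF} \\
\text{subset:}     \quad & t_{1/2} = (9.25 \pm 0.30) - (0.96 \pm 0.14)\,\mathrm{IF}
\end{aligned}
```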

[Figure: aggregate impact factor vs. cited half-life for all 176 JCR subject areas, with linear fits and 95% confidence intervals]

III Examining citation lifetime and impact factor correlation by journal

Nature editor Pep Pàmies responded here to our first post with a plot of impact factor vs. cited half-life for journals published by the Nature Publishing Group (NPG) alone. This plot showed an interesting bifurcation in the trend between individual journal impact factor and cited half-life: (i) a negative correlation for journals with impact factors below 10 (consistent with our report), and (ii) a positive correlation for journals with impact factors above 10. The trend persisted even after review-only journals were removed from the dataset.

Intrigued by this bifurcation, we decided to take a further look at the impact factors and cited half-lives of individual journals. In particular, we wanted to see whether the bifurcation seen in the NPG journals is present in other publishing groups. If it is reproducible, it suggests our analyses should include only low impact factor journals; if not, it suggests NPG is somehow special.

Our two datasets:

(i) Population: 490 journals in the high impact factor range (>5.16)

(ii) Subset: journals from the following publishers:

  • American Chemical Society (ACS)

  • Elsevier Science Ltd. (ESL)

  • Nature Publishing Group (NPG)

  • Cambridge University Press (CUP)

It is worth noting that some other major publishers with high impact factor journals, e.g. the American Association for the Advancement of Science (AAAS, which publishes Science) and the American Physical Society (APS), were not analysed because they do not publish enough journals for statistical analysis. A rough sketch of the per-publisher analysis is given below.
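The sketch below shows one way to split each publisher's journals at the impact factor 10 boundary and fit a slope on either side. The column names and publisher labels are hypothetical stand-ins for however the journal-level data is stored, and an ordinary least-squares fit is assumed:

```python
import numpy as np
import pandas as pd

# Hypothetical journal-level export: one row per journal, with the
# cited half-life already capped at 10 as described earlier.
df = pd.read_csv("jcr_journals.csv")

publishers = ["ACS", "ESL", "NPG", "CUP"]

for pub in publishers:
    sub = df[df["publisher"] == pub]
    # Split at the impact factor 10 boundary where the NPG bifurcation appears.
    for label, part in [("IF < 10", sub[sub["impact_factor"] < 10]),
                        ("IF >= 10", sub[sub["impact_factor"] >= 10])]:
        if len(part) < 3:
            continue  # too few journals for a meaningful fit
        # np.polyfit with deg=1 returns (slope, intercept).
        slope, intercept = np.polyfit(part["impact_factor"],
                                      part["cited_half_life"], deg=1)
        print(f"{pub:>3} {label}: slope = {slope:+.2f} yr per IF point")
```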

Let us first look at the entire population. For journals with impact factors between 5.16 and 101.78, there is a large spread in the data with no defining characteristics. The histogram shows this more clearly: both the high and low impact factor subsets of the population appear to have flat distributions.

[Figure: impact factor vs. cited half-life for the population of 490 journals, with histogram]

A similar lack of defining characteristics is seen for all of the publishers except NPG (see figure below). The analysis for this publisher was already done by Pep Pàmies, but we have replotted it with 95% confidence intervals. The slope for the high impact factor subset is 0.09 ± 0.04, while the slope for the low impact factor subset is −0.46 ± 0.11; the latter is several times larger in magnitude than the former.


[Figure: impact factor vs. cited half-life for NPG journals, with linear fits and 95% confidence intervals above and below impact factor 10]

The small correlation coefficient, combined with the spread in the data, implies that the positive correlation between impact factor and cited half-life (seen for journals with impact factors greater than 10) is very weak.

This does not mean that the trend is non-existent. It simply means that any interpretation built on this correlation can only be tentative.

Summary

In summary, we have shown that the average (aggregate) impact factor and cited half-life of JCR subject categories are negatively correlated. By the population trendline, a subject with an average impact factor of 1.0 has an average citation lifetime of 8.13 years, while a subject with a higher average impact factor of 3.0 has a reduced average citation lifetime of 7.11 years.
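These two figures follow directly from the population fit line reported above:

```latex
\begin{aligned}
t_{1/2}(\mathrm{IF} = 1.0) &= 8.64 - 0.51 \times 1.0 = 8.13 \text{ years} \\
t_{1/2}(\mathrm{IF} = 3.0) &= 8.64 - 0.51 \times 3.0 = 7.11 \text{ years}
\end{aligned}
```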

So higher average impact factors indicate subjects that progress faster, where articles lose their context faster (because they are superseded by newer ones). If one goes by the wisdom of XKCD, progress is inversely proportional to the relative 'purity' of the subject.

Interestingly, the trend seen in the averaged data is not seen in the impact factors and cited half-lives of individual journals: the spread in the data wipes out the trend. The exception is the subset of journals published by the Nature Publishing Group, as first noted by Pep Pàmies.

Our analysis here reiterates the point made in our previous post: that impact factors are a convoluted measure of both quality and context. Perhaps we can do better? In our next post we will propose an alternative.

EDIT: See the next post in this series here.

