The management of research is changing. It used to be thought that the value of a scientist’s work was best judged by the intelligently subjective opinion of their peers: fellow scientists, especially those who were more experienced or were known for their breadth of knowledge and interests. In the last few years, many funding bodies (with some honourable exceptions) have shifted towards a supposedly more objective system of assessment, by quantifying each scientist’s output. In effect, well-informed value judgements have been replaced by bureaucratic bean-counting.
It’s amazing what the bean-counters can do with a spreadsheet. A staggering amount of information was compiled for the recent BBSRC institutes’ Visiting Groups and universities’ Research Selectivity Exercises. Some of this information may well be useful, but some is only impressive in the pettiness of its detail: does the number of abstracts printed in conference proceedings in the last four years really tell you anything at all about the quality of a scientist’s work?
The single figure which is most prized as a quantitative measure of the value of a scientist’s work is the Citation Rating of the journals in which their papers are published. This is the average number of times a paper in that journal is cited, within two years of its publication, in other papers by other people in refereed journals. This measurement is particularly dangerous because of its spurious air of relevance.
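By way of illustration (a sketch of the arithmetic only, using invented figures rather than any journal’s actual data), the Rating is just a ratio of citations to papers:

\[
\text{Citation Rating} = \frac{\text{citations received within two years of publication}}{\text{papers published}} = \frac{180}{120} = 1.5
\]

A journal whose 120 papers attracted 180 such citations would thus score 1.5: a single figure that says nothing about which papers were cited, by whom, or why.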
The Citation Rating is often called the Impact Factor. Perhaps this name was coined by the organisations that make their living from compiling these statistics, to imply that a paper’s Impact Factor really does indicate the impact it has had on science. No doubt, a paper in Nature or Science does have, on average, a greater impact than one in a less prestigious journal. But it’s not hard to demolish the notion that the so-called Impact Factor is necessarily accurate when applied to a single paper, or even a group of papers. Imagine a letter from a bean-counter to Fr Mendel: “Sorry, but our statistics indicate that your research, which was totally ignored for 36 years after its publication, has had no impact on biology”. Or recall that the most-cited paper of the last ten years was the false alarm about low-temperature nuclear fusion: hundreds of papers have cited it, along the lines of “despite this report, there is in fact no evidence for cold fusion”.
The Citation Rating has become a victim of Goodhart’s Law, that “any performance indicator loses its value when it becomes a policy target”. The Law is named after an economist who noticed that the money supply, which had been a perfectly good predictor of future inflation, began to fluctuate wildly when, in 1979, the government tried to control the money supply itself rather than the underlying factors which made it inflate. Much the same has happened to the Citation Rating: now that it is a target in itself, resources are allocated not according to the quality of the research but according to the size of the readership of the journal in which the work is published.
The Citation Rating is most damaging when used by funding bodies to allocate money or posts. Research has many users: government departments, for formulating policy; commercial companies, for devising new products and processes; and the general public, to improve the quality of life. But the Citation Rating concentrates on a different group of users. Who is it that cites a paper a certain number of times? Fellow scientists, of course. For papers to achieve high ratings, researchers must forget that their work might be of value to industry, the public or the state, and focus entirely on whether their papers will be read by other researchers like themselves.
In his novel “The Glass Bead Game”, Hermann Hesse depicted a society driven to the brink of self-destruction because its most able elite had been encouraged to devote their energies and intellects to a life of academic seclusion: “The real scholars were left in almost total freedom to ply their studies and their Games, and no one objected that a good many of their works … seemed to nonscholars merely luxurious frivolities”. That is the direction in which research is steadily being pushed – even if not intentionally – by a system of assessment which values importance to the academic community above all else.
The widespread mis-use of Citation Ratings naturally affects the journals themselves. Plant Pathology, for instance, has the second-highest Rating among general plant pathology journals, behind Phytopathology. Journals published in the USA almost always have higher Citation Ratings than those from other countries in the same field, simply because the USA has by far the largest number of scientists. Nevertheless, the difference in Citation Ratings has had the absurd effect that some research directors, in the UK and other European countries, have all but obliged their staff to publish, in an American journal, work on diseases which are most significant in Europe.
What should Plant Pathology do about this situation? While we may hope that Citation Madness, like BSE, will gradually fade away, we cannot ignore the problem and refuse to play the Citation Rating Game. We also surely cannot contemplate the other extreme, which would be to refuse to publish papers on diseases which are unimportant in Europe or North America – and would therefore not be cited very often – even though they might be of major significance to developing countries.
A middle way, which may help to increase our journal’s Citation Rating without lowering its standards or limiting its service to plant pathology, would be to make a particular effort to publish the types of papers which are likely to be cited widely. Citation statistics show that two such types are critical reviews and papers reporting technical advances. What should Plant Pathology do to attract papers such as these?
Finally, is our publicity up to scratch? Can BSPP do more to make sure that the academic community is aware of Plant Pathology – and of the advantages of publishing in it (see p.80)? Let us know if any part of the pathology world has managed to remain ignorant of our journal!
The continued existence of a diverse range of journals is essential for scientific progress across the full range of our subject. BSPP would therefore value greatly the views of members on how the profile of Plant Pathology – and, while it’s used (and mis-used), its Citation Rating – can be enhanced.
James Brown