What would scientific life be like without the Impact Factor? How would we best measure the worth of a scientist or a publication? Surely the simplest way would be to read the great volume of published work and judge each individual paper on its own merits. But is there any real replacement for reading journals? Reading every single publication is close to impossible, and a post-publication metric is arguably more reliable than a pre-publication one. The Impact Factor itself does not seem to be a mathematically sound way to judge citation levels: individual article citation rates correlate only weakly with journal Impact Factor. That may sound surprising, but on reflection it is common sense. We have often repeated the mantra that it’s not where you publish but what you publish that should get you noticed.
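For context, the standard two-year Impact Factor is just a journal-level average (taking 2011 as an example year):

    IF(2011) = (citations received in 2011 to items the journal published in 2009 and 2010)
               ÷ (citable items the journal published in 2009 and 2010)

Because citation distributions are heavily skewed, that average is dominated by a handful of highly cited articles, which is precisely why it says so little about any individual paper.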
“no one can keep up with the literature, so having colleagues point out the recent important advances in unfamiliar areas is enormously useful”
The publish-or-perish mentality breeds a certain form of science, one restricted to simply generating output and getting noticed. With all the talk of Impact Factors and their role in modern science, we should not forget that there are other ways to measure the quality of publications (other than reading them, of course). Firstly, if what is published is of exceptional quality, you will get cited. This is a simple fact. Post-publication expert ratings (the exceptional F1000, for example) also add a peer-oriented layer of judgement to published articles. And let’s not forget the h-index, proposed to measure the scientific output of an individual scientist. With competition for grants and tenured jobs becoming ever fiercer, distilling an entire publication record down to a single number seems like the easy solution, however dangerous the precedent.
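For those unfamiliar with it, the h-index is simple to compute: an author has index h if h of their papers have each been cited at least h times. A minimal sketch (the citation counts here are made up purely for illustration):

    def h_index(citations):
        # Largest h such that h papers have at least h citations each
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(ranked, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    print(h_index([10, 8, 5, 4, 3]))  # prints 4: four papers have 4 or more citations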
In the ever-expanding Web 2.0 world we live in, we can add web feeds, blogs, online discussions, Twitter, and Facebook to the list. Nature’s new journal Scientific Reports has even gone as far as to include this kind of internet-level metric, allowing the scientific community at large to assess the importance of each article individually after publication. It includes most-downloaded, most-emailed, and most-blogged-about lists of articles. Media and news coverage are considered, in some higher echelons of the ivory tower, a good indicator of excellence. After all, we want the science that we do to go on to change the world. However, it is the opinion of this writer that “science by press release” should not be confused with good scientific communication that fosters public understanding of science.
I suppose the underlying question is this: as the science we do and partake in increasingly comes in different forms (the first port of call is the original research paper, but as the discussion disseminates to the wider community it takes on other forms such as blogs and media articles), what is their relative impact on science? Unique Author Identifiers (such as ResearcherID and OpenID) are becoming more common partly because of a need to address this. In essence, you and your complete scholarly output - from articles to blogs - could be identified online in an easy-to-track way, distilling your scientific career into a more manageable form. They do have their advantages: as we move deeper and deeper into a digital world we need more efficient ways of collating and keeping track of the work we produce.
In the end, the great big caveat is that citations, download counts, and front-page headlines do not make great science. It seems counter-intuitive to try to quantify something as large and as qualitative as scholarly work. How does your contribution to our shared understanding of the way the world works compare against someone else’s? The ever-evolving, progressive nature of science means that the most significant scientific work you’ll do is not your next paper, but the one after that.
Now do everyone a favor, and leave your best ideas about the Impact Factor and how it could influence a journal of negative results below in the comments. Please don't just write "good post" or "I like that"...instead, add some value and contribute to this conversation with an insight, a practice, or a resource that we can all use to create more value. Thank you!
Do we need scientometrics at all? Measuring science to quantify its quality? Neither the Impact Factor nor the Hirsch index (h) is perfect. However, IMHO I much prefer a quantitative measure when deciding, for example, between many candidates for a contract, a project, or an award. Qualitative opinions, by contrast, are often biased by personal interests and allow meritocracy to be ignored, promoting less-than-ideal candidates based only on personal affinity. Both IF and h are imperfect, but scientometrics needs to improve them, not return to personally biased opinions! In any case, all of this is complicated and needs careful analysis to propose better solutions, which is not in my hands.
I agree with Humberto. Numbers of citations are as good as it gets for a metric, and impact factor is a good way of roughly predicting the chances of being cited well in the future - and obviously citations themselves can't be used for this! (The exact ways in which these are calculated could of course be improved.)
However, how impact factors are used, e.g. for grants, scholarships, and qualifications, is a different matter. In medical research it is common for someone to notch up three papers in impact-factor-1 journals and feel very proud to have a total impact factor of 3 - when in fact the articles are often complete nonsense. This takes roughly the same time as a proper study producing one article at impact factor 3. I call this the "three ones" effect, and it is probably the most damaging effect in medical research at the moment.
Using impact factor squared (up to a limit, above which a linear increase is used) for the above applications would encourage putting extra resources into one strong paper - gaining e.g. 3 squared = 9 as opposed to 3 x 1 squared = 3 - especially if a higher total is now needed to gain a qualification, and would bring a lot of medical research into the realm of medical science.
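A rough sketch of how such a scoring rule might look, following the commenter's description; the cap of 10 and the linear slope beyond it are assumptions, since the comment does not specify them:

    def score(impact_factor, cap=10):
        # Square the impact factor up to a cap, then grow linearly beyond it
        # (cap value and slope beyond the cap are assumptions)
        if impact_factor <= cap:
            return impact_factor ** 2
        return cap ** 2 + (impact_factor - cap)

    # Three impact-factor-1 papers: 3 * score(1) = 3
    # One impact-factor-3 paper:    score(3)     = 9
    print(3 * score(1), score(3))

Under this kind of rule, one solid impact-factor-3 study outweighs three throwaway impact-factor-1 papers, which is exactly the incentive shift the comment argues for.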