Impact Factor: A Popular but Invalid Measure of Research Quality

Research is a key to success, whether in the basic sciences or the applied sciences. Nations that have not treated research as a lifeline of their developmental activities have remained underdeveloped. Hence, in the present century, huge investments are being made in research programmes across the world, and Pakistan is no exception. Yet all the stakeholders involved in research programmes often question the quality of the research work. A few techniques and tools have been devised to measure research quality, but these measures still have many flaws and biases.

Researchers are increasingly measured by the impact of their research, currently often by means of the impact factors of the journals they publish in. A scientist with a larger number of articles in impact factor journals is considered more qualified, so there is a growing trend among researchers to accumulate impact factor value in order to prove their worth. But evaluating research work, or a researcher, with the impact factor is problematic. Keeping in view the reservations expressed by renowned scientists, this article examines the relationship between the quality of research work (and researchers) and the impact factor.

In the Pakistani context, the Higher Education Commission (HEC) maintains a list of recognized indigenous journals, classified into categories W, X, Y and Z; only the W category carries an impact factor. Across all categories the HEC lists 102 science journals, of which only two, the Pakistan Journal of Botany and the Journal of the Chemical Society of Pakistan, have an impact factor. Similarly, the HEC lists 132 social science journals, none of which has an impact factor. According to the HEC, after 30th June 2009 only journals with an impact factor will be considered HEC recognized journals. It is obvious that, if the present standard of the listed journals prevails, only two journals will remain on the recognized list after June 2009. The HEC has even imposed a minimum impact factor requirement (at least 5) on university teachers who wish to supervise PhD students. The survival of scientists can now be directly linked to the impact factor values they earn, so it is quite rational to ask: what is an impact factor?

The impact factor, often abbreviated IF, is a measure of citations to science and social science journals. It is frequently used as a proxy for the importance of a journal to its field. The impact factor was devised by Eugene Garfield, the founder of the Institute for Scientific Information (ISI), now acquired by Thomson, a large worldwide publisher based in the USA. Impact factors are calculated each year by the ISI for the journals it indexes, and the factors and indices are published in the Journal Citation Reports (JCR). The JCR is widely (though not freely) available. It covers more than 7,500 of the world's most highly cited, peer-reviewed journals from more than 60 countries, in approximately 200 disciplines. The JCR is published in two editions: the Science Edition, covering over 5,900 leading international science journals, and the Social Sciences Edition, covering over 1,700 leading international social science journals. These measures apply only to journals, not to individual articles or individual scientists. Eugene Garfield himself warns against the "misuse in evaluating individuals" because there is "a wide variation from article to article within a single journal". Most papers published in a high impact factor journal are ultimately cited far fewer times than the impact factor may seem to suggest, and some are not cited at all. Therefore the impact factor of the source journal should not be used as a substitute measure of the citation impact of individual articles in that journal. Unfortunately, this is widely practiced, including in Pakistan.

It is important to explain the term citation to better understand how the IF is calculated, its disadvantages, and how it can be manipulated. A citation occurs when a scientist cites an article from an ISI indexed journal in their own work, which is in turn published in an ISI indexed journal. The ISI counts such citations as a measure of the usage and impact of the cited work; this is also called citation analysis or bibliometrics. The calculation of a journal's impact factor is based on citation analysis. The IF can be calculated by a formula devised by Eugene Garfield, but an easy way of thinking about it is that a journal whose articles are cited once each, on average, has an IF of 1. Journal IFs range from zero to more than 35.
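To make the arithmetic concrete, the following is a minimal sketch of the standard two-year impact factor calculation; the journal and all of the numbers are invented purely for illustration.

```python
# Minimal sketch of the standard two-year impact factor calculation.
# The journal and the counts below are hypothetical, used only for illustration.

def impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    """IF for year Y = citations received in Y to items published in Y-1 and Y-2,
    divided by the number of citable items published in Y-1 and Y-2."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# Hypothetical journal: 80 citable articles in 2007, 70 in 2008,
# and 225 citations received in 2009 to those articles.
print(impact_factor(225, 80 + 70))  # 1.5
```

On this definition, a journal whose recent articles are cited on average once each scores an IF of 1, exactly as described above.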

It is a debatable question whether the impact factor is a true measure of journal quality. For example, it is unclear whether the number of citations a paper receives measures its actual worth or simply reflects the sheer number of publications in that particular area of research. Treating the IF as a universal quality marker is also biased because of the ISI's inadequate international coverage. Although the ISI indexes journals from more than 60 countries, the coverage is very uneven: an overwhelming share of the indexed publications are in English, while very few are in other international languages such as French or Chinese. Similarly, very few journals from less-developed countries are included; from Pakistan, only two. Even the journals that are included are under-cited, because most of the citations to such journals come from other journals in the same language or country, most of which are not indexed by the ISI. The failure to include many high quality journals in the applied branches of some subjects, such as marketing communications and public relations, is another source of bias.

The number of citations to papers in a particular journal does not directly measure the excellence of that journal, and a citation does not reflect the scientific worth of the cited paper. All citations are weighted equally, even negative ones: if paper X cites paper Y as containing errors, the citation still improves the impact factor of the journal that published Y. The citation count reflects, to some extent, the intensity of publication and citation in that area, the current popularity of that particular topic, and the accessibility of particular journals. Journals with low circulation, regardless of the scientific value of their contents, will never obtain high impact factors in an absolute sense. Since defining the quality of an academic publication involves non-quantifiable factors, such as the influence on the next generation of scientists, assigning it a single numeric value cannot tell the whole story. Classic articles continue to be cited repeatedly even after several decades; this should not affect specific journals, but in many cases it does affect the IF calculation.

A scientific study has shown that the absolute number of researchers, the average number of authors per paper, the nature of results in different research areas, and variations in citation habits between disciplines, particularly the number of citations in each paper, all combine to make impact factors incomparable between different groups of scientists. Generally, for example, medical journals have higher impact factors than mathematical and engineering journals. The publishers accept this drawback; it has never been claimed that impact factors are comparable between fields, and such a use is an indication of misunderstanding. By merely counting the frequency of citations per article and disregarding the real standing of the citing journals, the impact factor becomes simply a metric of popularity, not of excellence.

The IF can also be manipulated. A journal can adopt editorial policies that boost its impact factor, and these policies need not involve improving the quality of the published scientific work. Journals may, for instance, publish a larger percentage of review articles. A study showed that many original research articles remain uncited after three years, while nearly all review articles receive at least one citation within three years of publication; review articles therefore raise the impact factor of a journal. An editor may also encourage authors to cite articles from that same journal in the papers they submit, because the ISI counts such self-citations in the IF calculation. It can fairly be said that simply adopting the impact factor as the measure would probably discourage creativity and original research work. Recently, the Parliament of the United Kingdom's Committee on Science and Technology urged the HEFCE to remind Research Assessment Exercise (RAE) panels that they are obliged to assess the quality of the content of individual articles, not the reputation of the journal in which they are published.
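The arithmetic behind the self-citation effect can be illustrated with a short sketch; all of the figures below are hypothetical and do not describe any real journal.

```python
# Hypothetical illustration of how journal self-citations inflate the impact factor.
# All figures are invented for the example.

citable_items = 150          # articles published in the two preceding years
external_citations = 180     # citations received from other journals
self_citations = 60          # citations the journal's own articles make to it

if_with_self = (external_citations + self_citations) / citable_items
if_without_self = external_citations / citable_items

print(f"IF including self-citations: {if_with_self:.2f}")   # 1.60
print(f"IF excluding self-citations: {if_without_self:.2f}") # 1.20
```

In this invented case, an editorial push for self-citation lifts the headline figure by a third without any change in how the journal's work is received elsewhere.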

Besides the IF, another tool for quantifying research quality is the h-index. The index was suggested in 2005 by Jorge E. Hirsch and is sometimes called the Hirsch index or Hirsch number. The h-index is currently not a widely accepted measure of scientific output. It is calculated from the citations received by a given researcher's publications, and it is proposed to measure simultaneously the quality and sustainability of scientific output as well as, to some extent, the diversity of scientific research. But because the h-index is again based on citations, some of the drawbacks of the impact factor apply equally to it, and it is not difficult to construct situations in which the h-index gives deceptive information about a scientist's output. In practice, the remaining alternative measure of quality is "prestige": rating by reputation, which changes very slowly and cannot be quantified or used objectively. It merely demonstrates popularity.
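Because the h-index is defined directly from a researcher's citation counts (the largest h such that h of their papers have each been cited at least h times), a short sketch can make the definition concrete; the citation list below is invented for illustration.

```python
# Minimal sketch of the h-index: the largest h such that the researcher has
# at least h papers cited at least h times each. The citation counts are invented.

def h_index(citations):
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical researcher with seven papers and these citation counts:
print(h_index([25, 8, 5, 4, 3, 1, 0]))  # 4
```

The example also shows the metric's bluntness: the heavily cited first paper and the uncited last one contribute nothing beyond the count of "at least h times" papers.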

The above discussion can be concluded with what Hoeffel expressed concisely: "Impact Factor is not a perfect tool to measure the quality of articles but there is nothing better and it has the advantage of already being in existence and is, therefore, a good technique for scientific evaluation." What absolute tool, then, should be used for evaluating research work? The question remains unanswered. Surely it would be a matter of honor if Pakistan could contribute in this regard.

Muhammad Ramzan Rafique