How science journals rig their rankings
The Wall Street Journal tells us that science journals have succumbed to a sort of academic product placement. It goes like this: editors and publishers at, for instance, the American Journal of Respiratory and Critical Care Medicine will urge scientist-contributors to frontload their papers with citations to the journal's past studies. Doing so will increase the journal's rankings--or "impact factors"--thereby increasing the likelihood that libraries and other institutions will subscribe.
[I]mpact factors are essentially a grading system of how important the papers a journal publishes are. "Importance" is measured by how many other papers cite it, indicating that the discoveries, methodologies or insights it describes are advancing science. Impact factors are calculated annually for some 5,900 science journals by Thomson Scientific.
Apparently, this has been going on for years and is deeply entrenched (though, naturally, publishers deny it). But it seems to me that there's a possible (partial) solution: why not change the ranking formula to discount citations of one's own journal? Or of journals linked to the same parent company? At the very least, a journal whose self-citation rate exceeds a certain percentage should get a good whipping.
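To make the suggestion concrete, here is a minimal sketch of what a self-citation-discounted score might look like. The function name, the threshold, and all the numbers are hypothetical illustrations, not Thomson Scientific's actual method:

```python
def adjusted_impact_factor(total_citations, self_citations, articles_published):
    """Citations per article, counting only citations from other journals.

    Hypothetical variant: self-citations are excluded entirely before
    dividing by the number of articles published.
    """
    external_citations = total_citations - self_citations
    return external_citations / articles_published

def excessive_self_citation(total_citations, self_citations, threshold=0.5):
    """Flag a journal whose self-citation share exceeds a chosen threshold.

    The 50% default is an arbitrary illustration; the WSJ story notes
    Thomson dropped one journal whose rate hit 85%.
    """
    return (self_citations / total_citations) > threshold

# Illustrative figures: 500 total citations, 200 of them self-citations,
# 100 articles. The naive score would be 500/100 = 5.0; discounting
# self-citations yields 300/100 = 3.0.
print(adjusted_impact_factor(500, 200, 100))  # 3.0
print(excessive_self_citation(500, 200))      # False (40% < 50%)
```

The point of the sketch is only that the discount is mechanically trivial; the hard part would be deciding what counts as a "linked" journal under the same parent company.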
Full story copied below the jump.
Science Journals Artfully Try To Boost Their Rankings
By SHARON BEGLEY
June 5, 2006; Page B1
John B. West has had his share of requests, suggestions and demands from the scientific journals where he submits his research papers, but this one stopped him cold.
Dr. West, the Distinguished Professor of Medicine and Physiology at the University of California, San Diego, School of Medicine, is one of the world's leading authorities on respiratory physiology and was a member of Sir Edmund Hillary's 1960 expedition to the Himalayas. After he submitted a paper on the design of the human lung to the American Journal of Respiratory and Critical Care Medicine, an editor emailed him that the paper was basically fine. There was just one thing: Dr. West should cite more studies that had appeared in the respiratory journal.
If that seems like a surprising request, in the world of scientific publishing it no longer is. Scientists and editors say scientific journals increasingly are manipulating rankings -- called "impact factors" -- that are based on how often papers they publish are cited by other researchers.
"I was appalled," says Dr. West of the request. "This was a clear abuse of the system because they were trying to rig their impact factor."
Just as television shows have Nielsen ratings and colleges have the U.S. News rankings, science journals have impact factors. Now there is mounting concern that attempts to manipulate impact factors are harming scientific research.
Conceived 40 years ago, impact factors are essentially a grading system of how important the papers a journal publishes are. "Importance" is measured by how many other papers cite it, indicating that the discoveries, methodologies or insights it describes are advancing science.
Impact factors are calculated annually for some 5,900 science journals by Thomson Scientific, part of the Thomson Corp., of Stamford, Conn. Numbers less than 2 are considered low. Top journals, such as the Journal of the American Medical Association, score in the double digits. Researchers and editors say manipulating the score is more common among smaller, newer journals, which struggle for visibility against more established rivals.
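As commonly described, the score is a two-year citations-per-article ratio: citations received in a given year to papers the journal published in the two preceding years, divided by the number of papers it published in those two years. A sketch, with made-up figures:

```python
def impact_factor(citations_to_prior_two_years, papers_in_prior_two_years):
    """Two-year impact factor as commonly described: citations this year
    to the journal's papers from the previous two years, divided by the
    number of papers published in those two years."""
    return citations_to_prior_two_years / papers_in_prior_two_years

# A journal that published 120 papers over the two prior years and drew
# 300 citations to them this year scores 300 / 120 = 2.5 -- above the
# "low" threshold of 2 mentioned in the article, but far from JAMA's
# double digits.
print(impact_factor(300, 120))  # 2.5
```

This denominator is also why the review-article strategy described below works: reviews add citations to the numerator year after year while counting as only one paper in the denominator.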
Thomson Scientific is set to release the latest impact factors this month. Thomson has long advocated that journal editors respect the integrity of the rankings. "The energy that's put into efforts to game the system would be better spent publishing excellent papers," says Jim Testa, director of editorial development at the company.
Impact factors matter to publishers' bottom lines because librarians rely on them to make purchasing decisions. Annual subscriptions to some journals can cost upwards of $10,000.
The result, says Martin Frank, executive director of the American Physiological Society, which publishes 14 journals, is that "we have become whores to the impact factor." He adds that his society doesn't engage in these practices.
Journals can manipulate impact factors with legitimate editorial decisions. One strategy is to publish many review articles, says Vicki Cohn, managing editor of Mary Ann Liebert Inc., a closely held New Rochelle, N.Y., company that publishes 59 journals. Reviews don't report new results but instead summarize recent findings in a field. Since it is easier for scientists to cite one review than the dozens of studies that it summarizes, reviews get a lot of citations, raising a journal's impact score.
"Journal editors know how to increase their impact factor legitimately," says Ms. Cohn. "But there is growing suspicion that journals are using nefarious means to pump it up."
One questionable tactic is to ask authors to cite papers the journal already has published, as happened to UCSD's Dr. West, who says that he has great respect for the journal and its editors despite this episode. He declined the request, and the journal published his paper anyway, in March.
Richard Albert, the deputy editor of the American Journal of Respiratory and Critical Care Medicine, says that the request goes out to every scientist who submits a paper. "It's boilerplate, a form letter," he says. The letter has been in use for many years, according to Dr. Albert, who says he has always opposed the inclusion of the passage but was overruled by the journal's former editor.
Journals also can resort to "best-of" features, such as running annual summaries of their most notable papers. When Artificial Organs did this in 2005, all 145 citations were to other Artificial Organs papers. Editor Paul Malchesky says the feature was conceived "as a service to the readership. It was not my intention to affect our impact factor. In terms of how we run our operation, I don't base that on impact factor."
Self-citation can go too far. In 2005, Thomson Scientific dropped the World Journal of Gastroenterology from its rankings because 85% of the citations it published were to its own papers and because few other journals cited it. Editors of the journal, which is based in Beijing, did not answer emails requesting comment.
Journals can limit citations to papers published by competitors, keeping the rivals' impact factors down. An analysis of citations in the Journal of Telemedicine and Telecare shows very few citations of papers in a competitor, Telemedicine and e-Health, "while we cited them liberally," says editor Rashid Bashshur, director of telemedicine at the University of Michigan, Ann Arbor.
Richard Wootton, editor of JTT, says that he believes it's true that his journal cites its competitor less frequently than Dr. Bashshur's journal cites JTT, "but it doesn't seem to me that there is a sinister explanation." Dr. Wootton adds that "when we edit a paper...we sometimes ask authors to ensure that the relevant literature is cited." But "I can state unequivocally that we do not attempt to manipulate the JTT's impact factor. For a start, I wouldn't know how to."
Scientists and publishers worry that the cult of the impact factor is skewing the direction of research. One concern, says Mary Ann Liebert, president and chief executive of her publishing company, is that scientists may jump on research bandwagons, because journals prefer popular, mainstream topics, and eschew less-popular approaches for fear that only a lesser-tier journal will take their papers. When scientists are discouraged from pursuing unpopular ideas, finding the correct explanation of a phenomenon or a disease takes longer.
"If you look at journals that have a high impact factor, they tend to be trendy," says immunologist David Woodland of the nonprofit Trudeau Institute, of Saranac Lake, N.Y., and the incoming editor of Viral Immunology. He recalls one journal that accepted immunology papers only if they focused on the development of thymus cells, a once-hot topic. "It's hard to get into them if you're ahead of the curve."
As examples of that, Ms. Liebert cites early research on AIDS, gene therapy and psychopharmacology, all of which had trouble finding homes in established journals. "How much that relates to impact factor is hard to know," she says. "But editors and publishers both know that papers related to cutting-edge and perhaps obscure research are not going to be highly cited."
Another concern is that impact factors, since they measure only how many times other scientists cite a paper, say nothing about whether journals publish studies that lead to something useful. As a result, there is pressure to publish studies that appeal to an academic audience oriented toward basic research.
Journals' "questionable" steps to raise their impact factors "affect the public," Ms. Liebert says. "Ultimately, funding is allocated to scientists and topics perceived to be of the greatest importance. If impact factor is being manipulated, then scientists and studies that seem important will be funded perhaps at the expense of those that seem less important."
Write to Sharon Begley at firstname.lastname@example.org
Posted by carrie on 06/07/2006 | Permalink
Further to Richard Wootton's comments about JTT citations of TJEH, I would refer the reader to the Whan et al. article at http://www.ingentaconnect.com/content/rsm/jtt/2006/00000012/A00307s3/art00032. The work reveals one possible reason for the less-frequent citations of TJEH -- the journal publishes fewer articles: 354 in 1998-2005, compared with 1020 in JTT in the same period.
Posted by: Tasha Louiza | Jul 6, 2007 11:32:55 AM