Per Ola Kristensson | Blog

Archive for the ‘bibliometrics’ Category

My second paper reaches exactly 100 citations

Wednesday, August 29th, 2012

On August 3, 2010 I wrote that my first paper had reached exactly 100 citations on Google Scholar. Now my second publication, a UIST 2004 paper, has exactly 100 citations as well. Coincidentally, we will host UIST 2013 in St Andrews next year.

Bentham open access journal accepted nonsense manuscript

Saturday, November 6th, 2010

The previous post described various commercial open access publishers spamming researchers for book chapters and journal articles. Now I read that Phil Davis at the blog The Scholarly Kitchen submitted a SCIgen-generated nonsense manuscript to “The Open Information Science Journal”, published by Bentham. Four months later Bentham accepted the paper and offered to publish it for an $800 fee. No reviews were provided with the acceptance letter.

It looks like at least some of these commercial open access publishers are essentially vanity presses that will publish anything for a hefty fee. It is a shame, since they risk dragging down the reputation of legitimate open access publishers, such as PLoS. Another danger is that we will see even more use of pseudo-scientific rankings and citation-based measures of journal impact, just so that people can filter out all the meaningless junk. I suppose no one is interested in reading the actual articles any longer.

In defense of open access journals, “Applied Mathematics and Computation”, a journal published by Elsevier since 1975, has apparently also accepted an automatically generated nonsense paper.

How to be a “top-100” university

Wednesday, August 18th, 2010

The idea that you can rank the universities of the world has to be one of the most misguided ideas ever conceived. Such a ranking assumes that all universities in all countries have the same objectives; if they do not, the ranking is meaningless.

The Academic Ranking of World Universities (ARWU) has the following criteria:

  1. 10%: Alumni winning Nobel Prizes and Fields Medals.
  2. 20%: Academic Staff winning Nobel Prizes and Fields Medals.
  3. 20%: ISI Highly Cited Researchers.
  4. 20%: Articles published in Science and Nature.
  5. 20%: Papers published that are indexed by the Science Citation Index (SCI) or the Social Science Citation Index (SSCI).
  6. 10%: Per capita academic performance of the above indicators.

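To make the weighting concrete, here is a minimal sketch of how such a weighted composite score is computed. It is in Python; the indicator names and example values are hypothetical, and the real ARWU additionally normalizes each indicator against the top-scoring institution:

    # Hypothetical sketch of an ARWU-style weighted composite score.
    # Indicator names and example values are made up for illustration.

    WEIGHTS = {
        "alumni_awards": 0.10,   # Alumni winning Nobel Prizes / Fields Medals
        "staff_awards": 0.20,    # Staff winning Nobel Prizes / Fields Medals
        "highly_cited": 0.20,    # ISI Highly Cited Researchers
        "nature_science": 0.20,  # Articles published in Science and Nature
        "sci_ssci_papers": 0.20, # Papers indexed by SCI / SSCI
        "per_capita": 0.10,      # Per capita performance of the above
    }

    def arwu_style_score(indicators: dict[str, float]) -> float:
        """Weighted sum of indicator scores, each assumed normalized to 0-100."""
        return sum(WEIGHTS[name] * indicators.get(name, 0.0) for name in WEIGHTS)

    # Example: an institution weak on prizes but strong on SCI-indexed output.
    example = {
        "alumni_awards": 0.0,
        "staff_awards": 0.0,
        "highly_cited": 40.0,
        "nature_science": 55.0,
        "sci_ssci_papers": 90.0,
        "per_capita": 30.0,
    }
    print(arwu_style_score(example))  # 40.0

Note that criteria 4 and 5 alone carry 40% of the weight, which is exactly what the strategy below exploits.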
Now, assume I am a worried university administrator. How can I improve my institution’s ranking?

I would propose the following strategy:

  1. Greatly expand research in selected areas of medicine and the natural sciences that tend to have articles published in Science and Nature.
  2. Expand selected areas in engineering that publish in easy-to-publish IEEE conferences and journals indexed by SCI. Better yet, require all undergraduate and graduate students to publish six or so “SCI papers” with their advisor’s name on them in order to get their degree.
  3. Sack everyone else.

Also: Make sure you work at an institution that is at least about 100 years old and has focused on medicine and the natural sciences in the past. Remember: Nobel Prizes collected by staff and alumni over 50 years ago still count!

I leave it to someone else to decide whether such a rank-optimized university is what every university in the world should aspire to be.

My first paper reaches exactly 100 citations

Tuesday, August 3rd, 2010

I just noticed that according to Google Scholar my first publication, a CHI 2003 paper, has exactly 100 citations now. It seems to be my most cited paper so far.

Bibliometrics: How easy it is to manipulate citation counts

Friday, June 4th, 2010

There is a trend toward using citation counts to estimate the scientific esteem of journals, university departments, and even individual researchers. Douglas Arnold has written an interesting editorial on the dangers of relying on such counts to evaluate anything (pdf copy). The editorial provides evidence of just how easy it is to manipulate citation counts, and I find the examples it gives very disturbing. I would encourage anyone concerned with bibliometrics to read it.

Bibliometrics: The importance of conference papers in computer science

Friday, May 28th, 2010

In this month’s issue of Communications of the ACM there is a paper showing that papers in selective ACM conferences are on par with, or better than, journal articles in terms of citation counts.

From the paper:

“First and foremost, computing researchers are right to view conferences as an important archival venue and use acceptance rate as an indicator of future impact. Papers in highly selective conferences—acceptance rates of 30% or less—should continue to be treated as first-class research contributions with impact comparable to, or better than, journal papers.”

Considering that the authors compared these conference papers only against the top-tier journals (ACM Transactions), their finding is surprisingly strong. It also strengthens my view that in computer science, selective conference papers are as good as, if not better than, journal articles.