August 18th, 2010
The idea that you can rank the world’s universities has to be one of the most misguided ever conceived. The assumption must be that all universities in all countries have the same objectives. If this isn’t true, the ranking is meaningless.
The Academic Ranking of World Universities (ARWU) has the following criteria:
- 10%: Alumni winning Nobel Prizes and Fields Medals.
- 20%: Academic Staff winning Nobel Prizes and Fields Medals.
- 20%: ISI Highly Cited Researchers.
- 20%: Articles published in Science and Nature.
- 20%: Papers published that are indexed by the Science Citation Index (SCI) or the Social Science Citation Index (SSCI).
- 10%: Per capita academic performance of the above indicators.
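The composite score is just a weighted sum of these indicators. As a minimal sketch (the weights are from the list above, but the indicator names and example scores are made up for illustration, not real ARWU data):

```python
# Weighted-sum sketch of an ARWU-style composite score.
# Weights follow the criteria list above; each indicator score is
# assumed to be on a 0-100 scale relative to the top institution.
WEIGHTS = {
    "alumni_awards": 0.10,   # Alumni winning Nobel Prizes / Fields Medals
    "staff_awards": 0.20,    # Staff winning Nobel Prizes / Fields Medals
    "highly_cited": 0.20,    # ISI Highly Cited Researchers
    "nature_science": 0.20,  # Articles published in Science and Nature
    "indexed_papers": 0.20,  # Papers indexed by SCI / SSCI
    "per_capita": 0.10,      # Per capita performance of the above
}

def composite_score(indicators):
    """Weighted sum of per-indicator scores (each 0-100)."""
    return sum(WEIGHTS[k] * indicators[k] for k in WEIGHTS)

# A hypothetical institution with invented indicator scores:
example = {"alumni_awards": 40, "staff_awards": 60, "highly_cited": 70,
           "nature_science": 55, "indexed_papers": 80, "per_capita": 50}
print(composite_score(example))  # 62.0
```

Note that 80% of the weight rides on awards and publication counts, which is exactly what makes the gaming strategy below possible.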
Now, assume I am a worried university administrator. How can I improve my institution’s ranking?
I would propose the following strategy:
- Greatly expand research in selected areas of medicine and the natural sciences that tend to have articles published in Science and Nature.
- Expand selected areas in engineering that publish in easy-to-publish IEEE conferences and journals indexed by SCI. Better yet, require all undergraduate and graduate students to publish six or so “SCI papers” with their advisor’s name on them to get their degree.
- Sack everyone else.
Also: make sure you work in an institution that is at least 100 years old and has focused on medicine and the natural sciences in the past. Remember: Nobel Prizes collected by staff and alumni over 50 years ago still count!
I leave it up to someone else to decide whether such a rank-optimized university is what should characterize every university in the world.
August 4th, 2010
I bought a random book called “Milking the Moon” about two years ago and have just finished reading it. It tells the life story of Eugene Walter and is based on extensive interviews with him carried out by Katherine Clark, then a professor of literature.
Eugene grew up in Mobile, Alabama. He had no formal education, yet he quickly became a member of the literary community in the 1940s and 1950s. The book chronicles his life in Mobile, then New York, then Paris, then Rome, and then back to Mobile. Eugene Walter is probably not well known, yet he hung around with the famous and influential all his life. He won a literature award, contributed to the Paris Review literary magazine, and served for many years as an editor of Botteghe Oscure.
I enjoyed the positive message in this book: let’s not worry too much about tomorrow; if you spot a good opportunity, just go for it! By following this advice Eugene got to befriend some of the most interesting people of his time, for instance the Italian film director Fellini. Eugene also had minor roles in some of Fellini’s films. Eugene clearly had a good time, in particular in Rome, where he also wrote the lyrics for the song “What Is a Youth” in the 1968 film Romeo and Juliet.
Yet I think this book gives us only half the story. It conspicuously omits any mention of Eugene’s love life. Did he ever have a family of his own? One part of the book suggests his friendships tended to last up to 15 years. So while we get to hear a lot about his views on other people (most of it interesting, and sometimes fascinating), we hear very little about Eugene himself. I think that is a shame; the foreword gives me the impression he was quite the character. Despite this, I would recommend reading this book. In particular, the first chapters, about his upbringing in Mobile, were a vibrant and refreshing read on life in the American South during the first half of the 20th century.
August 3rd, 2010
I just noticed that according to Google Scholar my first publication, a CHI 2003 paper, has exactly 100 citations now. It seems to be my most cited paper so far.
July 25th, 2010
The College seen from the Garden:
The Garden in the snow (view from the Hall):
The Old Granary:
The bridge to the College’s first island on the river Cam:
July 21st, 2010
A recent paper by Gonzalo Génova in the Communications of the ACM discusses the role of empirical evidence in evaluating computer science research. The article talks about computer science in general, but it reminds me of Henry Lieberman’s 2003 paper The Tyranny of Evaluation, which attacks the tendency in HCI to reject papers describing groundbreaking systems and techniques solely due to their lack of empirical evidence. Henry makes a comparison to Galileo’s experiment of dropping balls from the Tower of Pisa. As he eloquently puts it: “Trouble is, people aren’t balls.”
June 24th, 2010
A recent paper in the Proceedings of the National Academy of Sciences shows that 97–98% of active climate scientists are in agreement with the IPCC. Perhaps not particularly surprising (though data is always good). This is slightly more amusing: the paper also shows that those in disagreement have substantially lower climate expertise and scientific prominence. I suppose this settles the question of whether there is a consensus among climate scientists.
June 24th, 2010
As a Fellow of Darwin College I am a member of the Regent House. This means I occasionally get voting papers from the University about important and not-so-important matters. Today I was asked to vote on the matter of wasting £352,000 on removing a lift from the 15th-century University Combination Room. Why? To teach the Council a lesson. The Council installed the lift without consulting the members of the Regent House. Hence, by removing the lift, starting a new investigation on how to enable disabled access to the room, and then (in all likelihood) deciding to reinstall the lift, the Council is taught how expensive it will be to make decisions without consulting the members of the Regent House!
June 4th, 2010
There is a trend to use citation counts as an estimator of scientific esteem of journals, university departments, and even individual researchers. Douglas Arnold has written an interesting editorial on the danger of relying on such citation counts to evaluate anything (pdf copy). The editorial provides evidence of just how easy it is to manipulate citation counts. I find the examples provided very disturbing. I would encourage anyone concerned with bibliometrics to read this article.
May 28th, 2010
In this month’s issue of Communications of the ACM there is a paper showing that papers in selective ACM conferences are on par with, or better than, journal articles in terms of citation counts.
From the paper:
“First and foremost, computing researchers are right to view conferences as an important archival venue and use acceptance rate as an indicator of future impact. Papers in highly selective conferences—acceptance rates of 30% or less—should continue to be treated as first-class research contributions with impact comparable to, or better than, journal papers.”
Considering that the authors compared these conference papers only against the top-tier journals (ACM Transactions), their finding is surprisingly strong. It also strengthens my view that in computer science, selective conference papers are as good as, if not better than, journal articles.
May 10th, 2010
Swedish universities with Nobel Prize-winning faculty:
- Karolinska Institute, Stockholm
- Royal Institute of Technology, Stockholm
- Stockholm School of Economics
- University of Gothenburg
Swedish cities with Nobel Prize-winning faculty:
- Stockholm: 10 Nobel Prizes
- Uppsala: 5 Nobel Prizes
- Gothenburg: 1 Nobel Prize
Source: Nobel Prize Foundation: Nobel Laureates and Universities
- Researchers in Uppsala and Stockholm have won all Swedish Nobel Prizes awarded to faculty except one.
- Uppsala University is the only Swedish university with Nobel Prize winners in more than one category: Chemistry, Physics, and Physiology or Medicine. All winners at Karolinska Institute are in Physiology or Medicine, and all winners at Stockholm University are in Chemistry.
- The rate of Nobel Prizes won per decade peaked in the 1970s and 1980s, then dropped back to the rate of the 1930s and 1940s. See the figure below.