Per Ola Kristensson | Blog


Archive for the ‘peer review’ Category

The Chronicle writes about open access spam journals

Wednesday, March 7th, 2012

I have blogged before about how low-quality open access “publishers” indiscriminately spam researchers for manuscripts for the most ridiculous journals and “edited books” (here, here and here). Peer review and quality control appear to be minimal, and the publication fees are outrageously high.

Now The Chronicle of Higher Education has an article about how ‘Predatory’ Online Journals Lure Scholars Who Are Eager to Publish. It is excellent, and it is very important that this issue is brought to attention. There is of course nothing wrong with a desire to communicate research in the form of open access articles. However, having tax-funded researchers pay ridiculously high fees to someone who essentially hosts PDF files without any real peer review whatsoever is a huge waste of tax money. Even worse, researchers passing these “publications” off as properly peer-reviewed scholarly articles is, in my view, essentially a form of academic misconduct.

More open access spam

Thursday, October 6th, 2011

Relating to a previous post on academic spam, it seems the spammers from InTech now have competition in the open access “book chapter” business.

I just received the following email:

Dear Per Ola Kristensson,

On behalf of iConcept Press, I would like to invite you to submit a paper
to our new book project under the working title “Handbook of Pattern
Recognition: Methods and Application”. The editor of this book is Khalid
Hosny. The book description can be found at:

We notice that you have some publications related to this book project. For
instance, we are interested in your paper entitled “Parakeet: a
demonstration of speech recognition on a mobile touch-screen device
(2009)”. (Please note that we are not asking you to republish this paper).
You are welcome to submit an extended/expanded version of this paper, or
any work that is related to this book. We expect each chapter has a minimum
of 16 pages.

All iConcept Press books are published as hard copy with ISBN and as open
access. For more information, please visit our web site:

Since we are open access, we would like to ask our authors to contribute
part of the publication expense. This publication fee might be
waived/reduced if: Most authors are from countries that are not classified
as high-income economies
( Please
contact us for more information. The standard publication fee for each
accepted manuscript is USD$38/page for the first 16 pages and USD$18/page
thereafter. The corresponding author of each paid chapter will receive one
hard copy for free.

I sincerely hope that you would accept the invitation and that you support
the open access idea.

Next step:
1. Please inform us about your decision via email by: 18 Oct 2011
2. Submit a tentative abstract and title via our online submission system
( by: 10 Nov
3. Upload a full chapter by: 15 Jan 2012

Please free feel to contact us if you have any question.


iConcept Press
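To put the quoted fee schedule in perspective, here is a minimal sketch of what a chapter would cost under those terms (the function name and the example page counts are my own, purely for illustration):

```python
def iconcept_fee(pages, base_rate=38, extra_rate=18, base_pages=16):
    """Publication fee per the quoted terms: USD $38/page for the
    first 16 pages and USD $18/page thereafter."""
    if pages <= base_pages:
        return pages * base_rate
    return base_pages * base_rate + (pages - base_pages) * extra_rate

print(iconcept_fee(16))  # the stated 16-page minimum: $608
print(iconcept_fee(24))  # a 24-page chapter: 16*38 + 8*18 = $752
```

So even a chapter at the stated minimum length would cost its authors over $600 just to have a PDF hosted.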

Again, I wonder why I would pay for an open access book chapter. These chapters seem to undergo minimal (if any) peer review (I have never been asked to review any, and I have never heard of anyone else being approached either), so all an open access book chapter amounts to is a PDF document hosted on a website. I might as well make the text available on the web myself. For free.

Bentham open access journal accepted nonsense manuscript

Saturday, November 6th, 2010

The previous post described various commercial open access publishers spamming for book chapters and journal articles. Now I read that Phil Davis at the blog The Scholarly Kitchen sent a SCIgen-generated nonsense manuscript to “The Open Information Science Journal” published by Bentham. Four months later Bentham accepted the paper and offered to publish it for a $800 fee. No reviews were provided with the acceptance letter.

It looks like at least some of these commercial open access publishers are essentially vanity presses that will publish anything for a hefty fee. It is a shame they are doing this, since they may drag down the reputation of reputable open access publishers, such as PLoS. Another danger is that we will see even more use of pseudo-scientific rankings and citation-based measures of journal impact, just so that people can filter out all the meaningless junk. I suppose no one is interested in reading the actual articles any longer.

In defense of open access journals, it should be noted that “Applied Mathematics and Computation”, a journal published by Elsevier since 1975, has apparently also accepted an automatically generated nonsense paper.

Academic spam and open access publishing

Thursday, November 4th, 2010

Judging by how much spam I get nowadays it seems academic open access publishing is lucrative.

I keep getting targeted spam from Bentham, Hindawi, InTech, and others. The strategy seems to be to mine reputable conference and journal papers for email addresses and then use them for targeted spam.

I have now received five emails from open access publisher InTech about a book chapter based on a previously published paper. These guys never give up! This is an excerpt from the last one:

Dear Dr. Kristensson,

We apologize for contacting you again on the matter of your nomination to contribute to the book named in the title of this email, but since we haven’t received an answer from you, we are taking the liberty of contacting you again (you may have been busy or our previous emails may have ended up in your email filters). However, this is the last email you will receive from us. If you can find time, please reply to our previous email which is below:

My name is MSc Iva Lipovic and I am contacting you regarding a new InTech book project under the working title “Speech Technologies”, ISBN: 978-953-307-152-7.

This book will be published by InTech – an Open Access publisher covering the fields of Science, Technology and Medicine.

You are invited to participate in this book project based on your paper “Automatic Selection of Recognition Errors by Respeaking the Intended Text”, your publishing history and the quality of your research. However, we are not asking you to republish your work, but we would like you to prepare a new paper on one of the topics this book project covers.

Why on earth would I spend time and effort writing a book chapter for a random individual I have never heard of and who doesn’t seem to have any credentials whatsoever in the field? And who reads these book chapters? And what exactly is the point of an open access “book chapter”? It sounds like a web page to me, except that I have to pay InTech plenty of money to put it up. I might as well just make the text available on the web myself.

Another open access publisher that likes to send spam is Hindawi. However, what was news to me is that Hindawi now spams on behalf of EURASIP, an organization I thought was reputable (until now):

Dear Dr. Kristensson,

I am writing to invite you to submit an article to “EURASIP Journal on Audio, Speech, and Music Processing,” which provides a rapid forum for the dissemination of original research articles as well as review articles related to the theory and applications of audio, speech, and music processing.

EURASIP Journal on Audio, Speech, and Music Processing is published using an open access publication model, meaning that all interested readers are able to freely access the journal online at without the need for a subscription.

Another example is Bentham, which wants me to write reviews of random patents found via keyword searches (the weirdest concept for a journal I have heard of so far):

Dear Dr. Kristensson,

Bentham Science Publishers has launched a series of innovative
journals publishing review articles on recent patents in major
therapeutic areas of drug discovery as well as biotechnology,
nanotechnology, engineering, computer science and material science
disciplines. Please refer to Bentham Science’s website at for further details.

An exciting journal entitled “Recent Patents on Computer Science
(CSENG)” was launched in January 2008. This journal publishes review
articles written by experts on recent patents in the field of Computer
Science. Please visit the journal‘s website at for the Editorial Board, sample issue,
abstracts of recent issues and other details.

Recent Patents on Computer Science (CSENG) is indexed in Genamics
JournalSeek, Compendex, Scopus

If you would like to submit a review article to the journal on an
important patent area in Computer Science, then please provide us the
title of your proposed article and a tentative date of submission at Moreover in your reply, could you please
suggest some specific keywords, keyword phrases related to your topic,
so that detailed patents may be sent to you for the preparation of
your manuscript.

I keep wondering who is actually editing and reviewing all these journals and books. While they keep spamming me for paper submissions (and lucrative fees after they have accepted the papers), I haven’t received any invitations to do any reviews.

Bibliometrics: The importance of conference papers in computer science

Friday, May 28th, 2010

In this month’s issue of Communications of the ACM there is a paper showing that papers at selective ACM conferences are on par with, or better than, journal articles in terms of citation counts.

From the paper:

“First and foremost, computing researchers are right to view conferences as an important archival venue and use acceptance rate as an indicator of future impact. Papers in highly selective conferences—acceptance rates of 30% or less—should continue to be treated as first-class research contributions with impact comparable to, or better than, journal papers.”

Considering that the authors only compared these conference papers against the top-tier journals (ACM Transactions), their finding is surprisingly strong. It also strengthens my view that in computer science, selective conference papers are as good as, if not better than, journal articles.

British MPs: climate science is OK

Thursday, April 1st, 2010

It seems the results obtained from climate science are indeed reliable. It is amazing that scientists are held in such low regard nowadays that British MPs feel they need to jump in and “investigate” a bunch of leaked internal emails from the University of East Anglia. Yes, the peer-review process has problems; any scientist can probably tell you that. Yet it is so much better than the alternative: a flood of bad articles and uninformed opinions swamping reports of actual scientific progress.

For anyone who quickly wants to know more about why global warming is highly probable, I recommend reading The Economist’s nice summary.

The lack of expertise among peer reviewers in HCI

Tuesday, December 15th, 2009

Peer review is often highlighted as a cornerstone of good scientific practice, at least in engineering and the natural sciences. The logic behind peer review is that peers (i.e. other researchers knowledgeable in your research field) review your manuscript to make sure the research is valid, interesting, cites related work, etc.

However, what if reviewers do not really qualify as your peers? Then this validation process isn’t really something that can be called peer review, is it?

I have been submitting and reviewing research papers for the major human-computer interaction (HCI) conferences for six years now, this year as an associate chair (AC) for CHI 2010. I have to say our peer review process leaves something to be desired. A typical outcome is that 1-2 reviewers are actually experts (peers!) and the remaining 2-3 reviewers have never worked in the submission’s particular research area at all. Sometimes the ignorance is so glaringly obvious it is disheartening. For example, my note at CHI 2009 had two reviewers, who rated themselves “expert” and “knowledgeable” respectively, who argued for rejection because my study “was stating what was already known” [paraphrased]. However, the truth is that the result in this study contradicted what was generally believed in the literature, something I made clear in the rebuttal. In the end the paper was accepted, but it is hard for me to argue that my paper was “peer reviewed”. In this case only one reviewer knew what he or she was talking about, and the rest (including the primary and secondary AC) clearly had no research expertise in the area.

In order to have a paper accepted at CHI, I have found that above everything else you need to educate non-peers about your research area. You can safely assume that several of the reviewers do not know your research area very well at all (sometimes they even rate themselves as having no knowledge of the area). This is a problem because it means that many good papers get rejected for superficial reasons. It also means that many bad papers end up being accepted. The latter tends to happen with well-written, “visually beautiful” papers that either show nothing new or are methodologically invalid. If you are not an expert, you probably won’t spot the subtle methodological flaws that invalidate a paper’s conclusions. Likewise, you won’t realize that the research has already been done much better in a previous paper the authors didn’t know about, or chose not to cite.

CHI tries to fix the issue of reviewer incompetence by having a second stage of the review process: the program committee meeting. However, this is even more flawed, because the associate chairs at the committee meeting cannot possibly represent all research areas. As an example, in my committee I was the only one active in text entry research. Therefore my power to reject or accept a particular submission involving text entry was immense (even though I chose not to exercise this power much). In the committee meeting the primary and secondary AC are supposed to argue for rejection or acceptance of their assigned submissions. However, if your AC is not an expert, he or she will most likely rely completely on the reviewers’ judgments – reviewers who themselves are often non-experts. This means that the one and only expert AC in the committee (if there is even one!) needs to speak up in order to save a good paper from being rejected because of AC/reviewer ignorance. Vice versa, bad papers end up being accepted unless someone speaks up at the committee meeting. There is also a third alternative: an AC who for whatever reason does not like a particular paper can kill it at will by raising superficial concerns. This is possible because most likely there is not enough expertise on a particular paper’s topic in the committee room to properly defend it from such attacks (and the authors obviously have no way to address concerns raised at this late stage of the reviewing process).

I think a useful self-assessment indicator would be to ask each reviewer (including the AC) to indicate how many of the references in the submission they had read before they started to review the paper. In many cases, I strongly suspect, honest reviewers would be forced to state that they haven’t read a single reference in the reference list! Are such reviewers really peers? No!

This problem of non-expertise among reviewers is probably hard to solve. One huge problem is our insistence on viewing conference publications as the primary publication venue, which means the reviewing system is swamped at one particular point in time each year. As an AC I know how hard it is to find competent reviewers when all the well-qualified candidates you can think of are already busy reviewing other work. Publishing in journals with a rapid turnaround process would be an obvious way to spread the reviewing load over the entire year and thereby maximize the availability of expert reviewers at any given point in time. However, to my surprise, this idea meets a lot of resistance, so I am not optimistic that this problem is going away anytime soon.