The Quantity Problem with Peer Review in the Sciences

Jack Hitt’s recent article in the New York Times, touting the virtues of crowdsourcing peer review via public comments on to-be-published or just-published scientific research, prompts me to offer a few thoughts on the problems with traditional peer review in a discipline I have had some exposure to in the past: computer science. In this post, I will concentrate on the forum for publication provided by academic meetings such as conferences and workshops.

First and foremost, far too much material is submitted for publication. There are thousands of computer science conferences and workshops held annually. (This is not an exaggeration.) The reviewing for these events is carried out by the program committee, a loosely organized group of academics brought together to run the event. Some members of the program committee constitute the original brains trust for the conference; others are invited to serve both to increase the star rating of the conference (a conference’s quality is often gauged by the pedigree visible in its program committee) and to aid in the reviewing of submissions.

When submissions arrive, they are parceled out to the program committee for reviewing. In some cases, to ensure a more nuanced review, papers are assigned to more than one member of the committee; sometimes, however, a paper might receive only one review. This stage of the reviewing is often single-blind: the name of the author is known to the referee, but the referee remains anonymous to the author. In larger conferences, the reviewing is double-blind.

Academic schedules mean, inevitably, that the program committee member is over-committed: he has signed up for many academic events and accepted as many invitations as he can, all in a rush to add lines to the CV, to increase his visibility in the community, to network. But now conference submissions are in the inbox, demanding careful, sincere, and honest review.

Unsurprisingly, the committee member is late with his reviews. The program committee chair sends out reminder emails; the harried committee member rushes off to review the paper, gives it a perfunctory reading, and writes a brief summary and critique. This review is almost invariably a superficial assessment, and unsurprisingly, a great deal of garbage gets past the gatekeepers. Sometimes the committee member will outsource or subcontract the reviewing to a PhD student or a colleague. PhD students can be either very harsh or very mild reviewers: the former type bristles with aggression, eager to show off his newly acquired knowledge; the latter, diffidently taking his first steps into the professional academic world, hesitates to make critical judgments.

Sometimes a workshop or a conference will not receive enough submissions. Panic sets in among the program committee; the conference’s viability is threatened. Instructions go out to committee members: ‘Accept papers if they will spark discussion; accept them if they show some promise; accept them even if many call-for-papers guidelines are not met’. The conference is held; the less said about the quality of the papers presented, the better.

In computer science, publications in the proceedings of ‘premier’ conferences confer considerable prestige and are valuable additions to CVs; paper acceptances are a desired commodity. Interestingly enough, at the premier conferences, attendance lists are often made up of the usual suspects. This is ensured partly by the quality of the papers and partly by the established authority of the authors. Double-blind reviewing isn’t really ‘blind’; it is quite easy to determine the identity of authors by inspecting the writing style, the subject matter (i.e., favorite hobby horses), and sometimes even the formatting of mathematical symbols. (One research group always uses MS Word to format its papers, as opposed to LaTeX; another uses idiosyncratic symbols for mathematical operators.) A not-so-confident reviewer, confronted with a paper written by an ‘authority’, holds fire, and the paper makes it through. Another, knowing that the paper was written by an ‘authority’, lets it go through because ‘it must be good’; still others support ‘friendly’ research groups.

That last point brings us to the ‘paradigm problem’. Fields of research often feature paradigms jostling for preeminence. Thus, reviewers are sometimes disinclined to favorably assess papers cast in the frameworks of competing paradigms but only too happy to enthusiastically accept those that show their own favored paradigm in a good light. Stories abound of academics who have experienced great difficulty in suggesting alternative frameworks to established paradigms that have cornered the market on conference committees and submissions.

Little can be done about the volume of research submitted for review. The modern academy has placed its members on the writing and publishing treadmill; like obedient children, confronted with the promotion and tenure process, academics comply.

But a great deal can be done about the reviewing process. More on that in a future post.

Note: This post is merely a cleaned-up version of a post written on an older, now-defunct blog (Decoding Liberation, named after the book). I’m reposting it here because I wanted to reiterate my older worries, and to set the stage for some soon-to-follow thoughts on crowdsourcing peer review.
