The Blind Spot: When Peer Review Reproduces Privilege, Not Quality
If a diverse range of scholarly voices is finally to be heard, we must not rely on blind reviewing policies but open our eyes wide.
In the words of the distinguished Argentine world-systems analyst Fernanda Beigel, “[a]cademic publishing is one of the most unequal areas of the circulation of ideas”.
Critical bibliometricians refer to a malevolent “metric tide” that is at the core of this inequality. It has been generated by several forces: a state auditing fetish, the fashion for big data, an all-consuming desire to manage scholars, artificially created competition within and between universities, and the lust for profit of private-sector publishers. Double-blind reviewing is a core ally of that tendency.
Do the metric tide and this alleged blindness make for academic quality and diversity? I suspect they do not; rather, our methods of reviewing for journals in the humanities and social sciences are part of a system that reproduces dominant people, institutions, and forms of thought. They make for sameness and privilege.
The Journal Impact Factor, now so often taken as holy writ and indubitable science, was designed to assist librarians in their purchasing decisions. But bibliometric systems in the Anglosphere fetishize it while largely ignoring databases such as Latindex, Dialnet, and Redib that counter the hegemony of Web of Science and Scopus, powerful commercial databases that are nonetheless vulnerable to unscrupulous practices. (Scopus, for example, has failed in its efforts to exclude problematic publishing.) Yet manuscripts subject to blind review seem, again and again, to be evaluated based in part on whom and what they cite, as weighted by Impact Factor.
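It is worth recalling just how little that number contains. Roughly speaking, and following the standard two-year formulation, a journal’s Impact Factor for year Y is a single ratio:

\[
\mathrm{JIF}_{Y} \;=\; \frac{\text{citations received in year } Y \text{ to items published in } Y-1 \text{ and } Y-2}{\text{citable items published in } Y-1 \text{ and } Y-2}
\]

No reading, no sense of genre or language, and, as the next section shows, every incentive to game the numerator.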
Citation Gaming
I recently agreed to jointly guest-edit a very prominent European journal, only to learn during the process that manuscripts submitted to it were being automatically excluded if they did not involve computational research that cited the journal. Dozens of renowned and emergent authors had their work mechanistically rejected without it ever being sent to us, because only after ensuring that everything met these bizarre “standards” did the journal proceed to review. Scholars who obeyed the requirements had their work dispatched to reviewers who a) themselves did computational work and b) had cited the publication in question. The journal’s reward for this brazen onanism? It is rocketing up the charts, despite the practice’s exposure by Nature and others.
“Postproduction misconduct” is another addition to scholarly sharp practice, along with co-citation agreements entered into by journal editors, and with colleges proposing that authors cite their colleagues wherever possible. It’s all part of “Citation Gaming Induced by Bibliometric Evaluation”. Such self-dealing is rife across universities. And with the increase in blind reviewing has come an increase in such practices. At best, the former has failed to diminish the latter. Why?
“Blind reviewing is not, it seems, blind to reproducing dominant norms of citation.”
“[T]he content of … [an] article has become less important than its metadata,” such that “[e]valuation does not start with reading but with the identification of an article as a bibliographical object,” wrote Mario Biagioli in 2020. For honest researchers, especially progressive and qualitative ones, the implications for academic freedom are dire. “Blind” reviewing is not, it seems, blind to reproducing dominant norms of citation.
For years, critics of this tendency towards mechanistic evaluation have been ignored or ridiculed. But now tens of thousands of organizations and individuals across 145 countries have signed the San Francisco Declaration on Research Assessment (DORA). Sponsored by the Dutch Research Council, the American Society for Cell Biology, Wellcome, the Research Council of Norway, and the Swiss National Science Foundation, among other eminent entities, the Declaration points out that dominant academic ranking systems skew the distribution of citations, treat different genres of writing as identical, and are simultaneously open to manipulation by editorial policy and closed to external scrutiny. DORA proposes eliminating them from faculty hiring and promotion and focusing instead on the quality of research and writing, not the outlets where work appears. In my time on panels of Britain’s Research Excellence Framework and Hong Kong’s Research Grants Council, the watchword has rightly been the quality of work published, not its housing.
Ignorance or Bias?
It is now clear that putatively “blind” reviewing has contributed to a parthenogenetic closed shop. People just keep endorsing approaches, names, methods, and findings on an ethnocentric basis. Consider the data on openness to theorization and research done beyond Europe, the US, and their chorines, such as Australia and Canada. About 90% of social-science authors listed in the Web of Science have addresses in wealthy nations. Across various disciplines, just 4% of articles published between 1975 and 2017 came from the Global South.
“Powerful people cite those who live in their own country, or culturally-similar ones.”
The two hundred highest-ranked journals in the Social Science Citation Index and Web of Science find scholars blissfully ignoring knowledge from the Global South: the citation of such “others” by European and North American researchers over the last thirty years is virtually zero. US authors in particular overwhelmingly mention their compatriots (from 1983 to 1985, the figure was 82.9%; from 1993 to 1995, 80.2%; and from 2003 to 2005, 76.7%). Sometimes they deem Western Europeans worthy of citation (1983-1985: 15.8%; 1993-1995: 18.3%; 2003-2005: 21.9%). But they abjure scholarship from the rest of the world, and English-speaking countries dominate as sources of knowledge. The same pattern exists in the sciences: powerful people cite those who live in their own country, or culturally similar ones. This isn’t about bias against certain people; it’s about ignorance.
I have spent decades recruiting folks to doctoral programs, faculty positions, book series, editorial boards, association committees, and journal issues. I have found that diversity and quality do not derive from simply placing an advertisement or call for papers and going through the usual rigmarole; that reproduces established forces and categories of person.
Such bureaucratic niceties have been invidious for most of the world’s population, historically and today. Unsurprisingly, ideas of rigor invoked in the name of those norms frequently turn off candidates, readers, and authors from emergent discourses and places. To recruit new professors and writers necessitates reaching out to those who are historically underrepresented and saying, “People like you are truly wanted.” That applies to my experiences editing journals, books, and book series for Duke, Sage, Routledge, Peter Lang, Blackwell, De Gruyter, and Minnesota.
Eyes Wide Open
The system is badly broken. Blind reviewing has improved nothing; it is all part of increased bureaucratization and monolingualism. Let’s replace it with a process that requires knowledge to bloom from everywhere, not just Euro- and US-dominated professional associations, editorial boards, and universities.
That will involve not blindness, but eyes wide open. The idea that any kind of blind reviewing diminishes discrimination is not backed up by the facts; it’s a fantasy. For such discrimination does not necessarily stem from reviewers disliking certain social identities, such as gender; it stems from a failure to understand different forms of knowledge, which helps underpin a flawed capitalist and governmental model of control.
Instead, we must reach out to countries and theories that are not among the usual culprits; draw on methods foreign to the norm; and open up reviewing so that scholars are expected to treat the world not only as an object of study, but as a source of understanding.
“Attitudes must change, and the drive to audit and profit be subordinated to diversity and quality.”
Again and again, I review manuscripts for journals and book series that fail to do this; that assume knowledge comes in one language; and that reproduce established categories of expertise. And those authors get published, time after time after time. For when I point out these gaps in knowledge, the response is “That’s not what I’m doing.” Of course it isn’t. And editors accept that. Whether my or the authors’ identities were known or not would make no difference. Attitudes must change, and the drive to audit and profit be subordinated to diversity and quality.
Toby Miller is Editor-in-Chief of the Open Access journal “Open Cultural Studies”
[Title Image by Jeremy Lishner via Unsplash]