Open Access Publishing and Scholarly Values (part three)

There’s a fascinating exchange developing right now around open access publishing and the reasons scholars might resist it, beginning with Dan Cohen’s post, Open Access Publishing and Scholarly Values, which he wrote for the Hacking the Academy volume, a crowd-sourced book he and Tom Scheinfeldt are editing (to be published by the University of Michigan Press’s Digital Culture Books). Dan argues for the ethical, as well as the practical, imperative for contemporary scholars to publish their work in openly distributed forms and venues.

Stephen Ramsay then published a response, Open Access Publishing and Scholarly Values (continued), in which he points out that the ways we substitute what we now understand as “peer review” for real evaluation and judgment by our peers, particularly at the stage of tenure and promotion reviews, so overwhelm this ethical/practical imperative that we never really get to the point of deciding whether publishing openly could be a good thing or not.

I’ve left a comment on that response, which got lengthy enough that I thought I’d reproduce and expand upon it here. Steve writes, in the closing paragraphs of his post:

The idea of recording “impact” (page hits, links, etc.) is often ridiculed as a “popularity contest,” but it’s not at all clear to me how such a system would be inferior to the one we have. In fact, it would almost certainly be a more honest system (you’ll notice that “good publisher” is very often tied to the social class represented by the sponsoring institution).

My response to this passage begins with a big “amen.” At many institutions, in fact, the criteria for assessing a scholar’s research for tenure and promotion include some statement about that scholar’s “impact” on the field at a national or international level, and we treat the peer-review process as though it can give us information about such impact. But the fact of an article’s or a monograph’s having been published by a reputable journal or press that employed the mechanisms of peer review as we currently know them can only ever give us binary information, and binary information based on an extraordinarily small sample size. Why should the two or three readers selected by a journal or press, plus that entity’s editor or editorial board, be the arbiters of the authority of scholarly work, particularly in the digital age, when we have so many more complex means of assessing the effect of, and response to, scholarly work via network analysis?

I don’t mean to suggest that going quantitative is anything like the answer to our current problems with assessment in promotion and tenure reviews; our colleagues in the sciences would no doubt present us with all kinds of cautions about relying too exclusively on metrics like citation indexes and impact factor. But given that we in the digital humanities excel both at uncovering the networked relationships among texts and at interpreting and articulating what those relationships mean, couldn’t we bring those skills to bear on creating a more productive form of post-publication review, one that richly and carefully describes the ongoing impact a scholar’s work is having, regardless of the venue and type of its publication? If so, some of the roadblocks to a broader acceptance of open access publication might be broken down, or at least rendered break-down-able.
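
To make that gesture toward network analysis a bit more concrete, here is a minimal, purely hypothetical sketch (in Python, with invented works and invented links, not any existing system or dataset) of how post-publication impact might be read off a citation or link network rather than off the prestige of a venue: raw incoming links alongside a simple PageRank-style score, so that notice from widely cited work counts for more than notice from uncited work.

```python
# Hypothetical sketch only: invented works and links, two crude "impact"
# signals read from network structure rather than from publication venue.
from collections import defaultdict

# A toy directed graph: each work maps to the works it cites or links to.
links = {
    "article_A": ["monograph_B", "blog_post_C"],
    "blog_post_C": ["monograph_B"],
    "article_D": ["article_A", "monograph_B"],
    "monograph_B": [],
}

# Signal 1: raw incoming citations/links (the bluntest possible count).
in_degree = defaultdict(int)
for source, targets in links.items():
    for target in targets:
        in_degree[target] += 1

# Signal 2: a basic PageRank, so a link from a much-cited work
# counts for more than a link from an uncited one.
def pagerank(graph, damping=0.85, iterations=50):
    nodes = list(graph)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new_rank = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for source, targets in graph.items():
            if not targets:  # a work that cites nothing spreads its weight evenly
                for n in nodes:
                    new_rank[n] += damping * rank[source] / len(nodes)
            else:
                for target in targets:
                    new_rank[target] += damping * rank[source] / len(targets)
        rank = new_rank
    return rank

for work, score in sorted(pagerank(links).items(), key=lambda kv: -kv[1]):
    print(f"{work}: {in_degree[work]} incoming links, PageRank {score:.3f}")
```

The point is not that numbers like these should decide anyone’s tenure case; it’s that this kind of structural evidence is cheap to gather and could feed the richer, more carefully interpreted description of impact I’m imagining above.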

There seem to me to be two key imperatives in the implementation of such a system, however, which get at the personnel review issues that Steve is pointing to. The first is that senior, tenured scholars have got to lead the way, not just in demanding the development and acceptance of such a system but in making use of it, committing ourselves to publishing openly because we can and worrying about the “authority” or the prestige of such publishing models later. The second is that we have got to present compelling arguments to our colleagues about why these models must be taken seriously, not just once but over and over again, making sure that we’ve got the backs of the more junior scholars who are similarly trying to do this work.

It comes back to the kinds of ethical obligation that both Dan and Steve are writing about; for the reasons Steve articulates, though, the obligation can’t stop with publishing in open access venues, but must extend to working to develop and establish the validity of new means of assessment appropriate to those venues.
