Authority 3.0
One of the speakers at the “New Structures, New Texts” summit in early June was Michael Jensen, the director of web communication for the National Academies, as well as the director of publishing technologies for the National Academies Press. His talk was the one that most captured my attention over the course of the day, intersecting as it did with MediaCommons’s key interest in redefining the processes and purposes of peer review.
The talk was based in part on two articles of his, one in the Journal of Electronic Publishing and one that was published shortly after the summit in the Chronicle of Higher Education. It’s this latter piece that I’m most interested in, at the moment, as Jensen here lays side by side the authority models of traditional scholarship (which are based, as he points out, on an assumption of information scarcity) and of “web 2.0” (which are based on information abundance), and attempts to project what the values of “authority 3.0” might be, how it might be computed, and, most crucially, what scholars and institutions need to start thinking about in order to be ready to participate — as authors, as researchers, as evaluators — in such a model.
As Jensen points out, most people talking about things “3.0” today are focused on creating modes of algorithmic filtration and other forms of artificial intelligence in order to cope with increasing information abundance; these technologies will no doubt have powerful effects on the ways that authority — whether scholarly or otherwise — is measured. Included amongst the factors that “authority 3.0” algorithms will likely take into consideration, Jensen indicates, are:
– Prestige of the publisher (if any).
– Prestige of peer prereviewers (if any).
– Prestige of commenters and other participants.
– Percentage of a document quoted in other documents.
– Raw links to the document.
– Valued links, in which the values of the linker and all his or her other links are also considered.
– Obvious attention: discussions in blogspace, comments in posts, reclarification, and continued discussion.
– Nature of the language in comments: positive, negative, interconnective, expanded, clarified, reinterpreted.
– Quality of the context: What else is on the site that holds the document, and what’s its authority status?
– Percentage of phrases that are valued by a disciplinary community.
– Quality of author’s institutional affiliation(s).
– Significance of author’s other work.
– Amount of author’s participation in other valued projects, as commenter, editor, etc.
– Reference network: the significance rating of all the texts the author has touched, viewed, read.
– Length of time a document has existed.
– Inclusion of a document in lists of “best of,” in syllabi, indexes, and other human-selected distillations.
– Types of tags assigned to it, the terms used, the authority of the taggers, the authority of the tagging system.
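To make a bit more concrete how an algorithm might combine factors like these, here’s a toy sketch of my own devising (not Jensen’s formula, and certainly not a working system); every metric name, normalization, and weight below is hypothetical:

```python
# An illustrative sketch only: a toy "authority 3.0" score built as a weighted
# combination of a few of the factors Jensen lists. The metric names, the
# normalizations, and the weights are all hypothetical; a real system would
# have to compute and calibrate them from actual usage data.
from dataclasses import dataclass

@dataclass
class DocumentMetrics:
    publisher_prestige: float    # 0-1, prestige of the publisher (if any)
    prereviewer_prestige: float  # 0-1, prestige of peer prereviewers (if any)
    commenter_prestige: float    # 0-1, average authority of commenters
    valued_links: int            # inbound links from valued linkers
    quoted_fraction: float       # fraction of the document quoted elsewhere
    syllabus_inclusions: int     # appearances in syllabi, "best of" lists, indexes
    age_in_years: float          # length of time the document has existed

# Hypothetical weights; nothing about these numbers is principled.
WEIGHTS = {
    "publisher_prestige": 0.15,
    "prereviewer_prestige": 0.15,
    "commenter_prestige": 0.20,
    "valued_links": 0.20,
    "quoted_fraction": 0.10,
    "syllabus_inclusions": 0.15,
    "age_in_years": 0.05,
}

def authority_score(m: DocumentMetrics) -> float:
    """Combine crudely normalized metrics into a single 0-1 authority estimate."""
    # Rough normalizations so every factor lands in [0, 1] before weighting.
    normalized = {
        "publisher_prestige": m.publisher_prestige,
        "prereviewer_prestige": m.prereviewer_prestige,
        "commenter_prestige": m.commenter_prestige,
        "valued_links": min(m.valued_links / 100.0, 1.0),
        "quoted_fraction": m.quoted_fraction,
        "syllabus_inclusions": min(m.syllabus_inclusions / 10.0, 1.0),
        "age_in_years": min(m.age_in_years / 10.0, 1.0),
    }
    return sum(WEIGHTS[k] * v for k, v in normalized.items())

# Entirely made-up example document:
example = DocumentMetrics(
    publisher_prestige=0.8, prereviewer_prestige=0.0, commenter_prestige=0.6,
    valued_links=42, quoted_fraction=0.05, syllabus_inclusions=3, age_in_years=2.0,
)
print(round(authority_score(example), 3))
```

The weights are arbitrary, and that is rather the point: the interesting questions are who gets to set them, how they are calibrated, and how easily they might be gamed.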
I’m particularly interested in the inclusion of “peer prereviewers” (note the “pre” in that designation) as only one entry in a long list of metrics, and a caveated one (“if any”) at that. MediaCommons is, as we discussed at some length at this spring’s editorial board meeting, interested in developing a mode of “peer-to-peer review” that would take into account both a qualitative assessment of the comments made on a scholarly text and more web-native metrics such as links, downloads, tagging, and so forth. Implicit in this model, however, is a sense that the most important thing we’ll be working on, in developing peer-to-peer review, is a schema for “reviewing the reviewers”: determining not just the authority of a text but the authority of the commentary on that text.
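One very rough way to picture what “reviewing the reviewers” might mean computationally (again, a purely hypothetical sketch of my own, not any actual MediaCommons design) is to weight each comment’s contribution to a text’s score by its author’s own standing, derived from how the community has valued that author’s previous commentary:

```python
# A purely illustrative sketch of "reviewing the reviewers": each comment's
# contribution to a text's score is weighted by the commenter's own authority,
# which is in turn built from how the community has rated that person's past
# comments. All names and numbers are hypothetical.
from collections import defaultdict

# Community ratings of comments: (commenter, text, rating in [0, 1]).
ratings = [
    ("reviewer_a", "essay_1", 0.9),
    ("reviewer_b", "essay_1", 0.4),
    ("reviewer_a", "essay_2", 0.8),
]

def reviewer_authority(ratings):
    """A reviewer's authority = mean community rating of their past comments."""
    totals, counts = defaultdict(float), defaultdict(int)
    for reviewer, _text, rating in ratings:
        totals[reviewer] += rating
        counts[reviewer] += 1
    return {r: totals[r] / counts[r] for r in totals}

def text_score(text, ratings):
    """A text's score = average of its ratings, weighted by reviewer authority."""
    authority = reviewer_authority(ratings)
    relevant = [(r, score) for r, t, score in ratings if t == text]
    if not relevant:
        return None
    weighted = sum(authority[r] * score for r, score in relevant)
    return weighted / sum(authority[r] for r, _ in relevant)

print(round(text_score("essay_1", ratings), 3))
```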
As MediaCommons moves forward, we’re hoping to provide the tools for scholars to have a hand in developing such systems. Right now, we’ve got a number of “sociable” bookmarking buttons that appear beneath both blog posts and In Media Res entries, and we’re working to ensure that the various forms of metadata (including COinS) important to tracking the life of electronic documents are embedded in everything we publish. As with everything, however, we need significant user input to ensure that the technological network we’re building develops in concert with the human network it will serve. So: what are the metrics we need to include, both in the review of texts and in the review of the reviewers? How should those metrics themselves be evaluated? What is at stake for members of the network (whether authors, researchers, or more casual readers) in the inclusion and contextualization of those metrics? And what do we need to do, now, to communicate to our institutions that this is, in fact, the future of scholarly authority, and thus a model of assessment that must be taken seriously by hiring, retention, and promotion committees?
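A brief technical aside, for readers curious about the COinS mention above: COinS embeds OpenURL citation metadata in an otherwise empty HTML span with the class “Z3988,” where tools such as citation managers can discover it. A rough sketch of how such a span might be generated, with entirely hypothetical field values, looks something like this:

```python
# A rough sketch (not MediaCommons's actual implementation) of generating a
# COinS span for a hypothetical journal article, so that citation-aware tools
# can pick up the metadata from the page.
from urllib.parse import urlencode
from html import escape

def coins_span(title, author, journal, date):
    """Build a COinS <span> carrying OpenURL citation metadata."""
    fields = {
        "ctx_ver": "Z39.88-2004",                       # OpenURL ContextObject version
        "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",  # metadata format: journal article
        "rft.genre": "article",
        "rft.atitle": title,
        "rft.jtitle": journal,
        "rft.au": author,
        "rft.date": date,
    }
    # The metadata lives entirely in the title attribute of an empty span.
    return '<span class="Z3988" title="%s"></span>' % escape(urlencode(fields))

# Hypothetical example entry:
print(coins_span("A Hypothetical Essay on Scholarly Authority",
                 "Doe, Jane", "Journal of Electronic Publishing", "2007"))
```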