As we sought to map out the design and functionality of the PPJ with colleagues at Matrix a few weeks ago, we began to consider how the disciplinary economy of open peer review might be navigated in ways that at once ensure rigor and maximize collegiality.
To do this, it will be important to approach the review process not simply as a means to a final scholarly publication, but as an important scholarly activity in itself. 1
To facilitate this, we intend to assign to each review target a peer review coordinator (PRC) whose responsibilities would include, among other things:
- Identifying reviewers with the requisite expertise;
- Cultivating a climate of collegiality between the reviewers and the author;
- Establishing review criteria, including target-specific review prompts (beyond the standard review prompts adopted by the journal);
- Creating the conditions for a just review, including, if necessary, toggling on reviewer anonymity;
- Facilitating the discussion forum associated with the review to ensure that the most salient and substantive ideas and suggestions have the most influence.
Obviously, with responsibilities like these, it will be necessary (and difficult) to cultivate the requisite habits of digital scholarly communication among the members of the PPJ community. As a start, we envision developing a community of PRCs first among the Philosophy graduate student research assistants at Penn State and Michigan State. But if the PPJ is to be successful, and if we are going to be able to scale up our capacity for open public peer review, we will need to extend our community of coordinators more broadly.
To do this, we envision creating a sophisticated system of credentialing that will be translated into a PPJ user score for each member of the PPJ community. What, precisely, will constitute the PPJ user score will be developed in the months to come in conversation with an emerging community of interested colleagues inside and outside the academy.
However, one measure that should be an important determining factor in the PPJ user score is something we might call one’s “collegiality index.”
Drawing on the work of Hart-Davidson, McLeod, Klerkx, and Wojcik on measuring “helpfulness” in online peer review, we hope to develop a collegiality index. They suggest that a helpful review:
- Describes the rhetorical moves a scholar makes to achieve rhetorical aims;
- Accurately and fairly evaluates the review target; and
- Provides “specific, actionable advice” to improve the target of review. 2
Similarly, we might consider operationalizing the collegiality score according to how well a reviewer is able to:
- Accurately describe what animates the scholarship under review, thus demonstrating a capacity for hermeneutical empathy;
- Evaluate the review target in its own terms, thus demonstrating a capacity for hermeneutical generosity;
- Engage the community in ways that enrich the scholarship under review, thus demonstrating a capacity for hermeneutical transformation.
A community member’s “collegiality index” would be determined over time based on past collegiality scores and would be integrated into the PPJ user profile to become part of the member’s cultivated reputation. The hope is that by integrating a measurable expectation of collegiality into the fabric of the PPJ itself, we will be able to cultivate a Network of Scholarly Practice capable of creating the conditions under which excellent scholarship can be produced and productive scholars can become excellent.
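To make the aggregation concrete, here is a minimal sketch of how a collegiality index might be computed from past scores. The three dimension names follow the list above, but the 0–5 scale, the equal weighting, and the simple averaging scheme are assumptions for illustration only, not the PPJ’s actual design.

```python
# Hypothetical sketch: a collegiality index as the running mean of a
# member's past per-review collegiality scores. Scale, weighting, and
# field names are illustrative assumptions, not the PPJ's design.
from dataclasses import dataclass
from statistics import mean


@dataclass
class ReviewScore:
    empathy: float         # accurately describes what animates the work
    generosity: float      # evaluates the work in its own terms
    transformation: float  # engagement that enriches the scholarship

    def collegiality(self) -> float:
        """Per-review collegiality score: unweighted mean of the three dimensions."""
        return mean([self.empathy, self.generosity, self.transformation])


def collegiality_index(history: list[ReviewScore]) -> float:
    """Index over time: the mean of a member's past collegiality scores."""
    if not history:
        return 0.0
    return mean(score.collegiality() for score in history)


history = [ReviewScore(4.0, 3.5, 3.0), ReviewScore(5.0, 4.5, 4.0)]
print(collegiality_index(history))  # 4.0
```

A real implementation would likely weight recent reviews more heavily and normalize across review targets, but even this simple form shows how the index could accumulate into a member’s profile as part of a cultivated reputation.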
- For the idea that review should be valued “itself as a teachable and learnable activity,” see Hart-Davidson, William, Michael McLeod, Christopher Klerkx, and Michael Wojcik. “A Method for Measuring Helpfulness in Online Peer Review.” In Proceedings of the 28th ACM International Conference on Design of Communication, 115–121. SIGDOC ’10. New York, NY, USA: ACM, 2010, §2.1. doi:10.1145/1878450.1878470.
- Ibid., §3.