The interplay between the reviewer’s incentives and the journal’s quality standard

Abstract

In this paper, we study the reviewer’s compensation problem in the presence of quality standard considerations. We examine a typical scenario in which a journal has to match the uncertain manuscript quality with a specified quality standard, but it imperfectly observes the reviewer’s effort. Because of this imperfect observability, the journal cannot verify the effort behind a review directly and must infer it from the review outcome, which it observes only up to its own quality standard. We find that the journal always chooses an incentive scheme that rewards the reviewer for achieving the highest quality outcome. However, this can lead to an inefficiency when the journal’s quality standard is below the highest possible quality outcome, because reviewers usually seek to ensure that the manuscript’s quality acceptably matches the journal’s standard. Therefore, to improve the observability of the review outcome achieved and to obtain a better signal of the reviewer’s effort, the journal can have an incentive to increase the quality standard. This, however, is beneficial only for non-extreme costs. In addition, we find that, in order to motivate the reviewer to work hard when high-quality review outcomes are imperfectly observed due to a limited quality standard, the journal must give a larger reward to the reviewer. In sum, we show that a failure to observe the reviewer’s effort motivates higher quality standards, and quality standard considerations lead to higher-powered reviewer compensation.

Acknowledgements

This research was sponsored by the Spanish Board for Science, Technology, and Innovation under grant PID2020-112579GB-I00, and co-financed with European FEDER funds. We would like to thank the reviewers for their thoughtful comments and efforts towards improving our manuscript.

Author information

Corresponding author

Correspondence to J. A. Garcia.

Appendices

Appendix A: For the Benchmark scenario, the journal’s standard must be either high quality (H) or medium quality (M)

For the Benchmark scenario, the scholarly journal solves for the optimal value of the quality standard \(Q_b\) that maximizes the journal’s net profit:

$$\begin{aligned} \max _{M \le Q_b \le H} \left\{ \sum _{s \in \lbrace L, M, H \rbrace } p_s r \min \lbrace Q_b, s \rbrace - c Q_b - \psi \right\} \end{aligned}$$
(15)

Then the optimal quality standard \(Q_b\) must be either high quality (H) or medium quality (M). To see this, suppose that the journal chooses a quality standard \(Q_b\) intermediate between M and H, i.e., \(M< Q_b <H\). Then the journal’s expected profit is

$$\begin{aligned} (p_\mathrm{H} r -c) Q_b + p_\mathrm{M} r M + p_\mathrm{L} r L - \psi \end{aligned}$$

Therefore, if \(p_\mathrm{H} r -c <0\), this profit is dominated by choosing a medium-quality journal standard \(Q_b =M\), and if \(p_\mathrm{H} r -c \ge 0\), it is weakly dominated by choosing a high-quality journal standard \(Q_b =H\). Note that the dominance is strict unless \(p_\mathrm{H} r -c = 0\).
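
As a quick numerical illustration of this endpoint argument, the following minimal Python sketch evaluates the profit in (15) on a grid over \([M, H]\); all numerical values (the reward rate r, the cost c, the effort cost psi, the quality levels, and the high-effort probabilities) are hypothetical and not taken from the paper.

# Hypothetical parameter values for illustration only (not taken from the paper)
r, c, psi = 1.0, 0.3, 0.05          # reward rate, marginal standard cost, effort cost
L, M, H = 1.0, 2.0, 3.0             # low, medium, high quality levels
p = {"L": 0.2, "M": 0.4, "H": 0.4}  # outcome probabilities under high effort

def profit(Q):
    # Journal's expected net profit for a quality standard Q, as in Eq. (15)
    revenue = sum(p[s] * r * min(Q, v) for s, v in (("L", L), ("M", M), ("H", H)))
    return revenue - c * Q - psi

grid = [M + k * (H - M) / 100 for k in range(101)]
best = max(grid, key=profit)
# The profit is linear in Q on [M, H] with slope p_H*r - c, so the maximizer
# is an endpoint: M when p_H*r - c < 0 and H when p_H*r - c >= 0.
print(best, p["H"] * r - c)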

Appendix B: A sufficient condition on the reviewer’s effort cost

Following Dai and Jerath (2013), to ensure that the journal has the incentive to induce high effort \(e_\mathrm{H}\) rather than low effort \(e_\mathrm{L}\), we need the journal’s profit under the optimal quality standard \(Q_{e_\mathrm{H}}\) when the reviewer exerts high effort \(e_\mathrm{H}\),

$$\begin{aligned} r \sum _{s \in \lbrace L, M, H \rbrace } p_s \min \lbrace Q_{e_\mathrm{H}}, s \rbrace - c Q_{e_\mathrm{H}} - \psi \end{aligned}$$

to be greater than or equal to the journal’s profit under the optimal quality standard \(Q_{e_\mathrm{L}}\) when the reviewer exerts low effort \(e_\mathrm{L}\),

$$\begin{aligned} r \sum _{s \in \lbrace L, M, H \rbrace } q_s \min \lbrace Q_{e_\mathrm{L}}, s \rbrace - c Q_{e_\mathrm{L}} \end{aligned}$$

or equivalently

$$\begin{aligned} \psi \le r \sum _{s \in \lbrace L, M, H \rbrace } [p_s \min \lbrace Q_{e_\mathrm{H}}, s \rbrace - q_s \min \lbrace Q_{e_\mathrm{L}}, s \rbrace ] - c (Q_{e_\mathrm{H}} - Q_{e_\mathrm{L}}). \end{aligned}$$

Now we can prove that \(r [ (p_\mathrm{H}+p_\mathrm{M} - q_\mathrm{H} -q_\mathrm{M})M + (p_\mathrm{L} -q_\mathrm{L})L ] - c(H-L)\) is a lower bound of the right-hand side of the above inequality. To see this, taking into account that the optimal quality standard \(Q_{e_\mathrm{L}}\) (if the reviewer exerts low effort \(e_\mathrm{L}\)) is less than the optimal quality standard \(Q_{e_\mathrm{H}}\) (if the reviewer exerts a high effort \(e_\mathrm{H}\)), i.e., \(Q_{e_\mathrm{L}} < Q_{e_\mathrm{H}}\), we obtain:

$$\begin{aligned}&r \sum _{s \in \lbrace L, M, H \rbrace } [p_s \min \lbrace Q_{e_\mathrm{H}}, s \rbrace - q_s \min \lbrace Q_{e_\mathrm{L}}, s \rbrace ] - c (Q_{e_\mathrm{H}} - Q_{e_\mathrm{L}}) \\&\quad > r \sum _{s \in \lbrace L, M, H \rbrace } (p_s -q_s) \min \lbrace Q_{e_\mathrm{H}}, s \rbrace - c (H-L) \end{aligned}$$

since \(Q_{e_\mathrm{L}} < Q_{e_\mathrm{H}}\), \(L\le Q_{e_\mathrm{L}}\), and \(Q_{e_\mathrm{H}} \le H\), and

$$\begin{aligned}&r \sum _{s \in \lbrace L, M, H \rbrace } (p_s -q_s) \min \lbrace Q_{e_\mathrm{H}}, s \rbrace - c (H-L) \\&\quad \ge r \sum _{s \in \lbrace L, M, H \rbrace } (p_s -q_s) \min \lbrace M, s \rbrace - c (H-L) \\&\quad = r [ (p_\mathrm{H}+p_\mathrm{M} - q_\mathrm{H} -q_\mathrm{M})M + (p_\mathrm{L} -q_\mathrm{L})L ] - c(H-L) \end{aligned}$$

since \(Q_{e_\mathrm{H}} \ge M\).

Therefore, the following condition on the reviewer’s effort cost \(\psi\) is sufficient to ensure that the journal wants to motivate high effort \(e_\mathrm{H}\) from the reviewer:

$$\begin{aligned} \psi \le r [ (p_\mathrm{H}+p_\mathrm{M} - q_\mathrm{H} -q_\mathrm{M})M + (p_\mathrm{L} -q_\mathrm{L})L ] - c(H-L). \end{aligned}$$
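
The following minimal Python sketch checks this lower bound numerically by brute force, with the optimal standards restricted to the outcome levels themselves; all probabilities and parameter values are hypothetical and not taken from the paper.

# Hypothetical parameter values for illustration only (not taken from the paper)
r, c = 1.0, 0.3
levels = {"L": 1.0, "M": 2.0, "H": 3.0}
p = {"L": 0.2, "M": 0.4, "H": 0.4}  # outcome probabilities under high effort
q = {"L": 0.5, "M": 0.3, "H": 0.2}  # outcome probabilities under low effort

def opt_standard(dist):
    # Optimal quality standard for a given outcome distribution,
    # searched over the outcome levels {L, M, H}
    return max(levels.values(),
               key=lambda Q: sum(dist[s] * r * min(Q, v) for s, v in levels.items()) - c * Q)

Q_eH, Q_eL = opt_standard(p), opt_standard(q)
rhs = (r * sum(p[s] * min(Q_eH, v) - q[s] * min(Q_eL, v) for s, v in levels.items())
       - c * (Q_eH - Q_eL))
L, M, H = levels["L"], levels["M"], levels["H"]
lower = r * ((p["H"] + p["M"] - q["H"] - q["M"]) * M + (p["L"] - q["L"]) * L) - c * (H - L)
# Any effort cost psi <= lower therefore also satisfies psi <= rhs
print(rhs >= lower)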

Appendix C: The reviewer’s incentive should be given only for the best review outcome

Firstly, we consider the case when the journal chooses a quality standard \(Q > M\). In that case, if the journal only pays a positive incentive for the highest quality outcome of the review process, then the reviewer’s expected compensation is \(\frac{\psi }{1-q_\mathrm{H}/p_\mathrm{H}}\).

Suppose that the journal uses a different incentive scheme in which the reviewer receives a reward \(t_s \ge 0\) for \(s \in \{H,M,L\}\), with at least one of \(t_\mathrm{M}\) and \(t_\mathrm{L}\) positive. Following Dai and Jerath (2013), we must prove that this second scheme is dominated by the first one.

For the second incentive strategy, the incentive compatibility constraint that ensures that the reviewer chooses a high effort level \(e_\mathrm{H}\) rather than a low effort level \(e_\mathrm{L}\) is

$$\begin{aligned} t_\mathrm{H}p_\mathrm{H} + t_\mathrm{M} p_\mathrm{M} + t_\mathrm{L} p_\mathrm{L} - \psi \ge t_\mathrm{H} q_\mathrm{H} + t_\mathrm{M} q_\mathrm{M} + t_\mathrm{L} q_\mathrm{L} \end{aligned}$$

which gives

$$\begin{aligned} t_\mathrm{H} \ge \frac{\psi - \epsilon _\mathrm{M} - \epsilon _\mathrm{L}}{p_\mathrm{H}-q_\mathrm{H}} \end{aligned}$$

where \(\epsilon _s = t_s (p_s -q_s)\) for \(s \in \{M,L\}\).

Then, it follows that the reviewer’s expected incentive is

$$\begin{aligned}&\sum _{s \in \lbrace L, M, H \rbrace } p_s t_s \\&\quad \ge p_\mathrm{H} \frac{\psi - \epsilon _\mathrm{M} - \epsilon _\mathrm{L}}{p_\mathrm{H}-q_\mathrm{H}} + p_\mathrm{M} \frac{\epsilon _\mathrm{M} }{p_\mathrm{M}-q_\mathrm{M}} + p_\mathrm{L} \frac{\epsilon _\mathrm{L} }{p_\mathrm{L}-q_\mathrm{L}} \\&\quad = (\psi - \epsilon _\mathrm{M} - \epsilon _\mathrm{L})\frac{1}{1-q_\mathrm{H}/p_\mathrm{H}} + \epsilon _\mathrm{M} \frac{1}{1-q_\mathrm{M}/p_\mathrm{M}} + \epsilon _\mathrm{L} \frac{1}{1-q_\mathrm{L}/p_\mathrm{L}} \\&\quad > (\psi - \epsilon _\mathrm{M} - \epsilon _\mathrm{L})\frac{1}{1-q_\mathrm{H}/p_\mathrm{H}} + \epsilon _\mathrm{M} \frac{1}{1-q_\mathrm{H}/p_\mathrm{H}} + \epsilon _\mathrm{L} \frac{1}{1-q_\mathrm{H}/p_\mathrm{H}} = \psi \frac{1}{1-q_\mathrm{H}/p_\mathrm{H}}\end{aligned}$$

where the last inequality follows from the monotone likelihood ratio property (MLRP)

$$\begin{aligned} \frac{p_\mathrm{L}}{q_\mathrm{L}}< \frac{p_\mathrm{M}}{q_\mathrm{M}} < \frac{p_\mathrm{H}}{q_\mathrm{H}}. \end{aligned}$$
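
A minimal numerical sketch of this dominance argument, under hypothetical probabilities that satisfy the MLRP (none of these values come from the paper): the mixed scheme's expected compensation always exceeds that of the scheme rewarding only the highest outcome.

# Hypothetical probabilities satisfying the MLRP (illustration only)
psi = 0.05
p = {"L": 0.2, "M": 0.4, "H": 0.4}  # high-effort outcome distribution
q = {"L": 0.5, "M": 0.3, "H": 0.2}  # low-effort outcome distribution

# Expected compensation when only the highest outcome H is rewarded
h_only = psi / (1 - q["H"] / p["H"])

# A mixed scheme: choose positive t_M and t_L, then set t_H at its IC lower bound
t_M, t_L = 0.02, 0.01
t_H = (psi - t_M * (p["M"] - q["M"]) - t_L * (p["L"] - q["L"])) / (p["H"] - q["H"])
mixed = p["H"] * t_H + p["M"] * t_M + p["L"] * t_L
print(h_only, mixed, mixed > h_only)   # the mixed scheme is strictly more expensive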

Secondly, we consider the case when the journal chooses a quality standard \(Q = M\). In that case, the journal cannot distinguish the outcomes H and M, since both meet the standard. If the journal pays a positive incentive only when the review outcome meets the medium quality standard, then the reviewer’s expected compensation is \(\frac{\psi }{1-(q_\mathrm{H}+q_\mathrm{M})/(p_\mathrm{H}+ p_\mathrm{M})}\). Suppose that the journal uses a different incentive scheme in which the reviewer receives a reward \(t_s > 0\) for \(s \in \{M,L\}\). Again, we have to prove that the second scheme is dominated by the first one.

For the second incentive strategy in this new case, the incentive compatibility constraint that ensures that the reviewer chooses a high effort level \(e_\mathrm{H}\) rather than a low effort level \(e_\mathrm{L}\) is

$$\begin{aligned} (p_\mathrm{H} + p_\mathrm{M})t_\mathrm{M} + t_\mathrm{L} p_\mathrm{L} - \psi \ge (q_\mathrm{H} + q_\mathrm{M})t_\mathrm{M} + t_\mathrm{L} q_\mathrm{L} \end{aligned}$$

which gives

$$\begin{aligned} t_\mathrm{M} \ge \frac{\psi - \epsilon _{L}}{(p_\mathrm{H}+p_\mathrm{M})-(q_\mathrm{H}+q_\mathrm{M})} \end{aligned}$$

where \(\epsilon _{L} = t_\mathrm{L} (p_\mathrm{L} -q_\mathrm{L})\).

Therefore, in this second case, the reviewer’s expected incentive is

$$\begin{aligned}&(p_\mathrm{H} +p_\mathrm{M}) t_\mathrm{M} + p_\mathrm{L} t_\mathrm{L} \\&\quad \ge (p_\mathrm{H} + p_\mathrm{M}) \frac{\psi - \epsilon _{L}}{(p_\mathrm{H}+p_\mathrm{M})-(q_\mathrm{H}+q_\mathrm{M})} + p_\mathrm{L} \frac{\epsilon _\mathrm{L} }{p_\mathrm{L}-q_\mathrm{L}} \\&\quad = (\psi - \epsilon _\mathrm{L})\frac{1}{1-(q_\mathrm{H}+q_\mathrm{M})/(p_\mathrm{H}+p_\mathrm{M})} + \epsilon _\mathrm{L} \frac{1}{1-q_\mathrm{L}/p_\mathrm{L}} \\&\quad > (\psi - \epsilon _\mathrm{L})\frac{1}{1-(q_\mathrm{H}+q_\mathrm{M})/(p_\mathrm{H}+p_\mathrm{M})} + \epsilon _\mathrm{L} \frac{1}{1-(q_\mathrm{H}+q_\mathrm{M})/(p_\mathrm{H}+p_\mathrm{M})} = \psi \frac{1}{1-(q_\mathrm{H}+q_\mathrm{M})/(p_\mathrm{H}+p_\mathrm{M})} \end{aligned}$$

where the last inequality again follows from the MLRP, and the final expression is exactly the reviewer’s expected compensation under the first scheme.
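
An analogous sketch for the case \(Q = M\), again with hypothetical values (the same illustrative probabilities as above, none taken from the paper):

# Hypothetical probabilities satisfying the MLRP (illustration only)
psi = 0.05
p = {"L": 0.2, "M": 0.4, "H": 0.4}
q = {"L": 0.5, "M": 0.3, "H": 0.2}

# Expected compensation when the reward is paid whenever the standard M is met
m_only = psi / (1 - (q["H"] + q["M"]) / (p["H"] + p["M"]))

# A mixed scheme: choose a positive t_L, then set t_M at its IC lower bound
t_L = 0.01
t_M = (psi - t_L * (p["L"] - q["L"])) / ((p["H"] + p["M"]) - (q["H"] + q["M"]))
mixed = (p["H"] + p["M"]) * t_M + p["L"] * t_L
print(m_only, mixed, mixed > m_only)   # again the mixed scheme costs more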

In practice, many academic publishers (e.g., Springer) now routinely keep records of each reviewer’s review history in the journal’s editorial management system. This information includes current review statistics, historical reviewer invitation statistics, a summary of historical reviewer performance, reviewing variables scored by the assigned editors, historical reviewer averages, and so on. Publishers’ evaluations are usually related to the readability, downloads, and marketability of their journals.

Nevertheless, there have been some attempts to evaluate reviewers on the quality of their peer-reviews (Van Rooyen et al. 1999; Mavrogenis et al. 2019; Messias et al. 2017). For example, Van Rooyen et al. (1999) described the Review Quality Instrument (RQI) for reviewers. The RQI is an easy-to-calculate, seven-item instrument that assesses whether the reviewer discussed the importance of the research question, the originality of the manuscript, and the strengths and weaknesses of the methods, commented on the writing and presentation and on the interpretation of the results, offered constructive criticism, and supported the comments with evidence. Van Rooyen et al. (1999) concluded that reviewers performed best on aspects that help authors improve the quality of their manuscripts, and less well on aspects that help editors select papers (such as the originality of the research), which is unsurprising because most reviewers are experienced as authors rather than as editors. Open peer commentary, open peer-review, and post-publication peer-review have also been suggested (Van Rooyen et al. 1999): reviewers would record authors’ complaints, authors would formally reply to all reviewers’ comments, and their replies would be published in their entirety for all readers of the journal to see and appraise. In these situations, non-anonymous peer-review is recommended, on the basis that reviewers who have to sign their peer-reviews may put in more effort and produce better analyses (Van Rooyen et al. 1999).

Also, Messias et al. (2017) described a Reviewer Index (RI). This index is a ratio that reflects the reliability of reviewers, as measured by the number of peer-reviews completed, the time from peer-review acceptance to submission, and the editor’s score (cubed).

To evaluate who the best reviewers are and to determine whether any characteristics tend to predict the quality of their peer-reviews, Mavrogenis et al. (2019) routinely review the peer-reviews performed in their journal by the reviewers included in the journal’s reviewers’ panel. They evaluate the time taken to reply to an invitation, the numbers of accepted assignments and returned reviews, and the scientific quality of the reviews. Based upon these observations, they created an index to measure the performance of the reviewers. This index, named the International Orthopaedics Reviewers Score (INOR-RS), considers reviewer and reviewing variables calculated from the outputs of the “Editorial Manager” workflow platform and scored by the assigned editor. The INOR-RS can be used specifically for surgery journals or, in general, for any scientific publication. More specifically, the INOR-RS is a five-item score; each letter stands for a variable related to the peer-review(er) and is scored by the assigned editor, and the sum of the item scores (0-10 points) gives the INOR-RS for the respective reviewer and the respective manuscript. This score is entered by the assigned editor in the review rating section of the Editorial Manager workflow platform, and it accompanies the reviewer in the reviewer details section of the reviewers’ panel of the Editorial Manager for the respective reviewed paper. The score may change at the next peer-review performed by the reviewer for another manuscript, and the mean score is then recorded in the reviewer’s details section of the Editorial Manager workflow platform.
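
For illustration only, a minimal sketch of how an editor-assigned multi-item reviewer score and its running mean per reviewer might be recorded; the item names, point values, and data layout below are hypothetical and do not reproduce the actual INOR-RS definition.

from statistics import mean

# Hypothetical item scores assigned by the editor for one reviewed manuscript
review_scores = {"timeliness": 2, "thoroughness": 1, "constructiveness": 2}
manuscript_score = sum(review_scores.values())   # total score for this peer-review

# Running record per reviewer, mimicking a reviewer-details section
history = {"reviewer-123": [7, 9]}               # scores from earlier peer-reviews
history["reviewer-123"].append(manuscript_score)
print(mean(history["reviewer-123"]))             # mean score kept with the reviewer's details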

Appendix D: The assumption on c that rules out the case in which the journal does not need to motivate any effort from the reviewer

Comparing the journal’s expected profit for the cases of medium and low quality standard, i.e., \(\pi ^*_{Q=M}\) and \(\pi ^*_{Q=L}\), we obtain:

$$\begin{aligned} \pi ^*_{Q=M} > \pi ^*_{Q=L} \end{aligned}$$

if and only if

$$\begin{aligned} (p_\mathrm{H} + p_\mathrm{M})rM -cM - \frac{\psi }{1-(q_\mathrm{H}+q_\mathrm{M})/(p_\mathrm{H}+p_\mathrm{M})} > (p_\mathrm{H} + p_\mathrm{M})rL -cL \end{aligned}$$

or equivalently

$$\begin{aligned} c < C = r (p_\mathrm{H} + p_\mathrm{M}) - \frac{\psi }{(M-L)(1- (q_\mathrm{H}+q_\mathrm{M})/(p_\mathrm{H}+p_\mathrm{M}))}. \end{aligned}$$

Therefore, if the marginal cost c associated with the journal’s standard Q is not too high, i.e., \(c <C\), then the journal’s expected profit for a medium-quality standard is strictly higher than that for a low-quality standard. Hence, the assumption \(c <C\) rules out the case in which the journal does not need to motivate any effort from the reviewer because its quality standard is strictly below medium quality.
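
A minimal numerical sketch, with hypothetical parameter values (not taken from the paper), confirming that the profit comparison above and the threshold condition \(c < C\) coincide:

# Hypothetical parameter values for illustration only (not taken from the paper)
r, psi = 1.0, 0.05
L, M = 1.0, 2.0
p = {"L": 0.2, "M": 0.4, "H": 0.4}
q = {"L": 0.5, "M": 0.3, "H": 0.2}
pHM, qHM = p["H"] + p["M"], q["H"] + q["M"]

C = r * pHM - psi / ((M - L) * (1 - qHM / pHM))
for c in (0.3, 0.9):                      # one marginal cost below C, one above
    direct = pHM * r * M - c * M - psi / (1 - qHM / pHM) > pHM * r * L - c * L
    print(c < C, direct)                  # the two conditions always agree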

About this article

Cite this article

Garcia, J.A., Rodriguez-Sánchez, R. & Fdez-Valdivia, J. The interplay between the reviewer’s incentives and the journal’s quality standard. Scientometrics 126, 3041–3061 (2021). https://doi.org/10.1007/s11192-020-03839-1
