Algorithmic Advice, Human Review, and Shared Liability

2025 Working Paper
Wenxiao Yang

Abstract

Algorithmic advice is often used not only to guide decisions but also to determine whether cases are escalated for costly review. I study advice design at the deployment stage for an algorithm provider that observes a calibrated internal score and sends advice to a human decision maker, who retains final authority, can pay to open review, and shares misclassification losses with the provider. Because review carries a startup cost, advice becomes part of the escalation rule: optimal advice design targets the posterior beliefs at which review switches on or off, rather than preserving fine distinctions across the score distribution. This endogenous coarsening can pool cases from opposite sides of the immediate-action cutoff and distort which cases receive review. Consequently, the combined human–algorithm system can underperform the better of the human-only and algorithm-only benchmarks, even without any behavioral frictions. Shifting liability toward the decision maker mitigates these deployment-stage distortions.
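
To fix ideas, a minimal sketch of the review decision; all notation here is illustrative rather than taken from the paper. Suppose the state is binary, let $\mu$ denote the decision maker's posterior that a case is positive after seeing the advice, let $L_{FP}$ and $L_{FN}$ be the false-positive and false-negative losses, let $\alpha \in (0,1]$ be the decision maker's liability share, and suppose review costs $c$ and reveals the state. Acting without review, the best achievable expected misclassification loss is $\min\{(1-\mu)L_{FP},\ \mu L_{FN}\}$, so the decision maker opens review exactly when her share of that loss exceeds the startup cost:
\[
\alpha \,\min\{(1-\mu)\,L_{FP},\ \mu\,L_{FN}\} \;\ge\; c,
\]
which holds on an interval of posteriors around the immediate-action cutoff $\mu^{*} = L_{FP}/(L_{FP}+L_{FN})$. Under this sketch, advice that moves posteriors across the endpoints of that interval switches review on or off, and raising $\alpha$ widens the review region, which is loosely consistent with the liability-shifting remark above.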