Algorithmic Advice, Human Review, and Shared Liability
Why do algorithms give vague advice?
A rational explanation for negative human-AI synergy.
Algorithmic advice is often used not only to guide decisions but also to determine whether cases are escalated for costly review. I study advice design at deployment for an algorithm provider that observes a calibrated internal score and sends advice to a human decision maker who retains final authority, can pay to open review, and shares misclassification losses with the provider. A startup cost of review makes advice part of the escalation rule: optimal advice design targets the posterior beliefs at which review switches on or off, rather than preserving fine distinctions across the score distribution. This endogenous coarsening can pool cases from opposite sides of the immediate-action cutoff and distort which cases receive review. Consequently, the combined human–algorithm system can underperform the better of the human-only and algorithm-only benchmarks, even without any behavioral frictions. Shifting liability toward the decision maker mitigates these deployment-stage distortions.
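The escalation logic described above can be illustrated with a toy decision problem. This sketch is not the paper's model: the unit loss, the review cost of 0.2, and the posteriors below are invented for illustration. Without review, the decision maker picks the more likely label and expects loss min(mu, 1 - mu) times the misclassification loss; review reveals the truth at a startup cost, so review opens only for intermediate posteriors. Pooling two extreme scores into one coarse message can then switch review on for cases that would not have been reviewed individually.

```python
def review_opens(mu, loss=1.0, cost=0.2):
    """Review pays off iff the expected misclassification loss from
    deciding without it, min(mu, 1 - mu) * loss, exceeds the startup
    cost of review. (Toy model, not the paper's.)"""
    return min(mu, 1.0 - mu) * loss > cost

# Two cases on opposite sides of the action cutoff (posterior 0.5):
# neither is uncertain enough to justify review on its own.
print(review_opens(0.15))  # False
print(review_opens(0.85))  # False

# Coarse advice that pools them induces the average posterior 0.5,
# which lies inside the review region: coarsening changes which
# cases get reviewed.
pooled = (0.15 + 0.85) / 2
print(review_opens(pooled))  # True
```

The example shows the mechanism in miniature: advice design that targets the posteriors at which review switches on or off can pool cases from opposite sides of the action cutoff, distorting escalation.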
On the Role and Design of Resale Royalties
Resale markets span diverse sectors from physical goods to digital assets, with resale royalties evolving into varied forms through advances in platform economics and blockchain technology. This paper develops a unifying framework to analyze royalty policies under varying degrees of information asymmetry, differentiating three policies (exogenous, adjustable, and committed) based on creators' authority to set and adjust royalty rates. We characterize royalties as creating signaling value through cross-market profit reallocation, transferring value from information-asymmetric primary markets to information-symmetric secondary markets, but at the cost of secondary-market inefficiencies. Under the committed policy, high-quality creators can use both price and royalty rate as signaling instruments. We uncover a lexicographic optimization principle governing this dual-instrument signaling: high-quality creators adopt positive royalty rates to minimize primary-market price distortion regardless of the magnitude of secondary-market inefficiencies. These privately optimal rates are consistently socially excessive, providing a rationale for regulatory ceilings. Furthermore, we show that constraining the signaling space with a royalty ceiling can paradoxically benefit high-quality creators in pooling equilibria by preventing escalation to inefficiently high rates. Our welfare analysis shows that committed and exogenous policies generally dominate adjustable policies for both high-quality creators and social welfare, while low-quality creators consistently prefer no royalty. Our findings offer actionable guidance for platforms and regulators on optimal royalty policy design and provide broader insights into multi-instrument signaling under information asymmetry.
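The secondary-market inefficiency this abstract refers to can be sketched with a toy resale decision. The sketch is illustrative only; the valuations and the 20% royalty rate are invented, not the paper's parameterization. A royalty rate r leaves the current owner with only (1 - r) of the resale price, so resales that are efficient (the buyer values the good more than the owner) but whose gain is smaller than the royalty wedge do not occur.

```python
def resale_occurs(owner_value, buyer_value, royalty):
    """The owner resells iff her net proceeds (1 - royalty) * price
    exceed her own use value; here the resale price equals the
    buyer's value. (Toy sketch, not the paper's model.)"""
    return (1.0 - royalty) * buyer_value > owner_value

def resale_efficient(owner_value, buyer_value):
    """A resale is efficient iff the buyer values the good more."""
    return buyer_value > owner_value

# Efficient trade that still happens under a 20% royalty:
print(resale_occurs(0.5, 1.0, 0.2), resale_efficient(0.5, 1.0))  # True True

# Efficient trade blocked by the royalty wedge: the buyer values the
# good more (1.0 > 0.9), but the owner's net proceeds 0.8 fall short.
print(resale_occurs(0.9, 1.0, 0.2), resale_efficient(0.9, 1.0))  # False True
```

The blocked trade in the second example is the secondary-market inefficiency that, in the framework above, trades off against the signaling value royalties create in the primary market.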
Strategic Disinformation Generation and Detection
Disinformation detection is becoming increasingly important because it is easier than ever to create and disseminate disinformation. How does detection ability affect the incentive to generate disinformation? Given the practical constraints of classification technology, how should a detector be designed? To answer these questions, this paper studies a model in which a sender strategically communicates his type (high or low) to a receiver, and a lie detector generates a noisy signal about the truthfulness of the sender's message. The receiver then infers the sender's type from both the sender's message and the detector's signal. We find a non-monotonic relationship between the probability that the low-type sender lies and the accuracy of detection. More accurate detection (a higher true-positive rate and a lower false-positive rate) increases the probability of lying when the true-positive rate is low, because of a persuasive effect. By contrast, more accurate detection decreases the probability of lying when the true-positive rate is high, because of a dissuasive effect. We also characterize the optimal detector design. The designer always chooses the lowest feasible false-positive rate for any true-positive rate. The possibility of false-positive alarms implies that the designer chooses an intermediate true-positive rate rather than the highest one. Counterintuitively, the optimal detector may raise an alarm on a smaller share of disinformation when its underlying classifier is better at distinguishing the sender's type.
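The persuasive effect described above can be made concrete with a small Bayesian calculation. The prior, the lying probability, and the detector rates below are invented for illustration, not taken from the paper. When the detector is more accurate, the absence of an alarm makes a "high" message more credible, which is precisely what rewards a low-type sender for lying whenever his lie goes undetected.

```python
def posterior_high(prior, lie_prob, tpr, fpr):
    """Receiver's belief that the sender is the high type after
    seeing the message 'high' with NO alarm. The high type reports
    truthfully; the low type lies with probability lie_prob; lies
    trigger an alarm with probability tpr, truthful messages with
    probability fpr. (Toy numbers, not the paper's calibration.)"""
    truthful_high = prior * (1.0 - fpr)                 # high type, no alarm
    lying_low = (1.0 - prior) * lie_prob * (1.0 - tpr)  # undetected lie
    return truthful_high / (truthful_high + lying_low)

# With a weak detector, an un-alarmed 'high' message is only mildly
# persuasive; a stronger detector makes the same message far more
# credible, raising the payoff to an undetected lie.
print(posterior_high(0.5, 0.5, tpr=0.3, fpr=0.1))  # approx. 0.72
print(posterior_high(0.5, 0.5, tpr=0.8, fpr=0.1))  # approx. 0.90
```

Raising the true-positive rate from 0.3 to 0.8 pushes the no-alarm posterior from roughly 0.72 to 0.90: the more trusted detector makes successful lies more profitable, which is the persuasive effect behind the non-monotonicity in the low type's lying probability.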