
This article was originally published on LinkedIn.
In 1993, I published a paper in the Journal of the Operational Research Society that added a new dimension to the principal-agent model with moral hazard.
Waller, M.A. "Optimal Investment by the Principal in order to Increase the Probability of Favorable States of Nature in the Principal-Agent Model with Moral Hazard," Journal of the Operational Research Society, 44(2), (1993), 193-198.
The distinctive claim was simple but consequential: the principal can invest to increase the probability of favorable outcomes, but must anticipate how the agent's behavior changes in response to that investment. Optimal investment cannot be evaluated on direct effects alone.
At the time, I was thinking about firms. Contracts. Incentives. Risk allocation.
I was not thinking about contested logistics.
Moral Hazard Under Fire
Principal-agent theory begins with a hard reality: a principal delegates work to an agent whose effort and decision quality cannot be directly observed. Outcomes are uncertain. Better choices increase the probability of success, but don't guarantee it.
That gap (between what the principal needs and what the principal can verify) is moral hazard.
In this framework, the principal is the party that sets the rules, makes the investments, and bears residual risk. The agent is the party acting on the principal's behalf, whose decisions can't be fully monitored in real time.
In contested logistics, the mapping is direct.
The Army, at each echelon, is the principal. Autonomous platforms, dispersed units, and sustainment nodes are agents whose decisions and execution quality can't be observed continuously, especially under communications denial. Higher headquarters sees outcomes (a convoy arrives late, a unit reports a shortage, a drone selects an unexpected route) but can't see the decision process that produced those outcomes.
Consider a concrete case.
An autonomous resupply convoy is tasked to deliver ammunition to a dispersed unit. The routing algorithm, optimizing for platform survivability, selects a lower-risk but longer route. The supported unit's ammunition drops below threshold before arrival. A secondary request triggers, consuming capacity and pulling resources from another mission.
Was the convoy's decision wrong? Was the unit's initial request miscalibrated? Was the allocation rule (prioritizing route safety over delivery speed) the real failure?
Without explicit alignment and risk-sharing architecture, each layer optimizes locally while the system degrades globally.
Autonomy did not fail. Alignment did.
And moral hazard does not disappear because the agent is wearing a uniform or running on code.
(By "alignment," I mean decision rights + incentives + guardrails + default rules that still hold when visibility collapses.)
Investment Changes Behavior
The distinctive contribution of the model I developed is that the principal is not passive. The principal can invest to shift the probability distribution of outcomes (to make favorable outcomes more likely).
But the principal can't evaluate that investment based only on direct effects.
Because investment reshapes agent behavior.
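A stylized way to see this, simplified for illustration and suppressing the wage schedule, so it is a sketch rather than the 1993 paper's exact formulation: let p(I, a) be the probability of the favorable state given the principal's investment I and the agent's effort a, where the agent chooses a in response to the contract and to I.

```latex
% Illustrative sketch (not the exact 1993 formulation); V_G, V_B are the
% principal's payoffs in the good and bad states, c(I) the cost of investment,
% and a*(I) the agent's best-response effort to the investment level I.
\max_{I}\;\; p\bigl(I, a^{*}(I)\bigr)\,V_G \;+\; \bigl(1 - p(I, a^{*}(I))\bigr)\,V_B \;-\; c(I)

% First-order condition: the marginal value of investment has two parts.
\Bigl[\;\underbrace{\frac{\partial p}{\partial I}}_{\text{direct effect}}
\;+\;\underbrace{\frac{\partial p}{\partial a}\,\frac{d a^{*}}{d I}}_{\text{behavioral response}}\;\Bigr]\,(V_G - V_B) \;=\; c'(I)
```

If the agent's best-response effort falls as investment rises (the agent leans on the investment), the behavioral term is negative and a direct-effects-only evaluation overstates the return. If effort and investment are complements, it understates it. Either way, the naive calculation is wrong.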
In contested sustainment, the Army's investments might include more autonomous platforms, better predictive analytics, hardened communications, distributed pre-positioned stocks, or greater shared visibility. Each can improve the probability of delivery for a given level of effort.
But each also changes the incentive landscape for every agent in the network.
If analytics appear more reliable, do supported units reduce hoarding or does perceived cushion encourage riskier planning assumptions? If autonomy is trusted, do commanders take more calculated operational risk or do they treat autonomous delivery as guaranteed and stop building contingencies into their plans? If communications degrade and then restore, do nodes report their true status or does the denial period create information asymmetries they'd rather not surface?
Technology does not eliminate agency. It reshapes incentives.
And investment without behavioral analysis is incomplete analysis.
That lesson translates directly: the Army cannot evaluate investments in autonomous logistics purely on throughput probabilities. It must anticipate how every agent (human and autonomous) adjusts behavior in response.
Risk Sharing Is Architecture
In principal-agent models, the core design question is risk allocation: how much uncertainty should the agent bear, and how much should remain with the principal?
In contested logistics, that becomes architecture:
- How much decision authority is retained centrally?
- How much is delegated to edge nodes?
- Who bears outcome risk when communications fail?
- What metrics and authorities shape behavior under uncertainty?
Too much centralization creates fragility. When communications are denied, a system designed around central approval simply stops functioning.
Too much decentralization creates divergence. Units optimize locally, hoard resources, inflate requests, and the theater-level picture degrades without anyone intending harm.
The optimal delegation structure is a risk-sharing rule.
In this translation, "compensation" maps to allocation authority, decision rights, accountability metrics, and risk tolerance thresholds.
If performance metrics reward local stockouts avoided rather than theater-level readiness, agents will hoard. If local safety is weighted more heavily than mission throughput, risk-taking declines across the network.
These are not failures of character. They are rational responses to the incentive structure the principal designed or failed to design.
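A toy illustration of that point, with hypothetical numbers and plain Python rather than any fielded metric: an agent choosing how large a resupply request to submit under two scoring rules, one that penalizes only local stockouts and one that also charges the agent for the shared lift capacity its request consumes.

```python
import random

random.seed(7)
DEMAND_DRAWS = [random.gauss(100, 30) for _ in range(10_000)]  # uncertain local demand, in tons

def expected_score(request, stockout_penalty, capacity_charge):
    """Agent's expected score for a given request size under a given metric design."""
    total = 0.0
    for demand in DEMAND_DRAWS:
        shortfall = max(demand - request, 0.0)
        total += -stockout_penalty * shortfall - capacity_charge * request
    return total / len(DEMAND_DRAWS)

def best_request(stockout_penalty, capacity_charge):
    """Request size a rational agent would pick to maximize its own metric."""
    candidates = range(0, 301, 5)
    return max(candidates, key=lambda q: expected_score(q, stockout_penalty, capacity_charge))

# Metric A: only local stockouts count against the agent.
print("local-only metric:    ", best_request(stockout_penalty=10.0, capacity_charge=0.0))
# Metric B: the request also consumes shared theater capacity the agent is charged for.
print("theater-aware metric: ", best_request(stockout_penalty=10.0, capacity_charge=4.0))
```

Under the local-only metric the rational request is whatever covers essentially every demand draw, roughly twice the expected need; under the theater-aware metric it settles near expected demand. The numbers are invented, but the mechanism is the point: hoarding is not a character flaw, it is the optimum of the metric the principal wrote.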
Distribution Matters
One of the most operationally relevant findings from my principal-agent work is that optimal investment and sharing rules depend heavily on the functional form of the uncertainty distribution, not just "more uncertainty" in the abstract.
When uncertainty is symmetric, intuition tends to hold. When it's skewed, discontinuous, or lumpy, those intuitions can reverse.
Contested environments are not symmetric. And they are not uniform across theaters.
Indo-Pacific maritime logistics under denial often presents a catastrophic-loss structure: a ship carrying sustainment either arrives or it doesn't, with relatively little middle ground.
Land resupply along an extended European theater tends to behave differently: continuous attrition, incremental route degradation, partial loss, rolling disruption, with variance distributed more evenly across outcomes.
These are fundamentally different probability distributions. Principal-agent theory says optimal investment levels and delegation structures should differ accordingly.
If we don't explicitly model what uncertainty looks like in each operating environment, we'll build autonomy governance that works in one theater and fails in another.
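As a minimal sketch of what that difference looks like, with invented parameters and NumPy only: two delivery-outcome profiles with the same expected throughput, one all-or-nothing and one attritional, and the buffer a supported unit would need to cover its shortfall 95% of the time under each.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 100_000          # simulated delivery periods
PLANNED = 100.0      # planned tons delivered per period

# Maritime / catastrophic-loss profile: the ship arrives with everything or with nothing.
maritime = PLANNED * rng.binomial(1, 0.85, N)

# Land / continuous-attrition profile: partial losses, same 85% expected delivery rate.
land = PLANNED * rng.beta(17, 3, N)   # Beta(17, 3) has mean 0.85

for name, delivered in [("all-or-nothing (maritime)", maritime),
                        ("continuous attrition (land)", land)]:
    shortfall = PLANNED - delivered
    buffer_95 = np.percentile(shortfall, 95)   # stock needed to cover 95% of periods
    print(f"{name:28s} mean delivered = {delivered.mean():5.1f}   "
          f"buffer for 95% coverage = {buffer_95:5.1f}")
```

Same mean, completely different tails, and therefore completely different answers about pre-positioning, redundancy, and how much allocation authority the edge needs. That is the functional-form point in operational terms.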
That is not abstract theory. That is theater-level planning discipline.
Practically, this means using wargames and simulations to test behavioral responses under theater-specific uncertainty profiles, then adjusting autonomy governance accordingly: bounded decision rights, fallback allocation protocols, and performance metrics weighted toward theater-level readiness rather than local node optimization.
Why This Matters Now
As the Army invests in autonomous systems and machine-speed logistics allocation, it is introducing a new agent layer into the sustainment network, one whose behavior cannot be directly observed in real time under the conditions where it matters most.
If incentives, authority, and accountability are not designed with the same rigor as the platforms themselves, the system will overreact to uncertainty, hedge independently at every node, amplify variance, consume capacity on redundant demand signals, and erode readiness from within.
The enemy does not have to destroy the system.
It can unravel itself.
Autonomy without alignment is fragility.
Principal-agent theory isn't academic abstraction here. It's a framework for thinking rigorously about delegation, investment, and risk sharing when effort is unverifiable and uncertainty is high, which is a precise description of contested logistics.
Before deploying more autonomous platforms, we should ask:
- Who is the principal at each echelon?
- Who are the agents (human and autonomous)?
- What risks are shared, and with whom?
- What behaviors do current metrics and authorities actually incentivize?
- How does investment shift not just probabilities, but decisions?
Because in contested logistics, misaligned incentives become operational risk.
And operational risk becomes lost combat power.
