Anti-Access and Unmanned Deterrence?
This WOTR piece, while well-intentioned, fails to demonstrate any distinction between the supposedly new problem of unmanned systems and Cold War debates about centralized vs. decentralized control over attack authority.
Indeed, the threat of C4ISR nodes being eliminated in a series of blinding first strikes is precisely why second-strike capability exists, why “launch on warning” was such a big debate, and why the Triad was built. If anything, one can do plenty of things with code to guard against catastrophic events that one cannot do with human grey matter — there is no way to code exception handling into the human brain, no way to defensively program it.
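To make the defensive-programming point concrete, here is a minimal, purely illustrative sketch (the scenario, function names, and thresholds are all hypothetical): an automated system can be written to refuse to act on anomalous input and to fall back to a safe default, a guarantee that no amount of training can hard-wire into a human operator.

```python
# Hypothetical sketch of defensive programming: guard against catastrophic
# failure modes explicitly rather than trusting judgment under stress.

def classify_contact(signal_strength):
    """Classify a sensor contact; refuse to act on implausible input."""
    if not isinstance(signal_strength, (int, float)):
        raise TypeError("signal_strength must be numeric")
    if not 0.0 <= signal_strength <= 1.0:
        # Out-of-range reading: treat it as a sensor fault, not a target.
        raise ValueError("signal reading outside calibrated range")
    return "contact" if signal_strength > 0.8 else "no contact"

def safe_classify(signal_strength):
    """Wrap classification in exception handling: fail safe, not deadly."""
    try:
        return classify_contact(signal_strength)
    except (TypeError, ValueError):
        # Default to the non-catastrophic action on any bad input.
        return "hold fire"

print(safe_classify(0.95))   # contact
print(safe_classify(7.3))    # hold fire (implausible reading)
print(safe_classify(None))   # hold fire (malformed input)
```

The design choice is the point: the unsafe paths are enumerated and handled in advance, which is exactly what cannot be guaranteed of a human under fire.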
Thus, this statement:
The focus of unmanned investment shouldn’t be in capabilities needed to operate independently at the outset or initiation of conflict. Rather, it should be on deterrence, defense, and intelligence collection.
…could easily be changed to “the focus of [human] investment shouldn’t be in capabilities needed to operate independently at the outset of conflict.”
There is very little new about this problem. One can find a large number of period articles on the AirLand Battle/Follow-On Forces Attack set of technologies warning that the very strength of these technologies constituted a threat to peace. Engaging Soviet forces in the “deep” area of operations, as opposed to the close-in area, required rapid standoff and fire-and-forget capabilities that critics warned would make conflict too easy.
But at the end of the day, those critics lost out because of one key problem: political considerations dictated that American forces be committed to defend key allies in place, without relying on an elastic defense. The Soviets had a local concentration of force, and one way to disrupt that advantage was to hit their command and control nodes and devastate their formations as they marshalled in their assembly areas. Does any of this sound familiar to readers following today’s debates over “access” and “denial”?
Second, the author is a bit too sanguine about the idea of unmanned defense and deterrence.
Conflict deterrence is a significant, if not predominant, consideration for the military, and unmanned systems have a significant role to play in this regard. A robust unmanned combat network – supported from a stand off distance by manned weapons and ground stations – can convince a prospective adversary that his cost of entry to a conflict is high relative to that of U.S. forces. Put another way, the fact that U.S. personnel losses are minimized reduces the adversary’s chances of forcing quick capitulation to his desired ends. One could even presume that the adversary will still invoke a negative international response while simultaneously failing to impose his military and political will – certainly not a recipe for success in armed conflict.
First, the same arguments the author uses against robots as offensive power projection tools hold true for “defensive” and “deterrent” capabilities because — as Colin S. Gray argued — the distinction between offensive and defensive weapons is very hazy in practice. Park an Aegis-equipped warship in the Persian Gulf scanning Iranian military activity and suddenly things don’t look all that “defensive.” Second, the idea of a completely static defense neglects the need for offensive actions to defend one’s position — which is how “defensive zones” have a nasty habit of expanding once units come under fire.
Regardless of what one believes about offensive and defensive weapons and the possibility of contextual distinction, the author’s argument against overly scripted response also holds true for deterrent capabilities. What if deterrent capabilities — to be credible — must be able to operate even after they have been cut off from command and control? Then we wind up in the same fix.
Yes, the opponent’s costs may be raised by the existence of unmanned forces… which might simply motivate him to invest in capabilities that can bypass a robotic Maginot line. Such a capability already exists: nuclear weapons. And in terms of conventional counter-value attacks, one might replace the American troops along the DMZ with robots… but can you replace everyone in Seoul with robots?
There are also counter-force options available to the opponent: a robust unmanned defense further incentivizes him to strike your C4ISR nodes. And the prospect of an “algorithmic security dilemma” of sorts also exists: build a strong defensive unmanned capability, and the opponent builds his own, designed to cut through yours without expending his personnel, so he can save them for the main effort.
There are particular computational implications for deterrence, strategy, and crisis stability that can be explored in later posts. But the lesson here is that technology can neither be blamed for the essential problems of deterrence and compellence nor rescue us from having to grapple with them.