Having written a long entry in pieces over the last month (and scheduled it for yesterday), I realized it didn’t quite get across what I wanted to say. So here’s an attempt at a shorter and more compelling version.
Posting here (already fairly erratic) is going to become very scarce and also much more compact. I’m going to stop writing at Logics of Transformation entirely, leaving it open more or less as a record of some thoughts I had over the last year and a half on academic matters. I’ll still contribute occasional pieces on strategy, national security, and academic-ish topics to the group blogs I’m a member of, but those will be much more polished than my average output and obviously infrequent.
Lastly, I’m going to start a new blog that fulfills Logics of Transformation’s original purpose: a research notebook, with expanded research notes, short literature dives, code, and some pseudocode/mathematical notes. This blog will become my primary personal blog, and RS will become mostly dormant. I’ve obviously changed blog formats a lot since I began blogging in 2006, but this is a new shift that has little to do with whether Tumblr was better than Typepad. Why?
It’s less fun to write as a generalist defense and strategy blogger these days. First, I’m fairly lonely right now — Abu Muqawama, Ink Spots, and similar blogs in the niche this blog roughly occupies are gone. Many people, including my friend and former co-blogger Dan, have either put blogging on the back burner or moved on entirely. I wrote a lot of off-the-cuff pieces here and elsewhere on basic defense and strategy in large part because I found being part of a community of thinkers exciting. It isn’t exciting anymore because that community is a pale reflection of what it once was.
What’s left? Besides a vastly reduced number of generalist strategy/defense analysis blogs, there is a mass of specialist blogs on various functional and regional security topics. Everything else is either journalism (I’ve always been open that I’m an analyst, not a reporter who generates original information) or GIFs and listicles.
Publishing about defense, natsec, and related topics at a group blog or online magazine like War on the Rocks, CTOVision, or any of the others I’m a part of is more appealing than continuing to use a solo blog actively. It offers more comradeship, as well as better in-house editing and feedback, making what content I do choose to write more polished and giving it more reach. Given the fragmentation of the audience I once wrote for, I’m no longer comfortable putting out, say, a 1,000 to 3,000+ word post just because I have a burning desire to get a thought into the world. Having it be part of a group environment, and especially having it be polished and more infrequent, matters more to me at this point. Otherwise the thought and effort feel squandered.
Though I do plan to drop things here, they will be very short and more like the occasional links I’ve been posting lately than the more substantive content I’ve put here since I started RS up in 2011.
My interests, and the in-crowd I want to be part of, have also changed. In the last couple of years I’ve become much more integrated with social science academic bloggers, tech bloggers, and the computational social science area where both meet. This is where most of my energy ought to go, as it develops the skills (programming, mathematics, modeling, reviewing literature, theory-building) I am currently building. It also helps me meet the people I want to meet and make the connections I want to make.
I’ve been meaning to blog on these subjects for a while, but I’ve experienced a kind of stage fright at showing my programs, ideas, etc. that comes simply from it being a new environment for me. The best way to get over such anxiety is the same way I overcame my anxiety about blogging itself over the last seven years: practice, practice, and practice. Lastly, it’s also a better way to briefly review questions, literature, and thoughts, and to register works-in-progress with footnotes, embedded graphics, math, and code.
This doesn’t mean that I’m completely abandoning writing about strategy and natsec. As I’ve often noted, I spent seven years of my life studying military, strategic, and diplomatic history, international affairs, military science, strategic theory, and both interstate and intrastate political violence. All of these subjects constitute my substantive base of knowledge, even if the foreseeable future involves spending perhaps just as long building up a methodological one. That methodological base will help me do more interesting things with my substantive knowledge (and other interests I have) besides just writing essays and think-posts.
Even so, I also care too much about this subject to abandon commenting and writing on it. I came of age during the Iraq War years, and it’s not like the US has somehow become so strategically adept that I feel no urge to pen a critique of some flawed strategy or idea at 2AM in spite of my knowledge that I’m growing older and need the sleep.
However, I also have to face the painful realization that the audience and community that once motivated me has fragmented — and adapt accordingly. I’m lucky that, since I began working with Bob Gourley and others on cybersecurity issues and began a long journey into a more technical world, I found another passion, one that can still help me pursue my interests in a different way. I know a lot of people who sank a good deal of time into studying the things I have and now don’t know how to adapt to a changed landscape.
The ironic thing is that just as I devote my time to technical subjects, I’ve met hackers like The Grugq who take the same interest in national security and strategy (subjects no longer new and exciting to me) that I now take in technical things. It was very heartening to see someone like FireEye’s Richard Bejtlich thinking out loud on Twitter about Civil War command and strategy — one of the many topics I studied to prepare for my BA thesis on the theory of operational design and campaigns.
Perhaps we’re just converging. People like me who come from a liberal arts, historical, and social science background are turning to computer science theory and artificial intelligence (along with the mathematical tools that go with them) as conceptual and practical sources of new insight and as ways to do things in the world. And I’ve seen more and more technically gifted figures like Grugq and Bejtlich become interested in the conceptual things that I once thought would be my life.
My hope in starting a new blog oriented around code demonstrations, programming, mathematics, research notes, and mini lit-reviews is that it will do something more than just join and participate in an existing community (my old goal for blogging). Rather than feeling caught between computation, social science, and security/strategy topics, I can help create a new community that fuses them together.
In sum: Rethinking Security, after a brief resurgence as the primary home for my applied defense, IR, and natsec writing after Andrew Exum closed Abu Muqawama, will go back to being what it was when I blogged at Abu M: a mostly dormant place where I stick things that don’t fit anywhere else, along with very short and fleeting thoughts.
I thank loyal readers and commenters, particularly those who stuck with me from the beginning, when I blogged on strategy out of my college dorm room. That was a time when I never considered the possibility that I would share an Infinity Journal issue with personal heroes like Colin S. Gray or David Betz, or blog at a place like Abu Muqawama that a younger version of me would have given anything to get a link or favorable nod from. When my new site is set up, I will post a link here, and I look forward to continuing to engage with you.
As I’ve said throughout this entry, there will still be some signs of life here at RS every now and then. But not much.
Consider an attacker A and a defender B. B is engaged in revolt and/or armed terrorism against A. A wants to kill or capture B because B’s violence threatens, in some way, a third-party observer C’s perception of A. Easier said than done, however.
A incurs substantial costs searching for B that might otherwise be invested elsewhere. B is the proverbial needle in the haystack. And capturing B is highly risky as well: B could be in a denied environment, an environment of transitional political authority, located next to civilians who would inevitably be harmed in a “boots on the ground” capture operation, and/or simply highly lethal and dangerous. Worse yet, a failed kill or capture operation will destroy intelligence leads A has carefully cultivated. For similar reasons it should not always be assumed that A will benefit from cumulative knowledge and/or a process of elimination through repeated searches and strikes; intelligence leads can be harmed by failed strikes, and B can change both his location and his operational security procedures dynamically. Finally, battle damage assessment is costly for the same reasons it is difficult to capture B alive — though this will be treated in greater depth at the end. These are all plausible issues that could factor into A’s search; how many of them apply, and the magnitude of their costs, vary by scenario.
A might encounter B through a variety of scenarios. In one scenario, a passing troop patrol engages a militant group and finds a dead or alive B after the firefight concludes. This is a best-case scenario, but if it were terribly likely, then simply blanketing the country with troops would produce credible kills or captures in abundance. We know from recent experience that this isn’t likely. Why? After the overthrow of Saddam Hussein, there was a lag between the collapse of the Hussein regime and the emergence of a serious insurgency. This time window allowed a massive manhunt for Hussein that would not have been possible if military planners had had to assume that IEDs and ambushes would make traversing both roads and towns difficult. The search for Zarqawi, while similar in many respects, occurred during an active insurgency and ethnic civil conflict.
Depending on the scenario, the problem above runs into a number of search, optimization, and constraint satisfaction problems. First, the search space is large. Second, the optimization criteria (minimize casualties and logistical costs) also present problems once we introduce real-world complications seen in insurgency and counterterrorism — e.g. an enemy that can offer resistance. Lastly, when we introduce constraints in the form of rules of engagement and political caveats, things become equally dicey. If we go through the space of hypothetical missions, we would probably see that this scenario only becomes realistic in a few cases: when the environment is permissive, when the US can afford the costs inherent in such a search, or when finding B is a function of luck rather than of blanketing the country with troops in a manhunt.
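One of the search dynamics above — the claim that repeated searches may not cumulate into a process of elimination when B relocates — can be sketched as a toy Monte Carlo simulation. Everything here, from the cell count to the uniform relocation rule, is an invented assumption for illustration, not a calibrated model:

```python
import random

def expected_search_time(n_cells, moving, trials=5000, rng=random):
    """Average number of searches before the hunter's guess lands on B."""
    total = 0
    for _ in range(trials):
        target = rng.randrange(n_cells)
        unsearched = list(range(n_cells))
        steps = 0
        while True:
            steps += 1
            if moving:
                # B relocates each round, so ruling out cells gains nothing.
                guess = rng.randrange(n_cells)
            else:
                # Static B: search without replacement (process of elimination).
                guess = unsearched.pop(rng.randrange(len(unsearched)))
            if guess == target:
                break
            if moving:
                target = rng.randrange(n_cells)
        total += steps
    return total / trials
```

Against a static target in 100 cells the average comes out near (100+1)/2 ≈ 50 searches; against a relocating target it comes out near 100. In other words, a dynamically moving B costs the searcher the entire benefit of cumulative elimination, which is the intuition behind treating repeated strikes skeptically.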
Next, A might drop a group of commandos on B and take him alive, bypassing B’s security forces and/or the host-nation government or political authorities. Since raiding forces can utilize stealth, speed, and surprise, and do not necessarily need to establish military control to operate, this limits the problem seen in the manhunt hypothetical. One need not travel on IED-laden roads or pacify a village of dug-in enemy forces in order to strike.
But there’s a catch: the nature of the optimization problem also changes when we replace general purpose forces plus special operations forces (implicit in the blind search scenario) with SOF and enablers alone. SOF are expensive, and the optimization problem moves from just minimizing casualties to also minimizing injury and preventing the unnecessary damage or exposure of expensive and classified weapons and systems. To stress how expensive SOF are, Mark Urban’s book on British and American operations in Iraq describes how an insurgent lying in wait in a stairwell, behind a doorway, or elsewhere could gravely injure or kill a highly expensive British operator — the 21st-century equivalent of "scrimmages" and a "ten-rupee jezail." Interagency coordination issues create collective action problems in synchronizing SOF that can be reduced by structures like the Joint Special Operations Command, but not eliminated.
The search problem also becomes more difficult: there are far fewer bodies relative to potential target locations, and the target B can dynamically shift his location and operational security procedures. Intelligence gathering to support such a mission is also costly, and is unlikely ever to fully deliver the desired reductions in search cost or in uncertainty concerning strikes. Finally, all of the usual rules of engagement and political caveats must be factored in. The complexity of the mission is high, and both optimization and constraint satisfaction are more difficult.
If we remove the stipulation that the target must be taken alive or that A’s forces need to be physically present, we can cut down on some of the problems posed by the “handful of steely-eyed killers” scenario. A may delegate exhaustive search to a host-nation security force D that absorbs casualties and is not bound by the same constraints as A — with the caveat that A cannot be sure whether D takes the target alive, kills the right people, or searches the right places. A must choose what level of coordination and control it deems acceptable for D (which adds to A’s costs), but this step alone substantially reduces A’s search cost and allows A to concentrate its resources on using SOF in a way it might not otherwise. A may also have its own preponderance of power and information on the ground that can cut costs. In particular, A can utilize general purpose forces that act in parallel to SOF and destroy B’s rank-and-file troops and infrastructure. Doing so may make it more difficult for B to hide in various ways, from flushing him out to lowering the costs of a snatch-and-grab mission due to B’s now-reduced power to frustrate attackers.
In general, if A has an extensive ground presence it can take advantage of existing combat, logistical, and informational enablers. The “collaborative warfare” in Iraq was a function of the large amount of existing resources the US could throw at the problem — which no longer exist in Iraq if we were to try again tomorrow. If we remove the stipulation that A has to possess a preponderance of force in the area of operations, then the problem morphs again to A's advantage — with one big drawback that I will save for later.
Now A can use a local proxy D, standoff technologies, or a combination of both. There is no need to minimize the varied costs of using expensive SOF; A has merely to deal with the logistical costs of projecting standoff force and the informational costs of search. There are mitigating factors for both. A may be a superpower that, while unable to achieve lasting control on the ground, can project force nearly anywhere that lacks competent air defenses. A may also possess an enormous technical intelligence capability, plus human intelligence and analytic capabilities that, while highly imperfect, are still lavishly budgeted. Or A may be a local power with enough of a preponderance of power and capabilities in the region to project standoff power into the denied environment repeatedly and efficiently.
Depending on whether A is willing to waive rules-of-engagement constraints and let D do the dirty work, it can achieve further efficiency by removing itself from the targeting process entirely, but this is counterbalanced by the problem of reconciling A’s and D’s interests — which will likely differ in important ways. Again, A may be willing to accept some level of disobedience from D if going without D is unfavorable. The Phoenix Program, for example, was only possible with substantial participation by varying kinds of local forces. And, on the general subject of Vietnam, not all targeted actions in that conflict took place on territory that was officially part of the battlefield. Often, A can also combine standoff technologies and local forces in useful ways.
Unfortunately for A, he must accept a significant drawback if he opts to use standoff weapons and/or D to hunt and kill B. Recall that in the first part of this analysis I mentioned that battle damage assessment (BDA) is costly. In a completely permissive scenario this problem is nonexistent — if opposition is only scattered and ineffective, exhaustive BDA is cheap. In other cases it may be far more expensive, if underlying control over the environment is still contested. The problem with not being on the ground, however, is that in the standoff-firepower scenario A has only scattered intelligence reports and technical information as verification tools. The first may be unreliable and the second may be inapplicable. In the proxy scenario, A may have to rely on D for BDA, and as noted, D may not be completely reliable. In some cases this will not be a problem — the killings of Che Guevara and Pablo Escobar come to mind. In many other cases, the likelihood of recovering a corpse that can be ID’d is far chancier, especially if the kill takes the form of a raid of some sort in a contested environment.
A has a choice when an ambiguous kill is reported. It can pass up the opportunity to exploit the kill to better play to external audience C. Or it can downplay the uncertainty and announce that a kill has taken place despite its own significant doubts. One might also observe that even if A does not consciously decide to minimize doubt, this may arise from cognitive bias on the part of decision makers employing motivated reasoning (“I know he’s dead!”) or from the desire of intelligence agency E to earn a reward from A.
Though A may suffer a credibility loss in the eyes of C if a false positive is reported, there is a range of circumstances in which the risk may be worth it. If, for example, A thinks the benefit from publicizing B’s death will be principally short-term, then even if B suddenly emerges alive, it will likely be after A has already secured the gains. Perhaps A needs to prevail in some internal power struggle and needs a kill as a public sign of strength. If B re-emerges after A has defeated his challengers, would it really matter? Or maybe an election is in progress and A must posture as tough on national security. A also has the option of claiming faulty intelligence from E to shift blame, though success in doing so is probably variable. Finally, A may believe that it is impossible to know whether B is actually dead, and/or that B will “come back” from the dead anyway, and so it rolls the dice.
Now, it would help A a great deal if B cooperated by staying hidden, allowing A to reap the benefits of a kill report. Why would B cooperate? B’s biggest problem is being a hunted man. He faces a tradeoff between security and command and control — security requires greater compartmentalization and denial and deception, but at the cost of the ability to communicate with and control subordinates. He can satisfice by delegating, but this doesn’t remove the problem of ensuring the organization carries out his desired strategies. If A suddenly lowers the intensity of the search and treats him as if he is dead, hiding becomes much easier. A will stop intensely focusing resources on him. If A has a proxy D, A may also stop offering the rewards necessary to motivate D to absorb casualties and other search costs. Third-party validation of B’s death is unlikely: few third parties have an interest in doing BDA on the killing of a military target, though they have devoted extensive resources to investigating reports of civilian casualties or other violations of the law of war.
It might be objected that B is necessary to keep the cause alive and cannot afford to remain hidden. But this assumes only that B matters, at minimum, to his organization F and, at most, to an external sympathizer audience G. It is possible for B to satisfy both audiences, depending on what level of exposure they demand from him, because A’s external audience C may not interact with B’s organization F or audience G at all. C is unlikely to be knowledgeable about internal group chatter in F, and depending on the way B communicates with G, there are scenarios in which C and G may not interact whatsoever.
Let’s say B’s base of support is a predominately rural population that lacks connectivity to the international media and gets its news from the rumor mill. Or perhaps C’s direct or indirect ability to communicate or interact with F is low to nonexistent. How many Americans, for example, follow foreign reporting in Pakistan or Yemen, or have contacts on the ground? A, on the other hand, has extensive resources at its disposal to ensure that C knows about a “successful” strike.
The conclusion draws on Barton Whaley’s theory of deception and its distinction between simulation (showing the fake) and dissimulation (hiding the real). In short, it is plausible that there are so many false positives of terrorist or insurgent deaths because A has an incentive to simulate a successful kill in the many conditions in which A can engage in targeted killings at a tolerable cost and B has an incentive to stay hidden. There are plausible circumstances in which the risk to A of simulating B’s death is acceptable, and A may do it unintentionally regardless. While B is “alive” he bears the costs of both simulation and dissimulation; if A simulates for B, B can focus on dissimulation alone. This does not imply that A will simply make up reports of B’s death. However, in a situation in which a strike has occurred and A cannot completely remove ambiguity about the quality of BDA, A can either directly choose to simulate or be indirectly compelled to by cognitive or organizational biases.
This is a wholly qualitative narrative, and it has far too much complexity to be put into a single model. It also makes a lot of strong assumptions, both explicit and implicit. But anyone interested can take a crack at seeing which elements may be useful for analyzing the problem of targeted killing and mutual deception.
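For anyone who wants a starting point, the announcement decision at the heart of the argument can be written as a one-line expected-utility sketch. The functional form and every parameter name here are my own invented assumptions, not anything drawn from the literature:

```python
def announce_payoff(p_dead, p_surface, gain, credibility_cost):
    """A's expected payoff from publicly claiming a kill after an ambiguous strike.

    p_dead: A's belief that the strike actually killed B
    p_surface: chance B later surfaces alive despite the announcement
    gain: short-term benefit to A of a claimed kill (with audience C)
    credibility_cost: loss in C's eyes if B reappears
    """
    return gain - (1.0 - p_dead) * p_surface * credibility_cost
```

The mutual-deception feedback loop shows up in the parameters: announcing the kill relaxes the hunt, which raises B's incentive to stay hidden, which lowers p_surface, which in turn makes the announcement even safer for A.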
We’re about to see a very grand experiment in the performative character of mathematical models, and Nate Silver is unsurprisingly the catalyst:
For the last few months, FiveThirtyEight Editor-in-Chief Nate Silver has been largely absent from the political forecasting scene he owned in the 2008 and 2012 presidential elections.
But that hasn’t stopped the Democratic Senatorial Campaign Committee from sending at least 11 fundraising emails featuring Silver in the subject line over the past four months, even as Silver was building the foundation for his new website that’s launching Monday and was not writing regularly.
It’s all part of a digital fundraising game that will increase in intensity as the election draws nearer, as candidates, political parties, and other groups bombard their email lists with messages designed to draw contributions.
One of the most widely used tools is fear. Many of the emails seek to convince supporters that the political situation is dire enough that it requires action, and that’s where Silver comes in.
The last time he wrote about the Senate landscape, all the way back in July 2013, Silver said Republicans “might now be close to even-money to win control of the chamber” in 2014. He also cited North Carolina as “the closest thing to the tipping-point state in the Senate battle,” and called Democratic Sen. Mary Landrieu’s seat in Louisiana “a true toss-up.”
That’s scary stuff if you’re a Democratic supporter, especially coming from an analyst whose accuracy made him a household name in the past few years. And the repeated name-dropping has probably opened some wallets for Senate Democrats.
Mathematical models, particularly forecasting models, are about predicting the aggregate behavior of social collections. Aggregate social behavior is in part the emergent sum of individual choices about the future, given a belief state about alternatives and a desired goal/utility to maximize.
However, downward causation is also a factor here. Consider what modelers and sociologists have written about financial markets. I quote here from Derman’s “manifesto” of modeling:
Physics, because of its astonishing success at predicting the future behavior of material objects from their present state, has inspired most financial modeling. Physicists study the world by repeating the same experiments over and over again to discover forces and their almost magical mathematical laws. Galileo dropped balls off the leaning tower, giant teams in Geneva collide protons on protons, over and over again. If a law is proposed and its predictions contradict experiments, it’s back to the drawing board. The method works. The laws of atomic physics are accurate to more than ten decimal places.
It’s a different story with finance and economics, which are concerned with the mental world of monetary value. Financial theory has tried hard to emulate the style and elegance of physics in order to discover its own laws. But markets are made of people, who are influenced by events, by their ephemeral feelings about events and by their expectations of other people’s feelings. The truth is that there are no fundamental laws in finance. And even if there were, there is no way to run repeatable experiments to verify them.
The key element that Derman zeroes in on is the “performative” character of theory: note the passage above about markets being made of people influenced by their expectations of other people’s feelings.
If Silver and other forecasters are perceived by the people they model as modern-day oracles, then the modeled actors will change their choices based on what Silver says. This has the potential to produce novel results that wouldn’t have existed otherwise: something of the political science equivalent of the observer effect in physics. So we will see whether Silver is as influential as this story suggests as 2014 plays out.
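The observer-effect loop can be sketched as a toy fixed-point computation. The complacency parameter, the linear turnout response, and the crude margin-to-probability map are all invented for illustration, not drawn from any real forecasting model:

```python
def shifted_margin(base_margin, forecast, complacency=0.2):
    # Supporters of the forecast favorite turn out less, so the published
    # forecast moves the very margin it is trying to predict.
    return base_margin - complacency * (forecast - 0.5)

def equilibrium_forecast(base_margin, rounds=50):
    """Iterate forecast -> behavior -> forecast until it settles."""
    f = 0.5
    for _ in range(rounds):
        f = min(max(0.5 + shifted_margin(base_margin, f), 0.0), 1.0)
    return f
```

With a true underlying margin of 0.1, a forecaster nobody reads would publish 0.6; the feedback version settles at 0.7 / 1.2 ≈ 0.583. The act of publishing the forecast erodes the favorite’s edge, which is the performativity point in miniature.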
How does this relate to strategy? One of the principal critiques of strategic theory, from early critics to modern “critical” theorists, is that the act of studying geopolitical behavior reinforces a feedback loop of belief that violence is the appropriate way to solve problems. One sub-genre of this mode of criticism is what Eliot Cohen dubs “strategic nihilism” — the belief that there is no rhyme or reason to war, and that all political violence is just irrational bloodletting. Cohen argues that this is the view of war taken by Tolstoy in War and Peace, but it describes things like Dr. Strangelove as well.
Knowledge-production of research and policy analysis that prizes an instrumental view of political violence, the thinking goes, merely reinforces the dominance of views that prize force as a means of problem-solving despite the alleged (from the “nihilist” perspective) futility of force and coercion. And hence peace remains elusive.
Personally, I think the argument from observation effects is much more plausible and credible in political and financial markets. They are easier to study, in large part because there are masses of data concerning things like financial trades or donor contributions. They are also more amenable to parsimonious models of belief, expectations about the future, and preference. Lastly, financial modelers and political forecasters have a degree of credibility and reach that academic thinkers and policy analysts don’t, making it easier to trace causal effects.
In Micah Zenko’s latest Foreign Policy article, he argues that fears of a drone arms race/proliferation are exaggerated.
A casual observer of recent reporting and analysis of unmanned aerial vehicles (UAVs) — most commonly referred to as drones — might assume that the world is already awash in drones of all shapes, sizes, and capabilities. Amazon’s contrived hint of drone-delivered packages only tantalized the public’s imagination back in December, and seemingly new uses for drones are found each day, from identifying rhinoceros poachers, to border control, to tracking whaling ships. But this apparent runaway train of drone proliferation (and its misreported uses) is actually stymieing efforts to promote or influence responsible armed-drone exports and their uses. Because if drones are already ubiquitous, then efforts to control their spread — whether through tight export controls or pressure on major producers to restrict their transfers, which Barack Obama’s administration is now contemplating in a long, contentious interagency review of U.S. drone exports — are unnecessary and even misguided.
The problem with this now commonly stated assumption — that the world is fully equipped with drones — is that while these news articles hyping a drones arms race are exciting, they are also misleading. Contrary to these sensationalist accounts, the international market for armed drones — the most potentially threatening and destabilizing type — is quite small. Actually, it’s minuscule, projected to be about $8.35 billion by 2018, around which time the global defense market is expected to reach $1.88 trillion, which would mean that drone expenditures will make up less than 0.5 percent of the world’s defense spending. Even though global drone expenditures are expected to grow roughly a billion dollars a year (though they actually fell from $6.6 billion to $5.2 billion between 2012 and 2013), the business of UAVs will remain little more than a small focus of defense spending outside the United States for the next decade.
Sounds sensible. So why is the “casual observer” Zenko mentions so afraid of a drone arms race? Why did news articles hyping “exciting” narratives of drone dystopia lead to the perception of a “runaway train” toward remotely piloted robotic hell? For Zenko, it has to do with a complex confluence of factors:
Part of the reason the public is so easily manipulated is that much of what is known about the development of armed drones is clouded in secrecy. Some countries, including the United States, maintain covert programs for obvious reasons like maintaining the strategic element of surprise, while others, such as Iran, boast of armed drones that have not been demonstrably used in order to garner national prestige. There are also government announcements of deadlines for developing them that appear to go unmet, as well as aspirational statements by drone manufacturers for orders that are never fulfilled.
Fair enough. But another reason the public is “so easily manipulated” is that the public takes its cues on what to worry about from policy elites that appear on TV, testify in public hearings, and write op-eds. Zenko has written a lot about how such elites inflate threats, despite the enviable security environment the US currently operates within. As Zenko powerfully argues, this threat inflation contributes to the perception of widespread danger out of proportion to reality. And it is precisely the eloquent arguments Zenko makes about threat inflation that best explain a large chunk of the drone hype he decries.
If the drone proliferation/arms race threat is indeed “exaggerated,” “misreported,” and riven with “sensationalist” accounts, Zenko himself has certainly contributed to the problem. In 2011, Zenko warned of just such a drone proliferation and arms race. The following excerpts come from a long Scott Shane piece, about a Chinese drone exhibition, that Zenko provided comment for:
The presentation appeared to be more marketing hype than military threat; the event is China’s biggest aviation market, drawing both Chinese and foreign military buyers. But it was stark evidence that the United States’ near monopoly on armed drones was coming to an end, with far-reaching consequences for American security, international law and the future of warfare.
Eventually, the United States will face a military adversary or terrorist group armed with drones, military analysts say. But what the short-run hazard experts foresee is not an attack on the United States, which faces no enemies with significant combat drone capabilities, but the political and legal challenges posed when another country follows the American example. The Bush administration, and even more aggressively the Obama administration, embraced an extraordinary principle: that the United States can send this robotic weapon over borders to kill perceived enemies, even American citizens, who are viewed as a threat.
“Is this the world we want to live in?” asks Micah Zenko, a fellow at the Council on Foreign Relations. “Because we’re creating it.”
In November 2013, Zenko wrote a piece for Politico titled “Robot Wars” warning again of Chinese drone use and drone proliferation:
The United States has asserted the right to use lethal force anywhere—“from Boston to the FATA,” as a senior Pentagon official put it during a Senate hearing in May—against terrorist organizations, some of which the U.S. government will not name, and outside the bounds of what the majority of the world believes is lawful targeting. But what happens when other countries start launching killer drone strikes? What will the United States do then?
We don’t have long to wait. Drones are quickly becoming a tool in just about every country’s toolbox. So far, only three countries are known to have conducted drone strikes outside their borders: The United States (more than 450 times across seven countries), the United Kingdom (300-plus strikes in Afghanistan using U.S.-provided Reapers) and Israel (reportedly in the Palestinian Territories and Egypt). But drones are spreading quickly. The U.S. Government Accountability Office estimates that the number of nations with drone systems grew from 41 in 2004 to 76 in 2012. In that time, global drone spending has more than doubled, from $2 billion in 2004 to some $5 billion today.
In a March 2013 National Journal piece with the rather unsubtle title “When the Whole World Has Drones,” Zenko is quoted making a similar precedent-based argument:
“We don’t like other states using lethal force outside their borders. It’s destabilizing. It can lead to a sort of wider escalation of violence between two states,” said Micah Zenko, a security policy and drone expert at the Council on Foreign Relations.
“So the proliferation of drones is not just about the protection of the United States. It’s primarily about the likelihood that other states will increasingly use lethal force outside of their borders.”
In a January 2013 USA Today piece entitled “Experts: Drone basis for new arms race,” Zenko makes a different argument, this time oriented around prestige:
Some analysts contend that nations seek the drones as much for the clout they bring as any military utility they provide, since few countries have the sophisticated sensors or precision weapons that the United States employs.
"It’s a prestige thing," said Micah Zenko, an analyst at the Council on Foreign Relations. "It doesn’t provide you with much additional combat capability."
Nor are these just tossed-off op-eds or media appearances. Zenko has evidently thought deeply and carefully about the problem of drone arms races/proliferation. Deeply and carefully enough to produce a work of policy analysis that stands above the ephemeral sound bite. In January 2013, Zenko penned a 53-page research paper for the Council on Foreign Relations that discusses, among other things, the risks of drone proliferation/arms-racing:
The second major risk is that of proliferation. Over the next decade, the U.S. near-monopoly on drone strikes will erode as more countries develop and hone this capability. The advantages and effectiveness of drones in attacking hard-to-reach and time-sensitive targets are compelling many countries to indigenously develop or explore purchasing unmanned aerial systems. In this uncharted territory, U.S. policy provides a powerful precedent for other states and nonstate actors that will increasingly deploy drones with potentially dangerous ramifications. Reforming its practices could allow the United States to regain moral authority in dealings with other states and credibly engage with the international community to shape norms for responsible drone use.
The current trajectory of U.S. drone strike policies is unsustainable. Without reform from within, drones risk becoming an unregulated, unaccountable vehicle for states to deploy lethal force with impunity.
I’ve changed this post as well to respond to some relevant feedback and critiques of my critique. Some earlier readers of this post (most notably Daveed Gartenstein-Ross) sensibly objected on Twitter that the 2014 piece and some of the 2011-2013 pieces can be read consistently, and that the block quotes of the 2011-2013 pieces do not make the case clearly on their own. Though I’ve refrained from engaging in a deeper discussion due to space concerns, Gartenstein-Ross’s critique necessitates a more nuanced analysis of the problem than the previous draft of this post offered.
So this expanded critique should probably note at the outset that Zenko is, after all, making some allowance for the possibility that “caricatures” of drone proliferation may become reality if we do not act according to the arguments he makes in his 2014 piece. This, however, comes after a long and sensible discussion of “structural and normative impediments” to proliferation. Why is this problematic? The CFR monograph, as the most detailed exposition of Zenko’s thoughts on drone proliferation, can be used as an explanatory device — particularly when compared to some of the other quotes above.
Many of the pieces cited, including the CFR monograph, portray proliferation as both the looming result of American usage of drones as well as the global diffusion of military technology. In particular, this passage from the CFR monograph reads very much like the kind of “caricature” that Zenko attacks in 2014:
Beyond the United States, drones are proliferating even as they are becoming increasingly sophisticated, lethal, stealthy, resilient, and autonomous. At least a dozen other states and nonstate actors could possess armed drones within the next ten years and leverage the technology in unforeseen and harmful ways. It is the stated position of the Obama administration that its strategy toward drones will be emulated by other states and nonstate actors. In an interview, President Obama revealed, “I think creating a legal structure, processes, with oversight checks on how we use unmanned weapons is going to be a challenge for me and for my successors for some time to come—partly because technology may evolve fairly rapidly for other countries as well.”
To be sure, there are many qualifications in the CFR monograph about the technical limitations state and nonstate actors face in developing the system architecture needed to deploy systems and carry out attacks against American interests. However, these qualifications are undercut by Zenko’s own useful point that the complete system architecture possessed by the US isn’t necessary to carry out local objectives — not all powers want to project drones as far and as hard as the US does. And the many qualifications within the monograph seem at odds with the larger point: that proliferation is, unless we act now, on the immediate horizon. The contradictions in the monograph are not fatal, but they should be noted here before moving on.
The CFR monograph seems to cast the drone as one half F-35 and one half AK-47: difficult to employ, costly, and support-intensive, yet also irresistible enough to necessitate American action due to the problems of proliferation enabling states to act with “impunity.” The monograph, as well as some other writings, resolves this contradiction by arguing that local powers can have something in between the AK and the F-35:
Based on current trends, it is unlikely that most states will have, within ten years, the complete system architecture required to carry out distant drone strikes that would be harmful to U.S. national interests. However, those candidates able to obtain this technology will most likely be states with the financial resources to purchase or the industrial base to manufacture tactical short-range armed drones with limited firepower that lack the precision of U.S. laser-guided munitions; the intelligence collection and military command-and-control capabilities needed to deploy drones via line-of-sight communications; and cross-border adversaries who currently face attacks or the threat of attacks by manned aircraft, such as Israel into Lebanon, Egypt, or Syria; Russia into Georgia or Azerbaijan; Turkey into Iraq; and Saudi Arabia into Yemen. When compared to distant U.S. drone strikes, these contingencies do not require system-wide infrastructure and host-state support.
Zenko’s article in Politico also makes this point fairly nicely:
Where combat missions are concerned, most emerging drone powers will be limited to launching attacks in nearby countries. (Think Saudi Arabia in Yemen; Rwanda and Uganda in the Democratic Republic of the Congo; Russia in Georgia; or Pakistan in Afghanistan.)
But one truism about new technologies is that almost everywhere they are deployed, people find other uses for them. Countries might start to send drones to wage drug wars or fight pirates off the coast of Somalia. And we can be sure drones will be deployed in ways we cannot yet imagine.
Proliferation expert Dennis Gormley has warned that Americans view armed drones as incredibly precise, with low yields and limited consequences. But what happens the next time an embattled dictator like Syria’s Bashar al-Assad decides to use chemical weapons and finds that, as Gormley argues, drones are actually ideal delivery systems for weapons of mass destruction? The United States is going to demand a new policy—and quickly.
The CFR monograph also contains material describing the drone threat from non-state actors, but dismisses it as insignificant. Hence the chief argument concerns states — and a wide range of states, from great powers like China to tinpot dictatorships like Assad’s Syria. This doesn’t resolve the larger problem of large-scale proliferation/arms races of deadly capabilities being at once difficult, structurally limited, and nonetheless on the horizon as a challenge for American security. But it does make a somewhat cohesive case for the larger questions of international order and blowback that Zenko warns of.
There is a larger problem embedded in this critique that I’d also like to take some time to discuss. First, Zenko’s 2014 piece argues that the perception that the world is currently awash in drones is wrongheaded and the product of scaremongering and limited information. It is the future that is the primary problem — a future still capable of being molded and shaped. This is a sound argument — the future does matter. Yet in some of his past pieces, Zenko argues as if the future is right around the corner. It is difficult to read passages that raise Syrian hypotheticals or other states conducting counter-pirate or counter-drug operations without thinking that we are at the beginning of the drone proliferation/arms race that Zenko is warning of. So one could be forgiven for holding the mistaken impression of a drone-filled world that 2014 Zenko seeks to combat.
Indeed, in one of the linked pieces Zenko says “we don’t have long to wait.” This is a dystopian drone world that we are “creating.” And had the Chinese actually executed the Burma strike that Zenko says they contemplated, would that not be the beginning of the improvised drone problems he foresees? For what it’s worth, open sources suggest it could have happened. Time and space begin to collapse, particularly in pre-2014 Zenko’s description of the threat horizon.
But Zenko in 2014 raises more impediments to drone employment by precisely the same class of actors he described as likely near-term dangers in his pre-2014 pieces. These 2014 impediments include the difficulty of producing advanced drone technology, declining defense budgets, and domestic political backlash. Note that there is no argument in 2014 (as in earlier pieces) that many of these “structural and normative impediments” can be bypassed with a sufficient amount of clever tinkering, limited objectives, and other hacks and cracks.
Instead, Zenko argues that the “truth” of proliferation is important, and that shining truth is that drones are pretty hard to use and may not be all that useful or desirable in the first place:
As the United States has learned, armed drones are not markedly cheaper than manned fighter aircraft, and in some situations they are actually more expensive. Human intelligence is costly and required in large numbers to analyze and disseminate the full-motion video and signals intelligence collected by drones. Before committing to redirect precious defense dollars, governments must identify the military missions for which armed drones are uniquely suited and that cannot reliably be achieved by the weapons systems currently in their arsenals. To date, the majority of governments worldwide simply have not rushed away from manned aircraft, rocket and artillery, or special operation forces — and toward armed drones.
Trying to square this with near-term visions of a drone-enabled Assad, a Chinese drone power blowing up drug lords in Burma, and counter-pirate and counter-drug foreign droning is difficult. Perhaps most puzzling is Zenko’s kicker: the real drone problem is export controls:
[T]he Obama administration is in the final stages of a long, contentious interagency review of U.S. drone exports. If the White House’s strategy is based on the misperception of a world characterized by limitless drone proliferation, then a policy of markedly reduced barriers for U.S. drone exports is sensible, because states would ultimately acquire armed drones irrespective of U.S. policies. If, however, proliferation does have structural and normative impediments, then how the United States — as the largest manufacturer of armed drones — develops its export strategy could have an impact on the breadth and speed with which the technology diffuses. And then some of the caricatures of drone proliferation may end up being credible. The result could be that more states will be armed with the low-risk technology that arguably lowers the threshold for using force, with potentially destabilizing consequences for regional and international security.
If it’s really just export controls that are the gateway to the “caricature” future that Zenko fears, then the US arguably could drone terrorists away without considering any of his previous precedent-based arguments, as long as it keeps a tight grip on exports. Why not just drone the terrorists with a light conscience, if states can’t or won’t make killer robots anyway and the only way they could get them is if we allow it? Indeed, as detailed later, Zenko argues that even advanced industrialized states are having problems making and deploying drones. If drones are so inefficient and difficult to make, and future use of them so tentative, that the only guaranteed pathway to drone dystopia is Uncle Sam giving the world drones, then the drone problem must be vastly different than we have imagined it.
As I’ve hinted before, Zenko’s arguments about drones have always been about the shadow of the future — and future risk at that. Hence while inconsistencies and ambiguities in his writing about when that future takes place may plausibly give rise to false perceptions of a world that is currently or on the verge of being flooded with killer robots, this doesn’t necessarily show the problem with the “shadow of the future” argument in abstract. I will have to explain a bit more about how the “shadow of the future” argument breaks down between 2013 and 2014 Zenko pieces.
The shadow of the future problem comes from the inconsistencies between the 2014 characterization of future drone dystopia and the 2011-2013 characterizations. Specifically, it comes from squaring past arguments about why we should be concerned with drone proliferation and arms races with present ones.
Some past arguments:
(1) A state might, in the near future, use drones in a way that violates international humanitarian norms or raises the risk of conflict in the world.
(2) We don’t know what kinds of hacks, tinkering, and innovations states and other actors will improvise to make drones work their way.
(3) Somehow, a larger, more long-term future of drone proliferation/arms races is coming even beyond the near-term risks of (1) and (2).
While, as I’ve noted before, the many technical qualifications in the CFR monograph pose problems for (3), they don’t necessarily impact arguments (1) and (2). In the 2011-2013 versions, Zenko can defend (1) as an immediate or at least near-term problem (hence the China example or the Assad hypothetical) and use the vaguer, more generalized fear of proliferation to argue for (2) and (3). Again, there is a problem squaring the technical qualifications with (2) and (3), but it is not insurmountable.
However, in 2014 Zenko’s characterization of the risk has radically changed — “the drone invasion has been greatly exaggerated.” Instead of present or emerging drone threats that pose dangers of violent escalation in state dyads or regions, we have “misleading” articles about rogue state drone capabilities that are in reality militarily insignificant. And while militaries are “pursuing” drones, as Zenko notes earlier these drones seem to be quite militarily pointless and even undesirable. The shadow of the future has also changed to one in which it is not necessarily US legal precedent combined with diffusion that is the real path to drone hell, but the threat of the US supplying other actors with drone capabilities that they are either incapable or unwilling to generate themselves.
In sum, Zenko’s 2014 article makes a strong case that there isn’t a drone arms race or proliferation threat on the horizon. Budgets are tight. The technology is too difficult to understand or produce. There are too many domestic political impediments. Zenko clearly states that other countries “have not followed the United States’ lead” in their particular drone strategies. Zenko even notes that the current enemy du jour, Russia, is “struggling” to master the necessary technology despite having a “relatively advanced aerospace program” — just as similarly advanced NATO allies France and Italy do! Only the United States’ hypothetical supply of drones to other states leads us to drone apocalypse.
So in this 2014 article we have simultaneously moved away from the specific near-term threat scenarios implied by (1) — states getting drones and using them for ill in regionally destabilizing ways — and from the more generalized, precautionary “we don’t know what will happen” arguments implied by (2) and (3), to a wholly different world. It’s a more ambivalent, contingent world in which drones are neither attractive nor necessarily efficient nor cost-effective for states to make, and where even advanced states that produce manned F-35 competitors (Russia) struggle to make drones the way Washington does. In fact, Zenko is arguing that these countries aren’t up to the task of producing an “advanced armed drone” to begin with. In such a world, only the United States handing out drones like hotcakes can make “caricatures” of drone proliferation reality.
Zenko’s work has both simultaneously promoted those caricatures and dismissed them. To be fair, his 2011-2013 writings on drones have plenty of caveats. But those caveats seem to have suddenly grown radically stronger in the span of only a year. One cannot go from a risk future that consists of known unknown near-term risks, unknown unknown near-term risks, and combinations of both categories for long-term risks to one in which the best state candidates for production of the purportedly dyad and region-destabilizing drones struggle at the task of engaging in dronery. And one cannot especially go from a risk future in which US behavior and justifications are the problem to one in which US export strategy is the problem. One is about copying what we say and do with tech that they will develop if present trends hold, the other is about America giving them the tech.
Yet this is more or less what Zenko has done. It isn’t a black and white inconsistency in which 2011-2013 says that the sky is blue and 2014 says that it’s black. Rather, it’s a matter of the likelihood of proliferation and the characterization of proliferation and similar risks shifting over time in ways that bear only faint resemblances to each other. Either drone proliferation/arms races are likely if all things are held equal or they are not likely unless the US goes out of its way to enable them through exports.
And again, the 2014 post is also silent about the less demanding drones that other states do have — and that the 2011-2013 writings warned they could and even would have in the near future — drones used to potentially blow up pirates and drug traffickers, deploy weapons of mass destruction, or trigger destabilizing regional problems or dyadic rivalries. In the end, this hell is still a possible future — but only if the US enables drone exports in a thoughtless way.
But another contradiction with 2011-2013 Zenko arises from this argument. Forget US drone exports — what about the 2011-2013 Zenko argument that (as Zenko’s Politico piece details) countries can and will find a way? In fact, Zenko seems to make a strong case that whether or not the US chooses to export is actually irrelevant:
The United States has agreed to sell such lethal drones only to its closest allies, but countries are finding other means of acquiring them. “The United States doesn’t export many attack drones,” a representative of a Chinese aircraft manufacturer said in 2011, “so we’re taking advantage of that hole in the market.” Other countries are building their own drones. Take Iran, for example, whose defense minister in May announced that the country’s newly unveiled Hemaseh (or “Epic”) drone is “simultaneously capable of surveillance, reconnaissance, and missile and rocket attacks.”
This argument is consistent with the CFR monograph’s points because it prefaces it with the following:
Most drones that other states develop or acquire will be unarmed—even in the U.S. arsenal, less than 5 percent of the drones can drop bombs—but perhaps a dozen other states could possess armed drones within the next 10 years.
But these states will get (and, as Zenko often notes, copy US justifications to dangerously use) armed drones without the vaunted US exports. And again the qualifications emerge: less than 5 percent of drones can drop bombs, most drones that other states develop will be unarmed, and so on. As I’ve previously noted, the qualifications are not deal-killers, but they certainly complicate even the older arguments.
It’s possible that Zenko, if reading this post, may respond to this by arguing that his 2014 piece focuses more specifically on one factor that is most likely to stimulate the processes he has already warned of — that factor being US drone exports. But this still creates an inconsistency due to the way that Zenko has seemingly raised the level of “structural and normative” impediment to drone proliferation/arms races in a way that has not been evident in the 2011-2013 analysis. 2011-2013 Zenko stressed the possibility of emulation of US drone architectures and the negative effects this would have. 2014 Zenko stresses the difficulties and undesirability of emulation and the fact that the US enabling the actors to emulate through direct transfer is the most significant problem.
The point of this isn’t to say that Zenko is a hypocrite or to play a “gotcha” game. Pundits put out such a sheer volume of work that any one of the many writers on foreign policy, national security, and strategy could be similarly attacked via a Google-informed deep dive through their back issues. Including myself — later in this post I point out an instance where I actually cited Zenko to make a Zenko-ish argument. None of us are consistent enough to withstand such an exhaustive comparison, and with time we change our minds, see new data, and rethink our perspectives on the issues.
After arguing the contrary case for years, Zenko now evidently believes that drone arms races and proliferation threats are exaggerated and that the notion of such robotic military competitions is analytically unhelpful. If that’s indeed true — a more informed and improved take on the issue — then good for him. Not many pundits are capable of changing their minds; just look at all of the people who advocated for the Iraq War and still think it was a good idea!
However, the reason why I took my readers on a trip down memory lane is to point out the importance of context and herd mentalities in punditry. From 2011 to 2013, with few interruptions, the dominant narrative in security punditry was that of the post-Iraq/Afghanistan Obama administration and its preference for standoff, covert tools like drones, special operations, and cyber exploits. Zenko, like many other analysts, was against much of the Obama administration’s policies, strategies, and tactics. He and others argued that Obama wasn’t just threatening civil liberties, alienating foreign fence-sitters, pursuing terrorists and state rivals ineffectively, and wasting American resources. That was all bad enough, but the distinguishing characteristic of a Zenko take on this issue (though, as I argue, he wasn’t necessarily alone, even if he was one of the most prominent voices making such an argument) was something different: he couched the dangers in terms of blowback.
And not just Cold War-era blowback. Zenko discussed a special variant that took a very distinctive form. From vaporizing terrorists with Hellfire missiles to throwing cyber attacks at Iranians, Obama was setting precedents that would come back to haunt America. Why? The very weapons and legal and political rationales the US was employing would proliferate. Other state and non-state actors would use them in a way that would not only threaten American security and diplomatic objectives, but undermine the basis of the international order. To make matters worse, the American lethal combo of drones, malware, and lawyers was stimulating an arms race and proliferation of capabilities that could only further threaten both America and the structural foundations of the international system.
It’s a powerful argument. And it was a popular one for roughly the period (2011-2013) that Zenko warned of our alleged sleepwalk into drone proliferation/arms race catastrophe. Despite Zenko’s own insightful analysis of threat inflation, these kinds of blowback arguments are distinct claims about threats to both American national security and the world — threats that Zenko now feels are exaggerated and sensationalized. In essence, Zenko 2014 is implicitly saying that the previous Zenkos writing on drones and similar subjects were engaging in the very threat inflation he so insightfully diagnosed in his work on threats and “clear and present safety.” But it’s easy to be swept up by the herd. And this is where Zenko was not necessarily unique. Almost everyone who could get op-ed space was doing it too. And the herd mentality was likely self-reinforcing.
If you click through to the news pieces I linked where Zenko is quoted, you’ll find a lot of other reputable experts making the same point. And some of today’s skeptics of drone skepticism also originated as drone critics. Believe it or not, my blogfriend Joshua Foust, known in the defense blogosphere as a staunch skeptic of drone skeptics, once sounded the alarm about drones when he was a columnist for the Atlantic. Hell, even I made a Zenko-ish argument about the dangers of spec ops, raids, and drones after the Bin Laden operation, and cited Zenko’s academic work on discretionary raids to make my point! So a 2011 version of me found — for a brief time — the 2011 Zenko take on Obama’s natsec policies persuasive enough to invest my own analytical credibility in it. I can’t fault Zenko for making an argument that I made with his (indirect) assistance. Such was the unconscious fashion of the analytical era in which we all were writing our pieces on drones, counterterrorism, and threats.
However, as I argued myself for a while (and now Zenko argues himself), the herd mentality led to misleading and oversimplified arguments. The problem with the blowback narrative is that it is sensationalized. The reason why it’s so easy to sensationalize is that it seems almost commonsensical. We all learn as children that, as Justin Timberlake sang, “what goes around comes back around.” So it’s tempting to apply such logic to the world of arms and influence, and fear that we’ll one day find ourselves with a monster of our own making — only for everyone else to sing (to continue my J.T. metaphor) “cry me a river.” Assuming the 2014 “it’s an exaggeration” Zenko is right, it implies the 2011-2013 Zenkos and others who made such arguments did sensationalize and exaggerate a problem and analytical question that is far more ambiguous and difficult to parse.
Given that Dan Trombly and I wrote a lot at Abu Muquwama about drones, proliferation, counterterrorism, arms races, and blowback, you may be curious what I have to say about these issues in 2014. As Upworthy would say, the answer will surprise you. You might expect me to gloat and say “I told you so, na na na na.” But that’s not what I believe these days. Like Zenko, my mind has changed with time and distance from my own arguments. In all honesty, I’m really not even half as sure about this issue as I was when I penned my Abu M pieces. Why did I become less sure?
Reading the work of military diffusion and innovation theorists like Michael Horowitz, Thomas Mahnken, Stephen Biddle, Emily O. Goldman, Dima Adamsky, and Evan Laksama has complicated analytical judgements that I once was arrogantly sure of. I also learned a lot more about computer science and artificial intelligence and acquired hands-on experience painfully debugging code and programs of my own. Lastly, time spent as a PhD student in hard research methods courses and having to pick apart the research of giants in various fields has made me realize that the answer to the question isn’t going to come from someone like me or Dan.
The best take on drone arms races/proliferation is going to come from someone dogged and determined enough to systematically survey the literature, the relevant statistics, and the contradictory lessons of strategic history. Someone who takes the time to do the research and becomes to drone arms races/proliferation what my friend Benjamin Armstrong is to Mahan. So, Armstrongs of the killer robot arms race/proliferation world — the door is open. Go out there and research! Write a dissertation or two. Or at the very minimum do something as systematic and well-argued as Ben’s USNI book on Mahan. Or for that matter be to drones what Mahan was to seapower — and I’ll cheer you on every step of the way.
What I do know is this: problems like drone arms races and proliferation do not have simple a priori answers or inevitable logics. The 2011-2013 Zenko was wrong to write as if drone arms races/proliferation were obviously inevitable. And in turn I’m not really sure that the 2014 Zenko is correct that the 2011-2013 Zenko obviously sensationalized the drone arms race. Another possibility is that both Zenko 2011-2013 and Zenko 2014 are wrong. It could very well be that drone arms races/proliferation will or won’t occur, but not for the reasons Zenko, Dan, or I have argued. It’s an open question, and my mind is open to those who can marshal the evidence and argumentation to make a solid case one way or another. But if they do, they should represent the complexities of the issue — and resist the temptation to slide into a deceptively simple and parsimonious narrative.
The obvious naturally intelligent agent is the human being. Some people might say that worms, insects, or bacteria are intelligent, but more people would say that dogs, whales, or monkeys are intelligent. One class of intelligent agents that may be more intelligent than humans is the class of organizations. Ant colonies are a prototypical example of organizations. Each individual ant may not be very intelligent, but an ant colony can act more intelligently than any individual ant. The colony can discover food and exploit it very effectively as well as adapt to changing circumstances. Similarly, companies can develop, manufacture, and distribute products where the sum of the skills required is much more than any individual could master. Modern computers, from low-level hardware to high-level software, are more complicated than any human can understand, yet they are manufactured daily by organizations of humans. Human society viewed as an agent is arguably the most intelligent agent known.
David L. Poole and Alan K. Mackworth, Artificial Intelligence: Foundations of Computational Agents, 2010, 6.
Plucking a few events out of the vastness of the world and declaring them to be the news of the day is a mysterious and complicated project. Sometimes what’s news is inarguable—the outbreak of war, a head-of-state transition, natural calamity—but very often it falls into the category of the resonant incident. It isn’t a turn in the course of history, but it strikes editors as illustrative of something important.
— A particularly nice quote that Jay Ulfelder found in a piece on the Kitty Genovese case.
The ancient Romans had a game similar to rugby called harpastum. The goal was to get the ball to the end, and since there were no rules against grappling, injuries were common. Beyond that the rules varied. Galen, the famous Roman physician, claimed that harpastum was one of the greatest exercises: “better than wrestling or running because it exercises every part of the body, takes up little time, and costs nothing,” that it was “profitable training in strategy,” and that it could be “played with varying degrees of strenuousness.”
Danny Butterman: Have you ever fired two guns whilst jumping through the air?
Nicholas Angel: No.
Danny Butterman: Have you ever fired one gun whilst jumping through the air?
Nicholas Angel: No.
Danny Butterman: Ever been in a high-speed pursuit?
Nicholas Angel: Yes, I have.
Danny Butterman: Have you ever fired a gun whilst in a high-speed pursuit?
Nicholas Angel: No!
A while back, Daniel Drezner wrote a prescient post about the emotional exhaustion that he observed as indicative of a certain persuasion in foreign affairs analysis:
The key things to realize about the neoconservative worldview are that:
1) Reputation and the image of strength are everything;
2) Countries bandwagon to the strong states and eschew the weak states;
3) Even the slightest concession in the present weakens one’s reputation and strength for the future; so
4) Any concession in a present negotiation ineluctably leads to unconditional surrender in the future.
Where I think Drezner errs is in ascribing this tendency solely to the neoconservative worldview. This isn’t really a failing that necessarily issues from the neoconservative perspective — in large part because the much-overused “neoconservative,” like its similarly neo-’d cousin “neoliberal,” is a clumsy moniker for a more diffuse set of beliefs, ideas, personalities, and policies than its promiscuous usage implies. Hence, as some early readers of this post reminded me, it’s hard to say which “neoconservatives” Drezner’s analysis applies to. But that isn’t really the important part of why the tendency is more general.
With a few alterations, this Drezner précis could also describe the domestic political “horse race” coverage that Nate Silver so famously battled against. A world where every incident that receives a lot of press coverage is a “Game Changer”, a world where the election is forever “up in the air” despite statistical evidence to the contrary, and a world where victory depends on the strength, cunning, and resolve of powerful men alone.
Or, more colloquially, the world that Call of Duty: Modern Warfare 3's villain Vladimir Makarov describes with his speech during the MW3 reveal trailer: “It doesn’t take the most powerful nations on Earth to create the next global conflict. Just the will of a single man.”
To call this a Great Man theory of history or world affairs is to misdiagnose it. Take 24's Jack Bauer, for example. Jack Bauer is integral to saving the day in every season of 24. We can’t laugh entirely at this notion — recent work in international relations shows that the hero can’t be dismissed. Leaders do shape the structure of the international system, even if they are also constrained by it. But the idea that the hero is important (and how important he is) isn’t really the key assumption (or debating question) posed by this kind of worldview.
Rather, every action movie is just a snapshot of one bad day/week/month out of many more in a year. Jack Bauer may have to be on point for those unlucky 24 hours, but what does he do the rest of the year? Likely file paperwork, practice his marksmanship at the range, and deal with a number of lesser cases that never rise to the all-consuming importance of That Day In Which He Expends Bullets and BlackBerries Like An Endless Round of Team Deathmatch. All of that occurs offscreen and doesn’t matter, just as Jack Bauer is never shown on the toilet or doing any other sort of mundane activity.
The biggest structural unrealism embedded in 24 is the idea that only That One Day matters. That the fate of the nation and sometimes the world hangs in the balance for 24 hours, and depends entirely on Jack, Chloe, and the team. Those 364 other days? Irrelevant. The problem with this worldview when applied to any large-scale endeavor, be it an election or international security crisis, is that in real life it will never be clear which day is the critical time. There is no clock steadily ticking down, no Kiefer Sutherland voice-over narration. There will be, however, a great many days that seem at first glance to be important but upon closer analysis can be dismissed as noise — pure and ephemeral randomness. And a lot more determines the dynamics and end result than just Jack’s marksmanship and Herculean ability to rapidly maneuver throughout the greater Los Angeles and New York City areas in defiance of the worst traffic in America.
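The point about noise is easy to make concrete. Here is a toy simulation (my illustration, not anything from the original post; the support level, poll size, and 3-point threshold are all made-up numbers) showing how a race whose fundamentals never move still produces a steady stream of apparent “game changer” days:

```python
import random

random.seed(42)  # reproducible run

TRUE_SUPPORT = 0.52  # the "fundamental": never moves all year
POLL_SIZE = 800      # respondents in each daily tracking poll
DAYS = 365

def daily_poll() -> float:
    """One day's poll result: pure sampling noise around an unchanging fundamental."""
    hits = sum(1 for _ in range(POLL_SIZE) if random.random() < TRUE_SUPPORT)
    return hits / POLL_SIZE

polls = [daily_poll() for _ in range(DAYS)]

# A horse-race pundit who calls any 3-point day-over-day swing a
# "game changer" will find plenty of them -- every one of them noise.
game_changers = sum(1 for a, b in zip(polls, polls[1:]) if abs(b - a) >= 0.03)

print(f"Apparent 'game changer' days in a race that never moved: {game_changers}")
```

Nothing in the underlying race ever changes, yet dozens of days look like turning points; the precise tools of the “dull” and “incremental” analysts exist exactly to avoid being fooled by those days.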
Worldviews that place great importance on the notion of a singularity at which the fate of the world will be decided by the clashing wills of titans have a bad habit of seeing every instance that laps up press attention and fits a comfortable Götterdämmerung/Armageddon narrative as the time to pray to Saint Jack to pick up his pistol and his BlackBerry, instinctively say “dammit Chloe,” and leap into action to stop the terrorists.
So what happens when you (1) over-classify game changers and (2) inevitably see most of your game-changer predictions fail? You can rationalize it in one (or both) of two ways: by ignoring the failed predictions outright, or by folding them into a larger narrative.
Upon closer examination, both perspectives are essentially the same. They just use a different argumentative strategy. The first one simply ignores the past, the second one uses it instrumentally as part of a narrative (see: Munich analogies) that finds great moments in history where the iron dice are cast and the fate of Mom, Apple Pie, and The Flag hangs in the balance.
ThinkProgress blogger (and soon to be Project X'er) Zack Beauchamp tweeted a while ago that structures, regularities, and fundamentals matter — and that while analysts of domestic politics had come to understand this, foreign policy analysts had not. The problem is that the perspective one garners from taking into account structures, regularities, and fundamentals is one that sees a high level of noise in the world and attempts to develop highly precise tools for cutting through it to find the signal. It is a perspective that leads to “incremental” and “dull” analysis and better fits the temperament of the accountant than the romantic.
When it comes to much of what we regard as politics and punditry, the lack of passion and romanticism that soberness necessitates is a feature, not a bug. Many are drawn to politics because they are romantics, full of passion and aesthetic verve rather than statistics books or “dull” and “incremental” scholarship about the stuff of both domestic and international affairs. It is no wonder, perhaps, that the kind of worldview I describe here is more than just action movie heroics taken as holy writ. It is also in many respects a kind of throwback to the sort of Victorian-era mythos and tropes that Spengler and others mined for their works. The world has become decadent and complacent, unable to notice a vigorous, vital, and evil force emerging from the darkness. It strides over the weak forces of order, defeating them with frightening ease. A titanic battle looms, one that will be the defining struggle of the age.
As art, there is nothing more sublime than this sort of aesthetic. It’s what makes John Boorman’s Excalibur the greatest of all of the cinematic representations of the Arthur mythos. And it’s what, on a more vulgar level, made 300 and Lord of the Rings so popular at the box office. It is the height of drama, tension, and narrative payoff. If everything wasn’t on the line, why watch Rocky fight in the first place? But as a guide to everyday life — well, there have been entire dissertations written on how using themes that could have been stolen from Wagner operas as a wellspring for your political philosophy and rhetoric leads to dangerous ideas about public policy.
Andreessen Horowitz’s Marc Andreessen, commenting on Newsweek's recent game of “Where’s Satoshi?”, tweeted that “there is a growing CP Snow-style divide between people who trust math/science/tech and people who trust people/institutions.” That’s true, but it’s also lamentable — the world genuinely needs those who are versed in the technical as well as the social and political. Both intersect, as Herbert Simon implies with his metaphor of a technological artifact’s inner environment and the outer environment it must interface with.
But another far more serious divide looms — between those who see politics in “dull” and “incremental” terms and those who view politics as a sort of literary romance. Unlike Andreessen’s divide between the technical-trusters and the people/institution-trusters, this one is harder to bridge. There isn’t necessarily a natural or obvious intersection between the two camps. There is very little romanticism in R scripts, Bayes, or prediction models. The lyrical and passionate Nassim Nicholas Taleb of The Black Swan fame seems to be the exception to the rule, but an exception nonetheless.
So if you’re analyzing politics, you can regard the world as a dark and uncertain place full of noise and randomness — and your tools as imperfect, fallible instruments you use to try to find the signal. It’s a viewpoint that I’ve struggled with myself, in large part because of my instinctive attraction to the romantic view. Hey, at the end of the day I bought MW3 because of the Makarov “will of a single man” voiceover in the trailer. I’m not automatically in the first camp — if I were, why would I need to go into a PhD program to acquire that sort of analytical mindset? I haven’t acquired it yet, and I’m still working on it.
You could, however, forgo the attempt entirely. You can aim for the romantic view, where the answers are already known, every struggle is the beginning of that 24 hours ticking down, and the man who wins is he who can fire a gun whilst jumping through the air. It might be more fun, it might save you from being “dull” and “incremental,” but it’ll also, as Drezner argues, exhaust you. Many people have opted for this fork in the road instead.
Maybe one day can matter — though not as a metaphysical, Spenglerian conflict. If one day matters, it is as the product of a particular confluence of structures, fundamentals, leaders, information, and conflicts. And we can acknowledge that one day can make the difference while also acknowledging that the outcome could just as well be the slow accretion of other days that culminates in a strategic decision. Or perhaps the thing we want to explain or predict is a combination of the two. The choice boils down to whether or not we want to approach such an analysis from the romantic perspective — or whether we aim for something else: more frustrating, incremental, inconclusive, and ultimately more rewarding.
I have mostly stayed out of the debate between Robert Farley and his air-minded critics. But the recent War on the Rocks response to his piece makes me want to comment on how remarkably weak the responses to his book have been. Let’s be clear that I disagree with Farley’s thesis as well. But since this topic isn’t really that interesting to me, you’ll have to talk to me offline if you want to know why.
Rather, what interests me is simply how little reflective thought has gone into the actual institutional and social scientific dilemmas this subject raises.
There are two dominant problems that are clear to any casual observer of inter-service architecture and the military institutions literature.
(1) Institutions have biases, some of which may be general (e.g. Barry Posen’s argument that military institutions will seek to maximize autonomy and offensive doctrines) and others specific to whatever culture or operational code structures institutional life.
(2) What is optimal for one actor is not the basis by which that actor’s performance ought to be judged. Aggregate effectiveness, not institutional effectiveness, matters when the actor is part of an overall grouping that must cooperate to achieve some goal.
This paragraph from the response flat-out denies the first issue and minimizes the second:
“An independent Air Force, drawing on decades of combat experiences in the air domain, is best suited to create air-centric doctrine. United States dominance in the five core Air Force missions would be diluted and dominance in the other domains would be at risk if the other services were forced to absorb the Air Force’s mission and responsibility for air domain doctrine. ….
Each branch of the U.S. military has unique, service-specific priorities based fundamentally on the domain in which it fights. …. By focusing on one domain, each service organizes itself to maximize effectiveness in that domain. Adding additional domain requirements would create an intrinsic conflict in organization.”
This response is premised on the idea that discrete domains exist and each service — like a firm in economics — maximizes comparative advantage. But while domains are a nice way for DoD budgeteers to differentiate organizational roles and missions, Sam Liles convincingly argues that for the most part they are completely artificial:
United States doctrine and force structure is built around the domains of air, sea, land, space and now cyber. Domains as defined create cylinders of capability that can be merged and fought within. The domain construct is as much a historical artifact as it is an efficient categorical system. The military force structure to fight within these domains is an air force, army, and navy. The Marine Corps is an expeditionary force between the sea and land (and other tasks as designated). This structure as defined has inherently created a strategic blindness to the capacities, capabilities, and risks of conflict where they meet. This is especially true when dealing with cyberspace.
Hold up your left hand and look at your fingers. Each finger denotes a domain that United States doctrine defines. The palm of your hand represents the joint functions of these domains. When formed into a fist this meshing of national power assets represents a significant amount of power that is bent toward national strategic objectives.
I encourage readers to click through and read his “alien invasion” hypothetical to see just how artificial the service-domain distinctions are. What this implies is that services don’t get a cookie for maximizing effectiveness in their particular area of specialization. If the price of one service achieving its maximal effectiveness in one area is a suboptimal overall military utility, then that optimal effectiveness isn’t the goal to be sought.
To see just how ridiculous this idea is, imagine what would happen if your car crashed and the manufacturer explained it away as “maximizing rear-wheel domain effectiveness.” You want a functioning automobile, not something that accepts an overall task inefficiency because of the importance of having each discrete component optimized to the fullest extent possible. And even this analogy is generous, in that it requires accepting that organizational bias isn’t restraining the organization from maximizing its own discrete area of specialization.
Here’s how to argue against Farley.
(1) Either deal with the service culture/aggregate effectiveness arguments or argue that they’ve been misconceptualized. Don’t ignore them altogether.
(2) Argue that the status quo, while inefficient in many respects, is superior in some way to Farley’s preferred alternative. Perhaps there is a larger benefit to a 3-service architecture (such as, perhaps, using inter-service competition to reduce the military’s collective-action power and thus improve civilian control? Again, I’m not interested in this issue, so it’s up to you, dear readers).
Right now, though, the responses to Farley’s book have looked as troubled as the F-35 production cycle.