Excerpt: How Wars End
Chapter 1. The Clausewitzian Challenge
In late March 2003, the United States and a few allies invaded Iraq. Some of the war's architects thought things would go relatively smoothly once the enemy was beaten. As National Security Adviser Condoleezza Rice put it in early April, "We fundamentally believe that when the grip of terror that Saddam Hussein's regime has wreaked on its own people is finally broken and Iraqis have an opportunity to build a better future, that you are going to see people who want to build a better future—not blow it up."1
Others involved in the operation were more apprehensive. Lieutenant Colonel Steven Peterson was on the military staff that planned the ground campaign. He noted afterward:
Over a month before the war began, the Phase IV planning group concluded that the campaign would produce conditions at odds with meeting strategic objectives. They realized that the joint campaign was specifically designed to break all control mechanisms of the [Iraqi] regime and that there would be a period following regime collapse in which we would face the greatest danger to our strategic objectives. This assessment described the risk of an influx of terrorists to Iraq, the rise of criminal activity, the probable actions of former regime members, and the loss of control of WMD that was believed to exist. It … [identified] a need to take some specific actions including: planning to control the borders, analyzing what key areas and infrastructure should be immediately protected, and allocating adequate resources to quickly re-establish post-war control throughout Iraq.
These concerns and recommendations were brought to the attention of senior military leaders, "but the planners failed to persuade the Commanding General and dropped these issues with little resistance."
In retrospect, this episode seems mystifying. It is bad enough not to see trouble coming. But to see it coming and then not do anything about it might be even less forgivable. How could such crucial, and ultimately prescient, concerns have been dismissed and abandoned so cavalierly? "Because," Peterson continued,
both the planners and the commander had been schooled to see fighting as the realm of war and thus attached lesser importance to post-war issues. No officer in the headquarters was prepared to argue for actions that would siphon resources from the war fighting effort, when the fighting had not yet begun…. Who could blame them? The business of the military is war and war is fighting. The war was not yet started, let alone finished, when these issues were being raised. Only a fool would propose hurting the war fighting effort to address post-war conditions that might or might not occur.2
Lieutenant General James Conway, the commander of the 1st Marine Expeditionary Force, which helped capture Baghdad, was even more succinct. Asked whether postwar planning inevitably gets short shrift compared to planning for combat, he replied, "You know, you shoot the wolf closer to the sled."3
The Iraq War will long be remembered as a striking example of such attitudes and their unfortunate consequences, but it is hardly the only one. In fact, the notion of war-as-combat is deeply ingrained in the thinking of both the American military and the country at large. Wars, we believe, are like street fights on a grand scale, with the central strategic challenge being how to beat up the bad guys. This view captures some basic truths: America's enemies over the years have been very bad indeed, and winning wars has required beating them up. But such a perspective is misleading because it tells only half the story.
Wars actually have two equally important aspects. One is negative, or coercive; this is the part about fighting, about beating up the bad guys. The other is positive, or constructive, and is all about politics. And this is the part that, as in Iraq, is usually overlooked or misunderstood.
The coercive aspect of war involves fending off the enemy's blows while delivering your own, eventually convincing your opponent to give up and just do what you want. This is why Carl von Clausewitz, the great Prussian military theorist, defined war as "an act of force to compel our enemy to do our will." The constructive aspect involves figuring out what it is that you actually want and how to get it. This is why Clausewitz also defined war as "an act of policy … simply a continuation of political intercourse, with the addition of other means."4
Keeping this dual nature of war fully in mind at all times is difficult. It means recognizing that every act in war has to be judged by two distinct sets of criteria—political and military—and perhaps even by two distinct institutional sources of authority. This is messy, and nobody likes a mess. So there is a great temptation for governments to clean up matters by creating a clear division of responsibility. Civilians should deal with political matters, in this view, and military leaders should deal with military matters, and control should be handed off from the politicians and diplomats to the generals at the start of a conflict and then back to the politicians and diplomats at the end. As U.S. Central Command (Centcom) commander Tommy Franks put it to the deputy secretary of defense on the eve of the Iraq War, "You pay attention to the day after, I'll pay attention to the day of."5
Unfortunately, the clear-division-of-labor approach is inherently flawed, because political issues can permeate every aspect of war. The flaws can sometimes be obscured during the early and middle stages of a conflict, as each side tries to defeat the other on the battlefield. But at some point, every war enters what might be called its endgame, and then any political questions that may have been ignored come rushing back with a vengeance. "The main lines along which military events progress," Clausewitz observed, "are political lines that continue throughout the war into the subsequent peace…. To bring a war, or one of its campaigns, to a successful close requires a thorough grasp of national policy. On that level strategy and policy coalesce: the commander-in-chief is simultaneously a statesman."6
With the war's general outcome starting to become clear, the endgame is best thought of as a discussion over what the details of the final settlement will be and what will happen after the shooting stops. The problem is that this discussion, whether implicit or explicit, takes place under extremely trying circumstances. At least some officials on both sides may now be considering sheathing their swords, but they are doing so against the backdrop of the fighting itself: the triumphs and disasters experienced, the blood and treasure spent, the hopes and passions raised. By this point, moreover, leaders and publics have usually gotten so caught up in beating the enemy that they find it hard to switch gears and think clearly about constructing a stable and desirable political settlement. So they rarely handle endgame challenges well and usually find themselves at the mercy of events rather than in control of them.
Americans have fared on average no better than others in these situations, and sometimes worse. The country's leaders have rarely if ever closed out military conflicts smoothly and effectively. Trapped in the fog of war, they have repeatedly stumbled across the finish line without a clear sense of what would come next or how to advance American interests amid all the chaos. They have always been surprised by what was happening and have had to improvise furiously as they picked their way through an unfamiliar and unfriendly landscape.
For all their drama and historical importance, however, endgames have received far less attention than other phases of war. A few books look at the ends of individual wars, and there is a small academic literature on what political scientists call war termination.7 But in general, endgames have been as neglected by scholars as they have been by policymakers. This book is intended to help fix that problem. It tells the stories of the ends of American wars over the last century, exploring how the country's political and military leaders have handled the Clausewitzian challenge of making force serve politics in each major conflict from World War I to Iraq.
From one angle, therefore, this is a book about American history. Drawing on a broad range of primary and secondary sources, as well as extensive original interviews with participants in the more recent conflicts, I have tried to re-create the endgame choices that presidents and their advisers confronted during each war. The goal is to put readers inside the room with U.S. officials as they make decisions that affect millions of lives and shape the modern world—seeing what they saw, hearing what they heard, feeling what they felt.
From another angle, though, this is a book about how to think about war, foreign policy, and international relations more generally. Marx once noted, "Men make their own history, but they do not make it as they please," and in this, at least, he was exactly right. The agency that American leaders have displayed—their freedom of action to choose one course over another—has been constrained by various kinds of structures, aspects of their environment that nudged them toward some courses rather than others. To explain endgame decisionmaking properly, therefore, you have to focus not on agency or structure alone, but on how they interact.
As for which kinds of constraints on policymakers matter most, this is a matter of intense debate inside the academy. Followers of "realist" theories argue that a country's foreign policy is concerned above all with the pursuit of its security and material interests. Look to power politics and the country's external environment, they say, and you can predict how its leaders will behave. Critics of realism, in contrast, argue that foreign policy is driven primarily by internal factors, such as domestic politics, political ideology, or bureaucratic maneuvering. And followers of psychological theories, finally, argue that foreign policy is shaped by the cognitive structures inside leaders' minds—such as the lessons they have drawn from the country's last war. Throughout the book, I weigh the relative merits of these different approaches in accounting for what happened in each war. My conclusion is that all of them help explain at least some things some of the time, but a surprisingly large amount of the picture can be sketched out by looking at power and lessons alone. (The technical term for the theoretical approach I follow here—one that begins with power factors but then layers on other variables to gain greater insight—is "neoclassical realism."8)
From a third angle, finally, this is a book about future policy and strategy. The specific mix of factors that led to chaos in Iraq after Baghdad fell is not going to come together again, but that doesn't mean similar mistakes won't be repeated. Time and again throughout history, political and military leaders have ignored the need for careful postwar planning or approached the task with visions of sugarplums dancing in their heads—and have been brought up short as a result. But there is simply no reason this process has to play itself out over and over, and if officials can manage to learn a few general lessons from past failures, perhaps it won't.
The American Experience
For two and a half years, Woodrow Wilson kept the United States aloof from formal participation in World War I, entering in April 1917 only in response to Germany's unrestricted submarine attacks. While neutral, Wilson had tried to end the conflict through negotiations and a "peace without victory." He eventually added a grand international organization to his postwar wish list, an institutional arrangement that would oversee a liberal global order and help the world transcend the evils of war and the balance of power. When the United States finally joined the war, these objectives did not change; rather, Wilson and the nation came to identify German militarism as the main obstacle to achieving them. But since the Allies never really bought into Wilson's idealistic vision, they too presented an obstacle that had to be overcome.
During 1918, American intervention made German defeat inevitable, setting up an intricate triangular dance during the war's endgame. Germany sought to get off as easily as possible. The Allies sought the opposite, trying to recoup their losses and more at German expense. And Wilson, in the middle, pushed for "regime change" in Germany while trying to play both sides off against each other and usher in a new and better world. This delicate balancing act would probably have collapsed even if a master manipulator such as Bismarck had been in charge—and the stiff-necked, high-minded Wilson was no Bismarck.
As a neutral, the United States had been unable to get the settlement it wanted because the two evenly matched European coalitions were determined to fight the war to a finish. By becoming a belligerent, Wilson gained a seat at the peace table, but only by helping one side win, paving the way for just the sort of illiberal peace he was desperate to avoid. With no reason to take American concerns seriously once the fighting was done, the Allies simply did what they wanted. And so the tragedy of Versailles—of hapless American attempts to forestall Allied impositions on a prostrate German Republic—is best understood as the working out of the tensions inherent in the war's final acts.
A generation later, the United States was back battling the Germans once again. The American effort in World War II was partly a fight against the Axis: the Roosevelt administration chose to seek total victory over its enemies and then achieved it. But the American effort was also a fight for a certain vision of international political and economic order. Even before the Japanese attacked, American leaders had hoped for a postwar settlement that would provide the United States and the world with lasting peace and prosperity.
The negative and positive fights occurred simultaneously, but American policymakers did not link them very well. In particular, they failed to recognize that even the total defeat of the Axis powers would be a necessary but not a sufficient condition for the emergence of their desired postwar order. Washington had to ally with Stalin to destroy Hitler, and the price of that alliance was giving the Soviets control of half of Europe after the war. The reality of this Faustian bargain took a while to sink in, however, and so the endgame of the positive fight continued long after VE Day—until the emergence of NATO and the postwar settlement in the late 1940s and early '50s.
The Cold War, in other words, is best understood not as some new struggle, but rather as a continuation of the positive fight America had already been pursuing for several years. Given the Soviet Union's different vision for the world, such a clash was probably inevitable; only one side's abdication of the field could have prevented it. But the disillusionment and hysteria accompanying its onset were not inevitable, and stemmed in part from the failure of the Western allies to acknowledge the gap between their political and military policies during the first half of the decade.
As late as the beginning of 1945, Washington expected fighting in the Pacific to continue long after it had stopped in Europe. But the endgame in the east began in earnest in the late spring of that year, and Japan's capitulation followed a few months after Germany's. In the Pacific, three new factors came into play. Unlike the Nazis, Japanese leaders actually tried to negotiate an end to the war short of total defeat. The divergence of long-term interests between the United States and the Soviet Union grew increasingly obvious. And the atomic bomb became available for use. During the summer of 1945, accordingly, U.S. officials actively debated which war-termination policies in the Pacific would best promote American interests. In dealing with Japan, as with Germany, they looked more to the lessons of the past and national ideology than to the calculations of Realpolitik. But beneath American decisions, underwriting policymakers' extraordinary ambition in both theaters, was the strongest relative power position the modern world had ever seen.
That strength remained largely intact several years later, and helps explain one of the most puzzling episodes in American military and diplomatic history—the final stages of the Korean War. Once North Korean troops surged across the 38th Parallel in late June 1950, the fortunes of war shifted back and forth until both sides agreed to begin armistice negotiations the following summer. Six months of haggling dispensed with routine military matters such as the armistice line and postwar security requirements, and by the end of 1951 a settlement seemed imminent. But then an extremely unusual issue rose to the top of the agenda—the question of whether Communist prisoners in UN hands would be forced to go home against their will at the end of the conflict or instead be allowed to refuse repatriation.
Still smarting from having to accept a stalemate and feeling guilty about having forced the return of Soviet POWs to Stalin's tender mercies back in 1945, Harry S. Truman and Secretary of State Dean Acheson decided that there was no reason they had to witness such heart-rending scenes this time around, so they made the principle of voluntary repatriation official U.S. policy. Yet thanks to poor planning and extraordinary bureaucratic incompetence on the ground in Korea, the repatriation stance kept the fighting going for close to another year and a half.
More than 124,000 UN casualties, including 9,000 American dead, came during the period when prisoner repatriation was the sole contested issue at the armistice talks, and the policy cost tens of billions of dollars. Yet rather than end the war by reverting to the routine historical practice of an all-for-all prisoner swap, two successive American administrations chose to continue fighting, and one of them even seriously mulled the possibility of escalation to nuclear war. The only way to make sense of this behavior is to look at the lessons policymakers had drawn from the previous war along with the mid-century hegemony that gave U.S. leaders extraordinary freedom of action to do pretty much whatever they wanted.
A decade further on, American officials believed that the fall of South Vietnam to communism would have terrible consequences at home and abroad, so they decided to do what was necessary to prevent such an outcome. During the Kennedy and Johnson administrations, the toughest question—whether to accept the true costs of victory or defeat—was kicked down the road. By gradually increasing the scale of the American effort, officials hoped, the United States could persuade the enemy to cease and desist. Once the patience of the American public wore thin, however, such an approach was no longer feasible. By 1968, the war was causing such domestic turmoil and costing so much blood and treasure that finding a way out became just as important as avoiding a loss.
Richard Nixon's first Vietnam strategy stemmed from the lessons policymakers had drawn from the endgame of the Korean War—that negotiations with Communists could be successful if you continued military operations and threatened radical escalation. When that strategy didn't work, the White House opted for what seemed to be a politically palatable middle path between staying the course and withdrawing quickly. It started withdrawing troops and reduced the U.S. role in ground combat while holding off a South Vietnamese collapse. In the end, the twists and turns of policy and negotiations yielded an agreement that permitted the United States to walk out, get its prisoners back, and not formally betray an ally. That same agreement, nevertheless—together with a changed domestic context in the United States—paved the way for the fall of South Vietnam two years later.
The lessons of Vietnam were very much on the minds of policymakers in the George H. W. Bush administration as they responded to Saddam Hussein's invasion of Kuwait in August 1990. Those lessons, officials believed, argued for a quick, decisive use of force to achieve carefully limited political objectives—something that the military campaign in the Persian Gulf accomplished by pushing Iraqi forces out of Kuwait within weeks.
But while undoing the invasion of Kuwait was the Bush administration's chief war aim, it was not the only one, since Washington also wanted to deal with the ongoing threat Iraq posed to the security of the Gulf region. And here the lessons of recent wars were problematic. Both a Korean-style solution (garrison Kuwait forever) and a Vietnam-like approach (get deeply entangled in nation-building in Iraq) seemed unattractive. So Washington convinced itself that it could have its cake and eat it, too—that Saddam was bound to be dispatched by one of his minions following a humiliating defeat, something that would make the problem go away without direct or ongoing American intervention in Iraqi politics or the Gulf more generally.
In the end, however, Saddam managed to retain control over his regime's security apparatus and use the reconstituted remnants of Iraq's armed forces to suppress popular uprisings against him by Shiites in the south and Kurds in the north. Days after celebrating their quick and relatively easy triumph, American officials found themselves watching their defeated enemy rise from the ashes and savage the very people Washington had called on to revolt. Just when Bush thought he was out, therefore, Iraq pulled him back in, as the administration wound up permitting Saddam to reestablish his control over the country while backing into the Korean-style containment it had tried so hard to avoid.
Over the course of the next decade, Washington continued to contain Iraq while hoping for Saddam to fall—less because officials thought this policy was a good one than because they thought the alternatives were even worse. Then came the terrorist attacks of September 11, 2001, which convinced the administration of a different George Bush that the Middle East status quo was unacceptable. Afghanistan was the first front in Washington's subsequent "war on terror," but within days of the fall of Kabul the president ordered planning to start for what would become a second front in Iraq.
Previous administrations had shied away from toppling Saddam because they did not want to take responsibility for what would happen in Iraq afterward. The second Bush team got around such concerns by convincing itself that American commitments in a postwar Iraq could be limited without ill effect. Conventional wisdom about the need for extensive nation-building was misguided, senior officials believed; a light footprint on the ground and a quick handoff to friendly locals was all that was required to get things on track and allow the United States to move on to the next security challenge.
When this theory was put to the test, however, it failed spectacularly: having toppled Saddam, the United States was left presiding over a country rapidly spinning out of control, with no plans or resources for what to do next. Liberation turned into occupation; local ambivalence into insurgency and then civil war. Four years later, a new and better-resourced American strategy managed to build on some positive local trends and stabilize the situation, so that by the end of the decade Iraq had pulled back from the brink and gained a chance at a better future. But even then nothing was guaranteed.
For all the attention devoted to the second Bush administration's distinctive ideas about national security policy, what made its approach to Iraq possible was its unfettered power. International primacy removed limits on American foreign policy imposed by the world at large, and the 9/11 attacks swept away limits imposed by the domestic political system. The administration's leading figures thus found themselves with extraordinary freedom of action and decided to use it to the fullest. Ironically, the mistakes they made had the effect of squandering the surplus capital they had inherited and leaving their successors constrained once again.
The Fire Next Time
In early 2009, the Obama administration assumed responsibility for the still-unfinished wars in Iraq and Afghanistan. Some of the new president's supporters were surprised and dismayed when the administration failed to dramatically change U.S. policy toward either conflict and even increased U.S. involvement in the latter. They should not have been: wars are difficult to close out even when they are started well, and mistakes at the beginning complicate the job exponentially, no matter who is in charge later on. The crucial test for Barack Obama and his successors, accordingly, will be not simply whether they can muddle through the struggles they were bequeathed, but whether they can avoid making major mistakes themselves in the wars that will inevitably follow down the road.
When future American leaders tackle the Clausewitzian challenge, they will still possess great power and will have the advantage of knowing what their predecessors did and how they fared. As this book shows, lessons from previous wars can serve as cognitive blinders, narrowing the way officials think about the situations they face, and power can be a trap, underwriting hubris and folly. But lessons can also guide and power can create opportunities. So if new generations of wartime policymakers fail to think clearly about what they are doing and stumble badly once again, they will have nobody to blame but themselves.
Copyright © 2010 by Gideon Rose. All rights reserved.