
One way to help convey uncertainty is to identify in the analysis the issues about which there is uncertainty or the intelligence that is essentially missing but that would, in the analyst’s view, either resolve the unknowns or cause the analyst to reexamine currently held views. This raises another issue: known unknowns (that is, the things one knows that one does not know) versus the unknown unknowns (that is, the things one did not know that one did not know). By definition, the second group cannot be bounded or reduced as it is unknown. But one’s analysis must constantly be examined to identify known unknowns and to give attention to resolving these issues, if possible.
The use of language is important in all analysis. Analysts tend to use a stock set of verbs to convey their views: “believe,” “assess,” “judge.” For some analysts the words have distinct and separate meanings that convey the amount of intelligence supporting a particular view and their certainty about this view. However, the intelligence community did not reach a consensus as to what each verb meant until 2005. The NIC now publishes an explanatory page with each NIE that explains the use of estimative language. The text box, “What We Mean When We Say: An Explanation of Estimative Language,” in the July 2007 NIE, The Terrorist Threat to the Homeland, is a useful example. Terms like “we judge” or “we assess” are used interchangeably. (This seems close to the British experience on this issue, according to the 2004 British Butler report on intelligence about Iraqi WMD. The Butler report states that British policy makers assumed the different words had different meanings, but British analysts said they just wrote naturally, using the terms interchangeably.) Analytical judgments can be based on collected intelligence or previous judgments that serve as “building blocks.” The use of “precise numerical ratings” is rejected as these “would imply more rigor than we intend.” Instead, there is a range of likelihood outcomes:
• Remote
• Unlikely
• Even Chance
• Probably, Likely
• Almost certainly
 
Note that there is no certainty at either end of this range. An event that is known to have no chance of occurring will not be analyzed. Nor will an event that is certain to occur be analyzed in terms of likelihood, although its ramifications can be discussed. Phrases like “we cannot rule out” or “we cannot discount” reflect an event that is seen as being unlikely or even remote but “whose consequences are such that it warrants mentioning.” These phrases are classic estimative language and can be interpreted by some readers, again, as a pusillanimous call. Finally, “maybe” and “suggest” are defined as indicating events whose likelihood cannot be assessed because of a paucity of information.
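One way to see the structure of this ladder is as a purely ordinal scale: the terms have a fixed order but, by design, no attached numbers. The sketch below (in Python, with hypothetical names; this is an illustration of the structure described above, not an intelligence-community tool) encodes that property.

```python
# A minimal sketch of the NIE likelihood ladder as an ordinal scale.
# Hypothetical names; the ladder deliberately carries no numeric
# probabilities, since the NIE rejects "precise numerical ratings."
from enum import IntEnum

class Likelihood(IntEnum):
    REMOTE = 1
    UNLIKELY = 2
    EVEN_CHANCE = 3
    PROBABLY_LIKELY = 4
    ALMOST_CERTAINLY = 5

# The scale is open at both ends: impossible and certain events are not
# assessed for likelihood, so neither has a rung on the ladder.
assert Likelihood.UNLIKELY < Likelihood.PROBABLY_LIKELY  # order is all there is
```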
Beyond these uses of language there is the issue of the confidence that the analyst has in his or her judgments, called confidence levels. In NIEs, the confidence levels are “based on the scope and quality of information supporting our judgments.”
• High confidence: judgments based on high-quality information, or the nature of the issue makes a solid judgment possible
• Moderate confidence: available information is susceptible to multiple interpretations; or there may be alternative views; or the information is “credible and plausible” but not sufficiently corroborated
• Low confidence: information is scant, questionable, or fragmented, leading to difficulties in making “solid analytic inferences”; or is based on sources that may be problematic
 
Publishing a text box of this sort is a major step forward in trying to get policy readers to understand the basis on which judgments are made. This depends on policy makers actually reading it, and even then it will not preclude future misunderstandings about the use of estimative language. Those who do read it will get a much better idea of the layers of meaning inherent in an estimative judgment. There are few, if any, straightforward calls.
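The key structural point is that likelihood and confidence are independent dimensions of a single judgment: an analyst can judge an event likely while holding only moderate confidence in the information behind that judgment. A brief sketch (Python again; the names and the sample judgment are hypothetical) makes the separation explicit.

```python
# Illustrative sketch: a judgment pairs a likelihood term with a separate
# confidence level. Names and the example judgment are hypothetical.
from dataclasses import dataclass
from enum import Enum

class Confidence(Enum):
    HIGH = "high-quality information or an issue permitting a solid judgment"
    MODERATE = "credible and plausible but not sufficiently corroborated"
    LOW = "scant, questionable, or fragmented information"

@dataclass
class Judgment:
    statement: str          # the analytic claim itself
    likelihood: str         # a rung from the estimative ladder
    confidence: Confidence  # quality of the supporting information

# The two dimensions vary independently: a "likely" call can rest on
# thinly corroborated reporting.
j = Judgment("Group X will attempt an attack within the year",
             likelihood="probably/likely",
             confidence=Confidence.MODERATE)
```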
 
INDICATIONS AND WARNINGS. Indications and warnings, or I&W, as it is known among intelligence professionals, is one of the most important roles of intelligence—giving policy makers advance warning of significant, usually military, events. The emphasis placed on I&W in the United States reflects the cold war legacy of a long-term military rivalry and the older roots of the U.S. intelligence community in Pearl Harbor, the classic I&W failure.
I&W is primarily a military intelligence function, with an emphasis on surprise attack. It relies, to a large extent, on the fact that all militaries operate according to certain regular schedules, forms, and behaviors, which provide a baseline against which to measure activity that may raise I&W concerns. In other words, analysts are looking for anything that is out of the ordinary, any new or unexpected activity that may presage an attack: calling up reserves, putting forces on a higher level of alert, dropping or increasing communications activity, imposing sudden communications silence, or sending more naval units to sea. But none of these can be viewed in isolation; they have to be seen within the wider context of overall behavior.
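In effect, this is baseline-and-anomaly reasoning: establish what normal activity looks like, then flag significant departures in either direction. The toy sketch below (Python; a deliberately crude stand-in with hypothetical numbers, not how any actual warning system is built) illustrates the logic with a simple z-score test.

```python
# Toy illustration of the I&W baseline idea: flag activity that departs
# sharply from the observed norm, in either direction. The data and the
# threshold are hypothetical.
from statistics import mean, stdev

def is_anomalous(history: list[float], today: float, threshold: float = 3.0) -> bool:
    """Flag today's activity if it lies far outside the historical baseline."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu  # any deviation from a perfectly flat baseline
    # abs(): sudden silence can be as telling as a sudden surge
    return abs(today - mu) / sigma > threshold

# Example: daily count of naval units observed at sea in recent weeks.
baseline = [12, 11, 13, 12, 12, 11, 13, 12]
print(is_anomalous(baseline, 25))  # True: a surge worth closer attention
print(is_anomalous(baseline, 12))  # False: ordinary activity
```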
During the cold war, for example, U.S. and North Atlantic Treaty Organization (NATO) analysts worried about how much warning they would receive of a Warsaw Pact attack against Western Europe. Some analysts believed that they could provide policy makers, minimally, several days’ warning, as stocks were positioned, additional units were brought forward, and so on. Others believed that the Warsaw Pact had sufficient forces and supplies in place to attack from a standing start. Fortunately, the issue was never put to the test.
For analysts, I&W can be a trap rather than an opportunity. Their main fear is failing to pick up on indicators and give adequate warning, which in part reflects the harsh view of intelligence when it misses an important event. In reaction, analysts may lower the threshold and issue warnings about everything, in effect crying wolf. Although this may reduce the analyst’s exposure to criticism, it has a lulling effect on the policy maker and can cheapen the function of I&W.
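The underlying arithmetic of crying wolf can be shown with Bayes’ rule and some hypothetical numbers: when genuine attacks are rare, lowering the warning threshold means that nearly every warning issued is a false alarm, which is precisely what teaches policy makers to discount them.

```python
# Hypothetical numbers illustrating the "crying wolf" trade-off via
# Bayes' rule: P(real attack | warning issued).
def warning_precision(base_rate: float, hit_rate: float, false_alarm_rate: float) -> float:
    """Fraction of issued warnings that correspond to a real attack."""
    p_warning = hit_rate * base_rate + false_alarm_rate * (1 - base_rate)
    return hit_rate * base_rate / p_warning

# Suppose 1 in 1,000 reporting periods involves a real attack.
# An analyst who warns on almost anything:
print(warning_precision(0.001, 0.99, 0.30))  # ~0.003: 3 warnings in 1,000 are real
# A more selective analyst misses more but is heeded more:
print(warning_precision(0.001, 0.80, 0.01))  # ~0.074: about 1 in 14 warnings is real
```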
Terrorism presents an entirely new and more difficult I&W problem. Terrorists do not operate from elaborate infrastructures, and they do not need to mobilize large numbers of people for their operations. One attraction of terrorism as a political tool is the ability to have a large effect with minimal forces. Thus, an entirely new I&W concept is needed to fight terrorism, one more likely to catch the much smaller signs of impending activity. In some respects, the I&W function for terrorism becomes very close to police work and keeping watch over neighborhoods or precincts, looking for things that “just don’t look right.” Terrorism also raises the duty to warn issue. If credible evidence indicates a potential attack, does the government have a responsibility to warn its citizens? A warning may tip off the terrorists to the fact that their plot has been penetrated, thus putting sources and methods at risk. Also, citizens may become inured to—if not downright cynical about—recurring changes in the level of warning, especially if the attacks do not occur. Some may come to believe that the government, and especially the intelligence agencies, is trying to cover itself in case an attack does occur. This phenomenon has been seen in the United States since 2001 as alerts have been issued and then withdrawn after the threat subsided or failed to materialize.
 
OPPORTUNITY ANALYSIS. I&W is not only one of the most important analytic functions but also one that comes naturally to intelligence analysts. A primary reason to have intelligence agencies is to avoid strategic surprise (see chap. 1). I&W is a means to that end. But I&W can also become something of a trap, a theme to which analysts revert too often lest something be missed.
Policy makers understand that I&W leaves them in the position of reacting to intelligence. But policy makers also want to be actors, to achieve goals and not just prevent bad things from happening. As more than one senior policy maker has said, “I want intelligence that helps me advance my agenda, that makes me the actor, not the reactor.” This is often referred to as opportunity analysis.
Opportunity analysis is a sophisticated but difficult type of analysis to produce. First, it requires that the intelligence managers or analysts have a good sense of the goals that the policy maker seeks to achieve. Successful opportunity analysis may require some degree of specific and detailed knowledge of these goals. For example, knowing that a goal is arms control may not suggest many useful avenues of opportunity analysis beyond broad generalities. Knowing that the goals include certain types of weapons or restrictions would be more helpful. Thus, again, emphasis is placed on the importance of the intelligence analysts knowing the intended directions of policy. Second, opportunity analysis often seems more difficult or riskier, as it requires positing how foreign leaders or nations will react to policy initiatives. Positing a foreign action and then describing either the consequences or possible reactions often seems easier than the reverse process. After all, an analyst presumably feels more comfortable understanding how his or her own nation and its policy makers are likely to react, even if the analyst is an expert in the politics of another country. Finally, opportunity analysis brings the intelligence community close to the line separating intelligence from policy. Writing good opportunity analysis without appearing to be prescriptive can be difficult, even if prescription is not the intended message or goal.
In general, opportunity analysis is not engaged in often and is easily misunderstood when it is produced. However, in his October 25, 2005, National Intelligence Strategy, then-DNI John Negroponte (2005-2007) made opportunity analysis one of his strategic mission objectives.
ALTERNATIVE ANALYSIS. One critique of the intelligence community’s performance on Iraqi WMD was the alleged failure to examine alternative analytical lines. Even if true, it remains difficult in the case of Iraq to come up with analytically and intellectually sensible arguments that, in 2003, Saddam Hussein had come clean, had no WMD, and was telling the truth when he made this case. Still, looking beyond the Iraq WMD case, the issue of alternative analysis is an important one. The 2004 intelligence law requires that the DNI create a process to ensure the effective use of alternative analysis.
The main driver is the concern that analysts can fasten on to one line of analysis, especially for issues that are examined and reexamined over the years, and then will not be open to other possible hypotheses or alternatives. Should this happen, the analyst will not be alert for changes, discontinuities, or surprises, even if they are not threatening. One way to attempt to avoid this potential intellectual trap is to create alternative analyses or red cells.
For several reasons, the intelligence community has not always embraced the concept. First, concerns arise that the process can be political in nature or can lead to politicization. The Team A and Team B example from the cold war (see chap. 11) remains a warning in this regard. The alternative analysis group (Team B) was made up of individuals who were more hawkish about dealing with the Soviet Union. Thus, it was no surprise that they found the NIEs on Soviet strategic goals wanting. The existence of an alternative analysis, especially on controversial issues, can lead policy makers to shop for the intelligence they want or cherry-pick analysis, which also results in politicization. Second, only so many analysts are available to deal with any issue or have the requisite expertise on any issue. Therefore, a decision has to be made as to which analysts are assigned to the mainstream and which to the alternative group. Their levels of expertise should be roughly equal. For analysts who have been on the losing side of issues in the area in the past, the chance to participate in alternative analysis may be an irresistible opportunity to reopen old arguments or settle scores. Finally, one of the prerequisites for alternative analysis is that it provide a fresh look at an issue. Therefore, as soon as this type of capacity is institutionalized and made a regular part of the process, it loses the originality and vitality that were sought in the first place.
Alternative analysis consists of more than simply asking a contrafactual question. As in the case of Iraq WMD, the contrafactual question (make the case that Saddam is telling the truth and has no WMDs) would likely not have yielded a better analytical result. The analyst or the analytical manager has to be alert to the nature of the issue under consideration and the type of contrafactual question that will actually probe the generally held premise. It may be necessary to try several such questions before coming up with one that truly challenges the prevailing wisdom.
The 2004 intelligence law puts great emphasis on alternative analysis, competitive analysis, and red teaming, which are all variants on the same theme—an effort to avoid groupthink. The DNI is responsible for institutionalizing processes for these other types of analysis.
The intelligence community has long sought analytic tools, meaning both programs and techniques that will foster better analysis. The computer industry has advanced the intelligence community’s ability to collect, manipulate, and correlate data, all of which eases part of the analyst’s burden. But there have also been problems integrating the tools into the analytic process, in large part because of the intellectual disconnect between those responsible for designing the tools and the analysts. Programmers and analysts do not think along similar lines, and too many programs have been developed without regard to how analysts think or work. Also, too few of the tools have been tested by working analysts. The net result has often been a new program that sits unused on an analyst’s computer desktop because it is either overly complex or not complementary to the analyst’s working methods.
