
There are also a variety of analytic techniques available to analysts. Two that gained prominence in the aftermath of 9/11 and Iraq are analysis of competing hypotheses (ACH) and argument mapping. ACH offers a simple way to ensure that multiple plausible explanations for the known intelligence are considered and to assess which hypotheses are more likely, by building a matrix that weighs each piece of evidence against each alternative scenario. Argument mapping allows the analyst to diagram a given issue or case, breaking it into contentions, premises, rebuttals, and so on to get an improved sense of the true substance of the case. Some of these techniques have strong advocates both inside and beyond the intelligence community. But it is best to think of them as tools, no different from those in a homeowner's toolbox. No tool is right for every job. The key is to be conversant with the tools and to know which one is right for which job.
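The matrix logic at the heart of ACH is simple enough to sketch in code. The sketch below is purely illustrative, not any agency's tool: the hypotheses, evidence items, and consistency ratings are invented for the example. What it captures is the method's central discipline, namely that hypotheses are ranked by how much evidence is inconsistent with them, rather than by how much appears to support them.

```python
# Minimal illustrative sketch of an ACH matrix (hypotheses, evidence,
# and ratings are invented for this example).
# Ratings: "C" = consistent, "I" = inconsistent, "N" = neutral/ambiguous.
matrix = {
    "H1: State is actively pursuing a weapons program": {
        "E1: procurement of dual-use equipment":    "C",
        "E2: credible reporting of a program halt": "I",
        "E3: increased activity at a known site":   "C",
    },
    "H2: Program halted; dual-use work is civilian": {
        "E1: procurement of dual-use equipment":    "C",
        "E2: credible reporting of a program halt": "C",
        "E3: increased activity at a known site":   "N",
    },
}

def rank_hypotheses(matrix):
    """Sort hypotheses by inconsistency count, fewest first.

    ACH treats the hypothesis with the LEAST inconsistent evidence,
    not the most supporting evidence, as the most viable.
    """
    scores = {h: sum(1 for rating in evidence.values() if rating == "I")
              for h, evidence in matrix.items()}
    return sorted(scores.items(), key=lambda item: item[1])

for hypothesis, inconsistencies in rank_hypotheses(matrix):
    print(f"{inconsistencies} inconsistent item(s): {hypothesis}")
```

In practice the ratings themselves, and how diagnostic each piece of evidence really is, remain analytic judgments; the matrix merely forces those judgments to be made explicitly and applied to every hypothesis.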
 
ESTIMATES. The United States creates and uses analytical products called estimates (or assessments in Britain and Australia). These serve two major purposes: to assess where a major issue or trend will go over the next several years and to present the considered view of the entire intelligence community, not just one agency. Their communitywide origin is signified by the fact that the director of national intelligence (DNI) signs completed estimates, just as directors of central intelligence (DCIs) did before.
Estimates are not predictions of the future but rather considered judgments as to the likely course of events regarding an issue of importance to the nation. Sometimes, more than one possible outcome may be included in a single estimate. The difference between estimate and prediction is crucial but often misunderstood, especially by policy makers. Prediction foretells the future—or attempts to. Estimates are more vague, assessing the relative likelihood of one or more outcomes. If an event or outcome were predictable—that is, capable of being foretold—one would not need intelligence agencies to estimate its likelihood. It is the uncertainty or unknowability that is key. As American baseball icon Yogi Berra said, “It is very difficult to make predictions, especially about the future.”
The bureaucratics of estimates are important to their outcome. In the United States, national intelligence officers (NIOs) are responsible for preparing estimates. At the outset of an estimate, they circulate the terms of reference (TOR) among colleagues and other agencies. The TOR may be the subject of prolonged discussion and negotiation, as various agencies may believe that the basic questions or lines of analysis are not being framed properly. The drafting is done not by the NIO personally but by someone from the NIO's office, or the NIO may recruit a drafter from one of the intelligence agencies. Once drafted, the estimate is coordinated with the other agencies; that is, they read it and give comments, not all of which are accepted, because some may be at variance with the drafter's views. Numerous meetings are held to resolve disputes, but the meetings may end with two or more views on some aspects that cannot be reconciled. The DNI chairs a final meeting of the National Intelligence Board, which is attended by senior officials from a number of agencies. After the DNI signs the estimate, signifying that he or she is satisfied with it, the DNI owns the estimate. DCIs were known to change the views expressed in estimates with which they disagreed. This usually displeased the drafters but was within the DCI's authority.
In addition to the bureaucratic game playing that may be involved in drafting estimates, issues of process influence outcomes. Not every issue is of interest to every intelligence agency. But each agency understands the necessity of taking part in the estimative process, not only for its intrinsic intelligence value but also as a means of keeping watch on the other agencies. Furthermore, not every intelligence agency brings the same level of expertise to an issue. For example, the State Department is much more concerned on a day-to-day basis about human rights violations than are other agencies, and its Bureau of Intelligence and Research (INR) reflects this in its work for its specific policy makers and in the expertise it chooses to develop on this issue. Similarly, the Department of Defense (DOD) is much more concerned about the infrastructure of a nation in which U.S. troops may be deployed. Rightly or wrongly, however, estimates are egalitarian experiences in that the views of all agencies are treated as having equal weight. This ignores the Orwellian view of intelligence that holds, on certain issues, that some agencies are "more equal" than others.
Some issues are the subjects of repeated estimates. For example, during the cold war, the intelligence community produced an annual estimate (in three volumes) on Soviet strategic forces, NIE 11-3/8. For issues of long-term importance, regular estimates are a useful way of keeping track of an issue, of watching it closely and looking for changes in perceived patterns. However, a regularly produced estimate can also be an intellectual trap, as it establishes benchmarks that analysts are reluctant to revise even when change may be under way. Having produced a long-standing record on certain key issues, the estimative community finds it difficult to admit that major changes are under way that, in effect, undercut its past analysis.
The issue may be subtler than simply preserving one's past record. Having come to a set of conclusions based on collection and analysis, what would it take for an analyst or a team of analysts working on an estimate to feel compelled to walk away from their past work and come to an opposite conclusion? One can imagine a scenario in which some new piece of intelligence completely reverses analysts' thinking, but such an occasion is extremely rare. Is it possible to start from scratch and ignore past work? If one tries to, what is the cutoff point for old collection that is no longer of use? Although the influence of past analysis can be a problem, it is less easily solved than is commonly thought. Intelligence analysis is an iterative process that lacks clear beginning and end points for either collection or analysis. The case of the 2007 Iran nuclear estimate is again instructive. According to intelligence officials, new intelligence came to light very late in the estimative process. The implications of the new intelligence were clear and stark. The first issue to be dealt with was its veracity: was it being fed to the United States by Iran? Although this question cannot be answered definitively, analysts who subjected the new intelligence to rigorous examination came away convinced that it was real. This meant that the conclusions of the estimate had to be revised, with all of the attendant reaction discussed earlier. Although those responsible for the Iran nuclear NIE stand by their analysis, they also admit that it is not a certainty and remains subject to change.
Some people question the utility of estimates. Both producers and consumers have had concerns about the length of estimates and their sometimes plodding style. Critics also have voiced concerns about timeliness, in that some estimates take more than a year to complete. One of the worst examples of poor timing came in 1979. An estimate on the future political stability of Iran was being written—including the observation that Iran was “not in a pre-revolutionary state”—even as the shah’s regime was unraveling daily. This incongruity led the House Intelligence Committee to observe that estimates “are not worth fighting over.”
After the start of the Iraq war (2003– ), the estimate process came under intense scrutiny and criticism. Among the concerns were the influence of past estimates, the groupthink issue, the use of language that suggested more certainty than the sources supported, inconsistencies between the summary paragraphs (called key judgments, or KJs) and the actual text, and the speed with which the estimate was written. This last criticism was somewhat ironic, in that the estimate was written at the request of the Senate, to meet its three-week deadline before voting on the resolution granting the president authority to use force against Iraq. Frequent leaks of NIEs on a variety of Iraq-related topics led some to charge that the intelligence community was at war with the Bush administration.
Since the Iraq WMD NIE, there has also been increased political pressure, largely coming from Congress, to have at least the KJs of estimates made public. The KJs for Prospects for Iraq's Stability: A Challenging Road Ahead (January 2007) and The Terrorist Threat to the Homeland (July 2007) were published. As could be expected, members of Congress who take issue with the Bush administration's policies have used these published documents as confirmation of their own political stances. Although this does not contravene any rules or procedures, it does have the effect of immediately injecting the NIEs into a partisan debate. On October 24, 2007, DNI McConnell announced his judgment that declassified KJs should not be published and that he did not accept recent publication as a precedent. However, the Iran nuclear NIE's KJs were published just seven weeks later, undercutting McConnell's stance. The publication of KJs is likely to continue (see chap. 10), and it may make analysts or NIE managers less willing to make strong calls, given their reluctance to be drawn into partisan debates. It also tends to misuse NIEs as factual refutations of administration policies, thus changing the very basis on which an estimate is crafted. Also, given the instant political analysis to which released NIEs are subject, this process has the odd effect of taking a strategic document and turning it into current intelligence.
It is also possible that too much emphasis is now put on estimates. Although they do represent the collective views of the intelligence agencies and are signed by the DNI and given to the president, estimates are not the only form of strategic intelligence produced within the analytic community. However, estimates—or the lack of them—have come to be seen as the only indicator of whether the intelligence community is treating an issue strategically. This certainly was the critique of the 9/11 Commission, whose report castigated the community for not producing an NIE on terrorism for several years before 9/11. Strategic intelligence analysis can take many forms and can be written either by several agencies or by one. NIEs are not the only available format, and their existence or absence does not indicate the seriousness with which the intelligence community views an issue.
 
COMPETITIVE ANALYSIS. The U.S. intelligence community believes in the concept of competitive analysis: having different agencies with different points of view work on the same issue. The United States has several intelligence agencies, including three major all-source analytical agencies (CIA, Defense Intelligence Agency, and INR), each of which has different analytical strengths and, likely, different points of view on a given issue. The belief is that by having each of them, and on some issues other agencies as well, analyze an issue, the analysis will be stronger and more likely to give policy makers accurate intelligence.
Beyond the day-to-day competition that takes place among the intelligence publications of each agency, the intelligence community fosters competition in other ways. Intelligence agencies occasionally form red teams, which take on the role of the analysts of another nation or group as a means of gaining insights into their thinking. A now-famous competitive exercise was the 1976 formation of Teams A and B to review intelligence on Soviet strategic forces and doctrine. Team A consisted of intelligence community analysts; Team B consisted of outside experts with a decidedly hawkish viewpoint. The teams disagreed little on the strategic systems the Soviets had built; the key issue was Soviet nuclear doctrine and strategic intentions. Predictably, Team B concluded that the intelligence supported a more threatening view of Soviet intentions. However, the lack of balance on Team B largely vitiated the exercise, which could have been useful not only for gaining insight into Soviet intentions but also for validating the utility of competitive intelligence exercises.
Dissent channels—bureaucratic mechanisms by which analysts can challenge the views of their superiors without risk to their careers—are helpful but not widely used. Such channels have long existed for Foreign Service officers in the State Department. Although less effective than competitive analysis for articulating alternative viewpoints, they offer a means by which alternative views can survive a bureaucratic process that tends to emphasize consensus.
A broader issue is the extent to which competitive intelligence can or should be institutionalized. To some degree, in the U.S. system it already is. But the competition among the three all-source agencies is not often pointed. They frequently work on the same issue, but with different perspectives that are well understood, thus muting some of the differences that may be seen.
Competitive analysis requires that enough analysts with similar areas of expertise be working in more than one agency. This was certainly true during the zenith of competitive analysis, in the 1980s. But the capability began to dwindle as the intelligence community faced severe budget cuts and personnel losses in the 1990s, after the end of the cold war. As analytic staffs got smaller, agencies began to concentrate more on the issues of greatest importance to their policy customers. Thus, the ability to conduct competitive analysis declined. Rebuilding the capability requires two things: more analysts and the time for them to become expert in one or more areas.
Although the intelligence community believes in competitive analysis, not all policy makers are receptive to the idea. Some see no reason that agencies cannot agree on issues, perhaps assuming that each issue has a single answer that should be knowable. One main reason that President Truman created the Central Intelligence Group (CIG) and its successor, the CIA, was his annoyance at receiving intelligence reports that did not agree. He wanted an agency to coordinate the reports so that he could work his way through the contradictory views. Truman was smart enough to realize that agencies might not agree, but he was not comfortable receiving disparate reports without some coordination that attempted to make sense of the areas of agreement and disagreement. Other policy makers lack Truman's subtlety and cannot abide having agencies disagree, thus undermining the concept of competitive analysis.
