Chapter 9
Interpretation and Inferences
Bottom Line Up Front
The interpretation and inference element is where work on the other elements covered previously in this book comes together to produce the analytic findings. This is the penultimate element to be addressed in a critical-thinking analysis—just before the final implications and consequences element. Interpretation techniques are usually more intuitive and inductive in nature. Inference, on the other hand, tends to be more deductive (scientific). A number of qualitative, comparative, and quantitative techniques may be employed to address the interpretation and inference element. Qualitative and comparative techniques usually support descriptive inference. Quantitative techniques usually support statistical inference. This chapter presents a number of both inductive and deductive qualitative techniques commonly used in security analysis.
Figuring Things Out
Interpretation and inference are what the mind does to figure things out. Interpretation is the process of taking facts and placing them in the context of the analyst’s own experiences, perspectives, and points of view to describe or possibly explain a situation.1 Inference refers to using facts and reasoning to arrive at findings that describe, explain, or predict a situation.2 Both interpretation and inference can lead to what is known as contentions (i.e., theses, key judgments, findings, conclusions, or recommendations). Interpretation is often more intuitive and inductive, while inference relies on deductive analytic techniques to create more robust findings. The interpretation and inference element is where work on the other critical-thinking elements comes together and interrelates.3 While the other elements have identified and evaluated the facts of a study, plus applied initial reasoning to those facts, the interpretation and inference element is where things are finally figured out. For an intelligence analysis, contentions normally address a threat or opportunity decision makers need to understand. For security policy analysis, recommendations often are produced to solve problems, create or revise programs, or otherwise support decision making.
Without realizing it, people employ interpretation and inference every day under the dominance of intuitive System 1 thinking (Chapter 2 and Appendix II). People’s thought processes are influenced heavily by subconscious beliefs and assumptions (Chapter 6 and Appendix II). For example, when most people see a police car parked beside the road in front of them, their immediate interpretation is that there is a speed-trap ahead. One of the tendencies of poor thinking (see Figure 2.2) is to rely on System 1 thinking to jump to a conclusion without a slower and more-thoughtful analysis. Figure 9.1 delineates the dynamics of the System 1 thinking process.4 Fortunately, the Security Analysis Critical-Thinking Framework (see Figure 2.5) can help analysts overcome System 1 thinking and allow a more systematic analytic process that engages System 2 thinking.
The System 1 thinking process is most related to interpretation or intuitive analysis. This chapter focuses on systematic techniques for interpretation and inference to stimulate security analysts’ System 2 thinking. In the practitioner and academic communities, inferential techniques usually are considered within one of three main categories of analysis: qualitative, comparative, or quantitative. As described in Chapter 3, these categories are delineated by both the number of variables and cases they address and the analytic techniques used to determine the significance of their findings. This leads back to the concepts of descriptive inference and statistical inference discussed in Chapter 3. Descriptive inference applies to qualitative and comparative analytic methods by combining empiricism (data, facts, evidence) and rationalism (theorizing) that lead to contentions.5 It is the most used method for inference in political-military security analysis, because it allows investigation of valid research questions that result in identification of causal relationships and patterns in the data. Descriptive inference also provides a means to advance social theory, but the results of the analysis may only be generalized (inferred) to the case(s) under study and not a larger population of cases. Descriptive inference seldom generates statistical significance. Statistical inference, on the other hand, applies to quantitative studies, which use a combination of statistical and mathematical analysis of quantitative data (empiricism) with theories and models (rationalism). Statistical inference normally provides statistical significance, a measure of the strength of relationships between variables (i.e., the strength of the analyzed patterns). If proper sampling techniques are used (Chapter 3), the results of statistical analyses may be generalized to a larger population. More on qualitative, comparative, and quantitative analysis is provided below.
Qualitative analysis investigates many variables within 10 or fewer cases; it allows more in-depth scrutiny of each variable to improve understanding of how the variable affects the behavior under study. Variables in causal-qualitative analysis often are measured in nominal or ordinal categories, but also may include more powerful interval and ratio measurements. Both causal and process-qualitative analyses often include tables, graphs, and charts of descriptive statistics (measures of central tendency, variances, ranges, distributions, etc.), which may be gleaned from available information. Qualitative analysis seeks to combine the information available (empirical data) with reasoning (theories, models) to reach a contention. Qualitative analysis supports descriptive inference as the techniques usually cannot provide a statistical significance for the relationships among variables. In that case, the best the analyst can do is objectively estimate the likelihood and confidence levels of the analytic results to allow customers a basic understanding of the quality of the analysis. Chapter 11 provides discussion on developing likelihood and confidence levels. Most political-military security analysis is qualitative, and this chapter includes analytic techniques frequently used in qualitative security analysis, such as logical argumentation, pattern matching, pros-cons-fixes, weighted ranking, and matrix analyses.
Comparative analysis generally investigates a moderate number (11-50) of cases and supports descriptive inference as it explores diversity over the selected cases. Comparative techniques are frequently employed in regional security analysis. Comparative analysis often is carried out using a combination of qualitative and quantitative methods. This type of analysis makes wide use of descriptive statistics in comparing cases. Although not widely known in the security community, comparative analysis also has its own techniques. The creation of these techniques, commonly known as Qualitative Comparative Analysis (QCA), was spearheaded by U.S. sociologist Charles C. Ragin with a focus on comparing similarities and differences in causal configurations (sets of variables) across a number of cases to find recurring and meaningful patterns that lead to outcomes.6 QCA analyses look for necessity and sufficiency in causal configurations leading to the same outcomes. Ragin’s comparative-analysis techniques do not calculate statistical significance, but they do yield benchmark proportions, the percentage of analyzed cases that support the analytic claims. Benchmark proportions usually indicate the causal configurations are, more often than not, sufficient (.50 to .64); usually sufficient (.65 to .79); or almost always sufficient (.80 to 1.0) to support the outcome. A working description of comparative techniques is beyond the scope of this book; however, security analysts with regional portfolios, or who otherwise may become involved in comparative analyses, should learn more about Ragin’s QCA techniques.
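The benchmark-proportion logic lends itself to a quick computation. Below is a minimal sketch (not Ragin's QCA software; the case data are invented) that computes the share of cases in which a hypothetical causal configuration is followed by the outcome and labels the result against the benchmark bands above.

```python
# Hypothetical cases: (configuration present?, outcome occurred?)
cases = [
    (True, True), (True, True), (True, False),
    (True, True), (False, True), (True, True),
]

# Benchmark proportion: share of configuration-present cases with the outcome.
outcomes_with_config = [outcome for config, outcome in cases if config]
proportion = sum(outcomes_with_config) / len(outcomes_with_config)

# Thresholds follow the benchmark bands quoted in the text.
if proportion >= 0.80:
    label = "almost always sufficient"
elif proportion >= 0.65:
    label = "usually sufficient"
elif proportion >= 0.50:
    label = "more often than not sufficient"
else:
    label = "not sufficient by these benchmarks"

print(f"Benchmark proportion: {proportion:.2f} ({label})")
```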
Quantitative analyses usually address a limited number of variables within 50 or more cases. The higher case requirement allows the statistical techniques employed to calculate the significance (strengths) of variable relationships. If the relationships among variables in the data tested are strong, however, quantitative techniques may be used with fewer than 50 cases. Figure 3.8 summarizes the general goals, aims, and objectives of quantitative analysis, which is at the heart of statistical inference. Descriptive statistics are used in quantitative studies to summarize large amounts of data and also to help calculate inputs to inferential statistical techniques. There are a number of statistical techniques available for inferential statistics; their use is determined by the research question, number of cases, and categories of variable measurements. Analyses with categorical (nominal, ordinal) measured variables often use cross-tabulation tables and Chi-Square distributions to calculate statistical inference findings and significance, respectively.7 Analyses with continuous (interval, ratio) measured variables use a variety of techniques such as means tables (analysis of variance—ANOVA) and Ordinary Least Squares (OLS) regression to calculate statistical findings and significance.8 Continuous-variable techniques may be calculated in many computer spreadsheet programs such as MS Excel (with Data Analysis add-on) or with more powerful statistical software. For studies with continuous measured variables, and where the outcomes (dependent variable) are probabilities, odds, or likelihoods, the analyst could employ a logistic-regression technique based on a maximum-likelihood estimation model.9 More powerful statistical software programs are needed for calculating logistic-regression statistics. Even more powerful software programs are available for analyzing complex and/or temporal causal models; these include structural equation modeling and advanced time-series analysis. Mathematical models also may be used to analyze data using Chaos Theory or Catastrophe Theory, where the removal of one or more factors (variables) may cause the collapse of an entire model or system. Statistical analysis also includes the techniques of Operations Research, the use of statistics in public-sector policy and administration and private-sector business and management. In statistical analysis, security analysts normally strive to achieve calculated significance levels of .05 or better in evaluating the strength of relationships between variables (see Figure 3.13). The .05-significance level means the analyst is willing to accept being wrong in 1 case out of every 20. (Note: Medical research may strive for significance levels of .01 or better.) In-depth coverage of descriptive and inferential statistical techniques is beyond the scope of this book. Security analysts, no matter their portfolio, should have instruction in descriptive and inferential statistical techniques through at least OLS regression and logistic regression.
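As a small illustration of statistical inference at the .05 level, the sketch below fits an OLS line to fabricated data with SciPy's linregress, which reports the slope and its two-sided p-value; the variable names and values are invented for demonstration only.

```python
from scipy import stats

# Fabricated data: security-force strength vs. yearly incident counts.
force_levels = [10, 20, 30, 40, 50, 60, 70, 80]     # independent variable
incident_counts = [52, 48, 45, 40, 37, 33, 30, 24]  # dependent variable

result = stats.linregress(force_levels, incident_counts)
print(f"slope = {result.slope:.3f}, p-value = {result.pvalue:.5f}")

# Compare against the .05 threshold discussed above.
if result.pvalue < 0.05:
    print("The relationship is statistically significant at the .05 level.")
else:
    print("The relationship does not reach the .05 significance level.")
```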
Interpretive and inferential analytic techniques—qualitative, comparative, quantitative—are employed after information is collected (Chapter 5) and analyses of points of view and assumptions (Chapter 6), conceptualization (Chapter 7), and alternatives (Chapter 8) have been completed. Figure 9.2 is a guide to help determine techniques most appropriate for an individual analytic project. As discussed in Chapter 1, prior to the September 11, 2001, terrorist attacks on the United States, former CIA analyst Richards Heuer assessed that the U.S. Intelligence Community (IC) employed unaided judgment as its primary analytic technique.10 This type of judgment corresponds to the inductive-qualitative analytic techniques noted in Figure 9.2. Since 9/11, more deductive techniques have been mandated for analyst training and use in the IC. Experienced intelligence and security policy analysts should be able to work across all cells in Figure 9.2.
Figure 9.2 | Summary of Interpretation and Inference Techniques

| Conditions of Analysis | Seldom uses theory or models; contrasts specific instances to find patterns within cases. (Inductive) | Uses theory or models; finds patterns and systematic differences among or within cases. (Deductive) | Uses theory or models; assumes all cases follow the same rules; finds patterns among complex and temporal factors. (Deductive) |
|---|---|---|---|
| Qualitative Analysis Approaches (1-10 cases, with a large number of variables) | Description or explanation building, logical argumentation, pros-cons-fixes, weighted rankings, historical method, analogies, descriptive statistics (may build grounded theory). (Unaided Judgment) | Explanation or prediction building, pattern matching, matrix analysis, other qualitative deductive techniques, descriptive statistics. | Explanation or prediction building, pattern matching, matrix analysis, other qualitative deductive techniques, descriptive statistics, basic time-series analysis. |
| Comparative Analysis Approaches (11-50 cases, with a moderate number of variables) | Description or explanation building, truth table, other QCA techniques, descriptive statistics. | Explanation or prediction building, truth table, fuzzy sets, other QCA techniques, descriptive statistics. | Explanation or prediction building, truth table, fuzzy sets, other QCA techniques, descriptive statistics, basic time-series analysis. |
| Quantitative Analysis Approaches (50-plus cases, with a limited number of variables) | Description building, descriptive statistics, factor analysis. | Explanation or prediction building, descriptive statistics, cross tabulations, means tables (ANOVA), OLS regression, logistic regression, operations research, other quantitative techniques. | Explanation or prediction building, descriptive statistics, OLS regression, logistic regression, operations research, advanced time-series analysis, structural equation modeling, other quantitative techniques. |
Qualitative-Analysis Techniques
In many ways, the process of qualitative analysis is more difficult than comparative or quantitative analysis. With comparative and quantitative analysis, there are systematic procedures used to analyze information. Qualitative techniques are often “softer,” with less-precise procedures. Often faced with considerable amounts of narrative or textual information, qualitative analysts spend considerable time and effort absorbing that material and making sense of what it means for their analyses.
Causal qualitative analyses are used primarily to develop or induce theory, to advance theory, to give voice to heretofore unknown or underrepresented segments of society, and to interpret the significance of certain behaviors. Process qualitative studies are used to investigate the details of causal relationships, process steps, and alternatives. Both causal and process qualitative studies support problem solving and decision making. Qualitative analyses often will identify many non-systematic factors, which may include significant “noise” or irrelevant information surrounding the analytic topic. While analysts cannot completely ignore these non-systematic factors, they must be careful not to let these factors cloud the overall analysis. Qualitative analysis has a measure of uncertainty (bias), which is often large and not quantified. Qualitative analyses follow the general process listed below:
Step 1: Collect the information. Chapter 5 presents security analysis information-collection techniques.
Step 2: Code and organize the information. There are many ways to code information used in qualitative analyses. The most sophisticated method is to use existing coding systems such as numerical codes used by anthropologists to code data in cultural studies. Anthropologists devised this comprehensive coding system to allow the cross-comparison of cultural studies. Qualitative analysis computer software packages such as QSR NUD*IST and Ethnograph are available to assist in detailed coding as well as analysis of large amounts of qualitative information.
For most studies, however, simple letter, numeric, or color coding is appropriate, especially if the study includes a limited number of variables and cases. For example, every time a certain variable emerges in the information (based on its operational definition), a letter code (A, AB, CDW, etc.) or numeric code (1, 23, 245, etc.) could be annotated in the material’s margin, computer file, or noted for a certain counter code (for video and audio recordings). Alternatively, every time a certain variable emerges in textual material, highlight it with a different-colored marker. Coding information often takes considerable time.
For most analytic projects, simply noting information on each variable on individual note-cards is probably the easiest method of coding. The analyst may then arrange the cards on a bulletin board or table and look for patterns. It is recommended the cards be arranged in a manner that supports how the information will be used in other critical-thinking elements.
In designing a method to code the information, the analyst must consider how the coded information will be retrieved. The ability to conduct computer file keyword searches and computer software packages that support information retrieval often are useful. A manual system to retrieve the coded information may be appropriate when the material is not large in size. Sometimes the easiest method to retrieve coded information manually is to organize it by individual case study, unit of analysis, individual variables, or a combination of all three. For example, if the analyst is investigating attacks by four different terrorist groups—entailing four different case studies—the analyst would be testing the same hypotheses from the same causal model or theory in each case study. In this case, the easiest method to organize the data might be to establish four sections in a notebook—one for each of the four terrorist group case studies—and use a separate sheet of paper in each section to record information for each dependent and independent variable of interest from the theory or model being used. The analyst could then go through the coded information and list every instance where a particular variable of interest emerged in the information. The lists then could be paraphrased and annotated with the information source and location in the larger set of information. Once the information is organized in this manner, the analyst can conduct the analysis.
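A minimal sketch of this coding-and-retrieval scheme expressed as a data structure; the cases, variables, excerpts, and sources below are hypothetical stand-ins for coded material.

```python
from collections import defaultdict

# coded_info[case][variable] -> list of (paraphrase, source) entries,
# mirroring one notebook section per case with one sheet per variable.
coded_info = defaultdict(lambda: defaultdict(list))

coded_info["Group A"]["attack_method"].append(
    ("Used vehicle bombs in urban areas", "Report 12, p. 4"))
coded_info["Group A"]["funding_source"].append(
    ("Relied on external donations", "Interview 3"))
coded_info["Group B"]["attack_method"].append(
    ("Preferred small-arms ambushes", "Report 7, p. 9"))

# Retrieval: list every instance where a variable of interest emerged.
for case, variables in coded_info.items():
    for variable, entries in variables.items():
        for paraphrase, source in entries:
            print(f"{case} | {variable}: {paraphrase} [{source}]")
```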
Step 3: Conduct the analysis. Apply logic and reasoning to information after it is collected, evaluated, coded, and organized. Some of this activity takes place within the critical-thinking elements of points of view and assumptions (Chapter 6), conceptualization (Chapter 7), alternatives (Chapter 8), and interpretation/inference (this chapter). At this point, the analyst must select the analytic techniques best suited for the analytic project (see Figure 9.2).
In selecting an analytic technique, the analyst should evaluate how individual variables are measured. Items to assess include:
- Variables may be measurable only at the categorical nominal level and thus may be categorized but cannot be rank ordered. At times, nominal variables may be measured as either existing (yes or 1) or not existing (no or 0), which is referred to as a dichotomous dummy-variable measure.
- Some variables may lend themselves to categorical ordinal measures; these variables may be placed in ordered categories such as low, medium, and high. The intervals between these categories, however, may not be equal.
- Other variables may have continuous ratio or interval measures; for example, this is often true if the variable is measured in a survey instrument or comes from existing statistical reports such as crime data.
Another consideration in selecting an analytic technique is whether the information may be subjected to descriptive statistical analysis—meaning generating descriptive statistics and correlation analyses (see Figure 7.3) to determine if there are obvious patterns in the information. At a minimum, this may include investigating if the information lends itself to descriptive statistical analysis with central tendencies (mode, median, mean), measures of dispersion or variation, and a range (low to high measures). Even nominal measured variables have a central tendency (mode, or most frequent measure) and a rough range (listing of most frequent categories or behaviors). Descriptive statistics often are displayed as lists of variables, tables, graphs, or other such methods to summarize the information. Descriptive statistics allow the initial analysis of often disparate and seemingly unconnected information using easily understandable formats.
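As a quick illustration of this first descriptive pass, the sketch below computes central tendency, dispersion, and range for an invented series of yearly incident counts using only Python's standard library.

```python
import statistics

incident_counts = [3, 7, 7, 2, 9, 4, 7, 5, 6, 3]  # invented yearly counts

print("mode:  ", statistics.mode(incident_counts))    # most frequent value
print("median:", statistics.median(incident_counts))
print("mean:  ", statistics.mean(incident_counts))
print("stdev: ", round(statistics.stdev(incident_counts), 2))  # dispersion
print("range: ", (min(incident_counts), max(incident_counts)))
```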
The key to good qualitative analysis is for the analyst to continually question both the information and reasoning employed in the analysis. Information should be sought that either supports or does not support hypotheses. If only anecdotal information is available, the analyst should widen the information collection effort (Chapter 5). It is useful to routinely approach analyses with suspicion. Analysts must be self-critical and aware of their own biases (the lenses from the etic and emic models discussed in Chapter 3, or those identified in the assumptions and beliefs analysis in Chapter 6). Analysts should not reject, or too easily accept, “folk explanations”—common explanations for the behavior. Be persistent in looking for consistencies and inconsistencies in the information. Ask: Do the patterns that emerge really mean something? Look for negative information—evidence that does not support the preferred hypotheses. Look at alternative explanations for the behavior. Ask: What other conclusions could be made from the same information? Try to identify potential sources of bias in the information or reasoning. Question: Does the analysis ensure reliability and validity? Ask Alexander’s Question: What new information or different assumptions would negate the findings?
Descriptive inference is based on several assumptions analysts should check to make sure they have not violated the rules of causality. These assumptions include:
- The causal model is complete; that is, no important variables are omitted or irrelevant variables included in the model. Social science is based upon an assumption of parsimony (keeping the causal models as simple as possible); however, do not let parsimony get in the way of designing a robust causal model.
- Operationalized measures of variables are unbiased and efficient with measurement error minimized.
- Causal effects are symmetric; that is, as independent variables move up and down, so does (do) the dependent variable(s).
- There is no multicollinearity; that is, there are no strong relationships (correlations) between independent variables.
- There is no endogeneity; that is, the dependent variable does not cause changes in any of the independent variables.
Unlike quantitative analysis, there are no easy methods to test for violations of the above assumptions in qualitative analysis. The analyst must continuously be aware of possible violations of these assumptions and the magnitude and direction of biases and uncertainty created by possible violations (see handling bias in Chapter 3). As a result, the above assumptions are often violated in qualitative analyses.
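Although these assumptions cannot be tested easily in qualitative work, their quantitative analogs can be checked directly. The sketch below, on toy data, probes the multicollinearity assumption by computing pairwise correlations among hypothetical independent variables; the 0.8 cutoff is a common rule of thumb, an assumption here rather than a rule from this text.

```python
import numpy as np

# Three hypothetical independent variables over six observations.
x1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
x2 = np.array([2.1, 3.9, 6.2, 8.1, 9.8, 12.2])  # nearly 2 * x1: collinear
x3 = np.array([5.0, 1.0, 4.0, 2.0, 6.0, 3.0])   # unrelated variable

# Rows are variables; corrcoef returns the 3 x 3 correlation matrix.
corr = np.corrcoef(np.vstack([x1, x2, x3]))
print(np.round(corr, 2))

# Flag strong correlations between independent variables.
if abs(corr[0, 1]) > 0.8:
    print("Warning: x1 and x2 appear strongly collinear.")
```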
Analysts should also avoid the two most common mistakes in qualitative analysis:
- Do not become overzealous in the analysis. Consider Occam’s Razor: the first and simplest explanation for a behavior is often the best. Too much analysis could lead to envisioning variable relationships that do not really exist.
- Make sure to conduct an analysis. Too often, analysts become so caught up in collecting and describing their information, they forget to look for patterns and explanations for the situation under analysis.
Below are several qualitative techniques often used in security analysis.
Logical argumentation. This inductive qualitative technique assists the analyst in combining information (data, facts, evidence) with basic reasoning to reach contentions (theses, key judgments, findings, conclusions, and recommendations). It also is called evidentiary reasoning. This technique should be familiar to most analysts as it is similar to methods taught in secondary school to write research papers and present arguments; a primary difference being the contention is placed first and not as an ending summary or conclusion. Lawyers are familiar with this technique as it is how they build their case arguments. Logical argumentation calls for the analyst to first gather and organize information; then, using basic reasoning, construct logical arguments using that information. This technique is often intuitive and inductive in nature. Logical argumentation at times may result in grounded theory; that is, theory developed from the information itself.
To assist with a logical argumentation analysis, the analyst should create an argument map as shown in Figure 9.3.11 The contention, or Bottom Line Up Front (BLUF), is placed at the top of an argument map. Although it appears first, it should be written only after the information is collected, the analysis is completed, and the underlying argument-map structure is built. The argument map could cover the entire analytic project or just support a single, complex analytic finding, depending on the nature of the analysis. Using the hierarchical structure shown in Figure 9.3, the analytic findings supporting the contention are developed and presented. The figure depicts only one analytic finding; but, in situations where there are more (usually the case), the findings should be listed in descending order of importance from left to right on the argument map. Each finding then is supported with reasons and evidence, in addition to any individual objections related specifically to that analytic finding.
Major objections address the specific contention or BLUF and are placed to the right on an argument map; again, the importance of major objections also runs left to right. Major objections may address alternatives not selected but must be included in the argument map and final analytic report. These objections help moderate the effects of confirmation bias, where only reasons and evidence supporting pre-formed points of view may be presented while overlooking reasons and evidence contrary to those pre-formed points of view. Each major objection will have its own supporting reasons and evidence. Additionally, rebuttal reasons and evidence that refute major objections (i.e., why the major objection was not considered further or included in the findings) should be provided.
Figure 9.4 is an example of a logical argumentation analysis with a corresponding argument map. In this example, the research question asks: “What tactics will terrorists use against the U.S. homeland in the future?” Using available information, this analysis concludes that terrorists will use more “lone wolf” attacks. The findings, reasons, objections, major objections, and rebuttals supporting this conclusion are shown below.
Logical argumentation has at least two primary weaknesses. First, it may allow the analyst to either fail to collect all the information on a situation or to discount information that may not support pre-formed views of the resultant contention. This can be the case when the analyst knows a customer has a pre-formed view of the situation and he/she builds an argument to support that pre-formed view. Second, as logical argumentation is more intuitive in nature, analysts also may not make the effort to complete all of the critical-thinking elements. The analyst may ignore the elements of context, points of view, assumptions, conceptualizations, and alternatives, resulting in contentions with questionable validity. Logical argumentation, therefore, should only be employed when time constraints or incomplete information do not allow a more in-depth information search and critical-thinking analysis.
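To make the argument-map hierarchy concrete, here is a minimal data-structure sketch; the contention, findings, objections, and rebuttal are invented for illustration and echo the lone-wolf example above.

```python
# An argument map as nested data: contention at the top, findings and
# major objections ordered most to least important (left to right).
argument_map = {
    "contention": "Terrorists will increasingly use lone-wolf attacks.",
    "findings": [
        {
            "finding": "Group-planned plots are disrupted more often",
            "reasons_evidence": ["Improved intelligence collection",
                                 "Arrest statistics over recent years"],
            "objections": ["Disruption rates vary widely by region"],
        },
    ],
    "major_objections": [
        {
            "objection": "Lone actors lack resources for large attacks",
            "reasons_evidence": ["Most lone-wolf attacks use simple weapons"],
            "rebuttal": "Simple weapons have still produced mass casualties",
        },
    ],
}

print(argument_map["contention"])
for finding in argument_map["findings"]:
    print("  Finding:", finding["finding"])
for objection in argument_map["major_objections"]:
    print("  Major objection:", objection["objection"])
```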
Pattern matching. This is a deductive qualitative technique that begins with a model or theory already devised and then populated with information to reach a contention. Pattern matching is also called process tracing because it assists the analyst in gaining an in-depth understanding of the causal relationships or structured process steps involved in a situation. Pattern matching frequently is used in qualitative security analyses when only one case is being analyzed. Basic pattern matching, similar to logical argumentation, is sometimes learned in secondary school to organize and present a variety of empirical information. The primary difference between pattern matching and logical argumentation is the employment of a theory or model as the framework for guiding the pattern-matching analysis.
Pattern matching is a mainstay of structural causal analyses that must comply with rules for establishing causality. Originally presented in Chapter 3, the rules of causality include:
- Time ordering—the change or condition in the independent variable(s) must occur before the change or condition in the dependent variable(s).
- Non-spuriousness—there cannot be a third variable that is causing both the independent and dependent variables to change.
- Co-variation—as independent variables change, there must be corresponding changes in the dependent variables.
- Theory—as empiricism alone cannot establish causality, a theory is required (from the rationalism level) to explain why the causal relationships exist.
When conducting a pattern-matching analysis, organize and present information and linkages in a manner where even the most skeptical readers will agree with the contentions. As a skeptical reader might do, the analyst must ask: How is this known? Why should this be accepted as fact? It is important that the information be accurate, sufficient, representative, and precise.12 It is up to the analyst to logically link the different pieces of information and help the reader understand what it means in terms of the hypotheses being tested or alternatives being assessed. Information to support a pattern-matching analysis may include:
- Direct quotations from conversations, speeches, interviews, existing reports, articles, books, etc.
- Observations by intelligence collectors, diplomats, media personnel, etc.
- Words representing objects, images, or events using anecdotes, narratives, or descriptions.
- Ground-truth imagery, overhead imagery, charts, maps, videos, voice recordings, communication intercepts, models, etc., that represent objects or events visually or aurally.
- Figures, tables, diagrams, graphs, boxes, charts, or maps.
- Summaries and paraphrases of any of the above.
Box 7.1 provides an example case study employing a pattern-matching analysis based on the Geller & Singer War-Prone Dyad Model to predict the 1980 start of the Iran-Iraq War. In this case study, the structural model creates a series of hypotheses, which then were tested using available information to see if each hypothesis would or would not contribute to the outbreak of war. The Box 7.1 case study analysis does predict the war but, since it was a counterfactual (after-the-fact) analysis, it also explains why the war broke out.
Pros-cons-fixes. This is an inductive qualitative technique for assessing and selecting among one or more alternatives. It is often called the Benjamin Franklin Technique as this U.S. Founding Father wrote about how he used this technique when faced with a particularly hard decision.13 In Franklin’s time, he used only the pros and cons parts of the technique; the fixes part was added later. The pros-cons-fixes technique helps overcome the normal human reaction of negativity to a new idea or alternative by forcing an assessment of the positives (pros) in an idea or alternative before assessing the negatives (cons). The fixes part of the technique then allows creation of potential solutions to the cons. Pros-cons-fixes help analysts organize the factors of the problem in a logical way so each factor can be assessed separately, systematically, and sufficiently.14 Pros-cons-fixes are best assessed using a separate table for each alternative; for example, using three columns (pros, cons, fixes) and following the six-step process detailed below, with a short code sketch after the steps.15 See Box 9.1 below for an example of a pros-cons-fixes analysis.
Step 1: List all the pros. Using group-informed brainstorming or lone storming (Chapter 8), list the positives, benefits, merits, and/or advantages of why this alternative should be selected.
Step 2: List all the cons. Again, using group-informed brainstorming or lone storming, list all the negatives for why this alternative should not be selected.
Step 3: Review and consolidate the cons. Using a convergent-thinking approach, merge similar cons and eliminate redundant cons to consolidate the list.
Step 4: Neutralize or fix as many cons as possible; that is, determine what measures can be accepted or what can be done (fixed) to turn a con into a pro or at least to neutralize the con.
Step 5: Compare the pros, unalterable cons, and fixes for all alternatives. Remember the remaining cons constitute the price one will pay if that alternative is selected.
Step 6: Select the best alternative.
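A minimal code sketch of the six-step bookkeeping, using invented pros, cons, and fixes for a single hypothetical alternative:

```python
alternative = "Hypothetical Alternative A"
pros = ["Proven method", "Low cost"]                   # Step 1
cons = ["High detection risk", "High detection risk",  # Step 2
        "Limited damage"]

cons = list(dict.fromkeys(cons))  # Step 3: merge/eliminate redundant cons

fixes = {"High detection risk": "Use smaller, dispersed teams"}  # Step 4
unalterable_cons = [c for c in cons if c not in fixes]

# Step 5: compare; the remaining cons are the price of this alternative.
print(f"{alternative}: {len(pros)} pros, "
      f"{len(unalterable_cons)} unalterable cons, {len(fixes)} fixes")
```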
Box 9.1 | Pros-Cons-Fixes—An Analysis of al Qaeda Decision Making
Suppose the al Qaeda leadership wanted an assessment of whether to use truck bombs or hijacked aircraft to create explosions in upcoming attacks on the United States (see Box 8.1). Assume the leadership’s concerns surround the fact al Qaeda had used truck bombs successfully in the past, discounting the failed 1993 World Trade Center attack, but the organization had never used hijacked aircraft as missiles to create explosions with the aircraft fuel load. Consider that al Qaeda planner Khalid Sheikh Mohammed (KSM) conducted the hypothetical pros-cons-fixes analysis (below) of the truck bombings and hijacked aircraft alternatives. Assume the analysis so far points to making multiple near-simultaneous spectacular attacks inside the United States, and that KSM completed a CARVER analysis (see Box 8.2) and selected the New York World Trade Center, Pentagon, and U.S. Capitol as the primary targets.
Alternative: Truck Bombings

| Pros | Cons | Fixes |
|---|---|---|
| (entries not preserved) | | |

Alternative: Hijacked Aircraft

| Pros | Cons | Fixes |
|---|---|---|
| (entries not preserved) | | |
Since this is a counterfactual (after-the-fact) analysis, we know al Qaeda selected the hijacked aircraft alternative. It was selected even though the hijacked aircraft alternative had a longer list of cons. When combined with the weighted-ranking analysis below, the analysis explains why al Qaeda likely selected the hijacked aircraft alternative.
Pros-cons-fixes analyses have two main weaknesses. First, there are no consistent evaluation factors for comparing multiple factors across alternatives. The matrix analysis technique below compensates for the lack of consistent evaluation factors. Second, there is no accepted method to prioritize or determine which of the factors used in the analysis are the most important in decision making. Benjamin Franklin devised his own method for determining which items on the pros-cons list for one alternative were the most important. He would first assign weights (scores—scale unknown) to each item as to its importance regarding the decision. He would then strike out any two items (one pro, one con) of equal weights. He continued by removing items where one pro would equal two cons. Finally, he would remove items where two cons equaled three pros. This intuitive process would leave the most important items on the pros-cons list to consider in the final decision.16 Below is a weighted-ranking technique that builds on Franklin’s process in a more systematic manner.
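Franklin's cancellation procedure can be sketched in a few lines. The version below implements only the equal-weight step (one pro cancels one con of the same weight); the items and weights are invented, and Franklin's own scale is unknown.

```python
# Invented items with subjective importance weights.
pros = {"Proven method": 3, "Low cost": 2, "High impact": 5}
cons = {"Detection risk": 3, "Operative losses": 2}

# Strike out one pro and one con of equal weight, pair by pair.
for pro, weight in list(pros.items()):
    for con, con_weight in list(cons.items()):
        if weight == con_weight:
            del pros[pro]
            del cons[con]
            break

print("Remaining pros:", pros)  # the items left to weigh in the decision
print("Remaining cons:", cons)
```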
Weighted Ranking. This is an inductive qualitative technique for prioritizing the importance of several alternative items in a decision-making process. Humans constantly rank items subconsciously. For example, when going to the movies, people inventory the films being shown and determine the one film they most want to see. Ranking items in terms of relative importance to other items is an instinctive process that facilitates decision making. The weighted-ranking technique is a systematic process for relative ranking by employing the nine-step process described below and illustrated in Box 9.2.17
Step 1: List all major criteria for ranking items. The Box 9.2 example supports a hypothetical analysis where al Qaeda is deciding the most important items to consider in planning an attack on U.S. interests. The list of criteria al Qaeda decides to use in its hypothetical analysis includes: attack executability, overall costs, obtaining spectacular results, operative survival, and infiltration/exfiltration requirements. This analysis supports, but does not replace, the previous al Qaeda pros-cons-fixes example in Box 9.1.
Step 2: Conduct a pair-wise ranking: compare each criterion to every other criterion to determine which is more important in the analysis being conducted; the winner of each comparison gets one point. The criteria with the most points are considered the most important in the decision process. To pair-wise rank the criteria, compare the first criterion on the list to the second, the first to the third, and so on. After the first criterion is compared to all others, compare the second criterion to the third, the second to the fourth, and so on, continuing until all criteria have been compared to all others. Keep a tally of the winners (votes) in each comparison. Finally, calculate the total votes for each criterion and their final ranking. Figure 9.5 provides an example, and a code sketch follows the figure.
Note: The total number of votes in a pair-wise ranking, where N equals the number of criteria or items to be ranked, is:

N × (N − 1) / 2

For the five criteria in this example, that is 5 × 4 / 2 = 10 total votes, matching the tally in Figure 9.5.
Figure 9.5 | Pair-Wise Ranking of Criteria

| Criteria | Tally of Votes | Total Votes | Final Ranking |
|---|---|---|---|
| Executability | III | 3 | 2 |
| Costs | II | 2 | 3 |
| Spectacular Results | IIII | 4 | 1 |
| Operative Survival | | 0 | 5 |
| Infil./Exfil. Req. | I | 1 | 4 |
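The tally in Figure 9.5 can be reproduced mechanically. In the sketch below, the prefer function is a stand-in for the analyst's subjective judgment; its internal priority ordering is chosen only so the output matches the figure.

```python
from itertools import combinations

criteria = ["Executability", "Costs", "Spectacular Results",
            "Operative Survival", "Infil./Exfil. Req."]

def prefer(a, b):
    """Return the winner of one comparison (stand-in for analyst judgment)."""
    priority = {"Spectacular Results": 5, "Executability": 4, "Costs": 3,
                "Infil./Exfil. Req.": 2, "Operative Survival": 1}
    return a if priority[a] > priority[b] else b

votes = {c: 0 for c in criteria}
for a, b in combinations(criteria, 2):  # N x (N-1) / 2 = 10 comparisons
    votes[prefer(a, b)] += 1

for rank, (criterion, v) in enumerate(
        sorted(votes.items(), key=lambda kv: -kv[1]), start=1):
    print(f"{rank}. {criterion}: {v} votes")
```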
Step 3: Select the top several criteria (three or more) to be used in the weighted-ranking analysis and subjectively weight them with probabilities. As with utility analysis (see Box 7.2), the weights (probabilities) for all criteria selected must sum to 1.0. An example:

| Criteria | Weighted Probability |
|---|---|
| 1. Spectacular Results | 0.5 |
| 2. Executability | 0.3 |
| 3. Costs | 0.2 |
| Total Probability | 1.0 |
Step 4: Construct a weighted-ranking matrix and enter the items to be ranked in the left column, selected criteria (Step 2), and selected criteria weights (Step 3). See Box 9.2 for the weighted-ranking matrix for this example.
Step 5: Pair-wise rank all the items to be ranked for each criterion and record the votes in the weighted-ranking matrix. See Box 9.2.
Step 6: Multiply the number of pair-wise ranking votes for each item being ranked by the respective criterion probability. See Box 9.2.
Step 7: Add the weighted values for each item being ranked and enter the sums in the “Total Votes” column. See Box 9.2.
Step 8: Determine the items with most “Total Votes” and enter their ranking in the “Final Ranking” column. See Box 9.2.
Step 9: Conduct a sanity check. Ask: Do these results make intuitive sense? If not, go back and check the pair-wise rankings of criteria and items to be ranked and the math in the weighted-ranking matrix. Do not be surprised if the final results are counter-intuitive as they may conflict with the analyst’s initial intuitive expectations. Additionally, a sensitivity analysis should be conducted where other analysts pair-wise rank the criteria and adjust the criteria probabilities to see if the initial results are robust under different perspectives.
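Steps 4 through 8 reduce to a weighted sum. The sketch below scores three invented items against the weighted criteria from Step 3; the per-criterion vote counts are hypothetical stand-ins for pair-wise ranking results, not figures from Box 9.2.

```python
criteria_weights = {"Spectacular Results": 0.5,
                    "Executability": 0.3,
                    "Costs": 0.2}

# votes[item][criterion] = pair-wise votes the item won on that criterion
votes = {
    "Delivery Method":       {"Spectacular Results": 6, "Executability": 4,
                              "Costs": 3},
    "Explosive Procurement": {"Spectacular Results": 5, "Executability": 6,
                              "Costs": 5},
    "Suicide Attack or Not": {"Spectacular Results": 1, "Executability": 2,
                              "Costs": 2},
}

# Steps 6-7: weight each vote count and sum across criteria.
totals = {item: sum(v * criteria_weights[c] for c, v in per_crit.items())
          for item, per_crit in votes.items()}

# Step 8: rank by total weighted votes.
for rank, (item, total) in enumerate(
        sorted(totals.items(), key=lambda kv: -kv[1]), start=1):
    print(f"{rank}. {item}: {total:.1f} weighted votes")
```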
Box 9.2 | Weighted Ranking—al Qaeda Decision Making

Question: What factors are the most important for al Qaeda to consider when planning an attack on U.S. interests?

Items to be ranked from informed brainstorming:

- Experience: What experience do al Qaeda operatives have with conducting attacks?
- Explosive Procurement: How difficult will it be to obtain explosives for the attacks?
- Suicide Attack or Not: Will suicide attacks be required?
- Number of Operatives: How many al Qaeda operatives will be required to conduct the attacks?
- Delivery Method: How important is the delivery method to the attacks?
- Operational Security: Will operatives be undetected in preparing and carrying out the attacks?
- Hardened Target: How hardened will targets be against attack?

Criteria pair-wise ranking and assigning probabilities for this example were covered in Steps 2 and 3 above.

Weighted-ranking matrix: (table not preserved)
The results of this analysis suggest the most important items for al Qaeda to consider in its planning are explosives procurement, delivery method, hardened targets, and operational security (remaining undetected). Of least importance are whether it will be a suicide attack, the number of operatives required to carry out the attack, and the experience of al Qaeda operatives. These results are consistent with what is known of the actual al Qaeda terrorist attacks on the United States on September 11, 2001.
Matrix Analysis. This inductive and/or deductive qualitative technique is one of the most powerful, versatile, and flexible techniques used in security analysis. Both qualitative and quantitative information may be included in a matrix analysis. It may be used to address past, current, and future events. Inductively, it is a handy, clear method for sorting information. It may assist in assessing and selecting from a list of alternatives. Deductively, it can be used for testing theories, models, and hypotheses. In fact, the matrix may be considered a model itself. Matrix analyses are analytically illuminating, because even when the analysis does not point to one solution or decision-making option, it helps sharpen analytic judgments.18
Matrixes may project solutions or decisions from a dense body of information. Matrix analysis may be considered similar to a jigsaw puzzle, allowing the isolation of the parts of the problem or decision. Each puzzle part may be assessed in-depth and then the entire puzzle reassembled in a logical order. Matrix analyses may allow:
- Separation of elements of a problem.
- Categorization of information by type.
- Comparison of one type of information with another.
- Comparison of information of the same type.
- Identification of correlations or patterns among the information.19
A matrix is simply a grid with the number of cells needed for whatever issue is being analyzed. It consists of rows and columns (see the weighted-ranking matrix in Box 9.2). For a matrix analysis, the top row of each column lists the options, alternatives, or hypotheses being tested or investigated. In the first column (left-most), the evidence or evaluation factors are listed. The evidence or evaluation factors may be analyzed first in a weighted-ranking analysis and then listed in the first column in order of their importance to the final results. The measurements of evidence or evaluation factors used in the matrix may be nominal, such as a dichotomous variable of yes or no, or notations of whether the evidence or evaluation factor is consistent or inconsistent (C or I) with each option, alternative, or hypothesis. Measurements also may be ordinal (low, medium, high) or include continuous interval or ratio measures. Matrix analyses are similar to pros-cons-fixes, whereby pros equate to consistent evidence and cons to inconsistent evidence, but in a matrix all the pros and cons for all alternatives are shown in one table. A combination of weighted-ranking and matrix analyses provides a more in-depth assessment over a pros-cons-fixes analysis. Box 9.3 provides an example of a matrix analysis.
Intelligence analysts often employ matrixes to conduct analyses of competing hypotheses (ACH).20 Here the term hypothesis is used in a broad sense to include different options, alternatives, outcomes, scenarios, explanations, predictions, conclusions, or hypotheses. ACH normally lists evidence and assumptions in the left-most column. The evidence and assumptions should have undergone quality-of-information checks (see Figure 5.8) and assumptions and beliefs analysis (see Figures 6.4 and 6.7), respectively. Each hypothesis is assessed against each item of evidence and each assumption. Once the matrix is fully populated with information, the way the final results are reached may seem counterintuitive: the goal of ACH is not to determine the hypotheses with the most support, but to determine the hypotheses with the fewest inconsistencies, which are the most likely results. See Box 10.2 for an example of an ACH effort.
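A minimal sketch of the ACH scoring rule, run on an invented evidence-by-hypothesis matrix; counting a strongly inconsistent rating (II) as double weight is an assumption for illustration, not a rule from the ACH literature.

```python
# ratings[evidence][hypothesis] -> C/CC (consistent), I/II (inconsistent), NA
ratings = {
    "Evidence 1": {"H1": "C",  "H2": "I",  "H3": "C"},
    "Evidence 2": {"H1": "CC", "H2": "II", "H3": "I"},
    "Evidence 3": {"H1": "I",  "H2": "I",  "H3": "I"},
}
INCONSISTENCY_WEIGHT = {"I": 1, "II": 2}  # assumed weighting

hypotheses = ["H1", "H2", "H3"]
scores = {h: sum(INCONSISTENCY_WEIGHT.get(row[h], 0)
                 for row in ratings.values())
          for h in hypotheses}

best = min(scores, key=scores.get)
print("Inconsistency scores:", scores)
print("Most likely hypothesis (fewest inconsistencies):", best)
```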
Security policy analysts often employ outcome matrix analyses.21 The matrixes normally focus on identifying the best solution for a problem or otherwise supporting decision making. Outcome matrixes may employ evidence and assumptions in the left column but, more likely, will list a number of evaluation factors. The outcome matrix evidence, assumptions, and evaluation factors may undergo a weighted-ranking analysis to determine those most important in reaching a solution to a problem or a decision. Outcome matrixes help analysts identify and confront tradeoffs—evaluation factors or measurements that highlight significant differences in options. Differing from ACH, outcome matrixes determine the options, etc., with the most support. Box 9.3 provides an example of an outcome matrix analysis.
Matrix analyses—both ACH and outcome matrixes—follow a general, step-by-step process shown below. Box 9.3 follows with an example of an outcome matrix analysis of President Obama’s 2009 decision to increase U.S. troop levels in Afghanistan.
Step 1: Identify the options, alternatives, outcomes, scenarios, explanations, predictions, conclusions, hypotheses, or recommendations to be analyzed. These must be mutually exclusive (not overlap) and should emerge from previous work in conceptualization (Chapter 7) and alternative development (Chapter 8).
Step 2: Construct a matrix. Label the first column Evidence (includes assumptions), Evaluation Factors, or both. Do a critical review of the options, alternatives, outcomes, scenarios, explanations, predictions, conclusions, hypotheses, or recommendations and place those selected at the top of individual columns. In the review, eliminate those options that have little chance of being selected. In some analyses, the status quo (that is, no changes to current conditions) may be an option, unless it is inappropriate to the analysis or a prior decision is made that changes are needed.
Step 3: List significant evidence and/or evaluation factors down the left-hand rows. These may undergo a weighted-ranking analysis to determine the most important evidence or evaluation factors; then, list them in the left-hand rows in descending order. Additionally, list any missing evidence that may be applicable to the analysis. Ask Alexander’s Question: What new information or different assumptions would negate the findings? When conducting an ACH, make sure the evidence is listed as data, assumptions, or statements and not in question form. For generating evaluation factors in outcome matrix analyses, start with the following considerations:
- Effectiveness: Seek to answer the question or solve the problem.
- Efficiency:
- Seek to maximize net benefits and maximize customer happiness (see Chapter 7 on assessing possible customer utilities).
- Ensure customer willingness to pay for options, etc., based upon current resources.
- Calculate cost/benefit analyses.
- Equity and Practicality:
- Assess equity to all players (customers, society, etc.).
- Seek customer input when faced with conflicts in weighting evaluation factors.
- Assess legality of options, etc.
- Appraise political acceptability of options, etc.; determine if there is too much opposition and/or too little support.
- Consider robustness of options, etc.; an option may be great in theory but not in practice.22
Step 4: Work across the matrix (left-to-right) and assess the specific evidence or evaluation factor against each individual option. It is important to initially follow this step—left-to-right—to ensure consistency in measurements. Assessment measurements may include:
- Nominal categorical measurements such as yes/no or +/- (plus/minus). For ACH, the measurements focus on consistency between the evidence and hypotheses and usually are assessed as consistent (C), inconsistent (I), strongly consistent (CC), strongly inconsistent (II), or not applicable (NA). Remember that the goal of ACH is to determine one or more hypotheses with the fewest inconsistencies.
- Ordinal categorical measures (low, medium, high).
- Interval or ratio continuous measurements. In some analyses, continuous measurements may be given in ranges (30 to 50, etc.).
Step 5: Review and refine the matrix. Delete, add, or reword options, etc., and evidence and evaluation factors. If any single piece of evidence or single evaluation factor is assigned the same measurement for all options, etc., it has no diagnostic value. Thus, evidence or evaluation factors with no diagnostic value may be removed from the matrix. However, do not discard this evidence or evaluation factor because it may be needed in the final presentations to supervisors or customers.
Step 6: Evaluate each option, etc., remaining. Review each option vertically (up and down the option’s column). Reevaluate and confirm the validity of evidence or evaluation factors. Review underlying assumptions. Delete any options where there are significant inconsistencies or questionable validity in the assessment measurements.
Step 7: Compare the remaining options, etc., and then rank them from best to worst. In ACH, the hypotheses with the fewest inconsistencies are the most likely. In outcome matrixes, the options, etc., with the most positive evaluations are usually the best selections.
Step 8: Perform a sanity check. As with the weighted-ranking technique ask: Do the results make intuitive sense? If not, recheck the information and entire analytic process. Once again ask Alexander’s Question: What new information or different assumptions would negate the findings?
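For contrast with ACH, here is a minimal outcome-matrix sketch in which the option with the most weighted support wins. The evaluation factors, weights, and ordinal ratings are invented, and every factor is phrased so that a higher rating is better.

```python
SCALE = {"low": 1, "medium": 2, "high": 3}  # ordinal ratings as numbers

# matrix[factor] = (weight, {option: ordinal rating})
matrix = {
    "Effectiveness": (0.5, {"Option 1": "medium", "Option 2": "high",
                            "Option 3": "high"}),
    "Affordability": (0.3, {"Option 1": "high", "Option 2": "medium",
                            "Option 3": "low"}),
    "Political acceptability": (0.2, {"Option 1": "medium",
                                      "Option 2": "high",
                                      "Option 3": "low"}),
}

options = ["Option 1", "Option 2", "Option 3"]
scores = {opt: sum(weight * SCALE[ratings[opt]]
                   for weight, ratings in matrix.values())
          for opt in options}

# Unlike ACH, the highest (most supported) score wins.
for opt, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{opt}: {score:.2f}")
```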
Box 9.3 | Outcome-Matrix Analysis: Obama’s War in Afghanistan
During the 2008 presidential campaign, the eventual victor, Democratic Party candidate Senator Barack H. Obama (D-IL), promised to end the ongoing Iraq and Afghanistan wars. By Obama’s January 20, 2009, inauguration, the Iraq war was already winding down and U.S. combat troops were being withdrawn. The U.S. had been in Afghanistan since October 2001, with 55,000 U.S. troops remaining in Afghanistan in early 2009. The continuing war was generally considered under-resourced and lacking a clear military strategy. Starting after 9/11, the U.S. spearheaded the removal of the ruling Taliban government from power and denial of Afghanistan as a safe haven and support base for al Qaeda (the U.S. primary mission). The war later turned into a stalemate with the U.S., Allied, and new Afghan government forces fighting militants from the Taliban and al Qaeda. The U.S. strategy was based loosely on counter-insurgency (CI) operations modeled on the same strategy used to end the war in Iraq.

Three days after Obama’s presidential inauguration, the U.S. National Security Council (NSC) met to strategize on the war in Afghanistan. This particular meeting was to gain presidential approval to increase U.S. troop levels in Afghanistan to provide security for the upcoming August 2009 Afghan elections. The U.S. Department of Defense (DOD) was requesting an additional 30,000 troops, plus force enablers (helicopters, medical, communication, and other support units). President Obama did not immediately approve the full request. Within a few weeks, however, he did approve 17,000 additional troops. His primary goal, though, was to end the war, so he ordered an outside (think tank) strategic review of the war. By the fall of 2009, U.S. troop levels had increased to 68,000. The new U.S. commander in Afghanistan was ordered to complete a force structure and second strategic review. This review, and the decision process to send additional troops and define the nature of their mission, set in motion a bureaucratic politics (Chapter 6) infighting scenario lasting until Obama’s November 29, 2009, final decision. The infighting primarily was between the DOD (Secretary and senior generals), the White House NSC staff, and the President’s closest advisors. DOD continually insisted on a force mission of CI, just as conducted in Iraq, requiring significant additional troop levels. The White House instead was looking for a new approach, one that would wind down the war and require far fewer additional troops.

While no one actually constructed an outcome matrix for Obama’s decision on mission and troop deployments, the following matrix captures the essence of the decision (from Obama’s perspective) documented in Bob Woodward’s book Obama’s Wars.23 The options (outcomes) assessed included:

Option 1: 10,000 additional troops to train the Afghan Security Forces (police and military). No additional U.S. combat troops. This was the option Obama established if DOD pushed back significantly on Option 2. He apparently intended to then let DOD use the 68,000 existing troops to execute the war as they saw fit—and see what happened. He apparently also intended to start troop withdrawals in July 2011 as with Option 2. DOD had floated a lower-limit option of 20,000 additional troops, but it was not one either DOD or the White House seriously considered.

Option 2: 30,000 additional combat troops plus up to 3,000 additional enablers. Troops would train Afghan Security Forces and conduct counterterrorism (CT) operations, meaning intelligence-supported attacks on known Taliban or al Qaeda locations. After this surge of additional resources, troop withdrawals would begin in July 2011. This option was designed by President Obama in consultation with his White House advisors.

Option 3: 45,000 additional troops with enablers. Troops would train Afghan Security Forces and conduct CI operations, meaning to take, secure, and guard territory and gain the trust of the Afghan people, thus denying those territories to the Taliban and al Qaeda. There was no schedule to remove U.S. troops in this option nor a plan to end the war. This option was proposed and championed by DOD, to the point it all but refused to consider other options.

Other Options: A status quo option was discussed (to remain at the 68,000-troop level), but both DOD and the White House agreed something needed to be done in Afghanistan, so this option was set aside. An option of 85,000 additional troops with enablers was presented by DOD, with all other conditions the same as Option 3. This was suspected to be a worst-case option presented to make the preferred DOD option of 45,000 seem more acceptable to the White House. Both DOD and the White House considered the 85,000-troop option too costly, and it was not discussed seriously.
President Obama selected Option 2, partly because he wanted to meet the DOD’s requests at least part way. Option 1 was only a fallback option for Obama in case DOD resisted Option 2. Option 2 met all of Obama’s policy goals. Throughout the fall 2009 discussions, it was clear DOD failed to consider the policy factors important to President Obama and anchored themselves on the CI strategy employed successfully in Iraq. DOD did not give adequate attention to Obama’s campaign promises to end the war or his desire to send a political message to the Afghan government that it needed to take more responsibility for its own security, as the United States was not staying long-term.

A major player in the negotiations was Vice-President Joe Biden. President Obama asked Biden to question information, assumptions, and conclusions raised by all sides in the discussions. This is known as placing Biden in the role of a Devil’s Advocate, a technique commonly used in security analysis to insert contrarian views into the analytic process. In fact, it was Biden who strongly questioned the continued use of a CI strategy in Afghanistan (as the on-ground context differed from Iraq), and he was the first to offer the alternative of a CT strategy, which Obama eventually supported.

What Happened? U.S. troop levels in Afghanistan reached a total of almost 100,000 by July 2011. As he originally ordered, a slow withdrawal of U.S. troops began that month and, by Obama’s departure from the White House in January 2017, around 8,400 U.S. troops remained in Afghanistan. President Trump inherited the lower troop levels, CT strategy, and efforts to train Afghan Security Forces. The war continued through Trump’s 2017-2020 term. The U.S. departed Afghanistan in August 2021 after the Trump Administration negotiated a withdrawal with the Taliban, which was executed under the direction of then-President Biden. The U.S. strategy to end the war was not successful because of the lack of political will on the part of the Afghan government. The Afghan government did not support the Afghan Security Forces, which did not reach the strength or level of effectiveness that would allow transfer of responsibility for security operations from the U.S. and Allied forces. Moreover, the Afghan government did not demonstrate the political will to improve its corrupt and ineffective governance.
Key Concepts
Alexander’s Question
Analysis of Competing Hypotheses
Argument Mapping
Benchmark Proportions
Benjamin Franklin Technique
Comparative Analysis
Comparative Theory
Contentions
Descriptive Inference
Endogeneity
Evidentiary Reasoning
Grounded Theory
Inference
Interpretation
Logical Argumentation
Matrix Analysis
Multicollinearity
Occam’s Razor
Outcome Matrix
Pair-Wise Ranking
Pattern Matching
Process Tracing
Pros-Cons-Fixes
Qualitative Analysis
Qualitative Comparative Analysis
Quantitative Analysis
Statistical Inference
Unaided Judgment
Weighted Ranking
Discussion Points
- How did using only unaided judgment techniques (see Figure 9.2) affect the quality of pre-9/11 security analysis?
- Select a newspaper, magazine, or other published article and evaluate if the text is presented in logical argumentation format.
- Develop a pros-cons-fixes analysis for a decision situation in your personal life, past or present (selecting a college, accepting a job offer, buying a car, etc.). Would a utility analysis (Chapter 7) or weighted-ranking analysis provide greater insight to your decision?
- Using the information in Box 2.1, Box 7.2, and Figure 7.20, develop an outcome-matrix analysis of the Cuban Missile Crisis from President Kennedy’s perspective.
Notes
1 Richard Paul and Linda Elder, Critical Thinking, Tools for Taking Charge of Your Professional and Personal Life, 2nd ed. (Upper Saddle River, NJ: Pearson Education, Inc., 2014), 401.
5 Gary King, Robert O. Keohane, and Sidney Verba, Designing Social Inquiry, Scientific Inference in Qualitative Research (Princeton, NJ: Princeton University Press, 1994).
6 Originally published in Charles C. Ragin, Constructing Social Research, The Unity and Diversity of Method (Thousand Oaks, CA: Pine Forge Press, 1994), 105-130. Revised and reprinted in Charles C. Ragin and Lisa M. Amoroso, Constructing Social Research, The Unity and Diversity of Method 3rd ed. (Los Angeles, CA: Sage, 2019), 123-145.
7 George W. Bohrnstedt and David Knoke, Statistics for Social Data Analysis 3rd ed. (Itasca, IL: F.E. Peacock Publishers, Inc., 1994), 155-163. Most basic statistics texts will include cross tabulation material.
8 Ibid., 76-116, 120-150, 252-314. Most basic statistics texts will include ANOVA material. Techniques for OLS Regression may require more advanced texts.
9 Lawrence C. Hamilton. Regression with Graphics, A Second Course in Applied Statistics (Belmont, CA: Duxbury Press, 1992), 217-242. One-semester college statistics courses often are limited to teaching descriptive statistics and inferential statistics for cross tabulations and ANOVA. A second course usually is required to learn the use of OLS regression and logistic regression. More advanced courses are required to learn techniques such as Bayesian analysis, time series analysis, structural equation modeling, and Chaos and Catastrophe theories.
10 Richards J. Heuer, Jr. “Taxonomy of Structured Analytic Techniques,” (paper presented at the annual meeting of the International Studies Association, San Francisco, CA, March 26-29, 2008), 3-4.
11 Modified from Reasoninglab, “Argument Mapping,” https://www.reasoninglab.com/argument-mapping/ (accessed July 25, 2018).
12 Kate L. Turabian, A Manual for Writers of Research Papers, Theses, and Dissertations 7th ed., revised by Wayne C. Booth, Gregory G. Colomb, and Joseph M. Williams (Chicago, IL: The University of Chicago Press, 2003), 146.
13 Leonard W. Labaree ed., Benjamin Franklin: A Selection from His Personal Letters (New Haven, CT: Yale University Press, 1956).
14 Morgan D. Jones, The Thinker’s Toolkit, 14 Powerful Techniques for Problem Solving (New York, NY: Three Rivers Press, 1998), 77-78.
16 Glenn Stock, “How Ben Franklin Analyzed Pros and Cons to Make Decisions,” https://owlcation.com/humanities/Benjamin-Franklin-Pros-and-Cons (accessed March 14, 2021).
19 Ibid., 116.
20 Richards J. Heuer, Jr., Psychology of Intelligence Analysis (Washington, DC: Central Intelligence Agency, Center for the Study of Intelligence, 1999).
21 Eugene Bardach and Eric M. Patashnik, A Practical Guide for Policy Analysis, The Eightfold Path to More Effective Problem Solving, 5th ed. (Thousand Oaks, CA: SAGE/CQ Press, 2016), 61-65.
23 Bob Woodward, Obama’s Wars (New York, NY: Simon & Schuster, 2010).