
Chapter 3

Precision/Ambiguity of Beliefs

Aspects of Precision/Ambiguity

Critical Belief Analysis (CBA) views a belief’s precision or ambiguity, like the fundamental need agents look to a belief to satisfy, as a consequential but commonly ignored characteristic. In the vocabulary of CBA, precision and ambiguity, like loudness and softness, are complementary ways of describing the same phenomenon.

For the purposes of CBA, a belief’s precision is the narrowness of the range of observations believers consider belief-consistent. On the other hand, a belief’s ambiguity is the breadth of observations believers consider belief-consistent. In other words, the more precise a belief, the narrower the range of potentially supportive observations and the wider the range of potentially challenging observations. The more ambiguous a belief, the wider the range of potentially supportive observations and the narrower the range of potentially challenging observations.

Readily Quantifiable Precision/Ambiguity

The most precise (i.e., least ambiguous) belief one can have when playing roulette is: “On the next spin, the ball will land in a specific numbered pocket.” On a 38-pocket roulette wheel, such bets will be wrong about 97.4 percent of the time. A less precise (i.e., more ambiguous) belief would be: “On the next spin, the ball will fall into one of the wheel’s eighteen red pockets (or one of the wheel’s eighteen black pockets).” On the same (38-pocket) roulette wheel, such bets will be wrong about 52.6 percent of the time. The belief that the pocket the ball lands in will reflect divine will — a belief consistent with all possible outcomes — is even more ambiguous.
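These error rates follow directly from the pocket counts. Below is a minimal Python sketch of the arithmetic, assuming a standard American wheel with 38 pockets (the numbers 1 through 36, plus 0 and 00):

# Arithmetic behind the roulette error rates cited above.
POCKETS = 38

# Single-number bet: exactly 1 of the 38 pockets is belief-consistent.
p_wrong_single = (POCKETS - 1) / POCKETS

# Red (or black) bet: 18 of the 38 pockets are belief-consistent.
p_wrong_color = (POCKETS - 18) / POCKETS

print(f"Single-number bet wrong: {p_wrong_single:.2%}")  # 97.37%
print(f"Color bet wrong: {p_wrong_color:.2%}")           # 52.63%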

Hard-to-Quantify Precision/Ambiguity

A belief also may be imprecise/ambiguous because it is consistent with diverse qualitative realities. Applying this standard, Austro-British philosopher of science Karl Popper found Freudian and Adlerian theories highly ambiguous. Popper noted that the psychoanalytic theories of Austrian neurologist and founder of psychoanalysis Sigmund Freud and Austrian psychotherapist Alfred Adler effortlessly explained actions as varied as attempting to drown a child and sacrificing one’s life to rescue a child.[1] Freudians, Popper claimed, could attribute the first act to repression and the second act to sublimation (i.e., the transformation of the energy of a biological impulse to serve a more acceptable use). On the other hand, Adler’s followers could claim feelings of inferiority motivated both acts. In the first instance, Adler’s followers might argue that feelings of inferiority compelled the villain to prove they dared commit a crime. In the second instance, Adler’s followers might claim that feelings of inferiority motivated the hero/heroine to prove they dared to risk their lives in a rescue attempt.

Popper described these examples as symptoms of a trait Freud’s and Adler’s theories shared. While neither predicted human behavior, both could account, after the fact, for anything someone might do. Thus, in the language of CBA, they were profoundly ambiguous.

By contrast, more precise theories, such as the German-born physicist Albert Einstein’s General Theory of Relativity, were distinguished by their risky predictions. One such prediction was “gravitational lensing,” the bending of light by gravity.

Relativity Theory predicted the degree to which the sun would bend light passing close to its surface. It became possible to test this prediction in 1919 when a solar eclipse allowed astronomers to measure shifts in the apparent positions of stars whose light passed close to the sun. As Einstein had predicted, the apparent positions of those stars shifted twice as much as the English mathematician Sir Isaac Newton’s Universal Law of Gravitation forecast. Even a small discrepancy between astronomical observations and Einstein’s predictions would have raised questions about the validity of his theory, especially if the discrepancy was consistent with the competing Newtonian model.
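The quantitative content of this test can be made explicit. By the standard general-relativistic result (well established, though not stated in this chapter), light grazing the solar limb is deflected by

\[
\delta\theta_{\mathrm{GR}} = \frac{4GM_\odot}{c^{2}R_\odot} \approx 1.75'', \qquad \delta\theta_{\mathrm{Newtonian}} = \frac{2GM_\odot}{c^{2}R_\odot} \approx 0.87'',
\]

where \(G\) is the gravitational constant, \(M_\odot\) the solar mass, \(c\) the speed of light, and \(R_\odot\) the solar radius. The 1919 eclipse measurements favored the larger, Einsteinian deflection.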

The Impact of Second-Order Precepts on Precision/Ambiguity

A belief’s precision/ambiguity is also revealed by the ease with which agents recognize its flaws, i.e., by their answers to questions such as, “How would you know if your belief was wrong?” Answers to such questions are powerfully influenced by what CBA calls second-order precepts, that is, rules determining how agents think about, defend, criticize, and communicate about their beliefs. Some second-order precepts encourage agents to openly discuss and honestly grapple with challenges. Other second-order precepts encourage agents to defend and promulgate their beliefs by any means necessary.

Second-order precepts closely resemble Popper’s “second-order traditions.”[2] However, second-order precepts include both second-order traditions and the rules by which agents operate in the absence of such traditions. The influence of second-order precepts on the pursuit of authentic understanding is evident in physical scientists’ responses to the confirmation of Einstein’s predictions regarding gravitational lensing. The early twentieth-century scientific community was attached to Newtonian conceptions of time, space, and gravity. Such attachment was well-justified; Newton’s model was supported by two centuries of astronomical observations, including the discovery of Neptune, the mass and position of which had been predicted using Newton’s equations. Yet, with few exceptions, the scientific community cautiously embraced and celebrated observations challenging Newton’s model. Newton’s legacy clearly included second-order precepts that encouraged not only precision but integrity.

The second-order precepts associated with Soviet agronomist Trofim Lysenko’s doctrines contrast sharply with those associated with Newtonian physics. Lysenko’s influence over Soviet agronomy was not the result of his doctrines’ successes. Instead, it was due to extraneous factors, including the role of his policies in quelling peasant unrest, his humble origins, and the consistency of his ideas with Marxist doctrine.

Lysenko’s career started in the late 1920s, when new Soviet collectivist reforms were instituted. One of those reforms mandated the confiscation of peasant farmers’ agricultural landholdings. In response, many peasants abandoned their farms, became indifferent to the quality of their work, and engaged in pilfering.

Lysenko drew favorable attention because he advocated agricultural methods that, while unscientific, had positive consequences. Lysenko’s methods encouraged disaffected peasants to return to farming, increased opportunities for year-round agricultural work, and enabled peasants to view themselves as having personal stakes in the success of the Soviet experiment.

Lysenko’s personal history also contributed to his rise. As the son of peasants, bereft of formal academic training or affiliation, Lysenko benefited from policies encouraging Communist Party leaders to promote members of the proletariat to positions of influence. Lysenko’s rise, which continued throughout General Secretary Joseph Stalin’s reign, culminated in his appointment as director of the Institute of Genetics of the Soviet Academy of Sciences.

Lysenko also gained influence because he subscribed to the evolutionary theory of French naturalist Jean-Baptiste Lamarck, who held that acquired characteristics of plants and animals could be inherited. The Lamarckian view, which German revolutionary socialists Karl Marx and Friedrich Engels endorsed, suggested agronomists could create new varieties of plants and animals within a few generations by exposing current varieties to environmental pressures. More ambitiously, it suggested that subjecting Soviet citizens to the demands and rewards of a socialist utopia would, within a few generations, create a population that instinctively embodied Soviet virtues and ideals. Lysenko’s domination of Soviet agriculture reached its peak in 1948 when he delivered a speech prepared with Stalin’s aid. That speech denounced prevailing conceptions of genetics and described orthodox geneticists as enemies of the people.

Lysenko imposed Draconian second-order precepts on the discussion of his ideas. Scientists who failed to renounce genetics were dismissed from their posts. Many were imprisoned; some were executed. These realities encouraged scientists to destroy evidence challenging Lysenko’s conceptions, present fraudulent data supporting those conceptions, and write public letters confessing their errors and praising the wisdom of the Party.

In short, the second-order precepts associated with Lysenko’s views rendered those views profoundly ambiguous. They encouraged scientists to restrict themselves to Lysenko-supportive thoughts and statements. They inspired selective promulgation — and even manufacture — of data supporting Lysenko’s ideas, and they suppressed data that might have challenged his ideas.

Lysenko’s followers would have had an answer to the question, “If Lysenko’s ideas were wrong, how would you know?” However, this answer was likely to have been, “The Communist Party will say so!” Although such an answer reveals sensitivity to a particular kind of error, it also reveals subservience to authority and indifference to data, logic, and scientific discipline. Such subservience and indifference render this answer evidence of ambiguity.

Over time, Lysenko’s policies contributed to famines that killed millions in the Soviet Union. When adopted by the People’s Republic of China, those policies played a role in the Great Chinese Famine (1959-1961), which killed between 15 million and 55 million people. Oppressive second-order precepts may alter agents’ reflections and discourse, but they do not change reality.

CBA views Lysenkoism as an exemplar of a particularly destructive species of belief. Such beliefs are “justified” by fraudulent facts or theories. A substantial majority of their predictions are false, and the measures they inspire are disproportionately detrimental. Yet they create passionate adherents. They do so by encouraging advocates to deny reality, defend demonstrable falsehoods against credible evidence, and silence critics. They encourage self-deception, defensiveness, dishonesty, bitterness, hatred, and violence.

Such beliefs put adherents on a slippery slope. Palpable lies require the support of other lies, and those lies require the support of still more lies. Discrediting, defaming, or silencing those who challenge such lies becomes a righteous duty. Further, the unjustifiable harshness of attempts to discredit, defame, or silence challengers encourages agents to rationalize their cruelty, justifying the ever-harsher treatment of their ideological opponents.

Advocates of competing ideologies often support their arguments with differing second-order precepts. Those precepts encourage agents to attend to differing facts and interpret those facts differently. Often, they employ different definitions of the same terms. Characteristically, arguments using those definitions differ in their precision. These phenomena are apparent in the debates between advocates of scientific evolution and creationism, as well as disputes over U.S. security policy.[3]

Classes of Precision/Ambiguity

Beliefs can be thought of as falling into four precision/ambiguity categories: precise beliefs, imprecise beliefs, rules of thumb, and catalytic narratives. Some beliefs fit these categories imperfectly; however, these categories are sufficiently distinct for use in security studies.

Precise Beliefs

Precise beliefs provide agents with explicit guidance about the nature of reality and how to achieve their goals. Such beliefs are characteristic of the physical sciences. A paradigmatic example of such precision is Newton’s Law of Universal Gravitation, which was mentioned above. An even more dramatic example of precision comes from the standard model of particle physics, which describes subatomic particles and forces. A recent experiment devoted to determining the electron magnetic moment, a measure of the strength of the electron’s magnetic field, found it to agree with the standard model’s prediction to within roughly one part in a trillion. Precise beliefs share six characteristics:

  • They offer clear, detailed descriptions of the phenomena they address.
  • They specify how to measure those phenomena.
  • They specify the relationships between those phenomena.
  • They describe the circumstances under which those relationships occur.
  • They incorporate second-order precepts that encourage agents to seek, generate, acknowledge, grapple with, promulgate, and discuss challenging arguments and data, and to thoroughly assess excuses for predictive failures.
  • They are likely to incorporate second-order precepts that encourage the use of increasingly stringent tests as more sensitive instruments or revealing procedures become available.

Precise beliefs may predict that employing well-defined procedures in well-defined circumstances will achieve well-defined outcomes. They may predict that those who make observations under well-defined circumstances will witness well-defined phenomena. Or they may provide data or concepts that enable agents to generate such predictions.

How to determine whether agents assume a belief is precise

If an agent’s statements and actions suggest they rely on a belief to (a) tell them what will happen, (b) tell them how to achieve their goals, or (c) provide a readily falsifiable, data-sensitive framework that helps them explain or predict events, they are treating the belief as if it were precise. In the language of CBA, their behavior suggests they assume the belief’s guidance to be precise.

Imprecise Beliefs

Beliefs CBA calls imprecise are somewhat more ambiguous than precise beliefs. Where precise beliefs make specific predictions, imprecise beliefs make directional predictions.

Most social science hypotheses are imprecise, as are many of the more useful tenets informing security studies. For example, the security studies thesis Democratic-Peace Theory makes two directional predictions: that democratic nations will be (a) more peaceful internally than authoritarian regimes and (b) less likely than authoritarian regimes to wage war against democracies.

The inexactitude of imprecise beliefs is evident in the ways adherents investigate, discuss, and promulgate them. Archetypal imprecise beliefs share eight attributes:

  • They make directional (rather than specific) predictions regarding relationships between phenomena.
  • They describe the general (rather than precise) nature of those phenomena.
  • They broadly (or only implicitly) describe the conditions under which relationships between phenomena are alleged to occur.
  • They lead agents to expect relationships between phenomena to hold true most — but not necessarily all — of the time.
  • Their second-order precepts encourage agents to balance advocacy with openness to challenge and refinement.
  • Their second-order precepts permit agents to accept speculative post hoc explanations for predictive failures and other challenging observations without investigating those explanations.
  • Their second-order precepts fail to encourage seeking, generating, acknowledging, or promulgating challenging facts and arguments.
  • Their second-order precepts inspire laissez-faire attitudes toward reexamining claims when more sensitive instruments or meticulous investigative procedures become available.

How to determine whether agents assume a belief is imprecise

If agents’ statements and actions suggest they expect a belief’s guidance to increase their odds of success — but not necessarily to make success likely — they are treating the belief as if it were imprecise. In the language of CBA, their behavior suggests they assume the belief’s guidance to be imprecise.

Rules of Thumb

Beliefs CBA categorizes as rules of thumb are more ambiguous than imprecise beliefs. Some beliefs everyday language refers to as “rules of thumb” also meet CBA’s criteria for inclusion in that category. However, many beliefs English speakers casually describe as “rules of thumb” are more accurately characterized as imprecise beliefs or catalytic narratives.

Some rules of thumb make rough predictions or describe approaches to problems that promise to increase agents’ odds of success. However, when made by rules of thumb, such promises are illusory. Rules of thumb fail to increase agents’ odds of success because (a) they provide only colloquial descriptions of the phenomena they address and (b) they are vague or silent about the conditions under which relationships between those phenomena occur.

Those characteristics permit rules of thumb to contradict one another. Consider the paired rules of thumb below, which offer conflicting advice and are silent as to the conditions in which the advice is relevant:

  • Look before you leap./He who hesitates is lost.
  • Nothing ventured, nothing gained./Better safe than sorry.
  • Great minds think alike./Fools seldom differ.
  • Many hands make light work./Too many cooks spoil the broth.
  • What will be will be./Life is what you make it.
  • The more, the merrier./Two’s company; three’s a crowd.

Between their colloquial descriptions of the phenomena they address and their silence about the conditions under which they hold, rules of thumb offer little more than elusive hints about the nature of reality. Consistent with a broad range of observations, they are unaccountable for the expectations they inspire. As such, failures of rules of thumb have little impact on agents’ faith in their utility. Often, those who unsuccessfully attempt to apply a rule of thumb are considered responsible for misunderstanding the rule or the conditions in which it applies. However, unlike more ambiguous beliefs (i.e., catalytic narratives), rules of thumb influence only a circumscribed range of agents’ views, values, and perspectives. Archetypal rules of thumb share six characteristics:

  • Their guidance is vague because (a) they provide only colloquial descriptions of the phenomena they deal with, (b) their claims regarding relationships between those phenomena are unclear, and (c) they are vague or silent about the conditions under which those claims hold.
  • Their ambiguity allows them to account, after the fact, for a wide range of observations.
  • They have little effect on agents’ experiences or understanding of the issues they address.
  • Failures of the predictions and strategies they inspire have little effect on agents’ confidence.
  • Their guidance cannot be expected to reliably increase the agent’s odds of success.
  • They encourage agents to consider issues that may matter.

How to determine whether agents assume a belief is a rule of thumb

Suppose an agent’s words and actions indicate they expect a belief to provide nothing more than encouragement to think about issues that may matter. In that case, they treat the belief as a rule of thumb. In the language of CBA, their behavior suggests they assume the belief to be a rule of thumb. However, it should be noted that agents rarely view their beliefs this way.

Catalytic Narratives

The most ambiguous beliefs are catalytic narratives. Catalytic narratives are beliefs that make no falsifiable claims but appear — to those who embrace them — to be profound truths. Catalytic narratives come in many forms: they may be packaged as descriptive statements (such as “Members of religion X are enemies of God”), compelling images (even if Photoshopped or generated by artificial intelligence), captivating stories (novels, sacred texts, movies, plays, editorials, documentaries, or the literature of academic disciplines), evocative words or phrases (such as “racist,” “sexist,” “bigot,” “fake news” or “social justice”), and defamatory descriptions ending in “phobe.” They may also be descriptions that make no explicit predictions and are open to widely varying interpretations. Examples include: “Religion Y is a religion of peace,” and “It takes a loathsome person to vote for candidate Z.”

Catalytic narratives provide lenses through which agents view reality, creating “true believers.” Like catalysts, they transform what they encounter while remaining unchanged. Although catalytic narratives bias experience and judgment, they lead adherents to believe their narrative-influenced perceptions and judgments embody unique and unquestionable truths. All too often, catalytic narratives convince those under their sway they are morally and intellectually superior to those who fail to believe as they do. With rare exceptions, political, religious, and other ideologies consist of either a cardinal catalytic narrative and its implications or a web of interwoven, mutually supportive catalytic narratives.

Catalytic narratives are the most ambiguous of beliefs. Their power to explain events after they occur is limited primarily — if not exclusively — by the vagueness of their language and their advocates’ passion, imagination, and rhetorical skill. Their predictive failures are easily discounted. The ambiguity of catalytic narratives allows believers to interpret them in ways they find satisfying. It also makes it easy for those narratives to explain a wide range of phenomena, encouraging believers to think they are “onto something” and inspiring passion and commitment. Archetypal catalytic narratives share six characteristics:

  • They satisfy agents’ needs to see themselves as knowledgeable, wise, and powerful.
  • They make no falsifiable predictions, evading falsifiability either by making no predictions at all or by encouraging agents to glibly “explain away” predictive failures. Only rarely do the adherents of catalytic narratives have an answer to the question, “How would you know if you were wrong?”
  • They account for a wide range of events after they occur.
  • Their second-order precepts fail to encourage (or actively discourage) seeking, generating, or promulgating challenging facts or arguments.
  • Their second-order precepts strongly discourage serious consideration of challenging arguments, logic, and events.
  • Their second-order precepts strongly discourage critical examination of claimed predictive successes.

How to determine whether agents assume a belief is a catalytic narrative

Suppose an agent’s words and actions suggest they view a belief as transforming them in ways that lead them to see it as true while failing to provide them with authentic information. In that case, they treat the belief as if it were a catalytic narrative. In the language of CBA, their behavior suggests they assume the belief to be a catalytic narrative. However, agents rarely, if ever, view their beliefs in this way. Instead, they look to catalytic narratives to guide their most consequential decisions and actions, unaware of the intoxicating spells those narratives weave, the dubious guidance they provide, and the unjustified certainty they inspire.
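Taken together, the four diagnostic tests above amount to a simple decision procedure. The Python sketch below is purely illustrative: the categories come from this chapter, but the function and parameter names, and the reduction of each diagnostic to a single boolean cue, are hypothetical simplifications rather than part of CBA.

from enum import Enum, auto

class PrecisionClass(Enum):
    PRECISE = auto()              # specific, falsifiable guidance
    IMPRECISE = auto()            # directional, odds-improving guidance
    RULE_OF_THUMB = auto()        # prompts to consider issues that may matter
    CATALYTIC_NARRATIVE = auto()  # unfalsifiable, perception-transforming

def assumed_precision(expects_specific_outcomes: bool,
                      expects_improved_odds: bool,
                      expects_prompts_to_reflect: bool) -> PrecisionClass:
    """Map an agent's observable expectations onto the precision/ambiguity
    class they are implicitly treating a belief as having."""
    if expects_specific_outcomes:
        return PrecisionClass.PRECISE
    if expects_improved_odds:
        return PrecisionClass.IMPRECISE
    if expects_prompts_to_reflect:
        return PrecisionClass.RULE_OF_THUMB
    # Per the chapter, agents rarely recognize this case in their own beliefs.
    return PrecisionClass.CATALYTIC_NARRATIVE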

Effects of Desire for Reassurance on Precision/Ambiguity

Is it possible for a reassuring belief to qualify as precise, imprecise, or a rule of thumb? Appearances to the contrary notwithstanding, the answer to this question is “No.” If a belief is reassuring, agents who embrace it are likely to:

  • Deny distressing realities that challenge the belief in question.
  • View reassuring falsehoods as accurate.
  • “Spin” vague or ambiguous information to make it appear supportive.
  • Portray challenging information and arguments as inaccurate or irrelevant.
  • Find specious reasons to distrust whatever might challenge that belief and equally specious reasons to trust supportive arguments and information.
  • Selectively remember events consistent with that belief while selectively forgetting events that raise questions about it.
  • Uncritically accept excuses for that belief’s explanatory and predictive failures.

The above approach to evaluating beliefs conflicts dramatically with that which qualifies beliefs as precise. If a belief is to qualify as “precise,” its advocates must dispassionately seek and grapple with challenging arguments and data. They must be willing to evaluate it using the most exacting technological and conceptual tools available, and they must be open to discussing their doubts and concerns.

This approach to evaluation also conflicts, albeit less dramatically, with that which characterizes imprecise beliefs. If a belief is “imprecise,” its advocates cannot be closed to challenging arguments and data or to employing sophisticated conceptual and technical tools to reexamine its claims. Lastly, they cannot be unwilling to discuss the possibility their belief suffers from flaws and limitations.

Reassuring beliefs also differ, albeit subtly, from rules of thumb. Where reassuring beliefs powerfully distort their advocates’ perception and judgment, the bias that rules of thumb engender is comparatively mild. And where the guidance of reassuring beliefs is likely to be ineffective, rules of thumb are likely to inspire reflection and, indirectly, effective action.

However, reassuring beliefs and catalytic narratives are complementary. Reassuring beliefs are shaped by advocates’ desires to see themselves as wise, knowledgeable, and powerful; the ambiguity of catalytic narratives allows them to satisfy those desires.

Like reassuring beliefs, catalytic narratives help advocates see themselves as possessing profound truths. Their ambiguity enables advocates to explain everything after it occurs, make horoscope-like predictions, and rationalize predictive failures. Catalytic narratives also support advocates’ longing for omniscience by diverting their attention from facts and arguments that might undermine their confidence.

Although all reassuring beliefs are catalytic narratives, not all catalytic narratives are reassuring beliefs. Unlike reassuring beliefs, informative catalytic narratives may be motivated by the desire to authentically understand, predict, and control reality.

Why Attention to Precision/Ambiguity Matters

Agents who fail to attend to the precision/ambiguity of their beliefs are vulnerable to relying on those beliefs for guidance they cannot provide. Without explicit attention to this issue, agents are likely to view catalytic narratives, rules of thumb, and imprecise beliefs as powerful aids to understanding reality, predicting the future, and achieving their goals.

However, as seen above, catalytic narratives provide little information about reality. Their predictions are so vague they are meaningless. With few exceptions, their suggested strategies and tactics are ineffective. Worst of all, catalytic narratives blind adherents to their flaws and limitations, leading agents who believe them to experience them as profound truths. The history of security studies is littered with such beliefs (see the example in Chapter 7).

Unlike catalytic narratives, rules of thumb have negligible effects on agents’ views of reality. Rules of thumb may create illusions of understanding. However, unlike the transformative, totalizing illusions that catalytic narratives create, the illusions that rules of thumb engender are pedestrian and circumscribed. In addition, the second-order precepts associated with rules of thumb lack the blinding power of the second-order precepts associated with catalytic narratives. Moreover, while the second-order precepts of both catalytic narratives and rules of thumb protect their principal claims from being judged wrong, the second-order precepts of rules of thumb, unlike the second-order precepts of catalytic narratives, allow the beliefs they accompany to be deemed inapplicable. Further, unlike catalytic narratives, which encourage agents to view the issues they highlight as uniquely important, rules of thumb encourage agents to reflect on issues that matter to them. Nonetheless, rules of thumb are of little value in understanding reality, predicting the future, or producing well-defined outcomes. In the absence of meticulous attention to the precision of rules of thumb, those who embrace them are likely to overestimate the accuracy with which they describe reality and the value of the guidance they offer.

Agents who are insensitive to the limitations of imprecise beliefs are also likely to view their guidance as more powerful and dependable than it is. Naïve believers in imprecise beliefs, like those who naïvely embrace rules of thumb and catalytic narratives, are likely to assume the guidance of their imprecise beliefs is as functional as the guidance of precise beliefs. In some cases, they may even come to view false imprecise beliefs as true.

Less severe errors are also possible. Those who naïvely place their trust in rules of thumb or catalytic narratives may assume the guidance those beliefs offer resembles the guidance of imprecise beliefs. Advocates of catalytic narratives may also assume their guidance resembles that of rules of thumb.

Attention to the assumed and actual precision of beliefs guiding agents’ thoughts and actions can improve analysts’ ability to:

  • Understand and anticipate the confidence with which agents embrace and implement policies.
  • Estimate the odds that agents’ belief-inspired strategies will have unintended consequences or fail to produce the expected results.
  • Understand and anticipate agents’ responses to failures and surprises.

NOTES

  1. Karl Popper, Conjectures and Refutations: The Growth of Scientific Knowledge (London: Routledge and Kegan Paul, 1963).

  2. Ibid.

  3. Those who seek a further exploration of ambiguity and its implications may wish to explore http://barneysplace.net/site/the-trouble-with-truth/ (accessed December 19, 2020).
