Reasoning structure is poorly explicated
The reasoning structure is presented in a way that is very difficult to follow and evaluate. Such arguments often fail to individuate and connect their points, or are so obscure that the reader cannot determine how the reasons presented are intended to establish the conclusion. This flaw applies where it is possible to determine what the intended reasons are, but the way they are meant to support the conclusion is not explicated sufficiently for the reader to evaluate the reasoning.
- “We conclude that the new policy won’t reduce the spread of fake news. This is because, firstly, fake news is an ephemeral phenomenon; and, secondly, there is an expanding zeitgeist of distrust that defies operationalisation.”
No clearly identifiable primary judgement
The report doesn’t clearly identify the primary judgement. Without knowing what the primary judgement is, we cannot evaluate the reasoning offered in its support.
- “Our argument is as follows. President Smith’s personal dealings with the PharmaSell company are suspicious. His business dealings in general are worth investigating.” In this example it is not clear which claim is the primary judgement and which claim is supposed to be a reason to support it; it could work either way.
Information without argumentation (“laundry list”)
The text provides information without indicating how it supports the premises or conclusion, or lacks a discernible argument that makes use of the information.
- “Troops have been moved from the northern to the eastern border over the past five days, and militia groups have been sighted approximately 50km west of the eastern border. Top officials have expressed concern at the situation, and the country’s top military personnel are on high alert.”
Ambiguous or equivocating argumentation
The nature of the argument being made can be interpreted in more than one plausible way, such that it’s not possible to confidently analyse its structure; or an argument can be interpreted to be arguing for a weaker or stronger conclusion, such that it is not clear what level it is to be held to.
- “Given that recent indicators suggest a strong tendency for civil conflict, the country may see civil conflict erupt in the coming 3-month period.” It is not clear here whether the conclusion is that this is a likely outcome, or that it is unknown; and the strength of the argument depends on this.
Vague or ambiguous primary judgement
The primary judgement is expressed in excessively vague or ambiguous terms. When a conclusion is expressed vaguely, it is difficult or impossible to determine how well the reasons support it; indeed, the claim might be trivially true or unfalsifiable.
- “The Trinitonian economic system cannot last forever.”
- “At times there is popular discontent in Trinitonia.”
Primary judgement has no discernible support
A claim is asserted as a primary judgement but has no discernible supporting reasoning. If a report puts forward a primary judgement but it’s not clear what the reasoning for it is, then it is impossible to tell if the judgement has been established. The author often does not realise that their reason can’t be comprehended or that it is irrelevant to the judgement.
Note that this flaw is intended for instances where the author has not provided relevant support for the conclusion; not instances where there is support for the conclusion, but that support ultimately fails.
- “We assess that the Vanutian elections will be free of foreign interference because in the current international environment there is widespread concern for preserving favourable conditions for general improvements.”
Overly qualified conclusion
The conclusion that is being argued for is so heavily qualified that it does not markedly go beyond the given information.
- “The unusual travel destinations and frequency of Jonathan Frasier have raised a red flag in our country’s peak intelligence body. Therefore, Frasier may or may not require further investigation or surveillance.”
Sources unjustifiably assumed to be independent
Having multiple independent sources helps establish a claim. However, if the sources are not really independent from one another, mistaking them to be so results in overconfident claims. Sources should not be assumed to be independent without good reason.
- A report claims that Fred is corrupt because this has been corroborated by two sources, Harry and Sally. However, we do not know whether Harry and Sally got the information from one another, or both from the same other source. As a result, we cannot be sure that the two testimonies are in fact independent.
Unwarranted credibility attributed to a source
A source shouldn’t be accepted as credible without explicit consideration of its track record and trustworthiness (unless its trustworthiness is obvious to the readership and goes without saying). If an untrustworthy source is assumed to be credible, this will make the case for the primary finding overconfident. Sometimes sources are honest but unreliable because they are mistaken, misled, or even influenced by unconscious bias that skews their interpretation of what they are attesting.
Conversely, attributing too little credibility to a source is also a common error that can result in important information being overlooked.
- “I just discovered something terrible – Dave is part of the mafia. A stranger told me.”
The source lacks the relevant information
Sometimes a source may be credible and reliable, but not be in a position to know the claims that they are being used to support. Uninformed sources can lead to a misleading view. Sometimes an expert or informant may be asked a question that is outside their area of expertise, or a reference book may not be as up to date as required. At other times a source may only know one side of the story.
- “Frank left the terrorist organisation 2 years ago, but claims that he knows that they are planning an attack.” Perhaps Frank no longer has accurate information about the terrorist organisation.
Prior event assumed to be a cause
Also called the “post hoc” fallacy, this flaw is in assuming that a salient prior event is a cause. Just because one event follows another doesn’t mean it was caused by it. The Latin name is “post hoc ergo propter hoc,” which means “after this, therefore because of this.”
- "NATO deployed intermediate range missiles in Europe in the 1980s. In 1991, the Soviet Union collapsed. The missile deployment therefore caused the collapse."
- “The government introduced a stimulus package and the country didn’t go into recession, so the stimulus package worked.” What else happened before, during or after the stimulus package that might have prevented a recession? We have to exclude other possible factors before we can conclude that the stimulus package was the cause of the country avoiding recession.
Oversimplified causal relationships
Many events are brought about by multiple causal factors. Ignoring the full richness of a situation can create a misleading picture. This flaw generally occurs when the reasoner identifies one or two genuine causal factors, and fixates on those at the expense of others. This error is often called the fallacy of the single cause.
- “A weak economy often leads to a change of government at an election. Country X had a weak economy, so that must have been why the government lost the election.”
Misrepresented or incomplete causal hypothesis
Often we try to support or refute a causal hypothesis by working out what its consequences would be and comparing these to our observations (when we test a scientific hypothesis, we do this in a rigorous and systematic way). However, to do this type of reasoning well we need a clear and precise understanding of the hypothesis. If we misrepresent or misinterpret it in an important way, we cannot fairly test it.
- The theory of evolution cannot possibly be true, because if what the scientists say is correct then we should see chimpanzees turn into humans and I have never seen that happen at the zoo.
Ignoring plausible alternative explanations or hypotheses
Whenever reasoning about what best explains some available information, we need to consider the range of plausible alternatives. This flaw occurs when we accept an explanation because it explains the available information, without considering all the other plausible explanations for that information.
(This is a widespread error that covers many of the other, more specific errors in this list; however, there are cases where the more specific descriptions don’t apply and this will be the most useful way to describe the flaw.)
- “The symptom of the illness could be a sign of disease X, therefore the patient displaying this symptom must have disease X.”
- “JFK was assassinated on the orders of vice-president Johnson because Johnson wanted to become president.” This is a theory that may explain what happened. However, alternative explanations need to be considered in order to establish whether this is the best explanation of what occurred.
Failing to consider counterfactuals
A counterfactual is a statement about something that didn't happen, but could have. It is often important to consider counterfactuals and how plausible they were before the fact, because failing to do so often means that we think causal relationships are more necessary or certain than they in fact are. Claims about what is the case are sometimes used to make inferences that don't hold up when you consider counterfactuals – that is, when you consider what might have happened differently.
- “Smith would have won the election if he had promised to spend more money on hospitals, because people were disappointed that Smith wasn't planning to do more to improve health care.” Perhaps if Smith had done this, he would have had to cut funding budgeted for something else even more popular, and may therefore still have lost the election.
The choice between explanations is not justified
When arguing that a particular hypothesis best explains the available information, the argument should be clear about what makes one explanation better than the others in that context.
This might involve claiming, for example, that the explanation is best because it:
- explains more of the evidence
- is a more complete and consistent explanation of the evidence
- is more plausible than other possible explanations based on background knowledge
- is the simplest of a set of equally plausible explanations.
What counts as the best explanation may involve a combination of these or other factors.
If the reasoning doesn't make it clear why one explanation is better than another, then it is possible that it isn't really better and that the choice is based on prejudice.
- “The two suspects are 43-year-old Harry and 47-year-old Frank. Both had the means and motive to murder the victim. However, only Harry’s fingerprints match those found on the murder weapon. On the other hand, we know that Frank was caught shoplifting a number of times during his 20s. Given this record, Frank is far more likely to be the perpetrator than Harry.” This example fails to establish why Frank’s shoplifting record should be weighed more heavily than the fingerprint evidence pointing to Harry.
A correlation is assumed to be a causal relationship
This flaw occurs when a similarity between the patterns in two things is taken as sufficient reason to conclude that one caused the other; in other words, a correlation between the two things is assumed to be a causal relationship. However, if A and B correlate, this doesn’t mean that A causes B. Maybe B causes A, or something else causes both of them, or perhaps the correlation is a coincidence.
- "Youth suicide cases and social media use by young people have both sharply increased over the past decade. Therefore, social media use helps drive young people to suicide.” The argument assumes that the presence of a correlation means that there must be a causal relationship. However, the correlation could be coincidental, or something else may explain it just as well.
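The confounder possibility can be illustrated with a toy simulation (all quantities here are invented for illustration): a hidden factor Z drives both A and B, producing a strong correlation even though neither causes the other.

```python
import random

# Toy simulation: a hidden factor Z drives both A and B, so A and B
# correlate even though neither causes the other. All values invented.
random.seed(1)
n = 1000
z = [random.gauss(0, 1) for _ in range(n)]          # hidden common cause
a = [zi + random.gauss(0, 0.5) for zi in z]         # A depends only on Z
b = [zi + random.gauss(0, 0.5) for zi in z]         # B depends only on Z

def corr(x, y):
    """Pearson correlation coefficient."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    vx = sum((xi - mx) ** 2 for xi in x)
    vy = sum((yi - my) ** 2 for yi in y)
    return cov / (vx * vy) ** 0.5

print(corr(a, b))  # strongly positive, despite no causal link between A and B
```

Under these assumptions the expected correlation is about 0.8, so observing the correlation alone tells us nothing about whether A causes B.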
Random variation is treated as significant
It is very easy to see meaningful patterns in what is in fact random variation. We psychologically expect randomness to be roughly “regular”, but in reality it throws up patterns more often than we think. Therefore, we must be careful that when we think we have found a pattern, we are not being fooled by randomness.
- During World War II, many Londoners saw patterns in the impact sites of German V1 rockets, and interpreted these as indicating which parts of London were safer than others. In fact, impact sites were distributed randomly.
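A small simulation (hypothetical, not real V1 data) shows how uniformly random impacts still produce apparent "clusters" and "safe zones":

```python
import random

# Scatter 200 points uniformly at random over a 10x10 grid of cells
# (2 per cell on average) and count how many land in each cell.
random.seed(0)
cells = [0] * 100
for _ in range(200):
    cells[random.randrange(100)] += 1

# Despite perfect uniformity of the process, some cells are hit many
# times and others not at all - which looks like a meaningful pattern.
print(max(cells), min(cells))
```

The spread between the most-hit and least-hit cells is exactly what randomness produces; reading it as "targeted" versus "safe" areas is being fooled by randomness.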
Misinterpreting the meaning of data
This flaw occurs when inferences made from data are incorrect because the meaning of the data is misunderstood. This can occur whether or not the collection and analysis of data are done correctly.
- “Deaths due to prostate cancer have increased by 10% over the last year, while deaths due to lung cancer were down by 5%. Therefore, more people died from prostate cancer than lung cancer over the last 12 months.” Without knowing the absolute numbers, this is not an accurate interpretation of what the data means. It may also be the case that more people died from lung cancer than prostate cancer given these percentages.
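With invented absolute figures, it is easy to see why the percentage changes alone don't settle which disease killed more people:

```python
# Hypothetical, invented figures to show why percentage changes alone
# can't tell us which cause of death was more common.
prostate_last_year = 3_000
lung_last_year = 9_000

prostate_now = prostate_last_year * 1.10   # up 10%, to about 3,300
lung_now = lung_last_year * 0.95           # down 5%, to about 8,550

print(prostate_now, lung_now)
# Lung cancer deaths still far exceed prostate cancer deaths,
# even though the two figures moved in opposite directions.
assert lung_now > prostate_now
```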
Insufficient sample size
A small sample is likely to be statistically different in important ways from the population or full set from which it is drawn, and therefore may not be indicative of it. This may be the case even if the sample is selected in a fair way.
- Somebody sees the British team at the netball World Cup tournament, and infers that British people are very tall.
Variant – Anecdotal evidence
- A farmer hires a truck company to move livestock to the sale yards. On the way, the truck breaks down and the livestock miss the sale. The farmer infers that the company is unreliable.
Generalising from a biased sample
Selecting a sample in such a way that it does not accurately represent the broader group in question (for example, the general population) to make a generalisation that is biased as a result.
(See also “Systematic bias in evidence selection” under “Treatment of Evidence” for cases that don’t deal with samples, probabilities or statistics.)
- A talkback radio station surveyed 2000 people and discovered that most people in country X think that skateboarding is one of the biggest threats to civilisation. However, the views of the people who listen to this radio show may not be representative of country X’s population.
- An inquiry into the factors leading to promotion in business examined hundreds of C-level executives. However, these executives are a biased sample of people in the business world. To properly understand what leads to promotion, you would need to study those who get promoted and those who do not get promoted.
- Data on bombers returning from missions in World War II indicated more damage in certain areas than others. It was inferred that the aircraft were more likely to be hit in those areas. In fact, bombers were just as likely to be hit in other areas (e.g., in their engines) but if they were, they were destroyed before returning, and so that damage was missing from the evidence set.
Base rate information is ignored or misused
The rate at which something would happen in the usual course of events is not taken into account when assessing how likely something is. (The usual or normal background rate at which something occurs is often called the “base rate”.)
- “That man is taking photos of the Sydney Harbour Bridge. He’s probably casing it for a terror attack.” This ignores the fact that the vast majority of people taking photos of famous landmarks such as the Sydney Harbour Bridge are not planning a terror attack, which makes the probability of a random person doing so extremely low.
- "A bank has a series of 6 transactions that are 10 times higher than the average in the course of one month. Therefore, fraud is likely to be occurring." For all we know, this could be the standard number of large transactions in a month.
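A rough Bayes'-rule sketch of the landmark-photo example, with invented numbers, shows how decisively the base rate dominates:

```python
# Invented illustrative numbers: suppose 1 in 1,000,000 people near the
# landmark is actually casing it, a caser always takes photos, and an
# ordinary visitor takes photos 10% of the time.
p_caser = 1 / 1_000_000
p_photo_given_caser = 1.0
p_photo_given_innocent = 0.1

# Bayes' rule: P(caser | photo) = P(photo | caser) P(caser) / P(photo)
p_photo = (p_photo_given_caser * p_caser
           + p_photo_given_innocent * (1 - p_caser))
p_caser_given_photo = p_photo_given_caser * p_caser / p_photo

print(p_caser_given_photo)  # roughly 1 in 100,000
```

Even with a generous likelihood ratio, the tiny base rate keeps the posterior probability extremely low; ignoring it is what makes the original inference overconfident.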
Data fishing or cherry-picking
This occurs when you search through the data to cherry-pick parts of it that “support” a claim not supported by the complete data. It is easy to find patterns in the data if we sieve through it trying to find them.
- “We found that last year between the months of March and April the unemployment rate went down, showing that the government is good for jobs.” Picking such a short amount of time may be misleading as it may not represent the broader trend.
Overlooking or downplaying plausible causal pathways when making a prediction
When making a prediction, people often outline the possible ways in which a particular outcome may come about, assessing how likely or unlikely each pathway is to occur in order to work out how plausible the outcome is. However, this method fails when a plausible path is ignored or dismissed without adequate consideration.
- “A coup against the king is unlikely because the political establishment is unlikely to orchestrate a coup.” In this example, the political establishment carrying out the coup is just one way this outcome could come about. Other possibilities are a military coup or a popular revolt. Without considering these, the prediction is flawed.
Unexplained weighing of predictive indicators or causal drivers
This flaw occurs when the author is arguing that something will occur because the weight of predictive indicators or causal factors points toward it happening, but they don’t explain how one side outweighs or overrides the other. Even if an argument contains a complete list of causal factors or drivers, the argument can still fail if it does not, for example, adequately explain their comparative weight, or how they relate to one another, or why we should believe a particular factor mitigates or undercuts another.
- “Hostilities between country X and Y will continue to increase because they are competing over limited resources, an arms race has both countries nervous and suspicious of the other, and nationalistic sentiment is on the rise in both countries. However, a mitigating circumstance is that the international community is against the war. Therefore, we assess that a war is very unlikely.” The conclusion may or may not be true, but the argument doesn’t explain why it should be reached from the information provided, and it could go either way.
Predictive indicators ignored
Ignoring important predictive indicators and over-emphasising others. There are generally lots of indicators that could be used when making a prediction, and not all of them may point in the same direction. Thus, ignoring relevant indicators and taking a simplistic account of things, rather than considering the full richness of causal factors and indicators, may lead to erroneous findings.
- “The prime minister is very unpopular; therefore, we predict that he will lose the election.” This argument takes into account only one predictive indicator, and fails to consider others, such as the popularity of the government, the opposition leader, the economy and so on.
Possible changes to a trend or pattern ignored
Extrapolating a trend, but ignoring possible changes. Often, we base predictions on a series of similar cases or examples and try to judge in which direction the "trend" is heading (the set of these similar examples is called a reference class). For this type of inference to work, the examples must be representative of (or sufficiently similar to) the circumstances we’re making a prediction about. But if the examples and the target case are very different, the examples may give a misleading view. A good argument will explain why the set of examples can be considered representative of the case in question.
- “There have been five protests against the government this year, all of them unsuccessful at getting the desired concessions. All of them have been peaceful. Therefore, we expect that the next protest planned will also be peaceful.” The problem here is that the first five are taken as indicative of what the sixth will be like without considering possible changes to this pattern. For example, the failure of five peaceful protests may lead protesters to resort to less peaceful means. This possibility needs to be considered before the prediction is made.
(See also “Systematic bias in evidence selection” under “Treatment of Evidence” for reasoning in which the examples used as evidence have been selected poorly from a larger group of examples.)
Undervaluing contrary evidence
Relevant evidence against the primary judgement is not considered or given sufficient weight.
- DNA that puts person X at the scene of a crime is taken to prove that she was there. DNA that puts her elsewhere at the same time is dismissed as unreliable without justification.
Systematic bias in evidence selection
A body of evidence has been selected or obtained in a way that creates a systematic bias, leading to misleading conclusions. Often this occurs by selecting cases that possess the property in question in a very visible or interesting way (often called selecting on the outcome variable).
- “To find out if ice cream reduces crime rates we looked at the 7 suburbs with the lowest crime rate. All of them had ice cream shops, so clearly ice cream is extremely effective at deterring criminal activity.”
(See also “Generalising from a biased sample” in “Probabilistic or Statistical Reasoning” for instances in which this flaw occurs in statistical reasoning).
Relying on a weak analogy
A judgement is made about one situation based on its similarity to another, but the two situations are different in ways that are relevant to the judgement.
- “Nuclear weapons are an essential part of the UK’s security posture. Australia is similar to the UK, so Australia needs nuclear weapons.” This is a weak analogy. While Australia and the UK are certainly similar in some ways, they are also very different in other ways highly relevant to whether Australia should have nuclear weapons.
Any link between entities is taken as significant
Spatio-temporal links between entities are used to infer an affiliation without due consideration of context. In essence, any activity, event or person seen at a location or time of interest is interpreted as being connected to that location or time.
- “The company operates out of the same building as a bank, and staff from both are frequently seen in the same off-site location. As such, we can infer that the company is part of the finance sector.” It may be the case that staff from both workplaces frequent the same cafe, for example.
Evidence is over-interpreted to fit an existing theory
Evidence is construed to fit a theory, either consciously or subconsciously. All evidence has to be interpreted, but forcing it to fit a theory can lead to error. Sometimes an initial interpretation can lead to snowballing observations, where every subsequent observation is interpreted in a way that is consistent with the prior interpretation, failing to consider alternative interpretations that may be more plausible given new information.
- “Smith visited the bank today and withdrew some money. He looked agitated and was looking at his watch. In other words, he was acting exactly like someone who was surveilling the bank and planning to rob it. This is the second time in a week that this has occurred. Therefore, Smith is clearly planning to rob the bank.” The evidence is interpreted to fit the theory that Smith is planning to rob a bank, even though Smith could be withdrawing money and looking at his watch for other reasons.
Overvaluing supporting evidence
Evidence in support of a claim is taken to be more compelling than it is, or is taken to be proof of a claim when it does not prove it.
- DNA evidence that places person X at a crime scene with moderate likelihood is taken to establish the hypothesis that he must have been there.
Ignoring relevant evidence
Relevant and available evidence that could help establish or disprove the conclusion is not given sufficient consideration.
- Evidence that suggests that a policy may be ineffective in comparable cases is not taken into account in making the decision to adopt a particular policy.
- When interviewing a suspect, the officer noted that the suspect stated that he is “sick of the government” and is “ready to do something radical”. However, the officer failed to note that the suspect also stated that he was in the process of founding a non-violent organisation dedicated to peaceful disobedience. Focusing on the former claims allows the officer to interpret the suspect as potentially violent, whereas taking his organised peaceful civil disobedience into account may point elsewhere.
Unwarranted elimination of an option
An argument by elimination proceeds by (a) listing an exhaustive set of options, and (b) proving that all except one should be rejected as false, impossible, or highly implausible. This strategy fails if one or more of the rejected options is wrongly rejected, or rejected without adequate justification.
- “There are two suspects for a crime. Suspect 1’s ex-husband has given testimony that he heard her discussing plans for committing the crime 6 months ago. However, we know that the ex-husband is resentful about the divorce. Given the witness’s obvious emotional bias, we have discarded his testimony. The culprit must therefore be the other suspect, Suspect 2.” While the witness’s testimony should not be taken as compelling on its own, the claim should not be assumed to be false without further investigation, and the suspect’s possible culpability should not be rejected without further information.
Ignoring the other side of a debate
Alternative viewpoints or perspectives on a problem are not considered at all, or not given sufficient attention or weight. (Note that an alternative viewpoint is more general than an alternative hypothesis or explanation.)
- In the lead-up to the 2016 election, the viewpoint that opinion polling may seriously misrepresent voter intentions was not taken seriously enough. This flaw contributed to an incorrect assessment of Donald Trump’s chances of success.
False dilemma
Also sometimes called a “false dichotomy,” this flaw occurs when you portray a situation as having only two options (or two possible outcomes) when in fact there are other options.
- “Sometimes free speech means that people can incite violence. We need to decide as a society if we want free speech or security.” This presents the situation as making the realisation of only one of these values (free speech or security) possible; but there are a number of other possible options, and to a certain extent both could be realised concurrently.
Failing to see the implications of information when combined
Evidence considered in isolation rather than in combination can often be misleading, and sometimes we cannot know things about the different elements of information without considering their combination. It is often claimed that in the lead-up to 9/11, multiple bits of evidence were considered individually but not jointly. There was a failure to “connect the dots.”
- The identity of a playing card is being determined. One witness claims it was not a spade; another that it was not a club; a third claims it was not a heart; and a fourth that it was not a diamond. Harry says we have no way of knowing what the card is. In fact, though little can be inferred from any one of the witnesses, from all four you can infer that it was a joker.
- “You are investigating who is responsible for a cyber attack which occurred during an unspecified weekday last week. There are 3 staff with access to the computers who could have perpetrated the attack, and you know that the attack required two people in order to be carried out. Leonard only works on Fridays and Sundays; Carla works full time, Monday to Friday; and Simone works Tuesday to Thursday. You conclude that there is insufficient information to determine which of the three staff members it is.” When drawing out the implications of all the information available about the possible culprits, it becomes clear that though it’s not possible to know the two staff who perpetrated the attack, one of them must be Carla; there are no days on which Leonard and Simone work together, so they cannot be the pair.
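The staffing deduction in the cyber-attack example can be checked mechanically by enumerating the pairs (schedules exactly as given in the example; only weekdays matter, so Leonard's Sundays are excluded):

```python
from itertools import combinations

# Encode the schedules from the example and check, for every pair of
# staff, whether there is a weekday on which both work.
schedules = {
    "Leonard": {"Friday"},  # also works Sundays, but the attack was on a weekday
    "Carla": {"Monday", "Tuesday", "Wednesday", "Thursday", "Friday"},
    "Simone": {"Tuesday", "Wednesday", "Thursday"},
}

feasible_pairs = [
    {a, b}
    for a, b in combinations(schedules, 2)
    if schedules[a] & schedules[b]  # at least one shared workday
]

print(feasible_pairs)  # Carla appears in every feasible pair
```

Leonard and Simone share no workday, so the only feasible pairs both include Carla: the combined information pins down one culprit even though no single schedule does.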
Failing to adequately quantify factors that need to be compared
Many arguments rely on comparing things. For example, we weigh up the costs and benefits of an action to determine if it is worthwhile; or we compare the contribution made by different causes to an outcome to determine which are the main causes. To do this successfully, the different factors being compared need to be adequately quantified. Failure to do so means that too much of the judgement is implicit and may be intuitive, which is often erroneous.
- “There are several benefits to investing in this business, with only one downside – there is a 35% chance we could lose all our money. Therefore, we should invest.” This single downside could outweigh the combined benefits.
- “The pandemic didn’t cause the reduction of terrorism. Increased counter-terrorism measures were starting to reduce terrorism before the pandemic.” In this example, the reader doesn’t know to what extent pre-existing counter-terrorism measures would have reduced terrorism without the pandemic, and how this compares to the actual reduction. As such, it’s not possible to assess whether this claim is true.
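The investment example above can be made concrete with a rough expected-value sketch (the stake and benefit figures are invented; only the 35% loss probability comes from the example):

```python
# Rough expected-value sketch of the investment example.
# The stake and gain figures are hypothetical, for illustration only.
p_total_loss = 0.35
stake = 100_000           # hypothetical amount invested
gain_if_success = 40_000  # hypothetical combined benefit if things go well

expected_value = ((1 - p_total_loss) * gain_if_success
                  + p_total_loss * (-stake))
print(expected_value)  # negative under these assumptions
```

Under these numbers the single 35% downside outweighs the combined benefits; without quantifying the factors, that comparison stays implicit and intuitive.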
Assuming that different groups think like us
Also called “mirror-imaging”: assuming people from a different group or culture will think about things as we do.
- "Putin is not going to invade Ukraine, because we know that the costs of doing so would very likely outweigh the benefits."
Applying an analytic method or technique to a situation it isn’t suited for
Analytic methods and techniques developed for one problem are often reused in a different kind of problem without thoroughly checking the assumptions and simplifications that they involve. Instead, it is assumed that the technique or methodology is infallible and universally applicable.
- “The plane-counting Computer Vision model for Heathrow has not observed any aircraft at Perth, indicating that the airport has been abandoned.” It may be the case that the model doesn’t work for Perth.
An assumption requires justification
While it is often fine not to provide justifications for obvious claims, if an assumption is important to the reasoning or conclusion and is not obvious on its own, it is important to articulate it and describe the reasons for it and its implications.
- “No one saw Catherine leave, but she could have taken a taxi home last night, so is probably fine.” This assumes that Catherine can afford a taxi, and that she had a way of flagging or contacting one. If we knew more about Catherine and where she was, this could become a warranted assumption; but given the information available, it is not.
- “Jason Metulal is considered a terrorist by our government and is responsible for a number of well-documented attacks in neighbouring countries. As we know he will be arriving in the country on a flight that lands tomorrow, the best action to take is to apprehend him on his arrival.” This assumes that we can be confident in successfully apprehending him at the airport, and that there is no better course of action (such as observing him to see who he meets with, for example). These assumptions may be true, but are important enough that they should be articulated and justified. Importantly, the argument also assumes that we are seeking to combat terrorism. This assumption is obvious in the context of the argument, so does not need to be articulated.
An assumption is false or implausible
One or more important assumptions are false or implausible. This error is common when reasoning strategically about what actors and adversaries will do.
- “Harry wouldn’t steal because he is extremely wealthy.” This reasoning assumes, implausibly and falsely, that extremely rich people don’t steal.
- “Previously the Grand Leader has indicated that he would like to start talks about the demilitarisation of the region. Therefore, it is unlikely that he is preparing for war.” In this example, we shouldn’t assume that previous statements from the Grand Leader reliably indicate his current intentions.
Vague uncertainty levels
Whether expressed with words or numbers, the level of uncertainty attributed needs to be clear to be informative. Note that attributing a range of uncertainty to a claim (e.g., “We believe that the finding is 70-85% likely”) can be appropriate in many circumstances.
- “Key finding: The KPP will win the upcoming referendum in Bitlandia.” No uncertainty or confidence level is indicated, though one is really required for this type of finding.
- “Key finding: There is some prospect that the KPP may win the upcoming referendum.” The confidence level is expressed in excessively vague terms.
Uncertainty levels lack internal consistency
In an otherwise valid inference, confidence levels are inconsistent with or not warranted by the reasoning given. Uncertainty levels often require a good explanation within the reasoning. The explanations generally need not be a mathematical calculation – that would suggest a false level of precision in many cases – but they should be plausible and consistent.
Note that often uncertainty or confidence levels are inappropriate because of another specific flaw in the reasoning. Usually you should identify the more specific flaw rather than this one.
- “We found traces of cyanide in suspect 1’s bathroom, such that we can assert with high confidence that suspect 1 was the culprit. The victim showed signs of having ingested both cyanide and VX poison, and suspect 2 was in possession of VX; however, it is unlikely that suspect 2 was the culprit, because she had been a friend of the victim.” Here, comparable evidence sits against each of the two suspects, yet it is used to indicate high confidence in one case and low confidence in the other, and this difference is not justified or warranted by the available information.
Uncertainty levels presented without sufficient context
This is a generic type of flaw that occurs in a number of ways. Often, reports make comparative claims about a risk without being explicit about the comparison, or what this specifically means for the absolute level of risk. Alternatively, they may explain a change to the level of risk without providing the context that allows the reader to work out what this means for the absolute risk. One of the main consequences of this error is that readers are not provided with a true sense of how significant the issue is.
- “The risk from far right extremism is growing faster than other terrorist risks.” This doesn’t tell us how great the risk is, how it compares to other risks, or how much faster it is growing compared to other terrorist risks.
Poor or incomplete signature generation
Using an incomplete or unvalidated set of signatures to identify an activity or object. This can occur when an activity or object has a number of observable signatures, yet only one of these signatures comes to define whether the activity is happening.
- “Every time there have been anti-government protests in the country, I have seen a failure in the harvest, leading to food scarcity. I’m seeing a lot of people gathering, but the harvest looks good, so it can’t be anti-government protests.” Anti-government protests are not defined by the state of the harvest – multiple observable signatures of the activity are being ignored in favour of a single, easily derived one.
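The single-signature error can be made concrete with a toy detector. All observables and thresholds here are invented for illustration; the point is only that a detector which lets one signature define the activity misses any instance where that signature happens to be absent:

```python
# Toy sketch (hypothetical boolean observables, not a real detection model):
# a detector that lets one signature define the activity, versus one that
# weighs several signatures together.

def protest_single_signature(harvest_failed: bool, **_) -> bool:
    # One easily derived signature has come to define the activity.
    return harvest_failed

def protest_multi_signature(harvest_failed: bool, crowds_gathering: bool,
                            anti_government_slogans: bool) -> bool:
    # A fuller signature set: call it a protest if any two observables fire.
    signals = [harvest_failed, crowds_gathering, anti_government_slogans]
    return sum(signals) >= 2

# The scenario from the example: crowds and slogans, but a good harvest.
obs = dict(harvest_failed=False, crowds_gathering=True,
           anti_government_slogans=True)

print(protest_single_signature(**obs))  # False: the protest is missed
print(protest_multi_signature(**obs))   # True: other signatures still fire
```

The two-of-three rule is arbitrary; the contrast between the two functions, not the threshold, carries the lesson.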
Signatures taken out of context
Using signatures identified in a specific context and assuming that they can be applied more generally or in a different context. This can occur when an activity or target has only been seen under a given set of conditions. The signatures derived from this may not, therefore, be applicable to different conditions. Behaviours, motivations and signatures taken to point to actors’ intentions are often transferred between different areas and groups without justification.
- “Insurgents in a neighbouring region usually wear camouflage and camp in remote areas to avoid government forces. This weekend I saw a group of people in camouflage camping up in the hills near me, so I am concerned that we are facing a homegrown insurgency.” The signatures used here are weak; they just happen to be associated with the activity in a highly constrained environment.
- “The army uses equipment inherited from the Soviet Union, so we expect it to be used in the same way and with the same degree of effectiveness.” The use of the same equipment doesn’t indicate the same capability, organisational structure, or methods of employment.
Mistaking abstracted metrics or data for the underlying reality
Several datasets have been merged and processed together to create a layer that represents something meaningful to the assessment, but this is conflated with the underlying situation it is intended to represent. A metric or merged dataset represents the best estimate of what is happening under a set of assumptions and constraints, rather than some universally true representation of what is happening.
- “Activity at oil and gas extraction plants has been used to create a metric for the strength of the economy of a country. The strength of the economy translates directly into how much military hardware can be bought, and thus the country’s military threat. Sanctions against the country have not reduced the activity at oil and gas extraction plants. Therefore, sanctions against the country have had no effect on their military strength.” Oil and gas extraction may be a good metric for a country’s ‘economic strength’ and ‘military threat’ under normal circumstances where there are no sanctions, but not in the situation where sanctions have been imposed. Maybe in this example the extraction activity has remained constant because the sanctions have resulted in more oil and gas being used internally. When creating a measure for something like a country’s ‘military threat,’ it is important to keep in mind that it is a measure, not the thing itself.
Summarised data obscures patterns or distributions
Sometimes when presenting descriptive statistics or a metric, data can be aggregated or summarised in a way that obscures patterns or distributions that are highly relevant to the assessment being made.
- “Violence is concentrated in majority-Christian districts, likely due to the religious affiliation of the government.” Is it all Christian districts, or just a few? Is there an independence movement in one district that is driving it? Is it based on ethnic divisions rather than religious ones? By aggregating the data against religious categories, you cannot check these questions without going back to the raw data and reprocessing it.
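The aggregation problem can be sketched with made-up incident counts (all district names and numbers below are invented). Summed by religious category, the violence does look “concentrated in Christian districts”; disaggregated, a single district drives almost the entire total:

```python
# Hypothetical incident counts per district, invented for illustration.
incidents = {
    # district: (majority_religion, incident_count)
    "North":   ("Christian", 2),
    "East":    ("Christian", 1),
    "Capital": ("Christian", 47),  # e.g. a local independence movement
    "South":   ("Muslim",    3),
    "West":    ("Muslim",    4),
}

# Summarised view: aggregating by religion suggests a religious pattern.
by_religion = {}
for religion, count in incidents.values():
    by_religion[religion] = by_religion.get(religion, 0) + count
print(by_religion)  # {'Christian': 50, 'Muslim': 7}

# Disaggregated view: one district accounts for nearly all of it.
christian = {d: c for d, (r, c) in incidents.items() if r == "Christian"}
print(christian)  # {'North': 2, 'East': 1, 'Capital': 47}
print(max(christian.values()) / sum(christian.values()))  # 0.94
```

The aggregate is not wrong; it simply cannot answer the questions raised above, which is why the raw, disaggregated data must remain checkable.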