What intelligence can’t tell you and why that matters
Intelligence is a crucial tool for reducing uncertainty and informing decisions, but it’s not a crystal ball that can eliminate the fundamental uncertainty of trying to predict human decisions in complex, adversarial environments.
The seductive promise of perfect information
There’s something deeply appealing about the idea that if we can just gather enough intelligence, we can know what’s coming. If we intercept enough communications, recruit enough sources, and analyze enough data, we can eliminate uncertainty and make decisions with confidence. This promise underlies enormous investments in intelligence capabilities and shapes how leaders think about decision-making in security contexts.
This promise, though, is fundamentally misleading, and understanding why requires thinking carefully about what intelligence can and cannot tell you. Let’s go through the different categories of limitation, which are inherent to intelligence itself, not just problems of insufficient resources or methods.
Intelligence can tell you capabilities but struggles with intent
Let’s think about capability versus intent for a moment. Intelligence is actually quite good at assessing capabilities, at least in principle. You can count aircraft, analyze weapons systems, intercept communications about military exercises, observe troop movements, and develop fairly accurate pictures of what an adversary is capable of doing. These are true observables, physical things that leave signatures you can detect and measure.
Intent, though, is a completely different matter. Intent exists in the minds of decision makers, and often those decision makers themselves haven’t fully formed their intentions until circumstances force decisions. Even when leaders have clear intentions, they may hide them, misrepresent them, or change them as situations evolve. Intelligence can sometimes provide clues about intent through communications intercepts, behavioral patterns, or insights from human sources, but these clues are always partial and somewhat ambiguous.
Here’s why this matters so much. Most crucial security decisions don’t turn on capability questions. The question that actually determines your course of action is rarely whether an adversary could do something. Instead, you’re trying to figure out whether they will do something, under what circumstances, and when. These are questions of intent, timing, and decision making under uncertainty, and they are precisely where intelligence is weakest.
To add some context, let’s consider an example. In the months before Russia’s full-scale invasion of Ukraine in February 2022, Western intelligence agencies tracked the buildup of Russian forces around Ukraine’s borders with remarkable accuracy. They knew the number of battalion tactical groups, the logistics preparations, the command structures being activated. The intelligence on capabilities was excellent. Satellites don’t lie about tank formations, and signals intelligence picked up extensive military communications.
But even with all this intelligence, there was genuine uncertainty about whether Russia would actually invade. Some analysts believed the buildup was coercive diplomacy intended to extract concessions without actual invasion. Others thought Putin himself might not have decided yet, that he was creating the capability to invade while keeping his options open. Even after the invasion began, intelligence misjudged Ukrainian intent and capacity to resist, because assessing how Ukraine would fight required understanding morale, leadership, civilian response, and countless intangible factors that don’t show up in satellite imagery.
This example illustrates a fundamental limit: intelligence can tell you what someone can do, but it struggles to tell you what they’ve decided to do, especially when they themselves might not have fully decided yet.
The problem of uniqueness and prediction
Think about how prediction works in domains where it’s genuinely reliable. Weather forecasting, for example, has become remarkably good because we’re predicting repeating physical processes. We observe the same atmospheric dynamics over and over, build models of how they work, and use current observations to predict future states. We get better through repetition, learning from errors, and refining models. The system we’re predicting follows physical laws that don’t change.
Now consider trying to predict whether a particular leader will order a military strike, or whether a terrorist cell will attempt an attack, or whether a state will cross a nuclear threshold. These aren’t repeating processes governed by stable laws. They’re unique decisions by specific individuals in particular contexts, influenced by personality, perception, domestic politics, organizational dynamics, and countless other factors that may never align the same way again.
Intelligence agencies can study historical cases of similar decisions, but the number of truly comparable cases is usually tiny, and the differences between cases often matter more than the similarities.
This means intelligence can’t rely on the same kind of pattern recognition and model building that makes prediction possible in physical systems or even in some social domains where you have large numbers of similar events. You’re not predicting whether it will rain tomorrow based on atmospheric conditions. You’re trying to predict a unique decision by a specific person in a one-off situation, and there’s no amount of historical data that makes this fundamentally predictable.
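To see why tiny reference classes matter, here is a minimal, purely illustrative sketch in Python (the function and every number are invented for illustration): it computes a Wilson confidence interval for a base rate estimated from thousands of comparable events versus from a handful of loosely comparable historical cases.

    import math

    def wilson_interval(successes: int, trials: int, z: float = 1.96):
        """95% Wilson score interval for an estimated base rate."""
        p = successes / trials
        denom = 1 + z**2 / trials
        centre = (p + z**2 / (2 * trials)) / denom
        half = (z / denom) * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2))
        return centre - half, centre + half

    # Weather-style prediction: thousands of comparable events to learn from.
    print(wilson_interval(successes=3200, trials=10_000))  # roughly (0.31, 0.33)

    # "Will this leader order a strike?": a handful of loosely comparable cases.
    print(wilson_interval(successes=2, trials=6))          # roughly (0.10, 0.70)

The specific numbers are beside the point; the shape of the problem is what matters. With thousands of comparable events the estimate tightens usefully, while with a handful of partially comparable cases the interval stays so wide that it barely constrains the decision, and that is before you account for the ways the cases differ from one another.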
Deception and the adversarial relationship
Now we need to address a limitation that’s unique to intelligence in adversarial contexts, which is that the targets of intelligence collection are often actively trying to mislead you. This creates dynamics that don’t exist in most other forms of information gathering.
When you’re conducting scientific research, nature isn’t trying to hide the truth from you or feed you false information. The challenges you face are about developing good methods and having sufficient resources, but you’re not in an adversarial relationship with your subject of study. Intelligence is fundamentally different because adversaries know they’re targets of collection and they devote substantial resources to denial and deception.
Think about what this means in practice. Suppose you’re trying to assess an adversary’s military intentions by intercepting their communications. The adversary knows you’re likely intercepting communications, so they implement operational security measures. They use secure communications, speak in code, compartmentalise information so no single person knows the full picture, and sometimes deliberately send false information over channels they know are compromised. You’re not just trying to collect information; you’re trying to collect information from someone actively trying to prevent you from learning the truth or to feed you lies.
This in itself creates layers of uncertainty that compound each other. If you intercept a communication suggesting an adversary is planning a military operation, you have to consider several possibilities. Maybe this reflects genuine planning. Maybe it’s operational deception intended to make you waste resources or reveal your intelligence capabilities by responding. Maybe it’s genuine planning for a contingency they don’t actually intend to execute. Maybe it’s real but has been deliberately leaked because they want you to know to achieve some deterrent or coercive effect.
The more sophisticated the adversary, the more layers of potential deception you have to consider.
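One way to see how these possibilities compound is with a toy Bayesian calculation. The sketch below is a minimal illustration in Python, with entirely invented priors and likelihoods: when deception and deliberate leaks make a suggestive intercept at least as likely as genuine planning, observing that intercept barely shifts your estimate of actual intent.

    # Toy Bayesian update over competing explanations for one intercepted
    # communication that appears to show attack planning. All numbers invented.
    priors = {
        "genuine_planning":     0.25,
        "deliberate_deception": 0.25,
        "contingency_only":     0.30,
        "intentional_leak":     0.20,
    }

    # How likely you would be to see this intercept under each hypothesis.
    likelihoods = {
        "genuine_planning":     0.6,
        "deliberate_deception": 0.7,  # deception is designed to be seen
        "contingency_only":     0.4,
        "intentional_leak":     0.8,  # a leak is meant to reach you
    }

    evidence = sum(priors[h] * likelihoods[h] for h in priors)
    posteriors = {h: priors[h] * likelihoods[h] / evidence for h in priors}

    for hypothesis, p in posteriors.items():
        print(f"{hypothesis:22s} {p:.2f}")

    # Belief that the planning is genuine barely moves:
    # prior 0.25 -> posterior about 0.25.
    print("P(genuine planning) =", round(posteriors["genuine_planning"], 2))

The structure matters more than the numbers: evidence discriminates between hypotheses only to the extent that it is more likely under one than another, and a sophisticated adversary spends effort erasing exactly that difference.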
The organisational pathologies of intelligence
Let me turn now to limitations that come not from the inherent difficulty of the problem but from how intelligence organisations actually function. These organisational dynamics systematically distort intelligence in ways that matter enormously for decision making.
First, there’s pressure to provide certainty when certainty doesn’t exist. Policymakers and commanders want clear assessments to guide decisions. They ask intelligence agencies questions like “Will the adversary attack?” and they want answers like “Yes” or “No,” not “There’s a forty to sixty percent chance depending on factors we can’t reliably assess.” This creates pressure for intelligence analysts to compress uncertainty into false precision, to take genuinely ambiguous information and produce confident-sounding assessments. There are, of course, ways to limit this: reducing bias, using structured analytical techniques, and so forth.
I’ve seen this dynamic play out countless times in intelligence reporting. An analyst might believe the evidence points slightly toward one conclusion but acknowledges substantial uncertainty. By the time this assessment moves up the chain, gets briefed to leadership, and informs decisions, the uncertainty often gets stripped away. Leaders remember the bottom line assessment, not the caveats. They act on “Intelligence says X” when intelligence actually said “We think probably X, but Y remains quite possible and we’re not confident.”
Second, intelligence organisations face incentive structures that bias assessment in predictable ways. If you warn about a threat that doesn’t materialise, you might be criticised for crying wolf, but the consequences are usually modest. If you fail to warn about a threat that does materialise, you face potential career-ending scrutiny. This asymmetry creates systematic bias toward threat inflation, toward assuming worst-case scenarios, toward interpreting ambiguous evidence as indicating danger.
Think about how this plays out for an individual analyst. You’ve been tracking a terrorist group and you have fragmentary evidence that might indicate attack planning. Do you write an assessment saying “Evidence is insufficient to conclude attack planning is underway” or do you write “Indicators consistent with possible attack planning warrant continued attention”? The second covers you if something happens, while the first exposes you to blame. Over time, this dynamic pushes intelligence assessments toward emphasizing threats.
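You can make the asymmetry concrete with a back-of-the-envelope expected-cost comparison. The sketch below is purely illustrative, with invented costs and an invented probability; it treats the analyst’s choice as minimising personal downside rather than maximising accuracy.

    # Invented career costs for the two ways an analyst can be wrong.
    cost_missed_warning = 100.0  # failing to warn before a real attack
    cost_false_alarm = 5.0       # warning about something that never happens

    p_attack = 0.10  # the analyst's honest estimate that planning is real

    expected_cost_if_warn = (1 - p_attack) * cost_false_alarm    # 4.5
    expected_cost_if_silent = p_attack * cost_missed_warning     # 10.0

    print("warn:  ", expected_cost_if_warn)
    print("silent:", expected_cost_if_silent)

    # With these numbers, warning is the personally cheaper choice whenever the
    # estimated probability exceeds cost_false_alarm / (cost_false_alarm +
    # cost_missed_warning), roughly 0.05, regardless of what the evidence supports.

Repeated across thousands of assessments, that arithmetic is how threat inflation becomes systematic without anyone consciously choosing it.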
Third, there’s the problem of politicisation, where intelligence gets shaped to support preferred policies rather than to inform policy choice. This can happen through crude direct pressure, where leaders make clear they want intelligence to support particular conclusions. More commonly, it happens through subtler mechanisms where analysts learn what kinds of assessments are welcome and which create friction, and they adjust accordingly, often unconsciously.
Why acknowledging limits is a sign of maturity
There’s a tendency in security communities to view acknowledgment of intelligence limits as undermining confidence or providing excuses for failure. This is exactly backward.
Mature intelligence consumers understand that intelligence reduces uncertainty but cannot eliminate it, that it provides crucial inputs to decision making without determining decisions, and that the most important questions often have answers that intelligence can only partially illuminate. This understanding leads to better decisions because it forces leaders to acknowledge the irreducible uncertainty in their choices and to plan for contingencies rather than assuming a single predicted outcome.
When leaders expect certainty from intelligence, several bad things happen. First, they may delay necessary decisions waiting for intelligence to clarify what cannot be clarified, missing windows of opportunity or allowing situations to deteriorate. Second, they may act with false confidence based on intelligence assessments that seemed more certain than they actually were, failing to prepare for the possibility that the assessment is wrong. Third, they may blame intelligence for failures that were actually failures of decision-making under uncertainty.
Again, let’s take a well-known example. Before the 2003 Iraq War, intelligence assessments concluded with high confidence that Iraq possessed weapons of mass destruction. These assessments were wrong, and the failure has been extensively analysed. But one key lesson is about how the assessments were used. Policymakers acted as if the intelligence provided certainty when the actual intelligence, even setting aside its flaws, contained more uncertainty than was conveyed in policy discussions. The search for Iraqi WMD programs was genuinely difficult: Iraq had previously had such programs, was actively deceiving inspectors, and retained ambiguous dual-use capabilities. Perfect intelligence was probably impossible even with better tradecraft.
If policymakers had internalized that intelligence on this question would necessarily be uncertain, that Iraq might or might not have active WMD programs and intelligence couldn’t provide certainty either way, the decision calculus might have been different. Instead of “Intelligence tells us Iraq has WMD, so we must act,” the framing would have been “Intelligence cannot tell us with certainty whether Iraq has active WMD programs, so we need to decide whether to act given this uncertainty.” That’s a harder decision, but it’s also the decision that actually confronted leaders.
What this means for decision-making
Understanding intelligence limitations changes how you think about security decisions in fundamental ways. Instead of trying to achieve certainty before acting, you recognise that you’ll often have to make consequential decisions under uncertainty that intelligence can reduce but not eliminate. This shifts focus from demanding better intelligence to developing better frameworks for deciding under uncertainty.
It also means building resilience and adaptability rather than relying on prediction. If you can’t reliably predict what adversaries will do, you need capabilities that can respond to multiple contingencies rather than optimising for a single predicted scenario. You need organisations that can adapt quickly when surprises occur rather than being rigidly committed to plans based on intelligence assessments that might be wrong.
Perhaps most importantly, understanding intelligence limits encourages intellectual humility. When intelligence agencies say “We don’t know” or “The evidence is ambiguous,” that should be respected as honest acknowledgment of genuine uncertainty rather than viewed as intelligence failure. The failure is not in admitting uncertainty where it exists; the failure would be pretending to certainty that doesn’t exist.