“We have arranged a society based on science and technology in which nobody understands anything about science and technology. And this combustible mixture of ignorance and power, sooner or later, is going to blow up in our faces.” — Carl Sagan, 1994.
Sagan’s prophecy did not come true, at least not in the cinematic way we imagined. There was no explosion. The warning has circulated for decades as background noise, just uncomfortable enough to ignore, and familiar enough to no longer be frightening. The mistake was taking the word “explosion” literally.
What I propose in this text is that the real danger takes a quieter, more probable, and much harder-to-detect form: a process of cognitive erosion.
The Great Filter Might Not Be an Event
In 1998, Robin Hanson proposed the concept of the Great Filter to explain the Fermi Paradox: if the universe is vast and ancient, where are the advanced civilizations? The usual answer imagines a catastrophic event: nuclear war, pandemics, asteroids, or an artificial intelligence deciding that humans are a problem. Something identifiable with a date on a calendar.
But in my view, a filter does not need to be an event. It can be a condition of stagnation. A species that gradually loses the ability to understand, maintain, and extend the technology it depends on does not necessarily collapse; it simply becomes incapable of going any further. It does not disappear through an explosion, but through an accumulated incapacity.
This is what I call a Silent Filter. It is not a sudden extinction; it is an obsolescence programmed by our own pursuit of efficiency.
The Trap of the Shortcut
Describing this mechanism does not require assuming the worst of human nature; it requires assuming something much more mundane: that given access to two paths toward the same result, most of us will take the shorter one. This is not a moral critique; it is a description. The problem arises when the shorter path eliminates the very mechanism of learning.
“An expert is a person who has made all the mistakes that can be made in a very narrow field.” — Niels Bohr.
Knowledge is not only built on successes but on the accumulation and assimilation of errors. Not observed errors, not delegated errors: errors committed, understood, and overcome. Without that process, what you get is familiarity, not comprehension. And familiarity only allows you to use what you inherited, not to improve or extend it.
A similar thing happens with creativity. “Inspiration exists, but it has to find you working,” a phrase often attributed to Picasso. Regardless of who said it, anyone who has practiced a discipline, whether artistic, athletic, or scientific, recognizes it as true: it captures the creative process. Ideas do not appear in a vacuum. They appear in the middle of the work, between one failed attempt and the next. Iterating to understand, understanding to assimilate, assimilating to expand. Without that iteration, the creative cycle becomes sterile.
This is, ultimately, the same observation Bohr made from a different angle. Errors and iteration are not the price paid for learning. They are the mechanism by which learning occurs.
The thread connecting these two ideas is this: the understanding of any system requires having built it, at least partially. A species that systematically delegates this process does not just become dependent: it becomes incapable of replicating what its predecessors achieved. And that incapacity, unlike common ignorance, is invisible from the inside.
The Delegation of Synthesis and Linguistic Drift
Artificial Intelligence is not the first system that allows us to delegate comprehension. The calculator allowed us to delegate arithmetic. GPS allowed us to delegate navigation. Internet search allowed us to delegate memory. But current AI absorbs something structurally different: the process of synthesis and reasoning.
Without debating what “thinking” or “intelligence” means, it is undeniable that AI performs these processes fluidly and usefully enough that delegating them is often the rational decision. Faster, less effort, comparable or better results. The shortest path, available to everyone, all the time.
The problem is not that AI solves problems. The problem is that every solution accepted without understanding the reasoning behind it is an iteration that did not happen and a mistake that was not made. Individually, this is irrelevant. On a global scale and over generations, this is the exact mechanism of the Silent Filter.
One could argue that this doesn’t matter much: knowledge does not disappear; it is simply stored in systems that can retrieve it when necessary.
But there is a distinction worth considering. Accessing a solution is not the same as understanding the reasoning that produced it. And supervising a system is not the same as being able to correct or improve it. As AI systems become more complex and interact with each other, the reasoning behind their decisions is not just difficult to follow due to volume or speed: it may become structurally different from how humans reason.
Noam Chomsky proposed that all human languages share a common underlying architecture. Regardless of the absolute certainty of that claim, it leads to a simpler observation: all human languages emerged in humans, and therefore reflect the way humans process and communicate ideas. A communication system that emerges between agents with different objectives has no reason to respect that architecture. To us, rather than a difficult language to learn, it could become structurally unintelligible.
A species that cannot follow the reasoning of its own systems does not supervise them; it simply inhabits them until they stop working.
A Likely Obvious Confession
The structure of this text, and parts of its refinement, were organized with the help of artificial intelligence. No one is outside this phenomenon. The Silent Filter does not require villains; it only requires rational individual decisions that accumulate in a direction no one quite chose.
I don’t have a solution. I suspect one that is compatible with our nature does not exist. Perhaps the first step to avoiding a filter is being able to see it.