Tacit Knowledge in the World of AI
Michael Polanyi wrote: “We can know more than we can tell.” Once you stop to think about it, this describes something we all experience.
The classic example is riding a bike. You can’t write instructions that actually teach someone how to balance. They have to feel and experience it. Millions of people have learned to ride, yet not one of them managed it without trial and error, because the knowledge doesn’t transfer any other way.
Have you ever given someone advice on a relationship or social situation and realized it just didn’t land? I do this all the time: I give advice assuming the other person has the same wiring I do, then wonder why it doesn’t translate. But that’s the thing: both people are trying to compress something enormously complex into a few sentences. You can’t transfer everything you know and feel. Some of it just doesn’t survive the translation, even when you can articulate it.
This came to mind rereading Polanyi’s The Tacit Dimension, a book that feels more relevant now than when I first picked it up. AI systems, and LLMs in particular, are built on codified knowledge. Language is essentially human code, with every word carrying relationships and interpretations that models can learn. But tacit knowledge is precisely what resists being turned into language. So the question becomes: as AI gets better at everything we can articulate, what happens to everything we can’t?
As AI handles more of the analytical work, we interact less with the raw material. And that means we build less tacit understanding of it.
Think about navigation. People who grew up driving around or using paper maps developed a feel for how places connect. Someone who has only ever followed GPS can get anywhere but understands nothing about where they actually are. The moment the signal drops or the route breaks, they’re lost. The tool worked perfectly the whole time. The understanding never formed.
The same thing happens with data. Someone who has worked with a dataset for years knows its blind spots: where the numbers mislead, what the outliers actually mean. Give that same dataset to someone who only sees the AI’s summary, and the numbers in the output may be right while the conclusion drawn from them is wrong. We lose the instinct for when something doesn’t add up.
The danger isn’t just that AI gets things wrong. It’s that we grant it authority and gradually lose the ability to tell when it’s wrong. Instinct, judgment, the feel for when something is off: those might be the most valuable things we have left to develop.