Tacit knowledge refers to the components of human cognition that cannot be formalized, encoded, or fully expressed in explicit language. Its core thesis, proposed by Michael Polanyi, is: "We know more than we can tell." In the age of Artificial Intelligence, while Large Language Models (LLMs) can generate fluent text, tacit knowledge remains an ontological barrier that machines cannot cross because it is deeply rooted in biological embodied experience, evolutionary instinct, and social conviviality.
The Core Logic: The "From-To" Vector Structure of Cognition
Polanyi divided consciousness into two interdependent levels: Subsidiary Awareness and Focal Awareness.
- Subsidiary Awareness: Refers to the proximal clues we use as support when performing a task (e.g., the feeling of the hammer's weight in your hand while driving a nail).
- Focal Awareness: Refers to the distal object on which our attention is focused (e.g., whether the nail is being driven straight).
- Irreducibility: Tacit knowledge is "self-destructive": the moment you shift attention from the goal (focal) to the details (subsidiary), the smooth execution of the skill collapses. Current AI systems work primarily by explicitly assembling "subsidiary details," and thus lack this holistic Gestalt perception.
Negative Constraint: Tacit Knowledge is NOT "Unorganized Data"
Tacit knowledge is NOT simply data that has not yet been recorded. It is the structural support that gives explicit knowledge its meaning, whereas unorganized data is merely explicit information that has not yet been structured or digitized.
- Tacit Knowledge: Involves somatic, relational, and collective experiences that must be transmitted through "indwelling" and long-term social conviviality (such as apprenticeship).
- Explicit Knowledge: Consists of "present-at-hand" objects that can be encoded into symbols, formulas, or code. AI excels at the latter but runs into Moravec's Paradox when confronted with the former.
Moravec’s Paradox: Cognitive Blind Spots in Evolutionary Scales
AI can easily perform advanced calculus or play chess (logical skills humans developed only recently) but struggles to match the physical intuition or bodily coordination of a one-year-old child (tacit instincts shaped over millions of years of evolution).
- The Imprint of Evolution: Human intuition and the subconscious are products of countless life-and-death trials in evolutionary history, directly imprinted into the nervous system and muscle memory without needing language as an intermediary.
- The Inverse Polanyi Paradox: Large models present a paradoxical inversion: they can "tell" fluently yet possess no subjective "knowing" behind the words. They project an illusion of explicit understanding while bearing no moral commitment to consequences in the real world.
Technical Frontiers: Attempts and Limitations of Machine Tacit Logic
Recent research, such as "Coconut" (Chain of Continuous Thought), attempts to give AI a form of "latent thinking":
- Latent Space Deduction: Instead of decoding the reasoning process into words, the model evolves directly within a continuous vector space, mimicking non-verbal human thought.
- Limitations: While this improves the model's search efficiency, machines lack "embodied pain" and survival anxiety, so these transitions in latent space still lack "understanding" in any ontological sense.
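The contrast between token-level reasoning and continuous latent reasoning can be illustrated with a deliberately tiny sketch. This is not the Coconut architecture itself; it is a scalar toy (all function names and constants are illustrative assumptions) showing the one idea the paper builds on: decoding each reasoning step into a discrete token quantizes the state, while letting the hidden state flow forward undecoded preserves it.

```python
# Toy illustration of the idea behind "continuous thought"
# (Coconut): feeding the hidden state back directly, rather than
# decoding it into a token and re-embedding it, avoids quantization
# loss at every step. Scalar stand-in; names are illustrative.

def step(hidden, weight=0.9, bias=0.05):
    """One toy 'reasoning' update in latent space."""
    return weight * hidden + bias

def discrete_chain(h0, steps, vocab_step=0.1):
    """Chain-of-thought analogue: snap the state to the nearest
    'token' on a coarse grid after every step."""
    h = h0
    for _ in range(steps):
        h = step(h)
        h = round(h / vocab_step) * vocab_step  # decode, re-embed
    return h

def continuous_chain(h0, steps):
    """Coconut analogue: the latent state flows through undecoded."""
    h = h0
    for _ in range(steps):
        h = step(h)
    return h

if __name__ == "__main__":
    h0 = 0.123
    print("discrete:  ", discrete_chain(h0, 8))    # stuck on grid: 0.2
    print("continuous:", continuous_chain(h0, 8))  # ~0.3377
```

The discrete chain collapses onto a grid point after the first step and stays there; the continuous chain keeps sub-token information and converges smoothly toward the update's fixed point. That information loss is the rough analogue of what verbalizing every intermediate thought costs a model.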
Practical Guidance: Human Unique Value in AI Collaboration
In an era where AI serves as a "cognitive exoskeleton," the tacit judgment of human experts has not depreciated; rather, it has become increasingly scarce:
- Contextualization: AI mines patterns, but human experts are responsible for bringing these patterns into complex, high-risk real-world contexts.
- Conviviality: Use apprenticeships to transmit professional sensitivities and decision-making intuitions that AI cannot capture.
- Human-in-the-Loop: Utilize AI to extract explicit signals, but retain the final verification rights for human experts based on their tacit background knowledge.
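The human-in-the-loop pattern above can be sketched as a small triage pipeline: an automated scorer surfaces explicit signals, but a human verifier holds the final sign-off on anything risky. All names, fields, and thresholds here are illustrative assumptions, not a standard API.

```python
# Minimal human-in-the-loop sketch: AI extracts explicit signals
# (scores); a human expert retains final verification rights.
# Names and the 0.8 threshold are illustrative assumptions.

from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Finding:
    item: str
    ai_score: float      # explicit signal extracted by the model
    approved: bool = False

def triage(findings: List[Finding],
           threshold: float = 0.8) -> Tuple[List[Finding], List[Finding]]:
    """Auto-clear only low-risk items; escalate the rest to a human."""
    auto, escalated = [], []
    for f in findings:
        (auto if f.ai_score < threshold else escalated).append(f)
    return auto, escalated

def review(escalated: List[Finding],
           expert: Callable[[Finding], bool]) -> List[Finding]:
    """The expert's tacit judgment is the final gate, not the score."""
    for f in escalated:
        f.approved = expert(f)
    return escalated

if __name__ == "__main__":
    findings = [Finding("routine log entry", 0.2),
                Finding("anomalous transaction", 0.95)]
    auto, escalated = triage(findings)
    review(escalated, expert=lambda f: "anomalous" not in f.item)
    print(len(auto), len(escalated), escalated[0].approved)
```

The design point is the division of labor: the model's score only decides *who* looks at an item, never whether it is ultimately approved; that judgment stays with the expert's background knowledge.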
FAQ
Q: Can Embodied AI acquire tacit knowledge through physical interaction?
A: Embodied AI can simulate human movement parameters, but Polanyi argued that tacit knowledge stems from the life experience of "indwelling" in an environment. Robots lack biological feedback based on pain, fatigue, and survival instincts; their movement adjustments are fundamentally different from the intuition humans form through "embodied resonance."
Q: Why do attempts to "externalize" tacit knowledge often fail in organizations?
A: Because tacit knowledge is "sticky"; it is not the raw material of explicit knowledge, but its support. Attempting to completely eliminate personal elements from knowledge through encoding effectively destroys the meaning of the knowledge itself. Effective management should promote "socialized" sharing rather than forced "explicit" extraction.
Inline Citations
- Polanyi, M. (1966): The Tacit Dimension. The foundational text for the "we know more than we can tell" thesis.
- Dreyfus, H. (1972): What Computers Can't Do. A phenomenological critique of AI based on Heideggerian "skilled coping."
- Moravec, H. (1988): Mind Children. The origin of Moravec's Paradox regarding the difficulty of low-level sensorimotor skills for AI.