Introduction

The human brain has long been the focus of neuroscientific inquiry. Traditional models of intelligence emphasize hierarchical information processing, but recent work proposes a radically different framework: the Thousand Brains Theory of Intelligence. Developed by Jeff Hawkins (who co-created the PalmPilot in the 1990s) and his team at Numenta, the theory redefines our understanding of cognition, perception, and artificial intelligence (AI). Drawing on insights from neuroscience, it not only explains how the brain builds knowledge but also provides a blueprint for next-generation AI systems.

What’s New? How is This Theory Different?

Image source: Amazon

Traditional neuroscience models have largely portrayed the brain as a hierarchical system where information flows upward from simple sensory inputs to more complex cognitive processing centers. This classical view suggests that perception is processed in lower sensory areas before being passed to higher cortical regions, where abstract thought and reasoning emerge. Scientists previously believed that cognition operates in a centralized manner, where a single, unified model of the world is constructed in specific higher cortical areas, such as the prefrontal cortex.

Image source: Numenta

However, the Thousand Brains Theory fundamentally disrupts this notion by proposing that intelligence is distributed across thousands of cortical columns in the neocortex. Each column independently builds its own model of the world from sensory input, and rather than relying on a strict hierarchy, these columns collaborate dynamically to reach a consensus. In this decentralized model, the brain does not depend on a single high-level processing center; intelligence emerges from the interplay of many cortical columns working in parallel. This shift in perspective matters because it challenges the traditional top-down account of cognition and instead presents the brain as a network of independent yet cooperative learning units, bringing greater robustness, adaptability, and resilience to both biological and artificial intelligence systems.

Key Takeaways

Image source: The concept of “Reference Frame”

  • Multiple Parallel Models: Instead of one singular, top-down model, each cortical column builds its own representation of the world. These independent models communicate and consolidate information through a voting mechanism, achieving consensus for perception and decision-making (a minimal code sketch of this column-level voting follows this list). Research in neuroscience, such as Mountcastle’s work on cortical columns (Mountcastle, 1997), supports this idea, showing that each column processes information locally before integrating with global cognition. This decentralized approach is also reflected in AI, where ensemble learning improves robustness by aggregating multiple independent models (Dietterich, 2000).
  • Reference Frames for Knowledge Representation: The theory introduces the idea that every column operates within a reference frame, allowing it to track objects spatially and conceptually over time. Studies on spatial cognition, such as those on grid cells and place cells (O’Keefe & Nadel, 1978; Moser et al., 2014), show that the brain encodes spatial information using internal coordinate systems. These reference frames allow humans to recognize objects regardless of changes in viewpoint, a principle now applied in 3D vision models and robotic perception systems.
  • Sensorimotor Learning: Intelligence is not merely about pattern recognition but is deeply tied to movement. Learning occurs through active exploration, with cortical columns constantly updating their models as we interact with the world. Research by Held and Hein (1963) demonstrated that sensorimotor experience is critical for visual and spatial learning. Similarly, robotics and reinforcement learning have adopted this principle, where embodied AI systems learn by interacting with their environments rather than relying solely on pre-labeled datasets (Schulman et al., 2017).
  • Self-Awareness and Memory: The Thousand Brains Theory challenges conventional views on self-awareness and memory by suggesting that personal identity and awareness emerge from the interactions between thousands of independent yet cooperating cortical columns. Memory is not stored in a single place but distributed across reference frames within these columns, making recall a dynamic and reconstructive process. Neuroscientific research on memory networks (Nadel & Moscovitch, 1997) supports this distributed nature, showing that hippocampal and neocortical interactions reconstruct past events rather than merely retrieving static data. This concept is reflected in AI architectures like Transformer networks, which use self-attention mechanisms to dynamically retrieve relevant contextual information (Vaswani et al., 2017).
  • Robustness and Redundancy: Unlike traditional hierarchical models, which are vulnerable to single points of failure, a distributed approach allows for greater resilience. If one region of the neocortex is impaired, other cortical columns can compensate, reinforcing adaptability. Studies on neural plasticity (Merzenich et al., 1983) have shown that when one area of the brain is damaged, other regions can rewire to take over its function. This principle is also applied in fault-tolerant AI systems, where redundancy ensures that model failures do not lead to catastrophic errors (LeCun et al., 2015).
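
To make the voting idea concrete, here is a minimal Python sketch of how several independent "columns" might each match observed features against their own object models and then vote on what is being sensed. It illustrates the principle only and is not Numenta's implementation; the Column class, the feature labels, and the majority-vote rule are assumptions made for the example.

```python
# Toy illustration of column-level voting (not Numenta's code). Each "column"
# stores known objects as (location -> feature) maps in its own reference
# frame, forms a hypothesis from a few observations, and a simple majority
# vote across columns yields the consensus percept.
from collections import Counter

class Column:
    def __init__(self, object_models):
        # object_models: {object_name: {location: feature}} in this column's frame
        self.object_models = object_models

    def hypothesis(self, observations):
        """Pick the object whose stored (location, feature) pairs best match."""
        scores = {
            name: sum(model.get(loc) == feat for loc, feat in observations)
            for name, model in self.object_models.items()
        }
        return max(scores, key=scores.get)

def vote(columns, observations):
    """Consensus across columns: the most common per-column hypothesis wins."""
    ballots = Counter(col.hypothesis(observations) for col in columns)
    return ballots.most_common(1)[0][0]

# Three columns that learned the same two objects independently.
models = {"mug": {(0, 0): "curved", (0, 1): "handle"},
          "box": {(0, 0): "flat",   (0, 1): "corner"}}
columns = [Column(models) for _ in range(3)]
print(vote(columns, [((0, 0), "curved"), ((0, 1), "handle")]))  # -> mug
```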

This paradigm shift is not just theoretical—it has deep implications for AI, robotics, and neuroscience, suggesting that truly intelligent systems must be built using distributed, reference-frame-based learning mechanisms rather than static datasets.

Image source: voting mechanisms connected to cortical columns

The Significance of the Thousand Brains Theory and Future Developments

The Thousand Brains Theory represents a paradigm shift in how we understand intelligence, offering profound implications for neuroscience, artificial intelligence, and cognitive science. One of its most significant contributions is the decentralization of intelligence, refuting the long-standing belief that cognition operates through a top-down hierarchical model. By revealing that intelligence emerges through distributed, independent yet cooperative cortical columns, the theory reshapes our understanding of learning, decision-making, and memory formation.

In neuroscience, this theory provides new avenues for studying brain plasticity, neurodegenerative diseases, and cognitive resilience. It could help explain why brain damage does not necessarily result in the complete loss of function, as remaining cortical columns can adapt and compensate. Future research is expected to explore how different regions of the neocortex interact, potentially leading to breakthroughs in treating conditions like Alzheimer’s disease and stroke recovery.

For AI, the theory serves as a blueprint for more robust and adaptive artificial intelligence architectures. Current AI models, while impressive in pattern recognition and task execution, struggle with adaptability, memory retention, and real-time reasoning. The application of reference frames, multi-agent decision-making, and sensorimotor learning could lead to AI systems that learn dynamically, refine their knowledge continuously, and interact with their environments in more human-like ways.
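
As a rough illustration of the sensorimotor idea, the sketch below builds an object model only by moving a simulated sensor and recording what it senses at each location; nothing comes from a pre-labeled dataset. The object surface, the movement sequence, and the sense function are invented for the example.

```python
# Minimal sensorimotor-learning sketch: the model of the object grows only
# through the agent's own movements, one (location -> feature) entry at a time.
object_surface = {(0, 0): "flat", (0, 1): "corner", (1, 1): "edge"}

def sense(location):
    """Return the feature felt at a location on the (made-up) object."""
    return object_surface.get(location, "empty")

location = (0, 0)
learned_model = {location: sense(location)}        # sense the starting point
for move in [(0, 1), (1, 0)]:                       # a short exploratory path
    location = (location[0] + move[0], location[1] + move[1])
    learned_model[location] = sense(location)       # each movement adds knowledge

print(learned_model)  # model built purely through active exploration
```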

Some recent developments in the theory include:

  • Neuroimaging Studies: Advanced neuroimaging techniques are being used to validate the independent learning processes of cortical columns and their interactions in real time.
  • AI and Robotics Applications: AI models inspired by the Thousand Brains Theory are being developed to incorporate multi-sensory integration, embodied cognition, and reinforcement learning, making robots more adaptable to their environments.
  • Neuromorphic Computing: Research in neuromorphic hardware aims to design chips that function more like the human neocortex, using distributed processing and memory mechanisms.
  • Cross-Species Comparisons: Understanding how intelligence scales across different species could provide further validation of the theory’s universality and reveal deeper evolutionary insights.

In June 2024, IEEE Spectrum reported that the Gates Foundation is providing the Thousand Brains Project with a minimum of $2.69 million over two years. The project seeks to develop a novel AI framework inspired by the operational principles of the human neocortex, offering an alternative to traditional deep neural networks. The open-source effort plans to collaborate with electronics companies, government agencies, and academic researchers to explore and implement applications of the new platform.

Image source: Lex Fridman Podcast

Real-world Application: Tanka as the World’s First AI Messenger with Long-Term Memory

Tanka is a revolutionary AI-driven enterprise messenger that integrates long-term memory, contextual awareness, and structured intelligence — inspired by the Thousand Brains Theory. Just as the neocortex builds thousands of localized models, Tanka distributes AI assistants across different group conversations, each functioning like a cortical column with distinct yet interconnected knowledge representations.

How Tanka Mimics Brain Function

  • Personalized and Structured Memory: Tanka employs ontology-based large language models to develop personalized memory banks that track user interactions, business contexts, and past decisions, mirroring how cortical columns build and refine world models through reference frames (see the illustrative sketch after this list).
  • Multi-Agent Collaboration: Tanka’s distributed AI assistants work both independently and cooperatively, resembling how cortical columns vote to create a cohesive perception of reality. Each assistant refines knowledge in its domain while contributing to a broader enterprise-wide intelligence.
  • Dynamic Decision-Making: Tanka integrates collective voting mechanisms, similar to the neocortex’s process of achieving consensus across multiple columns. This ensures that AI-driven insights and decisions are contextually relevant, adaptive, and resistant to biases.
  • Active Learning and Adaptability: Unlike traditional AI models that rely on static datasets, Tanka is designed for continuous learning, dynamically updating its memory with new interactions, trends, and evolving user preferences—just as the brain continually refines its models through experience.
  • Contextual Understanding in Real-Time: By integrating multiple sources of structured and unstructured data, Tanka ensures that business conversations are not just reactive but proactively intelligent, drawing connections between disparate pieces of information, much like the neocortex forms holistic representations of complex environments.
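
The sketch below illustrates the general idea of per-conversation memory banks with simple retrieval. It is purely illustrative and is not Tanka's actual implementation; the MemoryBank class, its methods, and the word-overlap scoring are assumptions made for the example.

```python
# Purely illustrative: one small memory bank per conversation, with naive
# keyword-overlap retrieval standing in for real semantic search.
from dataclasses import dataclass, field

@dataclass
class MemoryBank:
    conversation_id: str
    entries: list = field(default_factory=list)   # past decisions, facts, context

    def remember(self, text: str) -> None:
        self.entries.append(text)

    def recall(self, query: str, top_k: int = 3) -> list:
        """Rank stored entries by word overlap with the query."""
        q = set(query.lower().split())
        ranked = sorted(self.entries,
                        key=lambda e: len(q & set(e.lower().split())),
                        reverse=True)
        return ranked[:top_k]

# One bank per group conversation, loosely mirroring "one model per column".
banks = {cid: MemoryBank(cid) for cid in ("sales", "engineering")}
banks["sales"].remember("Q3 decision: keep the enterprise tier price unchanged")
print(banks["sales"].recall("what did we decide about the enterprise tier"))
```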

Image: AI long-term memory in Tanka

OMNE: A Multi-Agent Framework Inspired by the Thousand Brains Theory

OMNE is an advanced multi-agent framework designed to enable AI self-evolution and collective intelligence, drawing inspiration from the Thousand Brains Theory. Much as cortical columns in the brain develop individual models and vote to form a consensus, OMNE leverages multiple AI agents working in tandem, each with independent learning mechanisms but capable of cooperation and adaptation.

Image source: OMNE research paper on arXiv

How OMNE Draws on Thousand Brains Principles

  • Distributed Knowledge Representation: Each agent within OMNE operates as an individual unit, much like a cortical column, learning and refining its domain-specific expertise. These agents can communicate and aggregate their knowledge for better decision-making.
  • Long-Term Memory (LTM) Integration: Similar to reference frames, OMNE enables AI systems to store and retrieve structured long-term knowledge, facilitating better contextual understanding and adaptive reasoning over time.
  • Multi-Agent Collaboration & Voting: Just as cortical columns vote to reach a unified understanding, OMNE’s architecture allows agents to deliberate, validate, and refine solutions collectively, reducing errors and optimizing outcomes (a hedged sketch of this pattern follows this list).
  • Self-Evolution & Adaptability: OMNE supports real-time updates and learning from interactions, enabling AI systems to evolve continuously without requiring frequent retraining.
  • Decision Optimization for Complex Scenarios: By integrating principles of multi-agent negotiation and consensus, OMNE enables AI-powered decision-making that mirrors the robust, distributed intelligence of the human neocortex.
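
For illustration only, here is a rough sketch of the deliberate-and-vote pattern described above, with each agent keeping a small long-term memory it can reuse on repeated tasks. The Agent interface, the policies, and the majority rule are assumptions for this example and do not reproduce the architecture in the OMNE paper.

```python
# Illustrative multi-agent consensus (not the OMNE implementation). Each agent
# proposes an answer from its own policy, reuses its long-term memory when a
# task repeats, and a majority vote produces the collective decision.
from collections import Counter

class Agent:
    def __init__(self, name, policy):
        self.name = name
        self.policy = policy          # maps a task to this agent's proposal
        self.long_term_memory = {}    # stands in for a richer LTM component

    def propose(self, task):
        if task in self.long_term_memory:        # reuse prior experience
            return self.long_term_memory[task]
        answer = self.policy(task)
        self.long_term_memory[task] = answer      # store for future tasks
        return answer

def consensus(agents, task):
    """Each agent proposes independently; the most common proposal wins."""
    ballots = Counter(agent.propose(task) for agent in agents)
    return ballots.most_common(1)[0][0]

agents = [
    Agent("planner",  lambda t: "ship in two releases"),
    Agent("reviewer", lambda t: "ship in two releases"),
    Agent("skeptic",  lambda t: "delay until the audit"),
]
print(consensus(agents, "rollout plan"))  # -> "ship in two releases"
```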

Image source: OMNE research paper on arXiv

Conclusion

The Thousand Brains Theory provides a groundbreaking paradigm for understanding intelligence—one that is fundamentally distributed, resilient, and dynamically adaptive. By mirroring these principles, AI systems like Tanka and OMNE are pioneering a future where digital intelligence is not merely reactive but deeply context-aware, self-improving, and capable of evolving alongside its users.

Image: Tanka is more than just a business messenger

Tanka’s vision extends beyond conventional AI assistants; it aspires to be the cognitive backbone of modern organizations, ensuring that institutional knowledge is not only preserved but continuously enhanced. By integrating long-term memory, collaborative intelligence, and real-time contextual awareness, Tanka transforms communication from a fragmented exchange into a seamless and intelligent workflow.

Similarly, OMNE’s multi-agent framework represents a significant leap toward self-evolving, decentralized AI architectures. By enabling distributed intelligence, dynamic memory storage, and adaptive learning, OMNE stands as a blueprint for the future of AI—where multiple intelligent agents collaborate just as the brain’s cortical columns do.

As AI continues to evolve, the fusion of neuroscience-inspired principles with enterprise collaboration and multi-agent frameworks signals a shift toward self-sustaining, adaptive intelligence—one that empowers teams to think faster, decide smarter, and innovate without constraints.
