James Hardie
April 10, 2025

Harnessing Collective Intelligence: Lessons from the Military, Academic Communities of Practice and AI Collaboration

Transactive Memory and Distributed Cognition

Human beings have long realised that no one person can know everything, but a group can. Transactive memory is the term psychologists use to describe how people in close relationships or teams develop a shared memory system. First proposed by Daniel Wegner in 1985, transactive memory is essentially a “group mind” mechanism through which groups collectively encode, store, and retrieve knowledge. Instead of everyone memorising all information, members specialise in different domains and remember who knows what. The group then functions like a distributed cognitive system, greater than the sum of its parts. Early studies of transactive memory in couples showed that long-term partners divide information between them and rely on each other for recall, outperforming strangers on memory tasks. In other words, each person stores knowledge in the other’s mind and uses communication as the “retrieval cue” to access it. This efficient sharing of mental work means a well-developed transactive memory system can provide a group with “more and better knowledge than any individual could access on their own”.

For me this resonates because of the way I have seen it work in the military and some other organisations. Uniforms, with their medals and badges of rank and achievement, are clear signals that the wearer is likely to hold particular knowledge, experience and authority. However, that knowledge may be outdated, and some people know far more than their insignia display.

Importantly, transactive memory is not just shared information, but a combination of individual expertise and metamemory, an understanding of each other’s knowledge, like a helicopter crew or a special forces unit. Team members learn who the experts are on various topics and trust those sources. This makes the group’s collective memory both extensive and highly coordinated. Scholars note that transactive memory goes together with the concept of distributed cognition, wherein different people hold different pieces of knowledge and must “engage in transactions to assist in recall of the stored information”. When a new problem or question arises, a group with strong transactive memory quickly identifies who has relevant information, rather than everyone redundantly knowing the same facts. The result is faster problem-solving and decision-making. In organisational settings, having a transactive memory system allows teams quick access to a broad knowledge base, improving information integration and decision processes. Teams become more efficient and even develop greater trust and identification with each other because everyone recognises the value each member brings.

Communities of Practice: Shared Learning in Social Networks

Closely related to the idea of a group mind is the concept of Communities of Practice (CoP), developed by Jean Lave and Etienne Wenger. A community of practice is a group of people who come together to share what they know with and for each other, and their wider community, as academics often do. This concept explains how people learn collectively through shared activities and discussions. In a CoP, members continuously exchange knowledge, stories, and solutions around their common domain of interest. Over time, they develop a shared repertoire of best practices and a sense of joint identity as practitioners. Crucially, communities of practice serve as social knowledge networks: experts and novices alike connect, and tacit knowledge (the hard-to-write-down know-how) is passed along through mentoring, observation, and conversation.

Within organisations, cultivating communities of practice has proven to be a powerful strategy for distributed knowledge and innovation. Because CoP members trust and regularly communicate with each other, they effectively create an internal knowledge market where someone in need can quickly find someone else with the answer. This dynamic is another form of transactive memory, but on a larger, more informal scale. Instead of a small team knowing each other’s expertise, a community of practice enables cross-pollination of knowledge across departments or locations. Research has shown that strong communities of practice can decrease the learning curve for new employees, help teams respond faster to customer needs, reduce redundant work, and even generate new ideas for products and services. In short, they are a living embodiment of “distributed cognition” in action, spreading intelligence across an organisation. Wenger’s work reminds leaders that learning and innovation flourish when people are given the space to share what they know, learn from peers, and together push the boundary of their field. This social aspect of learning builds a collective memory within the community, much like an organisation’s institutional memory, that newcomers can tap into and that outlasts any single individual.

Lessons from History: Shared Intelligence in Action

Churchill and “The Prof”: Two Minds That Changed War

History offers compelling examples of transactive memory at work. Consider the partnership between Britain’s World War II Prime Minister Winston Churchill and his chief scientific advisor, Frederick Lindemann (later Lord Cherwell). Churchill was a visionary leader and statesman, but he freely admitted that he relied on experts for technical knowledge. Lindemann, affectionately nicknamed “The Prof”, became Churchill’s closest scientific confidant. When Churchill became Prime Minister in 1940, he appointed Lindemann as the government’s “leading scientific adviser,” with a seat at War Cabinet meetings and daily access to the PM. The two saw each other almost every day throughout the war, effectively operating as a joint intellect guiding Britain’s strategy.

Lindemann’s contributions illustrate how distributed cognition between a leader and an expert can change the course of events. He led a special statistical branch (the infamous “S-Branch”) that gathered data from across the war effort, from food supplies to enemy bombings, and distilled it into simple charts for Churchill. These one-page summaries turned overwhelming data into clear insights, allowing Churchill to make quick, informed decisions under pressure. For example, bar charts comparing Allied ship production vs. losses, or Allied bomb tonnage vs. German, gave Churchill an instant grasp of the trends. The intellectual and psychological power of these presentations was immense: Churchill could see the big picture immediately, trust the analysis, and adjust policy accordingly. In essence, Lindemann acted as an external memory and analytical engine for Churchill. The Prime Minister didn’t need to personally calculate figures or recall every scientific detail; “The Prof’s” brain carried that load. Churchill himself quipped that Lindemann’s mind was “a beautiful piece of mechanism,” one he depended on greatly. From advocating for technologies like radar and specialised weapons, to optimising resource allocation, Lindemann’s advice (backed by data) was a decisive factor in many wartime decisions. This collaborative relationship highlights the power of trust and complementary expertise: Churchill knew what to ask for and how to use the knowledge, while Lindemann knew how to get the answers. Together they formed a transactive memory system at the highest level of government, a wartime group mind of two that was smarter and more adaptive than either man alone.

Samuel Johnson’s Dictionary: One Man, Many Minds

Another historical anecdote takes us back to the 18th century and shows that even a seemingly lone genius can be supported by a form of distributed cognition. Samuel Johnson, the brilliant wit and scholar, set out in 1746 to compile A Dictionary of the English Language essentially by himself, a herculean task at the time. Working in his house on Gough Square in London, Johnson took seven years to complete the work, far longer than his original optimistic estimate, but he did so single-handedly, with only clerical assistance to copy the illustrative quotations from books. The result, published in 1755, was a dictionary of 40,000 words that became the authoritative English dictionary for over a century. It was widely acclaimed; one biographer noted it “easily ranks as one of the greatest single achievements of scholarship, and probably the greatest ever performed by one individual”. But Johnson’s “single” achievement was, in truth, a collective triumph of knowledge. He drew upon countless writings, literature, essays, technical works, and the wisdom of many authors to craft his definitions and usage examples. In effect, Johnson turned the vast recorded memory of English society (all the books and texts available to him) into a cohesive resource. He also had a small team of assistants who helped him index books and transcribe quotations, acting as an extension of his memory as he worked.

Samuel Johnson also had profound insights about the nature of knowledge that resonate with transactive memory. He famously observed that “knowledge is of two kinds: we know a subject ourselves, or we know where we can find information upon it.” This remark, from his conversations recorded by James Boswell, captures the essence of distributed cognition. Johnson recognised that knowing how to find information (in a book, or by asking an expert) is just as important as holding the facts in one’s own head. In compiling his dictionary, Johnson exemplified this principle: he didn’t personally generate all knowledge of English, he expertly curated and interconnected knowledge from diverse sources. Likewise, in any collaboration, one person might not remember a detail, but they might recall which colleague or reference work could supply it. Johnson’s insight presaged how modern teams and even internet-age individuals operate. In fact, contemporary research confirms this phenomenon: people tend to forget information that they know they can easily look up, while remembering how to look it up (for example, remembering the search term or source location), effectively treating the Internet as an external memory store. Centuries before Google, Johnson understood that where knowledge resides (and how to access it) is a core part of intelligence. His dictionary, assembled through tremendous scholarly networking of sources, stands as a monument to collective knowledge sharing.

How Large Language Models Organise Knowledge

Jumping to the present, we now have machines, Large Language Models (LLMs) like ChatGPT, that mimic certain aspects of human collective intelligence. At a high level, an LLM is a computer model trained on vast amounts of text (books, articles, websites) to predict and generate language in a human-like way. How does it manage this feat? The key lies in how it represents and links ideas in a mathematical form. In the training process, the model converts words and phrases into complex numerical vectors (essentially lists of numbers) that capture statistical associations. Words with related meanings end up with vectors that are near each other in this abstract space, so in the model’s “mind,” concepts that are similar or often connected in usage literally have a smaller distance between them in vector space. For example, in such a semantic space, “Winston” might be near “Churchill,” “Prime Minister,” and “Britain”, reflecting their frequent co-occurrence. Likewise, “innovation” might sit close to “creativity” and “technology.” Through exposure to billions of words, an LLM develops a sprawling web of associations: it doesn’t understand ideas as we do, but it knows which ideas tend to appear together and in what contexts.
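To make “distance between concepts” concrete, here is a minimal sketch in Python. The four-dimensional vectors are invented for illustration (real models learn embeddings with hundreds or thousands of dimensions), but cosine similarity is the standard way such closeness is scored.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Near 1.0 for vectors pointing the same way, near 0.0 for unrelated ones."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hand-written toy embeddings, purely illustrative.
embeddings = {
    "churchill":      np.array([0.9, 0.8, 0.1, 0.0]),
    "prime_minister": np.array([0.8, 0.9, 0.2, 0.1]),
    "elephant":       np.array([0.0, 0.1, 0.9, 0.8]),
}

print(cosine_similarity(embeddings["churchill"], embeddings["prime_minister"]))  # ~0.99: close in meaning
print(cosine_similarity(embeddings["churchill"], embeddings["elephant"]))        # ~0.12: far apart
```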

Imagine looking at the night sky, full of stars and planets. Based on that flat 2D view you might judge two stars to be close neighbours, when in true 3D distance they are among the furthest apart. The same caution applies to any low-dimensional picture of a high-dimensional vector space: apparent closeness depends on the dimensions you can see.
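The analogy translates directly into code: a hypothetical pair of points that appear adjacent once depth is discarded, yet are far apart in the full space.

```python
import numpy as np

# Two hypothetical "stars": nearly identical in x and y, very different in depth (z).
star_a = np.array([1.0, 2.0, 0.0])
star_b = np.array([1.1, 2.1, 50.0])

# The flat night-sky view: drop the depth dimension.
flat_a, flat_b = star_a[:2], star_b[:2]

print(np.linalg.norm(flat_a - flat_b))  # ~0.14: they appear side by side
print(np.linalg.norm(star_a - star_b))  # ~50.0: in full 3D they are far apart
```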

When you ‘prompt’ an LLM with a question or a sentence, it processes your input through layers of artificial neurons that weigh these learned associations. In essence, it asks: based on the patterns in my training, what’s likely to come next? If you ask, “What are the benefits of teamwork?”, the model will sift through its vectorised knowledge of “teamwork” and related concepts (perhaps finding links to “collaboration,” “communication,” “productivity,” etc.) and start constructing an answer by selecting words with a high probability of fitting those learned patterns. One can think of this like a highly advanced autocomplete that doesn’t just finish a word but can finish whole thoughts. Because it has been trained on human-written text about virtually every subject, the LLM’s knowledge base is extraordinarily broad; it’s as if it has ingested a significant portion of humanity’s collective memory (albeit in raw text form). This is why LLMs can often provide useful information or answers: they are drawing on embedded patterns that originate from real facts and ideas humans have documented. In a sense, an LLM is a distillation of millions of human contributions, compressed into a predictive model.
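As a rough sketch of that advanced autocomplete, the snippet below samples one next word from an invented probability table. The numbers are purely illustrative, not drawn from any real model; an actual LLM repeats this step token by token, over a vocabulary of tens of thousands of entries, with each choice conditioned on everything generated so far.

```python
import random

# Invented next-word probabilities following the prompt
# "The benefits of teamwork include...". Illustrative numbers only.
next_word_probs = {
    "collaboration": 0.35,
    "communication": 0.30,
    "productivity":  0.20,
    "trust":         0.10,
    "elephants":     0.05,  # implausible continuations get low probability
}

# Sample one word, weighted by probability: the core generation step.
words = list(next_word_probs)
weights = list(next_word_probs.values())
print(random.choices(words, weights=weights, k=1)[0])
```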

However, it’s critical to note that an LLM’s way of “knowing” is fundamentally different from human understanding. The model does not truly comprehend the meaning of the words it uses; it has no conscious awareness or context beyond the data it was given. For instance, it knows the word “elephant” often correlates with words like “large,” “grey,” or “trunk,” and it can use them in a sentence about elephants. But it has never seen an elephant, cannot verify a fact about elephants except by internal reference to its training data, and it doesn’t have a goal or intent of its own. It operates entirely by pattern recognition. This leads to both impressive abilities and notable limitations. On the one hand, LLMs show a kind of emergent creativity: by recombining patterns, they can produce original-seeming text, write stories, suggest ideas, or translate languages, tasks that traditionally we associated only with human intelligence. On the other hand, because they lack genuine understanding, they are prone to making things up when the prompt goes beyond their factual training (a phenomenon known as hallucination). The LLM has no internal model of reality to fall back on; if the statistical correlations suggest an answer that sounds plausible, it will present it, even if it’s incorrect or nonsensical. Context is another challenge: an LLM doesn’t truly remember interactions or adapt its answers based on anything outside the text you provide (unless it’s connected to tools or updated databases, which basic models are not).

In simpler terms, an LLM’s “knowledge” is stored as probabilities and proximity of ideas in a mathematical space, whereas human knowledge (especially in a group) is stored in living brains that understand and can interpret information in the light of experience and current context. This difference makes an LLM incredibly powerful as a repository of static knowledge and linguistic patterns, much like a colossal encyclopaedic brain, but it struggles with the dynamic, context-rich reasoning that groups of humans excel at.

LLMs vs. Human Group Minds: Parallels and Limitations

I think it is fascinating to compare a large language model to a human team with a strong transactive memory system. In some ways, an LLM can be thought of as an artificial group mind: it has “learned” from millions of authors and experts, effectively pooling knowledge from countless sources. It doesn’t hold expertise in modules the way individuals in a team do, but it has statistically encoded many expert viewpoints within its weights. When working properly, an LLM can retrieve a specific piece of information much like a team member might recall what another member once said. In fact, when a person uses an LLM as a support tool (for example, asking it to explain a concept or draft a report), it resembles a transactive memory process: the human knows where to find the information (in the AI) and how to query it, while the AI supplies the content on demand. The breadth of knowledge in an LLM also parallels a well-connected community of practice: just as a large community might collectively know about almost anything within a domain, a sufficiently trained LLM has at least a surface-level familiarity with almost any topic someone might ask about (since it’s read so much of the internet). Both systems, human group minds and LLMs, rely on patterns. A team uses patterns of communication (“Who do I ask about X?”) and a shared history of problem-solving, while an LLM uses patterns in language. Both can surprise an observer with an answer that seems to come from a higher intelligence, the team via brainstorming and combining members’ insights, the LLM via synthesising patterns into a coherent answer. Notably, both can also exhibit forms of emergent behaviour. A diverse team might come up with solutions that no single member would have conceived alone; an LLM can generate output that wasn’t explicitly programmed, such as creating a short poem on a theme by drawing on learned examples and blending them in new ways.

However, the differences are profound. A human group operates with intentions, emotions, and an awareness of the real world, elements no AI currently possesses. Human collaborators can contextualise their knowledge to the current situation: a team of engineers working on a project knows the specific goals, constraints, and nuances of their task, and they dynamically adjust their thinking as conditions change. An LLM, by contrast, has no inherent sense of the situation or the practical implications of its answers unless those are explicitly described in the prompt. It cannot read a room, sense urgency, or change course unless instructed. Human group minds also excel at critical evaluation and creative conflict, team members can challenge each other’s ideas, ask follow-up questions, or iterate on a concept to improve it. LLMs do not have a built-in mechanism to truly “reflect” on an answer’s correctness or push back on a flawed question; they will politely generate an answer regardless. In terms of memory updates, human transactive memory systems are fluid and adaptive, people learn new information, update each other (“Alice is now the go-to person on client X since she just did research on it”), and forget outdated knowledge. LLMs, in contrast, are mostly static after training; they don’t automatically update with new information unless retrained, and they might continue giving answers based on sources that are no longer accurate.

Another limitation of LLMs is the lack of deep creativity and common sense that human groups can apply. While an LLM can recombine existing ideas, a group of humans can draw on lived experiences, empathy, and a genuine understanding of a problem to generate truly novel solutions or strategies. Humans have the advantage of motivation and purpose: a team working for a company has a drive to succeed, to beat competitors, to earn rewards, and that drive fuels innovation. An AI is not motivated like a human; it will do exactly and only what it is prompted to do. Thus, an LLM can offer a thousand ideas, but it takes human judgment to recognise which of those ideas make sense in context, which are ethical, and which truly solve the problem at hand.

In summary, large language models and human group minds both demonstrate the power of aggregation: aggregating knowledge (in an LLM’s training corpus or a team’s members) and aggregating contributions (words in an answer or ideas in a discussion) can yield outcomes that seem intelligent and often are useful. An LLM can be seen as a tool that encapsulates a form of collective memory, the stored knowledge of humanity’s texts, and thus it complements human teams rather than replaces them. The model can provide quick answers, draft documents, generate options, and surface information that the team might not have at its fingertips, functioning much like a well-informed team member (albeit one that needs supervision). But unlike a real team member, the LLM lacks true understanding, adaptability, and accountability. It is a supremely knowledgeable assistant with no real-world experience. Therefore, the best results arise when we combine the two: using LLMs to extend and amplify human transactive memory, while relying on human insight to guide, filter, and implement ideas.

Shared Intelligence in Modern Business: People + AI for Innovation

What do these concepts mean for today’s business leaders? The clear message is that shared intelligence is a competitive advantage. In an economy where information is abundant and change is rapid, no single person, not even the CEO or a star performer, can possess all the insight needed to navigate complex challenges. Success comes from building effective “group minds” within the organisation and augmenting them with technology. This starts with culture and organisation design: leaders should foster an environment where knowledge is openly shared, not siloed. Encouraging communities of practice (formally or informally) can pay huge dividends, as Wenger’s work showed. When employees across different teams or locations regularly talk about their craft, exchange tips, or brainstorm together, they create channels for latent expertise to flow where it is needed. For example, a company might support an internal community of practice for data scientists or for customer service reps across regions. Through those networks, an employee faced with a novel problem knows exactly who might have dealt with something similar, echoing the transactive memory principle of knowing who knows what. The speed with which solutions surface improves dramatically when the right connections exist. As noted earlier, groups with strong transactive memory can integrate information faster and make better decisions. In a business setting, that could mean resolving a client issue in hours instead of days, because the team quickly identified the internal expert to consult.

We should also explicitly cultivate team awareness of knowledge. This can be as simple as having teams periodically map out each member’s specialties or recent project experiences, so that everyone is aware of the collective pool of skills. Some companies use directories or knowledge management systems to facilitate this (“Search the directory for ‘AI expertise’ and you find the person who can help”). However, technology tools alone are not enough; it’s the human relationships and trust that really make transactive memory work. Team building, cross-department projects, and mentorship rotations can all help people get to know each other’s strengths. The goal is an organisational culture in which asking for help is encouraged and no one hesitates to tap someone else’s brain. When mutual awareness and trust are high, the organisation behaves like a well-coordinated brain, with little friction in moving knowledge to where it’s needed most.
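At its core, a “who knows what” map can be very small. The sketch below is hypothetical: the topics and names (Alice echoes the earlier example) are invented, and a real system would sit behind a searchable staff directory or knowledge base rather than a hard-coded table.

```python
# A minimal "who knows what" directory. Topics and names are hypothetical.
skills_directory = {
    "ai": ["Alice", "Raj"],
    "client x": ["Alice"],
    "logistics": ["Bernard"],
}

def who_knows(topic: str) -> list[str]:
    """Return the colleagues recorded as experts on a topic, if any."""
    return skills_directory.get(topic.lower(), [])

print(who_knows("AI"))       # ['Alice', 'Raj']
print(who_knows("Finance"))  # []: a gap worth adding to the directory
```

The data structure is trivial; the hard part, as the paragraph above argues, is keeping it current and building the trust that makes people actually consult it.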

At the same time, modern businesses have an unprecedented opportunity to leverage machine cognition alongside human talent. Large language models and other AI tools can serve as powerful extensions of the team’s collective intelligence. A practical example is using an LLM-powered chatbot trained on the company’s internal documents and data. This AI assistant can be available 24/7 to answer employees’ questions (“Where can I find the latest financial report?” or “What does our policy say about X?”). In effect, it is a new member of the community of practice, one that holds a vast repository of the organisation’s explicit knowledge. Just as the internet has become a dynamic external memory source for individuals, a well-designed AI system can become a reliable memory bank for a company, retrieving information in seconds that might take an employee hours of digging. This frees up humans to focus on analysis, judgement, and creativity rather than rote information retrieval.
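One common way to wire up such an assistant is retrieval-augmented generation: fetch the most relevant internal documents first, then ask the model to answer only from them. The sketch below shows the shape of that approach, with stub functions standing in for a real document index and a real LLM API; it is an illustration under those assumptions, not a working product.

```python
def search_company_docs(question: str, top_k: int = 3) -> list[str]:
    """Stub for a real document index (keyword or vector search in practice)."""
    docs = {
        "financial report": "The latest financial report is on the finance intranet page.",
        "policy": "Policy X: remote work requires manager approval.",
    }
    hits = [text for key, text in docs.items() if key in question.lower()]
    return hits[:top_k]

def llm_complete(prompt: str) -> str:
    """Stub for a real LLM API call."""
    return "[model answer, grounded in]\n" + prompt

def answer_employee_question(question: str) -> str:
    # 1. Retrieve the most relevant internal documents for the question.
    passages = search_company_docs(question)
    context = "\n".join(passages) or "No matching documents found."

    # 2. Ground the model in those passages and tell it to admit gaps
    #    rather than improvise, a guard against hallucination.
    prompt = (
        "Answer using only the context below. If the context does not "
        "contain the answer, say so.\n\n"
        "Context:\n" + context + "\n\nQuestion: " + question
    )

    # 3. Generate. A human still reviews anything consequential.
    return llm_complete(prompt)

print(answer_employee_question("Where can I find the latest financial report?"))
```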

However, to truly capitalise on AI, leaders must also instil an understanding of its limitations among their teams. Just as you wouldn’t blindly trust a new junior employee with no track record, you shouldn’t blindly trust AI output without verification. The organisation’s culture should position AI as a collaborative tool, something to draft ideas, explore alternatives, and provide quick knowledge checks, while humans remain the ultimate decision-makers and sense-makers. When human teams treat the AI as a partner rather than an infallible oracle, they can catch errors (like an AI “hallucinating” a falsehood) and refine the AI’s contributions into truly valuable insights. Many forward-thinking companies are already pairing human experts with AI copilots in this way. For instance, software developers use AI code generators to suggest solutions, then debug and optimise them; customer support reps use AI to draft response templates, then personalise them with the empathy only a human can provide. This synergy between human and machine mirrors the ideal transactive memory partnership: each side knows what the other is good at and where their strengths overlap. AI is tireless, fast, and encyclopaedic; humans are intuitive, contextual, and principled. We also live in different worlds, one digital and the other biological. Or are we all in a simulation?

 

What Next?

To inspire and guide their organisations, leaders may take a few key actions:

  • Cultivate Communities of Practice: Encourage the formation of networks where employees with similar interests or roles regularly share knowledge and experiences. This can be done through scheduled forums, internal social platforms, or support for professional groups. Such communities drive innovation by cross-pollinating ideas and surfacing best practices (indeed, they have been shown to “generate new ideas for products and services” and prevent reinventing the wheel).
  • Foster Transactive Memory in Teams: Promote a culture of knowledge awareness. Teams might maintain a skills matrix or have “show and tell” sessions to keep everyone informed of each member’s expertise and recent learnings. Knowing who to consult for specific knowledge means problems get solved faster with fewer dead ends. It also builds respect: everyone recognises each other’s unique value, strengthening trust and teamwork.
  • Leverage LLMs and AI Thoughtfully: Integrate AI tools as knowledge partners. For example, deploy an internal Q&A chatbot for common inquiries, use LLMs to summarise market research, or help teams brainstorm with AI-generated suggestions. Treat these tools as an extension of your team’s mind: powerful, but in need of guidance. Provide training on how to fact-check AI outputs and on which tasks AI is best used for. When humans and AI collaborate, the organisation benefits from the combined strengths of computational power and human creativity.
  • Emphasise Learning and Adaptation: Shared intelligence is not a one-and-done achievement; it’s an ongoing process. Encourage continuous learning (through mentorship, training programmes, and open dialogue) so that both the people and the AI in your organisation are always updating their knowledge. Just as transactive memory systems require refreshing and updating with new information, an organisation should refine its practices as new tools and information emerge. Being adaptable and staying current with technology ensures that the collective intelligence doesn’t become stale or obsolete.

In conclusion, the most innovative and resilient organisations will be those that consciously blend human and machine capabilities into a unified collective intelligence. This means enabling people to connect and think together, as Churchill did with his trusted advisor and as communities of practice do across the world, and enabling people to connect with the vast repositories of knowledge that technologies like LLMs provide. The magic lies in the combination: human contextual awareness and creativity guiding the almost-limitless information and pattern-recognition of AI. When a team knows how to tap all its members’ knowledge (and acknowledges each contributor, human or digital) it creates a kind of organisational brain that can tackle big challenges in clever ways. From Winston Churchill’s wartime cabinet to today’s AI-augmented project teams, the lesson is clear: we truly achieve more when we think together. By championing teamwork, mutual awareness of expertise, and intelligent use of emerging technologies, leaders can unlock a level of collective genius that propels their organisations to new heights of innovation and success.

 
