Knowing, Doing, Fulfillment, Power, Collapse

The point here is that AI is about power, not knowledge, intelligence or human fulfillment.

This sounds like a koan, and it is. Let’s dig in.

Knowledge isn’t just knowing; the ultimate manifestation of knowledge is doing: knowing how to make, create, fix, and learn from experience, a process that constantly expands our ability to do more.

Knowing an “answer” doesn’t give us the ability to do something useful. Looking at a recipe doesn’t give us the knowledge of how to cook. Having AI generate a derivative song doesn’t give us the ability to play or compose music. Looking at a blueprint doesn’t give us the knowledge of how to build a house. The knowledge of doing is experiential: knowing is not enough; we must learn by doing.

The process of doing builds tacit knowledge: experiential knowledge that cannot be fully formalized because it is assembled by both halves of our minds, the intuitive and the rational.

AI provides “answers,” but this is not a substitute for the knowing that enables doing, which then enables mastery.

Humans are not machines, and “value” cannot be reduced to financial numbers. Humans are social beings because isolation offers little selective advantage; working together in groups offers selective advantages.

What’s valuable is thus socially defined: making ourselves useful to others gives us purpose, meaning, a positive social role and a positive identity / self-respect / self-worth.

Without a socially useful role, we wither, and are prone to self-destructive spirals and depression or anti-social behaviors: Idle hands are the devil’s workshop.

Fulfillment as individuals and as social beings arises not from idleness / convenience but from applying the knowing of doing.

The vision of fulfillment offered by AI is the exact opposite: Nirvana is having nothing to do because robots and AI will do all the work, and we will have limitless conveniences and leisure–a PR cover for idleness.

This vision of fulfillment–of having nothing to do but play all day–is at its core childlike, the child’s idea of happiness. But once we grow up, a life of purposeless, socially useless idleness is not fulfilling or healthy; it’s debilitating.

In the Silicon Valley vision of AI supremacy, we will all buy a robot that will do all our cooking and cleaning so we will be blissfully free to stare at screens all day, “entertaining” ourselves with endless AI-generated content and social media scrolls. All this “entertainment” is debilitating and deranging, but never mind: the point for the AI boosters is that it’s profitable.

There is no fulfillment possible in watching a robot prepare a meal for us. The fulfillment, the satisfaction, and yes, the joy, is in harvesting the green beans ourselves, julienning them, and then preparing them for the table we set ourselves.

A world in which we stare at screens while robots do all the work is a prison, a drip of Soma, a lifeless life devoid not just of fulfillment but of independence, self-reliance and power, for once we no longer know how to do anything essential and useful ourselves, we are dependent, which is another way of saying we’re powerless.

As I have noted here many times, self-reliance is the foundation of agency–control of the direction of our lives–and power.

AI concentrates power in the hands of the few, turning everyone who comes to depend on the Soma of “answers” and robots into a ring-fenced herd that no longer has the power to act or think independently.

To the degree that knowledge is power, AI is the concentration of this power, because AI curates what is considered knowledge. And since AI is a model, and all models leave out things the model builders don’t even realize they left out because they’re embedded in a cultural mindset of what qualifies as “knowable” and “knowledge,” all AI is deeply, profoundly, inescapably coercive on the ground level of what we take to be “known” and therefore “true.”

The models of “intelligence” and “knowledge” generate content that reinforces the limitations and biases of the model. The inevitable outcome of this self-reinforcing loop is “model collapse”: the model ceases to be anything other than a reflection of its own limitations and biases, presented as “facts, answers and knowledge.”

This article explains just how this curation, editing and bias works. It is paywalled, but it’s well worth reading if you can access a free version. I have excerpted some key points below.

What AI doesn’t know: we could be creating a global ‘knowledge collapse’
As GenAI becomes the primary way to find information, local and traditional wisdom is being lost. And we are only beginning to realise what we’re missing.

To understand how certain ways of knowing rise to global dominance, often at the expense of Indigenous knowledge, it helps to consider the idea of cultural hegemony developed by the Italian philosopher Antonio Gramsci.

Gramsci argued that power is maintained not solely through force or economic control, but also through the shaping of cultural norms and everyday beliefs. Over time, epistemological approaches rooted in western traditions have come to be seen as objective and universal. This has normalised western knowledge as the standard, obscuring the historical and political forces that enabled its rise. Institutions such as schools, scientific bodies and international development organisations have helped entrench this dominance.

In her book Decolonizing Methodologies (1999), the Māori scholar Linda Tuhiwai Smith emphasises that colonialism profoundly disrupted local knowledge systems – and the cultural and intellectual foundations on which they were built – by severing ties to land, language, history and social structures. Smith’s insights reveal how these processes are not confined to a single region but form part of a broader legacy that continues to shape how knowledge is produced and valued. It is on this distorted foundation that today’s digital and GenAI systems are built.

I recently worked with Microsoft Research, examining several GenAI deployments built for non-western populations. Observing how these AI models often miss cultural contexts, overlook local knowledge and frequently misalign with their target community has brought home to me just how much they encode existing biases and exclude marginalised knowledge.

The work has also brought me closer to understanding the technical reasons why such inequalities develop inside the models. The problem is far deeper than gaps in training data. By design, LLMs also tend to reproduce and reinforce the most statistically prevalent ideas, creating a feedback loop that narrows the scope of accessible human knowledge.

Why so? The internal representation of knowledge in an LLM is not uniform. Concepts that appear more frequently, more prominently or across a wider range of contexts in the training data tend to be more strongly encoded. For example, if pizza is commonly mentioned as a favourite food across a broad set of training texts, when asked “what’s your favourite food?”, the model is more likely to respond with “pizza” because that association is more statistically prominent.

More subtly, the model’s output distribution does not directly reflect the frequency of ideas in the training data. Instead, LLMs often amplify dominant patterns or ideas in a way that distorts their original proportions. This phenomenon can be referred to as “mode amplification”.
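To make the mechanism concrete, here is a minimal toy sketch of mode amplification. It is my own illustration, not from the article: the food names, proportions and temperature values are all invented for the example. It shows how sharpened (low-temperature or near-greedy) sampling turns a merely-plural answer like “pizza” into a near-universal one in the model’s outputs.

```python
# Toy illustration of "mode amplification" (hypothetical numbers, not real model data).
# In the training data, "pizza" is only 40% of favourite-food mentions, yet
# sharpened sampling makes it dominate the outputs.
import numpy as np

rng = np.random.default_rng(0)

foods = ["pizza", "jollof rice", "pho", "tamales", "injera"]
training_freq = np.array([0.40, 0.20, 0.15, 0.15, 0.10])  # proportions in the corpus

def sample_outputs(probs, temperature, n=100_000):
    """Sample answers after sharpening the distribution with a temperature < 1."""
    logits = np.log(probs)
    sharpened = np.exp(logits / temperature)
    sharpened /= sharpened.sum()
    draws = rng.choice(len(probs), size=n, p=sharpened)
    return np.bincount(draws, minlength=len(probs)) / n

for t in (1.0, 0.5, 0.2):
    out = sample_outputs(training_freq, temperature=t)
    print(f"temperature={t}: pizza appears in {out[0]:.0%} of answers")

# temperature=1.0: pizza in ~40% of answers (faithful to the data)
# temperature=0.5: pizza in ~63% of answers
# temperature=0.2: pizza in ~96% of answers -- the mode is amplified and the
# long tail of answers nearly disappears.
```

The point of the sketch is only that the output distribution need not mirror the training distribution: anything that favours the statistically dominant pattern distorts the original proportions.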

And beyond merely reflecting existing knowledge hierarchies, GenAI has the capacity to amplify them, as human behaviour changes alongside it. The integration of AI overviews in search engines, along with the growing popularity of AI-powered search engines such as Perplexity, underscores this shift.

As AI-generated content has started to fill the internet, it adds another layer of amplification to ideas that are already popular online. The internet, as the primary source of knowledge for AI models, becomes recursively influenced by the very outputs those models generate. With each training cycle, new models increasingly rely on AI-generated content. This risks creating a feedback loop where dominant ideas are continuously amplified while long-tail or niche knowledge fades from view.
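The recursive dynamic described above can also be shown with a small simulation. This is an illustrative sketch of my own, not the article’s model: each “generation” is re-fit to a finite sample of the previous generation’s slightly mode-favouring outputs, and the niche ideas drop out first.

```python
# Toy simulation of the feedback loop: models trained on earlier models' outputs.
# All parameters (50 ideas, Dirichlet(0.3) tail, sharpen=1.2, 5,000-item corpus)
# are arbitrary choices for illustration.
import numpy as np

rng = np.random.default_rng(1)

n_ideas = 50
# Long-tailed "true" distribution of ideas on the original, human-written internet.
dist = rng.dirichlet(np.full(n_ideas, 0.3))

def next_generation(dist, sample_size=5_000, sharpen=1.2):
    """Favour dominant ideas slightly, sample a finite corpus, re-estimate."""
    favoured = dist ** sharpen
    favoured /= favoured.sum()
    corpus = rng.multinomial(sample_size, favoured)
    return corpus / sample_size

for gen in range(10):
    surviving = int((dist > 0).sum())
    top_share = float(np.sort(dist)[-5:].sum())
    print(f"gen {gen}: {surviving:2d}/{n_ideas} ideas survive, "
          f"top-5 ideas hold {top_share:.0%} of the content")
    dist = next_generation(dist)

# Typical output: the 50 ideas dwindle generation by generation while the top
# handful climbs toward 100% -- once a rare idea fails to appear in one
# generation's corpus, it can never reappear in the next.
```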

The AI researcher Andrew Peterson describes this phenomenon as “knowledge collapse”: a gradual narrowing of the information humans can access, along with a declining awareness of alternative or obscure viewpoints. As LLMs are trained on data shaped by previous AI outputs, underrepresented knowledge can become less visible – not because it lacks merit, but because it is less frequently retrieved or cited. Peterson also warns of the “streetlight effect”, named after the joke where a person searches for lost keys under a streetlight at night because that’s where the light is brightest. In the context of AI, this would be people searching where it’s easiest rather than where it’s most meaningful. Over time, this would result in a degenerative narrowing of the public knowledge base.

Allow me to summarize:
AI inevitably generates model collapse and knowledge collapse.
AI extinguishes fulfillment by extinguishing the tacit knowledge of doing.
This concentrates power in the hands of those who own and control the AI gearing, while disempowering everyone who accepts AI’s drip-line of “answers” and its implicit promise that purposeless idleness is fulfillment, when it is actually a prison of debilitating powerlessness.

https://charleshughsmith.substack.com/p/knowing-doing-fulfillment-power-collapse