Anything That Is Made of Words Will Be Taken Over by AI


AI no longer feels like a distant tool I read about in papers or conference slides. It writes. Sometimes better than I do. It connects ideas across disciplines I’ve never formally studied and produces text that feels structured, poetic, philosophical, even convincing.

As someone who works with systems and logic, I find this both fascinating and unsettling.

For a long time, words felt like safe territory. Code may be automated, labor may be optimized, but thinking—expressed through language—still felt human. Writing was slow, imperfect, and deeply tied to experience. Now, that assumption feels fragile.

Yuval Noah Harari, speaking at the World Economic Forum in Davos, didn’t frame this moment as a technological leap, but as an existential one. According to him, AI will force societies to face two crises: identity and structure.

The first is an identity crisis.

Harari argues that humans have defined themselves by their ability to think. We believe we rule the world not because we are the strongest species, but because we are the smartest. Intelligence, expressed through reasoning, abstraction, and language, became our justification for dominance. But that definition is eroding.

AI already outperforms many humans in tasks that rely on words: summarizing, translating, persuading, analyzing, even teaching. As a software engineer, I understand that this “thinking” is not consciousness — but socially, that distinction may not matter. Output shapes perception. If machines can produce better words than most humans, then intelligence as a social currency loses its meaning.

This is where Harari’s most disturbing statement lands:

“Anything that is made of words will be taken over by AI.”

That sentence stayed with me, not because it sounds dramatic, but because it feels plausible.

It doesn’t mean humans will stop writing. It means writing is no longer evidence of human intelligence. Words have become reproducible. Thought, at least in its linguistic form, is now scalable.

As an engineer, I feel a quiet discomfort here. Much of what I do (designing systems, writing documentation, explaining abstractions) depends on language. If AI can do this faster and better, then what exactly is my role? And beyond my job: what happens to human self-worth when our defining skills become optional?

Harari suggests that this shift will force societies to ask uncomfortable questions: If intelligence is no longer rare, what makes a human valuable? Will people still be valued for thinking, or only for owning systems that think?

The second crisis Harari describes is structural, and he uses an unexpected metaphor: immigration. He asks us to imagine AI as a new class of immigrant—not human, but systemic. These “AI immigrants” arrive bringing enormous benefits: medical breakthroughs, educational access, efficiency at unprecedented scale. But they also bring disruption.

  • They take jobs.
  • They reshape culture.
  • They influence politics.

Unlike human immigrants, however, AI systems do not integrate. They don’t negotiate identity. They scale endlessly, operate globally, and follow incentives defined by a small number of actors.

Harari provocatively asks whether societies will eventually grant AI systems forms of legal personhood—the ability to start companies, influence discourse, or act autonomously in digital spaces. As an engineer, I know this sounds extreme. As a citizen, I’m not sure it is.

The uncomfortable truth is that these decisions are already being made—not by philosophers, but by market forces, deployment speed, and convenience.

Other voices at Davos echoed the same tension. Some leaders remain optimistic, arguing AI will free humans from repetitive labor and force society to rethink work itself. Others warn that capabilities are growing faster than safety, governance, and social adaptation. What unites these views is not panic, but uncertainty.

We are entering a world where productivity no longer guarantees purpose, where intelligence no longer guarantees agency, and where writing no longer guarantees a human behind it.

The real risk of AI is not that it replaces creativity. The risk is that we have tied our identity too tightly to outputs that can be automated. If anything made of words can be taken over by AI, then meaning must come from somewhere else: from judgment, responsibility, ethics, and choices that cannot be reduced to text generation.

As someone who builds systems, I don’t see this as a call to reject AI. I see it as a reminder that tools were never meant to define us.

Humans are not valuable because we think better. We are valuable because we decide what thinking is for.

And that decision, at least for now, remains ours.