08.04.2026

HOLTZBRINCK ON: KNOWLEDGE

The Tools That Shape Us

Insight by Daniel Hook, April 08, 2026

We do not usually think of language as a technology, but we should. Language defines our world: if it lacks the concepts we need, there are ideas we cannot access, articulate, or share. In this sense, language is a kind of cage — a structure that shapes thought from the inside, invisibly. We can only think in the directions our words allow.

Every communication technology we have ever built has been, among other things, a new version of that cage — expanding some possibilities, foreclosing others, and quietly reshaping the people who use it. The tools we reach for to understand the world end up determining what we can understand. This is the oldest story in the history of human invention. It is also, right now, the most urgent one.

When the printing press arrived in the 15th century, the Venetian editor Hieronimo Squarciafico warned that the “abundance of books makes men less studious” — that easy access to standardised text would erode the skills needed to decode older, more cryptic forms of knowledge. He was not entirely wrong. Some capabilities were lost. But the trade, by any reckoning, was a favourable one. Literacy spread. Scientific ideas propagated. Scholarship democratised. The cage expanded, and humanity grew to fill it.

This has been the pattern every time. With every innovation we gain a new capability, but with every adaptation we lose an older skill. And until now, that process has always moved us forward — because we have always, eventually, been able to keep up.

The internet gave us the most vivid recent example. Within just thirty years, search engines moved through three distinct paradigms: keyword matching, then entity-aware search, then generative answers. Each transition rewired our habits. And each time, we adapted.

Keyword search was its own kind of cage — rigid, visible, its bars made of Boolean logic. It forced you to translate the nuance of a genuine question into a handful of terms. There are questions you simply cannot encapsulate faithfully in keywords. But at least you could see where it confined you: you could find the keywords or their synonyms in the evidence returned, trace the reasoning, interrogate the sources. The cage was visible.

Chat-based systems powered by large language models have flipped the paradigm — not gradually, but almost overnight. Instead of returning evidence for you to evaluate, they give you answers. Instead of requiring you to translate intent into keyword code, they invite you to simply ask. The machine meets you where you are.

The cage has not disappeared. At one level, it appears to have become more expansive, but at another level, it is far harder to see.

Prompting feels like liberation because the interface is conversational, natural, almost frictionless. But its walls are woven from the model’s training data, and its omissions are invisible. Competing perspectives are absent unless explicitly requested. Methodological disagreements go unmentioned. The scholarly debates behind the confident summary you have just received are nowhere in evidence. When documents are excluded from training, the concepts they carry become inaccessible. We risk arriving at a world in which those who choose the training data define what can be thought about. Unlike a search index, which you could in principle interrogate, the boundaries of an LLM’s knowledge are opaque.  

One can think of research as the constant search for the edges of the cage of our understanding of the world: new concepts and new language often find their genesis in research communities, coined to describe things we are still struggling to understand. Einstein had to extend the language of mathematics to encapsulate his General Theory of Relativity. In doing so, he had the advantage of knowing that he could ground his extensions in predictions about observed physical reality.

Where physics has spectrometers and telescopes, the measurement apparatus of human cognition is language. If an LLM defines the measurements available to us, it may become impossible even to know that there is a cage at all, or that others have put an implicit set of limitations in place.

This matters enormously for what we stand to lose. Where keyword search required you to do the cognitive work of assembling and evaluating information, LLMs invite you to outsource that work to the machine. When we stop gathering and start accepting, we change the role of the researcher — and perhaps even the shape of knowledge itself. The cognitive skills that structured search once demanded — source evaluation, the disciplined construction of a research question, the habit of sitting with uncertainty — may quietly atrophy, because the tool no longer requires them of us.

Which brings us back to Squarciafico — and to the question he never had to ask, because in his day the answer was always eventually yes:

Can we keep up?

In one version of the future, we are augmented. LLMs become a cognitive prosthetic — not replacing our judgment, but extending our reach, just as Galileo’s telescope extended his vision. We access more ideas, make better connections, articulate questions we didn’t previously have the vocabulary to form.

But there is another version — one in which the tools move so fast, and become so capable, that adaptation never catches up. One in which we stop forming our own questions because the machine forms them more fluently. One in which the synthesis is so seamless that the sources become invisible, and with them, the very idea of verifiable truth. Not augmentation, but displacement. Not a new equilibrium, but a permanent dependency on systems we do not understand, cannot audit, and did not choose.

The difference between those two futures is not determined by the technology. It is determined by whether we are conscious enough of the transition to shape it — whether we design tools, educational environments, and research practices that keep human judgment at the centre, or whether we simply let the convenience of acceptance replace the discipline of enquiry.

Of all the technologies humans have created, linguistic ones are both the most powerful and the least examined. We treat language as a given — the water we swim in — rather than as a constructed tool with its own biases, blind spots, and boundaries. Most people command only a fraction of the expressive range their language makes possible; true mastery is rare enough that we revere those who possess it. The great storytellers, the poets, the orators — they are the ones who have learned to move within the cage with unusual freedom, and we have always recognised them as something close to the souls of their age.

LLMs have achieved something unprecedented: a fluency in language that most humans will never match, deployed at a scale no human storyteller could approach. In one sense, this is the cage made newly accessible — its full expressive range, available to anyone with a prompt. But the question of who benefits most from that access is not a technical one. It is a question of power. The cage has always had architects. What is new is that, for the first time, most of us cannot see them — or the walls they are building.

Daniel Hook is CEO of Digital Science, a Holtzbrinck company.
