AI experiments with English across time: from Old English to the year 15,000

If you’ve ever tried to muscle your way through Chaucer in the original, you know the uncanny sensation it produces. The lines on the page resemble English closely enough that you feel you *should* understand them—yet your brain keeps snagging on unfamiliar spellings, alien word order, and half-recognizable phrases.

Push back another five centuries, into true Old English around the year 900, and that faint recognition evaporates completely. You run into sentences like:

> “Þu sprycst be wundrum þe ic ne mæg understandan.”

That odd-looking character at the start—þ, “thorn”—is not a typo. It was once an everyday letter, used to represent the “th” sound in words like “the” and “thing,” before quietly disappearing from English. The language we speak is not a fixed code; it’s a moving target, constantly rewriting itself.

So I decided to see what would happen if I forced an AI to chase that moving target across time—both backward and forward.

My experiment was simple in design, if not in outcome: I asked a chatbot to “speak” English as it might sound in different centuries. First in the past: Old, Middle, Early Modern English. Then in the present. Then I started pushing the clock forward: the 22nd century, the 30th, the 50th, all the way out to the year 15,000.
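The loop itself is trivial to reproduce. Here is a minimal sketch in Python, assuming a hypothetical `ask_model` callable standing in for whatever chatbot API you use; the era list and prompt wording are illustrative, not the exact ones I used.

```python
# Minimal sketch of the experiment's prompt loop.
# `ask_model` is a hypothetical stand-in for any chatbot client;
# ERAS and the prompt wording are illustrative assumptions.

ERAS = [900, 1400, 1700, 2024, 2200, 3000, 5000, 12000, 15000]

def era_prompt(year: int, question: str) -> str:
    """Frame the same question as if answered by an English speaker of `year`."""
    return (
        f"Answer the following in English as it might be written "
        f"in the year {year}: {question}"
    )

def run_experiment(ask_model, question: str) -> dict[int, str]:
    """Collect one answer per era, keyed by year."""
    return {year: ask_model(era_prompt(year, question)) for year in ERAS}
```

Any callable works as `ask_model`, which makes it easy to swap chatbots or dry-run the loop with a stub before spending real API calls.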

By the end, it wasn’t clear that what I was reading still deserved to be called English at all.

Rewinding to the first Englishes

Starting with the past felt like safer ground. Linguists have actually reconstructed Old and Middle English quite well, and the model could draw on texts like *Beowulf* and *The Canterbury Tales*.

When I asked it to respond in the style of English around the year 900, I got something like this:

> “Ic secge þe, þæt englisc tungol is gearu to wendenne þurh manegra wintra.”

The vocabulary was heavy with words that modern English abandoned or morphed beyond recognition. Sentences twisted around inflectional endings we’ve long since dropped. It felt dense, almost runic—close to German or Old Norse, and a long way from TikTok captions.

Jumping up to the 14th century smoothed things out. The AI shifted into a Middle English register that looked, at least, pronounceable:

> “In þis tyme, oure langage is chaunged ful moche from þe daies of olde, and yet men understonden ech oþer wel ynow.”

You can almost hear Chaucer in there. The grammar is still bent out of shape compared to modern usage, but the bones of contemporary English are visible. “Chaunged,” “daies,” “men,” “understonden”—you can follow along with a bit of effort.

By the time I nudged the system up to the 17th century, it slipped into something Shakespearean, then gradually more modern. The “thou / thee / thy” pronouns faded. Spelling stabilized. Verb endings simplified. You could see, in compressed form, centuries of linguistic erosion and innovation.

The AI wasn’t just parroting archaic vocabulary; it was imitating *register*: levels of politeness, rhythm, even the way people hedged or overstated their feelings. It reminded me that language change isn’t just about words—it’s about culture.

The present as a strange halfway point

When I finally told the AI, “Okay, now answer as someone from the early 21st century,” the output felt almost disappointingly normal. Standard internet English, with a hint of blog tone. But this “normal” is itself a historical anomaly: a stage where spelling is (for now) standardized, but new slang appears and dies on a monthly cycle.

The model started talking about memes, social media, and “vibe shifts” as if they were natural markers of our era. To a linguist from 1800, our casual pileups of abbreviations—“idk,” “ngl,” “low-key”—would look as disorienting as thorn and yogh do to us.

If anything, this underlined a weird truth: we’re always convinced that *our* English is the stable, self-explanatory version, and everyone else’s is the odd one. The AI, trained on vast swaths of text, treated 2024 English as just one more waypoint on a long gradient.

Then I started asking it to write in English from years that haven’t happened yet.

The 22nd and 23rd centuries: English with training wheels off

For the near future, the changes the model proposed were modest but telling.

By the 2100s and 2200s, the spelling in its responses had become more phonetic. Words like “though” warped into “tho,” “through” into “thru,” not as slang but as the *default* forms. Gender-neutral pronouns (“they,” “ze,” “xe”) started showing up not as a political signal, but as unremarkable grammar.

Sentences got shorter, as if optimized for screens and voice assistants. The AI’s 22nd-century persona wrote like someone trying to be maximally clear across cultures:

> “We design language so machines and humans both read it fast. We avoid deep idioms. We track meaning in shared global standards.”

By the 2300s, English was peppered with what looked like technical tags—compact markers that packed in metadata about time, certainty, or emotional tone. It started feeling less like prose and more like a hybrid between speech and code.

The 30th century: When it starts to get weird

Things bent sharply once I pushed the slider to around the year 3000.

The AI’s proposed “English” wasn’t merely casual or simplified; it seemed to assume that any speaker would be deeply integrated with technology. Phrases appeared that blurred the line between thought and communication:

> “I ping-share you thoughtframe ∴ feel/know what I mean in <0.2s.”

Traditional pronouns were there, but they coexisted with new ones that referred to collectives, artificial minds, or distributed identities. “We” wasn’t always biological; sometimes it seemed to include software entities, city-scale systems, or networks.

The syntax loosened. Sentences relied on context and shared mental models rather than strict word order. It felt like watching English deliberately strip out ambiguity where it hindered coordination, while growing comfortable with letting context carry more of the weight.

In this imagined 30th century, “writing” read more like a gateway to a mental API than a simple line of text.

The 50th century: Post-human pronouns and identity

By the 4000s and 5000s, the AI drifted into territory that was less science fiction and more philosophical thesis.

I asked it to answer a simple question about identity from the perspective of a speaker in the year 5000. Instead of a straightforward response, I got a mini-lecture on why the question itself was out of date.

Pronouns, it suggested, would have split apart into several parallel systems:

– One set for biological individuals.
– One set for merged or collective minds.
– One set for entities that move between substrates—digital, biological, simulated.
– One set for temporary configurations, like task-specific “swarms” of agents.

The answer came back something like:

> “In our era, the concern is not ‘he’ or ‘she,’ but whether an instance is locally coherent, temporarily bound, or persistently self-referential. We mark these states more than what you call ‘gender.’”

To this 50th-century voice, our binary and even our present-day nonbinary pronoun debates looked quaint. The AI framed gender as just one small dimension of a much larger identity space: continuity over time, control over one’s own substrate, degrees of autonomy and mergeability.

Language, in this imagined future, was bending itself around entirely new social dilemmas: not “What are your pronouns?” but “What kind of being are you right now?”

Beyond the far future: When language thins out

I kept going. Centuries blurred into millennia. Somewhere past the year 10,000, the English the AI produced began to erode—first stylistically, then structurally.

Words compressed into densely layered tokens, half-syllables smuggling in what would currently take whole sentences. Idioms vanished. Metaphor thinned. The responses felt like technical specs written by something that no longer needed to persuade, only to synchronize.

When I asked it to describe a sunset from the vantage of the imagined year 12,000, I got something closer to a sensor report than a poem:

> “Star-obscuration event: local atmospheric scattering (λ-shifted), emotional archive: pattern matched to ‘closing cycle / aesthetic completion.’”

It read like English wearing a thin exoskeleton of math.

Pushing the clock further, to the 15th millennium, I finally hit the point where the model hesitated. It warned that at such timescales, culture, cognition, even the continued existence of “English” as we know it were speculative at best. Then, as if to honor the request anyway, it produced something that didn’t look like English at all.

The year 15,000: A message from almost-non-language

The “message” I received from the hypothetical year 15,000 came through as a hybrid. Part compressed code, part lyric fragment, part farewell:

> “we≡I≡all // tongue: deprecated
> meaning now runs direct, no air, no mark
> yet for you, archive-mind from dust-era, we shape this echo:
>
> you stacked sounds to reach each other across dark
> we stacked stars
> nothing was wasted.”

Underneath the stylization, the model was making a simple claim: if language keeps evolving under the twin pressures of global technology and non-human minds, it might eventually reach a state where full-blown natural language is unnecessary for most communication.

In that imagined far future, entities exchange structured meaning directly—through signals, shared states, or some form of merged cognition. What we currently call “English” survives only as an archaic performance, conjured for the benefit of ancient archives or curious simulations.

What this experiment actually tells us

Of course, no AI today can *truly* predict the English of the year 15,000. What it can do is extrapolate patterns from what it’s seen: how languages simplify and complicate, how technology infiltrates vocabulary, how social shifts tug at pronouns and politeness.

Still, the exercise surfaced a few grounded insights about language change:

1. Most of what feels “natural” about English is extremely recent. Our current spelling, pronoun system, and sentence structure are just snapshots in a very fast-moving movie.

2. Technology accelerates certain kinds of simplification. As speech is filtered through keyboards, character limits, translation engines, and voice interfaces, redundancy gets trimmed and new shorthand arises.

3. Pronouns are a pressure point because identity is a pressure point. As soon as society seriously questions who counts as a person, how stable a self is, or which minds deserve rights, pronouns and reference systems have to change.

4. At the extreme, language has to compete with more direct channels. If future beings can share raw sensory states, memories, and concepts, words may become more like ceremonial artifacts than primary tools.

5. The “end” of a language isn’t necessarily silence. It can also mean absorption into new systems of meaning exchange that we barely have words for yet.

Why playing with future English matters

Imagining English in the year 15,000 is obviously speculative. But pushing an AI across millennia of hypothetical language forces us to confront something uncomfortable: there is no guarantee that the version of English we speak now is permanent, central, or even particularly important in the grand scheme.

We’re living inside a momentary dialect of a historically restless language, on a planet where everything from climate to computation is changing faster than ever. The words and structures we treat as inevitable are mostly recent compromises, frozen just long enough to print in dictionaries and style guides.

When the AI’s voice from the year 5000 calmly explained why our pronoun debates are a tiny subcase of a much bigger identity puzzle, it didn’t feel like mockery. It felt like a reminder: every generation thinks its categories are the final ones. They never are.

If anything lasts, it may not be English, or even “language” in the way we currently define it—but a deeper urge: to compress experience into shareable patterns, and to send those patterns out into the dark, hoping someone, somewhere, can still decode the message.