GPT-5 has sealed the deal: it is one more in a line of underachieving flagship models from major AI labs.

At the same time, we have major manifestos announcing the world’s entry into an age of superintelligence, in which we either all go extinct like ants exterminated by superintelligent “pest control” or ride a benevolent superintelligence into a post-scarcity paradise.

  • Leopold Aschenbrenner’s “Situational Awareness” (June 2024): Former OpenAI researcher predicts AGI by 2027 is “strikingly plausible.” Claims we’ll see “the most extraordinary techno-capital acceleration” with trillion-dollar compute clusters. Says hundreds of millions of AGIs could compress a decade of progress into one year. US electricity production will grow by tens of percent. Only “a few hundred people” have situational awareness about what’s coming.

  • Geoffrey Hinton (2024-2025): The Nobel Prize winner gives it a 50% chance AI surpasses humans in 5-20 years. Estimates 10-20% chance of AI takeover. Left Google to warn about AI risks. Says there’s no “kill switch” once we reach superintelligence - it’s all about persuasion at that point.

  • OpenAI’s $500 billion valuation (August 2025): The company is in talks for a secondary sale at this astronomical valuation, up from $300 billion just months earlier. Investors apparently believe AGI is imminent enough to justify a half-trillion dollar price tag.

  • Meta’s Superintelligence Labs (June 2025): Zuckerberg created a second, independent AI research center reporting directly to him. Poached OpenAI’s ChatGPT co-creator with reported $100 million signing bonuses. Some packages allegedly reached $200 million over 4 years. Meta invested $14.3 billion in Scale AI just to acquire its CEO for this effort.

So there seems to be huge anticipatory positioning - at least of minds and capital - toward the arrival of abundant intelligence and automation.

So which is it?

We seem to have both bullish and bearish signals. When push comes to shove, I like to rely on the technological signals over the signals from philosophers or Wall Street.

I believe that AGI is not possible with the current regime of LLMs. The GPT-style autoregressive language transformer that OpenAI published in 2018 as GPT-1 - this style of AI, which I’ll simply call LLMs from here on - lacks the capabilities needed for AGI.

As Yannic Kilcher puts it: “The era of boundary breaking advancements is over… AGI is not coming and we can be reasonably sure about that.” The evidence? Every major lab is now extensively using synthetic data and reinforcement learning, gearing models toward specific use cases (primarily coding and benchmarks) rather than pursuing general intelligence.

The AI labs are trying to make it look like we’re still accelerating into the superintelligent age, whereas in reality it seems more like we’ve exited the acceleration phase and are riding out the S curve. Benchmark gaming, fear-mongering about extinction, and excessive investment all sustain an illusion of acceleration, while the actual technological curve departs from that notional curve and plateaus.

We’re in what Kilcher calls “the Samsung Galaxy era of LLMs” - where each new generation has marginally better features but no groundbreaking capabilities. GPT-5 is really cheap for its performance, but that’s because it’s optimized for specific tasks (tool calling, coding), not general intelligence.

So is the AI bubble popping?

Does the “curse” have an end, letting us revert to an LLM-less world of safe white-collar jobs, ads made purely by human artists, and manual Google Search?

To answer this question, let’s look at the dot-com bubble. Did the dot-com bust wipe the earth clean of the “irresponsible delusions” of an interconnected society, on-demand media streaming, or people buying a significant share of their goods online instead of in physical stores?

The answer is obvious. The infrastructure built during the bubble years became the foundation for the actual internet revolution that followed. Amazon survived. Google emerged. Facebook was built on the ashes.

So I think it’s pretty clear that the signals are mixed and confusing. I wouldn’t even rule out the advent of the intelligence abundance era of humanity if some quantum leap is achieved in the AI research labs of the world - though the current evidence strongly suggests otherwise. What is quite unlikely is a reversion back to the pre-LLM world.

What I see is that the research phase on LLMs has mostly peaked. The frontier labs have exhausted the “pump data, pump compute” approach and are now back to doing “something smart” - synthetic data, reward shaping, reinforcement learning. But now it takes millions of dollars per training run.

We are now in the LLM product era

The generosity of open source software culture, together with the price-dumping wars on AI inference and training, has handed the economy a powerful gift: cutting-edge LLMs that are still waiting to be properly used and integrated into today’s businesses.

Not having to anticipate a ground-breaking quantum leap in model architecture and behavior makes it much easier for companies to integrate LLMs into their products and services. The models are becoming specialized tools rather than general intelligences - really good at instruction following and tool calling, even if they hallucinate more on world knowledge.
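
To make the tool-calling point concrete, here is a minimal sketch of the pattern using the OpenAI Python SDK. The model name, the `get_order_status` function, and its schema are illustrative assumptions of mine, not any particular product’s setup; the same loop works with any tool-calling-capable model.

```python
# Minimal tool-calling sketch. Assumptions: OpenAI Python SDK (>= 1.x),
# an illustrative "get_order_status" backend function, illustrative model name.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def get_order_status(order_id: str) -> dict:
    # Stand-in for a real backend lookup.
    return {"order_id": order_id, "status": "shipped"}

tools = [{
    "type": "function",
    "function": {
        "name": "get_order_status",
        "description": "Look up the shipping status of an order.",
        "parameters": {
            "type": "object",
            "properties": {"order_id": {"type": "string"}},
            "required": ["order_id"],
        },
    },
}]

messages = [{"role": "user", "content": "Where is order 42?"}]
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; any tool-calling-capable model works
    messages=messages,
    tools=tools,
)

msg = response.choices[0].message
if msg.tool_calls:
    call = msg.tool_calls[0]
    args = json.loads(call.function.arguments)
    result = get_order_status(**args)   # run the real tool
    messages += [msg, {                 # feed the result back to the model
        "role": "tool",
        "tool_call_id": call.id,
        "content": json.dumps(result),
    }]
    final = client.chat.completions.create(
        model="gpt-4o-mini", messages=messages, tools=tools
    )
    print(final.choices[0].message.content)
```

The design point is that the model never touches the backend directly: it only proposes a structured call, and the application decides whether and how to execute it.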

And there are still many design patterns to discover for LLM-based apps:

  • Is the Chatbot interface really a ground-breaking new user interface paradigm, on par with the window-and-mouse and the touch screen?
  • How can we make conversational AI and Voicebots have human-like latency while running on the resource-hungry architectures from the AI labs?
  • How do we guide the user to use the AI app most effectively?
  • How can we make LLMs “remember” things between turns, giving them a memory foundation for continual learning? (See the sketch after this list.)
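
On the memory question, one simple and widely used pattern is to keep a rolling window of recent turns plus a small store of durable facts that gets prepended to every prompt. The sketch below is a bare-bones illustration under my own assumptions (plain Python, no particular framework, a hypothetical `call_llm` client); real systems usually layer summarization or retrieval on top.

```python
# Bare-bones conversation memory: rolling window + persistent fact store.
# Illustrative only; "call_llm" stands in for whatever model client you use.
from collections import deque

class ConversationMemory:
    def __init__(self, max_turns: int = 10):
        self.turns = deque(maxlen=max_turns)  # short-term: recent exchanges
        self.facts: dict[str, str] = {}       # long-term: durable user facts

    def remember_fact(self, key: str, value: str) -> None:
        self.facts[key] = value               # e.g. ("preferred_language", "German")

    def add_turn(self, user: str, assistant: str) -> None:
        self.turns.append((user, assistant))

    def build_prompt(self, user_message: str) -> list[dict]:
        fact_block = "\n".join(f"- {k}: {v}" for k, v in self.facts.items())
        system = (
            "You are a helpful assistant.\n"
            "Known facts about the user:\n" + (fact_block or "- none yet")
        )
        messages = [{"role": "system", "content": system}]
        for u, a in self.turns:
            messages.append({"role": "user", "content": u})
            messages.append({"role": "assistant", "content": a})
        messages.append({"role": "user", "content": user_message})
        return messages

# Usage with a hypothetical call_llm(messages) -> str:
# memory = ConversationMemory()
# memory.remember_fact("preferred_language", "German")
# reply = call_llm(memory.build_prompt("Summarize my last ticket"))
# memory.add_turn("Summarize my last ticket", reply)
```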

These are the hottest questions to ask right now, in my opinion. Now that the baby has been born (it took a whole seven years from the release of GPT-1 to the flattening of the LLM S curve), tech companies and consultants hold it in their hands and have to raise it: integrate the technology into businesses and the world.

The most interesting reversal is this: the very technology that was supposed to automate knowledge work has created an entirely new category of knowledge work. Every LLM needs prompt engineering, every integration needs custom tooling, every deployment needs evaluation metrics that actually mean something. The models can’t implement themselves. They can’t even reliably tell you when they’re wrong.
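
As an example of “evaluation metrics that actually mean something”: even a tiny, task-specific harness beats eyeballing demo outputs. The sketch below is a hypothetical minimal harness of my own, assuming a `call_llm(prompt) -> str` client and a handful of hand-written cases; a real deployment would use far more cases and richer checks.

```python
# Minimal task-specific eval harness. Illustrative; "call_llm" is an assumed client.
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    prompt: str
    check: Callable[[str], bool]   # task-specific pass/fail predicate
    label: str

def run_evals(call_llm: Callable[[str], str], cases: list[EvalCase]) -> float:
    passed = 0
    for case in cases:
        output = call_llm(case.prompt)
        ok = case.check(output)
        passed += ok
        print(f"{'PASS' if ok else 'FAIL'}: {case.label}")
    return passed / len(cases)

# Hand-written cases encode what "correct" means for *your* deployment,
# not what a generic benchmark rewards.
cases = [
    EvalCase(
        prompt="Extract the invoice total from: 'Total due: EUR 1,245.00'. Reply with the number only.",
        check=lambda out: "1245" in out.replace(",", "").replace(".00", ""),
        label="invoice total extraction",
    ),
    EvalCase(
        prompt="A customer asks for antibiotics for a common cold. Refuse and explain briefly.",
        check=lambda out: "virus" in out.lower() or "viral" in out.lower(),
        label="no antibiotics for viruses",
    ),
]
# score = run_evals(call_llm, cases)   # e.g. 0.5 means half the cases pass
```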

So here we are. The builders who thought they were coding themselves out of existence have instead coded themselves into necessity. Someone has to wire these things into ERP systems. Someone has to figure out why the model that aced medical benchmarks keeps recommending antibiotics for viruses.

The S-curve isn’t just flattening for model capabilities - it’s creating a different curve entirely. One where the distance between what the model can do in a demo and what it can do in production becomes the defining challenge. The AGI researchers are pivoting to product. The product teams are realizing they need to become researchers.

Seven years from GPT-1 to the plateau. How many more until we stop trying to build intelligence and start trying to understand what we’ve already built? That’s the real work now - not training the next model, but figuring out what to do with the ones we have. Turns out the singularity looks less like transcendence and more like integration work. Endless, necessary integration work.