The Most Interesting Thing in That Emad Mostaque Interview Wasn’t the Doom Talk
08/04/2026
What stood out to me most from Emad Mostaque’s interview wasn’t the collapse talk, but the way he’s using AI for government research and simulated teams of historical minds.
I watched that Emad Mostaque interview and the bit that stayed with me was not really the “human labour goes negative” line, even though that is obviously the bit designed to hit people in the chest. The more interesting part, at least to me, was what he seems to actually want AI for. Not just the usual productivity sludge, not “write my emails faster” and not the same dead conversation about whether ChatGPT can replace junior staff.
What caught my attention was the country-specific government research idea, and alongside it the weirdly compelling notion of using AI almost like a panel of resurrected historical minds. That is where it stopped feeling like a normal AI interview and started feeling like the interface to something much stranger.
What made it land for me is that he was not talking about AI like a better search engine. He was talking about systems that can absorb massive amounts of domain knowledge, compare across countries, track what one government or medical system is doing differently from another, and then actually help guide decisions.

He ties that back to his earlier autism research, where he says he built an AI team to analyse the literature from first principles, and then to COVID, when he helped set up a United Nations-backed initiative to organise collective knowledge with AI because treatment protocols and useful information were not flowing properly between countries. That part matters because it is not theory; it is clearly connected to how he already thinks and works. He is not describing a chatbot with a flag attached to it. He is describing something closer to a sovereign research layer that can reason over a nation’s systems instead of just spitting back summaries.
That, to me, is a much bigger idea than most of the AI stuff people talk about online.
Most people are still trapped in this consumer version of AI where the whole game is prompting better, or wiring together a few tools, or making an assistant that can kind of half-do admin work without falling over. Useful, sure. I use that stuff myself. But this is a different category.
If you train or assemble systems around country-level healthcare, infrastructure, regulation, education, tax, supply chains, demographics, whatever else, then you are not just asking AI for answers anymore. You are creating something that can model how a place actually functions. That has obvious upside, and some pretty ugly downside as well, but it is at least a real ambition and not just another “AI for content creators” dead-end.
The other thing I kept thinking about was the “team of past geniuses” idea, because it sounds ridiculous at first and then becomes more interesting the longer you sit with it. The easy dumb version of this is the same old prompt junk where people tell the model it is Einstein, Sun Tzu, Steve Jobs and Marcus Aurelius at the same time, as if dressing the system in costumes somehow makes it profound.
I do not mean that, obviously; I have talked before about how I feel about telling AI "You're a Wizard, Harry". I mean something more like constructing a reasoning environment where different intellectual traditions, different writing corpora, different historical patterns of thought, and different specialised modes of analysis are all brought into the same problem space. Not as necromancy, but as structured lenses. Instead of one bland assistant giving you the statistical average answer, you force tension into the system. You make it argue, compare, challenge, refine.

That idea is a lot more interesting than most agent talk because it shifts the point of AI from “give me output” to “build me a room full of pressure-tested perspectives”. I think that is what grabbed me. A single model is useful. A model set up to emulate contention between different minds, fields, assumptions and value systems is potentially far more useful, especially for research.
If you were exploring a government policy question, or a social problem, or a medical framework, or even a product strategy, you could imagine a setup where one thread is obsessed with incentives, another with ethics, another with implementation, another with unintended second-order effects, another with long historical pattern matching. That is much more interesting than a one-shot answer, and honestly much closer to how actual thinking works when it is good.
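To make that concrete, here is a minimal sketch of what such a setup might look like in code. Everything in it is hypothetical: the lens names, the `call_model` function (a stub standing in for whatever LLM API you actually use), and the two-pass answer-then-critique structure are my illustration of the idea, not anything described in the interview.

```python
# A minimal sketch of "synthetic plurality": one question is run through
# several analytical lenses, and each lens's answer is then critiqued by
# the others before anything is surfaced to the user.

# Hypothetical lenses, roughly matching the threads described above.
LENSES = {
    "incentives": "Who gains, who pays, and what behaviour does this reward?",
    "ethics": "Who could be harmed, and is that harm justified?",
    "implementation": "What breaks when this meets real institutions?",
    "second_order": "What happens two steps after the obvious effect?",
}

def call_model(system_prompt: str, user_prompt: str) -> str:
    # Stub: in practice this would call your model of choice.
    # Here it just echoes enough structure to keep the sketch runnable.
    lens_name = system_prompt.split(":")[0]
    return f"[{lens_name}] analysis of: {user_prompt}"

def panel(question: str) -> dict:
    """Run the question through every lens, then have each lens
    critique the combined set of first-pass answers."""
    first_pass = {
        name: call_model(f"{name}: {framing}", question)
        for name, framing in LENSES.items()
    }
    combined = "\n".join(first_pass.values())
    critiques = {
        name: call_model(f"{name}: critique the other answers", combined)
        for name in LENSES
    }
    return {"answers": first_pass, "critiques": critiques}

result = panel("Should the health service adopt this triage model?")
```

The interesting design choice is the second pass: the value is not in any single lens's answer but in forcing the answers to collide before a human reads them, which is exactly the "room full of pressure-tested perspectives" framing rather than a one-shot reply.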
It also lines up with the part of the interview where he keeps pushing past the current “smart buddy” phase of AI and into systems that remember, learn from mistakes, communicate naturally, and operate more proactively. He talks about the public models moving from the old goldfish-brain style interaction into something that can hold memory and behave more like an ongoing entity rather than a disposable prompt box.
He also makes the point that private models are already ahead, and that the systems being built are not just for consumers but for what he sees as the future workforce. Whether someone agrees with his timeline or not, the shape of what he is describing matters. The useful shift is not just more intelligence. It is persistent, contextual intelligence applied to entire domains.
That is probably why the government research part and the historical-figures-as-team part feel connected to me rather than separate curiosities. They are really the same move. In both cases, AI is being used as a structured cognitive environment rather than a tool you poke once and close. In one case the environment is built around a country, its systems, its policies and its realities.
In the other case the environment is built around different kinds of minds, different ways of seeing, and different intellectual pressures. Both are about depth. Both are about moving away from generic assistants. Both suggest that the people getting the most out of AI over the next few years will not be the people writing clever prompts. It will be the people designing good thinking environments.

I think this is also why so much mainstream AI usage still feels shallow. Most people are still using it at the level of convenience. Make this faster. Summarise this. Write this email. Fix this code. Again, nothing wrong with that, I do the same thing constantly, but it is level one stuff. The more serious use is building systems that understand a problem space well enough to become research partners inside that space.
Not perfect, not trustworthy by default, not magically wise, but still far more useful than general chat when properly constrained. And the even more interesting layer on top of that is when you stop treating “assistant” as singular and start building internal disagreement into the system on purpose.
That is where I think the historical genius angle stops being gimmicky. It is not really about pretending Aristotle is in your laptop. It is about realising that one of AI’s best use cases may be synthetic plurality. A cheap way to create multiple analytical viewpoints and let them collide before you make decisions. Humans already do versions of this with advisers, boards, research teams and committees, except those are slow, expensive, political and usually full of people protecting their turf.
AI can fake some of that pluralism at near-zero marginal cost. Whether the quality is good enough depends entirely on the design, the context and the sources, but the structure itself is extremely compelling.
I also think it says something about where the actual frontier is for people building with this stuff. I do not think the frontier is another wrapper. It is not another “all-in-one agent dashboard” with twelve tabs and a gradient background. It is not even just better automation. The frontier looks more like domain-shaped intelligence and multi-perspective reasoning.
A system that deeply understands one slice of reality, and can then interrogate that slice from several angles at once, is probably worth a hundred generic assistants. That is the part of the interview that felt worth writing about, because it is one of the few times I have heard someone describe AI in a way that actually sounds like a new cognitive structure rather than a faster interface.
The darker part, obviously, is that the same architecture can be used for control. He talks throughout the interview about private labs, governments, closed systems, power concentration and digital feudalism, and that is not separate from these ideas either.
If country-specific AI research systems become real, who owns them matters. If simulated teams of minds become persuasive enough to guide people, who shapes those teams matters. This is where it gets messy very quickly. A sovereign research model that helps a country understand itself sounds good until you ask who defines the training, who sets the objectives, who decides what counts as a good outcome, and who gets locked out of the loop. A panel of synthetic great minds sounds brilliant until it becomes a soft manipulation layer wrapped in prestige.
Still, even with all that, those two use cases were the most genuinely interesting parts of the conversation for me. Not because they are safe, and not because I think they are finished or even close to finished, but because they point to a smarter way of thinking about AI.
Less as a chatbot. Less as a content machine. More as a system for constructing serious research contexts. That is a far bigger leap than “AI can now make slides” or “AI can now book appointments”. It is closer to asking what kinds of synthetic thinking spaces we can build, and who gets to use them.
That is probably why the interview stuck with me. The loud headline is the 50/50 odds stuff. The real signal, for me anyway, was buried underneath it. He is not just talking about intelligence getting cheaper. He is talking about intelligence becoming shapeable. Country-shaped. History-shaped. Perspective-shaped. That is where it gets weird, and that is also where it starts to get genuinely useful.