AI is a bogan


The AFR is in full panic mode about AI.

AI stocks are lifting Wall Street to record highs on what feels like a daily basis, and former Google executive turned author and entrepreneur Mo Gawdat is predicting we are just two years away from a dystopian period that could last about 15 years during which AI fuels rising levels of crime, mass surveillance, conflict, loneliness and, of course, job losses.

Lots and lots and lots of job losses. To the point where AI even replaces CEOs.

…the reason Gawdat’s comments have resonated so much is because they tap into a sense of fear that has been building in the middle class.

In Australia, as in many Western countries, the basic intergenerational compact – study hard, work hard, be sensible, and you’ll enjoy a more prosperous life than your parents – is breaking down.

Housing is out of reach for so many younger members of the middle class, and a huge burden for those who can afford it. Economic growth has stagnated while living costs have accelerated. Climate change poses an existential threat. The gap between older, wealthier, asset-rich households, who've enjoyed 40 years of falling interest rates and rising financial markets, and younger generations has become a yawning chasm.

…the arrival of the AI boom has only turbocharged this sense of middle-class dread.

…Gawdat’s fear is that there will be few jobs unaffected by AI, and the idea that the technology will create new jobs is “100 per cent crap” because AI’s goal is not the augmentation of humans, but the replacement of them.

To some extent, yes, but not really, either.

AI is not going to replace jobs requiring deductive reasoning or imagination, and there are more of those than most realise. AI can't replicate these human functions, and that limits its usefulness.

Tech legend Erik J. Larson:

Wide AI is a genuine advance — another milestone in AI’s seventy-year odyssey — but it doesn’t escape the automation label. Its outputs are still prone to illogical, mindless “hallucinations” that expose the fundamentally statistical nature of their inference. They are induction engines, not minds. And their uses track perfectly with what we expect from automation: replacing cognitive work. That means familiar fears about deskilling, erosion of expertise, and job loss — as industries adopt LLMs to eliminate or simplify entire categories of work. We’ve simply traded the spinning frames of Luddite England for clusters of GPUs in cloud datacenters.


Philosophically, ten years on, Wide AI is an obvious innovation made possible by Moore’s Law and by OpenAI’s early championing of the “scaling hypothesis” — the idea that bigger models, trained on ever-larger datasets, will yield better results. This has driven a gold rush for massive datasets and GPU megacampuses. In other words, the automation argument I made nearly a decade ago has only grown stronger. AI is exactly automation — and seeing it clearly as such is the first step toward figuring out how to use it well, and what to guard against.

What Narrow AI/Wide AI Really Is

Strip away the hype and here’s how today’s “intelligent” systems — from Deep Blue to AlphaGo to GPT-5 — actually work.

Every AI system, no matter how dazzling, begins with two engineering moves:

Representation. Translate the problem into a form a computer can work on — a chessboard into a game tree, speech into a sequence of phonemes, language into tokens and embeddings.

Specification. Define a finite sequence of computational steps that can operate on that representation to produce outputs — minimax search in chess, convolutional layers in vision, transformer attention in language.

If you can’t represent and specify the problem, you can’t automate it. If you can, you’ve already narrowed it.
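To make those two moves concrete, here is a minimal, hypothetical sketch in Python: the representation is a tic-tac-toe board as a tuple of nine cells, and the specification is exhaustive minimax search over its game tree. Deep Blue's actual representation and search were vastly more sophisticated, but the engineering shape is the same.

```python
# Representation: a tic-tac-toe board as a tuple of 9 cells ('X', 'O', or None).
# Specification: exhaustive minimax search over the game tree.
# (Illustrative toy only -- real chess engines use far richer representations
# and heavily optimised search, but the two engineering moves are identical.)

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Best achievable score for 'X': +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0  # board full, no winner: a draw
    scores = []
    for i in moves:
        child = board[:i] + (player,) + board[i + 1:]
        scores.append(minimax(child, 'O' if player == 'X' else 'X'))
    return max(scores) if player == 'X' else min(scores)

empty = (None,) * 9
print(minimax(empty, 'X'))  # 0: perfect play from both sides is a draw
```

Notice what has been given up: this program is unbeatable at tic-tac-toe and helpless at everything else. The representation and specification are the narrowing.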

That’s why Deep Blue could crush Garry Kasparov at chess but couldn’t play checkers, and why AlphaGo could beat Lee Sedol but couldn’t hold a conversation. Even the latest large language models still live inside these constraints. Their breadth of conversation comes from scaling statistical token prediction over a massive training distribution, not from breaking free of representation and specification.
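For the language case, a toy bigram model makes the point: count which token follows which in a training corpus, then sample proportionally. The corpus below is invented for illustration, and real LLMs condition on long contexts through transformer attention rather than a single previous token, but the inference is the same in kind: probability over a training distribution, not understanding.

```python
# A toy illustration of statistical next-token prediction: a bigram model.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the dog sat on the rug".split()

# Count which token follows which in the training data.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_token(prev):
    counts = follows[prev]
    if not counts:  # unseen or terminal token: the model has nothing to say
        return None
    tokens, weights = zip(*counts.items())
    return random.choices(tokens, weights=weights)[0]

token, out = "the", ["the"]
for _ in range(8):
    token = next_token(token)
    if token is None:
        break
    out.append(token)
print(" ".join(out))  # locally fluent-looking, with no grasp of meaning
```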

In fact, if Wide AI were introducing something truly new to the "narrowness trap" of AI, we'd see a path opening up to generalize language models and make continuing progress toward the Holy Grail of AGI. We don't see that. The limits of the scaling hypothesis, which proved so fecund initially, are now evident in the failure of deep-pocketed companies like OpenAI to extend the story of Wide AI with newer releases like GPT-5. Truly general intelligence is still a mystery; indeed, it's more mysterious now than it was in 2016. Even when we get a quantum leap forward, it quickly signals that it, too, is a dead end for the bolder ambitions of true AGI. In 2025 it has become clear that the more things change, the more they stay the same.

The Illusion of “Getting Smarter”

So Narrow AI, and now Wide AI, doesn't naturally expand into general intelligence. It gets more powerful inside its domain, but the very process that makes it work, reducing the world to a representation and running a specification on it, strips away the open-ended flexibility of a mind. In the case of LLMs, the representation is a token sequence and the specification is next-token prediction tuned to mimic the syntax, semantics and pragmatics of natural language. The barrier to AGI here is actually a restatement of the engineering problem: embedding endless sequences of tokens into vectors to compute similarity and simulate meaning still runs up against the basic problem that gleaning probabilistic information about language is not the same as understanding it. Let this be yet another lesson for us.
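That "similarity simulating meaning" step can be sketched in a few lines: words become vectors, and "meaning" becomes cosine similarity between them. The three-dimensional vectors and vocabulary below are invented for illustration; real embeddings have hundreds or thousands of learned dimensions that nobody labels.

```python
# A sketch of similarity standing in for meaning: toy word vectors
# compared by cosine similarity. Geometry, not understanding.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Hypothetical 3-d embeddings; the dimensions might loosely track
# animal-ness, vehicle-ness and size, but the model never "knows" that.
emb = {
    "cat":   [0.9, 0.1, 0.2],
    "dog":   [0.8, 0.1, 0.3],
    "truck": [0.1, 0.9, 0.8],
}

print(cosine(emb["cat"], emb["dog"]))    # high: "similar" by geometry alone
print(cosine(emb["cat"], emb["truck"]))  # low: "different" by geometry alone
```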

“Induction engines” are useful, but they do not move beyond the specifics of what they know from lived experience. Any novel problem will be misinterpreted and made worse by hallucination.

Weirdly enough, this description matches the economic utility of your average Aussie bogan: self-referential, entitled, and trapped in what already exists.

This is AI. A kind of superbogan that can and will displace the lowest rungs of service delivery.

Will it do it better than the existing bogan? Yes, so long as your demand does not fall outside the realm of its lived experience.


But, if not, it’s likely to lead you horribly astray, more so than the average bogan, because it is not constrained by fear or stretched by empathy.

The corollary is that there will be a new distribution in service outcomes. The majority of interactions will be more efficient, but if your demand lies outside of this, then your outcome is likely to be considerably worse, maybe even destructive.

This is the tech service model that has been overtaking business for decades. As it gets ever more efficient and automated, the gains are immense at the peak of the curve, but the losses at the margin intensify.


Not for the business. For you. Because if your demand is idiosyncratic in any way, it will either be ignored or disappear into some AI hallucination, and your going elsewhere becomes the cost of doing business.

One has to ask how far up the value curve this kind of functionality can go and what impact it will have.

To the extent that higher-functioning services are already replete with dysfunction via existing bogans made good, the answer is some distance.


Induction engines lifting this service experience to a kind of highest common denominator is probably no bad thing. And the result is no different from any other automation revolution. Higher productivity and incomes lead to jobs created elsewhere.

Meanwhile, if you are good at your job, and it requires genuine thinking and stretching beyond lived experience, then you will become more valuable, not less, as AI bogans proliferate, democratising higher service delivery for the predictable.

This is where AI regulation ought to come in. Not in the need for universal basic income (at least, not yet) but in the need for universal basic service delivery.


We need a kind of AI Trade Practices Act to cover the margins of customer experience.

About the author
David Llewellyn-Smith is Chief Strategist at the MB Fund and MB Super. David is the founding publisher and editor of MacroBusiness and was the founding publisher and global economy editor of The Diplomat, the Asia Pacific’s leading geo-politics and economics portal. He is also a former gold trader and economic commentator at The Sydney Morning Herald, The Age, the ABC and Business Spectator. He is the co-author of The Great Crash of 2008 with Ross Garnaut and was the editor of the second Garnaut Climate Change Review.