Scaling Leads Logic, Unfortunately
June 6, 2025
A few notes from the intersection of small business branding and AI.
Back on May 14, on The Most Interesting Thing in AI podcast, Nicholas Thompson, CEO of The Atlantic, interviewed Gary Marcus, a cognitive scientist known, in part, for his strong and sometimes contrarian views on AI. Fascinating episode. One of Marcus' main points is that the current generation of large language models has been built in a lazy way, and that this will hinder their growth going forward.
The problem. The current AI models will hit a wall, and soon, he says, because they depend on sheer compute. Just throw more Nvidia chips and gigantic data centers at the problem, the theory goes, and the LLMs will continue their seemingly exponential improvement. That's certainly the way it has looked to a user on the ground, and of course the AI builders have used some of their fortunes to fan that belief into a bonfire. So what's the evidence that we'll be hitting a wall anytime soon?
Gary Marcus says a few things. First, he points to academic evidence that the pace of LLM improvement has already started to slow. This is why, for instance, OpenAI released GPT-4.5 rather than GPT-5: they had hoped the new model would be different enough that the 5.0 name would feel right. On the ground as a user, however, change seems like it's coming pretty freaking fast. Are those academic studies really accurate? What other evidence is there?
Plus, stuff doesn’t make sense. Second, Marcus reminds us of all the hallucinations that continue to trouble AI outputs. One well-known example he uses is when someone asked ChatGPT how long it would take to bike from San Francisco to Hawaii. The answer: Depends on how fit the cyclist is and what sort of bike they’re riding. Trick question, yes, but a totally unhelpful answer (unless this is standup). AI gets lots of other things wrong, too. How many images of humans with six fingers have been generated? Many! These issues and hallucinations, Marcus argues, are a result of the way the models have been built. So what’s the solution?
Logic, logic, logic! Marcus says the models should be built from the very start using symbolic logic, essentially a set of rules: how gravity works, that humans (mostly) have four fingers plus a thumb, that at the moment it’s impossible to ride a bike to Hawaii. Marcus acknowledges that building out a set of rules that govern an AI model would be a much harder, longer road to ride down. Start thinking about it, and your list of rules suddenly becomes very, very, very long. Like maybe almost endless. Given the insane competition to monetize the models asap, it seems likely that for now everyone will continue to play the scaling game. I mean, who needs rules, I guess. In any case, it’s a great podcast and well worth listening to.
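For the curious, here’s a toy sketch of the idea in Python. Everything in it is made up for illustration (the rule list, the vet_answer function, the whole lot); real neurosymbolic systems are far more elaborate than this. The point is simply that a hand-written rule gets the last word over whatever the model says:

    # Toy sketch: a tiny hand-written "rule layer" that vets a model's answer
    # before it goes out the door. Entirely illustrative, not a real system.

    RULES = {
        "ocean_cycling": "You can't bike from San Francisco to Hawaii; there's an ocean in the way.",
        "traffic_direction": "Race cars travel WITH the flow of traffic, not against it.",
        "human_hands": "Humans (mostly) have four fingers plus a thumb per hand.",
    }

    def vet_answer(question: str, model_answer: str) -> str:
        # If the question involves biking to Hawaii, the ocean rule wins,
        # no matter how fit the cyclist is.
        q = question.lower()
        if "bike" in q and "hawaii" in q:
            return RULES["ocean_cycling"]
        return model_answer

    print(vet_answer(
        "How long would it take to bike from San Francisco to Hawaii?",
        "Depends on how fit the cyclist is and what sort of bike they're riding.",
    ))

And that’s the rub: someone has to sit down and write every one of those rules, for every corner of the world, which is exactly why Marcus calls it the harder, longer road.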
My traffic issue. I ran into my own scaling vs. logic issue recently while generating photorealistic race car images for a client project. In this case I tried both MidJourney and ChatGPT and got the same result. I asked for a POV shot of the driver, mid-race, from inside the car. The issue…well, just take a look.
Clearly we need some rules that explain that you drive WITH traffic rather than AGAINST it, especially when racing cars at extremely high speeds. There were other issues in these images as well, like lopsided steering wheels, three-armed drivers, and a right arm reaching across to the left side of the wheel. Details, but probably ones that wouldn’t surface if the models were based on symbolic logic rather than $10B worth of Nvidia chips.
Okay, off to a client meeting while dodging oncoming traffic. Wish me luck!