Dr. Flattery, the always wrong Bot
(Title credit: the amazing Dr. Angela Collier - specifically this recent video where she used the phrase to describe a certain rather popular GenAI chatbot. Go subscribe to her channel for physics deep dives and other interesting rants.)
Alternate Title: a skeptic’s guide to surviving the GenAI bubble
A few years ago when all the CEOs and tech media started hyping GenAI as the next big workplace revolution, I was skeptical. Their predictions ranged from the rather mild “orders-of-magnitude productivity gains” to apocalyptic “bots will replace most workers in the next 2 years”. The more hyperbolic their proclamations got, the more skeptical I became. But then, recently I had an aha moment… a revelation which has turned me into a strong believer and vocal proponent of GenAI technology as the future. Here’s what I realized:
- Those CEOs make a lot more money than me, so by the laws of capitalism they are smarter and better than me in every single way
- I like having a job so that I can pay for food and rent
(/s, in case the sarcasm wasn’t bleedingly obvious.)
1. Is it really hype?
When it comes to GenAI hype, count me out. The technology is cool on the surface… but not to the extent that it’ll replace human intelligence. First off, it’s not intelligence, it merely mimics it… and even if it were true intelligence, intelligence is not enough (thanks Bryan Cantrill). Secondly, think about all the labor-saving inventions from the past couple hundred years - the electric motor, the car, dishwasher, laundry washer-dryer, computer… if all those inventions combined couldn’t bring us closer to a 10-hour work week, or free us up to do more of the things we love… then this certainly won’t. If anything, it’ll entrench our current inequalities even further. Thirdly - if most people are out of work thanks to GenAI, what’s propping up the economy and society? Who will have money to buy anything these GenAI-run factories are producing? The vision of the future that these GenAI peddlers have is a dystopia. Anthony Moser articulates it well in his recent post:
Critics have already written thoroughly about the environmental harms, the reinforcement of bias and generation of racist output, the cognitive harms and AI supported suicides, the problems with consent and copyright, the way AI tech companies further the patterns of empire, how it’s a con that enables fraud and disinformation and harassment and surveillance, the exploitation of workers, as an excuse to fire workers and de-skill work, how they don’t actually reason and probability and association are inadequate to the goal of intelligence, how people think it makes them faster when it makes them slower, how it is inherently mediocre and fundamentally conservative, how it is at its core a fascist technology rooted in the ideology of supremacy, defined not by its technical features but by its political ones. […] The makers of AI aren’t damned by their failures, they’re damned by their goals. They want to build a genie to grant them wishes, and their wish is that nobody ever has to make art again. They want to create a new kind of mind, so they can force it into mindless servitude.
The problem being solved by GenAI
Don’t get me wrong - the underlying LLM tech is amazing. I’ve worked with old school AI/ML for years now, and know just enough to be a little dangerous. And when I use an LLM for coding or writing… it’s fun! (And some people I’ve looked up to in my career - like Kent Beck - agree!) The fact that we can use natural human language to “code” (i.e. tell machines what to do) is a big leap forward in software; bigger than the leap 50-75 years ago from assembly to higher-level coding languages. That previous software revolution enabled our current connected world, chock-full of smartphones and smart devices and whatnot.
BUT but but, that revolution was NOT enabled by higher software quality - it was enabled by Moore’s Law. A many-orders-of-magnitude improvement 1 in hardware is what subsidized the explosion in software. The quality of software actually got way worse (in terms of efficiency) - just think of how bloated modern OSes are compared to their 1990s counterparts. Think of what resource hogs Firefox and Chrome are today, compared to what they were at launch. They’re monstrous, gnarly, messy, terribly inefficient software beasts; but we don’t care, because our CPUs, RAM, SSD-powered machines, and low-latency networks have been able to handle it. But it won’t last forever - Niklaus Wirth saw this coming back in 1995: “software is getting slower faster than hardware is becoming faster.”
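That hardware “subsidy” is easy to ballpark. Here’s a back-of-the-envelope sketch assuming the classic Moore’s Law cadence - transistor counts doubling roughly every 2 years since the early-1970s microprocessors. The start year and doubling period are rough assumptions, not measurements:

```python
import math

# Rough Moore's Law ballpark: transistor counts doubling every ~2 years
# since the Intel 4004 era (1971). Both numbers are assumptions.
YEARS = 2025 - 1971          # ~54 years of scaling
DOUBLING_PERIOD = 2          # years per doubling (classic Moore's Law)

doublings = YEARS / DOUBLING_PERIOD
improvement = 2 ** doublings
orders_of_magnitude = math.log10(improvement)

print(f"{doublings:.0f} doublings -> ~{improvement:.1e}x, "
      f"~{orders_of_magnitude:.0f} orders of magnitude")
```

Roughly 27 doublings, i.e. a factor of ~10^8 - and that’s just transistor counts, before counting clock speed, memory, storage, and network improvements. Plenty of subsidy to hide a lot of software bloat.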
I draw a parallel to the GenAI boom here. The same kind of “subsidization” is happening right now - except that we’re subsidizing our stochastic parrots 2 by throwing gargantuan, mind-bending, planetary-scale amounts of computing and energy at them. That’s the only area where I 100% agree GenAI will “change everything” - it’ll catastrophically accelerate our climate crisis if allowed to continue. Nothing that requires hundreds of billions of dollars in annual burn rate, or building entire nuclear plants to power a f’ing data center, should be allowed to pollute our public commons, let alone be hailed as “the future”.
The GenAI Manufacturing Model - an illustration
The GenAI User Model - an illustration
BTW - you might think I’m trendsurfing here, because it’s slowly becoming acceptable now to call BS on the GenAI hype - what with failing shoe companies pivoting to sell AI compute to startups, and failing car companies pivoting to building “terafabs” 3. But this is not a fresh contrarian viewpoint for me - I’ve been writing and speaking about this since 2018.
2. But why would so many smart people go along?
Maybe I’m delusional, but I believe that deep down, most of the CEOs, VPs, directors, and middle managers on the hype train also secretly know that it’s hype. But they are playing along. Why, you might ask? It’s The Money, obviously. Modern capitalism isn’t about rationality, it’s about profit at all costs. No one wants to miss out on a hype cycle - be it subprime housing, or crypto, or GenAI. VCs and hedge fund managers have big money looking for big returns; they go looking for the next big investment; and startup founders and CEOs are only too happy to take that money by looking attractive. No one wants to miss out - no one even wants to be seen as missing out - perception matters to the stock price. Maybe they have internal doubts, but the FOMO is too strong - “all my friends are doing it!” Maybe they believe that they’ll be the smart ones who’ll see the bubble ready to burst, and manage to exit safely with all their riches - time will tell. In the meantime, there’s serious money to be made on the upswing of all scams.
And it’s not just executives either. Remember the early 2020s, when FAANG companies doubled or tripled their workforces… only to start laying people off just 3 years later? During the upswing, every ambitious middle manager can use a hiring frenzy to increase their org size and get promoted. If you were one of the few people recommending caution, saying “Hold on, maybe we shouldn’t be hiring so fast and lowering our standards”… well, too bad, you just lost out on the chance to grow your career. Then, circa 2023, when the hiring bubble burst, it wasn’t the reckless empire builders who paid the price. The layoffs hit lower-level employees first, especially the new hires - not the execs and middle managers who had hired them without any long-term plan or any consideration of the impact on those people’s lives. And when that happens, the naysayers don’t get a “you were right” either. This is the game of capitalism - boom and bust cycles - and while you may hate it, it’s the only game in town. So here we are, playing along with the AI Con now.
3. Surviving the hype cycle
(Work in progress)
But enough about the big picture. We’ve seen this before.
The direct question that GenAI poses to me, as it pertains to my day-to-day job, is whether these tools are of any use in making me more productive. And since “efficiency” is inevitable, it behooves me to try these tools out to “survive” this cycle. And that’s where I have some thoughts about AI as a tool.
My job requires me to write, and read, a lot of documents. Like, a lot (and I’m a relative nobody in my org - the higher-ups do it all day, every day). And one of the oft-quoted examples of the “benefits of AI productivity” for my job role is that I can read and write stuff so much faster now!
But what happens when both parties use GenAI - the writer inflating a few bullet points into pages of polished prose, and the reader compressing those pages right back down into bullet points?
(of course I used ChatGPT to create this image.)
- Sidebar 3: Apologies to my audience for referencing Uncle Bob - yes, I know. ↩
- Sidebar 1: Everyone needs to go read the Stochastic Parrots paper by Timnit Gebru, Emily Bender et al. This is the famous 2021 paper where Dr. Gebru and Dr. Bender laid out the dangers of LLMs - and which led directly to Google’s AI el jefe Jeff Dean firing Dr. Gebru. Their predictions were eerily accurate in retrospect. ↩
- Sidebar 2: Speaking of failing car companies: can we agree that Elon Musk is, without a doubt, the world’s greatest CEO? That is, if you consider a CEO’s job to be “increasing shareholder value” and nothing else. Elon has managed to dissociate his car company’s stock from ordinary performance concerns like making cars, turning a profit, safety, and other banalities. It’s the world’s first stock powered by the CEO’s memes. How efficient is that??!! Tesla’s EPS for 2025 was $1.08, giving it a forward P/E ratio of nearly 200. In comparison, Toyota, the world’s largest car company, had an EPS of $24 and a P/E ratio of 11. If Tesla fell to the “low” valuation levels of the world’s largest carmaker, it would have to lose about 95% of its current valuation. Which would still put Elon’s net worth in the billions. Not bad, eh? /s ↩
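The sidebar’s repricing arithmetic is easy to check. A quick sketch using only the figures quoted above (which I have not independently verified); note the implied drop depends only on the ratio of the two P/E multiples:

```python
# Repricing check: what would Tesla's valuation be at Toyota's P/E multiple?
# All figures are the ones quoted in the sidebar, not independently verified.
tesla_eps, tesla_pe = 1.08, 200   # quoted EPS and forward P/E
toyota_pe = 11                    # quoted P/E for the world's largest carmaker

tesla_price = tesla_eps * tesla_pe    # implied share price at current multiple
repriced = tesla_eps * toyota_pe      # same earnings at Toyota's multiple
drop = 1 - repriced / tesla_price     # fraction of valuation lost
                                      # (equals 1 - toyota_pe / tesla_pe)

print(f"implied drop: {drop:.1%}")
```

With those inputs the implied drop works out to about 94.5% - right around the “lose ~95% of its valuation” mark.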
