For a while, I’ve struggled to decode the essence of what an AI newsletter’s tone should be about.
Breezy? Serious? Geeky?
But who asked? No one.
Till it quietly hit me.
It wasn’t about the topic. But the day it goes out.
Because this isn’t just a Saturday. It’s a weekend.
Ugh. I had to take a shower to rid myself of the reek that came from writing those lines. Brrr.
Because that’s what AI writing sounds like. Once you see the patterns, you cannot ignore them. And I don’t know about you, but it makes me feel icky!
To be crystal clear, I wrote those first few sentences in the style of AI. Which is, of course, ironic, because AIs (LLMs, really) use that style because humans use it.
It doesn’t matter now, because everything creative that AI touches turns inert and lifeless. A commenter on Hacker News put it best:
“I cannot stop thinking about the LLMs having this Midas touch quality, because everything they touch seems to ruin things or make people want to avoid them, for example:
– Ghibli studio style graphics,
– the infamous em-dashes and bullet points
– customer service (just try to use Klarna’s “support” these days…)
– Oracle share price ;) – imagine being one of the world’s most solid and unassailable tech companies, losing to your CEO’s crazy commitment to the LLMs…
– The internet content – We now triple check every Internet source we don’t know to the core …
– And now also the chips ?
Where does it stop? When we decide to drop all technology as it is?”
No matter which side of the AI divide you’re on, you can’t deny that we’re starting 2026 with AI’s momentum slowing and headwinds against it increasing.
As a fan of systems thinking, I like to imagine what is likely to happen by studying feedback loops.
Feedback loops are simply forces that play out when the outputs of some parts of the system end up becoming inputs to others, thereby either strengthening (positive) or weakening (negative) existing dynamics.
Negative feedback loops are also called balancing loops, because they dampen, or balance, the strength of existing loops.
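The reinforcing-versus-balancing dynamic is easy to sketch in code. This is a toy model, not a forecast; the growth and damping constants are invented purely for illustration:

```python
# Toy model of momentum under feedback loops (illustrative constants only).
# A reinforcing loop feeds a fraction of the current momentum back in as growth;
# a balancing loop subtracts a fraction, damping that growth.

def step(momentum, reinforce, balance):
    """One time step: this step's output becomes the next step's input."""
    return momentum + reinforce * momentum - balance * momentum

def simulate(steps, reinforce, balance, momentum=1.0):
    for _ in range(steps):
        momentum = step(momentum, reinforce, balance)
    return momentum

# Reinforcing loop dominates: momentum compounds (the post-2022 avalanche).
print(simulate(10, reinforce=0.30, balance=0.10))  # ~6.2, roughly 6x growth

# Balancing loops dominate: momentum decays (the 2026 flattening).
print(simulate(10, reinforce=0.10, balance=0.30))  # ~0.11, most momentum gone
```

The point of the sketch is only that the direction of travel is set by which kind of loop is stronger, not by the starting momentum.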
Take AI. Yes, it is a system itself. But it is also part of a much larger and more complex system that includes countries, financial markets, investors, citizens, regulators, and economies.
AI’s momentum has slowed considerably compared to the same time last year. With a few exceptions, newer models aren’t getting significantly better. Consumer fascination has decidedly cooled. Companies aren’t talking as excitedly about pilots and budgets. And even sophisticated investors are worried about the AI bubble popping.
All of these are feedback loops that act against AI’s existing momentum.
Many of you will be familiar with Gartner’s Hype Cycle:
The chart depicts adoption. But if you want to see it through the lens of momentum, here’s how I see it:
Imagine ChatGPT’s launch in 2022 being the “pebble” that rolled off the top of the mountain. The pebble started the AI avalanche, building mass and momentum.
But the terrain in 2026 isn’t all downhill. It has flattened. There are more countervailing forces.
Consumers are, at best, jaded with AI, or in many cases, tired of it.
Citizens are increasingly rising up in opposition against AI data centres.
“Big Tech’s fast-expanding plans for data centers are running into stiff community opposition”
Even experienced and respected programmers are tired.
“F*** You! Co-Creator of Go Language is Rightly Furious Over This Appreciation Email”
Experienced investors are now confident enough to speak or bet against AI.
“AI boom is in early bubble phase, Bridgewater founder Ray Dalio says”
Markets too. Even as the likes of OpenAI and Anthropic plan mega IPOs.
“AI Trade Loses Steam as Investors Rotate Into Broader S&P 500 Stocks”
“Oracle Shares Tumble as AI Spending Outruns Returns”
What AI breakthroughs will defeat these headwinds in 2026? New models? Better adoption? Dramatically more people paying for AI?
I am not sure. So far, I see way more balancing loops against AI in 2026 than reinforcing ones.
If you disagree, write in and tell me.
Training data
(aka, interesting links and reads on AI)
Get ready for gaming slop. AI “world models” have gaming in their sights.
Watch what they do, not (just) what they say. Amazon and Google are fighting back against… AI.
Fascinating explanation of why and how Nvidia bought Groq (no, not the unhinged one).
Indian Bandar Apna Dost (“Monkey is our friend”) is apparently the global king of YouTube slop, earning $4.25 million a year. An estimated 21-33% of YouTube is now slop or brainrot.
We know that optimising for pleasing humans made AI chatbots “sycophantic.” Could a similar feedback loop be responsible for making coding assistants worse?
Who needs carefully designed, stable, and well-maintained software when AI can generate disposable software at scale?
What can the rest of the world learn from the way China is regulating AI?
A great review of Claude Opus 4.5 from a web developer (an admitted AI hype-train rider). I was truly impressed.
AI labs, datacentres, and hyperscalers want to grow faster than existing electricity grids can handle. So, they want to generate their own (mostly dirty) power onsite.
Zero Shot community
“I think we can all agree that AI is going to have an outsized impact on our professional and personal lives (for better or worse). So, I think it’s important that we spend some time understanding the technology: how it works, its limitations, potential impacts, etc. And what better way to do this than reading a book? With that in mind, I picked up Prediction Machines. The authors describe AI as a prediction technology. For example, when we ask Alexa, “What’s the capital of India?” and it responds “Delhi,” Alexa doesn’t actually know the capital. Instead, it predicts that when people ask such a question, the most likely response they’re expecting is “Delhi.” This also explains why GenAI tools like ChatGPT sometimes make mistakes with basic math. They don’t truly understand addition or subtraction; what they do is predict what people are most likely looking for when they ask such a question.
Another interesting argument from the authors is that decision-making involves prediction, judgment, action, and outcome. AI lowers the cost of prediction. The drop in the cost of prediction will impact the value of other things, increasing the value of complements (data, judgment, and action) and diminishing the value of substitutes (human prediction).
The book is packed with insights: when AI delivers the highest returns (keeping in mind it’s not error-free), the risks it brings (bias, hallucinations, bad data), and its broader impact on society. I’d highly recommend it to anyone curious about the implications of AI for business and strategy.”
– Vishal Tibadewal (LinkedIn)
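The “prediction, not understanding” point in Vishal’s review can be illustrated with a deliberately crude toy predictor. Real LLMs predict over tokens with learned weights, not raw counts; this sketch only shows the shape of the idea, and the training data below is made up:

```python
from collections import Counter, defaultdict

# Toy "prediction machine": it answers by recalling which reply most often
# followed a question in its training data. No fact lookup, no understanding.

training_pairs = [
    ("capital of india", "Delhi"),
    ("capital of india", "Delhi"),
    ("capital of india", "New Delhi"),  # noisy data: predictors inherit their data's quirks
]

# Count how often each reply followed each question.
counts = defaultdict(Counter)
for question, reply in training_pairs:
    counts[question][reply] += 1

def predict(question):
    """Return the reply most frequently seen after this question."""
    return counts[question].most_common(1)[0][0]

print(predict("capital of india"))  # prints "Delhi": the commonest reply, not a known fact
```

If the training data had skewed the other way, the predictor would confidently answer differently, which is exactly the book’s point about prediction versus knowledge.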