There’s a phrase, “Do as I say, not as I do.” Perhaps you’ve heard of it. It means people in positions of power, like parents, bosses, teachers, and political leaders, often tell us to do one thing while they do something entirely different.
There’s a word for such behaviour: hypocrisy. Saying one thing, but doing another.
My favourite versions of this phrase are its non-identical twins. Like, “Observe what customers do, not what they say,” or “Show, don’t tell.”
Essentially, our actions reveal far more about reality than our words ever can. The same holds true for AI.
For most of this week, we had global tech and AI leaders visit India’s AI Impact Summit, telling us exactly what we wanted to hear:
- “India is going to have an extraordinary trajectory with AI” – Google’s Sundar Pichai
- India could evolve to become a “full-stack AI leader” – OpenAI’s Sam Altman.
- “India has led the way to reveal what is possible with the confluence of ambition and imagination” – Adobe’s Shantanu Narayen
- “India isn’t just participating in the global AI conversation, it’s helping shape what comes next.” – Qualcomm’s Cristiano Amon
You get my point, don’t you?
Lately, I’ve taken to discounting what people say about AI, and instead looking closely at what they actually do. And this past week, I saw two key trends emerge – the “post-software” organisation, and within it, the “pre-burnout” professional.
Let’s start with the post-software organisation.
It’s becoming clear that AI-assisted programming isn’t merely compressing earlier paradigms of software development, but completely rewriting them.
Scratch that. It’s asking agents to reimagine them!
Consider the concept of “dark factories”. Here’s an example of how even the phrase “writing software” is becoming meaningless right before our eyes.
At level 5, it’s not really a car any more. You’re not really running anybody else’s software any more. And your software process isn’t really a software process any more. It’s a black box that turns specs into software.
Why Dark? Maybe you’ve heard of the Fanuc Dark Factory, the robot factory staffed by robots. It’s dark, because it’s a place where humans are neither needed nor welcome.
I know a handful of people who are doing this. They’re small teams, fewer than five people. And what they’re doing is nearly unbelievable – and it will likely be our future.
The Five Levels: from Spicy Autocomplete to the Dark Factory by Dan Shapiro
If you think this is speculative, go read about a startup called StrongDM AI that is doing exactly that. In fact, they claim that their two rules of building software are:
- Code must not be written by humans
- Code must not be reviewed by humans
Simon Willison has a good explanation of just how bonkers and novel this approach really is.
I visited the StrongDM AI team back in October as part of a small group of invited guests.
The three-person team of Justin McCarthy, Jay Taylor and Navan Chauhan had formed just three months earlier, and they already had working demos of their coding agent harness, their Digital Twin Universe clones of half a dozen services and a swarm of simulated test agents running through scenarios. And this was prior to the Opus 4.5/GPT 5.2 releases that made agentic coding significantly more reliable a month after those demos.
It felt like a glimpse of one potential future of software development, where software engineers move from building the code to building and then semi-monitoring the systems that build the code. The Dark Factory.
How StrongDM’s AI team build serious software without even looking at the code by Simon Willison
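To make the idea concrete, here is a toy sketch of what a spec→generate→verify loop might look like when no human writes or reviews the code. None of these names come from StrongDM’s actual harness; in particular, the `generate()` step is stubbed with hand-picked candidates where a real system would call a coding model, purely so the example runs.

```python
# Toy "dark factory" loop: an automated gate decides whether generated
# code ships; no human reads the source. Entirely illustrative.

SPEC = {
    "description": "return the sum of a list of integers",
    "tests": [([1, 2, 3], 6), ([], 0), ([-1, 1], 0)],
}

def generate(spec, attempt):
    """Stand-in for a coding-model call: returns candidate source code."""
    candidates = [
        "def solve(xs):\n    return max(xs)",  # wrong: fails the spec's tests
        "def solve(xs):\n    return sum(xs)",  # correct
    ]
    return candidates[attempt % len(candidates)]

def verify(source, tests):
    """Automated review: run the spec's tests against the candidate."""
    try:
        namespace = {}
        exec(source, namespace)  # real harnesses sandbox this step
        fn = namespace["solve"]
        return all(fn(args) == expected for args, expected in tests)
    except Exception:
        return False

def factory(spec, max_attempts=5):
    """Loop: generate candidates until one passes verification."""
    for attempt in range(max_attempts):
        source = generate(spec, attempt)
        if verify(source, spec["tests"]):
            return source
    raise RuntimeError("no candidate passed the spec")

if __name__ == "__main__":
    print(factory(SPEC))
```

The point of the sketch is the shape of the loop, not the stubs: the human’s contribution is the spec and its tests, and the “review” is whatever automated gate the harness runs.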
Just as I was wrapping my head around the idea of humans neither writing nor reviewing code, I came across Gas Town, a new “vibe coding orchestrator” from longtime software developer and writer Steve Yegge.
What’s a vibe coding orchestrator, you ask? Here’s Yegge’s explanation.
Gas Town is an industrialized coding factory manned by superintelligent robot chimps, and when they feel like it, they can wreck your shit in an instant. They will wreck the other chimps, the workstations, the customers. They’ll rip your face off if you aren’t already an experienced chimp-wrangler. So no. If you have any doubt whatsoever, then you can’t use it.
Working effectively in Gas Town involves committing to vibe coding. Work becomes fluid, an uncountable substance that you sling around freely, like slopping shiny fish into wooden barrels at the docks. Most work gets done; some work gets lost. Fish fall out of the barrel. Some escape back to sea, or get stepped on. More fish will come. The focus is throughput: creation and correction at the speed of thought.
Work in Gas Town can be chaotic and sloppy, which is how it got its name. Some bugs get fixed 2 or 3 times, and someone has to pick the winner. Other fixes get lost. Designs go missing and need to be redone. It doesn’t matter, because you are churning forward relentlessly on huge, huge piles of work, which Gas Town is both generating and consuming. You might not be 100% efficient, but you are flying.
In Gas Town, you let Claude Code do its thing. You are a Product Manager, and Gas Town is an Idea Compiler. You just make up features, design them, file the implementation plans, and then sling the work around to your polecats and crew. Opus 4.5 can handle any reasonably sized task, so your job is to make tasks for it. That’s it.
Welcome to Gas Town by Steve Yegge
Now, I am not arguing that this is the future of software engineering. Heck, it feels wrong to even call it “engineering” anymore. Instead, I am merely trying to understand what people are already doing.
You get the sense that something deeper, something more structural, is changing. This feels different from recent (hyped) tech waves like blockchains and crypto.
As Matt Schumer stated in his massively viral post, “Something huge is happening.”
Closer to home, engineers whom I deeply respect and trust are saying something similar.
At this point, for a hands-on developer, reading and critically evaluating code have become more important than learning syntax and typing it out line by line. Of course, that is still an important skill, because the ability to read code effectively comes from that in the first place. But, the daily software development workflows have flipped over completely.
An experienced developer who can talk well, that is, imagine, articulate, define problem statements, architect and engineer, has a massive advantage over someone who cannot, more disproportionately than ever. Knowledge of specific language, syntax, and frameworks is no longer a bottleneck. The physiological constraints of yore are no longer impediments. The machinery for instantly creating code at scale is now a commodity and available to everyone, just a pip install equivalent away. It requires no special training, no new language or framework to learn, and has practically no entry barriers—just good old critical thinking and foundational human skills, and competence to run the machinery.
Conventional software development methodologies and roles—Waterfall to Agile, developer to tester, senior to junior—have fundamentally changed, with traditional boundaries consolidating into unimaginably rapid, compressed, blurry, iterative “agentic” loops. The dynamics of people, organisations, and public communities in software development, the very human incentives for sharing and collaboration, are all changing.
For the first time ever, good talk is exponentially more valuable than good code. The ramifications of this are significant and disruptive. This time, it is different.
Code is cheap. Show me the talk by Kailash Nadh
Observing and understanding what’s happening with organisations is great, but ultimately, organisations are composed of people—at least until we have an all-agent organisation, which, based on current trends, might be just a few weeks or months away!
So, what’s happening to the earliest and most eager adopters of AI? Well, some of them are starting to display early signs of burnout. From just doing too much.
Coincidentally, Steve Yegge has another pertinent post on this. He argues, convincingly, that using AI continuously drains our energy.
It would seem that we are addicted to a new drug, and we don’t understand all of its effects yet. But one of them is massive fatigue, every day.
I don’t think that’s… good. And if anything, it seems to be getting more widespread. The developing situation is a multi-whammy coming at developers from all sides:
- Crazy addicted early adopters like me are controlling the narrative.
- You can’t stop reading about it in the news; there’s nowhere to hide from it.
- Panicking CEOs are leaning in hard to AI, often whiplashing it into their orgs.
- Companies are capitalistic extraction machines and literally don’t know how to ease up.
So you’re damned if you do (you’ll be drained) and you’re damned if you don’t (you’ll be left behind.)
The AI Vampire by Steve Yegge
Even Harvard Business Review is saying the same.
In our in-progress research, we discovered that AI tools didn’t reduce work, they consistently intensified it. In an eight-month study of how generative AI changed work habits at a U.S.-based technology company with about 200 employees, we found that employees worked at a faster pace, took on a broader scope of tasks, and extended work into more hours of the day, often without being asked to do so. Importantly, the company did not mandate AI use (though it did offer enterprise subscriptions to commercially available AI tools). On their own initiative workers did more because AI made “doing more” feel possible, accessible, and in many cases intrinsically rewarding.
While this may sound like a dream come true for leaders, the changes brought about by enthusiastic AI adoption can be unsustainable, causing problems down the line. Once the excitement of experimenting fades, workers can find that their workload has quietly grown and feel stretched from juggling everything that’s suddenly on their plate. That workload creep can in turn lead to cognitive fatigue, burnout, and weakened decision-making. The productivity surge enjoyed at the beginning can give way to lower quality work, turnover, and other problems.
AI Doesn’t Reduce Work—It Intensifies It by Aruna Ranganathan and Xingqi Maggie Ye
What a time to be alive, eh?
This week on the Zero Shot podcast
Hi! This is Vidhatri, the producer of Zero Shot. Every week, we aim to bring you a story about the Indian AI space that goes beneath the surface, something that is insightful yet critical. This week’s episode delivers on all those fronts.
Voice AI is the talk of the town. Funding in the space has shot up from Rs 7 crore in 2023 to Rs 280 crore in 2025. Everyone wants to build in this space, convinced that India’s AI opportunity lies in voice.
But what is the actual story?
To find out, The Ken’s AI reporter, Mrunmayee Kulkarni, followed the money and spoke with founders, VCs, and other key stakeholders in the voice AI ecosystem. Her findings: the space is overcrowded. Two questions arise out of this: Can the market sustain every player? And more importantly, how will they differentiate when everyone is picking the same battle?
On this week’s Zero Shot, we invited Mrunmayee to take us behind the scenes of her wonderful story.
Also joining us was Devyani Gupta, the CEO of Arrowhead, a fast-growing AI startup that raised $3 million last month and counts the Aditya Birla Group and Paytm* among its clients.
Devyani spoke candidly about the competition, calling it a “land grab market”. Everyone can build “voice demos”, but very few can deploy them in “production”, she explained.
Bonus: We unpack the nuances of what makes a voice bot more human.
You can listen to the episode on Spotify, Apple Podcasts, YouTube, or our app.
*Paytm founder Vijay Shekhar Sharma is an investor in The Ken.