Two Paths to Success in the Post-Software Era

God Translation Bureau is a compilation team under 36Kr, focusing on fields such as technology, business, the workplace, and life, and mainly introducing new technologies, new ideas, and new trends from abroad.

Editor’s note: Reject the mediocrity of the “AI wrapper”: either become a “supplier” to the large models, or invent brand-new businesses that only AI can make possible. This article is a compilation.

Over the past eight years, first as an investor at Andreessen Horowitz and now at my own startup, Worldbuild, I have watched the same pattern repeat itself. The software industry entered an era of homogenization, where a company’s path was determined by familiar financing routines rather than real innovation: the same unit economics, the same growth curves, the same playbook after Series C. Founders optimized for financing milestones rather than building sustainable businesses, and many companies raised too much capital at inflated valuations.

With the rise of generative AI and the end of loose monetary policy, that era has come to an end. As an investor, I am extremely excited: AI has finally unlocked the potential for real innovation that we have been waiting for since the mobile revolution. Yet I still see founders building specialized AI products for marketing or finance as if they were making the subscription-based software tools of the past decade. Those who keep applying the old framework are about to make a huge mistake.

To achieve outcomes that attract venture capital in this era (that is, exits in the billions of dollars), today’s founders need to learn from the “bitter lesson” articulated by Turing Award winner and reinforcement learning pioneer Richard Sutton. Sutton’s prediction was first confirmed in computer vision in the early 2010s (an early AI field that trains computers to interpret visual information): systems with lower algorithmic complexity but far more data completely overturned the hand-engineered methods that had dominated the field for years. Specialization ultimately loses to simpler systems with more compute and training data. Why is it a “bitter” lesson? Because it requires us to admit an uncomfortable truth: our intuition that human expertise beats the effects of scale is wrong.

This is already happening. In hindsight it seems obvious that foundation models would render early AI writing tools meaningless, but the same fate is befalling every specialized AI tool that forces a model into an existing workflow, whether in finance, law, or even automatic code generation. These specialized tools bet that purpose-built workflows can beat foundation models, but the core problem is that foundation models keep getting stronger. As shown in the chart below, the length of tasks that general models can complete doubles every seven months. Their trajectory threatens to swallow any “wrapper” application built on top of them.

As general models become more powerful, the length of tasks that AI can complete doubles every seven months. (Source: Model Evaluation & Threat Research.)
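
To make the trend concrete: a seven-month doubling time is an exponential. The following back-of-the-envelope extrapolation is mine, not METR’s:

    T(t) = T(0) * 2^(t / 7),  with t in months

so a task horizon of one hour today implies roughly 2^(12/7) ≈ 3.3 hours in a year and 2^(24/7) ≈ 10.8 hours in two years, if the trend holds.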

After talking with hundreds of early-stage founders building with AI, I see two paths emerging. One leads to the “bitter lesson” and irrelevance within 18 months; the other leads to the great companies that will define this era. These era-defining businesses fall into two categories: those that build the resources model evolution needs (computing power, training data, and infrastructure), and those that discover jobs that only AI can do.

Let’s delve into what these two types of businesses look like and how to determine which category you belong to.


The First Path: The Model Economy

One way to build in parallel with the “bitter lesson”, rather than against it, is to treat AI not only as a tool to build with but as a target to build for. By the latter I mean businesses that develop the data or infrastructure that enable and enhance model capabilities, rather than simply building today’s applications that use AI. I call this the “model economy”.

Many large companies have adapted to this shift by supplying the labs directly. Oracle’s contract backlog for providing computing power to model builders has grown to nearly $500 billion. AI cloud companies CoreWeave and Crusoe are also beneficiaries of this trend, having shifted from Bitcoin mining to providing computing power and data centers for AI labs; so is Scale AI, which recently received a $14 billion investment from Meta and provides high-quality labeled data for training large language models.

Four Opportunities in the Model Economy

In the following four areas, large and sustainable model – economy enterprises will emerge:

Commoditization of Computing Power

AI capabilities will continue to expand along an exponential curve, requiring ever more computing power (chips) and energy. In the short term, however, fluctuations are inevitable: hyperscale cloud providers can swing between GPU (the specialized processors that drive AI) surpluses and shortages within the same quarter; Microsoft, for example, went from a GPU surplus to a shortage earlier this year. The most resilient startups will address two realities at once: the certainty of AI’s demand for compute, and the chaos of supply that alternates between shortage and glut. We’ve been investing in “exchange models”: companies that match supply and demand for compute or energy, improve asset utilization (ensuring chips run at full capacity rather than half capacity), and build trading mechanisms for these resources, such as the AI compute marketplaces San Francisco Compute Company and Fractal Power. These companies can withstand fluctuations and even benefit from them. Meta’s entry into the wholesale power trading market, aimed at securing flexible power for its fast-growing AI data centers, foreshadows the future: first we will trade the electricity that powers AI, and then we will trade compute itself.
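
To make the exchange idea concrete, here is a minimal sketch of price-priority matching for a GPU-hour spot market, in Python. The order types, prices, and matching rule are illustrative assumptions of mine, not how San Francisco Compute Company or Fractal Power actually operate:

    from dataclasses import dataclass, field
    import heapq

    @dataclass(order=True)
    class Bid:
        neg_price: float          # negated so the highest bid pops first
        gpu_hours: int = field(compare=False)
        buyer: str = field(compare=False)

    @dataclass(order=True)
    class Ask:
        price: float              # lowest ask pops first
        gpu_hours: int = field(compare=False)
        seller: str = field(compare=False)

    def match(bids, asks):
        """Greedy matching: cross orders while the best bid >= best ask."""
        heapq.heapify(bids); heapq.heapify(asks)
        trades = []
        while bids and asks and -bids[0].neg_price >= asks[0].price:
            bid, ask = heapq.heappop(bids), heapq.heappop(asks)
            qty = min(bid.gpu_hours, ask.gpu_hours)
            trades.append((bid.buyer, ask.seller, qty, ask.price))
            # Re-queue any unfilled remainder of the larger order.
            if bid.gpu_hours > qty:
                heapq.heappush(bids, Bid(bid.neg_price, bid.gpu_hours - qty, bid.buyer))
            elif ask.gpu_hours > qty:
                heapq.heappush(asks, Ask(ask.price, ask.gpu_hours - qty, ask.seller))
        return trades

    trades = match(
        [Bid(-3.10, 500, "lab_a"), Bid(-2.80, 200, "startup_b")],
        [Ask(2.75, 400, "idle_cluster_x"), Ask(3.00, 300, "miner_y")],
    )
    print(trades)  # lab_a fills 400 GPU-hours at $2.75, then 100 at $3.00

The same mechanism generalizes from GPU-hours to megawatt-hours, which is why trading the electricity that powers AI and trading compute itself look so alike.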

Device-Side AI

Due to latency (that is, real-time response requirements), privacy, connectivity, or cost constraints, some AI cannot run in the cloud. A hedge-fund analyst, for example, may want a model to reason about trading strategies for days without exposing them to anyone. In this category, specialized hardware and networks run and manage powerful AI models locally rather than in someone else’s cloud. The winners will tightly integrate hardware and software to deliver a device-side AI experience. Meta’s continued investment in wearables is one example; so is the startup Truffle, which is building dedicated computers and operating systems for AI models. The category may also include startups building local AI networks that pool computing power from many sources: computers, graphics cards, game consoles, even robot fleets.

Data Trading Platforms

To produce more refined outputs, AI models need increasingly specialized data (for example, anonymized medical imaging data to power better health-consulting products). Today most of this data sits inside enterprises, from tiny companies to large corporations. New data-trading companies can emerge to help these enterprises understand which of their data may be useful to model suppliers and to facilitate licensing deals, as when Reddit licensed data to Google. Alternatively, model suppliers can specify the data they need (say, the latest medical research), and these trading companies can find the sources.

Security

Today’s security focuses on defense: firewalls, encryption, and keeping hackers out. Security in the model economy, by contrast, focuses on offense. These companies will assemble teams that deliberately break into AI systems and reveal how they can be manipulated into generating harmful content or conducting corporate espionage, then offer those services to large companies so vulnerabilities can be patched before they are exploited.

The Second Path: Post-Skeuomorphic Applications

Despite everything I’ve said so far, not all AI applications (i.e., products built on top of foundation models) are doomed to fail.

The applications that fall into the “bitter lesson” trap are those that start from an existing workflow (such as updating inventory data) and “AI-ize” it. The applications that survive will leverage the unique, nuanced characteristics of models to invent new workflows that were previously technically impossible. I call these “post-skeuomorphic applications”.

“Skeuomorphism” is the trap of assuming that a new technology should look like the old things it replaces. Early mobile applications often fell into this pattern. They replicated the physical world: trash-can icons that looked like real trash cans, the popular beer-drinking app on the first-generation iPhone. But they didn’t explore what was uniquely possible on a phone.

The applications that ultimately won broke out of this trap entirely. Uber didn’t digitize the taxi dispatch desk. It asked: what becomes possible when everyone carries a phone that knows where they are? As investor Matt Cohler put it, the phone became the remote control for life: ordering food (DoorDash), hailing a ride (Uber), buying groceries (Instacart). These apps didn’t adapt to existing workflows; they invented brand-new ones.

AI is at exactly the same inflection point. Most AI applications take an existing process (writing, customer service) and wrap a model around it. They will likely end up like the early writing assistants, their advantages steadily eroded by GPT and other foundation models.

The winning founders are asking a different question: What becomes possible now? What jobs can we invent that only AI can do? Models have unique properties: they can collaborate with other models, learn from every interaction, and generate novel solutions to problems with no pre-set answers. The winning applications will discover workflows that leverage these properties. We may not even know what those workflows look like yet.

Figma provides a useful reference from outside AI. Its founding team’s first step was not simply to try to recreate Adobe’s design suite in the browser. Instead, they experimented with what WebGL (a technology from 2011 that lets browsers tap the computer’s graphics chip) made possible. Co-founder Evan Wallace’s early WebGL experiments proved that complex, interactive graphics could run smoothly right in the browser, paving the way for Figma’s browser rendering engine, a true post-skeuomorphic application.

Three Opportunities in Post-Skeuomorphic Applications

Post-skeuomorphic applications will generate multibillion-dollar outcomes in the following three areas:

Coordinating Multiple Agents Simultaneously

I’ve experienced the potential of post-skeuomorphic applications firsthand in multi-agent collaboration. As an investor, I use multiple models playing different roles, interacting with me like a virtual investment committee. I have one model generate a prompt, then feed that prompt into another model. When I use models from only a single provider, the conversation gets stuck or loops on itself. When I mix in models from multiple providers, the results often break out of the “sycophancy spiral”. (You can try this yourself with Andrej Karpathy’s LLM Council.)

Tools that orchestrate multiple agents can withstand the “bitter lesson” because their value lies not in any single model’s capabilities but in the behavior that emerges from their interactions, which can reduce hallucinations and enable better evaluation. Rather than simply flattering the user, models often score other models’ outputs higher than their own, creating a more honest evaluation system that is less prone to self-confirmation bias.

Post-skeuomorphic applications won’t be built on a single model, and they may not be “copilots” that assist users linearly through a workflow. Instead they will orchestrate a cluster of agents, each with a distinct role and expertise, learning from every interaction, routing work to the right agent at the right time, and getting better with every use. They will behave less like traditional software and more like an evolving hive mind.
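
Here is a minimal sketch of the council pattern described above: each model drafts an answer independently, then every model grades every other model’s draft, never its own. The `complete` stub and model names are placeholders to be wired to real provider SDKs; Karpathy’s LLM Council implements a fuller version of this idea:

    from statistics import mean

    # Stand-in for real provider SDK calls (OpenAI, Anthropic, Google, ...).
    # In practice this would dispatch to each provider's chat-completion API.
    def complete(model: str, prompt: str) -> str:
        raise NotImplementedError("wire up the provider SDK for `model` here")

    COUNCIL = ["gpt-x", "claude-x", "gemini-x"]   # hypothetical model names

    def council_answer(question: str) -> str:
        # 1. Every model drafts an answer independently.
        drafts = {m: complete(m, question) for m in COUNCIL}

        # 2. Every model scores every *other* model's draft, never its own.
        #    Cross-provider grading is what breaks the sycophancy spiral.
        scores = {m: [] for m in COUNCIL}
        for judge in COUNCIL:
            for author, draft in drafts.items():
                if judge == author:
                    continue
                reply = complete(judge, f"Score 1-10, number only:\n\nQ: {question}\nA: {draft}")
                scores[author].append(float(reply.strip()))

        # 3. Return the draft with the best average peer score.
        best = max(COUNCIL, key=lambda m: mean(scores[m]))
        return drafts[best]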

Large-Scale Simulation

In science especially, AI enables large-scale simulation: thousands of computational experiments run in parallel. Previously, researchers had to hand-write specific, rigid rules (say, for how a drug molecule interacts with a protein), run a handful of simulations, and hope they were close to reality.

With GPUs and AI, the workflow has changed: thousands of runs happen in parallel, the results are fed back into the model, and the parameters are continuously updated. Instead of running one test at a time and waiting for the result, the system runs hundreds or thousands of tests simultaneously and learns from all of them at once.

This opens up brand-new kinds of work: virtual cells that respond at the system level, protein structures mapped in minutes, platforms that screen millions of new drug candidates without supercomputers or specialized software. These applications don’t imitate the laboratory; they define their own “plan, test, learn, repeat” loop at extremely high speed.
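
A toy sketch of that plan-test-learn-repeat loop in Python. The `simulate` function stands in for a real docking or physics simulation, and the proposal rule is a simple Gaussian perturbation; both are illustrative assumptions:

    import random

    def simulate(candidate: list[float]) -> float:
        """Toy stand-in for a docking/physics simulation: score a candidate."""
        return -sum((x - 0.7) ** 2 for x in candidate)   # hidden optimum at 0.7

    def propose(center: list[float], n: int, noise: float) -> list[list[float]]:
        """The 'model' proposes n candidate variations around the current best."""
        return [[x + random.gauss(0, noise) for x in center] for _ in range(n)]

    # Plan -> test (conceptually in parallel) -> learn -> repeat.
    best = [random.random() for _ in range(8)]
    for generation in range(20):
        batch = propose(best, n=1000, noise=0.1)        # thousands per round,
        results = [(simulate(c), c) for c in batch]     # not one test at a time
        score, winner = max(results)
        if score > simulate(best):
            best = winner                               # feed results back in
        print(f"gen {generation}: best score {simulate(best):.4f}")

In a real platform the batch would be dispatched across GPUs rather than scored in a list comprehension, but the loop structure is the point: propose widely, test everything, keep what the results teach you.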

Continuous Feedback Loops

Traditional software relies on discrete inputs and outputs. Model-native applications, by contrast, learn from every interaction and operate in a continuous feedback loop, enabling them to personalize, optimize, and predict user needs.

Monitoring platforms like Datadog observe large software systems to ensure they are operating properly. Usually, when a problem is detected, the system alerts human engineers, who are responsible for fixing it. A skeuomorphic AI approach would add smarter dashboards and better root-cause suggestions but keep humans in the decision-making loop. A true post-skeuomorphic system removes human intervention entirely through a continuous, autonomous feedback loop. When API response times spike, the system doesn’t just alert engineers; it generates hypotheses about potential causes, spins up parallel infrastructure to safely test fixes, and automatically rolls out the solutions that show improvement. Every incident teaches the system better prediction and prevention, producing software that observes, diagnoses, and heals itself without human intervention.

The end state resembles self-healing software, not just a monitoring and alerting platform.
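
A minimal sketch of that detect-hypothesize-test-apply loop. Every function body here is a hypothetical stand-in: a real system would query an observability backend, ask a model for root-cause hypotheses, and spin up actual canary infrastructure:

    import random, time

    LATENCY_SLO_MS = 250

    def p99_latency_ms() -> float:
        # Stand-in for querying an observability backend (e.g., a metrics API).
        return random.uniform(100, 400)

    def generate_hypotheses(latency: float) -> list[str]:
        # Stand-in for a model proposing likely root causes from telemetry.
        return ["connection-pool exhaustion", "cache eviction storm", "bad deploy"]

    def test_fix_in_canary(hypothesis: str) -> tuple[bool, str]:
        # Stand-in: clone a slice of infrastructure, apply a candidate fix,
        # and measure whether latency actually improves there.
        return random.random() < 0.5, f"fix for {hypothesis}"

    def apply_to_production(fix: str) -> None:
        print("applying:", fix)

    def record_outcome(hypothesis: str, improved: bool) -> None:
        pass  # real systems feed this back to sharpen future hypotheses

    for _ in range(10):                                   # demo: ten control cycles
        latency = p99_latency_ms()
        if latency > LATENCY_SLO_MS:                      # 1. detect the spike
            for hypothesis in generate_hypotheses(latency):    # 2. hypothesize
                improved, fix = test_fix_in_canary(hypothesis) # 3. test safely
                record_outcome(hypothesis, improved)      # 4. learn either way
                if improved:
                    apply_to_production(fix)              # 5. act autonomously
                    break
        time.sleep(0.1)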

A Question Worth Considering

History rarely extends a formal invitation to investors, but Sutton’s “bitter lesson” may be the closest thing we’ll get. If the scaling laws continue to hold and help models close the gap in domain expertise and specialized capabilities, then what looks like a lasting advantage today may be only a temporary arbitrage opportunity.

As a venture capitalist, the only question to ask is: will this company be swallowed by the “bitter lesson”, or can it thrive by building for scale or for real innovation? For founders, the question is even simpler: are you building something that models will eventually replace, or something that models need in order to survive?

Translator: boxi.


