Traffic Lights for Autonomous AI Agents?

What Would William of Ockham Say? – A Reply to "Virtual Agent Economies" by Google DeepMind

Introduction: How Do We Conduct Economic Activity in the AI Age?

In its paper "Virtual Agent Economies" (Tomašev et al., 2025), the research institute Google DeepMind has bravely and farsightedly raised one of the most pressing questions of our time: How do we shape a future economy in which autonomous AI agents play a central role? The attempt to model this complex scenario and steer it via a "Sandbox Economy" is an important contribution to the debate. But what would William of Ockham say about their undertaking?

The core scenario sketched out by the authors is a hyper-complex web of permanently interacting AI-powered agents forming “virtual agent economies”. The paper is inherently interdisciplinary, touching upon the fields of economics, business administration, systems theory, cybernetics, and process management.

Therefore, its assumptions must be examined from the perspective of the established principles within these disciplines, including a nanoeconomic perspective on the AI-induced, process-driven transformation (Gerlach, 2025). Following Occam's principle of seeking the simplest explanations and solutions, the result of this examination is surprising: a consistent application of these principles leads to a vision that is not more complex, but radically simpler than that of the authors.

This essay focuses on three fundamental, implicit assumptions within the paper, testing their validity against recognized scientific principles by applying Occam's Razor. This approach leads to findings that contradict the paper's core conclusions while also confirming some of its important insights. Ultimately, it comes down to one question: Are we building a myriad of AI agents to take over our old tasks and jobs, or are we using AI systems to overcome these problems and thus create an elegant world of effortless outcomes?

Three Fundamental Assumptions in the DeepMind Model – and Their Flaws

1. The Assumption of "Task Substitution"

DeepMind's entire analysis is based on the seemingly plausible assumption that AI agents will primarily take over existing human tasks. The economy is viewed as the current stock of tasks that are simply redistributed to more intelligent and efficient AI-powered agents. This assumption is evident in the authors' framing:

Quote: "AI agents, by contrast, could take the form of 'flexible' capital, able to automate a diversity of cognitive tasks across industries and occupations." (Tomašev et al., 2025)

This assumption contradicts the most fundamental principle of process optimization: Analyze → Eliminate → Optimize → Automate. True efficiency gains in a company's workflow always begin with the question, "Does this process, this task, even need to exist?" Automation comes last.

The authors consider AI a tool that makes current, inefficient tasks more efficient. This is intuitive and corresponds to the human desire to avoid labor. But true intelligence—human or artificial—strives to achieve a result without effort, meaning to eliminate the task or to avoid the battle but still win it. Applied to business, this is the decision-maker's dream: "Dear AI, please perfect my processes so that the outcome prevents subsequent, undesirable tasks like repair, administration, or coordination from ever arising." This is how Occam would approach it, but only in the first iteration. Later, he would optimize all processes again, and then again, in a continuous cycle of improvement. The assumption of pure task substitution leads to processes in which many steps are handled digitally, instead of striving for superior outcomes that render many subsequent processes obsolete.
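The Analyze → Eliminate → Optimize → Automate sequence can be made concrete with a toy sketch. The code below is purely illustrative: the `Task` records, their costs, and the "optimization halves the cost" rule are all hypothetical assumptions, not anything from the paper. The point it demonstrates is ordering: value-free follow-up tasks are eliminated before anything is automated.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    creates_value: bool  # does the task contribute to the outcome?
    cost: float          # effort per execution

def transform(process: list[Task]) -> list[Task]:
    """Toy sketch of Analyze -> Eliminate -> Optimize -> Automate.

    Elimination comes before automation: tasks that create no value
    are dropped entirely instead of being handed to an AI automaton.
    """
    # Analyze & Eliminate: keep only tasks that contribute to the outcome.
    kept = [t for t in process if t.creates_value]
    # Optimize: assume, hypothetically, that optimization halves each cost.
    optimized = [Task(t.name, True, t.cost / 2) for t in kept]
    # Automate: only now would the remaining tasks be delegated to AI.
    return optimized

workflow = [
    Task("produce part", True, 10.0),
    Task("repair defect", False, 4.0),      # avoidable follow-up task
    Task("coordinate rework", False, 2.0),  # avoidable follow-up task
]
lean = transform(workflow)
print([t.name for t in lean], sum(t.cost for t in lean))
```

Automating the original workflow would have made "repair defect" and "coordinate rework" merely cheaper; eliminating them first makes them free.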

2. The Assumption of a Need for "An Agent Markets Control Architecture"

The authors assume that a safe and fair agent economy requires proactive human design and control—in essence, market regulation. This assumption underlies the following two arguments from the paper:

Quote: "By doing this, we argue for the proactive design of steerable agent markets to ensure the coming technological shift aligns with humanity's long-term collective flourishing." (Tomašev et al., 2025)

Quote: "Our central challenge, therefore, is not whether to create this ecosystem, but how to architect it to ensure it is steerable, safe, and aligned with user and community goals." (Tomašev et al., 2025)

This assumption springs from a deep concern about the consequences of a powerful technology in the hands of billions of people. But it is in direct contradiction to a fundamental law of cybernetics: Ashby's Law of Requisite Variety. In his seminal 1956 work, An Introduction to Cybernetics, W. Ross Ashby formulates this principle with stark elegance: "Only variety can destroy variety." (Ashby, 1956)

The law states that a control system can only manage as much variety (complexity) as it possesses itself. The "Virtual Agent Economy" described by DeepMind is a system of almost infinite variety. Any attempt to "steer" this system from the top down would require the control system to be more complex than the system being controlled. How could this possibly succeed? The planned, designed control system would be instantly overwhelmed by the complexity of the system it is trying to control.
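In its simplest quantitative form, Ashby's Law says a regulator with variety V_R facing disturbances of variety V_D cannot reduce the variety of outcomes below V_D / V_R. The sketch below illustrates the arithmetic; the specific numbers are hypothetical and chosen only to show the scale of the mismatch a top-down controller would face.

```python
import math

def min_outcome_variety(disturbance_variety: int, regulator_variety: int) -> int:
    """Ashby's Law of Requisite Variety, simplest quantitative form:
    a regulator can reduce outcome variety to at most V_D / V_R
    (rounded up). Only variety can destroy variety."""
    return math.ceil(disturbance_variety / regulator_variety)

# A toy agent economy generating a million distinct disturbances, faced
# with a hypothetical top-down controller commanding a thousand responses:
print(min_outcome_variety(1_000_000, 1_000))  # -> 1000
```

Even this generously endowed controller leaves a thousand distinct outcomes in play, far from the single goal state "safe and aligned"; only a regulator as varied as the economy itself could pin the outcome down.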

The authors themselves seem to sense this systemic limit when they describe the system's most critical property: its permeability. They state that this cannot be centrally controlled:

Quote: "Importantly, an economy's degree of permeability is a collective property: while it results from human choices, it is not under the control of any single actor." (Tomašev et al., 2025)

It is precisely here that their argument for a designed architecture of agent markets collapses. They call for the design of a steerable system while admitting that its most decisive property is uncontrollable and emergent. Any attempt to micromanage parts of such a complex system from the top down inevitably leads to an escalatory spiral: every regulation creates unforeseen adaptations and evasions, which in turn require new, even more complex regulations. The market regulator thus becomes a Sisyphus, permanently trying to stop an avalanche that they themselves helped to trigger. Such a control system is akin to a "traffic light system for autonomous cars" in cities. Smart cars, however, coordinate themselves with minimal digital overhead. How, then, do we solve the problem of control? The solution, as with autonomous cars, lies in the nature of the interactions themselves—which brings us to the next flawed assumption.

3. The Assumption of “Hyper-Frequency Transactions”

The authors postulate that future value creation will continue in a linear fashion, as it does today—transaction after transaction—only now in the form of an infinite, rapid coordination between millions of AI-powered agents. This assumption materializes in the following conclusion:

Quote: "...it is plausible that there would be strong emerging preferences towards AI-driven negotiation proceeding at a higher and higher frequency, so as to broker the best deals for each user." (Tomašev et al., 2025)

This assumption contradicts the economic principle that transaction costs are a form of inefficiency to be minimized (Coase, 1937; Williamson, 1985). The paper focuses on the process of transaction—the DEAL between AI agents—while neglecting the site of actual value creation: the transformation of inputs into products, the OPUS in production.

Permanent negotiation means permanent effort, permanent risks, and permanent sources of error—a dynamic no one would desire in an economy. Someone has to pay for it in the end. The result is an economy with permanent frictional losses and, thus, an inefficient one. The Occam-spirited solution is clear: It is efficient, and therefore intelligent, to minimize the number of external DEALs through a superior internal production process, thereby making subsequent tasks obsolete. The vision of a future with a perpetually running DEAL machine and an army of AI agents that must be constantly controlled and paid for is not sustainable. This contradicts people's natural loss aversion: they want products, and they want them fast, good, and cheap. They care about the outcome, not the process.
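The cost logic of this argument can be sketched in a few lines. All figures below are illustrative assumptions, not data: the point is simply that when every DEAL carries a frictional cost, an economy of hyper-frequency negotiation can lose to a modestly more expensive internal process that needs far fewer deals.

```python
def total_cost(production_cost: float, deals: int, cost_per_deal: float) -> float:
    """Total cost of delivering an outcome: the OPUS (production) plus the
    frictional cost of every DEAL around it (transaction costs in Coase's sense)."""
    return production_cost + deals * cost_per_deal

# Hypothetical numbers: hyper-frequency negotiation means many cheap deals;
# a superior internal process costs a bit more to run but needs few deals.
hyper_frequency = total_cost(production_cost=100.0, deals=10_000, cost_per_deal=0.05)
lean_process    = total_cost(production_cost=110.0, deals=10,     cost_per_deal=0.05)
print(hyper_frequency, lean_process)  # 600.0 vs 110.5
```

However small the per-deal cost, multiplying it by an ever-higher negotiation frequency makes the frictional term dominate, which is exactly the inefficiency transaction-cost economics tells us to design away.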

How Do We Master the Steering Challenge?

So, how is this new economy steered? It cannot be steered by a central authority, but the solution is strikingly simple: it will most probably steer itself, and here is why.

Economic principles will still apply, but the game will be played at a much higher level of competition. AI-powered tools, such as shopping assistants, will create radical transparency, changing the playing field by making every economic agent better informed. This includes producers, whose own AI-powered supply chain assistants will give them the upper hand in negotiations, forcing their suppliers to eliminate inefficient tasks to remain competitive. The entire supply chain, therefore, will be compelled to transform.

This dynamic creates a powerful feedback loop: consumers, now better informed, will overwhelmingly favor and reward producers who offer better, cheaper products. This, in turn, forces these producers to use AI to make their own workflows smarter by eliminating tasks and costs, and by replacing supplied products with better, cheaper alternatives. This very mechanism of efficiency, however, will consequently lead to job displacement, aggravating unemployment on a scale we have never experienced. That challenge will require solutions, but it is a topic for another essay.

Ultimately, the "steering" of this new economy is an emergent property of countless bottom-up economic decisions and actions, supercharged by AI. This is how it has always been, driven by a high degree of self-regulation in highly volatile markets; now, the process is merely accelerated. Assisted by AI, economic agents will defend their right to make rational decisions in an ocean of AI-induced uncertainty by fighting any attempt to install a deliberate, inherently flawed system of top-down control. Instead, the decentralized, rational actions of these individual agents are what will sustain and shape the new economy.


An Alternative Conclusion: The Garden Instead of the Factory

What happens if we replace these three short-sighted assumptions with the established principles of process optimization, self-regulation, and transaction cost minimization? We arrive at a systemically more coherent and thus more plausible vision of the future.

The new economy evolves from the old one. There are no new humans. The actors in the economy and society continue to pursue their age-old goals of satisfying needs and increasing efficiency. Adam Smith's invisible hand lives on, perhaps now a bit more visible. The crucial difference is that with AI, these actors have a radically superior tool for processing information and thus for preparing their decisions and activities.

How will they use these tools? Exactly as they have always done: as efficiently as possible, which means getting as much as possible for as little as possible.

This is human nature, as shown in our oldest fairy tales and dreams. People don't dream of an army of robots cleaning the house at night, washing the windows, and ironing the laundry. They dream that windows never get dirty, that laundry is wrinkle-free, and that the fridge is always full. People dream of the "magic table" (Tischlein, deck dich) – the perfect result without the tedious process.

Rational agents will initially use AI to perform old tasks better, but then they will strive to optimize their processes and thus make the tasks superfluous. This is in line with the three levels of happiness: The first level is having a solution to a problem. The second is having someone who solves it for you. The third and highest level is that the problem is gone. This sums up the history of human innovation.

As the nanoeconomic framework details, the dominant economic activity of the future will be the same as in the past: optimizing the product, the OPUS, in order to avoid subsequent NEEDs and thus conserve the BUDGET (Gerlach, 2025).

The loud, energy-intensive, hyper-complex factory of AI agents, proliferating at great expense, is rather unlikely. Closer to our desires is a Garden of Eden, where life is humane and material worries are confined to stories of the past. And what will people do in this garden? What humans have always done. What else could they do?

Conclusion: Don't Be Afraid of Your Own Courage

The vision of DeepMind seems plausible and fascinating, but it is a product of fear—the fear of the complexity that AI creates. The attempt to tame this complexity with an even greater, intentionally planned complexity is an endeavor destined to fail.

But this is no reason to despair. On the contrary.

The message to DeepMind and other AI developers is therefore one of encouragement: "Please, keep going. You are on the right track." Develop new, better AI systems, and trust that the actors out there in the economy—the people and companies—will figure out how to use them to achieve their age-old goal of building a better life.

References

Ashby, W. R. (1956). An Introduction to Cybernetics. Chapman & Hall.

Coase, R. H. (1937). The Nature of the Firm. Economica, 4(16), 386–405.

Gerlach, S. (2025). The Simple Nanoeconomics of AI: An Economic World Model for Exploring AI Impacts. Studeo Research.

Tomašev, N., et al. (2025). Virtual Agent Economies. arXiv preprint arXiv:2509.10147.

Williamson, O. E. (1985). The Economic Institutions of Capitalism. Free Press.
