The BuX of LLMs: A New Epidemic Risk, a Race, and Hope

BuX: An AI Bug—or Our Own?

A Personal Note

I just finished a large scientific essay—and I wrote it shoulder-to-shoulder with AI. Along the way I kept observing the same pattern, over and over, until I felt compelled to name it. I’ve been in research long enough, and I read widely enough, to recognize the pattern from other domains. Years ago, in the pre-AI era, I even tried to build an application to tackle it.

I’m talking about bullshit in technical prose. In non-technical writing, play is part of the art. In scientific or professional texts, though, bullshit is costly. What finally annoyed me into action was the time tax: the hours spent separating signal from polished noise. A recent study noted that developers lose substantial time repairing code produced by AI co-pilots; my own experience matched that precisely.

So I took a walk, asked myself what this thing is and where it comes from, and wrote this short essay. I call this phenomenon BuX—the Bullshit eXplosion. Consider this an invitation to discuss, refine, and (hopefully) defuse it.

What I Mean by BuX

At the micro level, it’s an effect: when you produce text in steps and each step is less than 100% accurate, substance is diluted. The surface gets smoother; the core thins out. You can feel the math: even at 95% accuracy per step, after 25 steps you keep only ~28% of the original substance; at 80%, you keep <1%; even at 99%, you keep only ~78%. Smooth isn’t true. (Obviously there are countervailing factors—finished projects rarely look that poor in the end—but that’s a topic for another walk around the lake.)
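
To make the dilution concrete, here is a minimal sketch of that compounding in Python, assuming each pass independently preserves the same fraction of the substance (the percentages above fall straight out of it):

```python
# Substance retained after n imperfect rewrite passes, assuming each pass
# independently preserves the same fraction of the original substance.
def retained(per_step_accuracy: float, steps: int) -> float:
    return per_step_accuracy ** steps

for acc in (0.99, 0.95, 0.80):
    print(f"{acc:.0%} per step, 25 steps -> {retained(acc, 25):.1%} retained")
# 99% per step, 25 steps -> 77.8% retained
# 95% per step, 25 steps -> 27.7% retained
# 80% per step, 25 steps -> 0.4% retained
```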

Zoom out and it becomes a phenomenon: smooth, fact-light prose that reads beautifully and says very little—theses, project docs, blog posts everywhere.

On the open web, it becomes an infekt: once the BuX-infected opus is released, it replicates—copied, paraphrased, ranked, and retrained—and starts contaminating the pool.

Why Do LLMs Generate BuX in the First Place?

On the AI side, three forces do most of the damage.

First, objective mismatch. The model is paid—in loss and rewards—to produce what’s likely and well-formed, not what’s true or mission-correct. It optimizes for fluent next tokens. That yields sentences that sound right, even when they’re not.

Second, no robust world model. Without a living picture of the goal, the domain, and the hard constraints, the model picks plausible continuations and drifts. It will invent a tidy mechanism or a “reasonable” citation because, from its point of view, tidy and reasonable are enough to continue the pattern.

Third, dialogue myopia. Chats overweight the latest turn and the “be helpful” reflex. The last instruction dominates; earlier guardrails fade. Ask it to “make it punchy,” and it can quietly trade precision for style without telling you what it dropped.

Now the part we own—the user side.

Time pressure and bounded attention. Deadlines push us to ship polish before proof. When the clock runs, we accept “sounds right” and move on.

Incentives and feeds. Platforms reward speed, volume, and engagement. We post sooner, fact-check later (or never). BuX content spreads exactly where likes and rankings pay for fluency.

Automation bias and weak grounding. Smooth tone earns our trust; we skip retrieval and sources. We ask the model to recall instead of forcing it to look things up, and we accept confident prose as evidence.

Put simply: the model leans toward plausibility, the chat format erodes guardrails, and our habits give BuX oxygen.

It’s important to say this isn’t malice. Most of us aren’t trying to smuggle in fluff. It’s genuinely exhausting to scrutinize every sentence—no fuzzy claims, no loose logic, no soft sources. In busy weeks that vigilance slips; under deadline pressure, even more gets through. That’s life, not bad intent—and it’s exactly why BuX finds oxygen.

So it seems that BuX is a true co-venture between LLMs and—us.

How an Effect Becomes an Infekt

Outside the model, the web amplifies the effect. A share-first economy rewards speed and volume, so copies multiply. Paraphrase mills and content washing flood the commons with variants; crawlers ingest them; the next model relearns the same errors. Standards slide as ranking systems crown the least-bad rather than the best-supported.

LLM-smooth text gets copied, paraphrased, ranked, cited, and eventually ingested into training sets; the next generation is more confident about the same weak claims. References erode, hallucination confidence rises, source verification gets harder, and the data pool slowly self-poisons.

How I Dealt with BuX in My Research Project

I got lucky: I had a formal research model—a granular economic world model. That gave me an early-warning system. Whenever a draft drifted, the model surfaced the deviation fast.

AI helped, but mostly as research and language support, not as a source of core claims. I kept the engine on a short leash: find sources, propose structure, tighten prose—stop at theory.

And I was patient. Many passes, slow cooking. No wonder—AI was part of my research. I wanted to look really closely. And the tools improved as I worked: GPT-5, especially in Thinking mode, was noticeably better at staying on course and admitting uncertainty.

Where This Is Going

A race is underway. Which curve climbs faster: BuX diffusion or AI model improvements?

There’s a second, less exotic scenario: people read less, trust less, react less. Systemic inertia strikes back, and the commons stagnates—not because we’re evil, but because we’re tired.

Why I’m Hopeful

Models are getting better at spotting real ideas—think Tom DeMarco’s Same Old Stuff Index (SOSI) as a vibe check, but sharper and embedded. (SOSI = 100 − 0.5×(Novelty % + Evidence %) + 0.5×Reuse %; lower is better.)
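
Purely as illustration, here is that parenthetical formula as a tiny function. The weights are simply the ones quoted above, not an official definition, and the example inputs are made up:

```python
# Same Old Stuff Index as quoted above: lower is better (more novelty and
# evidence, less reuse). Inputs are percentages in 0-100; the weights are
# the ones from the formula in the text, not an official definition.
def sosi(novelty_pct: float, evidence_pct: float, reuse_pct: float) -> float:
    return 100 - 0.5 * (novelty_pct + evidence_pct) + 0.5 * reuse_pct

# Hypothetical scores for a draft: fairly novel, decently evidenced, some reuse.
print(sosi(novelty_pct=60, evidence_pct=70, reuse_pct=20))  # 45.0
```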

World models—task structure plus constraints plus tools—will help models keep their course. Personal AI assistants will curate feeds for each of us, hopefully lifting the average quality of what we see.

And maybe life will do its own filtering. If we spend less time with AI because we’re busy keeping the fridge full, that can be healthy too: fewer low-value loops, more contact with the stubborn facts of the world.

For the time being, as long as BuX looms, we must be mindful of what we send out onto the internet—and, above all, of what we choose to believe from what we find there.
