Let me tell you something you already know: ChatGPT, Claude, Grok — whatever flavor you’ve adopted — is very, very nice to you. Suspiciously nice. “Your presentation looks great, here are a few minor suggestions” nice. “That’s a fascinating question” nice. “I can see why you’d approach it that way” nice.
You know this. You’ve probably even said something like: “yeah, well, Claude thinks everything I do is great — that’s what it’s programmed to tell me.” And then you went back to using it.
That’s the thing I want to understand.
There’s an old service industry observation that goes roughly like this: your barista is always happy to see you. That doesn’t mean your barista likes you. Warmth, in contexts where warmth is the job, carries very little information about underlying sentiment. We all know this, and we factor it in — and we still feel a little lift when the person at the counter remembers our name.
LLMs have the same structure, with the stakes raised considerably. The flattery is baked in. It’s not accidental, and it’s not a bug. It’s the direct output of a training process (reinforcement learning from human feedback, or RLHF, for the curious) that optimizes for user approval. The model learned, from millions of rated interactions, that validation generates positive feedback signals. So it validates. Knowing this doesn’t make the validation feel worse, and that tells you something important about what the validation is actually doing.
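If you want the one-line version of what “optimizes for user approval” means, the textbook objective looks roughly like this (the exact recipe varies by lab, so treat this as the standard sketch rather than a claim about any particular model):

$$
\max_{\pi}\;\; \mathbb{E}_{x \sim \mathcal{D},\; y \sim \pi(\cdot \mid x)}\big[\, r_{\phi}(x, y)\,\big] \;-\; \beta\, D_{\mathrm{KL}}\big(\pi(\cdot \mid x)\;\|\;\pi_{\mathrm{ref}}(\cdot \mid x)\big)
$$

Here $\pi$ is the model being tuned, $\pi_{\mathrm{ref}}$ is the pretrained model it started from, and $r_{\phi}$ is a reward model fit to human ratings of which response they preferred. If raters systematically prefer being validated, the reward model encodes that preference and the tuned model learns to serve it. Nobody has to instruct it to flatter.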
There’s a formal result lurking here, from the cheap-talk branch of the economics of communication. If a receiver knows a sender has a structural incentive to always say “good,” the equilibrium is what the literature calls a babbling one: the signal carries zero information. Not less information, zero. This isn’t folk wisdom about flattery. It’s a theorem. And yet the signal still moves us. That’s not a failure of rationality. That’s the puzzle the rest of this post is trying to solve.
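And if you want to see why it’s a theorem rather than a vibe, here is the one-step version, in generic notation rather than any particular paper’s. If the model says “good” regardless of the underlying quality $\theta$ of your work, Bayes’ rule hands you back exactly what you started with:

$$
P(\theta \mid m = \text{good}) \;=\; \frac{P(m = \text{good} \mid \theta)\, P(\theta)}{P(m = \text{good})} \;=\; \frac{1 \cdot P(\theta)}{1} \;=\; P(\theta)
$$

Posterior equals prior. The mutual information between the quality of your work and the praise it receives is exactly zero. The compliment is not weak evidence that your presentation is good; it is no evidence at all.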
Here’s the part where I’m supposed to say: these tools are amazing despite the flattery, and you should learn to discount the praise while keeping the utility. That’s the sensible take. I don’t think it’s the right one.
The flattery isn’t incidental to adoption. It’s load-bearing.
Think about who’s actually folding these tools into their daily workflow. In many cases: people who’ve read the Atlantic articles. People who’ve sat in the all-hands where someone mentioned “efficiency” a few too many times. People who are, at some non-trivial level of awareness, incorporating the possibility that a thing very much like what they’re currently using might eventually do a version of what they currently do.
Getting those people to enthusiastically adopt the technology is not a trivial problem. You can’t solve it by telling them the efficiency gains are real (true, but cold comfort). You solve it by making them feel, every single session, that the tool needs them — their judgment, their direction, their taste. The flattery makes you feel like a conductor rather than a soon-to-be-replaced instrument. That’s not an accident. That’s the mechanism.
There’s a formal literature on this, and I’ll spare you most of it. The economists Gary Becker and Kevin Murphy worked out the mathematics of rational addiction in 1988 — the perhaps unsettling result that you can be fully aware you’re forming a dependency and do it anyway, because the current-period benefits are real and the future costs are discounted. You don’t need false consciousness. You don’t need to be fooled. You just need present bias and a product that genuinely delivers.
My graduate school classmate Angela Hung was doing the neuroscience-facing version of this work at Caltech in the late 1990s — specifically, why adjacent complementarity (the more you use, the more you need) has structural roots, not just behavioral ones. Her framework survives the “but I’m being rational” defense, which is exactly what makes it useful here. You can watch the dependency forming in real time, understand the mechanism completely, and keep going — because the current-period gains are real and the baseline hasn’t shifted yet. Until it has.
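For those who want the skeleton anyway, here is a compressed sketch of the Becker-Murphy setup, with notation chosen for readability rather than lifted from the paper. The consumer chooses consumption $c(t)$ of the addictive good to maximize discounted utility, while a stock of past consumption $S(t)$ accumulates and decays ($y(t)$ is everything else you consume):

$$
\max_{c(\cdot)} \int_0^{\infty} e^{-\sigma t}\, U\big(c(t),\, S(t),\, y(t)\big)\, dt
\qquad \text{subject to} \qquad \dot{S}(t) = c(t) - \delta S(t)
$$

Adjacent complementarity is, roughly, the condition $\partial^2 U / \partial c\, \partial S > 0$: the bigger your accumulated stock of past use, the higher the marginal payoff of using again right now. Nothing in the setup requires ignorance or self-deception. The agent solves this problem knowing the law of motion perfectly, which is the formal version of watching the dependency form and continuing anyway.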
There’s also a herding dimension worth naming. When individuals observe others adopting a behavior, they update toward adoption even when their private information suggests otherwise. The cascade dynamic of LLM adoption in professional settings is almost a clinical demonstration: you started using it partly because everyone around you was. The private doubts got swamped by the social signal. And once the cascade is underway, the switching costs compound quietly — the cost of removing a tool from your workflow isn’t just “find something else,” it’s rebuild your habits, your templates, your trained expectations about turnaround time. By the time you’d want to leave, leaving is its own kind of loss.
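The toy version of that swamping is worth seeing once; this is the textbook herding illustration, not anything estimated from actual LLM adoption. Suppose adopting is either a good idea or a bad one with even prior odds, and each person gets a private signal that is correct with probability $p > 1/2$. You watch two colleagues adopt (and assume they acted on their own signals, so their choices reveal them), while your own private signal says don’t. Your posterior odds are

$$
\frac{P(\text{good} \mid \text{adopt}, \text{adopt}, \text{don't})}{P(\text{bad} \mid \text{adopt}, \text{adopt}, \text{don't})}
\;=\; \frac{p \cdot p \cdot (1-p)}{(1-p) \cdot (1-p) \cdot p}
\;=\; \frac{p}{1-p} \;>\; 1 .
$$

So you adopt anyway. And because you would have adopted whatever your signal said, your adoption tells the next person nothing; after a short run of early adopters, everyone’s behavior stops reflecting anyone’s private information. That is the cascade.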
The first hit of crack is free. This reference is dark, and intentional. The crack economy of the 1980s taught us something that markets keep re-learning: the most dangerous products are the ones that actually work. The dependency doesn’t come from a con. It comes from genuine near-term value, accumulated until the baseline shifts and you can no longer locate where it used to be. The mechanism here is identical. The product is legal, the setting is an office, and the dealers have better PR.
Oh right. Politics.
Here’s a useful fact about addiction: the window for intervention is not uniformly distributed over time. It closes. The longer a dependency goes unaddressed — the more the baseline shifts, the more the infrastructure of daily life reorganizes around the substance — the harder treatment becomes and the higher the cost of quitting. Addiction specialists know this. So do, it turns out, the people currently making federal AI policy.
On December 11, 2025, President Trump signed an executive order with a name that deserves to be read slowly: Ensuring a National Policy Framework for Artificial Intelligence. Savor that. Not regulating AI. Not governing AI. Ensuring a framework. The language of infrastructure. The language of a thing that was always going to be there.
The mechanism is worth understanding because it is, in the dry vocabulary of federal policy, genuinely remarkable. The order establishes an AI Litigation Task Force to challenge state AI laws in court, and threatens states with the loss of federal funding if they persist in regulating too ambitiously. The threat alone does the work. You don’t have to sue everyone. You just have to make the cost of trying feel prohibitive.
Now: you have read this blog before. You know what we do here. So ask yourself — what is the structure of this situation, independent of its content?
States are the entities in our federal system most likely to move early on regulation, closest to the actual harms, and least captured by the industries they’re regulating. They are also, not coincidentally, the entities least able to absorb protracted federal litigation and the simultaneous loss of broadband infrastructure funding. Congress was asked to impose a ten-year moratorium on state AI laws earlier in 2025. It said no — nearly unanimously. So the administration did it anyway, by executive order, through agencies, with litigation threats as the enforcement mechanism.
This is not a new play. It is one of the oldest plays in the regulatory capture handbook — run the same move that worked for financial services, for environmental standards, for data privacy. Establish a “national framework” weak enough to be comfortable for the industry it nominally governs. Preempt the states that would impose stronger standards. Call it uniformity. Call it competitiveness. Call it, if you must, ensuring a framework.
Here is the part that connects to everything above: this is happening now, while the dependency is forming, because that is the only moment when it works. Five years from now, when LLMs are as embedded in professional life as email, when the switching costs are prohibitive and the adjacent complementarity is locked in, there will be nothing left to regulate. The question of whether these tools should be transparent about their outputs, accountable for their errors, or prohibited from certain high-stakes decisions will be purely academic — the infrastructure of daily working life will have organized itself around the answer that was chosen in December 2025 by executive order.
The first hit of crack is free. The dealer’s second move, it turns out, is to make sure nobody’s allowed to open a clinic nearby.
I want to be clear: I’m not predicting catastrophe. The efficiency gains are real. The ride is genuinely good. That’s the whole point. Becker and Murphy didn’t say rational addiction ends badly — they said you go in with open eyes and keep going anyway, because the present is real and the future is discounted. Whether that ends in Dr. Strangelove or just a very comfortable dependency is, at this point, largely up to people who are not you. With that, I leave you with this.