Watch two people on the same team use the same AI tool for the same kind of work and you’ll see two failure modes.
Person A accepts everything. They type a prompt, review the output quickly, clean up a few rough edges, and ship it. Fast. Efficient. The work looks polished because AI output always looks polished — complete sentences, smooth transitions, confident assertions. Person A feels productive. The output is competent but generic. It could have come from anyone with the same subscription.
Person B rewrites everything. They fight the tool at every turn, complain that “it doesn’t understand our business,” treat every suggestion as an insult to their expertise, and end up spending more time arguing with the AI than it would have taken to do the work from scratch. Person B feels principled. The output is fine, but the tool contributed nothing.
Both are frustrated. Both blame the tool. Neither can see that they’re producing the same result through opposite behaviors: the tool’s potential, completely wasted.
I’ve been watching this pattern for longer than AI has been on anyone’s radar. Fifteen years of running a marketing agency taught me that people do this with every tool, not just AI. The CRM that never gets used because the sales team “already has their own system.” The analytics dashboard that gets checked religiously but never informs a decision. The website redesign that gets approved and then slowly reverted to the old version because the founder “just likes it better the other way.”
Two responses. Surrender to the tool and stop thinking. Fight the tool and refuse to change. Both are rational if you don’t have a clear model of what you’re supposed to contribute and what the tool is supposed to handle. And most people don’t have that model. Nobody gave them one. The training covered which buttons to press, not how to think about the collaboration.
The automation research community named these patterns decades ago. In aviation, where the stakes are higher and the data is better, “misuse” means overreliance on automation — trusting the autopilot when conditions require human judgment. “Disuse” means ignoring automation — overriding systems that are more accurate than the pilot’s gut instinct. Both crash planes. Both have the same root cause: the human doesn’t have an accurate model of when to trust the system and when to trust themselves.
AI makes this problem worse because of something I think of as the fluency trap. Bad work usually looks bad. A rough draft, a poorly reasoned argument, a spreadsheet with obvious gaps — your brain registers them as incomplete and prompts you to fix them. AI output doesn't look bad. It looks polished even when it's wrong. The confident assertions, the smooth transitions, the professional tone — these are the quality signals your brain uses to decide "this is mostly done." When those signals are present but the underlying substance is weak, your normal quality-checking instinct never fires.
That’s why Person A doesn’t catch the problem. The output feels finished. And it takes genuine effort to override that feeling and interrogate work that looks good.
Person B has the opposite problem. They’re so protective of their expertise that they can’t let the tool contribute anything. Every AI suggestion feels like a challenge to their competence. So they reject across the board — the bad suggestions and the good ones — and end up doing the work alone while paying for a tool they’re not using.
The healthy middle isn’t obvious, but it exists. A consultant I respect uses a simple test for every piece of AI-generated work: “If I handed this same prompt to any other person with access to this tool, would the output be meaningfully different?” If the answer is no, she hasn’t contributed yet. The tool produced commodity output and she accepted it. If the answer is yes — because she brought her specific client knowledge, her understanding of the competitive landscape, her judgment about what matters in this particular situation — then the output has her fingerprint on it. That’s the difference.
The distinction isn’t about how much time you spend with the tool. It’s about whether your expertise actually enters the collaboration. Surrender withholds it by disengaging. Fight withholds it by refusing to let the tool participate. Both keep the best version of the work from happening.
There’s a third option, though it takes more discipline than either surrender or fight. Use the tool for what it’s good at — volume, breadth, speed on routine tasks — and bring your judgment for what requires judgment. Delegate deliberately, not by default. Engage deeply where depth matters. And build the habit of checking whether the work reflects what you know, not just what the tool produced.
If you find yourself in one of these two patterns — accepting everything or rewriting everything — that's a signal, not a personality trait. It means the model of how you and the tool should work together hasn't been built yet.