A financial analyst I know spent fifteen years building a reputation as the person who created the most thorough models in her department. Her spreadsheets were elegant — complex revenue projections with layered assumptions, sensitivity analyses that anticipated three scenarios, formatting that made the data readable at a glance. People came to her because her models were better than anyone else’s.
Then AI got good at building financial models. Not just basic ones. Sophisticated models with the right structure, reasonable assumptions, proper formulas. In minutes. The skill she’d spent a career developing was suddenly available to anyone with a subscription.
Her first reaction was fear. If the tool can do what she does, what’s she worth? Her second was a kind of detached compliance: she started using AI to generate the models and spent her time editing them. Faster, but the work lost something she couldn’t name. The models were structurally sound and strategically empty.
The shift happened when a junior colleague showed her an AI-generated model with a subtle error in the revenue recognition timing. She caught it instantly. Not because she was looking for mistakes, but because she knew this particular client’s contracts had unusual milestone triggers that changed when revenue was recognized. That knowledge wasn’t in the prompt. It wasn’t in any database the AI could access. It was judgment built over a decade of working with this client and this industry.
What she realized, eventually, is that the model was never really the expertise. The model was the vehicle. Her actual expertise — the thing the client paid for and the AI couldn’t replicate — was understanding what the numbers meant in context. Which assumptions were safe and which were wishful. Where the standard model would mislead because this business operated differently than the template assumed.
AI stripped away the vehicle and revealed what was underneath.
I think this is happening across every profession that involves both mechanical skill and judgment, which is most of them. The mechanical components — the tasks that follow a procedure, that have a right answer, that can be taught as steps — are increasingly something AI handles well. Fast, tireless, consistent. Available to anyone for twenty dollars a month.
That availability changes the math. When everyone has access to the same mechanical capability, the mechanical part differentiates nothing. Two marketing strategists ask the same AI for a Q4 campaign strategy. Both get back competent, generic work. The one who brings her knowledge of this specific brand’s competitive position, her understanding of which channels actually convert for this audience, her insight from last year’s data about what worked and why — that one produces something specific, strategic, and valuable. Same AI. Different human contribution.
The difference is what economists would call the scarce input. AI capability is abundant. Your contextual understanding, your judgment about what matters in this particular situation, your ability to spot the error that looks right but isn’t — that’s scarce. And scarce inputs are where value concentrates.
This framing changes the question people ask about AI. Instead of “will AI replace me?” — which is the wrong question for anyone whose work has a judgment layer — the useful question is: “What do I bring to this collaboration that makes the output different from what anyone else with the same tool would produce?”
If the answer is “nothing,” that’s a problem. But it’s a solvable one, and the solution isn’t learning more about AI. It’s bringing more of your actual expertise into the interaction. Your domain knowledge. Your client context. Your read on the situation that no database captures.
I should be honest about a limitation here. Not every job has a judgment layer underneath the mechanical. For work that is genuinely procedural, where the human contribution is primarily following steps correctly, AI does represent displacement. Pretending every role has hidden depth would be dishonest. Some work is mechanical, and AI does mechanical work well.
But most professional work — the kind that commands a salary, that requires years of experience, that produces different results depending on who does it — has that judgment layer. The problem is that people often build their professional identity around the wrong part. Around the spreadsheet rather than the insight. Around the deliverable rather than the thinking. Around the mechanical skill rather than the judgment that guided it.
AI forces a reckoning with that identity. It’s uncomfortable. Research shows that peers rate colleagues lower on competence for using AI tools — a measured social penalty for adoption. People sense that using AI threatens something about how they’re valued, and they’re not wrong about the feeling. They’re just wrong about what’s being threatened.
What’s being threatened is the mechanical component. What remains — and what becomes more visible, more necessary, and more valuable — is everything the mechanical component was built to express. The judgment. The taste. The contextual intelligence that no tool can replicate because it lives in the specific intersection of your experience, your relationships, and your understanding of this particular problem.
AI doesn’t diminish that. It clarifies it. But only if you’re willing to look at what’s left when the mechanical is stripped away and recognize it as the thing that was always actually yours.