126,000 tech jobs cut in 2026. That number is still climbing, at roughly 973 per day.

But here's what makes this wave different from every layoff cycle before it: the jobs being created don't look like the jobs being eliminated. And the gap between those two categories is where the actual story lives.

The Obvious Narrative Is Wrong

The easy take is "AI is replacing jobs." The reality is more nuanced and (I think) more interesting.

Companies aren't hiring fewer people because AI can do the work. They're hiring different people because the nature of the work changed. The roles being cut are largely execution-focused — people who were hired to do a specific, repeatable thing well. The roles being created require something harder: the ability to combine deep domain knowledge with AI tooling to produce outcomes that neither could achieve alone.

That's not "AI replacing people." That's a fundamental shift in what makes someone valuable at work.

The Data Behind the Shift

The numbers here are striking:

  • Workers with AI skills earn 56% more than their peers in the same role without them
  • But the highest-paid aren't the ones with the most AI skills — they're the ones with the deepest intersection of AI and domain expertise
  • Adding a second domain specialization alongside AI commands a 20-38% salary premium
  • Forward-Deployed Engineer roles (the Palantir model) grew 800% in 2025 and are now standard across industries

The market isn't rewarding AI proficiency in isolation. It's rewarding the combination of AI proficiency with deep knowledge of a specific domain. The person who understands healthcare AND can build AI workflows for clinical teams is worth dramatically more than someone who can do either one alone.

The Skills Paradox

Here's the part that surprised me when I dug into the Gartner data:

By 2027, 75% of hiring processes will include AI proficiency testing as part of the interview. That tracks — AI fluency is becoming table stakes, like Excel was 15 years ago.

But half of those same companies will also require AI-free assessments. They want to verify that candidates can think independently, without the tools.

Think about what that means. The most valuable hire in 2027 isn't the person who's best at using AI. It's the person who's excellent at their domain, excellent with AI, and can clearly demonstrate where one ends and the other begins.

That's a high bar. And it explains why the salary premium is so large — the supply of people who meet it is genuinely small.

What This Looks Like in Practice

I see this every week in my consulting work. The CTOs I talk to aren't asking "should we use AI?" anymore. They're asking "how do I restructure my team around AI without losing the domain expertise that makes us good at what we do?"

The answer I keep coming back to: you don't replace your domain experts with AI engineers. You invest in making your domain experts AI-fluent. The person who's spent a decade understanding your industry and can now multiply their effectiveness with AI tools — that's the hire that compounds.

The pure AI specialist who doesn't understand your business? They can build you a technically impressive system that solves the wrong problem. I've seen it happen enough times that it's become a pattern.

The Uncomfortable Implication

If you're a technical leader reading this, the question isn't whether to adopt AI. It's whether your team development strategy reflects this shift.

Are you training your best domain experts to work with AI? Or are you hiring AI specialists and hoping they'll learn the domain?

The data suggests one of those approaches produces the people who command that 56% premium. The other produces impressive demos that don't ship.

I've been thinking about this a lot — both for my clients and for my own practice. The consultants and fractional CTOs who thrive in this market will be the ones who combine AI implementation expertise with deep understanding of their clients' domains. The ones who can only talk about AI in the abstract will get squeezed.

That's not a prediction. That's what I'm already seeing.


The AI Systems Readiness Checklist I published this week evaluates whether your AI project is model-centric or systems-ready. It's the same framework I use with clients. Get it here.