The 'AI can code' effect on technical interviews
Something is shifting in technical interviews, and nobody’s quite figured out what to do about it.
A year ago, most coding interviews still followed the same format they’d had for a decade. Write a function on a whiteboard. Solve an algorithm in a shared editor. Build a small feature in a take-home. The assumption was always that the ability to produce correct code under pressure tells you something meaningful about a candidate.
That assumption is getting harder to defend. Not because candidates got better at algorithms. Because AI got good enough at writing code that testing someone’s ability to produce code — mechanically, from memory — feels less useful than it used to.
What’s actually changing
I don’t have insider access to how Google or Meta run their loops. This is what I’m seeing in my corner — mid-size European companies, startups, remote-first teams. Maybe 30 interviews total on both sides of the table over the past few years. Not a huge sample.
Fewer “implement this algorithm” questions. Not zero, but fewer. Interviewers know a candidate could have an LLM write the solution. That doesn’t make the question worthless — watching someone think through a problem still has signal — but the weight of “did you get the correct answer” is dropping relative to “did you reason about it well.”
More “explain and discuss” format. Technical discussions rather than coding exercises. “Here’s a system. Walk me through how you’d change it. What are the trade-offs?” Harder to game with AI because it tests judgment, not syntax.
Take-homes are getting deliberately vague. The ones I’ve seen recently aren’t “build this exact thing.” More like “here’s a messy problem, make some decisions and explain them.” The code is almost secondary to the written explanation of your choices.
Pair programming with AI allowed. Some companies are explicitly letting candidates use Copilot or ChatGPT during the interview and evaluating how they use it. Can you prompt effectively? Can you catch the bugs in what it gives you? I’ve only heard about this from a couple of places though.
The uncomfortable part
Here’s what nobody in hiring wants to say out loud: most interviewers can’t reliably tell if a candidate used AI to prepare their answers. Not the code — the explanations. I’ve seen candidates give perfectly structured, comprehensive answers about system design trade-offs that sounded rehearsed in a way that felt… generated. But I couldn’t prove it. And I’m not sure it even matters if the candidate actually understood what they were saying.
That’s the part I haven’t resolved. If someone uses ChatGPT to prepare a great explanation of CQRS trade-offs, and they can answer follow-up questions about it, did they learn it or memorize it? In practice the line between those two things is blurry. I’ve caught myself not knowing whether I “really understand” something or just got good at reciting it.
And then there’s the practical question — what do you actually do when you suspect it? I’ve been in interviews where a candidate gave a flawless explanation of event sourcing, like textbook-perfect structure, clear trade-offs, no hesitation. So I asked a sideways question: “have you ever had to debug a projection that got out of sync?” Total blank. Not “I haven’t dealt with that” — just nothing, like the script ran out. You notice it and you move on. You can’t write “I think they used ChatGPT” in your feedback. You just note that the depth wasn’t there.
The irony is that I use AI at work constantly. Copilot, ChatGPT for debugging weird Vue reactivity issues, generating test data. It’s a normal part of how I write code at Refurbed. So when I’m on the other side of the table evaluating whether someone’s answers are “really theirs” — there’s a hypocrisy there that I’m aware of but don’t have a clean answer to. The tools are the same. The context is different. That distinction feels important but also kind of arbitrary.
What actually differentiates candidates now
If code production is less of a differentiator, what’s left?
Explaining why, not just what. “I’d use a message queue here” is a statement. “I’d use a message queue because this operation takes 3-4 seconds, the user doesn’t need the result immediately, and if it fails I want automatic retries without the user re-triggering it” — that’s reasoning. AI can generate the first. The second requires actually understanding the system you’re building.
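The reasoning in that second answer maps directly onto code. Here's a minimal in-process sketch of "automatic retries without the user re-triggering" — the function names and defaults are illustrative, not from any real queue library; in production this logic lives in RabbitMQ, SQS, Sidekiq, or similar:

```python
import time

def run_with_retries(job, *args, attempts=3, backoff=0.1):
    """Run job(*args); on failure, retry with linear backoff.

    An in-process stand-in for what a real message queue does
    for you. The parameters here are hypothetical defaults.
    """
    for attempt in range(1, attempts + 1):
        try:
            return job(*args)
        except Exception:
            if attempt == attempts:
                raise  # out of retries; a real queue would dead-letter the job
            time.sleep(backoff * attempt)  # wait a bit longer each time
```

The interview signal isn't this code — it's being able to say why the retry belongs here, what counts as a transient failure, and what happens to the job when the retries run out.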
Navigating ambiguity out loud. Real engineering work is ambiguous. Requirements are fuzzy. The ability to say “I don’t know this part well, but here’s how I’d think about it” and then actually think about it coherently — that’s more valuable than having a polished answer to a clear question.
Evaluating trade-offs with actual specifics. Not “Redis gives us speed but adds complexity.” That’s a line from a textbook. More like: “Redis makes sense here because our read pattern is simple key-value lookups, but it means another service to monitor and our team has zero Redis ops experience, so the first production incident is going to be painful.” The specificity is what matters. Anyone can name trade-offs in the abstract.
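The read pattern that candidate is describing is plain cache-aside. A sketch, with a dict standing in for the Redis client so it stays self-contained (a real version would use redis-py's get/set with a TTL, but the shape is the same):

```python
from typing import Callable

cache = {}  # stands in for the Redis instance in this sketch

def fetch_user_from_db(key: str) -> str:
    """Hypothetical slow database lookup (the expensive path)."""
    return f"value-for-{key}"

def cached_get(key: str, loader: Callable[[str], str]) -> str:
    # Cache-aside: check the fast key-value store first,
    # fall back to the source of truth and populate on a miss.
    hit = cache.get(key)
    if hit is not None:
        return hit
    value = loader(key)
    cache[key] = value  # with real Redis you'd SET with an expiry
    return value
```

Ten lines of pattern; the differentiating answer is everything around it — stale data, cache invalidation, and who gets paged when the Redis node your team has never operated falls over.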
None of these are new skills. They’ve always mattered. But they used to be the tiebreaker between two candidates who could both write correct code. Now they’re the main thing.
How this changes prep
The classic strategy — grind LeetCode, memorize patterns — still works for some interviews. But if that’s your entire strategy, you’re optimizing for a shrinking slice of the evaluation.
The thing I’d add is verbal practice: not just thinking through technical concepts, but saying them out loud. I’m building Prepovo for exactly this reason — I kept noticing that devs who knew their stuff would fumble the explanation in real time. The knowledge was there. The ability to articulate it under pressure wasn’t.
Where this goes
Honestly? No idea. The industry is figuring this out in real time and doing a messy job of it.
Maybe coding interviews stick around because they’re easy to standardize. Maybe they become “code with AI and explain what you built.” Maybe the whole format shifts toward trial periods and paid project work. All of those are plausible and none of them are happening fast.
The developers who invest only in technical knowledge — the “what” — are going to find interviews harder as AI commoditizes that. The developers who invest in judgment and the ability to reason out loud are going to have an edge. That part I’m fairly confident about.
The rest? Ask me again in two years.