Leetcode is dead. If you're still asking candidates to invert binary trees without AI assistance, you're screening for skills that peaked in relevance circa 2015.
Here's the uncomfortable truth: every developer you hire will use LLMs on the job.
Yet you still see people engaged in an absurd arms race - proctoring systems, browser lockdowns, forcing in-person interviews at remote companies - all to prevent candidates from using the exact tools they'll use every single day if hired.
What Actually Matters
Let's call out the elephant in the room: LLMs are fundamentally changing software development. Used poorly, they lead to a proliferation of sloppy code and bad abstractions. But used effectively, they can drive engineering productivity to dramatic new levels.
Interviews are proxies for on-the-job performance. When the job has fundamentally changed but the proxy hasn't, you're measuring the wrong thing entirely.
So how do you design an interview process to test for these new skills? How do you determine who will dramatically up their productivity, and who will be writing slop?
This is far from a solved problem, but here's how we're now handling the technical parts of our interview process.
Phase 1: The Curiosity Test (30 min)
This starts from a very simple opening: "Tell me about something interesting you've built." Then we dig. What broke? What would you change if you did it again? If you were the BDFL for the language or framework you used, what would you fix?
The goal is to understand how they think about their work and their tools. Do they learn from what they're doing? Can they reflect and retrospect? Do they think about the tradeoffs inherent in any particular design or set of tooling?
Why does this matter? Because you have to keep learning to be able to deal with and leverage ever-changing tools, and the ability to think critically about the tools and approach you are using becomes even more important when an LLM is making the suggestions.
Phase 2: System Design (1 hour)
This again starts from something very simple: a vague, high-level problem statement. From there, the interviewer roleplays a non-technical PM, and the interviewee's job is to extract requirements, explore the problem space, and design a buildable solution.
Then we change it. Scale changes. New constraints. New features or requirements.
Because, after all, that's the job - figuring out how to make a buildable system and then evolving it as the needs evolve. And this ability to strategically design a crystal-clear system is key to getting an LLM to actually build it. Sloppy definition leads to sloppy prompting leads to sloppy code.
Phase 3: LLM-Based Coding Interview (1 hour)
Use whatever AI tools you want. Build something real. We can start from the system design problem, from a leetcode problem, or from an open source project you've been working on, but we want to see you use the tools to build it.
Then we push you past your comfort zone and see if you can use the LLM to make it happen. Turn it into a webapp. Add reporting. Add RBAC. Once again, evolve the system, and do it at the lightning pace that LLMs enable.
In this interview we're watching how you prompt, how you verify outputs, how you course correct the LLM, and crucially: what you do when the AI fails you.
The Skills That Actually Matter
The best engineers in 2025 aren't the ones who can implement mergesort from memory. They're the ones who can:
- Decompose ambiguous problems into solvable chunks
- Craft prompts that get useful outputs on the first try
- Spot when an LLM is heading in the wrong direction
- Learn as quickly as the technology is evolving
- Push the tools to the limits
- Know when to trust the machine and when to override it
I don't know that what we're doing is the best way we could possibly filter for these skills, but it's definitely better than what we were doing before.
How are you handling this? Any techniques you've seen or tried that are effective?
And just in case this piqued your interest... we're hiring.