Using coding assistants in technical interviews

tl;dr

  • The day-to-day work of being a software engineer is changing rapidly, due to coding assistants.
  • It makes sense for us to assess candidates' skills against these new patterns of engineering work.
  • However, over-use of coding assistants can hinder our ability to assess technical fit.
  • It's essential to ensure that the candidate understands the code, even if they didn't write every character.

Our position is:

  • We encourage candidates to use their standard coding assistant tools during technical interviews.
  • If the assistant starts to fill in gaps in a candidate's conceptual understanding, we ask the candidate to rely less on it.

What do I mean by "assistants"?

There is a world of difference between coding assistance and coding assistants. (Let's hope nobody is using text-to-speech on this.)

By coding assistance, I mean things like syntax highlighting, dumb symbol completion, and automatic closing of parentheses. The programmer is not literally typing every single character, but they have total ownership of the semantics.

By coding assistants, I mean tools like Copilot or IDEs like Cursor. The programmer might not be familiar with all of the syntax or APIs being used, and in extreme cases might not really know what the program is doing.

For the purposes of this post, I'm only talking about the latter category. I would be shocked if anyone banned assistance, but would love to hear about examples!

The purpose of interviews

Most importantly: let's establish what we're aiming for when conducting interviews. The primary considerations are:

  1. Did the candidate learn more about how well the role suits their interests?
  2. Did we learn more about how well the candidate will perform in the role?

The topic of coding assistants is relevant to both of these questions.

The case for allowing coding assistants

Interviewing goal 1: the candidate better understands the role

To achieve this goal, it's crucial for the technical interview to be as realistic as possible.

Most Elicians use coding assistants all the time, and my expectation is that any candidate we hire would too. For that reason, a blanket ban on coding assistants would give the candidate a distorted view of what it would be like to work together. We use some pairing-style interviews at Elicit, and the dynamic of the back-and-forth is incredibly important. Why would we set it up to be unrealistic and awkward?

In addition, banning assistants might mean the candidate views Elicit as a Luddite organisation – which we're not! Our whole mission is to scale up good reasoning: we love using new technology to enhance human capabilities. The same principles apply to how we get our own work done, and it's useful to the candidate to see that demonstrated.

Interviewing goal 2: we can assess the candidate's fit for our role

Many people are now conditioned to use coding assistants all the time. Taking these tools off the table could unsettle the candidate by forcing them out of their habitual development patterns.

Interviews are already a high-stress situation for candidates, and adding more stress only gets in the way of our ability to see what it would really be like to work with them. The last thing we want to do is needlessly throw candidates off balance and make it less likely we'll see them at their very best.

There's another reason it makes sense to allow coding assistants: using them well is a skill like any other, and it's informative to see how adept a candidate is at getting the most out of the tools.

Some candidates have blindly accepted big blocks of code from Copilot, only for that code to contain an obvious bug which held them back later in the interview. It's unfortunate for the candidate, but it's an incredibly valuable signal for the interviewer – just not a positive one.
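
To make this concrete, here's a purely hypothetical sketch in Python – not taken from any real interview – of the kind of plausible-looking snippet a candidate might accept wholesale:

    # Hypothetical illustration: a plausible-looking helper an assistant might
    # suggest, and a candidate might accept without reading closely.
    def moving_average(values, window):
        """Return the average of each sliding window over `values`."""
        averages = []
        for i in range(len(values) - window):  # off-by-one: the final window is dropped
            averages.append(sum(values[i : i + window]) / window)
        return averages

    print(moving_average([1, 2, 3, 4, 5], 2))  # [1.5, 2.5, 3.5] -- the expected 4.5 is missing

The bug is easy to spot if you read the loop bounds, but that's exactly the reading a candidate skips when they accept a suggestion wholesale.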

The case for moderation

Up until recently, I encouraged candidates to use any and all coding assistants: Copilot, Copilot Chat, ChatGPT, Cursor, …

This changed when a candidate using the Cursor IDE pasted a whole file into the chat interface and asked what bugs existed in the code.

I was surprised, but shouldn't have been, when the underlying model successfully spotted the exact problem holding the candidate back (along with a few other issues which were less pertinent).

This didn't count against the candidate. In fact, I was profoundly impressed and learnt a valuable lesson about this brave new world we're all heading towards.

However, I realised that when the assistant interactions are as high-level as this example, we lose a great deal of insight into the capabilities and preferences of candidates. It strictly detracts from our second goal.

Summary

Our operationalisation of this principle, where we encourage the use of coding assistants in moderation, comes down to a single question:

Does the candidate understand the code?

This is merely a rule of thumb: we each might interpret it slightly differently and there will always be exceptions, but it's a good starting point.

Concretely:

  • I encourage the candidate to use coding assistants as much as they want.
  • If we start to stray into territory where they are delegating conceptual understanding to the assistant, I will start to ask the candidate to explain the code in more detail, describe trade-offs, and propose alternatives. This is a light nudge towards ensuring that they grasp what they are suggesting.
  • If the candidate can't give a good account of what their code does, it sadly counts against them in my assessment.

In an interview, we learn very little by watching a candidate do the busywork of looking up API details; we learn even less by watching them ask a model how to solve a problem at the macro scale. This post describes our approach to making Elicit interviews realistic and informative: both for us and for the candidate!

Does this approach sound interesting?

If you want to show us how you use coding assistants, check our open roles!