This is Part 3 of my series on hiring in tech, in which I present the high-level outline of the hiring process that I recommend for software engineering hires. Although you can read this article on its own, I strongly recommend reading Parts 1 and 2 first as they provide valuable context.
In Part 1 I provided a few general tips for how to think about, design, and conduct an interview process for tech roles more generally.
In Part 2 I discussed four hot-button issues and controversies in the software engineering hiring process.
Without further ado, my suggested hiring process! Note that I am assuming that you are following the general best practices highlighted in Part 1, such as writing a proper job description and interviewer assessment rubrics.
This is the one part of the hiring process where you attempt to observe the engineer interacting with a codebase in a way that resembles what they may do in their job day to day.
In my experience, there are two reasonable ways to assess pragmatic coding skills:
The mock code review. This is not a very well-known assessment technique, but I think it’s brilliant and we adopted it to good effect at Airtasker. Code review is an executive / meta function of coding. It is impossible to be a good code reviewer without being a good coder, so this is a great test. It is also very time efficient and ideally suited to being a take-home style challenge in cases where a screener is necessary.
Pair programming. If you really want to see the candidate actually write real code in a real text editor, some sort of pair programming exercise is the best choice. Asking them to write something from scratch is inefficient and not a good gauge, so you will want to ask the candidate to add a feature to / refactor some existing code.
A drawback that is shared by both these options is that they require a reasonable amount of preparation on the part of the hiring organisation: you need to have a suitable codebase for the candidate to work from, and in the case of the code review exercise, the PR to review as well. Them’s the breaks. If you aren’t able to do the work, I would actually recommend skipping this phase of the process rather than wasting the candidate’s time by making them build something from scratch.
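To make the mock code review concrete: the PR you prepare should contain a small, readable change with a handful of deliberately planted issues for the candidate to find. The function below is a purely invented illustration (the names, the domain, and the bugs are all assumptions of this sketch, not from any real exercise):

```python
# Hypothetical excerpt from a mock-review PR. The candidate's job is to
# spot the planted issues, not to rewrite the function.

def top_customers(orders, n=5):
    """Return the names of the n customers with the highest total spend."""
    totals = {}
    for order in orders:
        # Planted issue 1: raises KeyError on a customer's first order;
        # should be totals.get(order["customer"], 0) + order["amount"].
        totals[order["customer"]] += order["amount"]
    # Planted issue 2: sorts ascending, so the slice below actually
    # returns the LOWEST spenders, not the highest.
    ranked = sorted(totals.items(), key=lambda kv: kv[1])
    # Planted issue 3: n is never validated; a negative n silently
    # drops entries instead of raising an error.
    return [name for name, _ in ranked[:n]]
```

A good candidate will catch the correctness bugs, comment on the missing validation, and perhaps suggest tests; how they phrase that feedback tells you as much as the bugs they find.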
I covered this in Part 2. Ideally this is conducted as a whiteboard-based exercise or a remote equivalent. Start by presenting a problem (e.g. “keep track of the five largest numbers in an unordered list”) and have the candidate discuss it, think out loud, and sketch ideas on the whiteboard. This should feel conversational; feel free to interject with questions and prompts. The idea is that the candidate should figure out an algorithmic solution to the problem before they start writing code, as this is generally easier and less error-prone. I generally encourage interviewers to allow the candidate to get to a more-or-less working solution, even if it is far from optimal, and to tell the candidate as much. Once they have something that works, that is the time to challenge it on efficiency, edge cases, and so on.
Some candidates want to jump straight to coding. You should strongly discourage, but not prohibit, doing so. A small minority of people really do think best in code. If after you caution them that it may not be advisable they still want to jump straight in, let them.
The coding component can be done in an editor or handwritten on the whiteboard. Whiteboard coding has gotten a bad rap in the past five years, unfairly in my opinion, as many critiques misunderstand the purpose of the exercise. What you are looking for here is the ability to express an algorithm as code, which is to say clarity of thought and expression. This code shouldn’t need to compile; boilerplate is unnecessary; personally I don’t even care if there are syntax errors or misremembered library function names. Fixing syntax errors is trivial; clearly expressing an algorithm as code is not. Make sure you’re assessing the right things.
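For the example problem above (tracking the five largest numbers in an unordered list), one reasonable solution the interviewer can keep in mind uses a bounded min-heap; a sketch in Python:

```python
import heapq

def five_largest(numbers, k=5):
    """Track the k largest values seen in an unordered stream.

    A size-k min-heap keeps the smallest of the current top-k at the
    root, so each new number is compared against it in O(log k).
    """
    heap = []  # min-heap holding at most k elements
    for n in numbers:
        if len(heap) < k:
            heapq.heappush(heap, n)
        elif n > heap[0]:
            heapq.heapreplace(heap, n)  # pop the smallest, push n
    return sorted(heap, reverse=True)

five_largest([3, 41, 7, 2, 96, 15, 8, 23])  # [96, 41, 23, 15, 8]
```

A candidate who first reaches for “sort the whole list and take the top five” has a working solution; the heap version is the kind of efficiency refinement worth probing for once the basic approach works.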
An architectural system design interview is, like the algorithmic interview, a whiteboarding exercise. The main difference is that it takes place at a much higher level of abstraction. The questions should be designed to be open ended; the candidate’s ability to navigate an ambiguous and underspecified space is part of what is being tested here.
Typical architectural system design questions include:
Design a backend for a Twitter-like service
Design a service that aggregates news from multiple sources
Design a system that creates SEO pages for a local listings site
The aim of this interview is to determine how well the candidate can navigate an ambiguous space, set down their assumptions, make conscious and well-considered tradeoffs, identify risks and challenges, and break down a large problem into smaller components. There should be no expectation that the candidate make highly specific infrastructure choices or go into detail about the architecture.
As an example, for the Twitter question above, you would ideally look for the candidate to cover the following:
State their assumptions around volume of usage and traffic.
Consider the API primitives.
Discuss the shape of the follower graph and its infrastructural implications.
Identify the challenges of “celebrity” accounts on the system architecture.
Suggest a very high-level database schema that takes account of the above.
Talk about the data layer: SQL vs. NoSQL tradeoffs, sharding, caching, etc.
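The “celebrity accounts” point above is worth unpacking, since it is where many candidates shine or stumble. A common answer is a hybrid timeline: fan out writes to followers’ inboxes for ordinary accounts, but store celebrity posts once and merge them in at read time. The sketch below is an illustrative toy (the threshold, names, and in-memory structures are all assumptions, standing in for real datastores):

```python
CELEBRITY_THRESHOLD = 10_000  # follower count above which we stop fanning out

followers = {}        # author -> set of follower ids
inboxes = {}          # user -> list of tweet ids pushed at write time
celebrity_posts = {}  # author -> list of tweet ids, pulled at read time

def post(author, tweet_id):
    fans = followers.get(author, set())
    if len(fans) >= CELEBRITY_THRESHOLD:
        # Fan-out on read: store once; merge into timelines when read.
        celebrity_posts.setdefault(author, []).append(tweet_id)
    else:
        # Fan-out on write: push into every follower's inbox now.
        for f in fans:
            inboxes.setdefault(f, []).append(tweet_id)

def timeline(user, following):
    tweets = list(inboxes.get(user, []))
    for author in following:
        tweets.extend(celebrity_posts.get(author, []))
    return tweets
```

A candidate who can articulate why pure fan-out-on-write collapses for an account with millions of followers, and what the read-time merge costs instead, is demonstrating exactly the tradeoff thinking this interview is after.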
The output of the interview is often a high-level “boxes and arrows” architecture diagram, but this really is a case of the journey being more important than the destination. As an interviewer, it is important to take thorough notes and to guide the interview to explore the candidate’s skills. This should really feel like a collaboration: two peers standing at a whiteboard, nutting out a problem.
A couple of notes on architectural system design interviews:
Traditional system design questions tend to have a backend bias. Where the candidate is more experienced in client software, you may wish to come up with a specific set of system design questions that relate to frontend matters.
System design skills become more important the more senior a candidate is. Some organisations don’t subject junior candidates to an architectural system design interview; I prefer to include it, but to make sure that the hiring decision-making process weights the importance of the interview appropriately relative to seniority.
Structured behavioural interviews are a type of interview where the interviewers follow a script of pre-determined questions (that’s the structured bit), generally about how the candidate has acted or would act in certain situations (that’s the behavioural bit). Structured behavioural interviews tend not to flow as well or feel as comfortable as some of the more interactive interview formats, but there is decent research to suggest that they are more predictive of job performance than many other types of interviews.
For engineering hiring, a structured behavioural interview can be used to assess or supplement the assessment of so-called “soft” skills and attributes such as communication, leadership, teamwork, and values. Do not neglect these skills and attributes: not only is there nothing soft about them, but I would argue that they are just as important as pure technical skills (and in some ways more so) in enabling a software engineer to succeed as part of a team and an organisation.
All but the most junior candidates should have time with a senior leader in the engineering organisation, at the Director or VP level (or equivalent). This is a semi-informal interview in a behavioural style, but its more important function is as a sell and a close. It’s a candidate’s market, and this is an important part of getting the candidate bought in and excited about your company: its mission, its potential, and its organisational structure.
I covered this in Part 1: I believe it is important that each interviewer is provided with a specific assessment rubric, that they provide written feedback before discussing the candidate with anybody else, and that there is a decision-making body (which may or may not include the interviewers) whose job it is to make an overall determination about whether to proceed with an offer. This is, in many ways, the most important part of the whole process. It doesn’t matter how good your interviewing process is if the final decision-making is ad hoc and biased.
It’s a hot market out there, and good engineers are in short supply. Saying “no” to someone is hard, but it’s an important part of making your organisation great. A fair but rigorous hiring process will enhance your brand in the market amongst the people who count: good engineers who want to work with other good engineers, in a place where engineering is valued.