Most technical interviews assess an engineer's ability to prepare for traditional algorithm problems, not their ability to do the actual job. At Byteboard, we are obsessed with creating assessments that provide a useful, fair, and accurate report of a candidate’s fit for a role, regardless of their background. So, how do we do that?
No one likes traditional technical interviews. Even their staunchest defenders acknowledge that they are painful experiences that do not reflect the day-to-day of a software engineer. We’ve already written about why this needs to change, but the tl;dr is this: traditional interviews make it harder for candidates to show what they can do and harder for employers to assess skills and knowledge.
The biggest indicator that coding interviews aren’t measuring the right thing is that qualified candidates have to spend so much time preparing for them. Candidates spend hundreds of hours studying material that “teaches to the test,” not to the actual job they’re interviewing for. And studies find that even that preparation isn’t what determines success in technical interviews. Traditional whiteboard interviews reliably measure performance anxiety, and that’s about it.
The stakes of a bad interview experience are high for both companies and candidates. Bad interviews mean an inefficient process that drains engineering time and leaves talent on the table.
Candidates hate them. Employers don’t get what they need from them. It’s time for something better.
We’ve identified four traits of a truly fair and useful assessment: holistic, realistic, consistent, and strengths-based.
Interviews built on theoretical questions are good for assessing a candidate’s familiarity with advanced data structures and algorithms, but that’s about it. Those skills are important, for sure, but they paint an incomplete picture: many other skills are relevant to the role.
At Byteboard, we recognize that those broader skills are often more important than whether the candidate can implement Dijkstra’s algorithm in a pinch. Obviously, we need to know how well they can write code, but what other skills do they need to be successful?
We surveyed and interviewed hundreds of working engineers across company sizes and experience levels to figure out what matters beyond the algorithms. From there, we identified a list of core competencies our assessment would need to account for. These competencies include technical skills like code composition, interpersonal skills like the ability to ask the right questions, and qualities like growth mindset and grit. Together, they paint a fuller picture of who the candidate is as an engineer: what they would be like to work with and the full breadth of what they’d be able to contribute to the team.
We always aim to build assessments that only ask candidates to perform the sorts of tasks real software engineers do every day. To do this, we looked at how each of the core competencies we identified is demonstrated in a typical work day and created an assessment that parallels those activities.
Our assessment asks candidates to do things like read through design documents and add their thoughts, implement new features in existing codebases, and make decisions in ambiguous situations where there is no one right answer. We try never to ask candidates to do anything they wouldn’t reasonably have to do on the job. In other words, it’s okay if you can’t implement Dijkstra’s algorithm in a pinch!
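For context, this is roughly the kind of textbook exercise a traditional interview rewards. The minimal sketch below is our own illustration in Python (the graph representation and helper names are ours), not a question from the Byteboard assessment:

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from `source` in a weighted graph.

    `graph` maps each node to a list of (neighbor, weight) pairs
    with non-negative weights.
    """
    dist = {source: 0}
    heap = [(0, source)]  # (distance-so-far, node)
    visited = set()

    while heap:
        d, node = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)
        for neighbor, weight in graph.get(node, []):
            new_dist = d + weight
            if new_dist < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_dist
                heapq.heappush(heap, (new_dist, neighbor))
    return dist

# Example:
# dijkstra({"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}, "a")
# -> {"a": 0, "b": 1, "c": 3}
```

A candidate can drill exercises like this for weeks, and it still tells you very little about how they navigate a real codebase or an ambiguous design decision.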
We also talked with educators about how to build real-world assessments, including faculty at the Stanford Center for Learning and Equity, to ensure we were following best practices for extracting signal and limiting bias. And we tested (and continue to test) to make sure that each task we ask a candidate to complete gives us the signal we need for the competency it’s meant to measure.
The resulting assessments are scenario-based tasks that ask candidates to do things like create design documents and navigate existing codebases, surfacing real engineering challenges like the ones they’ll face on the job.
Once the candidate has finished the assessment, we interpret their performance and produce a Skills Report, which is a rundown of which on-the-job skills the candidate demonstrated over the course of taking the assessment, and which ones we thought were lacking.
We rely on human graders to encode the candidate’s performance into a format that can be processed by our in-house rubric system. We keep humans in the process because we believe some aspects of a candidate’s performance are too nuanced to leave to a fully automated system, and we rely on a carefully calibrated rubric system because we believe structured, rigorous rubrics are an essential component of any fair assessment. And by de-identifying candidate materials, we ensure that unconscious bias does not affect a candidate’s evaluation.
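To make the idea of a structured rubric concrete, here is a purely hypothetical sketch of what rubric-based scoring of de-identified work could look like. The competency names, weights, and four-point scale below are our own illustration, not Byteboard’s actual rubric system:

```python
from dataclasses import dataclass

@dataclass
class RubricItem:
    competency: str   # e.g. "code composition", "asking the right questions"
    weight: float     # relative importance of this competency for the role
    score: int        # grader-assigned level on a fixed 0-4 scale

def skills_report(items):
    """Roll per-competency grader scores up into a simple report.

    Returns each competency's normalized score plus a weighted overall score.
    This is an illustrative toy, not Byteboard's actual rubric or scale.
    """
    per_competency = {item.competency: item.score / 4 for item in items}
    total_weight = sum(item.weight for item in items)
    overall = sum(item.weight * item.score / 4 for item in items) / total_weight
    return {"competencies": per_competency, "overall": round(overall, 2)}

# Example: graders score a de-identified submission, tagged only by candidate ID.
report = skills_report([
    RubricItem("code composition", weight=2.0, score=3),
    RubricItem("asking the right questions", weight=1.0, score=4),
    RubricItem("navigating ambiguity", weight=1.0, score=2),
])
# -> {"competencies": {...}, "overall": 0.75}
```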
In addition to measuring what matters, our realistic assessments let candidates work on their own time, and every candidate gets the same experience, which reduces test anxiety.
It’s a better experience for the candidate and lets them “show their work,” so to speak: how they think and communicate, not just the end result. And, the completed assessment results in a body of work that can be graded anonymously, improving equity.
Our rubrics operate under the principle that a good assessment recognizes the diversity of talent: there are many ways for a candidate to be a strong fit for a role.
Our rubric system is designed to view candidates from multiple angles, and keep track of the ways that they shine. That way, candidates can feel confident that their strengths are seen, and hiring managers can feel confident that they’re not excluding candidates who might seem unconventional, but are still a great fit.
The efficacy of Byteboard assessments gets better candidates through the door. In our partnership with Lyft, their onsite-to-offer rate more than doubled, going from 25% pre-Byteboard to 53% with Byteboard. Their team was able to fill 78% of their positions in just two months, an improvement from the 50% they were able to fill with previous assessments. Assessing candidates holistically prior to the on-site makes for a more effective, efficient process for companies and candidates alike.
Companies that partner with Byteboard have seen shorter time-to-hire, more hires from underrepresented groups, and hundreds of hours of engineering time saved. And it’s a better experience for candidates, too: 80% of the candidates in the case study above said they preferred Byteboard to other pre-onsite interviews. Across our platform, candidates give Byteboard an average rating of 4.2 out of 5.
Our goal isn’t just to be better than traditional technical interviews (that’s a low bar to clear). We work to continuously improve our assessments and experience for engineers at all levels, whether they’re looking to fill a position as a hiring manager or for their own next opportunity.
Learn more about how Byteboard can make your hiring process more efficient, effective, and equitable. Request a demo.