Designing effective reviews for software development teams is a particularly challenging task. It entails providing actionable feedback to specialized roles, such as software engineers, UX/UI designers and product managers. Too often companies rely on the same generic review process for all employees. This is generally ineffective, as people in technical roles require in-depth feedback on the technical skills they need to succeed in their positions.
Over the years managing software development teams, we have experimented with and evolved several technical review processes. Below are some of the ideas and strategies we have found most effective.
In many companies the review process is centrally controlled, usually by the HR department. For the sake of simplicity, HR departments often apply the same generic review process to the entire organization, collecting unstructured feedback with a small number of open-ended prompts such as ‘List Jane’s strengths’ or, even worse, ‘How do you think Jane did this quarter?’. These types of reviews are ineffective at evaluating technical roles.
In our experience, the best feedback is generated by reviews that focus closely on the skills people need to succeed in their particular position and circumstances. These skills vary greatly by role, seniority and organization. They may also evolve over time within a specific company, as the requirements for success change.
The more you can target review content for each subject’s particular role and situation, the more effective the process will be.
For software engineers that may include reviewing specific skills like problem solving in code, code structure and technical definition, whereas for product managers you might review skills such as product design and the ability to communicate a compelling product vision. Structuring review content and tailoring it to the review subject makes it far easier for reviewers to respond to questions: they'll know exactly which aspects and skills to evaluate. It also means that extracting meaningful feedback at the end of the review process is quick and easy.
Multiple choice questions are great for providing structure and focus in a review. That said, they suffer from a couple of problems. The first is that they don’t allow people to provide additional feedback beyond the structured response. Many people like to qualify their responses or provide ideas for development. This content is particularly useful when providing feedback at the end of a review.
Another issue with multiple choice questions is that they can be too easy to complete: reviewers may speed through a review without giving enough thought to their responses. To avoid these issues we’ve found it best to intersperse sets of structured (multiple-choice) questions with some unstructured open-text inputs. We generally structure review question sets as 3-5 skill-specific multiple choice questions followed by one optional free-text input, with a prompt like ‘Please provide additional comments as rationale for your answers above.’
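To make the structure concrete, here is a minimal sketch of how such a question set might be modeled in code. The class names, skills and rating labels are illustrative assumptions, not part of any real review tool; the only constraint taken from the text is the 3-5 multiple-choice questions plus one optional free-text prompt.

```python
from dataclasses import dataclass

@dataclass
class MultipleChoiceQuestion:
    skill: str          # e.g. "code structure" (illustrative skill names)
    prompt: str
    options: list[str]  # rating-scale labels shown to the reviewer

@dataclass
class QuestionSet:
    role: str
    questions: list[MultipleChoiceQuestion]
    # Optional free-text input that closes each set of structured questions.
    free_text_prompt: str = (
        "Please provide additional comments as rationale for your answers above."
    )

    def __post_init__(self) -> None:
        # Enforce the 3-5 structured questions described above.
        if not 3 <= len(self.questions) <= 5:
            raise ValueError("A question set should have 3-5 multiple-choice questions.")

SCALE = ["Poor", "Fair", "Good", "Excellent"]

# Example set for a software engineer review (questions are hypothetical).
engineer_set = QuestionSet(
    role="software engineer",
    questions=[
        MultipleChoiceQuestion("problem solving in code",
                               "How effectively does the subject solve problems in code?", SCALE),
        MultipleChoiceQuestion("code structure",
                               "How well structured is the subject's code?", SCALE),
        MultipleChoiceQuestion("technical definition",
                               "How clearly does the subject define technical work?", SCALE),
    ],
)
```

Keeping the question set as data like this also makes it easy to swap in a different set per role without changing the review workflow itself.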
Incidentally, we’ve found the amount of optional additional content provided by a reviewer to be a good indicator of how strongly they feel about their feedback. If someone gives particularly strong feedback in their multiple choice responses but doesn’t provide any rationale, it may well indicate that they aren't that confident in their analysis, whereas if they have taken the time to write a more detailed response, they likely feel far more strongly about the issue.
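This signal is easy to surface automatically. Below is a hypothetical heuristic, not a rule from the original process: the thresholds and labels are assumptions chosen for illustration.

```python
def feedback_confidence(rationale: str) -> str:
    """Rough proxy for how strongly a reviewer feels, based on how much
    optional free-text rationale they wrote. Thresholds are illustrative."""
    words = len(rationale.split())
    if words == 0:
        return "low"       # strong scores with no rationale: weigh with care
    if words < 30:
        return "moderate"
    return "high"
```

A flag like this can help whoever compiles the final feedback decide which multiple-choice scores deserve a follow-up conversation before being taken at face value.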
This is an obvious point but worth mentioning anyway: reviews that include peer feedback are generally better than those with only direct manager/report feedback. They take a bit more time, but the wider perspective, and the additional weight people give to feedback from multiple sources, are significant benefits. To ensure a wide range of feedback, it's important to select reviewing peers carefully.
360-degree reviews include many more participants and as a result are more time-consuming. To minimize this cost while still providing regular feedback, we like alternating full 360-degree reviews with lighter manager reviews roughly every three months. For example:
January - Full 360-degree review including three peer reviewers per review subject
April - Smaller review with managers and reports reviewing each other
June - Another full 360-degree review
September - Smaller review with managers and reports reviewing each other
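The alternating cadence above can be sketched in a few lines. This is a toy illustration, assuming the simple rule that even-numbered cycles are full 360s and odd-numbered cycles are manager/report reviews; the month names are taken from the example schedule.

```python
def review_type(cycle_index: int) -> str:
    """Even cycles are full 360s; odd cycles are manager/report only."""
    return "full 360-degree" if cycle_index % 2 == 0 else "manager/report"

schedule = ["January", "April", "June", "September"]
plan = {month: review_type(i) for i, month in enumerate(schedule)}
# plan == {"January": "full 360-degree", "April": "manager/report",
#          "June": "full 360-degree", "September": "manager/report"}
```

Encoding the rule once means the cadence stays consistent even if the calendar shifts from year to year.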
Technical skills are notoriously difficult to evaluate objectively. The perception that a review has not been fair generally results in people not taking feedback seriously and feeling demotivated by the process.
To ensure that reviews are as objective as possible, it's important to take a step back and review the actual work someone has completed over time. Ask questions like ‘What were the person's real achievements relative to the goals they set over the period?’ and ‘Were they well equipped and in a position to deliver effectively on their goals?’
This goals review step can be incorporated into the core review process, or it can be a separate process altogether. Managing technical goals and OKRs is a lengthy topic that we'll cover in more detail at a later stage. For now, I'll just say that it's an essential part of reviewing any product development team.
360 reviews are a good way to provide people with feedback on how they are doing but they must be well designed to do so effectively. Providing meaningful feedback to people in technical roles is particularly challenging. It’s important to tailor reviews to the roles and skills relevant to success in your company. Doing so will ensure that reviews run smoothly and deliver actionable feedback.