As you might have surmised from the previous post, I care a lot about getting the right goals in place for software developers.
There are several reasons for that, and I'll unfold most of them as this post works its way to its conclusion. The first and main principle is already revealed in the title though: fairness.
Goals are not just for the people I manage. They are also for me: they make sure that everyone is assessed fairly, based on their results rather than on personal bias or the most recent memories. They also draw a fair, uniform target that ties rewards and career progression to performance rather than to visibility, luck, and various other subjective factors.
Software engineers: what do they do?
Ok, time to get off the soap box. Let's talk trivialities, and start by asking a semi-rhetorical question. What do software engineers get paid for? Well, they get paid for coding (or, if we hop back on the soap box for a second, they draw their paycheck for constructing software).
So, if we try to be as simplistic as possible, the more software an employee writes in a mythical man-month, at the right quality, the more they should be rewarded. Yes, I know, writing software is not flipping burgers, and we are being way too simplistic here. That will be fixed before this article is over, but for now please hold the indignation: we all have to start somewhere. And, truth be told, there is strong evidence of a 10x difference in productivity among developers. If one developer creates ten times more value than their neighbour, shouldn't there be some kind of reward? So, let's develop the topic of
Productivity
Assuming we answer the previous question with an emphatic 'yes', we have to understand how productivity is to be measured. Now, this is a much taller hurdle. In fact, there is no generally accepted answer: many have struggled to define productivity metrics, and many have failed.
Just counting lines of code produced (the LOC metric) is too easy. LOC varies by language and by the complexity of the task, and rewarding it encourages copy/paste, verbosity, and other bad practices. As an extreme example, imagine a hard defect that is resolved by a single-line change after a couple of days of surgeon-like troubleshooting, and compare it to copy/pasted test harness code. There are many more cases like these two where counting LOC fails, and fails hard.
There are slightly better productivity metrics: story points, function points, and requirements implemented. Unfortunately, better does not mean good. Story points are subjective estimates, and it's not uncommon for one developer to price a task at 5 SPs and another at 13 SPs. Function points and requirements are often too bulky, and don't work well when multiple developers collaborate on the same requirement. As always, we need to be wary of side effects, where engineers, motivated by the speed with which they implement new requirements, eschew maintenance work and unit tests.
To avoid all this nastiness, I've arrived at fluffy comparative goals, which go along these lines:
Complete tasks at a pace consistent with engineers at level X
Yes, I know: very fluffy, not SMART (nor smart, for that matter), and prone to misinterpretation. However, it does provide a talking point if someone does not pull their own weight, or, vice versa, if someone is indeed very productive. Future goals in this article will be better though, promise. For now, let's say that we want to talk about productivity, but not necessarily measure it with fine instruments. With that in mind, we can turn to greener pastures and talk about
Quality
This is an easier one. Defects can be measured, and the more defects created, the lower the quality (yes, Captain Obvious is at work yet again). Obviously, one way to avoid defects is to avoid writing software altogether, but hopefully the productivity goal, even in its vague form, puts a boundary around that.
So how are we going to measure good versus bad? One approach is to make a similar cop-out as with the productivity goal, and say something like this:
Do not create more defects than expected from engineers at level X
This is better than nothing, but I feel we can do better. Sometimes, one bad defect that caused exec-level aggravation is big enough to register, while five minor defects invisible to the naked eye are not. For this reason, we can take it one level further:
Do not create any showstopper defects, nor more than A major and B minor defects, as expected from engineers at level X
This is a bit better, but in some situations we can take it further still. The organizational cost of a defect caught in code review vs. QA vs. by a customer differs by orders of magnitude. In other words, we want to encourage strong review and QA practices, to create better software at lower cost. The main point, though, is that it is not just the manager's job to facilitate that. The developer should be the main stakeholder in catching defects as early as possible. They should be motivated to have a thorough code review, and be happy when a peer finds an issue in their work. They should take an active interest in the QA plans, and appreciate testers who find issues. For many people this is self-evident, but definitely not for everyone. Assuming the point of goals is to encourage particular behaviour, we can go for this:
Do not create any showstopper defects, nor more than A major defects found in code review, B found in QA, and C found by customers, as expected from engineers at level X. Also, do not create more than D minor defects.
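To make the bookkeeping behind such a goal concrete, here's a minimal sketch in Python. Everything in it is hypothetical: the threshold numbers stand in for A, B, C, and D, and the level names are placeholders for whatever your career ladder uses.

```python
# Hypothetical per-level defect thresholds; the real numbers (the A, B, C, D
# from the goal above) would come from your own expectations per level.
THRESHOLDS = {
    "senior": {"code_review": 10, "qa": 5, "customer": 1, "minor": 15},
    "mid":    {"code_review": 15, "qa": 8, "customer": 2, "minor": 20},
}

def meets_quality_goal(level, counts):
    """counts: defect counts for the review period, keyed by
    'showstopper', 'code_review', 'qa', 'customer', and 'minor'."""
    limits = THRESHOLDS[level]
    if counts.get("showstopper", 0) > 0:  # any showstopper fails the goal
        return False
    return all(counts.get(stage, 0) <= limit
               for stage, limit in limits.items())

# Example: a mid-level engineer with no showstoppers and modest counts
print(meets_quality_goal("mid", {"code_review": 12, "qa": 4,
                                 "customer": 1, "minor": 9}))  # True
```

The point is not the specific numbers, but that once the goal is phrased this way, checking it reduces to a handful of counts per stage.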
Maintainability
Before we move off this topic altogether, let me just touch upon an important and slightly controversial point. A defect is not just a case where the product does not behave as per the specification. It is also a case where the code is so brittle that a defect is liable to appear once someone else touches it. Poor encapsulation, 100+ LOC functions, lack of OOD, and absence of test coverage are issues in themselves, even if the code (usually by accident) functions the right way. Hence, I tend to treat those as defects during code reviews. If you disagree, think of a builder who uses hollow bricks here and there in your new house. Sure, the house does not have any defects today, in the strict sense of the word. Is the house well built, though, and wouldn't you like to reflect your opinion in the builder's assessment?
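Some of these maintainability defects can even be flagged mechanically. As an illustration only, here's a sketch (Python 3.8+) that finds one of them, functions over 100 lines; the limit and the idea of surfacing it in review are my assumptions, not a prescribed tool.

```python
import ast
import sys

# Illustrative threshold: flag functions longer than 100 lines as a
# maintainability "defect" worth raising in code review.
MAX_FUNC_LOC = 100

def long_functions(path):
    """Yield (name, length) for every oversized function in a source file."""
    tree = ast.parse(open(path).read())
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            loc = node.end_lineno - node.lineno + 1
            if loc > MAX_FUNC_LOC:
                yield node.name, loc

# Usage: python long_funcs.py some_module.py
for name, loc in long_functions(sys.argv[1]):
    print(f"maintainability defect: {name} is {loc} lines long")
```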
Tracking
Before we race too far though, let's talk about the evil twin of elaborate goals: performance tracking. If we work off a goal that enumerates the number of defects found during code review, minor versus major, and so on, we then need to measure these numbers over the performance period.
The more goals and numbers there are, the more there is to measure, and there's unlikely to be a friendly PA or accountant who will take care of the business. No, it has to be the goals' creator, i.e. you, the manager.
Hence, there are two valid options:
- Intentionally dumb down the goals, and rely more on your subjective opinion and working relationship with the employee when it comes to reviewing.
- Work with approximate numbers as much as possible, but still have SMART definitions.
There's no right or wrong here. Usually, I go for the second option, and do lightweight tracking every week or so, registering any "abnormal" results, i.e. whenever complex work is done to great quality, or vice versa. In the vast majority of cases, this suffices to provide a fair review. On the rare occasion when the reviewee and I perceive the work differently, I might have to dig deeper and pull more accurate numbers for a task or two on demand. Just make sure that your code review system, SCM, and quality system are there to back you up.
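To show just how lightweight this can be, here's a sketch of such a log in Python. The file name, fields, and example entries are all hypothetical; the point is only that a dated note per abnormal result is enough to query come review time.

```python
import csv
from datetime import date

# A deliberately minimal tracking log: one row per "abnormal" result,
# appended during a weekly review pass.
LOG = "goal_tracking.csv"

def register(engineer, note, direction):
    """direction: '+' for notably strong work, '-' for notably weak."""
    with open(LOG, "a", newline="") as f:
        csv.writer(f).writerow([date.today().isoformat(),
                                engineer, direction, note])

# Hypothetical weekly entries
register("alice", "Fixed the replication deadlock; one-line change "
                  "after two days of troubleshooting", "+")
register("bob", "Showstopper regression escaped to QA in the parser", "-")
```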
Proficiency
So far, I've concentrated on the bread and butter of software engineering: creating software. However, this is not always the entire story. Software professionals also contribute by solving problems, reviewing others' code, responding to technical queries, designing solutions, and so on. The higher an engineer is on the technical career ladder, the less they code and the more they deal with the aforementioned activities. This, in turn, means that we need more goals as coders become developers and architects.
However, measuring proficiency again bumps into the lack of metrics (counting the number of technical emails answered, for example, is emphatically not the way to go).
One goal I tend to use goes along these lines:
Be the technical point of contact for components X, Y, and Z
Sometimes this can be extended into collective responsibility, where an engineer is accountable to some degree for all defects in X, Y, and Z. This usually happens further along the career curve, and has to come with both informal and formal authority.
Adjusting
I've spent quite a bit of your and my time talking about what goals should be. However, now I'd like to pop that balloon and again say something controversial. The important part of goals is not what they are, but that they are a living and breathing mechanism.
If the manager reviews the goals every couple of months, checks whether they fairly assess the team's work, and adjusts them as necessary, then they will do a reasonable job, even if the starting point was off the mark. And vice versa: preparing the perfect goals, putting them in the fridge for 12 months, and unfreezing them come review time will, in many cases, yield a slightly spoiled result.
Of course, there are always developers who are happy where they are and do something very similar year after year, but they are usually not the focus of elaborate goal construction. It's those who develop their skillset and have aspirations that are the focus of this article, and for them goals must be constantly revised: same as their careers.
Project-specific goals
One dilemma that came back to me year after year revolved around project-specific goals. For example:
Complete refactoring for the management front-end by Q3, release it to customers by Q4, with fewer than X major regressions
There's something very tempting about these goals: they cascade business goals directly to individuals. At the end of the day, I care less about the amount of code produced or the number of entries in the ticketing system. What I care about is whether customers got the feature on time, and whether they are happy with it, and such goals allow expressing exactly that.
However, they can be far less tempting for the individual, as not all factors are within their control (e.g. the business does not prioritize the feature highly enough, or the right QA skillset isn't present and defects escape). This is why I oscillated between adding and omitting them; at this point, I use them sparingly, with a number of caveats to ensure that they remain achievable and fair.
Summary
As usual, this post is less about providing answers and more about sharing thoughts. I went through a fair amount of trial and error when managing engineers, and that process led me in a particular direction. There's no single size that fits all, and my last piece of advice for today is to use this post as food for thought rather than as a template.