When I spoke at PHPNW10 in October, I talked about teams, skills and the importance of benchmarking things in order to illustrate improvements. If you didn't see the talk, the video and slides are linked off the conference site. In particular I talked about the importance of analysing skills in a team, then improving them, then analysing again to see how things had changed. This post is about that process, some techniques that might apply, and what to do with the results when you get them.
The really important part is NOT measuring skills accurately - the most critical part of this process, and the hardest to get right, is deciding which skills to measure and at what level of detail. In general, most teams will need around 30 skills, ranging across the technologies in use, allied tools, and soft skills such as writing. To get a good outcome, I recommend putting together an initial list (it is always easier to criticise and build on something that exists!) and circulating it. Be prepared to go through at least two iterations before you have something you can start to work with.
There are a few ways to analyse skills. The fastest way is to have the manager or lead developer rate each developer on his or her skills, on a scale of 1-5. A longer-winded but more effective way is to ask developers to rate each other. Developers inherently understand one another's strengths and weaknesses in a way that someone in authority never can. You can pass around a sheet and ask everyone to fill it in, or go a step further and use a favourite technique of mine, 360 Degree Feedback (links to a previous blog post with more information about this approach).
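However the ratings are gathered, the raw data is a set of 1-5 scores per rater, which then need combining into one score per person per skill. As a minimal sketch - the names, skills, scores, and the `combined_scores` helper are all invented for illustration, not part of the original process - averaging peer ratings might look like this:

```python
# Sketch of combining peer ratings into one score per (person, skill).
# All names and scores here are illustrative.
from statistics import mean

# ratings[rater][person][skill] = score on the 1-5 scale
ratings = {
    "alice": {"bob": {"PHP": 4, "MySQL": 2}},
    "carol": {"bob": {"PHP": 5, "MySQL": 3}},
}

def combined_scores(ratings):
    """Average every rater's scores for each (person, skill) pair."""
    totals = {}  # (person, skill) -> list of scores from all raters
    for scores_by_person in ratings.values():
        for person, skills in scores_by_person.items():
            for skill, score in skills.items():
                totals.setdefault((person, skill), []).append(score)
    return {key: round(mean(vals), 1) for key, vals in totals.items()}

print(combined_scores(ratings))
# -> {('bob', 'PHP'): 4.5, ('bob', 'MySQL'): 2.5}
```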
However you collect your data, you should end up with a grid containing a list of people down one side, and skills along the top, something like this:
I like to use styling rules in OpenOffice to colour the cells according to their scores; this makes it easy to spot the "dark" patches - the places where knowledge is lacking - which is what you can see here.
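Outside a spreadsheet, the same "dark patch" check can be sketched in a few lines of code. The grid, the threshold, and the `dark_patches` helper below are illustrative assumptions, not anything from the original post:

```python
# Illustrative skills grid: people down one side, skills along the top.
# All names and scores are invented for the example.
grid = {
    "Ann": {"PHP": 5, "MySQL": 4, "Writing": 2},
    "Ben": {"PHP": 3, "MySQL": 1, "Writing": 4},
}

def dark_patches(grid, threshold=2):
    """Return the (person, skill) cells at or below the threshold -
    the cells a spreadsheet styling rule would shade dark."""
    return [(person, skill)
            for person, scores in grid.items()
            for skill, score in scores.items()
            if score <= threshold]

print(dark_patches(grid))
# -> [('Ann', 'Writing'), ('Ben', 'MySQL')]
```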
The aim of a skills improvement programme is not to bring everyone up to fives on every topic so that the whole grid is bright. Team members will always have differing skill sets, interests and inclinations, and as long as they combine to produce a good overall outcome, we have all we need! What teams do need to look for is knowledge concentrated in one person, or a small handful of people, who then become a bottleneck for projects. We need "bright" spots in the areas we use the most, so that we can schedule the team more easily. This is why it is so important to begin by identifying what this picture ought to look like, and then see how closely reality matches it.
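That bottleneck check can also be sketched in code: flag any skill where at most one person is strong. Again, the grid, the "strong" threshold of 4, and the `bottlenecks` helper are assumptions made for the example:

```python
# Sketch of spotting bottleneck skills: ones where only one person,
# or nobody at all, is strong. Names and scores are illustrative.
grid = {
    "Ann": {"PHP": 5, "MySQL": 4, "Deployment": 5},
    "Ben": {"PHP": 4, "MySQL": 1, "Deployment": 2},
    "Cat": {"PHP": 3, "MySQL": 2, "Deployment": 1},
}

def bottlenecks(grid, strong=4):
    """Return skills held by at most one person at the 'strong' level,
    mapped to whoever holds them (possibly nobody)."""
    skills = next(iter(grid.values())).keys()
    result = {}
    for skill in skills:
        holders = [p for p, scores in grid.items() if scores[skill] >= strong]
        if len(holders) <= 1:
            result[skill] = holders
    return result

print(bottlenecks(grid))
# -> {'MySQL': ['Ann'], 'Deployment': ['Ann']}
```

Here both MySQL and Deployment rest entirely on one person, which is exactly the kind of scheduling risk the grid is meant to surface.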
Rinse and Repeat
It is a disputed point whether this type of skills analysis should be linked to performance reviews - personally I think it is a good big-picture tool for teams, and perhaps might feed into individual personal development targets, but that it isn't particularly helpful as a performance measurement for individuals. Skills analysis is something that can and should be done fairly regularly - ideally quarterly, although twice a year is probably enough for a stable team.