Peer Review Relevance and Fade

Just finished conducting a short course on peer reviews, inspections and walkthroughs.  A very interesting question was asked about the “quality fade” effect on reviews.

This is a summary of how I responded:

After peer reviews, inspections, walkthroughs (and various other types of review) have been in place for some time (generally more than a year), their effectiveness tends to wane.  (A similar effect, called “quality fade”, has recently been written about in the context of factories.)

Reviews can be a funny type of beast – they are the bane of many developers’ existence and a useful tool for others.

They are a vital weapon in our arsenal to fight poor-quality systems.  Yet many software engineering organisations see them as a “nice to have” or “old fashioned”.  Management often love them because they don’t think they cost much (though they often do cost, and make up for it later) and because they give management insight into development capability (and the lack thereof).  They can also easily be misused, e.g. to highlight and embarrass specific developers.
A counter, if your management team wants to use them like this, is to make it well known that if peer reviews are the only place we can identify poor performers, then it’s pretty obvious management don’t have very useful oversight of the development process.

But back to the relevance of reviews.  They are good for:

  • Identifying defects
  • Identifying improvements
  • Communicating scope and issues between team members and others
  • Building organisational capability and insight
  • Training new staff in domain and process expectations
  • Enabling authors to gain different perspectives
  • Enabling corporate knowledge to be captured and built into checklists

It is this last point where a key mistake is often made.  If checklists are used, they need to be kept up to date.  Without periodic improvement they can come to be seen as outdated, irrelevant, and time wasters.

Someone needs to be made responsible for keeping the checklists readily available and for updating (even if infrequently) the checklists and data used in peer reviews.
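That responsibility can be made mechanical rather than left to memory.  As an illustrative sketch only (the checklist names and the one-year threshold are my assumptions, not anything prescribed above), a small script can flag review checklists that haven’t been revised recently, so whoever owns them knows which ones are at risk of being seen as outdated:

```python
from datetime import date, timedelta

# Hypothetical registry: checklist name -> date it was last revised.
# In practice this might come from file modification times or a wiki's
# page history rather than a hard-coded dictionary.
CHECKLISTS = {
    "code-review-checklist.md": date(2024, 1, 15),
    "design-review-checklist.md": date(2021, 6, 2),
}

def stale_checklists(registry, today, max_age_days=365):
    """Return the names of checklists not revised within max_age_days."""
    cutoff = today - timedelta(days=max_age_days)
    return sorted(name for name, revised in registry.items()
                  if revised < cutoff)

if __name__ == "__main__":
    for name in stale_checklists(CHECKLISTS, date.today()):
        print(f"Review checklist needs attention: {name}")
```

Run periodically (say, from a scheduled job), this gives the checklist owner a standing nudge instead of relying on someone remembering to look.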

More tomorrow…

This entry was posted in CMMI, defects, improvement, metrics, peer reviews, people, process improvement, software quality.
