The check sheet is, by far, the easiest report to produce.  The only thing required is an Excel-type application with columns and rows.  The column headers hold the facts of the dimension you are tracking.  This fact is usually something like a day of the week, a week of the month or another time-based milestone, but it doesn’t have to be.  The rows are the defect types you are tracking.  So, as an example, let’s say we are tracking defects on a web application that’s under development.

The defects we are tracking may be something like this:

  • Misspelled Words
  • JavaScript Errors
  • Wrong Colors
  • Page Not Found (404)
  • Application Errors (500)

Now, obviously, you could add to this list and, perhaps, even break it down to be a little more specific.  But, for our example, this list is sufficient.  Our reporting facts will be the days of the week.

Based on this, our check sheet report may look something like this:

[Check sheet image: defect counts tallied by day of week]
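The tallying behind such a check sheet can be sketched in a few lines of Python.  The defect log below is hypothetical, but the grid it produces has the same shape as the report: one row per defect type, one column per day, plus a total.

```python
from collections import defaultdict

# Hypothetical defect log: (day, defect_type) pairs, recorded as a
# tester would tick marks on a paper check sheet.
defect_log = [
    ("Mon", "JavaScript Errors"), ("Mon", "Misspelled Words"),
    ("Tue", "JavaScript Errors"), ("Wed", "JavaScript Errors"),
    ("Wed", "Misspelled Words"), ("Wed", "Wrong Colors"),
    ("Thu", "Page Not Found (404)"), ("Fri", "Application Errors (500)"),
]

DAYS = ["Mon", "Tue", "Wed", "Thu", "Fri"]

def build_check_sheet(log):
    """Tally the log into a {defect_type: {day: count}} grid."""
    sheet = defaultdict(lambda: defaultdict(int))
    for day, defect in log:
        sheet[defect][day] += 1
    return sheet

def print_check_sheet(sheet):
    """Render the grid as rows of counts with a per-defect total."""
    print("Defect Type".ljust(26) + "".join(d.rjust(5) for d in DAYS) + "Total".rjust(7))
    for defect, counts in sheet.items():
        row = defect.ljust(26)
        row += "".join(str(counts.get(d, 0)).rjust(5) for d in DAYS)
        row += str(sum(counts.values())).rjust(7)
        print(row)

sheet = build_check_sheet(defect_log)
print_check_sheet(sheet)
```

In practice the "spreadsheet" really is enough; the point of the sketch is only that a check sheet is a simple two-dimensional tally.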

From this report, we can gain quite a bit of insight.  First, we can see which defect types have the highest counts and which have the lowest.  By understanding this, we can have a discussion about why JavaScript has so many errors and explore using tools like unit tests and automated testing.  Additionally, we may want to investigate further why there are so many misspelled words on the web site – why isn’t the team proofreading adequately or using a spellchecker?  As I mentioned in the main post, these reports are great conversation starters.

Next, by examining the facts (i.e. the header columns), we see which days are more prone to defects.  In this case, we see that Wednesday has the highest number of defects, followed by Monday, then Tuesday.  This leads us to ask the question, why does Wednesday have so many defects compared to Thursday or Friday?  If, like many companies, we have deployments into QA once a week, say Monday or Wednesday, this may make sense.  If we are engaged in continuous integration, there may be other issues contributing to the high defect rate.  Again, it may be a great time to have a conversation.
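This day-by-day reading is just the column totals of the check sheet, ranked.  A quick sketch, with hypothetical counts chosen to follow the same ordering described above (Wednesday highest, then Monday, then Tuesday):

```python
from collections import Counter

# Hypothetical per-day totals summed from a week's check sheet columns.
daily_counts = Counter({"Mon": 6, "Tue": 5, "Wed": 8, "Thu": 2, "Fri": 2})

# Rank days by defect count to spot which days deserve a conversation.
for day, count in daily_counts.most_common():
    print(f"{day}: {count}")
```

If the top day lines up with a weekly QA deployment day, that is the conversation starter.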

We can also use the above table to help direct our QA efforts.  If we see that JavaScript errors and misspelled words have the highest numbers of defects, while our CSS (wrong colors) has very few defects, we can focus our QA efforts more on testing client-side functionality vs. branding and styles.  According to the check sheet, 81% of our testing effort should be centered around JavaScript and copy, while only 4% should be focused on clicking links and testing whether pages actually exist.  (Obviously, 3 application errors (13%) for a web application is probably not realistic, but this still gives you an example.)

Finally, from a team management perspective, this chart helps us determine human resource requirements and allocation.  From the report, we may determine that the development team might benefit from a more senior front-end developer and an experienced copywriter.  By investing in specific human resources, we can minimize initial defects in our application, thus reducing code churn, delivering the product sooner and, possibly, even increasing sales.

Sample Reports

  1. Ishikawa (“fishbone”) Diagram
  2. Check Sheet
  3. Stratification (alternatively, flowchart or run chart)
  4. Control Chart
  5. Histogram
  6. Pareto Chart
  7. Scatter Diagram