I noticed a tweet by Lisa Crispin in which she said she had commented on this article: http://blog.btdconf.com/?p=43. I may not agree with everything in the article, but it makes some interesting points that I may come back to at a later date. I still have not seen that comment from Lisa yet...
However, I did notice a comment by Capers Jones, which I have reproduced below:
December 29, 2013 at 10:01 am
This is a good general article but a few other points are significant too. There are several metrics that violate standard economics and distort reality so much I regard them as professional malpractice:

1. Cost per defect penalizes quality, and the whole urban legend about costs going up more than 100 fold is false.
2. Lines of code penalize high level languages and make non coding work invisible.

The most useful metrics for actually showing software economic results are:

1. Function points for normalization.
2. Activity-based costs using at least 10 activities such as requirements, design, coding, inspections, testing, documentation, quality assurance, change control, deployment, and management.

The most useful quality metric is defect removal efficiency (DRE), or the percentage of bugs found before release, measured against user-reported bugs after 90 days. If a development team finds 90 bugs and users report 10 bugs, DRE is 90%. The average is just over 85%, but top projects using inspections, static analysis, and formal testing can hit 99%. Agile projects average about 92% DRE. Most forms of testing, such as unit test and function test, are only about 35% efficient and find one bug out of three. This is why testing alone is not sufficient to achieve high quality.

Some metrics have ISO standards, such as function points, and are pretty much the same, although there are way too many variants. Other metrics, such as story points, are not standardized and vary by over 400%. A deeper problem than metrics is the fact that historical data based on design, code, and unit test (DCUT) is only 37% complete. Quality data that only starts with testing is less than 50% complete. Technical debt is only about 17% complete since it leaves out cancelled projects and also the costs of litigation for poor quality.
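Capers's DRE formula is simple enough to sketch. The following Python snippet is my own illustrative code (not from the article or his comment); it just reproduces his 90/10 example:

```python
def dre(found_before_release, found_by_users):
    """Defect Removal Efficiency: the percentage of all known
    defects that were caught before the software shipped."""
    total = found_before_release + found_by_users
    return 100.0 * found_before_release / total

# Capers's example: the team finds 90 bugs, users report 10.
print(dre(90, 10))  # 90.0
```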
My problem is with the use of this metric to measure software quality; I feel it is useless because it is too easy to game. Once people (or a company) are aware that they are being measured, their behaviour adjusts on a psychological level (intentionally or not) so that they look good against whatever they are being measured on.
So, using the example given by Capers: say a company wants its DRE to look good, so it rewards its testing teams for finding defects, no matter how trivial, and they end up finding 1,000; the customers still only find 10.
Using those numbers, the company can record a DRE of 99.0099%. WOW - that looks good for the company.
Now let us say they really, really want to game the system and report 1,000,000 defects against the customers' 10 - this now becomes a worthless way to measure the quality of the software.
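To put numbers on the gaming problem, here is a small Python sketch (illustrative only) showing how inflating the internal defect count drives DRE towards 100%, even though users see exactly the same 10 bugs in every scenario:

```python
def dre(internal_defects, user_reported):
    """DRE as a percentage of all known defects found internally."""
    return 100.0 * internal_defects / (internal_defects + user_reported)

# The same 10 user-reported bugs, with ever more trivial
# internal defects logged to pad the numerator:
print(round(dre(90, 10), 4))         # 90.0
print(round(dre(1_000, 10), 4))      # 99.0099
print(round(dre(1_000_000, 10), 4))  # 99.999
```

The metric rewards logging defects, not shipping fewer of them.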
It also does not take into account the defects that still exist and are never found. How long do you wait before you can say your DRE is accurate? What if the client finds another 100 defects six months, or a year, later? What if the company's own testing finds another 100 after the release to the client - how is that included in the DRE percentage?
As any tester would ask:
IS THERE A PROBLEM HERE?

This form of measurement does not take into account the technical debt of dealing with an ever-growing list of defects. Measuring with such a metric is flawed by design, never mind the other activities with hidden costs mentioned by Capers, which will quickly spiral. By the time you have adhered to all of this, in the current marketplace of rapid delivery (DevOps) your competitors have prototyped, shown it to the client, adapted it to meet what the customer wants, and released it - implementing changes to the product as the customer desires rather than focusing on numbers that quickly become irrelevant.
At the same time, I question the numbers quoted by Capers, such as:
- Most forms of testing, such as unit test and function test, are only about 35% efficient and find one bug out of three.
- Other metrics such as story points are not standardized and vary by over 400%
- Quality data that only starts with testing is less than 50% complete.
- Plus others
Where are the sources for all these metrics? Are they independently verified?
Maybe I am a little more cynical about quoted numbers, especially after reading "The Leprechauns of Software Engineering" by Laurent Bossavit. Without any attribution for the numbers, I question their value.
To finish this little rant, I would like to use a quote commonly attributed to Albert Einstein:
"Not everything that can be counted counts, and not everything that counts can be counted."