The other day, I met with Burke Cox who heads up Stelligent, a company that specializes in helping sites set up their code-quality infrastructure (build systems, test frameworks, code coverage analysis, continuous integration--the whole works, all on a fixed-price basis). One thing Stelligent does before leaving is to impart some of the best practices they've developed over the years.
A best practice Stelligent uses for determining the number of unit tests to write for a given method struck me as completely original. The number of tests is based on the method's cyclomatic complexity (aka McCabe complexity). This metric starts at 1 and adds 1 for each decision point, that is, for each additional path the code can take. Many tools today generate cyclomatic complexity counts for methods. Stelligent's rule of thumb is:
- complexity 1-3: no test (the method is likely a getter/setter)
- complexity 4-10: 1 test
- complexity 11+: number of tests = complexity / 2
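The rule of thumb above is simple enough to encode in a small helper. This is only a sketch of the thresholds as I read them; the method name is mine, and I treat a complexity of exactly 3 as falling in the "no test" bucket:

```java
public class TestCountRule {
    // Stelligent's rule of thumb, as described above (boundary choices are mine):
    // 1-3: no test, 4-10: one test, 11+: complexity / 2 tests.
    static int suggestedTests(int complexity) {
        if (complexity <= 3) {
            return 0;              // likely a getter/setter
        }
        if (complexity <= 10) {
            return 1;
        }
        return complexity / 2;     // integer division, e.g. 11 -> 5
    }
}
```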
Note: you have to be careful with cyclomatic complexity. I recently wrote a switch statement that had 40 cases. Technically, that's a complexity measure of 40. Obviously, it's pointless to write lots of unit tests for 40 case statements that differ trivially. But when the complexity number derives directly from the logic of the method, I think Stelligent's rule of thumb (tempered by this caveat) is an excellent place to start.
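To illustrate the caveat: each non-default case label in a switch adds one to the metric, even when the branches are trivial. A sketch (the method and names are mine):

```java
public class SwitchComplexity {
    // Each non-default case below adds 1 to the cyclomatic complexity,
    // even though the branches differ only trivially.
    static String sizeLabel(int code) {
        switch (code) {
            case 0:  return "small";
            case 1:  return "medium";
            case 2:  return "large";
            default: return "unknown";
        }
    }
}
```

A 40-case version of this would score around 40 on the metric while needing far fewer than 40 distinct tests.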
2 comments:
Those numbers seem low. For example, the following method has a complexity of 3.
void foo(boolean x, boolean y) {
    if (x) {
        doAThing();
    }
    if (y) {
        doAnotherThing();
    }
}
You would need a test with x = true and y = true, and probably one with x != y (or one with x == y) to cover a basis path, depending on the real code of course. It really seems like it needs at least 3 tests, though.
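To make those basis-path tests concrete, here is a self-contained stand-in for foo. The return value is my addition so that each path is observable; doAThing and doAnotherThing are reduced to markers:

```java
public class BasisPaths {
    // Stand-in for the method above: "A" records that doAThing ran,
    // "B" records that doAnotherThing ran.
    static String foo(boolean x, boolean y) {
        StringBuilder ran = new StringBuilder();
        if (x) {
            ran.append("A");   // doAThing
        }
        if (y) {
            ran.append("B");   // doAnotherThing
        }
        return ran.toString();
    }
}
```

A basis-path set for this method is foo(true, true), foo(true, false), and foo(false, true), matching the count of three tests suggested above.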
Andrew,
I think that, where possible, it is important to delve into the cyclomatic paths, because that may provide additional benefit (as Bob Evans said). I've blogged about that here.