Outcome and Aftermath

There are two kinds of statistics, the kind you look up and the kind you make up.

Archie Goodwin

Table 15-7 summarizes my results. I have marked cells where an operating system excels with a + and corresponding laggards with a –. For a number of reasons it would be a mistake to read too much into this table. First of all, the weights of the table’s metrics are not calibrated according to their importance. In addition, it is far from clear that the metrics I used are functionally independent, or that they provide a complete or even representative picture of the quality of C code. Finally, I entered the +/– markings subjectively, trying to identify clear cases of differentiation in particular metrics.

Table 15-7. Result summary

Metric                                      FreeBSD   Linux   Solaris   WRK
File organization  
Length of C files  
Length of header files + 
Defined global functions in C files  
Defined structures in header files   
Directory organization +  
Files per directory   
Header files per C source file    
Average structure complexity in files + 
Code structure  
Extended cyclomatic complexity + 
Statements per function +  
Halstead complexity + 
Common coupling at file scope   
Common coupling at global scope +  
% global functions + 
% strictly structured functions  +
% labeled statements  +
Average number of parameters to functions    
Average depth of maximum nesting  
Tokens per statement    
% of tokens in replicated code +
Average structure complexity in functions +
Code style  
Length ...
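Several of the table's code-structure metrics, such as extended cyclomatic complexity, can be approximated with very little machinery. As a rough illustration (not the tooling used for the study), the following Python sketch counts decision points in a C function body with regular expressions; the "extended" variant adds the short-circuit operators && and || to McCabe's classic count. Regex matching over raw source is naive, as it would miscount keywords appearing inside strings or comments.

```python
import re

# Decision-point keywords of the classic McCabe measure; the extended
# variant below also counts the short-circuit operators && and ||.
BRANCH_KEYWORDS = re.compile(r"\b(if|for|while|case)\b")
SHORT_CIRCUIT = re.compile(r"&&|\|\|")

def extended_cyclomatic(c_source: str) -> int:
    """Rough extended cyclomatic complexity of one C function body."""
    decisions = len(BRANCH_KEYWORDS.findall(c_source))
    decisions += len(SHORT_CIRCUIT.findall(c_source))
    return decisions + 1  # complexity = decision points + 1

body = """
if (x > 0 && y > 0) {
    for (i = 0; i < n; i++)
        total += a[i];
} else if (x < 0) {
    total = -1;
}
"""
# Four decision points (if, &&, for, else-if), so complexity is 5.
print(extended_cyclomatic(body))
```

A production measurement would instead walk a real parse of the code, which is what dedicated source-analysis tools do; the sketch only conveys what the metric counts.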
