The Problem With Standards and Certifications
One of the common misconceptions about our work at CITL is that we’re certifying software products, or in some way asserting that they meet a particular standard. Some other organizations have gone this route, but we feel that it doesn’t drive the industry towards the end result we desire. Rather, our data is more akin to nutritional facts: we provide all the statistics, with enough guidance for an average user to interpret the values and understand what is good and what is bad. More importantly, it allows direct comparison between two products.
When I shop for granola bars, my main concern is protein. I’m buying them so that I can have a protein hit available in my purse, for days when I’m busy and my meal schedule is irregular. I don’t care that much about sugar or calories, because I’m not concerned about my weight. The reason all those other bars that I don’t buy stay on the shelves, however, is that there are lots of other shoppers with lots of other decision processes. Some look at fat first, or calories, or sugar content. Of course, that nutritional label isn’t everything. I could buy the bar with 20g of protein, if that were truly my only concern, but the taste is a dealbreaker: I just can’t stand them.
Similarly, our data isn’t going to provide the whole picture. One product might do better than another when it comes to build quality and security features, but maybe it just isn’t as good a word processor, or maybe it doesn’t have that one capability that you really look for in a *your software product category here*.
At a very coarse level, certifications can allow for product comparison, but only if one product got the certification and one didn’t. If having that seal of approval makes the difference between making a sale or not, soon all the big brands will have gone to whatever minimal effort is required to obtain it, and then there’s no basis for comparison at all. More importantly, that’s *all* the software vendor is going to do. The consumer can’t tell, based on that gold star, whether the product obtained it with ease or just barely scraped by. It doesn’t let them tell the OK product from the great one. Therefore, once all software vendors have managed to drag themselves over whatever low bar was set, they’re done. They have no motivation to continue to improve, or to keep looking for new ways to make their products safer. With nutritional facts, on the other hand, that competition remains. The consumer can see that both products do pretty well, but they can also see that *this one* did better than *that one*.
Don’t get me wrong: defining a bare minimum that companies can aspire to meet can be useful, but without transparency, that’s all they’re going to do.