I wonder what metrics Microsoft uses to calculate the benefit of a new feature. Take the new battery notification messages added in Windows 7, for example. On paper, and during testing, that must have seemed like a useful feature to have in the product; it certainly seems useful to me.
Instead, when the feature actually worked as it was supposed to, it turned into a mini PR issue for the company, as sites reporting that ‘Microsoft is investigating battery notification issues…’ steadily appeared.
Of course there’s nothing wrong with reporting that there might be an issue (thankfully it didn’t turn into the sky-is-falling fiasco of the T-Mobile data hiccup), but I shudder to think how much time and energy Microsoft had to invest in investigating the reports, speaking with partners and then testing the reported occurrences. And that’s not even counting the ginormous cost of Steven Sinofsky (his hourly rate must be up there) writing his clarification post over on the Engineering Windows 7 blog. Great post, by the way!
I can imagine the next meeting of the ‘Windows 7 Battery Notification’ team. ‘Hey, great feature folks. Nice work. But unfortunately we’ve had to make you all redundant. Steven’s post was charged to our cost center and the budget for the next year is all gone…’
I wouldn’t be surprised if a quiet Windows Update in the next month or two simply removes this feature altogether. There, problem solved!
Let this be a sobering thought for all developers out there unlucky enough to be tasked with estimating the cost of a simple new feature (tallied up below):
- Specification of feature: 4 hours
- Cost of development: 12 hours
- Testing and deployment: 8 hours
- Investigation into and write-up of supposed problems caused by the feature: 2,396 hours
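
For anyone who wants to double-check the damage, here’s a quick back-of-the-envelope tally in Python (the hours are, of course, just my own tongue-in-cheek numbers from the list above):

```python
# A quick tally of the (entirely tongue-in-cheek) estimate above.
estimates = {
    "Specification of feature": 4,
    "Cost of development": 12,
    "Testing and deployment": 8,
    "Investigating supposed problems": 2396,
}

# Hours actually spent building and shipping the feature.
build_cost = (estimates["Specification of feature"]
              + estimates["Cost of development"]
              + estimates["Testing and deployment"])      # 24 hours

# Hours spent dealing with the fallout afterwards.
fallout = estimates["Investigating supposed problems"]    # 2,396 hours

print(f"Total: {sum(estimates.values())} hours")          # Total: 2420 hours
print(f"Fallout vs. build: {fallout / build_cost:.0f}x")  # Fallout vs. build: 100x
```

In other words, the fallout costs roughly a hundred times as much as building the feature in the first place.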