Bloomberg recently reported that Microsoft "provides intelligence agencies (NSA, CIA, FBI) with information about bugs in its popular software before it publicly releases a fix."
Though I cannot confirm whether other governments were also alerted to these bugs or only the US agencies, this strongly smells like a selective disclosure practice on Microsoft's part.
Not only is it unfair, but it is also dangerous. For one, other governments are denied the early alert even though they pay the same price for Microsoft's software. Secondly, and more importantly, selective disclosure alienates those governments and foreign security researchers. Reporting a vulnerability to Microsoft earns the researcher a maximum of $150,000 (at Microsoft's discretion) and, ironically, puts their own country in an uneasy position.
Some will argue that this is fair because there is a payout after all. They have entirely missed the point. Security is about trust. When a researcher reports a vulnerability to Microsoft, there is an implicit understanding that the information stays private. On one hand, Microsoft expects the researcher to hold off on disclosure. On the other hand, Microsoft breaks that very trust by letting select third parties know about the discovery. In doing so, Microsoft also breaks the trust between itself and its customers, risking dire political consequences.
Furthermore, selective disclosure may have adverse effects on the exploit market. It can only reduce the number of publicly known bugs and drive up the price of vulnerabilities. Microsoft will become a victim of its own practice when it has to spend more and more to acquire information it could have received for free. All because the trust is gone.
I cannot say for certain how this will all play out. Personally, I think it is bad, and time will give us a definite answer.