Kevin Remde's IT Pro Weblog

You should already know about this...

And if you don’t know about it, and you consider yourself an “IT Pro”, then shame on you for not being connected and informed in areas of Security that are CRITICAL to your job, bozo.

[Image: IT Pro who isn't keeping informed]

“What’s up?”

Recently a vulnerability was found in the way Windows works with .WMF-formatted files – specifically, in a function that can be exploited to run malicious code.  Originally it was determined that there were few enough examples of this exploit “in the wild” that the fix could wait and be rolled out on the next “Super Patch Tuesday” (which is this coming Tuesday, January 10th). 

The fact that we (Microsoft) were going to wait to roll out the fix was taken in various ways by the techno-pundits out there… most using it as an excuse to drive readership to their rags by making negative statements and falsely accusing Microsoft of delaying something that should be fixed right away.

Hogwash. 
Here are a couple of things for you to consider:

As I understand it, the normal cycle for building a fix and testing it properly before it is available to be rolled out is around 6 weeks.  Consider all the permutations of the files that need to be tweaked, and all the different language versions that have to be tested.  It’s mind-blowing.  Now… consider that in this case, we’ve had 2 weeks to do 6 weeks’ worth of work.  Yes friends, there are people working around the clock on this one; guaranteed.

Also consider what happens if we DO roll out the patch for something that really isn’t all that widespread of a problem – or when there are simple workarounds that can be applied while waiting for the patch to be fully tested.  Rolling out a patch is a BIG DEAL to most IT workers, because it means testing it themselves and then rolling it out.  It may mean re-booting servers (and when you’re running 24x7, you KNOW that this can’t be taken lightly).

Microsoft has heard loud-and-clear that we need to be more predictable in our patch release cycles, which is why we now make the 2nd Tuesday of the month such an important day.  And IT workers appreciate that.  (In fact, I’ve read recently that a number of non-Microsoft people are even saying we should go to every other month now, because we’ve had months recently with no patches.)  So even if a patch has been sufficiently tested, it’s a burden to our customers.  And if it’s NOT sufficiently tested… well, many of us (myself included) have been burned in the past by applying patches that screwed something up.  Microsoft definitely will NOT make that mistake again if it can help it.  And customers appreciate that the patches recently have been pretty much rock-solid.
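
For context, the kind of interim workaround mentioned above amounts to temporarily disabling the component that auto-renders WMF content.  Here’s a minimal sketch of how an admin might script that – assuming the commonly cited shimgvw.dll (Windows Picture and Fax Viewer) workaround applies to your systems; confirm the exact steps against the official advisory before using anything like this:

```python
import os
import subprocess

# Assumption: shimgvw.dll (Windows Picture and Fax Viewer) is the component
# named in the advisory's suggested workaround -- verify before use.
SHIMGVW = os.path.expandvars(r"%windir%\system32\shimgvw.dll")

def apply_workaround() -> None:
    """Unregister the DLL so WMF content is no longer rendered automatically."""
    subprocess.run(["regsvr32", "/u", "/s", SHIMGVW], check=True)

def revert_workaround() -> None:
    """Re-register the DLL once the official patch is deployed and verified."""
    subprocess.run(["regsvr32", "/s", SHIMGVW], check=True)

if __name__ == "__main__":
    apply_workaround()
    print("Workaround applied; remember to revert it after patching.")
```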

So that brings us to today…

Unfortunately, the spread of this exploit has grown to the point where Microsoft has upped the severity and rolled out the patch “out of band”.  Meaning – you probably already see it showing up via Automatic Updates. 
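
If you want to verify that the update actually landed on a given machine (rather than just trusting Automatic Updates), a rough sketch like the following works – note that the KB number shown here is a placeholder; substitute the article number listed in the bulletin:

```python
import subprocess

# Placeholder KB number -- substitute the article number from the bulletin.
PATCH_KB = "KB912919"

def hotfix_installed(kb: str) -> bool:
    """List installed hotfixes via 'wmic qfe' and look for the given KB."""
    result = subprocess.run(
        ["wmic", "qfe", "get", "HotFixID"],
        capture_output=True, text=True, check=True,
    )
    return kb.upper() in result.stdout.upper()

if __name__ == "__main__":
    status = "installed" if hotfix_installed(PATCH_KB) else "NOT installed"
    print(f"{PATCH_KB} is {status} on this machine.")
```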

“What should I do, Kevin?”

There are a lot of resources available to you. 

For this particular vulnerability, check out this bulletin.  It contains a summary of the issue, plus links for where to go whether you’re a consumer or an IT Pro. 

And if you ARE one of those who is learning of this for the first time from this blog posting, please please PLEASE at the very least sign up for the Microsoft Security Notification Service.

Stay informed!  Stay safe!  …and let’s be careful out there!

What do you think? Should we have rolled out the patch sooner in this situation?  Should we go to an every-other-month patch release day?

  • Kevin,

    If you don't want to read an extended explanation: Yes, Microsoft should have released this patch prior to completing full regression testing, provided that it came with a warning, as do all of the Microsoft hotfixes.

    Now on to the extended commentary:
    Software vendors should not dictate the security policy of their clients, especially when those clients have paid for software and support. This was not a wildly dangerous exploit to begin with, but I hope it sets a precedent for future security policies at Microsoft. If we have an immediate danger that cannot be eliminated through the use of antivirus, firewalls, stateful packet filtering, and user education, should we still sit in the water while Microsoft defines our risk tolerance?

    I for one hope not. Microsoft must understand that Microsoft clients need to be able to respond in a timely manner and must have the ability to assess their own risk tolerance, especially if an untested patch is available. I can only offer myself as an example. I manage around 4,000 servers with 23 GigE connections to the Internet. If I had no way to protect a network with that much horsepower and bandwidth, would Microsoft share part of the bill when a serious, unpatchable security exploit was used to create tactical strikes taking down major e-commerce sites world-wide?

    Large scale example, but I hope these matters are considered seriously now before we see them on CNN.

    Sincerely,

    Vlad Mazek
    MCSE 2003, Exchange MVP
    http://www.vladville.com

  • Hi Vlad. Thanks for commenting... and you make some strong arguments.

    Really, though, going back to my point about sufficient testing... Realize that the Security powers-that-be understood how much damage was being done or what the potential for damage was. But I highly recommend that you read Mike Nash's post on the MSRC blog; specifically point #3 in his 3 "things we know for sure" list. "The only thing worse than having to deploy an update is having to deploy that same update twice because of a quality problem with the update." How much more damage could be caused if some common, critical business application or functionality were lost temporarily because the patch screwed something up? As I've said - I've seen it happen. Several years ago there was an uproar when a Microsoft update screwed things up so badly that the company I was managing IT for lost the ability to use Terminal Services. Ouch. The cure was worse than the disease! I was happy to read that a big reason we were able to release it early at all was due to some amazing, heroic efforts on the part of the quality assurance / test folks, who were able to sign off on it earlier than anticipated.

    Here's the MSRC blog address (for those of you who didn't see my other post on this subject):
    http://blogs.technet.com/msrc

    -Kevin

  • Just for the record, I disagree with Vlad's assertion "This was not a wildly dangerous exploit to begin with..."
    The vulnerability is rated critical
    The exploit was zero-day
    It doesn't get more dangerous than that in the risk-management rubric most folks use.

  • Tony,

    I concede my wording left a lot to be desired. Yes, it was rated critical and there was an exploit; however, the exploit was wasted on driving traffic to an adware site, which from my standpoint is not as damaging as, say, a DDoS or sending out clients' private information.

    I guess what I wanted to say is that, while we were not damaged by the exploits available for this hole, I hope Microsoft considers making hotfixes available in the future if the active exploits cause more damage than would be caused by a poorly tested patch. At some point Microsoft needs to let the network owner establish their own risk tolerance.

    -Vlad

  • Oh, I agree with that too, Vlad. That also weighs into the calculation. Along with that was the fact that when there are known workarounds, we might have a way to help the network owners protect themselves in the meantime.
