Security Research & Defense

Information from Microsoft about vulnerabilities, mitigations and workarounds, active attacks, security research, tools and guidance

February 2010

  • Details on the New TLS Advisory

    Security Advisory 977377: Vulnerability in TLS Could Allow Spoofing

    In August 2009, researchers at PhoneFactor discovered a vulnerability in the Transport Layer Security (TLS) and Secure Sockets Layer (SSL) protocols. As the issue is present in the TLS/SSL standard itself, not only in our implementation, Microsoft is working together with ICASI, the Industry Consortium for Advancement of Security on the Internet, to address this vulnerability. Today, Microsoft released an advisory and an associated workaround package that experienced administrators can use to protect their web services.

    Explaining the risk of the security vulnerability

    The issue, CVE-2009-3555, allows an attacker who has successfully become a man-in-the-middle to prepend information to a TLS/SSL-protected connection. It does not allow an attacker to read or modify the encrypted data. This vulnerability exists because certain SSL-protected protocols, such as HTTP, assume that information received after a TLS renegotiation was sent by the same client as the information received before that renegotiation. Renegotiation is a feature of the TLS protocol, described in RFC 2246, which allows either peer to renegotiate the parameters of a protected connection at any point in time. An attacker could exploit this vulnerability by intercepting a legitimate connection from a client and then initiating a renegotiation with the vulnerable server, or by piggybacking on a TLS renegotiation initiated by the web server.

    This vulnerability can affect different protocols that use TLS/SSL, but most clearly affected is the HTTPS protocol which protects web transactions.

    IIS 6, IIS 7, IIS 7.5 not affected in default configuration

    Customers using Internet Information Services (IIS) 6, 7 or 7.5 are not affected in their default configuration. These versions of IIS do not support client-initiated renegotiation, and will also not perform a server-initiated renegotiation. If there is no renegotiation, the vulnerability does not exist. The only situation in which these versions of the IIS web server are affected is when the server is configured for certificate-based mutual authentication, which is not a common setting.

    Scope of the vulnerability in IIS 5

    IIS 5 does allow clients to initiate a TLS renegotiation and is vulnerable in its default configuration. Our investigation has shown that this vulnerability is unlikely to be exploited successfully: an attacker would already need to successfully mount a man-in-the-middle attack and intercept a connection between a client and the vulnerable server in order to exploit it.

    Likelihood of the vulnerability being exploited in the general case

    Eric Lawrence, a Program Manager on the Internet Explorer security team, also evaluated the exploitability of the vulnerability and found the following:

    Below is an example of an exploitation of this vulnerability. The first request is prepended to the SSL connection by the attacker, the subsequent headers are sent by the unwitting victim client, and the final block is the web server's response:

    GET /app/transaction.asp?action=sendMoney&srcAcctID=12345&targetAcctID=6666&amount=2000 HTTP/1.1
    GET /app/updatecheck.asp HTTP/1.1
    User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.2; Trident/4.0;)

    Cookie: PREF=ID=0d3e398a45b12d8a:U=ed647eec50a4edca:HSID=AOHxfGVRaYatVUIUs
    Authorization: Basic c2VjcmV0OnBhc3N3b3Jk


    HTTP/1.1 200 OK
    Content-Type: text/html
    Server: Microsoft-IIS/6.0
    Connection: close

    <html><body>Successfully sent $2000USD from account #12345 to account #6666.</body></html>

    The Cookie and Authorization headers represent client state and identification information; by being in the same header block as the attacker's request, they effectively authorize that spliced request.

    There are two reasons why this attack is unlikely to be exploited:

    ·         If a site is vulnerable to this attack, it is almost certainly also vulnerable to a classic Cross-Site Request Forgery (CSRF) attack. The attacker need only send the client some HTML containing an IMG SRC pointing to the victim URL, and the client will dereference that URL, automatically providing the credentials to the server. This is a simpler mechanism for accomplishing the same thing that the more complicated TLS/SSL request-splicing attack hopes to achieve.

    ·         If an attacker were able to overcome the previous issue, this technique will not work for a site that only accepts parameters via HTTP POST requests.  The reason is that the attacker must convey the malicious request within the POST’s body.  By definition, the HTTP POST body occurs after the request header block has completed.  So, the malicious attack would look something like this:

    POST /app/transaction.asp HTTP/1.1

    Content-Type: application/x-www-form-urlencoded

    Content-Length: 62

    GET /app/updatecheck.asp HTTP/1.1
    User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.2; Trident/4.0;)

    Cookie: PREF=ID=0d3e398a45b12d8a:U=ed647eec50a4edca:HSID=AOHxfGVRaYatVUIUs
    Authorization: Basic c2VjcmV0OnBhc3N3b3Jk


    HTTP/1.1 400 Bad Request
    Content-Type: text/html
    Server: Microsoft-IIS/6.0
    Connection: close

    <html><body>Credentials required.</body></html>

    Because the victim’s spliced request is sent after the header block, the credentials will not be used to authenticate the submitted transaction.  The only way an attacker could make this work is if the server accepted what are called “Trailer” headers from the HTTP request, like so:

    POST /app/transaction.asp HTTP/1.1

    Content-Type: application/x-www-form-urlencoded

    Transfer-Encoding: chunked

    Trailer: Authorization, Cookie, X-Ignore

    GET /app/updatecheck.asp HTTP/1.1
    User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.2; Trident/4.0;)

    Cookie: PREF=ID=0d3e398a45b12d8a:U=ed647eec50a4edca:HSID=AOHxfGVRaYatVUIUs
    Authorization: Basic c2VjcmV0OnBhc3N3b3Jk


    However, it is unlikely that real-life web applications would actually accept this type of Trailer header, as it is a little-used part of HTTP that is not supported by mainstream browsers such as Internet Explorer.
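    For reference, trailer headers ride at the very end of a chunked message body. A sketch of the on-the-wire framing, as illustrative Python byte strings reusing the header values from the example above:

```python
# Sketch of an HTTP/1.1 chunked request carrying a trailer header: the
# Authorization header is sent *after* the body, and only matters if the
# server honors the Trailer mechanism (request and values are illustrative).
body = b"action=sendMoney&srcAcctID=12345&targetAcctID=6666&amount=2000"
chunk = b"%x\r\n%s\r\n" % (len(body), body)   # one chunk: hex size, CRLF, data, CRLF
trailers = b"Authorization: Basic c2VjcmV0OnBhc3N3b3Jk\r\n\r\n"

request = (
    b"POST /app/transaction.asp HTTP/1.1\r\n"
    b"Content-Type: application/x-www-form-urlencoded\r\n"
    b"Transfer-Encoding: chunked\r\n"
    b"Trailer: Authorization\r\n"
    b"\r\n"
) + chunk + b"0\r\n" + trailers   # zero-length chunk ends the body, then the trailers

# The credential header appears only after the message body:
print(request.index(b"Authorization: Basic") > request.index(b"0\r\n"))  # True
```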

    Workaround package available to disable TLS renegotiation

    While a comprehensive, multi-vendor fix is in the works, today we released a workaround package that allows system administrators to disable TLS renegotiation on their servers. This package is described in KB article 977377 and disables TLS/SSL renegotiation for all TLS/SSL-protected protocols. We need to stress that TLS/SSL renegotiation is a feature of the protocol that is used by several applications; common examples are Microsoft Exchange and ActiveSync. These applications may not operate correctly after installation of this workaround package, so we recommend that administrators carefully test the workaround before deploying it on production systems. The package protects all clients making SSL connections to the server on which it is installed; installing it on clients provides no security benefit.

    We recommend that customers only install this workaround if they have very specific concerns regarding this vulnerability and require an interim solution while Microsoft and other vendors work on a revision of the protocol.

    Despite the low risk of active exploitation, this vulnerability breaches a security promise made by the TLS protocol and we intend to address it comprehensively. We are working with the relevant standards body and our partners in ICASI to ensure that our fix for this issue is compatible with third party SSL/TLS-enabled solutions.

    Thanks to Nasko Oskov from Windows Security, Eric Lawrence from Internet Explorer and Jonathan Ness from the MSRC Engineering team for their significant contributions to this blog post.

    -Maarten Van Horenbeeck, MSRC Program Manager

    *Posting is provided "AS IS" with no warranties, and confers no rights.*


  • MS10-006 and MS10-012: SMB security bulletins

    Today we released two bulletins to address vulnerabilities in SMB. MS10-006 addresses two vulnerabilities in the SMBv1 client implementation, and MS10-012 addresses four vulnerabilities in the SMB server implementation. In this blog entry, we want to help you understand the vulnerabilities and better prioritize the updates.

    What are the SMB server vulnerabilities and how could they be exploited?

    The first issue is an authenticated remote code execution (RCE) vulnerability (CVE-2010-0020) in the server SMBv1 implementation on all versions of Windows. A long filename can lead to kernel pool memory corruption in an error path. This issue has a severity rating of Important because an attacker needs to be authenticated to perform the attack.

    The second and third issues (CVE-2010-0021 and CVE-2010-0022) are remote unauthenticated denial-of-service (DoS) vulnerabilities in the SMBv1 and SMBv2 server implementations and have Important severity ratings. CVE-2010-0021 is caused by a race condition when handling valid Negotiate requests. CVE-2010-0022 is caused by an integer underflow when handling a path name in the SMB request.

    The final server-side issue is CVE-2010-0231, an Important-severity remote unauthenticated elevation of privilege (EoP) affecting all versions of Windows. This issue is unusual in that it is caused by weak entropy in the cryptographic challenge values generated by SMB. An attacker could exploit this issue and gain access to the SMB server under the credentials of an authorized user.
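    To see why weak entropy matters in a challenge-response scheme, consider this illustrative Python sketch (our own simplification, not SMB's actual challenge algorithm): a challenge produced by a deterministic generator is predictable from the one before it, so an attacker who observes one challenge can pre-compute the next.

```python
import os

def weak_next_challenge(prev: int) -> int:
    # Hypothetical low-entropy generator: the next challenge is fully
    # determined by the previous one (a linear congruential step).
    return (1103515245 * prev + 12345) % 2**64

def strong_challenge() -> int:
    # By contrast, a challenge drawn from the OS CSPRNG is unpredictable.
    return int.from_bytes(os.urandom(8), "big")

# The attacker observes one challenge on the wire...
observed = weak_next_challenge(42)
# ...and can compute exactly what the server will issue next.
predicted = weak_next_challenge(observed)
actual = weak_next_challenge(observed)   # the server's real next challenge
print(predicted == actual)               # True: responses can be pre-computed
```

    With predictable challenges, a captured valid response can be replayed at the right moment, which is why an attacker could gain access under the credentials of an authorized user.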

    We recommend placing higher priority on the SMB server-side update due to the risk of RCE and EoP on all systems.

    What are the SMB client vulnerabilities?

    The first issue is a Critical severity kernel pool memory corruption vulnerability (CVE-2010-0016) in the client SMBv1 implementation on Windows Server 2003 and earlier. The vulnerability occurs during the SMB client/server negotiation phase and does not require authentication. A remote attacker who successfully exploits this issue could gain complete control of the target system.

    The second one is an Important severity race condition in the client SMBv1 code on Windows Vista and higher (CVE-2010-0017). The vulnerability is in the SMB client/server negotiation phase and does not require authentication. The severity of this issue depends on the version of Windows on the client computer:

    • On Windows Vista and Windows Server 2008, a remote attacker would not be able to gain control of a target system using this vulnerability; instead, the impact would be a system DoS. However, a local authenticated user could potentially exploit this vulnerability and gain control of the system. On these platforms, the severity of this issue is Important. The update should be prioritized for Terminal Servers and other systems that allow users to log on locally.
    • On Windows 7 and Windows Server 2008 R2 a remote attacker can potentially gain control of a target system using a variation of this vulnerability. Due to the RCE impact, the severity of this issue on these platforms is Critical. Unsuccessful attempts to exploit the vulnerability would result in a system DoS. This update should be applied to all affected systems due to the RCE risk; however, due to the nature of the issue, DoS is much more likely.

    Why does the SMB client update have an aggregate severity of Critical on Windows 7 and Windows Server 2008 R2, but only Important on Vista and Windows Server 2008?

    As outlined above, CVE-2010-0017 affects Vista and later systems and is rated Important on Vista and Windows Server 2008. However, on Windows 7 and Windows Server 2008 R2, the severity is higher (Critical) due to the risk of RCE. The reason for this difference is a design change made during the Windows 7 development process, when the SMB client code moved to a new kernel-mode networking I/O mechanism, Winsock Kernel (WSK). This change exposed the SMB client code to different timing conditions, surfacing a race condition. This race condition is different from the issue present on Vista and Windows Server 2008, although it is reachable under similar conditions.

    It should be noted that WSK is not the source of the vulnerability and no change to WSK is being made in this update.

    How could a malicious user exploit the SMB client vulnerabilities?

    It is important to understand that both of the vulnerabilities in MS10-006 are in the SMB client implementation and do not affect SMB server roles. (For more details regarding SMB client/server roles, see ref. 2 below.) Therefore, in order to exploit these vulnerabilities, an attacker would have to set up a malicious SMB server and trick the client into connecting to it. If your environment does not allow outbound SMB connections to the Internet (a best practice), then you are protected from the Internet attack vector. A malicious user on the local network (or a compromised computer) would be able to exploit these issues by performing man-in-the-middle attacks and responding to SMB requests from clients within the intranet.

    The Internet attack vector would involve browsing to a malicious or compromised website, or receiving HTML email with embedded links to a malicious SMB server. If a victim attempted to retrieve the files or other content specified in the HTML file, an outbound SMB connection would be made and assuming SMB traffic were allowed through the perimeter firewall, the issues could be exploited.

    Depending on your environment, you may not need to place a high priority on the SMB client-side update. 

    We would like to thank Dustin Childs from MSRC and Kowshik Jaganathan and the Windows Sustained Engineering team for their hard work on this update.

    - Bruce Dang and Mark Wodrich, MSRC Engineering


    1. Winsock Kernel on MSDN
    2. SMB client/server roles

  • MS10-007: Additional information and recommendations for developers

    Today we are releasing MS10-007 to address a URL validation issue generally applicable to the ShellExecute API.

    How would a malicious user leverage this vulnerability?

    This issue involves how ShellExecute handles strings that appear to be legitimate URLs, but are malformed such that they result in execution of arbitrary code. Various technologies use ShellExecute to initiate a browser navigation. It is assumed that the operation is safe if the parameter passed to ShellExecute “looks like a URL.” It seems reasonable to expect that if a string is a valid URL, it cannot possibly result in execution of arbitrary code when processed by ShellExecute.

    But while it may be valid to assume that a typical, well-formed URL will not execute a system command, it should be understood that the core purpose of the ShellExecute API is to execute files. This vulnerability involves the use of a valid-looking URL that ShellExecute will run as a system command. To be exploited, a user might click on a link appearing outside the context of the browser, for example in an address book contact. At that point, a remote executable could run without prompting.

    Recommendations for Developers

    We recommend that application developers wishing to use ShellExecute for URL-based navigation take a conservative approach to validation. First, developers should heed the specific guidance in KB943552 as it pertains to this scenario. Additionally, rather than simply validating that a URL is of the format [scheme]://[FQDN]/[path]?[querystring], it is advisable to also validate that the URL scheme is one of a specific set of allow-listed URL schemes, for example “http” or “https.” This is consistent with guidance provided in Chapter 4 of the Microsoft Design Guidelines for Secure Web Applications.
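    As a sketch of that guidance (illustrative Python, not the Windows API; the function name and allow-list are ours), scheme validation layered on top of structural validation looks like this:

```python
from urllib.parse import urlparse

# Allow-list of URL schemes considered safe for browser navigation.
ALLOWED_SCHEMES = {"http", "https"}

def is_safe_navigation_url(url: str) -> bool:
    """Return True only for well-formed URLs with an allow-listed scheme."""
    parsed = urlparse(url)
    # Require both an allowed scheme and a host component; this rejects
    # file: URLs, scheme-less strings, and other valid-looking oddities.
    return parsed.scheme.lower() in ALLOWED_SCHEMES and bool(parsed.netloc)

print(is_safe_navigation_url("https://www.example.com/page?q=1"))      # True
print(is_safe_navigation_url("file:///C:/Windows/system32/calc.exe"))  # False
print(is_safe_navigation_url("notepad.exe"))                           # False
```

    The point of the allow-list is that it fails closed: anything the validator has not explicitly approved is rejected, rather than passed through because it merely "looks like a URL."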

    As it turns out, many commonly-used code paths actually do perform this level of URL scheme validation and thus do not present viable attack vectors, even in the presence of the ShellExecute bug. Defense-in-depth FTW!

    Thanks to Chengyun Chu for insight and analysis on this issue.

    - David Ross, MSRC Engineering

    *Posting is provided "AS IS" with no warranties, and confers no rights.*


  • Assessing the risk of the February Security Bulletins

    This morning, we released 13 security bulletins. Five have a maximum severity rating of Critical, seven Important, and one Moderate. One security bulletin (MS10-015, ntvdm.dll) has exploit code already published, but we are not aware of any active attacks or customer impact. We hope that the table and commentary below help you prioritize the deployment of the updates appropriately.


    Most likely attack vector and likely first 30 days impact:

    • Victim opens a malicious AVI or WAV file. — Likely to see a working exploit in the next 30 days.

    • Attacker hosts a malicious webpage and lures the victim to it. — Likely to see exploit code released resulting in a binary on a WebDAV share being executed. For more detail, see this SRD blog post.

    • (SMB Client) Locally logged-in attacker with low privilege runs a malicious executable to elevate to high privilege. — Likely to see working exploit code for local attacker escalation. For more detail, see this SRD blog post.

    • (ActiveX kill-bits) Attacker hosts a malicious webpage and lures the victim to it. — Third-party code is not rated for exploitability; likely to see working exploits for vulnerabilities in third-party ActiveX controls.

    • (SMB Server) Attacker sends a network-based malicious connection to a remote Windows machine via SMB. — Likely to see a working proof-of-concept in the next 30 days for CVE-2010-0231, with an attacker luring a remote victim user to open a file on the attacker's server and thereby initiating a connection back to the machine where the victim is logged on. Less likely to see working exploit code for the authenticated code execution vulnerability (CVE-2010-0020) or the unauthenticated denial-of-service vulnerabilities (CVE-2010-0021 and CVE-2010-0022). For more detail, see this SRD blog post.

    • Attacker already able to execute code as a low-privileged user escalates privileges. — Proof-of-concept code already widely available. No active attacks.

    • Attacker who logs onto the console of a system where the victim later logs onto the console of the same system can potentially run code with the victim's identity. — Likely to see proof-of-concept code published for this vulnerability. However, widespread exploitation is unlikely due to the extensive user interaction required.

    • Attacker sends a network-based attack against a system on the local subnet. — May see denial-of-service proof-of-concept code published leveraging CVE-2010-0239 or CVE-2010-0241; attackers are less likely to discover real-world attack surface in the next 30 days for CVE-2010-0240. /GS is an effective mitigation for several of the CVEs. CVE-2010-0242 is denial of service only.

    • Attacker sends a malicious .xls file to a victim who opens it with Office XP or lower. — Likely to see a working exploit file effective on Office XP in the first 30 days. Office 2003 and Office 2007 are not affected.

    • Attacker sends a malicious .ppt file to a victim who opens it with PowerPoint Viewer 2003. — Likely to see a working exploit file effective on PowerPoint Viewer 2003. However, PowerPoint Viewer 2003 was replaced online by PowerPoint Viewer 2007; only victims who use PowerPoint Viewer 2003 from the Office 2003 install disk would be vulnerable to the PowerPoint Viewer vulnerabilities. Less likely to see working exploits for the other PowerPoint vulnerabilities. The PowerPoint Viewer cases are not exploitable for code execution on Windows XP or Windows 2000.

    • Attacker running code on a virtual machine crashes the host OS. — Unlikely to see working exploit code in the next 30 days.

    • Attacker potentially able to cause a denial of service via Kerberos traffic if a victim server is configured with a trust relationship to an MIT Kerberos realm. — Unlikely to see public exploit code in the next 30 days.

    • Attacker sends a malicious JPEG to a victim; the victim saves the file, launches mspaint, and opens it via File > Open. — Likely to see exploit code developed. Unlikely to have broad impact, as mspaint is not the registered file association for JPEG.



    We also released Security Advisory 977377 covering the TLS man-in-the-middle vulnerability disclosed several months ago. The advisory describes more about the Microsoft attack surface (and a mitigation option). You can read our blog post about the issue above.

    Thanks to all of MSRC Engineering for providing data for this table.  Thanks Jerry Bryant, Andrew Roths, and Mark Wodrich for your ordering / priority thoughts.

    - Jonathan Ness, MSRC Engineering

    *Posting is provided "AS IS" with no warranties, and confers no rights.*

    Update Feb 10 - Changed Exploitability Index rating for third party killbits to "n/a" as we do not rate exploitability of third party vulnerabilities.

    Update Feb 25 - Thanks to Secunia for sending in the PowerPoint Viewer question.  Listed non-exploitable platforms in the table above.

  • Using code coverage to improve fuzzing results

    Hi all,

    I’m Lars Opstad, an engineering manager in the MSEC Science group supporting the SDL within Microsoft. I wanted to share with you some of the ways that we are improving our internal security practices, specifically in the area of file fuzzing.

    Many fuzzers take a good file (template) as a starting point for creating malformed content. For years, dating back to Writing Secure Code in 2002, the SDL has encouraged people to fuzz their file parsers with a “good representative set” of template files. The question is “how do I tell if I have a good set?” At Microsoft, our answer is code coverage, which is in line with other external research (Guruswami, Miller, and Sutton, Greene and Amini to cite only a few).

    Code Coverage Definition

    Before getting into how we use code coverage for fuzzing, I should briefly define the terms. If you are well versed in these concepts, feel free to skip to “Using Coverage for Better Fuzzing.”

    Code coverage is a measure of how well a set of code is exercised based on a set of tests or inputs. The simplest scheme of code coverage is at the function level. Functional coverage measures which functions were called and which functions were not called. While this is useful at a very broad level, it does not really provide the granularity necessary to measure meaningful coverage for a file parser. A better metric is “block” coverage. A basic block in coverage terms is a section of code that always executes in sequence with no jumps in or out. Take the following code sample: 

    x+=1;                                     // block 1
    if (    sin(x)
            <                                 // block 2 (although executes after block 3)
            sin(y)    )                       // block 3
    {
            result = x;                       // block 4
    }
    else                                      // block 5
    {
            result = y;
    }
    x+=1;                                     // block 6
    return result;

    For the above example, there are two main coverage cases depending on whether sin(x)<sin(y) or not. If it is, the blocks covered will be (in order of execution) 1, 3, 2, 4, 6 (not 5). If not, the blocks covered will be 1, 3, 2, 5, 6 (not 4). For full coverage, both those cases are necessary.

    There are additional coverage concepts that can be useful in more detailed analysis, but for the purpose of this post, that lays a good foundation.

    There are many free and commercially available coverage tools (including one built into Visual Studio, described here) that will give you this type of information. The focus of this post is not on any one tool, but using the measurements to improve file fuzzing effectiveness.
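    As a minimal illustration of gathering this kind of coverage (per executed line rather than per basic block, and in Python rather than a compiled parser, but the principle is the same), the following sketch traces which lines of a toy "parser" run for a given input and shows that a second input adds new coverage:

```python
import sys

def parse(x, y):
    result = 0
    if x < y:
        result = x
    else:
        result = y
    return result

def lines_covered(func, *args):
    """Run func(*args) and return the set of its line offsets that executed."""
    covered = set()
    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is func.__code__:
            covered.add(frame.f_lineno - func.__code__.co_firstlineno)
        return tracer
    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return covered

# The two branches execute different line sets; full coverage needs both inputs.
a = lines_covered(parse, 1, 2)     # takes the "if" branch
b = lines_covered(parse, 2, 1)     # takes the "else" branch
print(sorted(a | b) != sorted(a))  # True: the second input adds new coverage
```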

    Using Coverage for Better Fuzzing

    Our first goal with using coverage for fuzzing was simply to evaluate the overall completeness of a template set. This was achieved by measuring each template running through the application and calculating the overall block coverage of the parser code. The most important part of this exercise was performing the gap analysis when coverage was too low (e.g. less than 60%). Through the gap analysis, the team could then find or build additional templates that exercised the remaining code. For example, with a standard RIFF format, it is possible that one type of tag was missing from all the templates. With that knowledge, the owner of that component could easily find or generate a file that contains the missing record type and increase the coverage appropriately.

    After doing some work in this area, we realized that many template files covered almost exactly the same blocks of code. Fuzzing one of these templates is roughly equivalent to fuzzing any of the other similar templates. We wanted to increase our odds of hitting the less frequently used parts of the code, under the basic assumption that bugs are much more common in the less tested parts of the code. Thus, we wanted to reduce duplication and choose the fewest number of templates while still providing coverage equivalent to our full original set.

    Technical Details of Coverage Based Template Optimization

    The algorithm we used to choose the optimized set of templates is known as the Greedy Algorithm, and operates under the premise of choosing the best fit, then the next best fit and so on. In this particular case, here is the pseudo-code for our algorithm (after having gathered coverage for each template): 

    while (FullTemplateList.count > 0)
    {
        // Choose the remaining template with the most blocks still covered
        Template candidate = null;
        foreach (Template t in FullTemplateList)
        {
            if (candidate == null || t.BlocksCovered > candidate.BlocksCovered)
            {
                candidate = t;
            }
        }
        OptimalTemplateList.Add(candidate);
        FullTemplateList.Remove(candidate);

        // Exclude blocks from candidate template from coverage of other templates
        foreach (Template t in FullTemplateList)
        {
            t.CoverageMask = t.CoverageMask & ~candidate.CoverageMask;
            if (t.BlocksCovered == 0)
            {
                FullTemplateList.Remove(t);
            }
        }
    }

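    The same greedy selection can be sketched in runnable Python, using sets of covered block IDs as the coverage masks (the template names and masks below are made up for illustration, not taken from a real parser):

```python
def minimize_templates(coverage):
    """coverage: dict mapping template name -> set of covered block IDs.
    Returns a reduced list of templates with the same total coverage."""
    remaining = {name: set(blocks) for name, blocks in coverage.items()}
    optimal = []
    while remaining:
        # Pick the template covering the most not-yet-covered blocks.
        best = max(remaining, key=lambda name: len(remaining[name]))
        chosen_blocks = remaining.pop(best)
        optimal.append(best)
        # Mask the chosen template's blocks out of every other template,
        # dropping templates that no longer add any new coverage.
        for name in list(remaining):
            remaining[name] -= chosen_blocks
            if not remaining[name]:
                del remaining[name]
    return optimal

templates = {
    "A": {1, 2, 3},
    "B": {2, 3, 4, 5, 6, 7},   # six blocks: picked first
    "C": {1, 8, 9},
    "D": {5, 6, 10},
}
print(minimize_templates(templates))  # ['B', 'C', 'D']
```

    Here Template A drops out entirely: once B and C are chosen, it covers no new blocks, so three templates give the same total coverage as four.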
    To illustrate this algorithm, assume you have measured the coverage of seven input files, Templates A through G, each with a mask recording which blocks of code it covers.

    [Coverage mask table for Templates A through G]

    The best choice from this list is Template B, with six blocks covered. After one iteration of the loop, Template B has moved to the OptimalTemplateList, and the remaining FullTemplateList contains Templates A, C, D, F, E and G, with the blocks already covered by Template B masked out of their coverage.

    Ultimately, this would result in four templates in the OptimalTemplateList: B, C, A and D.
    Results from Template Optimization

    Coverage-based template optimization was first used at Microsoft during the development of Windows Vista. Applying the algorithm above yielded a significant reduction in the number of templates necessary to cover the file parser. In examining 90,000 JPG files parsed by MSHTML.DLL (used in Internet Explorer), we identified fewer than 100 files that gave the same coverage as all 90,000. As a result, the odds of finding issues near blocks of code covered by only one template increased by a factor of roughly 900 (90,000 / 100).

    While investigating one MSRC case, Gavin Thomas from MSRC Engineering used template optimization to see how effective a small, optimized set of templates was in comparison to a normal, large set of templates. For this test, he used two different mutation fuzzers and ran each for 500,000 iterations. For either fuzzer, the issue count was nearly twice as high for the optimized set:

    [Chart: Fuzzing results using optimized vs. non-optimized templates]


    Code coverage is only one approach to improving the fuzzing process. While it does not guarantee that you will find all of the bugs in your product, it increases the probability that your fuzzer will reach the less frequently exercised parts of your code because you reduce the time spent exercising more common blocks of code. MSEC encourages software development teams to use this technique to maximize the efficacy of their fuzz testing.

    Thanks to Adel Abouchaev and Michael Levin for refinements in the algorithm, Gavin Thomas and Andy Renk for experiments verifying effectiveness, Michael Howard for historical perspective, and Damian Hasse and Matt Miller for technical review.


    Lars Opstad, MSEC Science