In June 2013, we released EMET 4.0, and customer response has been fantastic. Many customers around the world now include EMET as part of their defense-in-depth strategy and appreciate how EMET helps businesses prevent attackers from gaining access to computer systems. Today, we’re releasing a new version, EMET 4.1, with updates that simplify configuration and accelerate deployment.
EMET anticipates the most common techniques adversaries might use and shields computer systems against those security threats. EMET uses security mitigation technologies such as Data Execution Prevention (DEP), Mandatory Address Space Layout Randomization (ASLR), Structured Exception Handler Overwrite Protection (SEHOP), Export Address Table Access Filtering (EAF), Anti-ROP, and SSL/TLS Certificate Trust Pinning, to help protect computer systems from new or undiscovered threats. EMET can also protect legacy applications or third party line of business applications where you do not have access to the source code.
Today’s EMET 4.1 release includes new functionality and updates, such as:
EMET, built by the Microsoft Security Response Center (MSRC) engineering team, brings the latest in security science to your organization. While many EMET users exchange feedback and ideas in the TechNet user forums, a lesser-known fact is that Microsoft Premier Support options are also available for businesses that deploy EMET within their enterprise. Many of our customers deploy EMET at scale through Microsoft System Center Configuration Manager and apply enterprise application, user, and account rules through Group Policy. EMET works well with the tools and support options our customers know and use today.
As we continue to advance EMET, we welcome your feedback on what you like and what additional features would help in protecting your business. If you are attending the RSA Conference in San Francisco or the Black Hat conference in Las Vegas next year, be sure to stop by the Microsoft booth and share your feedback with us. We look forward to hearing from you.
The EMET Team
Today we released MS13-098, a security update that strengthens the Authenticode code-signing technology against attempts to modify a signed binary without invalidating the signature. This update addresses a specific instance of malicious binary modification that could allow a modified binary to pass the Authenticode signature check. More importantly, it also introduces further hardening that considers a binary “unsigned” if any modification has been made to a certain portion of the binary. These improvements to Authenticode signature verification, as described below, require changes from a small but important set of third-party application developers, so the new process will not be enabled by default today. Six months from today, on June 10, 2014, binaries will be considered unsigned if they do not conform to the new verification process. If you want to enable the registry key and test the change today, please see the information posted in Security Advisory 2915720.
We’d like to use this blog post to share more about Authenticode and the role of Authenticode in enabling customer confidence while running executables downloaded from the internet.
Authenticode and signed binaries
Authenticode® is a digital signature format that is used to determine the origin and integrity of software binaries. Authenticode is based on Public-Key Cryptography Standards (PKCS) #7 signed data and X.509 certificates to bind an Authenticode-signed binary to the identity of a software publisher.
The idea behind Authenticode is to leverage the reputation of a software developer or company to help customers make a trust decision. If you trust a particular company, you can execute binaries published by that company from any source or media, as long as the binary is signed with the company’s valid Authenticode signature. A valid Authenticode signature does not guarantee that the software is safe to run. However, it does prove that the binary was signed by that particular company and has not been altered since.

According to the Authenticode Portable Executable format specification, Authenticode signatures can be “embedded” in a Windows PE file, at a location specified by the Certificate Table entry in the Optional Header Data Directories. When Authenticode is used to sign a Windows PE file, the algorithm that calculates the file's Authenticode hash value excludes certain PE fields, so that the signing process can modify these fields without affecting the file's hash value. These fields are: the checksum, the Certificate Table RVA, the Certificate Table size, and the attribute certificate table itself. The certificate table contains a PKCS #7 SignedData structure containing the PE file's hash value, a signature created with the software publisher’s private key, and the X.509 v3 certificates that bind the software publisher’s signing key to a legal entity. A PKCS #7 SignedData structure can optionally contain:
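To make the excluded regions concrete, the following sketch computes their file offsets from the header fields named above, using the layout in the PE/COFF specification. This is illustrative, not a full PE parser: the caller is assumed to have already read `e_lfanew`, the optional-header magic, and the Certificate Table offset and size from the file.

```python
# Sketch: which byte ranges the Authenticode hash skips, per the PE/COFF spec.
PE32_MAGIC = 0x10B
PE32PLUS_MAGIC = 0x20B

def authenticode_excluded_ranges(e_lfanew, magic, cert_table_offset, cert_table_size):
    """Return (file_offset, length) ranges excluded from the Authenticode hash."""
    opt_header = e_lfanew + 4 + 20                 # "PE\0\0" signature + COFF file header
    checksum = (opt_header + 64, 4)                # CheckSum field (offset 64 in PE32 and PE32+)
    # Data directories begin at optional-header offset 96 (PE32) or 112 (PE32+);
    # the Certificate Table is directory index 4, and each entry is 8 bytes.
    dir_base = 96 if magic == PE32_MAGIC else 112
    cert_dir_entry = (opt_header + dir_base + 4 * 8, 8)
    cert_table = (cert_table_offset, cert_table_size)
    return [checksum, cert_dir_entry, cert_table]
```

For a typical PE32+ image with `e_lfanew = 0xF8`, this places the CheckSum at file offset 336 and the Certificate Table directory entry at offset 416; everything else in the file is covered by the hash.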
The following schema illustrates how an Authenticode signature is included in a Windows PE file:
This design philosophy ensures that no executable code is omitted from the signature. Once the code is authenticated and attributed to an author, everything that code does is the responsibility of the author.
Installer programs and Authenticode signatures
Downloaders and installers signed with Authenticode require special consideration because they download and execute other executables. As explained above, Authenticode attests that a particular program was signed by its author and that its executable code has not changed since. If that program is designed to download and run a second executable from the network, the original program must verify the second executable’s integrity, with Authenticode or by other means. Developers should take care to guarantee the same level of trust and integrity across the full download chain, ensuring that executables downloaded by their installer are also trustworthy and cannot be replaced with a malicious program.
Microsoft was informed that a small set of third-party installer programs, signed with a valid Authenticode signature, had been modified to download a different executable than the one they were originally designed to download, without invalidating the installer’s Authenticode signature.
We analyzed each of these samples to study the execution flow and learn how they worked. First, the code covered by Authenticode executes from the entry point. Next, this code looks for an overlay inside the file and reads a stream from it. Finally, the code decrypts a URL from the stream, then downloads and executes an executable from that URL. Unfortunately, the programs omitted any integrity check before executing the downloaded file.
An overlay is data appended to the physical image of a Portable Executable. Put simply, if one takes a PE binary and appends additional content to the end without adjusting the header, the file has an overlay. This data area is not defined as part of the image by the PE header and therefore isn't part of the virtual image of the loaded PE. The Authenticode verification code verifies that the attribute certificate table is the last thing in the file and reports an invalid signature if anything is appended after it.
In the sample reported to Microsoft, the size of the certificate directory had been increased to cover the overlay. So technically, the certificate directory was the last thing in the file, allowing the test to pass.
There are a couple of lessons to learn from this sample:
First, the developer intentionally stored the URL stream inside the certificate directory so that they could sign once and create different installers. This particular sub-optimal practice enabled the malicious binary modification reported to Microsoft. The MS13-098 hardening, expected to go into effect June 10, 2014, will consider a binary unsigned in this case going forward.
Second, the developer in this particular case did not validate, by any other means, the file that was subsequently downloaded and executed.
A better way to enable the scenario desired by the developer would have been to store the URL as a resource inside the PE. In doing so, the URL would have been covered by Authenticode and any attempt to modify the downloaded URL would have resulted in a failed signature verification.
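As a hedged illustration of the “other means” mentioned above, a downloader can pin the hash of the payload it expects; if the pinned hash is stored in a region covered by the Authenticode hash (such as a resource), tampering with it invalidates the signature. A minimal sketch, with hypothetical names and payload:

```python
import hashlib

# Hypothetical pinned hash. In a real installer this constant would live in a
# section covered by the Authenticode hash (e.g. a PE resource), so an attacker
# could not swap it without invalidating the signature.
EXPECTED_SHA256 = hashlib.sha256(b"trusted payload bytes").hexdigest()

def verify_payload(payload: bytes) -> bool:
    """Return True only if the downloaded bytes match the pinned hash."""
    return hashlib.sha256(payload).hexdigest() == EXPECTED_SHA256

# Only execute the downloaded file if verification succeeds.
assert verify_payload(b"trusted payload bytes")
assert not verify_payload(b"tampered payload bytes")
```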
Today, with MS13-098, as described above, the Windows team has added additional hardening and mitigation to detect this kind of bad practice and report an invalid Authenticode signature. When enabled, these hardening measures will detect cases where additional unverified data has been placed after the PKCS #7 blob in the certificate directory of a PE image. The check validates that there is no non-zero data beyond the PKCS #7 structure. Although this change prevents one form of this unsafe practice, it cannot prevent all such forms; for example, an application developer can place unverified data within the PKCS #7 blob itself, which will not be taken into account when verifying the Authenticode signature. However, as this blog post illustrates, developers are strongly discouraged from doing so, as it can lead to unsafe application behavior and could put the reputation of the signing company at risk if their application uses the unverified data in an unsafe way.
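A rough sketch of the kind of check described above, under the simplifying assumption that the certificate directory holds a single DER-encoded PKCS #7 blob (real verification also parses the WIN_CERTIFICATE wrapper; the function names here are illustrative):

```python
def der_total_length(blob: bytes) -> int:
    """Length in bytes of the DER element at the start of blob (tag + length + value)."""
    if len(blob) < 2:
        raise ValueError("truncated DER")
    first_len = blob[1]
    if first_len < 0x80:                      # short form: length fits in one byte
        return 2 + first_len
    n = first_len & 0x7F                      # long form: next n bytes hold the length
    value_len = int.from_bytes(blob[2:2 + n], "big")
    return 2 + n + value_len

def trailing_data_is_zero(cert_dir: bytes) -> bool:
    """MS13-098-style check: bytes after the PKCS #7 structure must be zero padding."""
    end = der_total_length(cert_dir)
    return all(b == 0 for b in cert_dir[end:])

# A 3-byte SEQUENCE followed by zero padding passes...
assert trailing_data_is_zero(b"\x30\x03\x02\x01\xff" + b"\x00" * 3)
# ...but non-zero data smuggled after the blob fails.
assert not trailing_data_is_zero(b"\x30\x03\x02\x01\xff" + b"hidden URL")
```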
- Ali Rahbar, MSRC engineering team
I would like to thank Jonathan Ness, Elia Florio, and Ali Pezeshk.
Ref: http://download.microsoft.com/download/9/c/5/9c5b2167-8017-4bae-9fde-d599bac8184a/Authenticode_PE.docx
In our previous posts in this series, we described various mitigation improvements that attempt to prevent the exploitation of specific classes of memory safety vulnerabilities such as those that involve stack corruption, heap corruption, and unsafe list management and reference count mismanagement. These mitigations are typically associated with a specific developer mistake such as writing beyond the bounds of a stack or heap buffer, failing to correctly track reference counts, and so on. As a result, these mitigations generally attempt to detect side-effects of such mistakes before an attacker can get further along in the exploitation process, e.g. before they gain control of the instruction pointer.
Another approach to mitigating exploitation is to focus on breaking techniques that can apply to many different classes of memory safety vulnerabilities. These mitigations can have a broader impact because they apply to techniques that are used further along in the process of exploiting many vulnerabilities. For example, once an attacker has gained control of the instruction pointer through an arbitrary vulnerability, they will inherently need to know the address of useful executable code to set it to. This is where well-known mitigations like Data Execution Prevention (DEP) and Address Space Layout Randomization (ASLR) come into play – both of which have been supported on Windows for many releases now. When combined, these mitigations have proven that they can make it very difficult to exploit many classes of memory safety vulnerabilities even when an attacker has gained control of the instruction pointer.
In recent years, attackers have been increasingly forced to adapt to exploiting vulnerabilities in applications that make use of a broad range of mitigations, including DEP and ASLR. As our previous blog post explains, there are scenarios where both DEP and ASLR can be bypassed, and it is no surprise that attackers have been increasingly focused on improving their ability to do so. Likewise, attackers have placed greater interest on finding classes of vulnerabilities, such as use after free issues, that can grant them more flexibility when attempting to develop an exploit. In light of these trends, we focused a significant amount of attention in Windows 8 and Windows 8.1 on improving the robustness of mitigations that break exploitation techniques that apply to many classes of vulnerabilities. In particular, this blog post will cover some of the noteworthy improvements that have been made to ASLR, such as eliminating predictable address space mappings, increasing the amount of entropy that exists in the address space, and making it more difficult to disclose address space information where possible.
For compatibility reasons, executable images (DLLs/EXEs) must indicate their desire to be randomized by ASLR through the /DYNAMICBASE flag provided by the Visual C++ linker. If an executable image has not been linked with /DYNAMICBASE, the Windows kernel will attempt to load the image at its preferred base address. This can cause the executable to reliably load at a predictable location in memory. While this limitation of ASLR on Windows is by design, real-world exploits for software vulnerabilities have become increasingly reliant on executable images that have not enabled support for ASLR.
To generically mitigate this issue, an application running on Windows 8 (or Windows 7 with KB 2639308 installed) can elect to enable a security feature known as Force ASLR. When enabled, this feature forces all relocatable images to be randomized when they are loaded by the application, including those images which have not been linked with /DYNAMICBASE. This is designed to prevent executable images from being loaded at a predictable location in memory. If desired, an application can also elect to prevent non-relocatable images from being loaded.
Since the Force ASLR feature will cause executable images to be randomized that have not enabled support for ASLR, there is a risk that a compatibility problem may be encountered. In addition, the method used to forcibly relocate executable images that have not been built with /DYNAMICBASE can have a performance impact due to decreased page sharing. This is because Force ASLR essentially mimics the behavior of a base address collision and thus may incur a memory cost due to copy-on-write. As such, the Force ASLR feature is not enabled by default for applications running on Windows 8. Instead, applications must explicitly enable this feature.
The Force ASLR feature has been enabled by default for critical applications such as Internet Explorer 10+, Microsoft Office 2013, and Windows Store applications. This means an attacker attempting to exploit vulnerabilities accessible through these applications will not be able to rely on non-randomized executable images. For example, our recent security update to enable ASLR for HXDS.DLL would not appreciably impact the security posture of applications that enable Force ASLR because this non-ASLR DLL would already get randomized. Going forward, attackers will most likely need to rely on a vulnerability-specific address space information disclosure when exploiting applications that completely enable ASLR or that make use of Force ASLR.
Virtual memory allocations that are made by an application can have their base address assigned in one of three ways: bottom-up, top-down, or based. The bottom-up method searches for a free region starting from the bottom of the address space (e.g. VirtualAlloc default), the top-down method searches starting from the top of the address space (e.g. VirtualAlloc with MEM_TOP_DOWN), and the based method attempts to allocate memory at a supplied base address (e.g. VirtualAlloc with an explicit base). In practice, the majority of the memory that is allocated by an application will use the bottom-up allocation method, and it is rare to see applications use the based method for allocating memory.
Prior to Windows 8, bottom-up and top-down allocations were not randomized by ASLR. This meant that allocations made through functions like VirtualAlloc and MapViewOfFile had no entropy and could therefore be placed at a predictable location in memory (barring non-deterministic application behavior). While certain memory regions had their own base randomization, such as heaps, stacks, TEBs, and PEBs, all other bottom-up and top-down allocations were not randomized.
Starting with Windows 8, the base address of all bottom-up and top-down allocations is explicitly randomized. This is accomplished by randomizing the address that bottom-up and top-down allocations start from for a given process. In this way, fragmentation within the address space is minimized while also realizing the benefits of randomizing the base address of all memory allocations that are not explicitly based.
For compatibility reasons, applications must indicate that they support bottom-up and top-down randomization. An application can do this by linking their EXE with /DYNAMICBASE.
One of the major differences between 64-bit and 32-bit applications on Windows is the size of the virtual address space made available to a process. 64-bit applications whose EXE is linked with the /LARGEADDRESSAWARE flag receive 8 TB of virtual address space in Windows 8 (128 TB in Windows 8.1), whereas 32-bit applications only receive 2 GB by default. The limited amount of address space available to 32-bit applications places practical constraints on the amount of entropy that ASLR can apply when randomizing the location of memory mappings. Since 64-bit applications do not suffer from these limitations by default, it is possible to significantly increase the amount of entropy used by ASLR. The ASLR implementation in Windows 8 takes full advantage of this opportunity by enabling high degrees of entropy for 64-bit applications. Higher degrees of entropy further decrease the reliability of exploits written by an attacker and make it less likely that an attacker will be able to correctly guess or brute force an address.
This feature introduces 1 TB of variance into the address that bottom-up allocations start from. This equates to 24 bits of entropy, or a 1 in 16,777,216 chance of guessing the start address correctly. Since heaps, stacks, and most other memory regions are allocated bottom-up, this has the effect of making traditional address space spraying attacks impractical (such as heap and JIT spraying). This is because systems today do not have enough memory available to spray the amount that would be needed to achieve even small degrees of reliability. In addition, executable images that are randomized by the Force ASLR feature receive high degrees of entropy as a result of the high entropy bottom-up randomization feature being enabled for an application. As a result, exploits for vulnerabilities in 64-bit applications that rely on address space spraying will first need to disclose the address of at least one bottom-up allocation in order to determine where data may have been placed relative to that address.
For compatibility reasons, this feature is disabled by default and must be enabled on a per-application basis. This is because some 64-bit applications have latent pointer truncation issues that can surface when dealing with pointers above 4 GB (significant bits set beyond bit 31). 64-bit applications that enable this feature are guaranteed to receive memory addresses that are above 4 GB when allocating bottom-up memory (unless insufficient address space exists above 4 GB). 64-bit applications can enable support for this feature by linking their EXE with the /HIGHENTROPYVA linker flag provided by Visual Studio 2012. This flag is enabled by default for native applications when building with Visual Studio 2012 and beyond.
This feature introduces 8 GB of variance into the address that top-down allocations start from. This equates to 17 bits of entropy, or a 1 in 131,072 chance of guessing the start address correctly. 64-bit processes automatically receive high degrees of entropy for top-down allocations if top-down randomization has been enabled (which is controlled by whether the EXE linked with /DYNAMICBASE).
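The entropy figures quoted above follow directly from dividing the variance by the allocation granularity; the 64 KB granularity used here is an assumption based on the standard Windows value:

```python
import math

ALLOC_GRANULARITY = 64 * 1024          # standard Windows allocation granularity (assumed)

def entropy_bits(variance_bytes: int) -> int:
    """Bits of entropy when a start address can vary over variance_bytes."""
    return int(math.log2(variance_bytes // ALLOC_GRANULARITY))

bottom_up = entropy_bits(1 << 40)      # 1 TB of variance for bottom-up allocations
top_down = entropy_bits(8 << 30)       # 8 GB of variance for top-down allocations

print(bottom_up, 2 ** bottom_up)       # 24 bits -> 1 in 16,777,216
print(top_down, 2 ** top_down)         # 17 bits -> 1 in 131,072
```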
Prior to Windows 8, 64-bit executable images received the same amount of entropy that was used when randomizing 32-bit executable images (8 bits, or 1 in 256 chance of guessing correctly). The amount of entropy applied to 64-bit images has been significantly increased in most cases starting with Windows 8:
Entropy differences based on an image's base address exist, again, for compatibility reasons. The Windows kernel currently uses the preferred base address of an image as a hint to decide whether the image supports being based above 4 GB. Images based below 4 GB may not have been tested in scenarios where they are relocated above 4 GB and may therefore have latent pointer truncation issues. As such, the Windows kernel makes a best-effort attempt to ensure that these images load below 4 GB. Because of these constraints, the vast majority of 64-bit EXEs and DLLs in Windows 8 and Windows 8.1 have been based above 4 GB to ensure that they benefit from the highest possible degrees of entropy. 64-bit images produced by the Visual C++ tool chain are also based above 4 GB by default.
The effectiveness of ASLR is inherently dependent on an attacker being unable to discover the location of objects in memory. In some cases, an attacker can leverage a vulnerability in a program to disclose information about the address space layout of a process. For example, an attacker could use a vulnerability to read memory that they would not normally be able to access and thereby discover the address of a DLL in memory. While the mechanics of disclosing address space information are typically dependent on the application and vulnerability that are being exploited, there are some general approaches that attackers have identified. In Windows 8, we have taken steps to eliminate and destabilize known address space information disclosure vectors, although these changes have by no means resolved the general problem posed by address space information disclosures.
Windows uses an internal data structure known as SharedUserData to efficiently communicate certain pieces of information from the kernel to all processes on a system. For efficiency and compatibility reasons, the memory address that SharedUserData is located at is consistent across all processes on a system and across all versions of Windows, including Windows 8 (0x7ffe0000). Since Windows XP Service Pack 2, this memory region has contained pointers into a system DLL (NTDLL.DLL) that have been used to enable efficient system call invocation, among other things. The presence of image pointers at a known-fixed location in memory was noted as being useful in the context of certain types of address space information disclosures. In Windows 8 (and now prior versions with MS13-063 installed), all image pointers have been removed from SharedUserData to mitigate this type of attack. The removal of these pointers effectively mitigated a DEP/ASLR bypass that was later disclosed which affected versions of Windows prior to Windows 8 (involving LdrHotPatchRoutine).
Ensuring that all forms of memory allocation have some base level of entropy has the effect of eliminating what would otherwise be predictable memory mappings in the address space. In some cases, an attacker may be able to leverage a vulnerability to read the contents of arbitrary locations in memory. In these cases, the attacker must be able to predict or discover the address of the object that they wish to read from (typically via heap spraying). The improvements that have been made to ASLR in Windows 8 have made it more difficult for attackers to do this reliably, particularly on 64-bit. As a result, any address space information disclosure that relies on reading from a specified location in memory will generally be more difficult and less reliable on Windows 8. It should be noted, however, that the size of the 32-bit address space places practical constraints on the impact of this, particularly in cases where an attacker is able to fill a large portion of the address space with desired content.
While the previous sections highlighted improvements that were made to ASLR for user mode applications, we also made investments in Windows 8.1 into hardening the Windows kernel against disclosing kernel address space information to lesser privileged user mode processes. The majority of these improvements focused on restricting low integrity processes from accessing certain system and process information classes that intentionally expose kernel address space information. In addition, certain kernel addresses were removed from the shared desktop heap and hypervisor-assisted restrictions were added to limit the exposure of kernel addresses via instructions that can be used to query the GDT/IDT descriptor table base addresses. As a result of these improvements, sandboxed applications such as Internet Explorer 11, Microsoft Office 2013, and Windows Store apps are all prevented from discovering addresses through these interfaces. This means it will be more difficult for attackers to exploit local kernel vulnerabilities as a means of escaping these sandboxes.
The improvements made to ASLR in Windows 8 and Windows 8.1 address various limitations that attackers have been taking advantage of when exploiting vulnerabilities. As a result of these improvements, we anticipate that attackers will be increasingly reliant on address space information disclosures as a means of bypassing ASLR. Forcing attackers to rely on information disclosures adds another costly check box to the conditions attackers need to satisfy when exploiting memory safety vulnerabilities in modern applications.
- Matt Miller
Today we released MS13-106 which resolves a security feature bypass that can allow attackers to circumvent Address Space Layout Randomization (ASLR) using a specific DLL library (HXDS.DLL) provided as part of Microsoft Office 2007 and 2010.
The existence of an ASLR bypass does not directly enable the execution of code and does not represent a risk by itself, since this bypass still needs to be used in conjunction with another higher-severity vulnerability that allows remote code execution in order to provide some value to attackers. ASLR is an important mitigation that has been supported since Windows Vista which, when combined with Data Execution Prevention (DEP), makes it more difficult to exploit memory corruption vulnerabilities.
Because ASLR is a generic mitigation aimed at stopping exploitation techniques that apply to many vulnerabilities, attackers are very interested in attempting to find new bypass techniques for it. These bypass techniques typically fall into one of three categories:
1) Presence of a DLL at runtime that has not been compiled with /DYNAMICBASE flag (therefore loaded at a predictable location in memory).
2) Presence of predictable memory regions or pointers that can be leveraged to execute code or alter program behavior.
3) Leveraging a vulnerability to dynamically disclose memory addresses.
The ASLR bypass that has been addressed by MS13-106 falls into the first category. The difficulty of finding and using an ASLR bypass varies based on the category of the technique. It is generally easier to identify DLL modules that fall into the first category (especially expanding the search through third-party browser plugins and toolbars), while it is generally more difficult, and less reusable, to find or create a bypass for the other two categories. For example, two of the recent Internet Explorer exploits that were used in targeted attacks (CVE-2013-3893 and CVE-2013-3897) both relied on the same ASLR bypass, which fell into the first category -- making use of the HXDS.DLL library that is part of Office 2007/2010 that was not compiled using /DYNAMICBASE.
Bolstering the effectiveness of ASLR helps to harden the security of our products, which is why MSRC continues to release tools and updates that enforce ASLR more broadly on Windows (such as KB2639308 and EMET) and to release updates that close known ASLR bypasses as part of our defense-in-depth strategy (such as MS13-063 for the bypass presented at CanSecWest 2013).
Today MS13-106 closes one additional known bypass that will no longer be available to attackers.
- Elia Florio, MSRC Engineering
Today we released eleven security bulletins addressing 24 CVEs. Five bulletins have a maximum severity rating of Critical, while the other six have a maximum severity rating of Important. We hope that the table below helps you prioritize the deployment of the updates appropriately for your environment.
(GDI+ TIFF parsing)
(Kernel mode drivers)
(hxds.dll ASLR mitigation bypass)
- Jonathan Ness, MSRC's engineering team
Over the weekend we became aware of an active attack relying on an unknown remote code execution vulnerability in a legacy ActiveX component used by Internet Explorer. We are releasing this blog to confirm once more that the code execution vulnerability will be fixed in today’s Update Tuesday release and to clarify some details about the second vulnerability reported.
The attack was disclosed to us by our security partners. It is a typical targeted “drive-by” attack, delivered through a specific legitimate website that was compromised to include an additional piece of code added by the attackers. So far, the samples we have analyzed from the active attack target only older Internet Explorer versions running on Windows XP (IE7 and IE8), because of the lack of additional security mitigations on those platforms (Windows 7 is affected, but not under active attack). EMET was able to proactively mitigate this exploit.
The exploit was created combining two distinct vulnerabilities, but with different impact and severity ratings:
The remote code execution vulnerability with higher severity rating will be fixed immediately in today’s Patch Tuesday and we advise customers to prioritize the deployment of MS13-090 for their monthly release. As usual, customers with Automatic Updates enabled will not need to take any action to receive the update and will be automatically protected.
The information disclosure vulnerability does not allow remote code execution, so it has a lower severity rating; it will typically be used in combination with another high-severity bug (as happened with CVE-2013-3918) to improve the effectiveness of exploitation. Also, this vulnerability requires attackers to have prior knowledge of paths and filenames present on targeted machines in order to be exploited successfully. This vulnerability was not used to bypass ASLR, but simply to remotely determine the exact version of a certain DLL on disk in order to build a more precise ROP payload (it is a local information disclosure rather than a memory address disclosure).
We are still investigating the impact and root cause of the information disclosure vulnerability and we may follow up with additional information and mitigations as they become available.
Elia Florio – MSRC Engineering
Today we released four security bulletins addressing six CVEs. All four bulletins have a maximum severity rating of Important. We hope that the table below helps you prioritize the deployment of the updates appropriately for your environment.
(NDProxy, a kernel-mode driver)
Addresses vulnerability described by security advisory 2914486.
(win32k.sys, a kernel-mode driver)
(Microsoft Dynamics AX)
- Jonathan Ness, MSRC engineering
In light of recent research into practical attacks on biases in the RC4 stream cipher, Microsoft recommends that customers enable TLS 1.2 in their services and take steps to retire and deprecate RC4 as used in their TLS implementations. Microsoft recommends TLS 1.2 with AES-GCM as a more secure alternative that provides similar performance.
Developed in 1987 by Ron Rivest, RC4 was one of the earliest stream ciphers to see broad use. Initially used in commercial applications, it was faster than alternatives when implemented in software, and over time it became pervasive because it was cheap, fast, and easy to implement and use.
Stream vs. Block
At a high level, a stream cipher generates a pseudorandom stream of bits of the same length as the plaintext and then XOR's the pseudorandom stream and the plaintext to generate the cipher text. This is different than a block cipher, which chunks plaintext into separate blocks, pads the plaintext to the block size and encrypts the blocks.
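The stream-cipher principle described above can be sketched in a few lines. This is an illustrative toy (the `keystream` generator here is a hypothetical stand-in built from a hash counter, not RC4 or any real cipher), showing only the XOR structure that all stream ciphers share:

```python
import hashlib
from itertools import count

def keystream(key: bytes):
    """Toy keystream generator (for illustration only, not a real cipher):
    hashes the key with a counter to yield a pseudorandom byte stream."""
    for i in count():
        yield from hashlib.sha256(key + i.to_bytes(8, "big")).digest()

def stream_xor(key: bytes, data: bytes) -> bytes:
    # XOR each plaintext byte with the next keystream byte.
    return bytes(p ^ k for p, k in zip(data, keystream(key)))

msg = b"attack at dawn"
ct = stream_xor(b"secret", msg)
# XOR with the same keystream decrypts: encryption and decryption
# are the same operation.
assert stream_xor(b"secret", ct) == msg
```

Note that the same function both encrypts and decrypts, which is why a stream cipher's security rests entirely on the keystream being indistinguishable from random.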
A History of Issues
RC4 consists of a Key Scheduling Algorithm (KSA) which feeds into a Pseudo-Random Generator (PRG), both of which need to be robust for use of the cipher to be considered secure. Beyond flawed uses of RC4, such as document encryption and the 802.11 WEP implementation, there are significant weaknesses in the KSA itself that lead to biases in the leading bytes of the PRG output.
By definition, a PRG is only secure if its output is indistinguishable from a stream of random data. In 2001, Mantin and Shamir <http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.4.6198> found a significant bias in RC4 output: the second byte of output is ‘0’ far more often than chance would predict. Attacks and research have evolved since 2001, and the work of T. Isobe, T. Ohigashi, Y. Watanabe, and M. Morii of Kobe University in Japan is especially significant when evaluating the risk of RC4 use. Their findings show additional, significant bias in the first 257 bytes of RC4 output as well as practical plaintext recovery attacks on RC4.
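Both the structure of RC4 (KSA feeding a PRGA) and the second-byte bias are easy to observe directly. The following is a textbook RC4 implementation, checked against a well-known test vector, followed by a rough empirical measurement of the bias over random keys:

```python
import os

def rc4_keystream(key: bytes, n: int) -> bytes:
    """Textbook RC4: the KSA initializes state S from the key,
    then the PRGA emits n keystream bytes."""
    # Key Scheduling Algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-Random Generation Algorithm (PRGA)
    out, i, j = [], 0, 0
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

# Known test vector: key "Key", plaintext "Plaintext" -> BBF316E8D940AF0AD3
pt = b"Plaintext"
ks = rc4_keystream(b"Key", len(pt))
assert bytes(p ^ k for p, k in zip(pt, ks)).hex().upper() == "BBF316E8D940AF0AD3"

# Mantin-Shamir bias: over many random keys, the second keystream byte
# is 0 about twice as often as the 1/256 a truly random stream would give.
trials = 5000
zeros = sum(rc4_keystream(os.urandom(16), 2)[1] == 0 for _ in range(trials))
print(zeros / trials)  # roughly 1/128 (~0.0078), not 1/256 (~0.0039)
```

A bias this easy to measure with a few thousand samples is exactly what makes broadcast-style plaintext recovery practical when many ciphertexts of the same plaintext are available.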
The plaintext recovery attacks assume a passive attacker collecting ciphertexts encrypted with different keys. Given 2^32 ciphertexts with different keys, the first 257 bytes of the plaintext can be recovered with a probability of more than 0.5 <http://home.hiroshima-u.ac.jp/ohigashi/rc4/Full_Plaintext_Recovery%20Attack_on%20Broadcast_RC4_pre-proceedings.pdf>.
Since early RC4 output cannot be discarded from SSL/TLS implementations without protocol-level changes, this attack demonstrates the practicality of attacks against RC4 in common implementations.
Internet Use of RC4
One of the first steps in evaluating the customer impact of new security research is understanding the state of public and customer environments. Using a sample of five million sites, we found that 58% of sites do not use RC4, while approximately 43% do (figures are rounded). Of the sites that utilize RC4, only 3.9% require its use. Disabling RC4 by default therefore has the potential to decrease the use of RC4 by almost forty percent.
Today's update provides tools for customers to test and disable RC4. The launch of Internet Explorer 11 (IE 11) and Windows 8.1 provide more secure defaults for customers out of the box.
IE 11 enables TLS1.2 by default and no longer uses RC4-based cipher suites during the TLS handshake.
More detailed information about these changes can be found in the IE 11 blog <http://blogs.msdn.com/b/ie/archive/2013/11/12/ie11-automatically-makes-over-40-of-the-web-more-secure-while-making-sure-sites-continue-to-work.aspx>
For application developers, we have implemented additional options in SChannel that allow it to be used without RC4.
Today's update, KB 2868725, provides support for the Windows 8.1 RC4 changes on Windows 7, Windows 8, Windows RT, Server 2008 R2, and Server 2012. The update does not change existing settings; customers must implement the changes detailed below to help secure their environments against weaknesses in RC4.
Call to Action
Microsoft strongly encourages customers to evaluate, test, and implement the options for disabling RC4 below to increase the security of clients, servers, and applications. Microsoft recommends enabling TLS1.2 and AES-GCM. Clients and servers running on Windows with custom SSL/TLS implementations, such as Mozilla Firefox and Google Chrome, will not be affected by changes to SChannel.
How to Completely Disable RC4
Clients and servers that do not wish to use RC4 cipher suites, regardless of the other party's supported ciphers, can disable RC4 completely by setting the following registry keys. Clients that deploy this setting will not be able to connect to sites that require RC4, and servers that deploy it will not be able to service clients that must use RC4.
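As a sketch of what those settings look like, the per-cipher `Enabled` values under the SCHANNEL key control RC4 at each key length. The exact keys and supported platforms are documented in KB 2868725; verify against the KB before deploying:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC4 128/128]
"Enabled"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC4 56/128]
"Enabled"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC4 40/128]
"Enabled"=dword:00000000
```

A reboot is required for SChannel cipher suite changes to take effect.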
How Other Applications Can Prevent the Use of RC4 based Cipher Suites
RC4 is not turned off by default for all applications. Applications that call into SChannel directly will continue to use RC4 unless they opt in to the security options. Applications that use SChannel can block the use of RC4 cipher suites for their connections by passing the SCH_USE_STRONG_CRYPTO flag to SChannel in the SCHANNEL_CRED structure. If compatibility needs to be maintained, they can also implement a fallback that does not pass this flag.
Microsoft recommends that customers upgrade to TLS1.2 and utilize AES-GCM. On modern hardware AES-GCM has similar performance characteristics and is a much more secure alternative to RC4.
- William Peteroy, MSRC
I would like to thank the Windows, Internet Explorer and .NET teams for their work in this effort as well as Ali Rahbar and Suha Can of the MSRC Engineering team for their hard work and input. I would also like to thank Matthew Green for the excellent write-ups he has for this and other applied cryptography issues on his blog.
Today we released eight security bulletins addressing 19 CVEs. Three bulletins have a maximum severity rating of Critical while the other five have a maximum severity rating of Important. We hope that the table below helps you prioritize the deployment of the updates appropriately for your environment.
(Digital signature parsing denial-of-service)
- Jonathan Ness, MSRC Engineering
Microsoft is recommending that customers and certificate authorities (CAs) stop using SHA-1 for cryptographic applications, including use in SSL/TLS and code signing. Microsoft Security Advisory 2880823 has been released along with the policy announcement that Microsoft will stop recognizing the validity of SHA-1-based certificates after 2016.
Secure Hash Algorithm 1 (SHA-1) is a message digest algorithm published in 1995 as part of NIST’s Secure Hash Standard. A hashing algorithm is considered secure only if it is computationally infeasible to find two inputs that produce the same output (collision resistance) and the output cannot be reversed to recover the input (the function works only one way).
Since 2005 there have been known collision attacks (in which distinct inputs produce the same output), meaning that SHA-1 no longer meets the security standards for producing a cryptographically secure message digest.
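The digest properties described above are easy to see in practice. This short sketch shows how a secure digest changes completely for a one-letter change in input, and computes the SHA-2 alternative recommended here (the specific strings are classic illustrative examples, not from this advisory):

```python
import hashlib

# A secure digest changes unpredictably with any change to the input,
# and it should be infeasible to find two inputs sharing one digest.
a = hashlib.sha1(b"The quick brown fox jumps over the lazy dog").hexdigest()
b = hashlib.sha1(b"The quick brown fox jumps over the lazy cog").hexdigest()
print(a)  # 2fd4e1c67a2d28fced849ee1bb76e7391b93eb12
print(b)  # a completely different digest for a one-letter change
assert a != b

# The migration path: request SHA-2 family digests (e.g. SHA-256) instead.
c = hashlib.sha256(b"The quick brown fox jumps over the lazy dog").hexdigest()
assert len(c) == 64  # 256-bit digest vs. SHA-1's 160 bits
```

Collision attacks break the first property: researchers can construct two distinct inputs with the same SHA-1 digest, which is what makes forged certificates feasible.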
For attacks against hashing algorithms, we have seen a pattern of attacks leading up to major real-world impacts:
Short history of MD5 Attacks
Source: Marc Stevens, Cryptanalysis of MD5 and SHA-1
It appears that SHA-1 is on a similar trajectory:
Microsoft is actively monitoring the situation and has released a policy for deprecating SHA-1 by 2016.
Microsoft recommends that certificate authorities (CAs) stop using SHA-1 for digital signatures and that consumers request SHA-2 certificates from CAs.
Microsoft has published a new policy calling for users and CAs to stop using SHA-1-based certificates by 2016.
I would like to thank the Microsoft PKI team as well as Ali Rahbar of the MSRC Engineering team for their hard work and input.