One of Microsoft’s longstanding strategies for improving software security continues to involve investing in defensive technologies that make it difficult and costly for attackers to exploit vulnerabilities. These solutions generally have a broad and long-lasting impact on software security because they focus on eliminating classes of vulnerabilities or breaking the exploitation primitives that attackers rely on. This also helps improve software security over the long run because it shifts the focus away from the hand-to-hand combat of finding, fixing, and servicing individual vulnerabilities and instead accepts the fact that complex software will inevitably have vulnerabilities.
To further emphasize our commitment to this strategy and to cast a wider net for defensive ideas, Microsoft awarded the BlueHat Prize in 2012 and subsequently started the ongoing Microsoft Defense Bounty in June 2013, which has offered up to $50,000 USD for novel defensive solutions. Last month, we announced that we will now award up to $100,000 USD for qualifying Microsoft Defense Bounty submissions. This increase further affirms the value that we place on these types of defensive solutions, and we’re hopeful it will help encourage more research into practical defenses.
In this blog post, we want to take the opportunity to explain how we evaluate defensive solutions and describe the characteristics we look for in a good defense. We evaluate solutions along a few key dimensions: robustness, performance, compatibility, agility, and adoptability. Keeping these dimensions in mind when developing a defense should increase the likelihood of the defense being deemed a good candidate for the Microsoft Defense Bounty and will also go a long way toward increasing the likelihood of it being integrated and adopted in practice.
The first and most important criterion deals with the security impact of the defense. After all, a defense must have an appreciable impact on making it difficult and costly to exploit vulnerabilities in order to be worth pursuing.
We evaluate robustness in terms of:
The impact the defense will have on modern classes of vulnerabilities and/or exploits. A good defense should eliminate a common vulnerability class or break a key exploitation technique or primitive used by modern exploits.
The level of difficulty that attackers will face when adapting to the defense. A good defense should include a rigorous analysis of the limitations of the defense and how attackers are likely to adapt to it. Defenses that offer only a small impediment to attackers are unlikely to qualify.
The second most important criterion deals with the impact the defense is expected to have on performance. Our customers expect Windows and the applications that run on Windows to be highly responsive and performant. In most cases, the scenarios where we are most interested in applying defenses (e.g. web browsers) are the same places where high performance is expected. As such, it is critical that defenses have minimal impact on performance and that the robustness of a defense justifies any potential performance costs.
Since performance impact is measured across multiple dimensions, it is not possible to simply distill the requirements down into a single allowed regression percentage. Instead, we evaluate performance in context using the following guide posts:
Impact on industry standard benchmarks. There are various industry standard benchmarks that evaluate performance in common application workloads (e.g. browser DOM/JS benchmarks). Although SPEC CPU benchmarks can provide a good baseline for comparing defense solutions, we find that it is critical to evaluate performance impact under real-world application workloads.
Impact on runtime performance. This is measured in terms of CPU time and elapsed time either in the context of benchmarks or in common application scenarios (e.g. navigating to top websites in a browser). Defenses with low impact on runtime performance will rate higher in our assessment.
Impact on memory performance. This is measured in terms of how the defense affects various aspects of memory footprint, including commit, working set, and code size. Defenses with low impact on memory performance will rate higher in our assessment.
One of the reasons that Windows has been an extremely successful platform is because of the amount of care that has been taken to retain binary compatibility with applications. As such, it is critical that defenses retain compatibility with existing applications or that there is a path for enabling the defense in an opt-in fashion. Rebuilding the world (e.g. all binaries that run on Windows) is not an option for us in general. As such, defenses are expected to be 100% compatible in order to rate highly in our assessment.
In particular, we evaluate compatibility in terms of the following:
Binary interoperability. Any defense must be compatible with legacy applications/binaries or it must support enabling the defense on an opt-in basis. If an opt-in model is pursued, then the defense must generally support legacy binaries (such as legacy DLLs) being loaded by an application that enables the defense. In the case where the defense requires binaries to be rebuilt in order to be protected, the protected binaries must be able to be loaded on legacy versions of Windows that may not support the defense at runtime.
ABI compliance. Related to the above, any defense that alters code generation or runtime interfaces must be compliant with the ABI (e.g. it cannot break calling conventions or other established contracts). For example, Microsoft documents the x64 ABI for Windows in its developer documentation.
No false positives. Defenses must not make use of heuristics or other logic that may be prone to false positives (and thus result in application compatibility issues).
Given the importance of binary compatibility and the long term implications of design decisions, we also need to take care to ensure that we are afforded as much flexibility as possible when it comes to making changes to defenses in the future. In this way, we pay close attention to the agility of the design and implementation associated with a defense. Defenses that have good properties in terms of agility are likely to rate higher in our assessment.
All defenses carry some cost with them that dictates how easy it will be to build them and integrate them into the platform or applications. This means we must take into account the engineering cost associated with building the defense and we must assess the taxes that may be inflicted upon developers and systems operators when it comes to making use of the defense in practice. For example, defenses that require developers to make code changes or system operators to manage complex configurations are less desirable. Defenses that have low engineering costs and minimize the amount of friction to enable them are likely to rate higher in our assessment.
The criteria above are intended to provide some transparency and insight into the guidelines that we use when evaluating the properties of a defense, both internally at Microsoft and for Microsoft’s Defense Bounty program. It’s certainly the case that we set a high bar in terms of what we expect from a defensive solution, but we believe we have good reasons for doing so, grounded both in the modern threat landscape and in our customers’ expectations.
We strongly encourage anyone with a passion for software security to move “beyond the bugs” and explore opportunities to invest time and energy into developing novel defenses. Aside from being a challenging and stimulating problem space, there is now also the potential to receive up to $100,000 USD for your efforts in this direction through the Microsoft Defense Bounty program. The impact that these defenses can have on reducing the risk associated with software vulnerabilities and helping keep people safe is huge.
Microsoft Security Response Center
Today Microsoft released update MS15-085 to address CVE-2015-1769, an important severity security issue in Mount Manager. It affects both client and server versions, from Windows Vista to Windows 10.
The goal of this blog post is to provide information on the detection guidance to help defenders detect attempts to exploit this issue.
As part of the update, we are also shipping an event log entry to help defenders detect attempts to exploit this vulnerability on their systems. An event will be logged every time a malicious USB device that relies on this vulnerability is mounted on the system. If such an event is recorded, it means that an attempt to exploit the vulnerability was blocked. So, once the update is installed, companies auditing event logs will be able to use this as a detection mechanism.
These events are logged under the “System” channel and are reported as errors.
Note: multiple events may be raised for a single exploit attempt.
After installing the update, exploitation attempts will result in an event (ID 100) being generated with MountMgr or Microsoft-Windows-MountMgr as its source. The CVE associated with this vulnerability will also be logged for further reference. Note that this event can also be logged in other, extremely rare circumstances. So, while there is a very small chance that this event log entry could be generated in non-malicious scenarios, there is a high probability that an exploitation attempt is the cause of the event.
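As a rough illustration of how a defender might consume these events, the sketch below filters exported event XML for the ID/provider combination described above. The XML shape is a simplified, hypothetical rendering of an exported event record; real exported events carry many more fields.

```python
import xml.etree.ElementTree as ET

# Hypothetical sample of an exported event, reduced to the fields we match on.
SAMPLE_EVENT = """
<Event>
  <System>
    <Provider Name="Microsoft-Windows-MountMgr"/>
    <EventID>100</EventID>
    <Channel>System</Channel>
  </System>
</Event>
"""

def is_mountmgr_exploit_attempt(event_xml: str) -> bool:
    """True if the event looks like a blocked CVE-2015-1769 attempt:
    Event ID 100 with MountMgr or Microsoft-Windows-MountMgr as its source."""
    system = ET.fromstring(event_xml).find("System")
    provider = system.find("Provider").get("Name", "")
    return (system.findtext("EventID") == "100"
            and provider in ("MountMgr", "Microsoft-Windows-MountMgr"))

print(is_mountmgr_exploit_attempt(SAMPLE_EVENT))  # → True
```

In practice the same match could be expressed as an event log subscription filter; the point is simply that the detection reduces to the event ID plus the provider name.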
- Axel Souchet, Vishal Chauhan from MSRC Vulnerabilities and Mitigations Team
In the last several releases of Windows, we’ve been working hard to make the platform much more powerful for administrators, developers, and power users alike. PowerShell is an incredibly useful and powerful language for managing Windows domains. Unfortunately, attackers can take advantage of these same properties when performing “post-exploitation” activities (actions that are performed after a system has been compromised).
The PowerShell team, recognizing this behavior, has significantly advanced security-focused logging and detection in Windows 10 and PowerShell v5. Some capabilities take advantage of new functionality in Windows 10, others are available on Windows 8.1 and Windows Server 2012R2 with KB3000850, and the functionality that is specific to PowerShell v5 will be available on Windows 7 and Windows Server 2008R2 when the next version of the Windows Management Framework is released.
Antimalware engines traditionally focus the majority of their attention on files that applications (or the system) open. Scripts have historically been difficult for antimalware engines to evaluate because scripts can be so easily obfuscated. Unless the antimalware engine can emulate the particular scripting language, it will not be able to deobfuscate the script to view the actual payload.
A new Windows 10 feature, the Antimalware Scan Interface (AMSI), lets applications become active participants in malware defense. Applications can now request antimalware evaluation of any content – not just files on disk. This gives script engines (and other applications) the ability to request evaluation of deobfuscated scripts and of content entered directly into the console.
For more information about the Antimalware Scan Interface, see http://blogs.technet.com/b/mmpc/archive/2015/06/09/windows-10-to-offer-application-developers-new-malware-defenses.aspx
Given the incredible power of PowerShell’s shell and scripting language, we’ve made major advancements in PowerShell’s transparency for PowerShell v5:
Improved over-the-shoulder transcription
Previous versions of PowerShell provided session transcripting. Unfortunately, transcripting was not globally configurable, could be easily disabled, and only worked in the interactive PowerShell console. The result was that transcripting was not very practical for detecting malicious activity.
For PowerShell v5 and Windows 8.1/2012R2 with KB3000850, the following changes have been made for transcripting:
Deep script block logging
Previous versions of PowerShell provided “pipeline logging”, a mechanism to log all commands invoked (with the parameters). The way this information was logged made it difficult to use for security auditing and detection. In PowerShell v5 and Windows 8.1/2012R2 with KB3000850, PowerShell gains a new security focused logging mechanism called “Script Block Logging”.
A “script block” is the base level of executable code in PowerShell. Even when a script is obfuscated, it must eventually be transformed from an obfuscated script block back into a deobfuscated script block containing its malicious payload.
PowerShell now provides the option to log all script blocks to the event log prior to executing them. In the case of obfuscated scripts, both the obfuscated and deobfuscated script blocks will end up being logged. This gives defenders the ability to see exactly what PowerShell code is being run on their systems.
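As an illustration of how a defender might triage these logs, the sketch below scans the text of logged script blocks for strings that commonly surface only after a payload has been deobfuscated. The marker list and the input shape are assumptions; in practice the script block text would be pulled from the PowerShell operational event log.

```python
# Illustrative markers that rarely appear in obfuscated form but are exposed
# once PowerShell logs the deobfuscated script block.
SUSPICIOUS_MARKERS = [
    "FromBase64String",
    "Invoke-Expression",
    "DownloadString",
    "VirtualAlloc",
]

def flag_script_block(script_text: str) -> list:
    """Return the suspicious markers found in a logged script block."""
    lowered = script_text.lower()
    return [m for m in SUSPICIOUS_MARKERS if m.lower() in lowered]

block = "IEX (New-Object Net.WebClient).DownloadString('http://example.invalid/a')"
print(flag_script_block(block))  # → ['DownloadString']
```

This works precisely because of the property the post describes: however the attacker packages the script, the deobfuscated form is what gets logged.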
One concern when you increase logging on a machine is that the information you’ve logged may contain sensitive data. If an attacker compromises that machine, this sensitive information in the event log may be a gold mine of credentials, confidential systems, and more. To help address this concern, we’ve added Protected Event Logging to Windows 10, which lets participating applications encrypt sensitive data as they write it to the event log. You can then decrypt and process these logs once you’ve moved them to a more secure and centralized log collector.
Additional security features added to PowerShell v5 include:
For more information about PowerShell’s transparency improvements, Protected Event Logging, and other PowerShell security improvements, see http://blogs.msdn.com/b/powershell/archive/2015/06/09/powershell-the-blue-team.aspx
Joe Bialek (MSRC Engineering), Lee Holmes (PowerShell)
Today, we’re releasing the Enhanced Mitigation Experience Toolkit (EMET) 5.2, which includes increased security protections to improve your security posture. You can download EMET 5.2 from microsoft.com/emet or directly from here.
Following is the list of the main changes and improvements:
Your feedback is always welcome, as it helps us improve EMET. Feel free to reach out to us by sending an email to firstname.lastname@example.org.
3/16/2015 UPDATE: We have received reports of certain customers experiencing issues with EMET 5.2 in conjunction with Internet Explorer 11 on Windows 8.1. We recommend customers that downloaded EMET 5.2 before March 16th, 2015 to download it again via the link below, and to uninstall the previous EMET 5.2 before installing the new one.
- The EMET Team
Today we are releasing MS15-011 & MS15-014 which harden group policy and address network access vulnerabilities that can be used to achieve remote code execution (RCE) in domain networks. The MS15-014 update addresses an issue in Group Policy update which can be used to disable client-side global SMB Signing requirements, bypassing an existing security feature built into the product. MS15-011 adds new functionality, hardening network file access to block access to untrusted, attacker controlled shares when Group Policy refreshes on client machines. These two updates are important improvements that will help safeguard your domain network.
Let’s look at a typical attack scenario, as outlined in the diagram below.
This is an example of a ‘coffee shop’ attack scenario, where an attacker attempts to make changes to a shared network switch in a public place in order to direct client traffic to an attacker-controlled system.
In this scenario, the attacker has observed traffic across the switch and found that a specific machine is attempting to download a file located at the UNC path: \\10.0.0.100\Share\Login.bat .
On the attacker machine, a share is set up that exactly matches the UNC path of the file requested by the victim: \\*\Share\Login.bat.
The attacker will have crafted the contents of Login.bat to execute arbitrary, malicious code on the target system. Depending on the service requesting Login.bat, this could be executed as the local user or as the SYSTEM account on the victim’s machine.
The attacker then modifies the ARP table in the local switch to ensure that traffic intended for the target server 10.0.0.100 is now routed through to the attacker’s machine.
When the victim’s machine next requests the file, the attacker’s machine will return the malicious version of Login.bat.
This scenario also illustrates that the attack cannot be used broadly across the internet – an attacker needs to target a specific system or group of systems that request files with this unique UNC path.
An RCE vulnerability existed in how Group Policy received and applied policy data when connecting to a domain. Concurrently, a vulnerability existed whereby Group Policy could fail to retrieve valid security policy and instead apply a default, potentially less secure, group policy. This could, in turn, be used to disable the domain enforced SMB Signing policy.
The risk of circumventing SMB Signing was addressed by correcting how Group Policy behaves when it fails to retrieve a current, valid security policy. After applying the fix, Group Policy will no longer fall back to defaults and will instead use the last known good policy if security policy retrieval fails.
While SMB Signing safeguards against man-in-the-middle attacks, vulnerabilities like the one above in Group Policy made it possible to disable it. More importantly, the SMB client does not require SMB Signing by default, so it is possible to direct domain-related traffic, especially unencrypted traffic, to attacker-controlled machines and serve malicious content to victims in response. To block this kind of attack, we added the ability to harden UNC path access within a domain network.
Universal Naming Convention (UNC) is a standardized notation that Windows uses to access file resources; in most cases these resources are located on a remote server. UNC allows the system to access files using the standard path format \\<hostname>\<sharename>\<objectname>, for example \\contoso.com\fileshare\passwords.txt, without requiring the application or user to understand the underlying transport technology used to provide access to the file. In this way, the UNC client in Windows abstracts network file technologies, such as SMB and WebDAV, behind a familiar file path syntax. UNC paths are used in Windows in everything from printers to file shares, providing an attacker a broad surface to explore and attack. To properly address this weakness in UNC, we had to improve UNC to allow a server to authenticate itself to a client, thereby allowing the client machine to trust the content coming from the target system and be protected from malicious file shares.
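The path format above can be sketched as a small parser. This is a deliberate simplification: real UNC handling in Windows (via MUP) also deals with DFS namespaces, device syntax, and provider-specific prefixes.

```python
def parse_unc(path: str):
    """Split a UNC path of the form \\\\<hostname>\\<sharename>\\<objectname>
    into its components. Simplified sketch only."""
    if not path.startswith("\\\\"):
        raise ValueError("not a UNC path")
    parts = path[2:].split("\\")
    if len(parts) < 2:
        raise ValueError("UNC path must include a hostname and share")
    hostname, sharename = parts[0], parts[1]
    objectname = "\\".join(parts[2:])  # may itself contain subdirectories
    return hostname, sharename, objectname

print(parse_unc(r"\\contoso.com\fileshare\passwords.txt"))
# → ('contoso.com', 'fileshare', 'passwords.txt')
```

Note that nothing in the path itself says which transport (SMB, WebDAV, etc.) will serve the request; that decision is exactly what MUP makes, as described next.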
When an application or service attempts to access a file on a UNC path, the Multiple UNC Provider (MUP) is responsible for enumerating all installed UNC Providers and selecting one of them to satisfy all I/O requests for the specified UNC path. On a typical Windows client installation, MUP tries the Server Message Block (SMB) protocol first; if the SMB UNC Provider is unable to establish an SMB connection to the server, MUP tries the next UNC Provider, and so on, until one of them is able to establish a connection (or no UNC Providers remain, in which case the request fails).
In most scenarios, the security of the server is paramount: the server stores sensitive data, so file transfer protocols are designed in such a way that the server validates the client’s identity and performs appropriate access checks before allowing the client to read from or write to files. The trust boundary when Group Policy applies computer and/or user policies is completely reversed: the sensitive data is the client’s configuration, and the remote server has the capability of changing the client’s configuration via transmission of policy files and/or scripts. When Group Policy is retrieving data from the policy server, it is important that the client performs security checks to validate the server’s identity and prevent data tampering between the client and the server (in addition to the normal security checks performed by the server to validate the client’s credentials). It is also important that MUP only send requests for Group Policy files to UNC Providers that support these client-side checks, so as to prevent the checks from being bypassed when the SMB UNC Provider is unable to establish a connection to the server.
Group Policy isn’t necessarily the only service for which these extra client-side security checks are important. Any application or service that retrieves configuration data from a UNC path, and/or automatically runs programs or scripts located on UNC paths, could benefit from these additional security checks. As such, we’ve added a new feature, UNC Hardened Access, along with a corresponding Group Policy setting, through which MUP can be configured to require additional security properties when accessing configured UNC paths.
When UNC Hardened Access is configured, MUP starts handling UNC path requests in a slightly different manner:
Each time MUP receives a request to create or open a file on a UNC path, it evaluates the current UNC Hardened Access Group Policy settings to determine which security properties are required for the requested UNC path. The result of this evaluation is utilized for two purposes:
MUP only considers UNC Providers that have indicated support for all of the required security properties. Any UNC Providers that do not support all of the security properties required via the UNC Hardened Access configuration for the requested UNC path will simply be skipped.
Once a UNC Provider is selected by MUP, the required security properties are passed to that UNC Provider via an Extra Create Parameter (ECP). UNC Providers that opt-in to UNC Hardened Access must respect the required security properties indicated in the ECP; if the selected UNC Provider is unable to establish a connection to the server in a manner that satisfies these requirements (e.g. due to lack of server support), then the selected UNC Provider must fail the request.
Even 3rd party applications and services can take advantage of this new feature without additional code changes; simply add the necessary configuration details in Group Policy. If a UNC Provider is able to establish a connection to the specified server that meets the required security properties, then the application/service will be able to open handles as normal; if not, opening handles would fail, thus preventing insecure access to the remote server.
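The two-step flow above can be made concrete with a toy model (emphatically not the Windows implementation): policy patterns determine the required security properties for a path, and only providers advertising all of those properties remain eligible. The provider names, capability sets, and path patterns below are illustrative.

```python
from fnmatch import fnmatch

# Per-path policy: which security properties are required (cf. KB3000483).
HARDENED_PATHS = {
    r"\\*\NETLOGON": {"MutualAuthentication", "Integrity"},
    r"\\*\SYSVOL":   {"MutualAuthentication", "Integrity"},
}

# Capabilities each installed UNC Provider has indicated support for
# (illustrative values, not the real providers' advertisements).
PROVIDERS = {
    "SMB":    {"MutualAuthentication", "Integrity", "Privacy"},
    "WebDAV": set(),
}

def required_properties(unc_path: str) -> set:
    """Union of the properties required by every matching hardened-path rule."""
    required = set()
    for pattern, props in HARDENED_PATHS.items():
        # Match the "\\<host>\<share>" prefix; trailing '*' covers the object path.
        if fnmatch(unc_path.lower(), pattern.lower() + "*"):
            required |= props
    return required

def eligible_providers(unc_path: str) -> list:
    """Providers MUP would still consider for this path (step 1 of the flow)."""
    required = required_properties(unc_path)
    return [name for name, caps in PROVIDERS.items() if required <= caps]

print(eligible_providers(r"\\corp.contoso.com\NETLOGON\logon.cmd"))  # → ['SMB']
```

Step 2 (passing the required properties to the selected provider via an ECP, which must then fail the request if the server cannot satisfy them) is not modeled here; the filtering above only captures which providers are skipped outright.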
Please refer to http://support.microsoft.com/kb/3000483 for details on configuring the UNC Hardened Access feature.
Consider the following scenario:
Contoso maintains an Active Directory domain named corp.contoso.com with two Domain Controllers (DCs) named dc1.corp.contoso.com and dc2.corp.contoso.com.
A laptop is joined to the aforementioned domain.
Group Policy is configured to apply a Group Policy Object (GPO) to the laptop that configures UNC Hardened Access for the paths \\*\NETLOGON and \\*\SYSVOL such that all access to these paths requires both Mutual Authentication and Integrity.
Group Policy is configured to apply a GPO to the laptop that runs the script located at \\corp.contoso.com\NETLOGON\logon.cmd each time a user logs on to the machine.
With the above configuration, when a user successfully logs onto the laptop and the laptop has any network access, Group Policy will attempt to run the script located at \\corp.contoso.com\NETLOGON\logon.cmd, but behind the scenes, MUP would only allow the script to be run if the file could be opened and transmitted securely:
MUP receives a request to open the file at \\corp.contoso.com\NETLOGON\logon.cmd.
MUP notices that the requested path matches \\*\NETLOGON and paths that match \\*\NETLOGON are configured to require both Mutual Authentication and Integrity. UNC Providers that do not support UNC Hardened Access or indicate that they do not support both Mutual Authentication and Integrity are skipped.
The Distributed File Server Namespace (DFS-N) client detects that the requested UNC path is a domain DFS-N namespace and begins its process of rewriting the UNC path (all DFS-N requests will be subject to the same security property requirements identified by MUP in step 2):
The DFS-N client uses the DC Locator service and/or DFS-N DC Referral requests (depending on the OS version) to identify the name of a DC on the domain (e.g. dc1.corp.contoso.com).
DFS rewrites the path using the selected DC (e.g. \\corp.contoso.com\NETLOGON\logon.cmd becomes \\dc1.corp.contoso.com\NETLOGON\logon.cmd). Since Mutual Authentication is required and the target is expected to be a DC, DFS utilizes a special Kerberos Service Principal Name (SPN) to verify that the name retrieved in the previous step is indeed the name of a DC (if the name is not a DC, Kerberos authentication would fail due to an unknown SPN).
If there are additional DFS-N links in the specified UNC path, the DFS-N client continues iterating and replacing paths to DFS-N links with paths to available targets until it has a UNC path that does not have any remaining DFS-N links.
The final UNC path is passed back to MUP to select a UNC Provider to handle the request. MUP selects the SMB UNC provider since DCs utilize SMB to share the NETLOGON and SYSVOL shares.
The SMB UNC Provider establishes an authenticated session with the selected SMB Server (if an authenticated session is not already present). If the authenticated session is not mutually authenticated (e.g. authentication was performed utilizing the NTLM protocol), then SMB UNC Provider would fail the request to open logon.cmd since mutual authentication requirement identified in step 2 could not be met.
The SMB UNC Provider enables SMB Signing on all requests related to logon.cmd since MUP informed SMB that integrity is required for this request. Any attempts to tamper with the SMB requests or responses would invalidate the signatures on the requests/responses, thus allowing the receiving end to detect the unauthorized modifications and fail the SMB requests.
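The rewrite performed in step 3 can be sketched as follows. DC discovery is stubbed out with a hypothetical lookup, and the point is simply that the security requirements chosen by MUP in step 2 travel unchanged with the rewritten path.

```python
def locate_dc(domain: str) -> str:
    """Stand-in for the DC Locator / DFS-N referral step (hypothetical data)."""
    return {"corp.contoso.com": "dc1.corp.contoso.com"}[domain]

def rewrite_dfs_path(unc_path: str, required: set) -> tuple:
    """Rewrite a domain-based UNC path to target a specific DC, carrying the
    required security properties along with it. Toy model: a real DFS-N client
    iterates over links, verifies the DC via a Kerberos SPN, etc."""
    parts = unc_path.lstrip("\\").split("\\")
    domain, rest = parts[0], parts[1:]
    dc = locate_dc(domain)
    rewritten = "\\\\" + "\\".join([dc] + rest)
    # The same requirements (e.g. Mutual Authentication, Integrity) apply to
    # every request issued for the rewritten path.
    return rewritten, required

path, req = rewrite_dfs_path(r"\\corp.contoso.com\NETLOGON\logon.cmd",
                             {"MutualAuthentication", "Integrity"})
print(path)  # → \\dc1.corp.contoso.com\NETLOGON\logon.cmd
```

Because the requirements survive the rewrite, the SMB UNC Provider in steps 5 and 6 still knows it must achieve mutual authentication and enable signing for the final dc1 path.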
In this scenario, the client-side requirement of end-to-end mutual authentication and integrity protects the laptop from running a logon script located on a malicious server via the following security checks:
The requirement for Mutual Authentication ensures that the connection is not redirected to an unexpected (and potentially malicious) SMB Server when SMB Client attempts to establish a connection to the requested UNC path.
The requirement for Integrity enables SMB Signing, even if the SMB Client does not require SMB Signing for all paths by default. This protects the system against on-the-wire tampering that can be used to change the contents of the logon.cmd script as it is transmitted between the selected DC and the laptop.
The combined requirements for both Mutual Authentication and Integrity ensures that the final rewritten path selected by DFS-N Client matches a path allowed by the DFS-N namespace configuration and that spoofing and/or tampering attacks cannot cause DFS-N client to rewrite the requested UNC path to a UNC path hosted by an unexpected (and potentially malicious) server.
Without these client-side protections, tampering with the ARP, DNS, DFS-N, or SMB requests sent on behalf of Group Policy over untrusted networks could potentially cause the Group Policy service to run the logon.cmd script from the wrong SMB Server.
Once the update included as part of the bulletin MS15-011 is installed, follow the instructions at http://support.microsoft.com/kb/3000483 to ensure your systems are adequately protected. MS15-014 will install and provide protection without any additional configuration.
Please note that the Offline Files feature is not available on paths for which the UNC Hardened Access feature is enabled.
In many regards, this security ‘fix’ is more accurately described as completely new functionality in Windows. Adding something of this scale posed a unique challenge to security response. Software vulnerabilities are typically more narrowly constrained in both investigation and remediation – and most response is structured to address that scope. Among the benefits of Coordinated Vulnerability Disclosure (CVD) is that it provides greater flexibility and deeper collaboration with researchers, allowing the necessary time and perspective to deliver the most complete security solutions to customers. In this case we tackled a vulnerability that required a much greater engineering scope to deliver a solution.
Most vulnerabilities reported to the MSRC are bugs in a single component, which are investigated, understood, and fixed within industry accepted response times. Creating the new functionality of UNC Hardening, however, required an entirely new architecture which increased development time and necessitated extensive testing. Thanks to CVD, and the close collaboration with the passionate security researchers who reported the vulnerability, Microsoft had sufficient time to build the right fix for a complicated issue. If the security researchers were not willing to refrain from disclosure until our fix was ready, customers would have been put at risk.
Microsoft offers its appreciation to the CVD community and a special thanks to the reporters of the issue which has resulted in UNC Hardening: Jeff Schmidt of JAS Global Advisors, Dr. Arnoldo Muller-Molina of simMachines, The Internet Corporation for Assigned Names and Numbers (ICANN) and Luke Jennings from MWR Labs.
Geoffrey Antos (Windows), Brandon Caldwell (MSRC), Stephen Finnigan (MSRC), Swamy Gangadhara (MSRC)
Today Microsoft released update MS14-068 to address CVE-2014-6324, a Windows Kerberos implementation elevation of privilege vulnerability that is being exploited in-the-wild in limited, targeted attacks. The goal of this blog post is to provide additional information about the vulnerability, update priority, and detection guidance for defenders. Microsoft recommends customers apply this update to their domain controllers as quickly as possible.
CVE-2014-6324 allows remote elevation of privilege in domains running Windows domain controllers. An attacker with the credentials of any domain user can elevate their privileges to that of any other account on the domain (including domain administrator accounts).
The exploit found in-the-wild targeted a vulnerable code path in domain controllers running Windows Server 2008R2 and below. Microsoft has determined that domain controllers running Windows Server 2012 and above are vulnerable to a related attack, but it would be significantly more difficult to exploit. Non-domain controllers running all versions of Windows are receiving a “defense in depth” update but are not vulnerable to this issue.
Before talking about the specific vulnerability, it will be useful to have a basic understanding of how Kerberos works.
One point not illustrated in the diagram above is that both the TGT and Service Ticket contain a blob of data called the PAC (Privilege Attribute Certificate). A PAC contains (among other things):
When a user first requests a TGT from the KDC, the KDC puts a PAC (containing the user’s security information) into the TGT. The KDC signs the PAC so it cannot be tampered with. When the user requests a Service Ticket, they use their TGT to authenticate to the KDC. The KDC validates the signature of the PAC contained in the TGT and copies the PAC into the Service Ticket being created.
When the user authenticates to a service, the service validates the signature of the PAC and uses the data in the PAC to create a logon token for the user. As an example, if the PAC has a valid signature and indicates that “Sue” is a member of the “Domain Admins” security group, the logon token created for “Sue” will be a member of the “Domain Admins” group.
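The sign-then-validate flow described above can be sketched with a generic message authentication code. To be clear, real PACs are binary structures protected with Kerberos checksum keys; the JSON-plus-HMAC scheme below is purely illustrative of why a client cannot forge its own group membership when validation is done correctly.

```python
import hashlib
import hmac
import json

KDC_KEY = b"kdc-long-term-secret"  # hypothetical key material, known only to the KDC

def sign_pac(pac: dict) -> bytes:
    """KDC-side: sign the PAC so recipients can detect tampering."""
    data = json.dumps(pac, sort_keys=True).encode()
    return hmac.new(KDC_KEY, data, hashlib.sha256).digest()

def validate_pac(pac: dict, signature: bytes) -> bool:
    """Service/KDC-side: reject any PAC whose signature does not verify."""
    return hmac.compare_digest(sign_pac(pac), signature)

pac = {"user": "Sue", "groups": ["Domain Users", "Domain Admins"]}
sig = sign_pac(pac)
print(validate_pac(pac, sig))  # → True

# A tampered PAC (e.g. with added group membership) fails validation:
forged = {"user": "Eve", "groups": ["Domain Admins"]}
print(validate_pac(forged, sig))  # → False
```

CVE-2014-6324, described next, was a flaw in exactly this validation step: the KDC could be made to accept a forged signature, so the second check above would effectively return True for attacker-supplied data.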
MS14-068 fixes an issue (CVE-2014-6324) in the way the Windows Kerberos KDC validates the PAC in Kerberos tickets. Prior to the update it was possible for an attacker to forge a PAC that the KDC would incorrectly validate. This allows an unprivileged authenticated domain user to remotely elevate their privileges to those of a domain administrator.
Companies currently collecting event logs from their domain controllers may be able to detect signs of exploitation pre-update. Please note that this logging will only catch known exploits; there are known methods to write exploits that will bypass this logging.
The key piece of information to note in this log entry is that the “Security ID” and “Account Name” fields do not match even though they should. In the screenshot above, the user account “nonadmin” used this exploit to elevate privileges to “TESTLAB\Administrator”.
After installing the update, on Windows Server 2008 R2 and above, the 4769 Kerberos Service Ticket Operation event log can be used to detect attackers attempting to exploit this vulnerability. This is a high-volume event, so it is advisable to log failures only (this will significantly reduce the number of events generated).
After installing the update, exploitation attempts will result in the “Failure Code” of “0xf” being logged. Note that this error code can also be logged in other extremely rare circumstances. So, while there is a chance that this event log could be generated in non-malicious scenarios, there is a high probability that an exploitation attempt is the cause of the event.
The only way a domain compromise can be remediated with a high level of certainty is a complete rebuild of the domain. An attacker with administrative privilege on a domain controller can make a nearly unbounded number of changes to the system that can allow the attacker to persist their access long after the update has been installed. Therefore it is critical to install the update immediately.
Azure Active Directory does not expose Kerberos over any external interface and is therefore not affected by this vulnerability.
Joe Bialek, MSRC Engineering
Today Microsoft shipped MS14-072 for the .NET Framework to address an Elevation of Privilege (EOP) vulnerability in the .NET Remoting feature. This update fixes a specific issue in .NET Remoting that allowed a specially crafted remote endpoint to exploit this vulnerability.
What is .NET Remoting?
.NET Remoting is a layer within the .NET Framework that facilitates communication between application domains (AppDomains). This permits managed objects to communicate across AppDomain, process, or machine boundaries. Objects can be passed by-reference across these boundaries. When methods are called on these objects, control again passes across the boundary to execute within the boundary where the object originated. Refer to .NET Remoting for more details.
Typical use of this is a .NET Remoting service that returns objects by-reference to the client. When the client invokes methods on these objects, code is executed on the server. Similarly, a client can pass an object by-reference to the service, and when that service invokes methods on that object, code executes on the client.
Use WCF instead of .NET Remoting
.NET Remoting is a legacy technology that is inherently less secure than WCF. It is unable to preserve isolation of trust levels across the client/server boundary, allowing specially crafted messages to exploit the use of by-reference objects to achieve an elevation of privilege. It also uses a legacy serialization technology that makes the server vulnerable to denial-of-service attacks. Because of this, we recommend that developers of distributed applications based on .NET Remoting consider porting their code to Windows Communication Foundation (WCF), which is more secure.
The boundary transparency in .NET Remoting makes it possible for a remote untrusted endpoint to take control of a .NET Remoting service. Because the service typically executes with full privileges, this permits a remote endpoint with lower privileges to elevate themselves using functionality exposed by .NET Remoting services. Within a completely trusted environment, this is normally not a problem. But if the .NET Remoting service is exposed to untrusted remote endpoints, this becomes an issue as it crosses the security boundary.
Read about how .NET Remoting works for more detail on why we recommend moving away from it.
The Windows Communication Foundation (WCF) unified programming model is designed to be robust when communicating with untrustworthy endpoints. In many cases this may be a small exercise to move to the newer, more supported technology. The MSDN article entitled How to: Migrate Managed-Code .NET Remoting to WCF provides a number of examples and code samples to help ease this transition process.
Securing .NET Remoting services
Moving to newer technology takes time; in the meantime, here are some steps to make a .NET Remoting service more secure:
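One such step is setting the `secure` attribute on the remoting channel in the application's configuration file. This is a sketch: the channel type (`tcp`) and port are placeholders to adapt to your own service.

```xml
<configuration>
  <system.runtime.remoting>
    <application>
      <channels>
        <!-- secure="true" requests encryption and signing on channels
             that support it (e.g., TcpChannel) -->
        <channel ref="tcp" port="8080" secure="true" />
      </channels>
    </application>
  </system.runtime.remoting>
</configuration>
```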
This enables encryption and digital signatures if the remoting system determines that the channel implements ISecurableChannel.
There is no authentication or encryption by default; developers have to enable them explicitly.
- Swamy Gangadhara (MSRC) & Ron Cain (.NET)
Today we released fourteen security bulletins addressing 33 unique CVEs. Four bulletins have a maximum severity rating of Critical, eight have a maximum severity rating of Important, and two have a maximum severity rating of Moderate. This table is designed to help you prioritize the deployment of updates appropriately for your environment.
Most likely attack vector
Max Bulletin Severity
Platform mitigations and key notes
(Windows OLE Component)
User opens malicious Office document.
CVE-2014-6352 used in limited, targeted attacks in the wild.
A malicious user sends specially crafted packets to an exposed service.
Internally found during a proactive security assessment.
MS14-065 (Internet Explorer)
User browses to a malicious webpage.
User opens malicious Word document.
Office 2010 and later versions are not affected by any of the vulnerabilities in this bulletin.
Only MSXML 3 is vulnerable.
User opens a malicious link.
This is a Cross Site Scripting vulnerability.
User opens a malicious PDF document with Adobe Reader.
CVE-2014-4077 used in one targeted attack in the wild to bypass the Adobe Reader sandbox via binary hijacking using a malicious DIC file.
(Windows Audio Service)
Local elevation of privilege only, could potentially be utilized as a sandbox escape.
An authenticated Windows user runs a malicious program on the target system.
Local elevation of privilege only.
Attacker sends malicious data to a vulnerable web application.
Applications not using .NET Remoting are not vulnerable.
A whitelist-only site could be accessed by an attacker not connected to the proper domain. A blacklist could be similarly bypassed.
The vulnerability manifests itself in configurations where the Domain Name Restrictions whitelist and blacklist features are used with entries that contain wildcards.
IP Address Restrictions are not affected
An authorization audit log could be bypassed in some scenarios.
The vulnerability only applies to failed authorization (AuthZ) scenarios, not to failed authentication (AuthN). For example, if a valid user logon is attempted for a user that does not have the privilege to RDP into a server, that event may not be recorded. Event logs will still be recorded if an invalid user name or password is presented.
An authenticated user could not be logged out in some configurations.
Manifests itself in a specific configuration where the ADFS server is configured to use a SAML Relying Party with no sign-out endpoint configured.
(Kernel Mode Drivers [win32k.sys])
User browses to malicious webpage.
The vulnerability leads to denial of service only.
- Suha Can, MSRC Engineering
Today, we’re releasing the Enhanced Mitigation Experience Toolkit (EMET) 5.1 which will continue to improve your security posture by providing increased application compatibility and hardened mitigations. You can download EMET 5.1 from microsoft.com/emet or directly from here. Following is the list of the main changes and improvements:
All the changes in this release are listed in Microsoft KB Article 3015976.
If you are using Internet Explorer 11, either on Windows 7 or Windows 8.1, and have deployed EMET 5.0, it is particularly important to install EMET 5.1 as compatibility issues were discovered with the November Internet Explorer security update and the EAF+ mitigation. Alternatively, you can temporarily disable EAF+ on EMET 5.0. Details on how to disable the EAF+ mitigation are available in the User Guide. In general we recommend upgrading to the latest version of EMET to benefit from all the enhancements.
We want to particularly thank Luca Davi, Daniel Lehmann, and Ahmad-Reza Sadeghi from the System Security Lab at Technical University Darmstadt/CASED, and René Freingruber from SEC Consult for partnering with us.
Your feedback is always welcome as it helps us improve EMET with each new release, so we encourage you to reach out using the Connect Portal or by sending an email to email@example.com.
Today Microsoft shipped MS14-057 for the .NET Framework in order to resolve an Elevation of Privilege vulnerability in the ClickOnce deployment service. While this update fixes the service, developers using Managed Distributed Component Object Model (a .NET wrapper around DCOM) need to take immediate action to ensure their applications are secure.
Managed DCOM is an inherently unsafe way to perform communication between processes of different trust levels. Microsoft recommends moving applications to Windows Communication Foundation (WCF) for inter-process communication instead of using Managed DCOM. Exposing Managed DCOM containers or servers to lower trust callers can result in elevation of privilege vulnerabilities. Please note that DCOM is considered to be secure; only Managed DCOM is considered to be insecure.
To understand why we recommend moving away from Managed DCOM, it is helpful to know how COM and DCOM work.
COM is a platform-independent, programming-language-independent, object-oriented system for creating software components that interact. Traditional COM occurs within a single process boundary.
DCOM is similar to normal COM except it allows for objects to be created in different processes or even different computers. This can be useful for distributed computing, or for scenarios where a client application needs to communicate with a server application.
Unfortunately the communications wrapper that the .NET Framework uses to talk to DCOM (also known as Managed DCOM) is unable to maintain this security boundary. If you use managed code to implement either a server or a container, it’s possible for the remote end of the communication channel to take over the managed process. In scenarios where the interaction is taking place inside the same process or between two processes running with the same privilege, this isn’t a problem. However, when the processes communicating with each other run with different levels of privilege, this becomes an issue.
Fortunately there is a solution for developers that rely on this functionality. The Windows Communication Foundation (WCF) unified programming model is designed to be robust when communicating with untrustworthy endpoints. In many cases this may be a small exercise to move to the newer, more supported technology. The MSDN article entitled How to: Migrate Managed-Code DCOM to WCF provides a number of examples and code samples to help ease this transition process.
For MS14-057, Microsoft removed the ClickOnce deployment service's dependency on Managed DCOM. We suggest all developers do the same if they are currently using Managed DCOM to communicate between components running with different privilege.
-Reid Borsuk (Product Security) and Joe Bialek (MSRC)