Mark Russinovich’s technical blog covering topics such as Windows troubleshooting, technologies and security.
Though I didn’t realize what I was seeing, Stuxnet first came to my attention on July 5 last summer, when I received an email from a programmer that included a driver file, Mrxnet.sys, which they had identified as a rootkit. A driver that implements rootkit functionality is nothing particularly noteworthy, but what made this one extraordinary is that its version information identified it as a Microsoft driver, yet it carried a valid digital signature from Realtek Semiconductor Corporation, a legitimate PC component manufacturer. (While I appreciate the programmer entrusting the rootkit driver to me, the official way to submit malware to Microsoft is via the Malware Protection Center portal.)
I forwarded the file to the Microsoft antimalware and security research teams, and our internal review of what became the Stuxnet saga began to unfold; the driver I had received quickly became one of the most infamous pieces of malware ever created. Over the next several months, investigations revealed that Stuxnet used four “zero day” Windows vulnerabilities to spread and to gain administrator rights once on a computer (all of which were fixed shortly after they were revealed), and that it was signed with certificates stolen from Realtek and JMicron. Most interestingly, analysts discovered code that reprograms Siemens SCADA (Supervisory Control and Data Acquisition) systems used to control centrifuges, and many suspect Stuxnet was specifically designed to destroy the centrifuges Iran’s nuclear program uses to enrich uranium, a goal the Iranian government reported the virus at least partially accomplished.
As a result, Stuxnet has been widely acknowledged as the most sophisticated piece of malware ever created. Because of its apparent motives and clues found in the code, some researchers believe that it’s the first known example of malware used for state-sponsored cyber warfare. Ironically, I present several examples of malware targeting infrastructure systems in my recently published cyber-thriller Zero Day, scenarios that seemed a bit of a stretch when I wrote the book several years ago. Stuxnet has proven them to be much more plausible than I had thought (by the way, if you’ve read Zero Day, please leave a review on Amazon.com).
My last several blog posts have documented cases of the Sysinternals tools being used to help clean malware infections, but malware researchers also commonly use the tools to analyze malware. Professional malware analysis is a rigorous and tedious process that requires disassembling malware to reverse engineer its operation, but system-monitoring tools like Sysinternals Process Monitor and Process Explorer can help analysts get an overall view of malware operation. They can also provide insight into malware’s purpose and help identify points of execution and pieces of code that require deeper inspection. As the previous blog posts hint, those findings can also serve as a guide for creating malware-cleaning recipes for inclusion in antimalware products.
I therefore thought it would be interesting to show the insights the Sysinternals tools give when applied to the initial infection steps of the Stuxnet virus (note that no centrifuges were harmed in the writing of this blog post). I’ll show a full infection of a Windows XP system and then uncover the way the virus uses one of the zero-day vulnerabilities to elevate itself to administrative rights when run from an unprivileged account on Windows 7. Keep in mind that Stuxnet is an incredibly complex piece of malware. It propagates and communicates using multiple methods and performs different operations depending on the version of operating system infected and the software installed on the infected system. This look at Stuxnet just scratches the surface and is intended to show how with no special reverse engineering expertise, Sysinternals tools can reveal the system impact of a malware infection. See Symantec’s W32.Stuxnet Dossier for a great in-depth analysis of Stuxnet’s operation.
Stuxnet spread last summer primarily via USB keys, so I’ll start the infection with the virus installed on a key. The virus consists of six files: four malicious shortcut files with names based on “Copy of Shortcut to.lnk” and two files with names that make them look like common temporary files. I’ve used just one of the shortcut files for this analysis, since they all serve the same purpose:
In this infection vector, Stuxnet begins executing without user interaction by taking advantage of a zero-day vulnerability in the Windows Explorer Shell (Shell32.dll) shortcut parsing code. All the user has to do is open a directory containing the Stuxnet files in Explorer. To allow the infection to succeed, I first uninstalled the fix for the Shell flaw, KB2286198, which Windows Update pushed out in August 2010. When Explorer on an unpatched system opens the shortcut file to find the shortcut’s target file so that it can helpfully show the target’s icon, Stuxnet infects the system and uses rootkit techniques to hide the files, causing them to disappear from view.
Stuxnet on Windows XP
Before triggering the infection, I started Process Monitor, Process Explorer and Autoruns. I configured Autoruns to perform a scan with the “Hide Microsoft and Windows Entries” and “Verify Code Signatures” options checked:
This removes any entries that have Microsoft or Windows digital signatures so that Autoruns shows only entries populated by third-party code, including code signed by other publishers. I saved the output of the scan so that I could have Autoruns compare against it later and highlight any entries added by Stuxnet. Similarly, I paused the Process Explorer display by pressing the space bar, which would enable me to refresh it after the infection and cause it to show any processes started by Stuxnet in the green background color Process Explorer uses for new processes. With Process Monitor capturing registry, file system, and DLL activity, I navigated to the USB key’s root directory, watched the temporary files vanish, waited a minute to give the virus time to complete its infection, stopped Process Monitor and refreshed both Autoruns and Process Explorer.
After refreshing Autoruns, I used the Compare function in the File menu to compare the updated entries with the previously saved scan. Autoruns detected two new device driver registrations, Mrxnet.sys and Mrxcls.sys:
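The idea behind the Compare function is simple enough to sketch in a few lines of Python. This is a hypothetical illustration of comparing two saved autostart snapshots, not how Autoruns itself stores its data; the entry format (autostart location, image path) is a simplification invented for the example:

```python
# Hedged sketch: diff two autostart snapshots and report new entries.
# The (location, image path) tuples are a simplification of what a real
# Autoruns scan would save, purely for illustration.

def new_entries(baseline, current):
    """Return entries present in the current scan but not in the baseline."""
    return sorted(set(current) - set(baseline))

baseline = [
    (r"HKLM\System\CurrentControlSet\Services", r"C:\Windows\System32\drivers\disk.sys"),
]
current = baseline + [
    (r"HKLM\System\CurrentControlSet\Services", r"C:\Windows\System32\drivers\mrxnet.sys"),
    (r"HKLM\System\CurrentControlSet\Services", r"C:\Windows\System32\drivers\mrxcls.sys"),
]

for location, image in new_entries(baseline, current):
    print(f"NEW: {location} -> {image}")
```

Against the simulated snapshots above, only the two Stuxnet driver registrations are reported, which mirrors what the Autoruns comparison highlighted.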
Mrxnet.sys is the driver that the programmer originally sent me, the one that implements the rootkit that hides files, and Mrxcls.sys is a second Stuxnet driver file that launches the malware when the system boots. Stuxnet’s authors could easily have extended Mrxnet’s cloak to hide these files from tools like Autoruns, but they apparently felt confident that the valid digital signatures from a well-known hardware company would cause anyone who noticed them to pass them over. It turns out that Autoruns has told us all we need to know to clean the infection, which is as easy as deleting or disabling the two driver entries.
Turning my attention to Process Explorer, I also saw two green entries, both instances of the Local Security Authority Subsystem (Lsass.exe) process:
Note the instance of Lsass.exe immediately beneath them that’s highlighted in pink: a normal Windows XP installation has just one instance of Lsass.exe that the Winlogon process creates when the system boots (Wininit creates it on Windows Vista and higher). The process tree reveals that the two new Lsass.exe instances were both created by Services.exe (not visible in the screenshot), the Service Control Manager, which implies that Stuxnet somehow got its code into the Services.exe process.
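That parent-process observation generalizes into a simple heuristic: an Lsass.exe whose parent isn’t the expected creator deserves scrutiny. The sketch below illustrates the idea over a simulated process list; a real tool would enumerate live processes and their parent PIDs through the Windows API, and the PIDs shown are invented for the example:

```python
# Hedged sketch: flag Lsass.exe instances whose parent isn't the expected
# creator (Winlogon on XP, Wininit on Vista and higher). The process list
# is simulated; a real tool would enumerate processes via the Windows API.

EXPECTED_PARENTS = {"winlogon.exe", "wininit.exe"}

def suspicious_lsass(processes):
    """processes: list of (pid, name, parent_name) tuples."""
    return [p for p in processes
            if p[1].lower() == "lsass.exe"
            and p[2].lower() not in EXPECTED_PARENTS]

procs = [
    (644, "lsass.exe", "winlogon.exe"),   # the legitimate instance
    (1732, "lsass.exe", "services.exe"),  # extra instance, like Stuxnet's
    (1748, "lsass.exe", "services.exe"),  # extra instance, like Stuxnet's
]
for pid, name, parent in suspicious_lsass(procs):
    print(f"PID {pid}: {name} started by {parent}")
```

On the simulated list, only the two Services.exe-parented instances are flagged, matching what the Process Explorer tree showed.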
Process Explorer can also check the digital signatures on files, which you initiate by opening the process or DLL properties dialog and clicking on the Verify button, or by selecting the Verify Image Signatures option in the Options menu. Checking the rogue Lsass processes confirms that they are running the stock Lsass.exe image:
The two additional Lsass processes obviously have some mischievous purpose, but their main executable and command lines don’t reveal any clues. Besides running as children of Services.exe, another suspicious characteristic of the two superfluous processes is that they have very few DLLs loaded, as the Process Explorer DLL view shows:
The real Lsass has many more:
No non-Microsoft DLLs show up in the loaded-module lists of Services.exe, Lsass.exe or Explorer.exe, so they are probably hosting injected executable code. Studying that code would require advanced reverse engineering skills, but we might be able to determine where it resides in those processes, and hence what someone with those skills would analyze, by using the Sysinternals VMMap utility. VMMap is a process memory analyzer that visually displays the address space usage of a process. To execute, code must be stored in memory regions that have Execute permission. Injected code is likely to be stored in memory that normally holds data and is therefore not usually executable, so it might be possible to find the code just by looking for executable memory that isn’t backed by a DLL or executable image. If the region also has Write permission, that makes it even more suspicious, because the injection requires Write permission and the injector probably isn’t concerned with removing that permission once the code is in place. Sure enough, the legitimate Lsass has no executable data regions, but both new Lsass processes have regions with Execute and Write permissions at the same location and of the same size in their address spaces:
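The VMMap heuristic just described can be sketched as a filter over memory-region records. The region data below is simulated (on Windows it would come from an API such as VirtualQueryEx), and the base addresses and sizes are invented for the example:

```python
# Hedged sketch of the VMMap heuristic: flag committed memory regions that
# are both executable and writable yet not backed by an image file. Region
# records are simulated; a real scanner would query the target process.

def injected_candidates(regions):
    """regions: list of dicts with 'base', 'size', 'protect', 'type'."""
    return [r for r in regions
            if "execute" in r["protect"]
            and "write" in r["protect"]
            and r["type"] != "image"]

regions = [
    # A normal mapped DLL: executable, but image-backed and not writable.
    {"base": 0x7C900000, "size": 0xB2000, "protect": "execute/read", "type": "image"},
    # Private data memory marked Execute+Write: the injection signature.
    {"base": 0x00870000, "size": 0x7A000, "protect": "execute/read/write", "type": "private"},
]
for r in injected_candidates(regions):
    print(f"suspicious region at {r['base']:#x}, {r['size']:#x} bytes")
```

Only the private Execute+Write region survives the filter, which is exactly the kind of region VMMap surfaced in the two rogue Lsass processes.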
VMMap’s Strings dialog, which you open from the View menu, shows any printable strings in a selected region. The 488K region has the string “This program cannot be run in DOS mode” at its start, the standard message stored in the header of every Windows executable. That implies that the virus is injecting not just a code snippet, but an entire DLL:
The region is almost devoid of any other recognizable text, so it’s probably compressed, but the Windows API strings at the end of the region are from the DLL’s import table:
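Recognizing a full executable image by its header strings is easy to demonstrate. The sketch below checks a byte buffer for the “MZ” DOS-header magic and the DOS-stub message; the buffer is a mocked-up region invented for the example, not a real executable:

```python
# Hedged sketch: recognize a PE image sitting at the start of a memory
# region by its DOS header ("MZ") and the standard DOS-stub message.
# The region below is a mocked-up buffer, not a real executable.

DOS_STUB = b"This program cannot be run in DOS mode"

def looks_like_pe(region: bytes) -> bool:
    """True if the region starts with an MZ header and the DOS stub
    message appears within the first 0x200 bytes."""
    return region[:2] == b"MZ" and DOS_STUB in region[:0x200]

# Fake region: MZ magic, padding where the DOS header fields would be,
# then the stub message, then more padding.
region = b"MZ" + b"\x00" * 0x4C + DOS_STUB + b"\x00" * 64
print(looks_like_pe(region))  # prints True
```

A region that passes this check, like the 488K region in the rogue Lsass processes, is worth carving out and examining as a complete PE file rather than a loose code snippet.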
Explorer.exe, the initially infected process, and Services.exe, the process that launched the Lsass processes, likewise have no suspicious DLLs loaded, but both have similar unusual executable data regions:
The two Mrx drivers are also visible in the loaded driver list, which you can see in the DLL view of Process Explorer for the System process. The only reason they stand out at all is that their version information reports them to be from Microsoft, but their signatures are from Realtek (the certificates have been revoked, but since the test system is disconnected from the Internet, it is unable to query the Certificate Revocation List servers):
At this point we’ve gotten about as far as we can with Autoruns and Process Explorer. What we know so far is that Stuxnet drops two driver files on the system, registers them to start when the system boots, and starts them. It also infects Services.exe and creates two Lsass.exe processes that run until system shutdown, the purpose of which can’t be determined by their command-lines or loaded DLLs. However, VMMap has given us pointers to injected code and Autoruns has given us an easy way to clean the infection. The Process Monitor trace from the infection has about 30,000 events, and from that we’ll be able to gain further insight into what happens at the time of the infection, where the injected code is stored on disk, and how Stuxnet activates the code at boot time. Read more in Part 2.
Mark Russinovich is a Technical Fellow on the Windows Azure team at Microsoft and is author of Windows Internals, The Windows Sysinternals Administrator’s Reference, and the cyberthriller Zero Day: A Novel. You can contact him at email@example.com.
The depressing thing about this is that there's hardly anything the average user can do to stop this behavior and still use general purpose consumer operating systems. Patching and virus scanners simply don't protect against this, and neither does "being careful."
Mark, the depth of your knowledge and expertise knows no bounds. I truly enjoyed this article.
I thought you / your readers would find it equally interesting that Sysinternals tools were employed in the Rustock botnet takedown, as discussed at Microsoft's www.noticeofpleadings.com website (official lawsuit documents file 2011-March).
For example - see page 29 of the "Campana Declaration (Exhibits 1-10)" document:
Not unlike the details you shared, sites such as FireEye have published similar details on the Rustock botnet, including a list of known C&C IP Addresses used by Rustock, as seen here:
Great stuff - keep it coming!
The original Paulie D
@Paulie Thanks for the feedback! I didn't realize that the Microsoft cybercrimes team had used Sysinternals tools for the Rustock takedown - pretty cool!
Great analysis, waiting for part two.
Btw I ordered your book and hopefully will have it here in a few days (ordered it pretty much when it was available on Amazon).
Shipping to Israel takes quite a while :(
I always enjoy these kinds of articles, even if it's only on a "detective novel" sort of level...
".... what made this one extraordinary is that its version information identified it as a Microsoft driver and it had a valid digital signature issued by Realtek Semiconductor Corporation, a legitimate PC component manufacturer...."
After having run across a number of Dell drivers that Autoruns hid because they were signed as Microsoft under the Windows Hardware Certificate Authority, I quit hiding signed Microsoft entries for a short time. I wanted to see *ALL* third-party software. (Clarification: no intent to imply that the Dell drivers had confirmed malicious code.)
The linked Dossier was *very* interesting reading. I recently submitted an unsigned, oddly named Microsoft driver (loading via a weirdly named registry key), which the Microsoft analyzer dismissed as "Slightly modified with appended data, but a clean volsnap.sys nevertheless." The techniques described in the dossier led me to wonder whether there is any possibility of this "appended data" being executed while avoiding detection by automated scanners. The XP Professional system, which for various reasons I assume once belonged to a corporate environment (coincidentally, its owner is employed by a regional utility company), had an updated copy of F-Secure running. It never flagged the suspicious file.
Anyway, the case above aside, I have seen many machines with MRx drivers on them, but because they were signed, I had let them go.
As always, many thanks. Computing would not be the same without your expertise; benevolently put forth in the form of the SysInternals tools suite.
Good analysis Mark, thanks.
Hope to see more deep looks at this specimen in future posts.
I was referred here from an article I wrote regarding a UAC prompt to install a driver supposedly signed by Microsoft. My attempts to validate the signature failed at several turns: the driver name did not fit in the UAC prompt; the signature for a driver released on Windows Update in March 2011 expired in January 2010; the certificate revocation list is offline; the PKI policy in the certificate is a dead link; the PKI policy's parent URL works but only says that as of June 2001, it is the "future location" of the PKI policy; and a Microsoft PC Safety rep showed no interest in a potentially bogus certificate.
What is the point of digital signing if there is no public PKI policy, no CRL, and out-of-date signatures? How does one determine if a certificate or driver is really from Microsoft?
Here is the article: www.mcbsys.com/.../is-this-driver-legitimate.