By Adrian J. Beasley
Software Restriction Policies (SRPs) are extremely powerful. They also make it possible for you to foul up big-time: there is no safety barrier. For all that, they are very useful.
“With great power comes great responsibility.”
The following rules should enable you to apply this seriously powerful security technique effectively and also safely:
SRPs are defined in the Computer Configuration/Windows Settings/Security Settings/Software Restriction Policies section of a GPO. (There is also provision for defining them under the user settings, presumably to allow administrators to circumvent the restrictions. I think this is a very bad idea; there should be no exceptions. Should it genuinely be necessary to circumvent the SRP temporarily, my preferred solution would be to move the machine to a different OU, where the SRP did not apply, do whatever was necessary, then move it back.)
Initially, no SRP is defined for the GPO, so the first thing to do is to create a new policy. The default policy so created has the security level set to unrestricted, and four path rules defined, all of which are also unrestricted. The default SRP thus has precisely no effect – it serves purely as a starting point for you to define your own. The first step in configuring the new SRP is to change the security level to disallowed. This means exactly what it says – nothing at all will run except those cases you have explicitly allowed (set to unrestricted).
The allowed exceptions are defined in three stages:
The Enforcement Properties section has two settings: All software files except libraries (such as DLLs) versus All software files, and All users versus All users except local administrators. The defaults are All software files except libraries (such as DLLs) and All users. These should be accepted. Applying the policy to DLLs as well involves significant processing effort, and also significant administrator effort in configuring the relevant DLLs, for no additional benefit, since the DLLs are only accessed from a top-level module such as an EXE file, which is covered by the policy anyway. I have already stated my opinion that local admins should not be exempted from the policy automatically, but that the machine should instead be removed from the policy temporarily in order to perform any exceptional action which is genuinely needed.
The Designated Filetypes section is a list of file extensions to which the additional rules, defining the exceptions, apply. The default list includes all the standard executable types, and entries should not be removed from the list without a very compelling reason. Again, be very clear: if an executable filetype is not in the list, files of that type will not run - period. Only add to the list.
Finally, the Additional Rules section defines the circumstances under which files of the designated types will run. There are four types of rule: Hash, Certificate, Path and Internet Zone, and they are applied in that order (within a particular type, there seems to be no way to prescribe an order for the rules to be applied – it probably doesn’t matter anyway). The first rule which a given file satisfies is the rule applied; any subsequent ones that the file may also satisfy are ignored. The action of the rule, i.e. its security level, is unrestricted or disallowed. Normally (since the default security level should always be set to disallowed) the rule’s security level will be unrestricted, but there are valid and necessary exceptions to this.
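The first-match behaviour can be sketched as follows. This is a conceptual model only, in Python; the rule representation and helper names are my own invention for illustration, not anything SRP itself exposes:

```python
# Conceptual sketch of SRP rule evaluation: rule types are checked in a
# fixed order (Hash, Certificate, Path, Internet Zone), and the first
# rule a file satisfies decides its fate; later matches are ignored.
RULE_ORDER = ["hash", "certificate", "path", "zone"]

def evaluate(file_attrs, rules, default_level="disallowed"):
    """file_attrs: dict describing the file; rules: list of
    (rule_type, predicate, security_level) tuples."""
    for rule_type in RULE_ORDER:
        for r_type, predicate, level in rules:
            if r_type == rule_type and predicate(file_attrs):
                return level          # first match wins
    return default_level              # no rule matched: default applies

# A file matching both a certificate rule and a path rule: the
# certificate rule is considered first, so its level is the one applied.
rules = [
    ("path", lambda f: f["path"].startswith(r"C:\Scripts"), "disallowed"),
    ("certificate", lambda f: f.get("signer") == "Contoso", "unrestricted"),
]
print(evaluate({"path": r"C:\Scripts\x.ps1", "signer": "Contoso"}, rules))
# → unrestricted
```

Note how a signed file in the disallowed path still runs: precedence, not the order in which the rules were entered, is what matters.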
Hash rules identify a file by the hash value of its contents. A hash rule thus identifies one and only one logical file, irrespective of its name (including extension, of course) or location. Certificate rules identify files digitally signed by a particular code-signing certificate, and apply to all the designated filetypes; they cannot be restricted to a particular type. Path rules identify files by location. The path can be specified either explicitly or, delimited by % characters, by registry value or environment variable. A path rule can apply to all designated filetypes or, by using a wildcard (e.g. *.exe), only to files of a particular type. Finally, an Internet Zone rule applies to files from a particular internet zone – executed from the browser. In practice the only zone you would be likely to use is Trusted Sites – though Local Computer or Local Intranet is also possible; Internet and Restricted Sites are by definition unsafe and untrusted.
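The point that a hash rule identifies content rather than name or location is easily illustrated. This is a Python sketch of the principle only; SRP computes its hash internally in its own format, which this does not reproduce:

```python
import hashlib
import os
import tempfile

def file_hash(path):
    # Hash the file's contents; the name and location play no part.
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

with tempfile.TemporaryDirectory() as d:
    # Two copies of the same "executable" under different names (and,
    # in general, different directories) produce the same hash, so a
    # single hash rule covers both...
    a = os.path.join(d, "setup.exe")
    b = os.path.join(d, "renamed.scr")
    for p in (a, b):
        with open(p, "wb") as f:
            f.write(b"MZ...same bytes...")
    print(file_hash(a) == file_hash(b))   # True

    # ...while changing a single byte changes the hash entirely, and
    # the rule no longer matches.
    with open(b, "ab") as f:
        f.write(b"!")
    print(file_hash(a) == file_hash(b))   # False
```

This is also why hash rules are high-maintenance: every patch to a covered file means a new hash rule.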
Note that Certificate Rules are not available by default; their availability must be enabled by Group Policy. The setting is Computer Configuration/Windows Settings/Security Settings/Local Policies/Security Options/System Settings: Use Certificate Rules on Windows Executables for Software Restriction Policies, and it needs to be enabled. Note further that this is one of those settings (like the wireless settings) that are available only when configuring the GPO from Windows Server 2003 – i.e. it is not available from XP. The effect of this setting is to set the registry value HKLM\SOFTWARE\Policies\Microsoft\Windows\Safer\CodeIdentifiers\AuthenticodeEnabled to 1; if it is 0, certificate rules are not applied.
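A quick way to confirm that the setting has actually taken effect on a given machine is to query the value directly:

```
reg query "HKLM\SOFTWARE\Policies\Microsoft\Windows\Safer\CodeIdentifiers" /v AuthenticodeEnabled
```

A value of 0x1 means certificate rules are being applied.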
The four default path rules allow execution from the Windows and Program Files directory trees, and are essential to allow the OS and applications to run when the default security level is Disallowed. These should thus never be removed. If they were removed, or essential entries removed from the list of designated filetypes, then applying such a policy to a machine would simply cause it to die; everything would stop running, and it could not be restarted. Applying such a policy at the domain level would kill every machine in the domain. Applying it to the Domain Controllers OU would be effectively as bad – without directory services, none of the other machines could do anything useful. This is the Doomsday scenario hinted at several times already. The only way I can think of to recover from such a disaster would be to switch everything off and rebuild all the DCs, one by one. The first, forest root DC would be restored from the most recent backup (which would not contain the rogue SRP, of course). Every other DC would have to be rebuilt as a member server, promoted to DC and left to replicate Active Directory from the forest root. I don’t think the DCs could simply be switched on and replicate from the forest root – the OS simply wouldn’t run because, this being a DC, it would take its security settings from its own copy of AD. (I am imagining what would happen, you understand. I haven’t tried it.)
You might think this an implausible scenario, but I can think of two ways in which it could occur, without involving either sabotage or stupidity. In the first place, the SRP documentation is not all it could be, and is in some respects actively misleading (no!) and even an intelligent person could easily misunderstand it. Specifically, you might think (as I initially did) that the SRP default security level applied only to the set of designated filetypes and thus innocently construct a rogue policy to apply a specific rule to a specific filetype, thinking it would block only files of that type, which didn’t satisfy the rule. (Anticipating the later sections of this article, I wanted to apply a certificate rule to PowerShell scripts, i.e. the PS1 filetype.) I learnt better when my machine died on me. Since this was an explicitly experimental exercise, no other machine was affected, and I could disable the rogue policy from another machine. (If you think this a naïve interpretation of the documentation, note that, when you remove an item from the designated filetypes list, a message is displayed ‘If you delete this filetype, programs of this type will run with unrestricted privileges’. This would be true only if the default security level were Unrestricted and you were applying rules to restrict particular cases. If the default is Disallowed, then this unqualified statement is actually wrong, and dangerously misleading.)
Another possible cause of disaster is the use of multiple SRPs. Only one SRP can be defined in a particular GPO, and this contains only one set of designated filetypes. You could define a second SRP with a different set of filetypes, such as my PS1 example, and apply both GPOs to the same OU, where the effects would be cumulative (so I understand). Imagine that you have the two GPOs applied at the domain level. Everything works just fine until one day, somehow, one of the GPOs – the wrong one, natch! – gets disabled or deleted.
The following points that I noted in applying SRPs will, I think, be of general interest.
I created a new policy, taking all the defaults and then setting the security level to Disallowed, and applied it. I knew immediately that it was working, as I could no longer start programs from the desktop or the start menu. (I could use the Run command to start them – directly from the Windows or Program Files directories, of course – but not the shortcuts on the desktop or in the start menu.) This situation is rectified by adding the following Path Rules:
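The rules take this general form – path rules on the profile trees, using the environment-variable syntax described earlier (the exact paths will vary with Windows version, so treat these as the likely candidates rather than gospel):

```
%ALLUSERSPROFILE%    Unrestricted
%USERPROFILE%        Unrestricted
```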
The former rule allows all the standard applications, available to all users; the latter allows the applications specifically installed/configured by the current user. Security pros, particularly in respect of a managed desktop environment, might well consider it a good thing to omit the %USERPROFILE% rule.
I loaded a DVD in the drive. It didn’t autoload. I tried installing software from it, and couldn’t. Of course not. I needed a further path rule:
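The rule is simply the root of the drive, for example:

```
D:\    Unrestricted
```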
(that being the drive letter). Security pros will certainly consider it a good thing for users not to be able to run stuff from CD/DVD.
I tried running Microsoft Update, and found I couldn’t. Of course not. I needed an Internet Zone rule:
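The rule takes this form, assuming the update site has been assigned to the Trusted Sites zone (as noted above, Trusted Sites is the only zone you would realistically use here):

```
Internet Zone rule:  Trusted Sites    Unrestricted
```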
Security pros will probably consider it a bad thing for users to be able to run their own updates (even Microsoft Update) and opt instead to use WSUS and/or SMS.
I tried installing software from a network share. This required the path rule to specify the location by UNC, not drive letter:
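Of this general form (the server and share names here are placeholders, of course):

```
\\SERVER\SHARE    Unrestricted
```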
All of these things are to be expected, so I don’t know why they took me by surprise.
SRPs in fact work very well. They really shine in a managed desktop environment, where they enable a very tight control to be imposed on what can be executed and what changes can be made (if you like, none at all). They are therefore likely to be rather less popular among users than among administrators (tough!). But they are a very practical answer to several serious security issues. They are less beneficial among servers, but even here, they are a significant contribution to Defence in Depth. For reasons which should now be clear, I’m wary of applying them to Domain Controllers.
So why are they so unfamiliar and so little used? Perhaps people are afraid of them?
The present article originated in a suggestion from a respected colleague, that anyone could circumvent the PowerShell code signing requirement by procuring a code-signing certificate from a trusted public certification authority, and using that to sign and thus run their own PowerShell scripts. (With an internal enterprise CA, we can control who is able to enroll code signing or any other certificates by the security settings of the relevant certificate template(s).) I couldn’t answer his claim there and then, so I have checked out closely how code signing of PowerShell scripts actually works out in practice, and it seems there is indeed a potential security weakness here. As is often the case, my investigation widened out into matters of general interest, way beyond the original issue. My testing has been done using an internal enterprise CA, but the same effects would surely apply to any other CA whose root certificate appears in the Trusted Root Certification Authorities certificate store of the machine or local user.
Enrolling a code signing certificate writes the certificate to the user’s personal certificate store. An externally procured certificate must likewise be imported to this location. This certificate can then be used to code-sign scripts, in the normal way. Refer to my article ‘Powershell Installation’ on the Industry Insiders blog, or to the general Powershell documentation, for how to do this. (Strictly, i.e. pedantically, speaking, it is the user’s private key which is used to sign scripts; the certificate, i.e. the corresponding public key, is used to validate them. However, the code-signing process needs to identify the certificate, and presumably locates the relevant private key from that – after all, there could in principle be more than one certificate, and thus private key, present with this functionality.)
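For reference, the signing step itself looks like this in practice (a sketch only; the script path is a placeholder, and I assume the personal store contains exactly one code-signing certificate):

```powershell
# Pick up the code-signing certificate from the user's personal store...
$cert = @(Get-ChildItem Cert:\CurrentUser\My -CodeSigningCert)[0]

# ...and use it (strictly, the private key it corresponds to) to sign
# the script, embedding the signature in the file.
Set-AuthenticodeSignature -FilePath .\MyScript.ps1 -Certificate $cert
```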
However, although you can now sign scripts, you still can’t (automatically) run them. To be able to do this, the same certificate must also be present in the user’s Trusted Publishers certificate store (or in the Trusted Publishers store of the machine, in which case it would apply to all users). If you try to run a script for which this is not the case (whether the code signer is yourself or someone else), the message Do you want to run software from this untrusted publisher? appears, with the options never run / do not run / run once / always run. If you select run once, the script runs with no other effects. If you select always run, the script runs and, in addition, the certificate is copied to the user’s Trusted Publishers store, so that scripts signed with this certificate will always run (for the current user) in future. So you, or any other user, could always run the script, but not unknowingly; the user must (on an ad-hoc basis or for all future instances) explicitly accept the publisher certificate.
A user needs admin privilege to be able to administer machine certificates, so a non-admin user could not write a code-signing certificate to the machine’s Trusted Publishers store, and thus make so-signed scripts runnable by other users without them being aware of the scripts’ provenance.
What a user could actually achieve with such a script depends as usual on their level of privilege. Clearly, there is scope for a rogue SysAdmin to run amok, but when isn’t there?
It occurred to me that the way to solve this problem is really quite simple – you use Group Policy to define a Software Restriction Policy, applied at the appropriate OU level, which adds PS1 to the list of designated filetypes, and configure a certificate rule to allow those which are signed by a particular certificate (which could be from your own internal Certification Authority or from a public CA, of course). Strictly speaking, the rule applies to any of the designated filetypes, but presumably the certificate is used to sign only PowerShell scripts. You need to add a further path rule, to disallow all files of type PS1, from any location (otherwise PS1 files signed with some other certificate, which also happened to be in one of the allowed path locations, could, in principle, run). The certificate rule is applied first, thus allowing legitimate scripts, then the path rule, blocking any others. (This is an example where one would need to impose a Disallow rule even where the default security level is Disallow.)
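In outline, then, the intended policy comprises the following (the certificate named is whichever one you sign your scripts with – internal or public CA as discussed):

```
Designated file types:  add PS1
Certificate rule:       <your code-signing certificate>    Unrestricted
Path rule:              *.ps1                              Disallowed
```

Because certificate rules take precedence over path rules, legitimately signed scripts are allowed before the blanket *.ps1 rule is ever consulted.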
This is, I thought, a remarkably elegant solution. Unfortunately, it doesn’t work. PowerShell scripts are executable only from within a PowerShell session (or as a parameter to an invocation of PowerShell). This completely circumvents SRPs. It is not the case, as has been argued, that SRPs apply only to files invoked directly from Windows Explorer; they also apply to files invoked from within a command prompt session – a situation exactly analogous to invoking them from within PowerShell, I would have thought. I raised the issue with Microsoft. It has taken a very long time to get an answer, which is that PowerShell ‘is designed to work that way’. This sounds to me merely a rephrasing of the classic developer’s response to an unforeseen interaction or side-effect – ‘it’s not a bug, it’s a feature!’ This article was written several months ago, and publication has been delayed awaiting resolution of this issue. I record the non-resolution with profound disappointment.
There is a further stark, outstanding issue with PowerShell code signing. I can see no straightforward (nor circuitous, either) way of restricting the availability of PowerShell cmdlets, so anyone who has the necessary privilege to run an interactive PS session can simply issue set-executionpolicy unrestricted and run any damn script they want. Have I missed something? If anyone knows how to fix this, do please post a comment. I expect the next version of PowerShell will include some functionality such as roles, to control access to the various facilities (hint, hint).
I have remarked before on the strange aspect of PKI whereby, when writing or speaking about it, the natural and straightforward way of expressing something is often not strictly correct (can indeed be the exact reverse of the truth), but taking care to be precisely accurate sounds awfully long-winded, pedantic and precious. (There’s an example of this in the second paragraph of the PowerShell Code Signing – The Problem section – you don’t use a code-signing certificate to sign code, but rather the private key to which it corresponds; you use the certificate, i.e. the public key contained in it, to validate code so signed.)
In the discussion of the original point which gave rise to this article, the statement was made that certificates issued by any of the Trusted Root Certification Authorities (by implication, public CAs such as Verisign) ‘are automatically trusted’. People familiar with PKI will immediately understand what is meant by that, but anyone unfamiliar with the subject will with total certainty understand the statement literally, and thus incorrectly. In fact, the only significance of a CA’s presence in the trusted root CAs store is the availability in the current environment of that CA’s root certificate itself, and thus that one has the ability, when presented with a certificate issued by that CA, to validate it and be certain (given that one trusts the CA) that it is genuine. Having done so, one may then, with confidence, decide to accept it. But, unless some explicit enabling action has already been taken (such as, in the present context, copying the certificate to the trusted publishers store) the certificate will not be accepted automatically. (You see what I mean by long-winded, pedantic and precious – but that is, as strictly as I can express it, what it means precisely.)
Thanks to Adrian J. Beasley for providing yet another excellent article, this one titled Software Restriction Policies.