
January, 2012

  • Understanding the AD FS 2.0 Proxy

    Hi guys, Joji Oshima here again. I have had several cases involving the AD FS 2.0 Proxy and there is some confusion on what it is, why you should use it, and how it works. If you are looking for basic information on AD FS, I would check out the AD FS 2.0 Content Map. The goal of this post is to go over the purpose of the AD FS 2.0 Proxy, why you would want to use it, and how it fits with the other components.

    What is the AD FS 2.0 Proxy?

    The AD FS 2.0 Proxy is a service that brokers a connection between external users and your internal AD FS 2.0 server. It acts as a reverse proxy and typically resides in your organization’s perimeter network (aka DMZ). As far as the user is concerned, they do not know they are talking to an AD FS proxy server, as the federation services are accessed by the same URLs. The proxy server handles three primary functions.

    • Assertion provider: The proxy accepts token requests from users and passes the information over SSL (default port 443) to the internal AD FS server. It receives the token from the internal AD FS server and passes it back to the user.
    • Assertion consumer: The proxy accepts tokens from users and passes them over SSL (default port 443) to the internal AD FS server for processing.
    • Metadata provider: The proxy will also respond to requests for Federation Metadata.

    Why use an AD FS 2.0 Proxy?

    The AD FS 2.0 Proxy is not a requirement for using AD FS; it is an additional feature. The reason you would install an AD FS 2.0 Proxy is that you do not want to expose the actual AD FS 2.0 server to the Internet. AD FS 2.0 servers are domain-joined resources, while the AD FS 2.0 Proxy does not have that requirement. If all your users and applications are internal to your network, you do not need to use an AD FS 2.0 Proxy. If there is a requirement to expose your federation service to the Internet, it is a best practice to use an AD FS 2.0 Proxy.

    How does the AD FS 2.0 Proxy Work?

    The claims-based authentication model expects the user to have direct access to the application server and the federation server(s). If you have an application (or web service) that is Internet facing, and the AD FS server is on the internal network, this can cause an issue. A user on the Internet can contact the application (or web service), but when the application redirects the user to the AD FS server, it will not be able to connect to the internal AD FS server. Similarly, with IDP-Initiated Sign on, a user from the Internet would not be able to access the sign on page. One way to get around this would be to expose the AD FS server to the Internet; a better solution is to utilize the AD FS 2.0 Proxy service.

    In order to understand how the proxy works, it is important to understand the basic traffic flow for a token request. I will be using a simple example where there is a single application (relying party) and a single federation server (claims provider). Below you will see an explanation of the traffic flow for an internal user and for an external user in a WS-Federation Passive flow example.

    image

    For an internal user (see diagram above):

    1. An internal user accesses claims aware application

    2. The application redirects the user to the AD FS server

    3. The AD FS server authenticates the user and performs an HTTP post to the application where the user gains access

    Note: The redirects are performed using a standard HTTP 302 redirect, and the posts are performed using a standard HTTP POST.

    image

    For an external user (see diagram above):

    1. An external user accesses claims aware application

    2. The application redirects the user to the AD FS 2.0 proxy server

    3. The proxy server connects to the internal AD FS server and the AD FS server authenticates the user

    4. The AD FS 2.0 proxy performs an HTTP Post to the application where the user gains access

    Note: Depending on the infrastructure configuration, complexity, protocol, and binding the traffic flow can vary.

    Basic Configuration of the AD FS 2.0 Proxy:

    Configure DNS:

    Configuring DNS is a very important step in this process. Applications, services, and other federation service providers do not know if there is a proxy server, so all redirects to the federation server will have the same DNS name (ex: https://sts.contoso.com/adfs/ls/) which is also the federation service name. See this article for guidance on selecting a Federation Service Name. It is up to the administrator to configure the internal DNS to point to the IP address of the internal AD FS server or internal AD FS server farm load balancer, and configure the public DNS to point to the IP address of the AD FS 2.0 Proxy Server or AD FS Proxy server farm load balancer. This way, internal users will directly contact the AD FS server, and external users will hit the AD FS 2.0 proxy, which brokers the connection to the AD FS server. If you do not have a split-brain DNS environment, it is acceptable and supported to use the HOSTS file on the proxy server to point to the internal IP address of the AD FS server.
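
    For example, if you are not running split-brain DNS, a quick way to add that HOSTS entry on the proxy is something like the following (a sketch from an elevated PowerShell prompt; 10.0.0.10 is a placeholder for your internal AD FS server or farm VIP, and sts.contoso.com is the example federation service name from above):

    # Resolve the federation service name to the internal AD FS server/farm from the proxy
    Add-Content -Path "$env:windir\System32\drivers\etc\hosts" -Value "10.0.0.10`tsts.contoso.com"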

    SSL Certificates:

    The internal AD FS server can have a certificate issued by your enterprise CA (or a public authority), and should have a subject name that is the same as the Federation Service Name/DNS name that it is accessed with. Using Subject Alternative Names (SAN) and wildcards is supported as well. The AD FS 2.0 proxy needs to have an SSL certificate with the same subject name. Typically, you want this certificate to be from a public authority that is trusted and a part of the Microsoft Root Certificate Program. This is important because external users may not inherently trust your internal enterprise CA. This article can step you through replacing the certificates on the AD FS 2.0 server.

    Firewalls:

    • Internal and external users will need to access the application over SSL (typically port 443)
    • The AD FS 2.0 Proxy Server will need to access the internal AD FS server over SSL (default port 443)
    • Internal users will need to access the internal Federation Service on its SSL port (TCP/443 by default)
    • External users will need to access the Federation Service Proxy on its SSL port (TCP/443 by default)

    How does the AD FS 2.0 Proxy trust work?

    The Proxy Trust Wizard prompts for admin credentials for the internal federation service (AD FS). These credentials are not stored; they are used once to issue a proxy trust token (which is simply a SAML assertion) that “authenticates” the proxy to the internal federation service. The internal AD FS server knows about the proxy trust token and knows that when it receives a proxy request, that request must be accompanied by the proxy trust token.

    The proxy trust token has a configurable lifetime, and is self-maintained by the proxy and the federation service. The only time you need to touch it is if a server is lost or you need to revoke the proxy trust.

    When a proxy trust is revoked, the proxy trust token is invalidated and the federation service will no longer accept proxy requests from proxies who are attempting to utilize that token. You must re-run the Proxy Trust Wizard on ALL proxies in order to re-establish trust.

    Using PowerShell:

    Using PowerShell is an easy way to view and set configuration items regarding the proxy server. I’ve listed the more common commands and parameters used to configure the AD FS 2.0 proxy.

    On the proxy server:
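
    A sketch, assuming the AD FS 2.0 snap-in (Microsoft.Adfs.PowerShell) and its proxy cmdlets; verify the cmdlet names against your installation:

    # Load the AD FS 2.0 cmdlets (snap-in model)
    Add-PSSnapin Microsoft.Adfs.PowerShell

    # View the proxy configuration (federation service host name, ports, forward proxy)
    Get-ADFSProxyProperties

    # Point the proxy at the federation service name
    Set-ADFSProxyProperties -HostName "sts.contoso.com"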

    On the federation server:
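
    Again a sketch, assuming the same snap-in on the federation server; the proxy trust token lifetime mentioned above is exposed here (the value is in minutes):

    Add-PSSnapin Microsoft.Adfs.PowerShell

    # View the federation service configuration, including ProxyTrustTokenLifetime
    Get-ADFSProperties

    # Adjust the proxy trust token lifetime if needed
    Set-ADFSProperties -ProxyTrustTokenLifetime 480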

    Conclusion:

    I hope you now have a better idea of how the AD FS 2.0 Proxy works and the basics on how it should be configured. If you want to dig deeper, there are some excellent resources for the AD FS 2.0 Proxy.

    Planning Federation Server Proxy Placement
    http://technet.microsoft.com/en-us/library/dd807130%28WS.10%29.aspx

    Certificate Requirements for Federation Server Proxies
    http://technet.microsoft.com/en-us/library/dd807054%28WS.10%29.aspx

    AD FS 2.0: How to Replace the SSL, Service Communications, Token-Signing, and Token-Decrypting Certificates
    http://social.technet.microsoft.com/wiki/contents/articles/2554.aspx

    Troubleshooting federation server proxy problems with AD FS 2.0
    http://technet.microsoft.com/en-us/library/adfs2-troubleshooting-federation-server-proxy-problems%28WS.10%29.aspx

    AD FS 2.0: Guidance for Selecting and Utilizing a Federation Service Name
    http://social.technet.microsoft.com/wiki/contents/articles/4177.aspx

    AD FS 2.0 Proxy Management
    http://blogs.msdn.com/b/card/archive/2010/06/02/ad-fs-2-0-proxy-management.aspx

    [MS-SAMLPR]: Security Assertion Markup Language (SAML) Proxy Request Signing Protocol Specification
    http://msdn.microsoft.com/en-us/library/ff470131(v=PROT.13).aspx

    [MS-MFPP]: Federation Service Proxy Protocol Specification
    http://msdn.microsoft.com/en-us/library/dd357118(v=PROT.13).aspx

    AD FS 2.0 Cmdlets in Windows PowerShell
    http://technet.microsoft.com/en-us/library/ee892329.aspx

    Joji “happy as a claim” Oshima

  • RPC over IT/Pro

    Hi folks, Ned here again to talk about one of the most commonly used – and least understood – network protocols in Windows: Remote Procedure Call. Understanding RPC is a foundation for any successful IT Professional. It’s integral to distributed systems like Active Directory, Exchange, SQL, and System Center. The administrator who has never run into RPC configuration issues is either very new or very lucky.

    Today I attempt to explain the protocol in practical terms. As always, the best way to troubleshoot is with an understanding of how things are supposed to work, so that when it fails the reasons are obvious.  If you have a metered or capped Internet connection, read this off hours – it’s a biggee.

    Some context

    The RPC concept has roots in ARPANET, but got its first business computing use – like so many others – at Xerox PARC as “Courier”. The Microsoft implementation is an extension of The Open Group’s DCE/RPC, sometimes called MSRPC. We further extended that into the Distributed Component Object Model (DCOM), which is RPC and COM. The Exchange folks heavily invested in RPC over HTTP. Microsoft also retains the legacy "RPC over SMB" system, often referred to as Named Pipes. That ends the brochure.

    As I began to learn RPC, the first problem I ran into was the documentation. It seemed to come in two forms:

    image
    Let’s do lunch – you like human?

    If you actually read the docs, you're let down in the details. It comes in two arrangements, both of which completely miss the IT boat:

    1. The “it’s all processes and libraries, get to coding” form:

    image
    See, it's just code!

    2. The “Jedi network magic” form:

    image
    These aren't the computers you're looking for… move along

    I find developers are often like Rain Man: specialist geniuses, bewildered by real life. This isn’t bad documentation, but IT pros aren’t the audience. The developers of RPC are providing a framework and since they live in a perfect world of design where nothing breaks, how it works is not important – they just want you to use the right APIs. The problem is I don’t care about the specifics of MIDL, stubs, or marshaling unless I’m at the point of debugging; I just want to know how it all works in practical networking terms. Then when it breaks, I have somewhere to start, and when I’m designing a distributed system, I’m not setting my customer up for headaches.

    Today I focus on MSRPC, as that’s the main RPC protocol of AD components. I may return someday to discuss the others, if you’re interested. And bribe me.

    The MSRPC details

    Let's start with an analogy: you meet a nice girl and really hit it off. Like an idiot, you manage to lose her phone number. You know that she works for Microsoft though, so you start by looking up the Charlotte office. You call and get a switchboard, so you ask for her by name. The operator tells you her number and then offers to transfer you – naturally, you say yes. Someone answers and you make sure it’s the nice girl by introducing yourself. You both exchange pleasantries, then make plans for dinner and a movie, with directions to the restaurant and a chat about the Flixster reviews. You hang up and think about what you’re going to say to keep her interested until the appetizers arrive. You called her on your mobile phone so you have the outgoing number saved in case you need to call back.

    There, now you understand MSRPC. No really, you do…

    1. A client application knows about a server application and wants to communicate with it.
    2. The client computer uses name resolution to locate the computer where that server application runs.
    3. The client app connects to an endpoint locator and requests access to the server application.
    4. The endpoint locator provides that info and the client connects to the server with an initial conversation.
    5. The client and server apps exchange instructions and data.
    6. The client and server apps disconnect.
    7. The client computer caches the name resolution results and connection information, which can save time when reconnecting later.

    RPC allows a client application to let other computers work on its behalf, offloading processing to more powerful centralized servers. Instead of sending real functions over the network, the client tells the server what functions to run, and then the server sends the data back. This has nothing to do with the OS: some of these applications can be both client and server – for instance, Active Directory multi-master replication. That RPC application is LSASS.EXE. I’m going to use it as our sample app.

    image

    There are a few important terms to understand:

    • Endpoint mapper – a service listening on the server, which guides client apps to server apps by port and UUID
    • Tower – describes the RPC protocol, to allow the client and server to negotiate a connection
    • Floor – the contents of a tower with specific data like ports, IP addresses, and identifiers
    • UUID – a well-known GUID that identifies the RPC application. The UUID is what you use to see a specific kind of RPC application conversation, as there are likely to be many
    • Opnum – the identifier of a function that the client wants the server to execute. It’s just a hexadecimal number, but a good network analyzer will translate the function for you. MSDN can too. If neither knows, your application vendor must tell you
    • Port – the communication endpoints for the client and server applications
    • Stub data – the information given to functions and data exchanged between the client and server. This is the payload; the important part

    There’s a lot more but we’re getting into developer country. I know it sounds like jabber, so let’s dissect this with a real-world example using our old friend NetMon and the latest open source parsers.

    Back to reality

    Here I have two DCs in the same AD site, named WIN2008R2-01 and WIN2008R2-02, with respective IP addresses of 10.0.0.101 and 10.0.0.102. I reboot DC2 and have a network capture running on DC1. I create a brand new test user and let it replicate, then I stop the capture. It’s critical that the network capture sees the whole conversation or it will be a mess to analyze; if possible, captures should always be running on both client and server, but in this case, that’s not possible due to the reboot.

    image

    When you first examine AD replication traffic in NetMon (like above) it looks like Greek. What the heck is a stub parser? DRSR?

    Open the Options menu and select Parser Profiles. The reason you see the “Windows stub parser” messages is that by default, NetMon uses a balanced set of parsers designed for limited analysis without packet loss.

    image

    When analyzing captures on your desktop, set the active parser to “Windows” and you get the most detail.

    image

    While you’re in the Options, I also recommend configuring color filters. Since I am examining AD replication, I want visual cues for DRSR (Directory Replication Service Remote protocol), EPM (RPC Endpoint Mapper), MSRPC, and DNS. This makes skimming a capture easier.

    image

    Now I add a simple filter of: msrpc. Better. Let’s start deciphering:

    image

    Right away, we see the endpoint mapper request above. The tower for Directory Replication is in that request, using the UUID E3514235-4B06-11D1-AB04-00C04FC2DCD2 (that's how Netmon knows to parse it, by the way). It is connecting to TCP port 135. This happens shortly after LSASS.EXE starts, as domain controllers are nearly always talking about replication.

    Naturally, there is a response, and it contains several key ingredients:

    image

    You can see the towers - there may be more than one - and the floors in each tower with their ports. Importantly, you also see the status of the attempted connection. And a specific server port is listed. That port may be dynamic or static, depending on the application’s configuration.
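
    As an aside, if you want to see which ephemeral range those dynamic server ports come from on a modern Windows box, netsh will tell you (this is the OS-wide dynamic port range, not anything RPC-specific):

    netsh int ipv4 show dynamicport tcp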

    Now the client application opens a local client port (again, maybe dynamic, maybe static) and binds to that new application port, using security; the original connection, by default, did not require special permissions - EPM is a switchboard, remember. Because this is MSRPC and domain controllers, this means Kerberos and packet privacy are required. This bind phase below is negotiation.

    image

    image

    The server responds with the (hopefully) successful negotiation, providing details about which security protocols were selected for further encryption of the traffic. The NegState field shows how this is not yet complete, but things are proceeding as planned.

    image

    This bind was the negotiation. What follows is the completion of the authentication and encapsulation phase, called an ALTER_CONTEXT operation. If all goes well, the authentication is accepted and RPC application communication proceeds with some nice secure packet payloads.

    image

    Everything after this point is application… stuff. RPC connected from a client port to a server port and then communicates along that "channel" for the rest of the conversation. The two halves of the application send each other requests and responses, with stub data used by the application's functions.

    Every application is different, but once you know each one's rules, it will work in a (relatively) predictable fashion. Since this is the well-documented Directory Replication Services application, what happens next is the DC creates a context handle, called a DRSBIND. It then does some work. Let's take a look at one example of the work by switching the NetMon filter to just DRSR, then applying it to our scenario.

    image

    Netmon is politely translating all of these RPC functions above into semi-intelligible words, like DRSBind, DRSReplicaSync, and DRSGetNCChanges. It knows that when there is an opnum it understands for a given protocol, it means an RPC function that the client is telling the server to run remotely on the client's behalf.

    If you examine one of those packets, you see that the data itself is encrypted (good!), but with knowledge of the opnum's purpose and that RPC reached this stage, you have a decent idea what it is doing or how to look it up based on the UUID and Opnum information, even if your network parsers are terrible. In this case:

    http://msdn.microsoft.com/en-us/library/cc228532(v=PROT.13).aspx

    • IDL_DRSBind (Opnum 0) – Creates a context handle necessary to call any other method in this interface.
    • IDL_DRSReplicaSync (Opnum 2) – Triggers replication from another DC.
    • IDL_DRSGetNCChanges (Opnum 3) – Replicates updates from an NC replica on the server.
    • IDL_DRSCrackNames (Opnum 12) – Looks up each of a set of objects in the directory and returns it to the caller in the requested format.
    • IDL_DRSUnbind (Opnum 1) – Destroys a context handle previously created by the IDL_DRSBind method.

    image

    Importantly, you know that RPC and the network appear to be functioning correctly, so any application problems are likely inside the application itself. If the application has internal logging, you can use these network captures to correlate each opnum request/response to real work, and perhaps see where things are failing internally. If the application doesn’t have good security, you can see exactly what it's doing - but so can anyone else. Probably something to bring to the third party vendor's attention, as it will not be Microsoft.

    A polite application will tear down the connection with noticeable "unbind" traffic, and perhaps even send a network reset, but many simply abandon the conversation and let Windows deal with it later.

    image

    A final note: a domain controller has a great many RPC conversations going with multiple partners; always ensure you are looking at the same conversations by filtering based on IP addresses and ports, as well as your network analysis tool's conversation ID system. NetMon makes this pretty easy:

    image

    And we're done. See? It’s just a phone call with a nice girl from Microsoft. Don’t be intimidated when she knows more about computers than you do, bub.

    Until next time.

    Ned "really pedantic chatter" Pyle

  • Friday Mail Sack: Carl Sandburg Edition

    Hi folks, Jonathan again. Ned is taking some time off visiting his old stomping grounds – the land of Mother-in-Laws and heart-breaking baseball. Or, as Sandburg put it:

    “Hog Butcher for the World,
    Tool Maker, Stacker of Wheat,
    Player with Railroads and the Nation's Freight Handler;
    Stormy, husky, brawling,
    City of the Big Shoulders”

    Cool, huh?

    Anyway, today we talk about:

    And awayyy we go!

    Question

    When thousands of clients are rebooted for Windows Update or other scheduled tasks, my domain controllers log many KDC 7 System event errors:

    Log Name: System
    Source: Microsoft-Windows-Kerberos-Key-Distribution-Center
    Event ID: 7
    Level: Error
    Description:

    The Security Account Manager failed a KDC request in an unexpected way. The error is in the data field.

    Error 170000C0

    I’m trying to figure out if this is a performance issue, if the mass reboots are related, if my DCs are over-utilized, or something else.

    Answer

    That extended error (the data field, read as a little-endian DWORD) is:

    C0000017 = STATUS_NO_MEMORY - {Not Enough Quota} - Not enough virtual memory or paging file quota is available to complete the specified operation.

    The DCs are being pressured with so many requests that they are running out of Kernel memory. We see this very occasionally with applications that make heavy use of the older SAMR protocol for lookups (instead of say, LDAP). In some cases we could change the client application's behavior. In others, the customer just had to add more capacity. The mass reboots alone are not the problem here - it's the software that runs at boot up on each client that is then creating what amounts to a denial of service attack against the domain controllers.

    Examine one of the client computers mentioned in the event for all non-Windows-provided services, scheduled tasks that run at startup, SCCM/SMS at boot jobs, computer startup scripts, or anything else that runs when the computer is restarted. Then get promiscuous network captures of that computer starting (any time, not en masse) while also running Process Monitor in boot mode, and you'll probably see some very likely candidates. You can also use SPA or AD Data Collector sets (http://blogs.technet.com/b/askds/archive/2010/06/08/son-of-spa-ad-data-collector-sets-in-win2008-and-beyond.aspx) in combination with network captures to see exactly what protocol is being used to overwhelm the DC, if you want to troubleshoot the issue as it happens. Probably at 3AM, that sounds sucky.

    Ultimately, the application causing the issue must be stopped, reconfigured, or removed - the only alternative is to add more DCs as a capacity Band-Aid or stagger your mass reboots.

    Question

    Is it possible to have 2003 and 2008 servers co-exist in the same DFS namespace? I don’t see it documented either “for” or “against” on the blog anywhere.

    Answer

    It's totally ok to mix OSes in the DFSN namespace, as long as you don't use Windows Server 2008 ("V2 mode") namespaces, which won't allow any Win2003 servers. If you are using DFSR to replicate the data, make sure all servers have the latest DFSR hotfixes (here and here), as there are incompatibilities in DFSR that these hotfixes resolve.

    Question

    Should I create DFS namespace folders (used by the DFS service itself) under NTFS mount points? Is there any advantage to this?

    Answer

    DFSN management tools do not allow you to create DFSN roots and links under mount points ordinarily, and once you do so through alternate hax0r means, they are hard to remove (you have to use FSUTIL). Ergo, do not do it – the management tools blocking you means that it is not supported.

    There is no real value in placing the DFSN special folders under mount points - the DFSN special folders consume no space, do not contain files, and exist only to provide reparse point tags to the DFSN service and its file IO driver goo. By default, they are configured on the root of the C: drive in a folder called c:\dfsroots. That ensures that they are available when the OS boots. If clustering you'd create them on one of your drive-lettered shared disks.

    Question

    How do you back up the Themes folder using USMT4 in Windows 7?

    Answer

    The built-in USMT migration code copies the settings but not the files, as it knows the files will exist somewhere on the user’s source profile and that those are being copied by the migdocs.xml/miguser.xml. It also knows that the Themes system will take care of the rest after migration; the Themes system creates the transcoded image files using the theme settings and copies the image files itself.

    Note here how after scanstate, my USMT store’s Themes folder is empty:

    clip_image001

    After I loadstate that user, the Themes system fixed it all up in that user’s real profile when the user logged on:

    clip_image002

    However, if you still specifically need to copy the Themes folder intact for some reason, here’s a sample custom XML file:

    <?xml version="1.0" encoding="UTF-8"?>
    <migration urlid="http://www.microsoft.com/migration/1.0/migxmlext/migratethemefolder">
      <component type="Documents" context="User">
        <!-- sample theme folder migrator -->
        <displayName>ThemeFolderMigSample</displayName>
        <role role="Data">
          <rules>
            <include filter='MigXmlHelper.IgnoreIrrelevantLinks()'>
              <objectSet>
                <pattern type="File">%CSIDL_APPDATA%\Microsoft\Windows\Themes\* [*]</pattern>
              </objectSet>
            </include>
          </rules>
        </role>
      </component>
    </migration>
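
    To use a file like this, include it alongside the default migration XML files on the command line; something like the following (themefolder.xml is just a placeholder name for wherever you saved the sample):

    scanstate.exe c:\store /o /c /i:migdocs.xml /i:migapp.xml /i:themefolder.xml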

    And here it is in action:

    clip_image004

    Question

    I've recently been working on extending my AD schema with a new back-linked attribute pair, and I used the instructions on this blog to auto-generate the linkIDs for my new attributes. Confusingly, the resulting linkIDs are negative values (-912314983 and -912314984). The attributes and backlinks seem to work as expected, but when looking at the MSDN definition of the linkID attribute, it specifically states that the linkID should be a positive value. Do you know why I'm getting a negative value, and if I should be concerned?

    Answer

    The only hard and fast rule is that the forward link (flink) be an even number and the backward link (blink) be the flink's ID plus one. In your case, if the flink is -912314984 then the blink had better be -912314983, which I assume is the case since things are working. But, we were curious when you posted the linkID documentation from MSDN so we dug a little deeper.

    The fact that your linkIDs are negative numbers is correct and expected, and is the result of a feature called AutoLinkID. Automatically generated linkIDs are in the range of 0xC0000000-0xFFFFFFFC (-1,073,741,824 to -4). This means that it is a good idea to use positive numbers if you are going to set the linkID manually. That way you are guaranteed not to conflict with automatically generated linkIDs.

    The bottom line is, you're all good.

    Question

    I am trying to delegate permissions to the DBA team to create, modify, and delete SPNs, since they're the team that swaps out the local accounts SQL is installed under for the domain service accounts we create to run SQL.

    Documentation on the Internet has led me down the rabbit hole to no end.  Can you tell me how this is done in a W2K8 R2 domain and a W2K3 domain?

    Answer

    So you will want to delegate a specific group of users -- your DBA team -- permissions to modify the SPN attribute of a specific set of objects -- computer accounts for servers running SQL server and user accounts used as service accounts under which SQL Server can run.

    The easiest way to accomplish this is to put all such accounts in one OU, ie OU=SQL Server Accounts, and run the following commands:

    Dsacls "OU=SQL Server Accounts,DC=corp,DC=contoso,DC=com" /I:S /G "CORP\DBA Team":WPRP;servicePrincipalName;user
    Dsacls "OU=SQL Server Accounts,DC=corp,DC=contoso,DC=com" /I:S /G "CORP\DBA Team":WPRP;servicePrincipalName;computer

    These two commands will grant the DBA Team group permission to read and write the servicePrincipalName attribute on user and computer objects in the SQL Server Accounts OU.

    Your admins should then be able to use setspn.exe to modify that property on the designated accounts.
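
    For example, once the delegation is in place, a DBA could register an SPN for a SQL instance with something along these lines (hypothetical host, port, and service account; on older versions of setspn use -a instead of -s, which skips the duplicate check):

    setspn -s MSSQLSvc/sqlserver01.corp.contoso.com:1433 CORP\svc-sql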

    But…what if you have a large number of accounts spread across multiple OUs? The above solution only works well if all of your accounts are concentrated in a few (preferably one) OUs. In this case, you basically have two options:

    1. You can run the two commands specifying the root of the domain as the object, but you would be delegating permissions for EVERY user and computer in the domain. Do you want your DBA team to be able to modify accounts for which they have no legitimate purpose?
    2. Compile a list of specific accounts the DBA team can manage and modify each of them individually. That can be done with a single command line. Create a text file that contains the DNs of each account for which you want to delegate permissions and then use the following command:

      for /f "tokens=*" %i in (object-list.txt) do dsacls "%i" /G "CORP\DBA Team":WPRP;servicePrincipalName

    None of these are really great options, however, because you’re essentially giving a group of non-AD Administrators the ability to screw up authentication to what are perhaps critical business resources. You might actually be better off creating an expedited process whereby these DBAs can submit a request to a real Administrator who already has permissions to make the required changes, as well as the experience to verify such a change won’t cause any problems.

    Author’s Note: This gentleman pointed out in a reply that these DBAs wouldn’t want him messing with tables, rows and the SA account, so he doesn’t want them touching AD. I thought that was sort of amusing.

    Question

    What is PowerShell checking when you run get-adcomputer -properties * -filter * | format-table Name,Enabled?  Is Enabled an attribute, a flag, a bit, a setting?  What, if anything, would that setting show up as in something like ADSIEdit.msc?

    I get that stuff like samAccountName, sn, telephonenumber, etc.  are attributes but what the heck is enabled?

    Answer

    All objects in PowerShell are PSObjects, which essentially wrap the underlying .NET or COM objects and expose some or all of the methods and properties of the wrapped object. In this case, Enabled is a property ultimately inherited from the System.DirectoryServices.AccountManagement.AuthenticablePrincipal .NET class. This answer isn't very helpful, however, as it just moves your search for answers from PowerShell to the .NET Framework, right? Ultimately, you want to know how a computer's or user's account state (enabled or disabled) is stored in Active Directory.

    Whether or not an account is disabled is reflected in the appropriate bit being set on the object's userAccountControl attribute. Check out the following KB: How to use the UserAccountControl flags to manipulate user account properties. You'll find that the second least significant bit (0x2) of the userAccountControl bitmask is called ACCOUNTDISABLE, and reflects the appropriate state; 1 is disabled and 0 is enabled.

    If you find that you need to use an actual LDAP query to search for disabled accounts, then you can use a bitwise filter. The appropriate LDAP filter would be:

    (UserAccountControl:1.2.840.113556.1.4.803:=2)
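
    If you would rather stay in PowerShell, the same bitwise filter plugs straight into Get-ADUser, or you can let Search-ADAccount do the work (a quick sketch):

    # Disabled user accounts via the bitwise LDAP filter
    Get-ADUser -LDAPFilter "(userAccountControl:1.2.840.113556.1.4.803:=2)"

    # The same answer with less typing
    Search-ADAccount -AccountDisabled -UsersOnly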

    Other stuff

    I watched this and, despite the lack of lots of moving arms and tools, had sort of a Count Zero moment:

    And just for Ned (because he REALLY loves this stuff!): Kittens!

    No need to rush back, dude.

    Jonathan “Payback is a %#*@&!” Stephens

  • Friday Mail Sack: Best Post This Year Edition

    Hi folks, Ned here and welcoming you to 2012 with a new Friday Mail Sack. Catching up from our holiday hiatus, today we talk about:

    So put down that nicotine gum and get to reading!

    Question

    Is there an "official" stance on removing built-in admin shares (C$, ADMIN$, etc.) in Windows? I’m not sure this would make things more secure or not. Larry Osterman wrote a nice article on its origins but doesn’t give any advice.

    Answer

    The official stance is from the KB that states how to do it:

    Generally, Microsoft recommends that you do not modify these special shared resources.

    Even better, here are many things that will break if you do this:

    Overview of problems that may occur when administrative shares are missing
    http://support.microsoft.com/default.aspx?scid=kb;EN-US;842715

    That’s not a complete list; it wasn’t updated for Vista/2008 and later. It’s so bad though that there’s no point, frankly. Removing these shares does not increase security, as only administrators can use those shares and you cannot prevent administrators from putting them back or creating equivalent custom shares.

    This is one of those “don’t do it just because you can” customizations.

    Question

    The Windows PowerShell Get-ADDomainController cmdlet finds DCs, but not much actual attribute data from them. The examples on TechNet are not great. How do I get it to return useful info?

    Answer

    You have to use another cmdlet in tandem, without pipelining: Get-ADComputer. The Get-ADDomainController cmdlet is good mainly for searching. The Get-ADComputer cmdlet, on the other hand, does not accept pipeline input from Get-ADDomainController. Instead, you use a pseudo “nested function” to first find the PDC, then get data about that DC. For example, (this is all one command, wrapped):

    get-adcomputer (get-addomaincontroller -Discover -Service "PrimaryDC").name -property * | format-list operatingsystem,operatingsystemservicepack

    When you run this, PowerShell first processes the commands within the parentheses, which finds the PDC. Then it runs get-adcomputer, using the property of “Name” returned by get-addomaincontroller. Then it passes the results through the pipeline to be formatted. So it’s 1 2 3.


    Voila. Here I return the OS of the PDC, all without having any idea which server actually holds that role:

    clip_image002[6]

    Moreover, before the Internet clubs me like a baby seal: yes, a more efficient way to return data is to ensure that the –property list contains only those attributes desired:

    image
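
    In other words, something like this sketch of the same command, asking only for the two attributes being displayed:

    get-adcomputer (get-addomaincontroller -Discover -Service "PrimaryDC").name -property operatingsystem,operatingsystemservicepack | format-list operatingsystem,operatingsystemservicepack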

    Get-ADDomainController can find all sorts of interesting things via its –service argument:

    PrimaryDC
    GlobalCatalog
    KDC
    TimeService
    ReliableTimeService
    ADWS

    The Get-ADDomain cmdlet can also find FSMO role holders and other big picture domain stuff. For example, the RID Master you need to monitor.
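
    For instance, a couple of quick one-liners (Get-ADForest holds the forest-level roles):

    # Domain-level FSMO role holders
    (Get-ADDomain).RIDMaster
    (Get-ADDomain).PDCEmulator

    # Forest-level FSMO role holders
    (Get-ADForest).SchemaMaster
    (Get-ADForest).DomainNamingMaster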

    Question

    I know about Kerberos “token bloat” with user accounts that are a member of too many groups. Does this also affect computers added to too many groups? What would be some practical effects of that? We want to use a lot of them in the near future for some application … stuff.

    Answer

    Yes, things will break. To demonstrate, I used PowerShell to create 2000 groups in my domain and added a computer named “7-01” to all of them:

    image
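
    If you want to reproduce this in a lab, here is a rough sketch of the kind of script I used (the group names and OU are arbitrary placeholders; please do not run this anywhere you care about):

    # Create 2000 groups and add computer 7-01 to every one of them
    $computer = Get-ADComputer "7-01"
    1..2000 | ForEach-Object {
        $group = New-ADGroup -Name "TokenBloatGroup$_" -GroupScope Global -Path "OU=TestGroups,DC=contoso,DC=com" -PassThru
        Add-ADGroupMember -Identity $group -Members $computer
    }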

    I then restart the 7-01 computer. Uh oh, the System Event log is un-pleased. At this point, 7-01 is no longer applying computer group policy, getting startup scripts, or allowing any of its services to logon remotely to DCs:

    image 

    Oh, and check out this gem:

    image

    I’m sure no one will go on a wild goose chase after seeing that message. Applications will be freaking out even more, likely with the oh-so-helpful error 0x80090350:

    “The system detected a possible attempt to compromise security. Please ensure that you can contact the server that authenticated you.”

    Don’t do it. MaxTokenSize is probably in your future if you do, and it has limits that you cannot design your way out of. IT uniqueness is bad.

    Question

    We have XP systems using two partitions (C: and D:) migrating to Windows 7 with USMT. The OS is on C and the user profiles are on D.  We’ll use that D partition to hold the USMT store. After migration, we’ll remove the second partition and expand the first partition to use the space freed up by the second partition.

    When restoring via loadstate, will the user profiles end up on C or on D? If the profiles end up on D, we will not be able to delete the second partition obviously, and we want to stop doing that regardless.

    Answer

    You don’t have to do anything; it just works. Because the new profile destination is on C, USMT just slots everything in there automagically :). The profiles will be on C and nothing will be on D except the store itself and any non-profile folders*:

    clip_image001
    XP, before migrating

    clip_image001[5]
    Win7, after migrating

    If users have any non-profile folders on D, that will require a custom rerouting xml to ensure they are moved to C during loadstate and not obliterated when D is deleted later. Or just add a MOVE line to whatever DISKPART script you are using to expand the partition.

    Question

    Should we stop the DFSR service before performing a backup or restore?

    Answer

    Manually stopping the DFSR service is not recommended. When backing up using the DFSR VSS Writer – which is the only supported way – replication is stopped automatically, so there’s no reason to stop the service or need to manually change replication:

    Event ID=1102
    Severity=Informational
    The DFS Replication service has temporarily stopped replication because another
    application is performing a backup or restore operation. Replication will resume
    after the backup or restore operation has finished.

    Event ID=1104
    Severity=Informational
    The DFS Replication service successfully restarted replication after a backup
    or restore operation.

    Another bit of implied evidence – Windows Server Backup does not stop the service.

    Stopping the DFSR service for extended periods leaves you open to the risk of a USN journal wrap. And what if someone/something thinks that the service being stopped is “bad” and starts it up in the middle of the backup? Probably nothing bad happens, but certainly nothing good. Why risk it?

    Question

    In an environment where AGPM controls all GPOs, what is the best practice when application setup routines make edits "under the hood" to GPOs, such as the Default Domain Controllers GPO? For example, Exchange setup makes changes to User Rights Assignment (SeSecurityPrivilege). Obviously if this setup process makes such edits on the live GPO in sysvol the changes will take effect, only to have those critical edits lost and overwritten the next time an admin re-deploys with AGPM.

    Answer

    [via Fabian “Wunderbar” Müller  – Ned]

    From my point of view:

    1. The Default Domain and Default Domain Controller Policies should be edited very rarely. Manual changes as well as automated changes (e.g. by the mentioned Exchange setup) should be well known and therefore the workaround in 2) should be feasible.

    2. After those planned changes are performed, you have to use “import from production” to bring the production GPO into the AGPM archive in order to reflect the production change in AGPM. Another way could be to periodically import the default policies from production, or to implement a manual / human process that requires an “import from production” before a change to these policies is made using AGPM.

    Not a perfect answer, but manageable.

    Question

    In testing the rerouting of folders, I took this example from TechNet and placed it in a separate custom.xml.  When using this custom.xml along with the other defaults (migdocs.xml and migapp.xml unchanged), the EngineeringDrafts folder is copied to %CSIDL_DESKTOP%\EngineeringDrafts, but there’s also a copy at C:\EngineeringDrafts on the destination computer.

    I assume this is not expected behavior.  Is there something I’m missing?

    Answer

    Expected behavior, pretty well hidden though:

    http://technet.microsoft.com/en-us/library/dd560751(v=WS.10).aspx

    If you have an <include> rule in one component and a <locationModify> rule in another component for the same file, the file will be migrated in both places. That is, it will be included based on the <include> rule and it will be migrated based on the <locationModify> rule

    That original rerouting article could state this more plainly, I think. Hardly anyone does this relativemove operation; it’s very expensive for disk space – one of those “you can, but you shouldn’t” capabilities of USMT. The first example also has an invalid character in it (the apostrophe in “user’s” on line 12, position 91 – argh!).

    Don’t just comment out those areas in migdocs though; you are then turning off most of the data migration. Instead, create a copy of the migdocs.xml and modify it to include your rerouting exceptions, then use that as your custom XML and stop including the factory migdocs.xml.

    There’s an example attached to this blog post down at the bottom. Note the exclude in the System context and the include/modify in the user context:

    image

    image

    Don’t just modify the existing migdocs.xml and keep using it un-renamed either; that becomes a versioning nightmare down the road.

    Question

    I'm reading up on CAPolicy.inf files, and it looks like there is an error in the documentation that keeps being copied around. TechNet lists RenewalValidityPeriod=Years and RenewalValidityPeriodUnits=20 under the "Windows Server 2003" sample. This is the opposite of the Windows 2000 sample, and intuitively the "PeriodUnits" should be something like "Years" or "Weeks", while the "Period" would be an integer value. I see this on AskDS here and here also.

    Answer

    [via Jonathan “scissor fingers” Stephens  – Ned]

    You're right that the two settings seem like they should be reversed, but unfortunately this is not correct. All of the *Period values can be set to Minutes, Hours, Days, Weeks, Months or Years, while all of the *PeriodUnits values should be set to some integer.

    Originally, the two types of values were intended to be exactly what one intuitively believes they should be -- *PeriodUnits was to be Day, Weeks, Months, etc. while *Period was to be the integer value. Unfortunately, the two were mixed up early in the development cycle for Windows 2000 and, once the error was discovered, it was really too late to fix what is ultimately a cosmetic problem. We just decided to document the correct values for each setting. So in actuality, it is the Windows 2000 documentation that is incorrect as it was written using the original specs and did not take the switch into account. I’ll get that fixed.

    Question

    Is there a way to control the number, verbosity, or contents of the DFSR cluster debug logs (DfsrClus_nnnnn.log and DfsrClus_nnnnn.log.gz in %windir%\debug)?

    Answer

    Nope, sorry. It’s all statically defined:

    • Severity = 5
    • Max log messages per log = 10000
    • Max number of log files = 999

    Question

    In your previous article you say that any registry modifications should be completed with a resource restart (take the resource offline and bring it back online), instead of a direct service restart. However, the official whitepaper (on page 16) says that the CA service should be restarted by using "net stop certsvc && net start certsvc".

    Also, I want to clarify about a clustered CA database backup/restore. Say a DB was damaged or destroyed, and I have a full backup of the CA DB. Before restoring, do I stop only the AD CS service resource (cluadmin.msc), or do I stop the CA service directly (net stop certsvc)?

    Answer

    [via Rob “there's a Squatch in These Woods” Greene  – Ned]

    The CertSvc service has no idea that it belongs to a cluster.  That’s why you set up the CA as a generic service within Cluster Administrator and configure the CA registry hive within Cluster Administrator.

    When you update the registry keys on the active CA cluster node, the Cluster service is monitoring the registry key changes.  When the resource is taken offline, the Cluster service makes a new copy of the registry keys so that the other node gets the update.  When you stop and start the CA service directly, the Cluster service has no idea why the service was stopped and started, since it is being done outside of the cluster, and those registry key settings are never updated on the stand-by node. General guidance around clusters is to manage the resource state (Stop/Start) within Cluster Administrator and not through Services.msc, NET STOP, SC, etc.

    As far as the CA Database restore: just logon to the Active CA node and run the certutil or CA MMC to perform the operation. There’s no need to touch the service manually.

    Other stuff

    The Microsoft Premier Field Organization has started a new blog that you should definitely be reading.

    Welcome to your nightmare (Thanks Mark!)

    Totally immature and therefore funny. Doubles as a gender test.

    Speaking of George Lucas re-imaginings, check out this awesome shot-by-shot comparison of Raiders and 30 other previous adventure films:


    Indy whipped first!

    I am completely addicted to Panzer Corps; if you ever played Panzer General in the 90’s, you will be too.

    Apropos throwback video gaming and even more re-imagining, here is Battlestar Galactica as a 1990’s RPG:

       
    The mail sack becomes meta of meta of meta

    Like Legos? Love Simon Pegg? This is for you.

    Best sci-fi books of 2011, according to IO9.

    What’s your New Year’s resolution? Mine is to stop swearing so much.

     

    Until next time,

    - Ned “$#%^&@!%^#$%^” Pyle

  • Friday Mail Sack: It’s a Dog’s Life Edition

    Hi folks, Ned here again with some possibly interesting, occasionally entertaining, and always unsolicited Friday mail sack. This week we talk some:

    Fetch!

    Question

    We use third party DNS but used to have Windows DNS on domain controllers; that service has been uninstalled and all that remains are the partitions. According to KB835397, deleting the ForestDNSZones and DomainDNSZones partitions is not supported. Soon we will have removed the last few old domain controllers hosting some of those partitions and replaced them with Windows Server 2008 R2 that never had Windows DNS. Are we getting ourselves in trouble or making this environment unsupported?

    Answer

    You are supported. Don’t interpret the KB too narrowly; there’s a difference between deletion of partitions used by DNS and never creating them in the first place. If you are not using MS DNS and the zones don’t exist, there’s nothing in Windows that should care about them, and we are not aware of any problems.

    This is more of a “cover our butts” article… we just don’t want you deleting partitions that you are actually using and naturally, we don’t rigorously test with non-MS DNS. That’s your job. ;-)

    Question

    When I run DCDIAG it returns all warning events for the system event log. I have a bunch of “expected” warnings, so this just clogs up my results. Can I change this behavior?

    Answer

    DCDIAG has no idea what the messages mean and has no way to control the output. You will need to suppress the events themselves in their own native fashion, if their application supports it. For example, if it’s a chatty combination domain controller/print server in a branch office that shows endless expected printer Warning messages, you’d use the steps here.

    If your application cannot be controlled, there’s one (rather gross) alternative to make things cleaner though, and that’s to use the FIND command in a few pipelines to remove expected events. For example, here I always see this write cache warning when I boot this DC, and I don’t really care about it:

    image

    Since I don’t care about these entries, I can use pipelined FIND (with /v to drop those lines) and narrow down the returned data. I probably don’t care about the time generated since DCDIAG only shows the last 60 minutes, nor the event string lines either. So with that, I can use this single wrapped line in a batch file:

    dcdiag /test:systemlog | find /I /v "eventid: 0x80040022" | find /I /v "the driver disabled the write cache on device" | find /i /v "event string:" | find /i /v "time generated:"

    clip_image002
    Whoops, I need to fix that user’s group memberships!

    Voila. I still get most of the useful data and nothing about that write cache issue. Just substitute your own stuff.

    See, I don’t always make you use Windows PowerShell for your pipelines. ツ

    Question

    If I walk into a new Windows Server 2008 AD environment cold and need to know if they are using DFSR or FRS for SYSVOL replication, what is the quickest way to tell?

    Answer

    Just run this DFSRMIG command:

    dfsrmig.exe /getglobalstate

    That tells you the current state of the SYSVOL DFSR migration.

    If it says:

    • “Eliminated”

    … they are using DFSR for SYSVOL. It will show this message even if the domain was built from scratch with a Windows Server 2008 domain functional level or higher and never performed a migration; the tool doesn’t know how to say “they always used DFSR from day one”.

    If it says:

    • “Prepared”
    • “Redirected”

    … they are mid-migration and using both FRS and DFSR, favoring one or the other for SYSVOL.

    If it says:

    • “Start”
    • “DFSR migration has not yet initialized”
    • “Current domain functional level is not Windows Server 2008 or above”

    … they are using FRS for SYSVOL.

    Question

    When using the DFSR WMI namespace “root\microsoftdfs” and class “dfsrvolumeconfig”, I am seeing weird results for the volume path. On one server it’s the C: drive, but on another it just shows a wacky volume GUID. Why?

    Answer

    DFSR is replicating data under a mount point. You can see this with any WMI tool (surprise! here’s PowerShell) and then use mountvol.exe to confirm your theory. To wit:

    image

    image
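
    If you want to check this yourself, the query looks something like the following (using the namespace and class from the question), and mountvol with no arguments lists the volume GUIDs and their mount points for comparison:

    # DFSR's view of each replicated volume
    Get-WmiObject -Namespace "root\microsoftdfs" -Class dfsrvolumeconfig | Format-List *

    # The OS view of volumes, GUIDs, and mount points
    mountvol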

    Question

    I notice that the "dsquery user -inactive x" command returns a list of user accounts that have been inactive for x number of weeks, but not days.  I suspect that this lack of precision is related to this older AskDS post where it is mentioned that the LastLogonTimeStamp attribute is not terribly accurate. I was wondering what your thoughts on this were, and if my only real recourse for precise auditing of inactive user accounts was by parsing the Security logs of my DCs for user logon events.

    Answer

    Your supposition about DSQUERY is right. What's worse, that tool's queries do not even include users that have never logged on in its inactive search. So it's totally misleading. If you use the AD Administrative Center query for inactive accounts, it uses this LDAP syntax, so it's at least catching everyone (note that your lastlogontimestamp UTC value would be different):

    (&(objectCategory=person)(objectClass=user)(!userAccountControl:1.2.840.113556.1.4.803:=2)(|(lastLogonTimestamp<=129528216000000000)(!lastLogonTimestamp=*)))

    You can lower the msDS-LogonTimeSyncInterval down to 1 day, which removes the randomization and gets you very close to that magic "exactness" (within 24 hours). But this will increase your replication load, perhaps significantly if this is a large environment with a lot of logon activity. Warren's blog post you mentioned describes how to do this. I’ve seen some pretty clever PowerShell techniques for this: here's one (untested, non-MS) example that could be easily adapted into native Windows AD PowerShell or just used as-is. Dmitry is a smart fella. If you find scripts, make sure the author clearly understood Warren’s rules.

    There is also the option - if you just care about users' interactive or runas logons and you have all Windows Vista or Windows 7 clients - to implement msDS-LastSuccessfulInteractiveLogonTime. The ups and downs of this are discussed here. That is replicated normally and could be used as an LDAP query option.

    Windows AD PowerShell has a nice built-in constructed property called “LastLogonDate” that is the friendly date time info, converted from the gnarly UTC. That might help you in your scripting efforts.
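
    For example, a rough sketch that lists enabled users whose (replicated, deliberately fuzzy) last logon is more than 90 days old, or who have never logged on at all:

    $cutoff = (Get-Date).AddDays(-90)
    Get-ADUser -Filter {Enabled -eq $true} -Properties LastLogonDate |
        Where-Object { $_.LastLogonDate -eq $null -or $_.LastLogonDate -lt $cutoff } |
        Select-Object Name, LastLogonDate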

    After all that, you are back to Warren's recommended use of security logs and audit collection services. Which is a good idea anyway. You don't get to be meticulous about just one aspect of security!

    Question

    I was reading your older blog post about setting legal notice text and had a few questions:

    1. Has Windows 7 changed to make this any easier or better?
    2. Any way to change the font or its size?
    3. Any way to embed URLs in the text so the user can see what they are agreeing to in more detail?

    Answer

    [Courtesy of that post’s author, Mike “DiNozzo” Stephens]

    1. No
    2. No
    3. No

    :)

    #3 is especially impossible. Just imagine what people would do to us if we allowed you to run Internet Explorer before you logged on!

    image

     [The next few answers courtesy of Jonathan “Davros” Stephens. Note how he only ever replies with bad news… – Neditor]

    Question

    I have encountered the following issue with some of my users performing smart card logon from Windows XP SP3.

    It seems that my users are able to logon using smart card logon even if the certificate on the user’s smart card was revoked.
    Here are the tests we've performed:

    1. Verified that the CRL is accessible
    2. Smartcard logon with the working certificate
    3. Revoked the certificate + waited for the next CRL publish
    4. Verified that the new CRL is accessible and that the revoked certificate was present in the list
    5. Tested smartcard logon with the revoked certificate

    We verified the presence of the following registry keys both on the client machine and on the authenticating DC:

    HKEY_Local_Machine\System\CurrentControlSet\Services\KDC\CRLValidityExtensionPeriod
    HKEY_Local_Machine\System\CurrentControlSet\Services\KDC\CRLTimeoutPeriod
    HKEY_Local_Machine\System\CurrentControlSet\Control\LSA\Kerberos\Parameters\CRLTimeoutPeriod
    HKEY_Local_Machine\System\CurrentControlSet\Control\LSA\Kerberos\Parameters\UseCachedCRLOnlyAndIgnoreRevocationUnknownErrors

    None of them were found.

    Answer

    First, there is an overlap built into CRL publishing. The old CRL remains valid for a time after the new CRL is published to allow clients/servers a window to download the new CRL before the old one becomes invalid. If the old CRL is still valid then it is probably being used by the DC to verify the smart card certificate.

    Second, revocation of a smart card certificate is not intended to be usable as real-time access control -- not even with OCSP involved. If you want to prevent the user from logging on with the smart card then the account should be disabled. That said, one possible hacky alternative that would take immediate effect would be to change the UPN of the user so it does not match the UPN on the smart card. With mismatched UPNs, implicit mapping of the smart card certificate to the user account would fail; the DC would have no way to determine which account it should authenticate even assuming the smart card certificate verified successfully.

    If you have Windows Server 2008 R2 DCs, you can disable the implicit mapping of smart card logon certificates to user accounts via the UPN in favor of explicit certificate mapping. That way, if a user loses his smart card and you want to make sure that certificate cannot be used for authentication as soon as possible, remove it from the altSecurityIdentities attribute on the user object in AD. Of course, the tradeoff is the additional management of updating user accounts before their smart cards can be used for logon.
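
    A rough sketch of those three approaches with the Active Directory module (the account name, UPN, and altSecurityIdentities string below are all hypothetical):

    # Supported way to stop the logons right now: disable the account
    Disable-ADAccount -Identity jdoe

    # Hacky alternative: break the implicit mapping by changing the account's UPN
    Set-ADUser -Identity jdoe -UserPrincipalName 'jdoe.suspended@contoso.com'

    # With explicit mapping in use, pull the lost card's entry out of altSecurityIdentities
    Set-ADUser -Identity jdoe -Remove @{altSecurityIdentities = 'X509:<I>DC=com,DC=contoso,CN=Contoso Issuing CA<S>CN=John Doe'}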

    Question

    When I use SID cloning tools like sidhist.vbs in a Windows Server 2008 R2 domain, they always fail with the error “Destination auditing must be enabled”. I verified that Account Management auditing is enabled as required, but I also found that the newer Advanced Audit Policy version of that setting is turned on. It seems like the DsAddSidHistory() API does not consider this new auditing sufficient? In my test environment everything works fine, but it does not use Advanced Auditing. I also found that if I set all Account Management advanced audit subcategories to enabled, it works.

    Answer

    It turns out that this is a known issue (it affects ADMT too). At this time, DsAddSidHistory() only works if it thinks legacy Account Management is enabled. You will either need to:

    • Remove the Advanced Audit Policy settings and force the destination computers to use legacy auditing by setting Audit: Force audit policy subcategory settings (Windows Vista or later) to override audit policy category settings to Disabled.
    • Set all Account Management advanced audit subcategories to enabled, as you found, which satisfies the SID cloning function (see the sketch below).

    We are making sure TechNet is updated to reflect this as well.  It’s not like Advanced Auditing is going to get less popular over time.
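
    If you go with the second option, a quick sketch from an elevated prompt on the affected computer:

    # Enable every subcategory under Account Management (success and failure, for good measure)
    auditpol /set /category:"Account Management" /success:enable /failure:enable

    # Confirm what is actually in effect
    auditpol /get /category:"Account Management"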

    Question

    Enterprise and Datacenter editions of Windows Server support enforcing Role Separation based on the Common Criteria (CC) definitions, but there doesn't seem to be any way to define which roles you want to enforce.

    CC Security Levels 1 and 2 only define two roles that need to be restricted (CA Administrator and Certificate Manager).  Auditing and Backup functions are handled by the CA administrator instead of dedicated roles.

    Is there a way to enforce separation of these two roles without including the Auditor and Backup Operator roles defined in the higher CC Security Levels?

    Answer

    Unfortunately, there is no way to make exceptions to role separation. Basically, you have two options (a rough certutil sketch for both follows the list):

    1. Enable Role Separation and use different user accounts for each role.
    2. Do not enable Role Separation, but turn on CA auditing so you can monitor the actions taken on the CA.
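
    For reference, a minimal sketch of the certutil settings behind both options, run on the CA itself (the CA service must be restarted for either change to take effect):

    # Option 1: enforce role separation - it applies to all four CC roles, with no way to pick and choose
    certutil -setreg CA\RoleSeparationEnabled 1
    Restart-Service certsvc

    # Option 2: leave role separation off, but audit everything the CA does
    # (127 enables all seven CA audit event categories; the Certification Services
    # object access audit policy must also be enabled for the events to be logged)
    certutil -setreg CA\AuditFilter 127
    Restart-Service certsvc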

    [Now back to Ned for the idiotic finish!]

    Other Stuff

    My latest favorite site is cubiclebot.com. Mainly because they lead me to things like this:


    Boing boing boing

    And this:


    Wait for the pit!

    Speaking of cool dogs and songs: Bark bark bark bark, bark bark bark-bark.

    Game of Thrones season 2 is April 1st. Expect everyone to die, no matter how important or likeable their character. Thanks George!

    At last, Ninja-related sticky notes.

    For all the geek parents out there. My favorite is:

    image
    For once, an Ewok does not enrage me

    It was inevitable.

     

    Finally: I am headed back to Chicagoland next weekend to see my family. If you are in northern Illinois and planning on eating at Slott’s Hots in Libertyville, Louie’s in Waukegan, or Leona’s in Chicago, gimme a wave. Yes, all I care about is the food. My wife only cares about the shopping; that’s why we’re on Michigan Avenue and why she cannot complain. You don’t know what it’s like living in Charlotte!! D-:

    Have a nice weekend folks,

    Ned “my dogs are not quite as athletic” Pyle

  • Security Compliance Manager 2.5 Beta is out

    Hi folks, Ned here with a quickie advert: The Security Compliance Manager 2.5 beta released the other day, with a bunch of new features and other goo.

    • Integration with the System Center 2012 IT GRC Process Pack for Service Manager Beta: Product baseline configurations are integrated into the IT GRC Process Pack to provide oversight and reporting of your compliance activities.
    • Gold master support: Import and take advantage of your existing Group Policy or create a snapshot of a reference machine to kick-start your project.
    • Configure stand-alone machines: Deploy your configurations to non-domain joined computers using the new GPO Pack feature.
    • Updated security guidance: Take advantage of the deep security expertise and best practices in the updated security guides, and the attack surface reference workbooks to help reduce the security risks that you consider to be the most important.
    • Compare against industry best practices: Analyze your configurations against prebuilt baselines for the latest Windows client and server operating systems.
    • NEW baselines include:
      • Exchange Server 2007 SP3 Security Baseline
      • Exchange Server 2010 SP2 Security Baseline
    • Updated client product baselines include:
      • Windows 7 SP1 Security Compliance Baseline
      • Windows Vista SP2 Security Compliance Baseline
      • Windows XP SP3 Security Compliance Baseline
      • Office 2010 SP1 Security Baseline
      • Internet Explorer 8 Security Compliance Baseline

    Hot damn, the second and third bullets (gold master support and stand-alone machine configuration) are what everyone kept asking for, and they’ve finally been delivered.

    Never heard of SCM? For shame, I’ve discussed it here a few times. You just don’t care what I have to say, DO YOU? I AM GOING TO SPEND FOUR HOURS ON THE PHONE TALKING ABOUT YOU WITH MY GIRLFRIENDS!!!

    Update 4/4/2012: SCM 2.5 is no longer in beta and has been released to the world. Thanks for the heads up, Mike!

     

    - Ned “SCMbag” Pyle

  • How to become a PFE (worth reading if you are job hunting)

    Hi all, Ned here. Greg Jaworski has posted an informative read for those looking to join the ranks of Microsoft Premier Field Engineering. They are always hiring and if your New Year's resolution includes travel, career growth, and working for the largest software company in the world, I recommend you give it a look.

    How to become a Premier Field Engineer (PFE)

    It has useful tips, an explanation of the interview process, and other helpful goo. This comes to you from the new Ask PFE blog.

    They also appear to favor those with Polish surnames. I'm not saying it's required, but it seems to help. ;-P

    - Ned "Casimir" Pyle 

  • If you use Symantec Products, Read Me

    Ned here again, with a public service announcement similar to the previous one we did for RSA, as it implicitly affects so many Microsoft customers. Symantec has announced:

    Symantec can confirm that a segment of its source code has been accessed. Upon investigation of the claims made by Anonymous regarding source code disclosure, Symantec believes that the disclosure was the result of a theft of source code that occurred in 2006.

    Read the rest here: http://www.symantec.com/theme.jsp?themeid=anonymous-code-claims&inid=us_ghp_banner1_anonymous

    Older versions of their security products appear to be safe as long as you have kept up with patching (as always with early announcements, check back to make sure the story doesn't change). However, if you use pcAnywhere you must update (for free) to a patched version of 12.5 immediately. It goes without saying that if you were using pcAnywhere prior to this announcement, you should start auditing your remote access. Symantec isn't clowning around here; their actual guidance is that you should not allow pcAnywhere external access to your corporate network at all:

    Customers should block pcAnywhere assigned ports (5631, 5632) on Internet facing network connections, or shut off port forwarding of these ports. Blocking these ports will help ensure that an outside entity will not have access to pcAnywhere through these ports, and will help ensure that the use of pcAnywhere remains within the confines of the corporate network.

    Which kind of defeats the purpose as I understand it, but whatever.
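
    If you also want a belt-and-suspenders block on individual Windows machines while the perimeter rules get sorted out, a rough host-level sketch (pcAnywhere conventionally uses TCP 5631 for data and UDP 5632 for status; this does not replace blocking at the edge firewall):

    netsh advfirewall firewall add rule name="Block pcAnywhere data (TCP 5631)" dir=in action=block protocol=TCP localport=5631
    netsh advfirewall firewall add rule name="Block pcAnywhere status (UDP 5632)" dir=in action=block protocol=UDP localport=5632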

    - Ned “get to it” Pyle