Murat Cudi Erenturk, Insights of an Architect

This blog reflects my insights on IT trends, technology and processes. The ideas expressed here are my own and do not reflect the opinions of Microsoft.

  • Why is Microsoft System Center Service Manager (SCSM) not on the Gartner Magic Quadrant?

    Hi,

     

    Today I came across a nice article on Service Manager and the Gartner Magic Quadrant. Enjoy.

    http://www.cireson.com/business/why-is-microsoft-system-center-service-manager-scsm-not-on-the-gartner-magic-quadrant/#

     

  • Why you don’t want to add TVs to Service Manager

    From time to time, I hear from customers that they want to add IT assets such as TV sets to the Service Manager database (CMDB). It is compelling to have a list of all items in one place and to use them as configuration items in incidents and service requests.

    The Configuration Management Database (CMDB) is a central repository for all the configuration items in your service management solution, and it requires maintenance to stay up to date. If you have lots of configuration items (CIs) with no automated way to update their configuration, your CMDB becomes outdated and useless. Service Manager comes with connectors that import CI information from Active Directory, Operations Manager, Configuration Manager and Virtual Machine Manager, which makes updates a whole lot easier. You can fill your CMDB with all this information in a very short time and keep it updated regularly.

    Keep in mind the following when deciding what to import into Service Manager:

    • Import items that can be updated: Do not bulk import items manually unless you have a compelling reason. Rely on the Service Manager connectors to import and keep your configuration information up to date. You can use CSV files to import new objects into Service Manager, but with this method you will need to update the objects manually on a regular basis.
    • Not all configuration information is useful: Operations Manager can monitor devices with SNMP connectivity, such as printers, and you can import these objects into Service Manager as CIs. However, just having the technology for updates does not mean you need to import them. Balance the benefit of having the object against the increased size of your Service Manager database.
    • Change in configuration is the key: The rule of thumb for imports is to check whether the configuration of the desired object needs to be tracked. ITIL is all about managing risk, and change brings risk. If you have a class of objects whose configuration changes you need to track, it can be considered a candidate for import.

    Not all items that can be imported into Service Manager bring value. Write down the pros and cons and decide accordingly.


  • Why do we need Service Manager when all I want is automation?

    Organizations are seeking ways to reduce the cost of IT
    operations in every possible way. One of the areas that seems promising is automation.
    In the past, the only way to automate a task was to write a script for it.
    However, using scripts has its own problems:


    • Writing a script is not a one-time event. A
      scripting solution is brittle and breaks easily when the environment where the
      scripts run changes. Writing scripts that handle most of the possible exceptions
      is not an easy task and requires a lot of experience.

    • Scripting is a development skill. You need to
      keep people on your team who can write and maintain scripts. Different products used
      to have different scripting languages, but thanks to PowerShell, scripting is becoming
      standardized across most Microsoft products.

    Microsoft has recently released System Center 2012 Orchestrator,
    which is used to create workflows (called runbooks) in an easy way. Basically,
    it helps IT pros visually create linked commands to perform automated tasks. It can
    communicate with other systems through integration packs and can be a very
    powerful tool for fulfilling your automation needs. If you need more information on
    Orchestrator, you can start here.

    The problem with automating tasks is not the tool that
    you are using; it is the processes. When you are doing a task
    manually, it is easy for somebody else to follow what you are doing and
    when. If something goes wrong, you can search the event logs for who logged on
    to which systems and ask questions about their actions. However, if the task is automated
    (either through scripting or Orchestrator), tracking what went wrong becomes
    much more difficult. To keep operations smooth, you need a
    more structured approach. For example, you need a change request for
    the automated task (such as cleaning up old computer accounts in Active
    Directory) recorded together with the results, so that you can search for it later.
    Keeping these kinds of records used to be a manual task. However,
    Orchestrator has a Service Manager integration pack with which you can create
    these records automatically.

    The best way to implement automation in IT systems is to
    use Service Manager 2012 to keep records of what operations are being performed,
    and even to provide capabilities such as approvals to keep things under control. For
    example, you can have a scheduled task in Orchestrator that searches for old
    computer accounts in Active Directory weekly. If it finds such accounts, it
    creates a change request in Service Manager, which goes through the standard approval
    process to an IT administrator; after approval, a runbook in Orchestrator
    is triggered to actually delete the accounts, and the results are recorded in the change
    request. You would then be able to see reports showing when deletion
    of certain computer accounts was requested, who approved it, and when it finished.

    Using System Center 2012 Service Manager together with
    Orchestrator will save you a lot of time without losing control of your IT
    environment.

  • Cross-forest Exchange Migration, notes from the field Part 3, Coexistence

    In the first part of this series I gave an overview of
    Exchange migration, which can be found here.

    In the second part of this series I provided details on how
    to check for inconsistencies in user attributes and how to set the UPN, which can
    be found here.

    In this part of the series I will give you details on how to
    setup the coexistence. So here are the steps to configure coexistence:


    • Conditional forwarding: As you have two different forests, you will need DNS name
      resolution between the domains. You can use the DNS conditional forwarding
      feature to do this.

    • Trust relationship: Some of the tools needed for migration (hint:
      ADMT) require Windows trusts to be configured between the two forests. You will
      also need Windows trusts for cross-forest availability.

    • Directory synchronization: After you start migrating users, you need to make sure
      users are available on both sides. The recommended approach is to use FIM to
      synchronize users, distribution groups and contacts. While configuring this,
      you need to plan for migrating the users as part of your migration planning and
      configure new object provisioning through FIM (for example, what will
      happen when a new user is created in the old forest during coexistence).

    • Control panel: Remember our scenario: we are moving only the Exchange functionality to
      the new forest. In this case you might consider using a control panel to manage
      the Exchange properties of the users, in which case you might have to do some
      configuration on your control panel.

    • Coexistence server: To migrate users and provide mail flow, you can use an
      Exchange Server 2010 in the old forest. This gives you the new
      mailbox replication proxy functionality. You would also use this server,
      together with the Exchange servers in the new forest, to provide availability
      services in the cross-forest migration scenario. You will need a certificate
      installed on this server that is trusted by the new forest's Exchange servers.

    • E-mail address policies: For mail to flow between the two organizations, you will
      need to configure secondary e-mail addresses on each side.

    • Send and receive connectors: These are needed on both sides to enable mail flow
      between the two Exchange organizations acting as a single organization.

    • Cross-forest availability: During mailbox migration you may want the Exchange
      servers on each side to be able to query availability information for the
      respective recipients. For more information, have a look here.

    • Autodiscover: You will need to configure the Autodiscover services so that once
      you start migrating users, their clients can reconfigure themselves for
      the new forest. Keep in mind that this works seamlessly for
      Outlook Anywhere and ActiveSync, but if you configure the coexistence server as
      your Internet-facing CAS server you will only get a redirect, which means
      migrated users will be prompted for authentication on the new servers. You can
      use an access gateway solution to provide seamless redirection for users
      accessing a moved mailbox through OWA.

    These are the basic steps you will need to follow to configure
    coexistence between the two forests.
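
    As a small illustration of the conditional forwarding step, a conditional
    forwarder zone can be created from the command line with dnscmd. The server
    names, forest names and IP addresses below are placeholders for your own
    environment; this is a sketch, not a tested procedure:

    ```
    dnscmd dc1.oldforest.local /ZoneAdd newforest.local /Forwarder 10.1.0.10
    dnscmd dc1.newforest.local /ZoneAdd oldforest.local /Forwarder 10.0.0.10
    ```

    Run the equivalent command on a DNS server in each forest so that both sides
    can resolve the other side's names.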

  • Cross-forest Exchange Migration, notes from the field Part 2, Setting UPN

    In the first part of this series I gave an overview of
    Exchange migration, which can be found here.

    In this second part of the series I will provide more detail on
    handling the transformation to UPNs. When you decide to use e-mail addresses for
    your UPNs, you need to make sure that you create each UPN from the e-mail
    address the user actually uses. Although this may seem trivial, it may not be.
    Generally you will want to create the UPN from the user's alias attribute and the
    e-mail domain. However, the alias attribute is populated only once, during
    mailbox creation, and an administrator can change the mail address of the user
    after it has been created. To identify these accounts, we need a script to
    compare these values. The script basically does the following:

    First, read all the mailboxes in the organization and loop over
    them. Note that you will need the -ResultSize Unlimited parameter to
    get the whole picture.

    Get-Mailbox -ResultSize Unlimited | ForEach-Object {

    Then you would need to get the e-mail addresses of the user
    inside the loop.

    for ($i=0; $i -lt $_.EmailAddresses.Count; $i++)

    Once you have the list, you will go through it looking for the address prefix
    SMTP, which marks the primary SMTP address (secondary addresses use the
    lowercase prefix smtp). Some users may have empty e-mail addresses, so you need
    to check for that condition as well.

    $address = $_.EmailAddresses[$i]

    $a = $address.AddressString.ToString()

    if ($address.PrefixString -ceq "SMTP" -and $a.Length -gt 0 -and $a.IndexOf("@") -gt 0)

    Now that you have found the address you need to store it to
    be used after the loop.

    $Primary = $a.Substring(0, $a.IndexOf("@"))

    Now let's check if this matches the alias attribute:

    if ([String]::Compare($_.Alias, $Primary, $True) -ne 0)

    You will generally want the necessary plumbing to write the
    results to a log file for easy consumption. The complete script can be found
    as an attachment to the blog. The script is provided as is, without any
    warranty, so use it at your own risk.
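
    Assembling the fragments above, the loop looks roughly like the following.
    This is a sketch rather than the attached script itself; the log file name is
    my own example, and -ceq keeps the prefix comparison case-sensitive so only
    the uppercase SMTP (primary) prefix matches:

    ```powershell
    Get-Mailbox -ResultSize Unlimited | ForEach-Object {
        for ($i = 0; $i -lt $_.EmailAddresses.Count; $i++) {
            $address = $_.EmailAddresses[$i]
            $a = $address.AddressString.ToString()
            # primary SMTP address only (uppercase prefix), skipping empty or malformed entries
            if ($address.PrefixString -ceq "SMTP" -and $a.Length -gt 0 -and $a.IndexOf("@") -gt 0) {
                $Primary = $a.Substring(0, $a.IndexOf("@"))
                # case-insensitive comparison of the alias against the primary address prefix
                if ([String]::Compare($_.Alias, $Primary, $True) -ne 0) {
                    Add-Content -Path .\AliasMismatch.log -Value "$($_.Alias) <> $Primary"
                }
            }
        }
    }
    ```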

    After you have identified these users, you will need to
    correct the alias attribute to match the primary SMTP address.
    The script for this is left as an exercise to the reader.

    You may be asking why we were so diligent about identifying mismatches first
    instead of simply setting the attribute through a script. The reason is simple:
    writing scripts that touch a large number of users requires careful testing.

     So now we need to do
    the following:


    • Create UPN suffixes: Creating UPN suffixes can easily be done with a single
      line of PowerShell. See here for more details.

    • Populate the UPN prefix for each user: You can use the ADModify tool to do
      this.
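
    For reference, the single line of PowerShell mentioned above could look like
    this sketch. It assumes the Active Directory module (Windows Server 2008 R2 or
    later), and the forest and suffix names are placeholders for your own
    environment:

    ```powershell
    Import-Module ActiveDirectory

    # add the e-mail domain as an additional UPN suffix for the forest
    Set-ADForest -Identity oldforest.local -UPNSuffixes @{Add="contoso.com"}
    ```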

    After these tasks your users will be able to use the same
    e-mail address as their logon name.

  • Cross-forest Exchange Migration, notes from the field Part 1, Overview

    Exchange migration has always been a topic of interest for
    organizations. As more and more organizations depend on Exchange as part of their
    core infrastructure, downtime during upgrades is noticeable to clients and
    needs detailed planning. When there is a need to change the forest as part of
    the upgrade, the problem becomes a complex migration exercise.

    Let's take a hypothetical organization running Exchange 2007
    that wants to move to Exchange 2010 in a new forest. For the sake of argument,
    let's say the customer only wants to migrate the Exchange functionality to the
    new forest; the old forest will remain, with Exchange uninstalled from it. When
    the number of clients involved is large, the mailbox move process can take
    longer than the downtime the organization can tolerate, and coexistence is
    needed. Coexistence is a state in which two Exchange organizations in two
    different forests act as a single organization. Here are important
    points to consider for this kind of migration:


    • Mail flow: Generally, mail flow to and from the Internet is handled by the old
      organization during the coexistence phase, and mail between the organizations
      flows through connectors in between.

    • Mail access: For OWA users this depends on where the mailbox is hosted at any
      given time. Exchange can provide redirection to the new environment;
      more on this later. For Outlook Anywhere and ActiveSync users, Autodiscover
      will need to be used in a cross-forest configuration. For Autodiscover to
      work, you will need at least Outlook 2007 on the clients.

    • Availability:
      During coexistence you need to be able to query free/busy information. Exchange
      2010 supports several methods of getting this information.

    For these functions to work, you need to do some analysis
    of the source forest:


    • Single sign-on: During coexistence you will need an entry point that can connect
      to both forests and that receives credentials from clients only once.

    • Account names: If users log on in the DomainName\username format, this
      will need to change when Exchange moves to the new forest. One way to solve
      this problem is to use UPNs. Users accessing the old forest can start using a
      UPN, and the new forest will have the same UPN but a different forest name.
      Generally, you will want the UPN to be the same as the user's e-mail address.

    This clearly shows that you need a lot of preparation before
    you do the migration. We will go into more detail in later
    parts of this series.

  • What are the trends in IT for 2011?

    • Here are some of the trends that will affect our lives in the near future:

        • Processing Power: Computer processing power doubles every 18 months; we know this as Moore’s law. There are signs that this will slow in the coming years.
        • Hand-held devices: Small devices are capable of delivering computational power that was only available to desktop computers 5 years ago. These devices will have multi-core CPUs delivering great performance in the coming years. We might expect to see 3D displays on these devices too.
        • Multi-core: We will see more multi-core processors but harnessing the computational power will depend on software algorithms and optimization. Heterogeneous cores promise performance and efficiency gains.
        • Mobile social networks: Social networks are already a large part of our lives. They will become even more dominant as everybody carries mobile devices as a part of their lives.
        • Bandwidth: Digital bandwidth is doubling faster than processing power. Our average connectivity to the Internet is growing at an incredible speed. Globally, the share of connections faster than 5 Mbps was 22% and grew 3% year over year. Check Akamai for detailed information.
        • Mobile Internet: More and more of Internet traffic is generated from mobile devices. The number has doubled and this trend is expected to continue.
        • Wireless bandwidth demand: The demand for bandwidth has grown while consumers expect to pay less for more available bandwidth. However mobile revenues are not reaching the levels needed for investment.
        • Storage Capacity: Digital data storage is doubling every 12 months. The information on the Internet is currently estimated at close to 1.1 ZB (zettabytes).
        • Storage vs bandwidth: When you look at disk drive capacity and consumer bandwidth, the rates are a little bit different. A good comparison can be seen here.
        • Sensors: Sensors would be embedded in almost every object we use. One of the new protocols for communication to watch for is ANT+. More information can be found here.
        • Micro Display technology: Micro displays are changing the world of physical displays. There were over 150 new pico-projector models released in 2010. Some interesting information can be found here.

  • Internet of Things, why should I care?

    The idea of the Internet of Things is not new. It’s all about different devices being connected to the Internet, not to present information directly to a user but to consume or provide information about themselves. For example, a digital photo frame can use the Internet to download pictures. The idea is that the person who views the picture may not be the one selecting which picture to display. Taking this idea forward, devices can talk to other devices on the Internet. So the whole Internet sea is shared between humans and devices.

    So when does this become interesting? Recent advances in technology allow lots of systems to be aware of their surroundings. For example, your Windows Phone 7 device has cameras and accelerometers and can detect its orientation in your hand. With the help of software, it can tag a picture with the date/time and place (through GPS) when you upload it to a web site. Say you are in a foreign country and you don’t know the language: you point your phone at a sign and in real time you see the same sign through your phone in your own language. There are actually impressive prototypes of such applications on other mobile platforms. So basically we are connecting sensors from all around the world to the Internet. (Information on RFID sensors can be found here, and development information can be seen here.) I just want to stress how powerful this can be. Obviously there are privacy issues that need to be taken care of, but that’s a different topic.

    Microsoft announced Kinect for Xbox 360 a while ago. There have been lots of articles on how it will change the gaming experience (one example can be seen here). Apart from being used as a controller for games, it has become very popular for other uses. There have been different projects (examples can be seen here, here and here) all around the world using Kinect for purposes never thought of before. The reason for this popularity is that we had all the necessary components except the software to tie everything together. Now that we can use the power of software to extract information from the data coming from the sensors, a whole new world begins. I am sure we will see these kinds of solutions embedded more and more into our lives.

    Just to give you a few examples of potential applications: you will see intelligent TV sets that turn on when you sit in front of them and that you can control through hand gestures (Netflix on Xbox 360 can do this today), so no more searching for the lost remote control. In the near future, except for high-security areas, we will start seeing face recognition for access control; that means no passwords, and maybe even no keys, for some uses. You may have a home system that keeps a log of every major event: family members entering the house or going out. If they are wearing Internet-enabled shoes for jogging, you will see how fast they are going, or even whether they need help.

    So the Internet of Things is about to become reality for the majority and will change our lives in ways you could never have imagined.

  • If you don’t care about Service Management, think again

    Some of you may have heard about ITIL and MOF, and as a technical person, processes and governance may not be that appealing to you. You may have been thinking it’s only for large enterprises with lots of money and time to implement processes. That view is changing lately. As datacenter management becomes more and more complex, tracking activities and governance is becoming a concern.

    Microsoft has a service management solution based on System Center Service Manager. Although you might think it is a new product, it has actually been in the works for quite some time and was rewritten several times before being released as a product. There are several good resources on how SCSM is aligned with MOF and its processes, but the real reason for using this type of solution is to keep track of all service management activities and create reports on them. For example, if you deploy a service pack to your servers and some of the servers do not boot, the first question you would ask is: why didn’t we see this during our testing? So who did the testing, when did it happen, what was the result and where is it now? That’s where you need service management, or update management to be specific. You need a solution like SCSM that will record a need for change (apply the service pack) and create and record a workflow of events (approve for testing, assign to a person for testing and recording results, approve for pilot servers, record results, and approve for distribution to all servers), so that you can come back later to see whether everything was done properly.

    So far, what I have described may not sound interesting to some of you; I am just imposing more paperwork on people who are already busy doing work. However, there is a lot of maintenance work that a datacenter admin needs to do in terms of checking files, running scripts and so on. What if there were a wizard behind the curtain that could read the change requests, do all the maintenance tasks and put the results back into the change request? That wizard is Opalis. Opalis is a workflow engine that has integration packs for lots of other systems; it can read events and objects from them, act on them and return the results to the other systems. It is like writing scripts without entering a single line of code.

    Let’s talk about an example. One of the common things a domain administrator does is search for old computer accounts in the domain and delete them. This has to be done regularly to keep your Active Directory clean. From a service management perspective, this is a process that needs approval. You can have a policy in Opalis that triggers every month and runs a script to search for old computer accounts in the domain and create a text file with the computer names in it. It then creates a change request in Service Manager from a template you already have for this process. Service Manager records the request and triggers a review activity that sends you (the domain admin) an e-mail saying that you are expected to approve the deletion of the old computer accounts. Going to Service Manager, you check the list of computer names and approve the request. Opalis sees the approval and triggers another policy that runs a script to read the file and delete the computer accounts in the domain, puts the result of the activity in the change request and, if successful, closes the request automatically. Now that makes everybody happy: IT management can view these change management activities in their reports, and domain admins do not need to remember to run these scripts.
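
    For the first step of that policy, the "search for old computer accounts" script might look something like this sketch. The 90-day threshold and the output file name are my own example values, and it assumes the Active Directory PowerShell module is available:

    ```powershell
    Import-Module ActiveDirectory

    # list computer accounts that have not logged on for 90 days
    Search-ADAccount -AccountInactive -ComputersOnly -TimeSpan 90.00:00:00 |
        Select-Object -ExpandProperty Name |
        Set-Content -Path .\OldComputerAccounts.txt
    ```

    Opalis would attach or reference the resulting text file in the change request it creates, and a second policy would read the same file back after approval.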

    Service Management can actually make people’s lives easier when SCSM is used together with Opalis.

  • Microsoft iSCSI target goes public

    Some of you might not have seen this announcement, so I wanted to reiterate that the Microsoft iSCSI target has been released to the public. This is an important milestone, as it shows virtualization is becoming one of the tools that can solve business problems, such as branch office productivity, with low-cost solutions. Detailed information can be found in this blog.

  • Functional versus Performance problems

    In a multi-component system, there may be cases where the system is only partially working. There are two basic classes of problems:

    • Functional problems: These are problems where one or more parts of the system are not providing the necessary functions. The problem is generally well defined and can be easily isolated. The solution approach is directed toward dependency isolation and remediation. When a functional problem arises, troubleshooting should start with isolating the dependencies of the non-functional part. This may not be obvious at first; there may be hidden external dependencies that only manifest themselves when they stop functioning. This is one of the reasons why every component in a system should report its state changes to a central repository, together with a reason if possible. For example, when the Exchange store service stops, you see an entry in the event log. Searching earlier events, you also see that Windows is having problems accessing the volume where your Exchange databases reside. Now that you have isolated your dependencies, you can check the host connector cables, storage device connections and so on. However, most of the time the problem is not solved but converted into the other problem class.

    • Performance problems: These are problems where the system is providing the necessary functions but the performance is not as expected. Finding these problems is generally much harder than finding functional problems. The primary reason is that in functional problems you simply have a state change, whereas in performance problems you need a history of the level of function according to given metrics. Most hard-to-detect problems start as performance problems and convert into functional problems, which are much more visible. However, due to operational constraints, root cause analysis is often not carried out: once the functional problem is fixed, it converts back into a performance problem and the pressure to solve it drops. To identify performance problems you need historical data from all related systems, which you correlate to find a difference in performance and isolate the component(s) causing the problem. Sometimes this is easy, if your systems are not affecting each other. However, if you have a highly available web site, you will need to check the performance starting from your network links to the load balancers, the web servers and the databases. To see what the problem is, you need performance counters or operational logs from all systems with a common time source. This may be event log entries from your services, but it can also be much more detailed logs you need to collect to see the inner workings of the service being provided. You may also need triggers to start and stop collecting data, or you may have mountains of data to store and analyze. If one of the components cannot provide detailed logs, you may not be able to solve the problem. Next time you are buying a cheap switch or router, check its data collection and reporting capabilities and decide whether you want to take the operational risk with the system in question.

    When you are designing a multi-layered service, you need data collection mechanisms with a common time source that can be triggered based on events, and the data must be stored long enough to reconstruct the course of events leading to a performance problem. This way it will be much easier to track down performance problems when you are faced with hard-to-solve issues.

  • Why not virtualize everything?

    Virtualization is seen as a magic wand in the server world: whatever your problem, virtualize and consolidate and your problems will be gone. This is far from the truth. When you look at the applications running on servers, most of them are not designed to handle a lot of resources. When you push them above their design limits, they either fail or use so many resources that performance drops. In that case you will mostly use more servers with a smaller hardware footprint; these are good candidates for virtualization. However, there may be other alternatives that use physical hardware more efficiently and thus do not need to be virtualized.

    One example is Terminal Services. When using 32-bit Terminal Services on Windows Server 2003, you are limited in the memory and CPU you can consume. Scaling Terminal Services depends on the applications being used, and the platform is also key for scaling. Even if you increase RAM above 4 GB (say, to 8 GB), your numbers will not increase, due to 32-bit architecture limits. In 32-bit Windows, user processes can use up to 2 GB; the rest is used by the kernel. You can reduce the kernel's share down to 1 GB, which provides 3 GB for user applications, but there are other kernel resources that need memory and you are likely to hit those limits in a Terminal Services environment. One way to increase your number of users is to take a server with 8 GB of RAM, use Hyper-V to create two virtual machines, and install 32-bit Terminal Services on each to get twice as many users. However, when Terminal Services is used with virtualization, performance may drop due to the nature of the application. New processors have features that can mitigate this, but your performance will still not double. The other alternative is to move to 64-bit Terminal Services (or Remote Desktop Services in Windows Server 2008 R2) on physical hardware. This will both increase the number of users on a single box and improve performance, thanks to advances in the connection protocol (RDP).

    Sometimes perfecting an in-house business application can take time. The general tendency is to keep the application as it is for as long as possible and try to change the environment to gain an advantage. However, this is only a temporary measure. Technology is evolving to become more efficient and provide more features, and your applications should evolve and adapt to the new environment. Protecting an application can cost you more than you think.

  • How to stress test terminal services with Windows Powershell

    When you want to do scalability testing of your Terminal Services (or Remote Desktop Services on Windows Server 2008 R2), you need some automation to make it easier. There is the Remote Desktop Load Simulation Tool that you can use to test your environment. I tried to use it at one of my customers recently and had to … enhance it to fit my needs.

    Briefly, the tool uses a COM API (RemoteUIControl.dll) to connect through the Remote Desktop protocol and send the necessary commands, and you can use a script to simulate user activity. You need to install three sets of tools in three different places to do the testing. Please be aware that the method I am about to describe is not supported or endorsed by Microsoft. Instead of using the tool's own components, I decided to use the API to create my own test tools. You need RemoteUIControl.dll and RUIDCOM.exe on your test clients, and TSAccSessionAgent.exe running on your terminal servers. In short: client components on the tester clients, server components on the RDS servers.

    Now to start the testing you require users, lots of them. (Imagine the scene in The Matrix where Neo says "Guns. Lots of guns.") You can simply run a Windows PowerShell script to create test users. Here is a simple loop to do this on your Windows Server 2008 R2 DC (note that the Active Directory module must be loaded for the New-ADUser cmdlet to function):

    for ($i=1; $i -le $NumberOfUsers; $i++)

    {

        $CN = "RDS" + $i

        $Password = ConvertTo-SecureString "12345678" -AsPlainText -Force

        New-ADUser $CN -Path $TargetOU -AccountPassword $Password -Enabled $True

    }

    Of course you need to set the variables ($NumberOfUsers, $TargetOU) for your environment. After this is done, you will want to create a script that runs as each test user to generate activity. I leave this as an exercise, as there is a pretty good VBScript example inside the tool I mentioned above.

    The next step is to replicate those testers to create simultaneously acting users that put load on your Remote Desktop services. Here is a loop to do this. Keep in mind that you do not need to run this on your DC, and not even in the same script that creates the users:

    for ($i=1; $i -le $NumberOfUsers; $i++)

    {

        $CN = "RDS" + $i

        # $UserPassword avoids clashing with $pwd, an automatic PowerShell variable

        [string[]]$ArgList = "test2.vbs","-s:$ServerName","-u:$CN","-p:$UserPassword","-d:$Domain","-f:1"

        Start-Process -FilePath "c:\windows\system32\cscript.exe" -ArgumentList $ArgList

        Sleep -Seconds 1

    }

    When you first run this script you will see several windows pop up (depending on the number of users); they will connect to your remote desktop for the first time and new profiles will be created. This is a very intensive operation and, if your number of users is high, will easily choke your server(s). That is the reason for the sleep statement that slows things down. Some of the users may not be able to connect or to finish the script that you provide. The next run will be much easier.
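    The staggered start is a general pattern: launch sessions one at a time with a pause so the first-logon storm stays manageable. Here is a minimal Python sketch of the same idea; the command template, placeholder workload, and delay are illustrative, not part of the Microsoft tool.

```python
import subprocess
import sys
import time

def staggered_launch(cmd_template, count, delay=1.0):
    """Start `count` copies of a command one at a time, sleeping between
    launches so a logon storm does not choke the target server."""
    procs = []
    for i in range(1, count + 1):
        # {i} in the template is replaced with the simulated user index
        cmd = [part.format(i=i) for part in cmd_template]
        procs.append(subprocess.Popen(cmd))
        time.sleep(delay)
    return procs

# Example: two trivial child processes stand in for the cscript.exe sessions
procs = staggered_launch([sys.executable, "-c", "print('user {i}')"], 2, delay=0.1)
for p in procs:
    p.wait()
```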

    Finally, if some of your connections are hung, you will need to terminate the processes on your client machine. This can easily be done with the following command:

    get-process -name ruidcom | foreach-object {$_.kill()}

    This is a very powerful command and should be used with caution. If you do not provide the name of the process, it will kill all the processes on your client which will instantly turn off your machine :)

  • How do you control computer usage habits for children?

    As most of you are on holiday, I wanted to share some insights on home computers. :)

    When you have more than one child, like I do, there is always competition for the home computer. Competition is good as long as you have the necessary rules to facilitate smooth operation. Up until now, I had a single shared local account on our home computer running Windows 7. The user profile was locked down and only allowed specific applications (read: games) to run. However, it had one major flaw: any one child sitting in front of the computer could monopolize the time until a parent intervened. This caused all the other children to complain about how long the one at the computer had been playing. So a solution to that problem would need the following attributes:

    ·        The computer should keep track of who is allowed to log on, when, and for what duration.

    ·        It should provide detailed logs about who was disallowed from logging on and how much time remains for a particular user.

    ·        It should provide global settings that can be set from a central location.

    ·        It should install itself and create the necessary information stores automatically.

    ·        It should block further use for a while so that others can use the computer, but allow logon again after a certain interval, giving credit for persistence.

    The computer is in a workgroup, so domain-based controls are not an option. You cannot use a script in the Start Menu Startup folder, as it would be too easy to detect. The best approach is to use the Run registry key for local users. However, I used the Parental Controls feature of Windows 7, and regedit was not available inside the restricted user account. The workaround is to log on to an administrative account, load the restricted user's registry hive (ntuser.dat), and set the Run registry key there. I created a PowerShell script that implements the attributes above. Do not forget that Run issues the command before Explorer is started, so environment variables will not be available and you need the full path. Below is the line I exported from regedit. I purposely did not provide the full file, as loading the hive will give different names for different users:

    "LogoffTimer"="C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\Powershell.exe -windowstyle hidden c:\\windows\\logoffTimer.ps1"

    This line runs the PowerShell script logoffTimer.ps1 without showing a window to the user. The script starts when the user logs on and first checks whether the registry values are present. These store information about the time used in the last session, the logon count, the last logon time, and the time used for the day. It creates the values if they are not present. This way, if you need to add another variable to the script, you do not need to reset the registry.

    if ((Test-RegistryValue "hkcu:\software\erenturk\LogoffTimer" "UsedDailyMinutes") -eq $False)

    {

        new-itemProperty -path hkcu:\software\erenturk\logoffTimer -name "UsedDailyMinutes" -value $UsedDailyMinutes

    }

    else

    {

        $UsedDailyMinutes=(Get-itemProperty -path hkcu:\software\erenturk\LogoffTimer -name "UsedDailyMinutes").UsedDailyMinutes

    }
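    For illustration, here is the same read-or-initialize idea sketched in Python, with a JSON file standing in for the registry hive. The file name and counter names below are placeholders that merely mirror the script; the point is the pattern, where any missing value gets its default without resetting the store.

```python
import json
import os

DEFAULTS = {"UsedDailyMinutes": 0, "DailyLogonCount": 0, "UsedSessionMinutes": 0}

def load_counters(path):
    """Read stored counters, filling in any missing value with its default
    so new variables can be added later without resetting the store."""
    counters = dict(DEFAULTS)
    if os.path.exists(path):
        counters.update(json.load(open(path)))
    return counters

def save_counters(path, counters):
    json.dump(counters, open(path, "w"))

c = load_counters("logoff_timer.json")   # first run: all defaults
c["UsedDailyMinutes"] += 5
save_counters("logoff_timer.json", c)
```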

    Next we check whether a day has passed since the last logon. If it has, we reset the counters in the registry for a new day; otherwise we check whether the daily quota has been reached and log off if necessary.

    $lastLogonDelta=New-TimeSpan -start $LastLogon -end $now

    $lastLogonDeltaDesc=GetTimeSpanDescription($lastLogonDelta)

     

    if ($LastLogon.Date -eq $now.Date)

    {

        add-content $logfile "$username has logged on for $UsedDailyMinutes minutes Today, $DailyLogonCount times and last logged $lastLogonDeltaDesc ago"

        add-content $logfile "$username has used $usedSessionMinutes minutes session time on last logon"

        $DailyLogonCount=$DailyLogonCount+1

        set-itemProperty -path hkcu:\software\erenturk\logoffTimer -name "DailyLogonCount" -value $DailyLogonCount

    }

    else

    {

    if ($LoggingLevel -gt 2) {add-content $LogFile "INFO: Day has passed since logon, resetting counters"}

        set-itemProperty -path hkcu:\software\erenturk\logoffTimer -name "DailyLogonCount" -value $DailyLogonCount

        set-itemProperty -path hkcu:\software\erenturk\logoffTimer -name "UsedDailyMinutes" -value $UsedDailyMinutes

        set-itemProperty -path hkcu:\software\erenturk\logoffTimer -name "UsedSessionMinutes" -value $UsedSessionMinutes

        add-content $logfile "$username is logging first time today,last logged on $lastLogonDeltaDesc ago"

    }

    If the user is logging on for a second time, we check whether the session time is finished. This is implemented so that if a user logs off before the allowed time is up, he or she can log on again immediately afterwards; this is generally needed for accidental logoffs. If the session time has finished, the script checks the last logon time and will not allow a logon before a certain time has passed. This gives other users a chance to use the computer before the first one is allowed back on.

    If the user is finally allowed to log on, we create a loop that wakes every minute to check whether the time is finished and writes the time left to the log file. When the time is up, it writes the used minutes and logs the user off.

    $EndTime=$now.addMinutes($SessionTimeLeft)

    $TimeSpan=new-timespan $now $EndTime

    while ($timeSpan -gt 0)

    {

     $timeSpan = new-timespan $(get-date) $endTime

     sleep -Seconds 60

     $UsedDailyMinutes=$UsedDailyMinutes+1

     $usedSessionMinutes=$UsedSessionMinutes+1

     set-itemProperty -path hkcu:\software\erenturk\logoffTimer -name "UsedDailyMinutes" -value $UsedDailyMinutes

     set-itemProperty -path hkcu:\software\erenturk\logoffTimer -name "UsedSessionMinutes" -value $UsedSessionMinutes

     $Remaining=GetTimeSpanDescription($timeSpan)

     add-content $logfile "$Remaining remaining..." 

    }

     $UsedDailyMinutes=$UsedDailyMinutes+1

     $usedSessionMinutes=$UsedSessionMinutes+1

     set-itemProperty -path hkcu:\software\erenturk\logoffTimer -name "UsedDailyMinutes" -value $UsedDailyMinutes

     set-itemProperty -path hkcu:\software\erenturk\logoffTimer -name "UsedSessionMinutes" -value $UsedSessionMinutes

     add-content $logfile "Session time allowance is reached, user will be logged off"

     logoff

    After I implemented the script and made the necessary rules, I was amazed at how quickly it was accepted by the children. You can find the script attached to the post.

  • If you still have servers in all of your branches, think again

    If you have a large distributed environment, you will have branches connected to your headquarters. More than a decade ago, these links had small bandwidth (around 64 kbps) or even used X.25, like some of my customers. Links were generally unreliable and had a tendency to malfunction from time to time. Using backup lines was either prohibitively expensive, or the alternative technologies were too much in their infancy to be used reliably. Back then you needed servers in your branches, with caching on those servers, so that you could resume work if your link went down. Some of my customers had (and some still have) teams monitoring all the links (some over 1000 locations) and working with the ISP to restore service on some of them. My customers used to have a large number of sites in Active Directory and file servers running on branch servers. You also needed backup software and tape drives on those machines to do local backups. When you work in these environments for some time, you tend to develop a habit of keeping whatever you have, and this blurs your vision of connectedness.

    During the last decade, link speeds and reliability have gone up considerably. You can use 3G wireless backup lines for your primary lines, and link speeds have reached 1-5 Mbps in most places. Your mileage may vary, but the point is that link speeds have gone up at least 20 times (my home Internet connection speed has increased 40x in this period) and you can attain highly available lines with much less effort by combining different technologies. Not only can you use higher bandwidth to connect your branches, but you can have a different topology as well. Think of this as a slider where each point enables different functionality as you increase your connected bandwidth. If you slightly increase your line bandwidth, you can start taking backups from the central location during nights, or you can remove branch servers from your smaller branches. Several years ago I did an analysis for one of my customers of what the optimal number of PCs in a branch needs to be to make placing a branch server feasible. I included operational link costs, the initial cost of the servers, and an estimated maintenance cost for the servers, and came up with a magical number of 14. If a branch had fewer than 14 PCs, the customer placed no branch server and serviced the PCs from the central site instead. Of course your magical number will vary with your own conditions; the point is, the more comfortable you feel with the links, the fewer servers you will need in branches.
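    The break-even idea can be sketched as a simple cost comparison. All figures below are placeholders, not the numbers from the actual analysis; plug in your own link, hardware, and maintenance costs.

```python
def branch_server_breakeven(server_cost, maint_per_year,
                            extra_link_cost_per_pc_per_year, years=3):
    """Smallest PC count at which a local server is cheaper than serving
    every PC over an upgraded link from the central site."""
    server_total = server_cost + maint_per_year * years
    pcs = 1
    while pcs * extra_link_cost_per_pc_per_year * years < server_total:
        pcs += 1
    return pcs

# Placeholder figures: a $5,000 server, $1,000/year upkeep, $200/PC/year link cost
print(branch_server_breakeven(5000, 1000, 200))   # 14
```

    With these (made-up) inputs, branches below 14 PCs are cheaper to serve centrally; cheaper servers or pricier links move the threshold.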

    There are organizations that created their topology over a decade ago and have not changed it since. Some still fear unreliable links and keep Exchange servers in their branches. (One specific customer of mine has over 600 Exchange servers.) Exchange Server has been designed to be placed in central sites for at least the last two versions, and it is getting harder to deploy it in branches with each new version. Some customers refuse to use read-only domain controllers (RODCs) on the basis of the extra load they bring to the network. It may not be feasible to remove every branch server in your environment; however, if you still have branch servers in all of your branches, it is time to reconsider your server placement strategy.

    There is no point in trying to upgrade your software if you do not adapt yourself to the new perception of connectedness. Some of my customers are already using VPN over the Internet between their central sites and branches and have reduced their branch servers, with the goal of getting down to a dozen locations that have servers. Looking into the near future, we will be using IPSec VPNs over the IPv6 Internet for all of our client machines without even knowing which branch server is closest, so start getting ready now.

  • How will it affect your business if you do not embrace IPv6?

    If you provide services on the Internet, you should be watching the trends there. There are various web sites providing information; the one I want to mention here is ten questions Internet execs should ask and answer.

    Who are your potential customers? It turns out that USA, Russia, China, Brazil and India are the largest Internet markets. So let’s take a look at how they are doing in IPv6:

    ·        United States: The United States government has been pushing IPv6 for some time now. Several Internet service providers already provide IPv6 addresses.

    ·        China: China started planning for the next-generation Internet back in 2002. They have a fully functional IPv6 backbone and even provided a showcase with the 2008 Olympic Games, served from IPv6 infrastructure. (The link is IPv6.)

    ·        India: The Indian government has recently decided to implement IPv6. They will be using IPv6 by 2012, according to an IDC study.

    ·        Brazil: Brazil has been using the Internet over IPv6 for a couple of years. In fact, South America has the fastest-growing IPv6 address space in the world.

    ·        Russia: Use of IPv6 has been on the rise in Russia, according to research by Google.

    This means the top 5 largest Internet markets are ready to use IPv6. If you are interested in how many addresses are allocated in each country, there is a list you can check out here.

    How are they accessing your site? The fastest-growing area is mobile devices. There are around 120 million subscribers using iPhone + iPod touch + iPad devices, adding 60 million each year assuming the rate stays constant. In just 4 years, Japanese social networking switched from desktop (83% desktop / 17% mobile in 2006) to mobile (14% desktop / 84% mobile in 2010). This means mobile operators will need lots of IP addresses for those mobile devices, and the trend will increase in the near future. Keep in mind that nearly all mobile platforms currently support IPv6 addresses.

    Implementing IPv6 is not the only option for mobile operators as they can still use IPv4 together with NAT. However using such technologies will break certain scenarios such as targeted advertising which is going to be a large market. As long as the infrastructure is ready there will be a shift to IPv6 addresses pretty quickly.

    When end users are on IPv6, they will need extra services to access IPv4 Internet sites. The most widely used technology is 6to4. This means your potential customers will need to pass through gateways to access your IPv4 sites. There will be different services in this space with various degrees of success, and there will probably be some IPv4 islands that are not accessible from IPv6 addresses.

    In a short time (probably starting within a year) mobile users will start having IPv6 addresses. They will want to access IPv6-based services, as they will not need to pay for 6to4 services or take an extra performance penalty. There will be a first-mover advantage for web sites that present both IPv4 and IPv6 addresses; the others will slowly or furiously (depending on your area of service) get fewer hits every day. For advertisers, IPv4 addresses gathered behind NAT will not provide detailed information, so they will choose sites that can give them IPv6 addresses. There will be less opportunity for your site to be chosen for advertisements.

    Moving to IPv6 is not going to happen overnight. However, due to the increase in mobile devices, IPv6 will be used by them first. If you currently provide, or plan to provide, services to mobile users, you need to start now, or you will soon start losing customers and advertising income.

  • If you are still not using 64-bit operating systems you should read this

    From time to time I meet customers that are using older operating systems that are not 64-bit. Before I go any further let me give you the perspective:

    ·         x86 platform: This is the original PC platform we have used since the 1980s. It supports a maximum of 4 GB of RAM.

    ·         IA-64 platform (Itanium): This is the 64-bit platform that appeared on stage first and was modeled after a different architecture. It supports much higher memory but is only available on expensive hardware. Due to architectural differences it needs to emulate x86 instructions in software, so old applications written for x86 run much slower.

    ·         x64 platform: This is the 64-bit platform that has now become mainstream. It uses an architecture similar to x86, can run older applications in hardware, and supports much higher memory. In the rest of this post, this is the platform I refer to when I say 64-bit.

    When we talk about the reasons for not moving, it generally boils down to hardware or software that does not run properly under 64-bit operating systems. I have seen several customers held back by old fax add-on cards, and several pieces of software that simply refuse to run. Maybe it is time you thought about leaving fax behind as a communications technology. Some readers will jump up, saying that they depend on fax for their everyday operations. Although this seems like a valid reason for not moving to 64-bit, the point is that there are valid alternatives, both technical and political, that can help you use 64-bit systems. But there is something more subtle and more important than this.

    Organizations tend to use a technology as long as it works and does not cause trouble. These technologies become brittle over time and turn into obstacles to innovation in your business. The way we do business is changing for everyone, from coffee shops to large enterprises. You cannot keep selling the same services and products forever. Nowadays, success for organizations is measured by how much profit you generate from the new products and services you offer. This means adapting to change, both planned and abrupt, should be in your DNA as a company. If you do not embrace change, you lose adaptability to new conditions. If you do not adapt to change, your competition will; you will be less fit, and finally you will be extinct. This is the most important lesson organizations should borrow from evolution.

    Now the new problem organizations face is the rate of change, which is increasing even faster each year. In order to remain competitive, you need a framework that makes technological change easier. See my earlier post on agile organizations. You should choose the right technology and put the necessary processes in place to track its usefulness. Measuring usefulness can seem difficult when you think about implementing this, but when you do, you will see that most technologies are replaceable with better ones after some time, even though they are still functioning and providing value. When you change your mindset on technological change, you need to invest in technologies that are modular enough to be changed easily and seamlessly when needed.

    When you are investing in a new technology, you should definitely evaluate its contribution to your business. However, you should also think about how adaptable the new technology is to changing conditions. If it is not, account for this in your decision. If you don't, we will have the same conversation when you plan to implement IPv6 or any other disruptive technology on the horizon.

  • How is DNSSEC related to web site security?

    When you have a web site where money changes hands, customer trust has the utmost importance. The moment you lose trust, you lose your customers. You will need to invest in your security strategy in a multi-layered fashion. Here is a short list (not a comprehensive one) of items you should keep in mind:

    ·         SSL certificate: You will need an SSL web site certificate from a well-trusted authority. As expected, the most important things you will want to look at are their assurances and operations. Asking for a certificate with the highest key length is not enough; it's about what policies are in place. The questions you need to ask are: if your private key gets compromised, how fast is their CRL updated? What measures are taken to prevent compromise of their intermediate certificates, and what standards are they applying to their operations?

    ·         Securing the environment: You will definitely want a secure network and securely configured hosts and applications. There is plenty of documentation on securing your routers and firewalls and on locking down your servers and IIS configuration. If you would like more information, please provide feedback and I will cover this one in more detail. Get yourself ready for IPv6. If you are planning a web site, or if you already have one running on an older system, consider moving to Windows Server 2008 R2.

    ·         Secure operations: Securing the environment is only the first half of the story; you need to keep it that way. This means you need to monitor your servers, keep them up to date, and upgrade them when necessary. A fully secured web server with no recent updates is a sitting duck ready to be used by criminals.

    ·         Secure your web application: Getting a security review of your web application is sometimes overlooked. No matter how good your developers are, you need a review from a security expert. The same is true when updating your web applications.

    ·         Intrusion prevention and detection: Even if you did everything to secure your environment, you will need to watch the activity on your web site. You need early warning signs if something unusual is happening. This requires delicate tuning, as these devices can create a lot of noise, which can easily become overwhelming.

    There are different standards you may need to adhere to, and you should check them out as well. For example, if you want to process credit cards, you need to look at PCI DSS. However, there is one more important part that needs your attention: DNS. The DNS protocol has been around for a long time, and when it was first introduced, security was not a concern. However, as the Internet grew, attacks based on DNS increased considerably. The worst part is that, because DNS is a distributed service, you need to trust other entities to provide security for the DNS service. When a client asks for a DNS name, the DNS server will ask several other DNS servers before returning an answer to the client. If any one of these servers is compromised, the client can be redirected to a different web server, one which may look just like the original web site but is actually designed to capture your username and password or credit card numbers. The best way to solve this problem is a standard that has recently been popularized, namely DNSSEC (Domain Name System Security Extensions).

    DNSSEC is specified in RFCs 4033-4035. It adds new operations to the DNS server and client and four new DNS record types (DNSKEY, RRSIG, NSEC and DS). DNSSEC digitally signs all records in a DNS zone. A client obtains the public key and validates that the responses are authentic. So when a client asks a question, the DNS server's answer is digitally signed, and each hop from DNS server to DNS server is known to be genuine as long as the signature is valid. DNSSEC is a feature of Windows Server 2008 R2 and Windows 7. If you want to learn more about DNSSEC on Windows, you can find more information here. Even clients that do not understand DNSSEC can still use the DNS servers in question, albeit without reaping the benefits of validation.
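    The chain of trust can be illustrated in miniature: a DS record in the parent zone is essentially a digest of the child zone's key, so a resolver can check that the DNSKEY it received matches what the already-trusted parent vouched for. This toy Python sketch uses a bare SHA-256 digest in place of real wire-format records and signatures; the key material is made up.

```python
import hashlib

def make_ds(dnskey: bytes) -> str:
    """Parent zone publishes a digest (DS) of the child's public key (DNSKEY)."""
    return hashlib.sha256(dnskey).hexdigest()

def validate_chain(dnskey: bytes, ds_from_parent: str) -> bool:
    """Resolver side: the received DNSKEY is trusted only if its digest
    matches the DS record obtained from the already-trusted parent."""
    return make_ds(dnskey) == ds_from_parent

child_key = b"example.com. DNSKEY 257 3 8 AwEAAc..."   # toy key material
ds = make_ds(child_key)                                 # published in the parent zone

assert validate_chain(child_key, ds)                    # authentic key accepted
assert not validate_chain(b"attacker key", ds)          # spoofed key rejected
```

    Real DNSSEC adds RRSIG signatures over the records themselves; the digest check above is only the parent-to-child link in that chain.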

    One of the most important blockers for wide DNSSEC implementation was the top-level DNS zones not being signed. As of this writing, most of the top-level zones have been digitally signed. One of the most important zones, .com, is expected to be signed early next year. This will be a key milestone in making DNSSEC mainstream.

    When you are planning your DNS infrastructure, you should keep the following in mind about DNSSEC:

    ·         Dynamic update is not supported. You should use DNSSEC on your external DNS zones and not on your internal DNS where clients use dynamic DNS.

    ·         DNSSEC is not a lightweight protocol. You will need extra bandwidth and strong servers to handle DNSSEC traffic.

    ·         Clients need to understand DNSSEC messages, which will come with new operating systems. Do not expect all clients trying to access your web site to be secured the moment you implement DNSSEC on your servers.

    DNSSEC will help secure the Internet, but it will need effort from all implementing parties. Start planning soon so you are not left behind.

    As always, feedback is welcome.

     

  • Why do I need to care for IPv6?

     

    The Internet uses a myriad of network protocols, the most important being IP, or Internet Protocol. This is the layer at which the network decides how to send a packet to a given destination. Currently we use IPv4, which has been with us for quite some time now and, as you can tell, is showing its age. There are a couple of pain points in IPv4 that can be solved by IPv6:

    ·        Address Space: IPv4 was designed with an address space of about 4 billion addresses. Back in the 1980s this was a huge number, given that only a handful of addresses were in use. However, the number of public IP addresses has grown to the limit; in fact, Network Address Translation (NAT) and Classless Inter-Domain Routing (CIDR) were introduced to alleviate the address depletion problem. The Number Resource Organization (NRO) has announced that almost 95% of addresses have been used, which means the last IP address blocks will probably be distributed within a year. If you want to provide an application on the Internet, you will probably need an IPv6 endpoint. IPv6 has 128-bit addresses, a vastly larger address space, and is already in use in Asia. We may well abandon NAT altogether once IPv6 is in use, which will greatly simplify network topologies and firewall configurations.

    ·        Security: When IPv4 was first designed, no security technology was needed. However, as the Internet grew, security became an issue and different protocols were created to solve the problems. IPSec is one of the security protocols that has been widely used. The good news is that IPv6 was designed with IPSec from the ground up, so as long as devices or servers support IPv6, a secure connection can be established easily between them.

    ·        Configuration: IPv4 addresses need to be configured either manually or with a DHCP service running on the network. Using DHCP can be a problem if there is more than one DHCP server on the same network. IPv6 has address autoconfiguration, so nodes can configure their own IP address and default gateway without DHCP.

    ·        Flow Priority: Prioritized real-time delivery of data is part of IPv4 but has limitations, such as the lack of packet prioritization for encrypted packets. IPv6 fully supports these capabilities and has enhanced handling of flow priority.
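    To get a feel for the difference in address space, Python's `ipaddress` module makes the arithmetic concrete:

```python
import ipaddress

# IPv4: 32-bit addresses, about 4.3 billion in total
print(ipaddress.ip_network("0.0.0.0/0").num_addresses)   # 4294967296

# IPv6: 128-bit addresses
print(ipaddress.ip_network("::/0").num_addresses)        # 2**128

# A single /64 subnet (the standard LAN size) already dwarfs all of IPv4
lan = ipaddress.ip_network("2001:db8::/64")
print(lan.num_addresses // 2**32)                        # 4294967296
```

    A single IPv6 LAN holds 2^32 times as many addresses as the entire IPv4 Internet.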

    Now that we have some understanding of what IPv6 can bring to your organization, let's talk about how to get prepared for it. The Internet backbone is already in the process of upgrading to IPv6, and most of that work is done. The major part of the work needs to be done inside the organization. IPv4 has been used for so long that we expect every node (devices and applications) to work seamlessly, but not every node will support IPv6. You will first need to identify the parts of your network that are not capable of using IPv6. Then you will need to plan on replacing those nodes, taking into account your device and application lifecycles. Most network devices are already IPv6-ready; what I have been seeing is that applications are still in the process of being upgraded to work with IPv6. If you want to learn more about developing applications that work with IPv6, you can attend Microsoft PDC10 October 28-29 online, or find the event closest to your home! See the map here.

    You do not need to wait until all of your devices are capable of supporting IPv6. There are transition technologies that will help IPv4 interoperate with IPv6. When you first start, you will probably have a small subnet running IPv6 and use these technologies to communicate with the rest of your internal network and the Internet. Gradually you will expand your IPv6 networks up to your network edge firewall.

    IPv6 is the future and there is clearly no escape from it. The more you postpone your planning, the more you will fall behind in adapting to the new networking capabilities of IPv6. I urge all readers to think about what can be done to embrace IPv6 in their environment and to create awareness of the upcoming changes.

    I would love to hear feedback on what you think of the posts you have been reading so far. Please provide ratings and suggestions so that I can provide better and more relevant information to you.

  • How do I keep my job and benefit from public cloud?

    There is a lot of thinking going on about how the cloud will change our lives. Some of the things done by IT professionals today will be handled by the cloud in the coming years. So what can IT professionals do now to stay relevant to the business in the future? There are specific areas where local expertise will still matter. Here is a list:

    ·        Business knowledge: Organizations moving to the cloud will have more time to focus on business-related issues. The successful IT pro will be more business-oriented and less deeply technical in nature. For example, instead of focusing on how and when to move mailboxes between sites, they will need to focus on compliance- and policy-related issues regarding messaging. Once these are set, they will be able to map the required settings for the messaging system, either on premises or in the cloud.

    ·        Security: When organizations start moving some of their services to the cloud, there will be a period when some services are provided by the cloud provider and some in house. It will be very important to provide secure communications between the services and clients, so edge security and network security skills will be at a premium inside organizations. For example, organizations will want different security measures for accessing their own applications in the cloud versus any other part of the World Wide Web. Compliance will mandate different security measures, and network security will be a very important focus of IT departments.

    ·        Identity management: As organizations shift up the Infrastructure Optimization model (more information on this is here), identity lifecycle management will become more important. They will need to define more policies around how identities are managed and secured. IT professionals will need to map how identities use different resources according to the given policies and plan their authorization. For example, policy will mandate that a new hire needs an e-mail account; IT pros will need to plan which security groups the new hire should be a part of and what e-mail alias will be used for him or her. Then the actual provisioning can be done through on-premises identity management solutions such as Forefront Identity Manager, or through cloud services.

    Organizations will keep some of their services in house for various reasons, and those areas will still need IT professionals. The specifics will vary among industries, but IT professionals will remain an important part of organizations for the foreseeable future.

  • Agile organizations and cloud computing

    The 21st century will mainly be about embracing change. Organizations will need to adapt to changing market conditions: the faster an organization can change its products or services and generate income from them, the more likely it is to survive. As part of this, companies will depend more and more on IT to provide the agility that fuels business value, and IT departments will need to restructure themselves to extract more value from their assets.

    For IT to prove itself as a strategic asset, it needs to mature in operations. Microsoft uses the Infrastructure Optimization model to measure how mature an IT organization is across different areas according to a set of criteria. The model has four stages for each area of operation:

    ·         Basic: Stage one, where there is no standardization, no automation and no integration among different systems. The IT organization is generally in reactive mode trying to put out fires, and there are no standard procedures or best practices for solving common day-to-day problems. For example, there is no centralized identity store such as Active Directory, and users log on to their laptops with local accounts.

    ·         Standardized: Stage two, where different processes are standardized. IT is still in reactive mode, but problems are categorized and best practices are available for common ones. However, measuring the quality of IT services is still nonexistent or depends on manual data collection. For example, all users are defined in Active Directory, but there is no integration with an HR system.

    ·         Rationalized: Stage three, where processes are highly automated and different systems are integrated. For example, there is automation between the HR system and AD: when a new employee is provisioned in the HR system, a user is created in AD. Note that for this to work, there must be consensus about what needs to happen when the company hires a new employee.

    ·         Dynamic: Stage four, where changes to the processes themselves are also under control. This is the stage where IT provides insight for the business and acts as a strategic asset for the organization.
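    A minimal sketch of the HR-to-directory automation described in the Rationalized stage, using plain Python dicts in place of a real HR system and Active Directory; all record fields, group names and the e-mail domain below are hypothetical:

```python
# Hypothetical sketch of rationalized-stage automation: when HR creates an
# employee record, a matching directory account is provisioned automatically.
# A real system would call the HR system's API and a directory service
# (such as Active Directory) instead of using in-memory dicts.

directory = {}  # stands in for the identity store (e.g. AD)

def on_new_hire(hr_record):
    """Provision a directory account from a newly created HR record."""
    username = (hr_record["first"][0] + hr_record["last"]).lower()
    account = {
        "username": username,
        "email": f"{hr_record['first']}.{hr_record['last']}@contoso.example".lower(),
        "department": hr_record["department"],
        # Group membership derives from agreed policy, not ad-hoc decisions.
        "groups": ["All Employees", f"Dept-{hr_record['department']}"],
    }
    directory[username] = account
    return account

acct = on_new_hire({"first": "Ada", "last": "Lovelace", "department": "Engineering"})
```

    The consensus mentioned above lives in the policy encoded by `on_new_hire`: the naming convention and group assignments must be agreed before the automation is built.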

    Most organizations do not realize how hard change is until the need arises. As you would guess, moving from one stage to the next takes quite a lot of effort, and the hardest part is not implementing the technology but changing the organization's perception of how things get done. Anyone who has talked to HR about why they need to define an identity lifecycle will quickly understand that installing and configuring Forefront Identity Manager alone will not solve the issue.

    When organizations start seeing the benefits of moving from one stage to another, it becomes easier to embrace the change. That does not mean change will be easy: it takes time to plan, test and implement the required processes using suitable technologies. The good news is that cloud technologies can ease the burden. Unfortunately, cloud can mean different things to different people. From an operational perspective there are two types of cloud:

    ·         Public Cloud: Companies providing cloud services, generally paid for as you use them. You are relieved of the need to acquire, provision and maintain assets for the service in question. Depending on the service, you have means to control availability and performance and change them when necessary. Most of the time organizations share resources with other organizations on the cloud fabric, though you can also choose to have resources dedicated to your own use.

    ·         Private Cloud: Cloud services you build to serve your own organization. This is more suitable for organizations with a large number of IT assets and an IT function mature enough to manage them efficiently.

    Small companies will not have the IT necessary to build private clouds and will partly move to public cloud infrastructure. There will be technical and non-technical concerns, most of which will be solved in the near future. We will see companies running mixed infrastructures, with the balance gravitating toward the public cloud where possible.

    Larger organizations will have more complex requirements and may choose to host some of their services in the public cloud, but the private cloud will also be a viable option that we will see materializing for them. When talking to customers about the cloud, building a private cloud seems like a natural evolution of IT. However, this is a delicate situation if IT is not mature according to the Infrastructure Optimization model: implementing the technology, even at massive scale, will not by itself produce a successful private cloud. You need the right skills, with enough people, to operate the cloud. You need an accepted business model inside the company to sustain the service levels. You need management willing to continuously improve efficiency, provide new services and retire old ones when necessary.

    Clearly, organizations will need to evolve in the 21st century; they will need to change the way they do business. The cloud can help by providing services that adapt to business needs, but it will not by itself create better value out of the business. Organizations will need to transform themselves to provide more value, leveraging the cloud as necessary.

  • How do you block applications like MSN Messenger and Skype in enterprise environments with Microsoft technologies?

    When productivity in enterprise environments is evaluated, applying control over Internet-based communication applications comes onto the agenda. Although this looks very simple from a manager's point of view, it is technically a hard problem. The main reason is that these applications have evolved to adapt to advancing security technologies.

    When Internet messaging first became popular years ago, users did not run firewalls, so each application was written to reach the Internet over its own ports. In enterprise environments, the problem could be solved by using the firewall to close every port except those used by web browsers.

    Second-generation applications started using HTTP, the protocol used by web browsers; they could even pass through the web proxy gateways used in enterprise environments. Since closing those ports on the firewall would cut off Internet access entirely, other solutions were needed. One possible method was to find the servers these applications connected to and block access to them. However, the service providers offering these services keep servers in many locations around the world for high availability, and their IP addresses change frequently. So even if you identified and blocked the IPs today, tomorrow MSN Messenger or a similar application could reach the Internet by trying other servers. Overcoming this problem required firewalls that also inspect the inside of the HTTP protocol. This is called application-level filtering: it became possible to tell from the header of a request whether the packets belonged to a web browser or to a messaging application, and to block the request.

    Some messaging applications escaped firewall restrictions by moving their communications to the encrypted HTTPS (SSL) protocol. Since the application-level packet contents were now encrypted, firewalls were helpless, and the danger was serious because malware, very dangerous for enterprise environments, started using the same methods while transferring itself to computers. HTTPS inspection was developed as the solution: when a user wants to connect to a site over SSL, the firewall answers with its own certificate and connects to the Internet site itself. The firewall can thus monitor the traffic passing through it.

    All the technologies mentioned above are available in Microsoft Threat Management Gateway (TMG) 2010. For this purpose, the Internet egress should be configured to allow only HTTP/HTTPS traffic; for security reasons, egress over other protocols should not be allowed anyway, except in special cases. Then HTTP inspection should be enabled and configured to stop requests whose User-Agent HTTP header carries the signature of the application in question (for example MSN Messenger). Finally, if needed, a certificate can be issued to the TMG server from the enterprise certificate authority, clients can be set to trust that certificate, and HTTPS inspection can be enabled for selected traffic. This feature can be configured very flexibly according to enterprise needs, and in particular the Malware Inspection feature prevents malicious code from being downloaded to user computers. In addition, TMG's category-based filtering can block access to the sites these applications connect to; because the category data is continuously updated, the maintenance burden mentioned above disappears.
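    The User-Agent filtering idea can be made concrete with a small sketch. This is not TMG's rule engine or API, just the matching logic such a firewall applies; the client signatures below are illustrative:

```python
# Minimal sketch of application-level filtering on the HTTP User-Agent
# header, as a firewall such as TMG performs it. The signature list is
# illustrative and would be kept current in a real deployment.
BLOCKED_SIGNATURES = ["msn messenger", "msmsgs", "skype"]

def is_blocked(headers):
    """Return True if the request's User-Agent matches a blocked client."""
    ua = headers.get("User-Agent", "").lower()
    return any(sig in ua for sig in BLOCKED_SIGNATURES)

# A browser request passes; a messenger client's request is stopped.
browser = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0)"}
messenger = {"User-Agent": "MSMSGS/15.4 (MSN Messenger)"}
```

    Note this only works on cleartext HTTP; for HTTPS traffic the firewall must first terminate SSL itself (HTTPS inspection) before it can see the headers at all.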

    Today it is getting harder for enterprises to protect themselves against threats coming from the Internet. With Microsoft Threat Management Gateway, it is possible to build flexible, highly secure solutions that answer the comprehensive needs of enterprise environments.

  • Microsoft Web TV Solutions (Part 3)

    IIS Media Services

    With Windows Server 2008 and Silverlight, an entirely web-based Web TV solution was born. Since the solution is entirely IIS-based, it was named IIS Media Services, and the delivery technology was named Smooth Streaming. The technology starts with the Silverlight client receiving, over HTTP, a stream description telling it which resolutions the stream is available in. The client then downloads the first 2-second chunk at the bandwidth it finds suitable and starts playback. Meanwhile, it evaluates parameters such as how fast the chunk arrived and the CPU utilization of the machine it runs on, decides how much bandwidth the next chunk should use, and adjusts the stream quality in real time to give the user the best possible experience. This is called adaptive streaming.

    From the network's point of view, the client appears to be accessing an ordinary web page, yet through the Silverlight interface the stream can be watched with advanced features. Chief among these is control over the stream timeline: when enabled on the server, the client can drop out of the live stream and resume from an earlier point in time. Because this requires only a 2-second chunk, the playback stalls seen with Windows Media Services do not occur, and the client can return to the live point whenever it wants. Fast or slow playback is achieved by the server extracting and sending only the key frames of the content. And because an audio chunk is requested alongside every video chunk, IIS Media Services makes it possible to listen to more than one audio track, and multiple subtitle tracks can be used in the same way.
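    The per-chunk decision described above can be sketched as a simple heuristic. The quality levels, the 20% headroom and the CPU threshold are assumptions for illustration, not Smooth Streaming's actual algorithm:

```python
# Illustrative sketch of adaptive bitrate selection as a Smooth Streaming
# client performs it: before each ~2-second chunk, pick the highest
# available bitrate that the measured throughput and CPU headroom can
# sustain. The quality levels and safety margins are assumptions.
AVAILABLE_KBPS = [300, 600, 1200, 2400, 4800]  # encoded quality levels

def pick_bitrate(measured_kbps, cpu_load):
    """Choose the next chunk's bitrate from throughput and CPU load (0.0-1.0)."""
    budget = measured_kbps * 0.8          # keep 20% headroom for jitter
    if cpu_load > 0.85:                   # decoder is saturated: cap quality
        budget = min(budget, 1200)
    candidates = [b for b in AVAILABLE_KBPS if b <= budget]
    return candidates[-1] if candidates else AVAILABLE_KBPS[0]
```

    Because the decision is revisited for every chunk, a drop in bandwidth or a CPU spike only degrades quality for the next couple of seconds instead of stalling playback.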

    HTTP content is usually delivered to customers through a CDN (Content Delivery Network) made up of HTTP proxy servers. The content provider thus has the heavy request load it receives absorbed by the CDN's HTTP proxy servers before it ever reaches the provider itself. To build a CDN with Microsoft technology, ARR (Application Request Routing), also a part of IIS, can be used. With this component, incoming requests can be cached and routed according to defined rules, and routing can perform health checks and load balancing across the back-end servers. Both live broadcasts and previously recorded content can be delivered over this system, and serving both needs from one platform lowers the investment cost. System management can be done with the platform features Microsoft provides (PowerShell, event forwarding), and System Center Operations Manager can be used for more advanced needs.
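    The ARR-style behavior described here, serve from cache when possible and route cache misses to a healthy back-end server, can be sketched as follows; the server names and the injected fetch function are hypothetical stand-ins for real origin servers:

```python
# Illustrative sketch of what an ARR-style reverse proxy does: cache
# responses and route cache misses to a healthy back-end server in
# round-robin order. Names and the fetch callable are hypothetical.
import itertools

class MiniProxy:
    def __init__(self, backends, fetch):
        self.backends = backends          # origin servers behind the proxy
        self.fetch = fetch                # callable(server, url) -> content
        self.healthy = set(backends)      # kept current by health checks
        self.cache = {}
        self._rr = itertools.cycle(backends)

    def get(self, url):
        if url in self.cache:             # cache hit: origin is not touched
            return self.cache[url]
        for _ in range(len(self.backends)):
            server = next(self._rr)       # round-robin over back ends
            if server in self.healthy:
                content = self.fetch(server, url)
                self.cache[url] = content
                return content
        raise RuntimeError("no healthy back-end server")

calls = []
def record_fetch(server, url):
    calls.append(server)
    return f"{url} via {server}"

proxy = MiniProxy(["s1", "s2"], record_fetch)
proxy.healthy.discard("s2")   # pretend a health check found s2 down
```

    A real ARR deployment expresses the same ideas, cache rules, health probes and server farms, through IIS configuration rather than code.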

    PlayReady DRM, the evolved version of WinDRM, is used to encrypt the stream or the content. It provides both higher-grade cryptographic functions and more flexible content-protection policies. When the Silverlight client first starts downloading the stream, it detects PlayReady DRM, connects to the DRM license server, downloads the required rights and certificate, and decrypts the stream.

    Smooth Streaming, the rich delivery format provided by IIS Media Services, differs from earlier versions. Unlike its predecessors, both VC-1 and H.264 can be used as content. However, content must be converted into the Smooth Streaming delivery format before it can be served to users. Microsoft Expression Encoder can be used to convert content in other formats into the required delivery or content format; it can encode both VC-1 and H.264 content. For live content, hardware-based encoder appliances can be used alongside this product; they stand out with the high-availability features they provide for long-running broadcasts.

    IIS Media Services continues to be developed independently of Windows releases. The upcoming version will bring the ability to stream to mobile devices from the same platform, along with many other innovations such as 3D broadcasts and an IPTV infrastructure.

    IIS Media Services are optional components that can be installed on top of Windows Server 2008 and R2 and downloaded from the Internet. Although it is a new technology, Smooth Streaming brings very significant innovations in both quality and convenience, and has already taken its place among the broadcast standards of the future.

  • Microsoft Web TV Solutions (Part 2)

    General information about streaming

    Conversion of the content plays a key role in determining stream quality. Some content looks good in its original format but gives very poor results in the new format if suitable filters are not used during conversion. The original quality of the content cannot be increased during conversion; this is why content produced before roughly 2000 (footage not shot with a digital camera) generally does not look as good as new content when transferred to digital format.

    Many conditions must be met for a stream to be watched without stalling: the bandwidth between server and client must be sufficient, the number of requests reaching the server must stay below a certain level, and the client must have enough processing power to decode/decrypt the downloaded content in real time and show it on screen. High-definition streams, for example, require more processing power than old clients can deliver.

    For a stream to reach large audiences it must be replicated. The server farm where the stream originates is called the origin servers. The stream is passed from the origin servers to distribution (relay) servers, where it is replicated as many times as needed and delivered to the end user. Relays are used especially often for live broadcasts.

    Windows Media Services (WMS)

    The first product Microsoft brought to market was Windows Media Services. In the early 2000s this solution pioneered the spread of high-definition (HD) streaming. It uses the storage format Microsoft developed, Windows Media (WMV, WMA, ASF, or by its industry-standard name, VC-1). Since Windows Media Player, which ships with Windows, recognizes this storage format on the client side, when the user goes to a web site and clicks on a stream, the WMS server starts sending the stream to the client, where Windows Media Player processes it and shows it to the user. Before the stream starts, the server tests how much bandwidth is available between it and the client over the Internet, and if the stream exists in more than one bandwidth version, the suitable one is selected and sent to the client.
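    The one-time version selection WMS makes after the bandwidth test can be sketched like this; the published bitrates are illustrative, and unlike adaptive streaming the choice is made once per session, not per chunk:

```python
# Sketch of the selection WMS makes when a stream is published in several
# bandwidth versions: after the initial bandwidth test, the highest version
# at or below the measured rate is chosen for the whole session. The
# published versions below are illustrative.
PUBLISHED_KBPS = [150, 300, 700, 1500]

def select_stream(tested_kbps):
    """Pick one published version after the bandwidth test."""
    fitting = [v for v in PUBLISHED_KBPS if v <= tested_kbps]
    return max(fitting) if fitting else min(PUBLISHED_KBPS)
```

    Because the choice is never revisited, a mid-session bandwidth drop shows up as stalls rather than as a quality step-down, which is exactly the weakness adaptive streaming later addressed.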

    The biggest advantage of this technology is that it ships with Windows and reaches the user without requiring any separate software installation. However, it uses the RTSP protocol (UDP/TCP 554) for streaming. Because this protocol is incompatible with today's web protocols (HTTP), firewalls do not pass it, so a version running over HTTP was also made. Since the stream has to pass inside HTTP packets subject to synchronization, stalls can be seen depending on how fast the stream arrives.

    Converting content to the VC-1 format can be done with Windows Media Encoder, either in real time or by converting content that exists in another format into Windows Media format. If desired, the stream can be turned into DRM-protected content using WinDRM technology. Once the content is encrypted, no further processing is needed to pass it through the streaming system.

    Relay technologies are needed to deliver a live broadcast to users; Windows Media proxy services can be used for this. That way the origin servers do not have to answer all the simultaneous stream requests themselves, and since the content is cached on the proxy servers, requests need not travel all the way back to the origin.

    Windows Media Services is still included as an optional component in the Windows Server 2008/R2 operating system and can be enabled on demand. As a widely accepted, proven technology, it is still the format used by content on many web sites.

  • Microsoft Web TV Solutions (Part 1)

    When the Internet first became popular, connection speeds were slow, so pages were full of text with only the occasional image. As bandwidth grew, images multiplied first and a livelier medium emerged. Today, bandwidth has reached the point where television broadcasts can be watched over the Internet. Delivering the rich content we are used to seeing on television at comparable quality over the Internet made it necessary to develop new technologies. Within the next few years, video is expected to make up roughly 80% of all Internet content by volume. In this article I want to talk about the streaming technologies developed by Microsoft.

    A glossary of streaming technologies

    Before discussing streaming technologies, it is worth going over the terms used in this industry. To broadcast a television stream over the Internet, a delivery format is needed first. The delivery format expresses at what resolution and in what kind of packets the stream is transferred to the client; example delivery formats include VC-1 and H.264. Beyond that, the codec format matters for displaying the stream's content on screen: it expresses how the content is laid out inside the file and with which compression scheme, in which order, it is decompressed. Example codec formats include WMV, MPEG-2 and MPEG-4. The stream is transferred from server to client in compressed form, made ready for display there by the appropriate codec (the decode step), and shown on screen.

    In some cases the stream must be captured from the original source, recorded, converted to another format if necessary, and sent out on request; content shown this way is called Video On Demand. In other cases the stream must be format-converted in real time as it is captured from the source and broadcast without ever being written to disk; this is called real-time or Live Broadcasting.

    Protecting streams with specific rights is called Digital Rights Management (DRM). The process fundamentally rests on encrypting the file with a symmetric key and then storing that key under much stronger asymmetric encryption. To unlock this second layer, at the start of the stream or file the site it was downloaded from is contacted, an authorization check is performed, and the relevant key is downloaded. The granted rights can vary with the DRM technology and the authorization in use. Once the cryptographic setup is complete, the client application decrypts the stream (the decrypt step) and shows it on screen.
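    The two-layer scheme described, content under a symmetric key and the key released only after an authorization check, can be sketched as follows. The XOR keystream below is deliberately a toy and is NOT real cryptography (a real DRM uses strong ciphers such as AES); the license-server lookup stands in for the asymmetric key protection and rights check:

```python
# Toy sketch of the DRM flow described above: content is encrypted with a
# symmetric key; the player must obtain that key from a license server
# after an entitlement check before it can decrypt. The XOR "cipher"
# here is NOT real cryptography, only an illustration of the flow.
import hashlib
from itertools import count

def keystream(key):
    """Derive a deterministic byte stream from the key (toy construction)."""
    for i in count():
        yield from hashlib.sha256(key + i.to_bytes(8, "big")).digest()

def xor_cipher(data, key):            # same function encrypts and decrypts
    return bytes(b ^ k for b, k in zip(data, keystream(key)))

LICENSES = {"alice": b"content-key-42"}  # the license server's key store

def acquire_license(user):
    """Entitlement check: only licensed users receive the content key."""
    if user not in LICENSES:
        raise PermissionError("no rights for this content")
    return LICENSES[user]

plaintext = b"frame 1 of the broadcast"
protected = xor_cipher(plaintext, LICENSES["alice"])  # what gets distributed
```

    The point of the structure is that the protected content itself can be distributed freely (over CDNs, proxies, downloads); only the small key exchange has to go through the authorization step.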

    In the next part I will give general information about streaming and about Windows Media Services.