In this series of posts I will walk you through the process of creating an Active/Active SQL Server cluster using Hyper-V and the Microsoft iSCSI Software Target as a virtualized SAN. The plan is to first create a storage server hosted on a normal Windows Server 2008 R2 machine, then connect two other machines to it as iSCSI initiators. I will then create the Windows cluster along with a clustered DTC service, followed by a clustered SQL Server instance. Finally, a second clustered SQL Server instance will be created and the Active/Active configuration of both instances will be applied.
The solution is fairly simple, as shown in the configuration below.
You need to create three virtual machines as illustrated above: one as the AD and storage server, and another two as the SQL Server nodes that will act as the Active/Active pair.
These are all Windows Server 2008 R2 servers; we have created the domain and joined all servers to it. You also need to set up two network cards in each machine: one for the normal LAN connection and another for the cluster heartbeat. If you have heavy storage usage, it is advisable to separate the storage traffic onto a third network. The configuration given here is all static, with normal local IPs assigned to all network cards.
In this section we will go through the steps needed to create the virtualized SAN based on the storage server.
1- Download the required iSCSI target software from http://www.microsoft.com/download/en/details.aspx?id=19867.
2- Copy the software to the storage server (UK-LIT-AD in this case).
3- Double-click the file to start the installation.
4- After the installation completes it will take you to a web page; scroll down, click the link to the installer, and then click Install.
5- Now open Server Manager and you will find a new node in the tree for the iSCSI target.
6- Create a new iSCSI target and give it a name (just any name).
7- In the initiator list, click Advanced and enter the domain names of the servers that will have access to this target. In our case these are UK-LIT-DB1 and UK-LIT-DB2.
8- If it displays a warning about multiple initiators, just accept it.
9- Click Finish. You have now completed the creation of your iSCSI target; what remains is to add the required virtual disks to it.
10- Create a new virtual disk, place the new VHD, and give it a name.
11- Choose the disk size.
12- Click Finish; this creates the fixed-size disk.
13- You will need to create the following disks, so just follow the same approach:
- Quorum: cluster quorum disk
- DTC1: DTC cluster 1 log disk
- DTC2: DTC cluster 2 log disk
- SQL1: SQL cluster 1 shared disk
- SQL2: SQL cluster 2 shared disk
Now we will configure the two SQL nodes to be able to access these disks.
1- Log on to the first node, UK-LIT-DB1.
2- Open the iSCSI Initiator.
3- Change the initiator name to match the machine name.
4- On the Discovery tab, add a new discovery portal using the IP of the storage server.
5- Click the Targets tab and click Refresh to show the available targets.
6- Click Connect, then OK.
7- Go to the Volumes and Devices tab and click Auto Configure.
8- Repeat the same steps on UK-LIT-DB2, starting at step 1 above, but change the initiator name to match that machine name.
9- On either of the two nodes, open Server Manager and then Disk Management.
10- Bring all disks online on this node, then prepare them with primary partitions and format them using NTFS.
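On Windows Server 2008 R2 the online/partition/format step can also be scripted with diskpart. A minimal sketch for one disk; the disk number, label, and drive letter here are assumptions, so check yours with `list disk` first:

```
DISKPART> select disk 1
DISKPART> online disk
DISKPART> attributes disk clear readonly
DISKPART> create partition primary
DISKPART> format fs=ntfs quick label="Quorum"
DISKPART> assign letter=Q
```

Repeat the sequence for each of the shared disks created earlier.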
In the next parts I will show you how to configure the Active/Active SQL cluster.
Before Windows Server 8 and PowerShell 3.0 there were no native PowerShell cmdlets to manage your local, virtual, or remote disks. You had only the choices below:
- Using Diskpart (easy for basic tasks but not flexible)
- Using WMI (flexible but hard to use)
Diskpart has its own syntax and does not let you combine it with PowerShell commands.
Within PowerShell, the only option for disk management was the WMI library. Here are two examples:
Get-WmiObject -Query "Select * from Win32_LogicalDisk" | Format-Table

$Item = @("DeviceId", "MediaType", "Size", "FreeSpace")
Clear-Host
Get-WmiObject -Computer YourMachine -Query `
    "Select $([string]::Join(',',$Item)) from Win32_LogicalDisk Where MediaType=12" |
    Sort-Object MediaType, DeviceID |
    Format-Table $Item -AutoSize
Not very user friendly, right?
In Windows Server 8 and PowerShell 3.0 you now have native disk management cmdlets for local, virtual, and remote disks.
In this blog post we’ll cover some of these great commands with examples.
PowerShell 3.0's redesigned ISE has a right-hand pane that shows all available commands in a GUI. My previous blog post covers it: http://blogs.technet.com/b/meamcs/archive/2012/03/30/powershell-3-0-shell-from-future.aspx
The search function within this pane filters the commands.
Typing "disk" brings up all available commands related to disk management.
Let's dig into some of these commands in PowerShell 3.0. They are native commands and don't require importing a module.
First basic command is Get-Disk.
Get-Disk returns all available disks, or a filtered list based on the criteria you specify.
I used Get-Disk to list all disks with only the Number, OperationalStatus, Size, and partition columns; that kind of projection lets you manage disks with great flexibility.
You can also use the Get-Disk cmdlet with various parameters, such as:
- Number: returns information for the disk with the specified disk number.
- BusType: returns only disks connected via the specified bus type.
Get-Disk | Where-Object -FilterScript {$_.BusType -Eq "USB"}
Get-Disk | Where-Object -FilterScript {$_.BusType -Eq "iSCSI"}
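The Number parameter works the same way, and the columns mentioned above can be selected with Format-Table. A quick sketch (disk number 0 and the column choice are assumptions for illustration):

```powershell
# Return only the disk with number 0
Get-Disk -Number 0

# Project a few useful columns for all disks
Get-Disk | Format-Table Number, OperationalStatus, BusType, Size -AutoSize
```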
Another useful cmdlet is Get-Volume. It returns a single volume object identified by the parameters you pass, or the set of volumes that meet the given criteria.
Also you can use Get-Partition to get a list of all available partitions.
Get-Partition returns all partitions on all disks. To filter its output there are several useful parameters:
DiskNumber: returns all partitions on the specified disk.
DriveLetter: returns the partition associated with the specified volume.
PartitionNumber: specifies the partition number.
There is no native parameter to filter by partition type, but you can use the -FilterScript parameter of Where-Object to filter on the Type column.
Get-Partition | Where-Object -FilterScript {$_.Type -Eq "Basic"}
Other than getting detailed information about partitions, you can use the New-Partition cmdlet to create new partitions on an existing disk object.
The -UseMaximumSize parameter uses the maximum available space, and the -AssignDriveLetter parameter automatically assigns a drive letter to the partition as it is created.
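A minimal sketch of that usage; disk number 4 matches the example discussed below, so adjust it to your system:

```powershell
# Create one partition spanning the whole of disk 4 and give it a drive letter
New-Partition -DiskNumber 4 -UseMaximumSize -AssignDriveLetter
```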
But creating a new partition does not mean it is formatted. To format an existing partition you should use the Format-Volume cmdlet.
In the example below, I create a new partition on disk 4 and then pipe it to the Format-Volume cmdlet.
Format-Volume always asks you for confirmation; to bypass the confirmation step, use the -Confirm parameter with a $false value.
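A sketch of that pipeline, again assuming disk 4:

```powershell
# Create the partition and format it in one pipeline, suppressing the prompt
New-Partition -DiskNumber 4 -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem NTFS -Confirm:$false
```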
Just as you can create new partitions, you can also remove existing ones with the Remove-Partition cmdlet. It requires the DiskNumber and PartitionNumber parameters.
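For example, still assuming disk 4 from the previous steps:

```powershell
# Remove the first partition on disk 4 without prompting
Remove-Partition -DiskNumber 4 -PartitionNumber 1 -Confirm:$false
```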
Using PowerShell means you can do bulk operations with all available commands. Let me show you a basic example.
I created a text file with DiskNumber as the first line (the column header) and a disk number on each of the remaining lines. It is located at C:\Disks.txt on my computer.
I also have 4 disks, and all of them are offline.
To bring all of those disks online, I can use the one-line PS command below.
First it loads the C:\Disks.txt file into the shell, and then for each line it runs the Set-Disk command; $_.DiskNumber matches the DiskNumber column in the text file and reads each value.
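The one-liner described above would look roughly like this; Import-Csv treats the first line as the column header, and setting IsOffline to $false brings each disk online:

```powershell
# Bring every disk listed in C:\Disks.txt online
Import-Csv C:\Disks.txt | ForEach-Object { Set-Disk -Number $_.DiskNumber -IsOffline $false }
```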
You can change the drive letter of an existing partition with the Set-Partition cmdlet.
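For instance (the letters E and F are placeholders for your own volumes):

```powershell
# Reassign the volume currently at E: to the letter F:
Set-Partition -DriveLetter E -NewDriveLetter F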
Resizing an existing partition is also very straightforward.
I have a single partition on disk 2 that was created with all of the available maximum size (1021 MB).
Now I can use the Resize-Partition cmdlet and the -Size parameter to resize the existing partition.
The Get-PartitionSupportedSizes cmdlet returns information on the supported partition sizes for the specified disk object.
For disk 2 there is one single partition (partition 1) with a maximum size of 1072627712 bytes.
I can assign this value to a variable ($a), which helps me resize the partition to its maximum available size.
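Putting that together, a sketch that assumes the cmdlet exposes the maximum as a SizeMax property (the property name may differ on your build, so inspect the output first):

```powershell
# Grow partition 1 on disk 2 to the largest supported size
$a = (Get-PartitionSupportedSizes -DiskNumber 2 -PartitionNumber 1).SizeMax
Resize-Partition -DiskNumber 2 -PartitionNumber 1 -Size $a
```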
Another command is Repair-Volume.
The Repair-Volume cmdlet performs repairs on a volume; the sub-function you specify is called as follows:
- Fix: calls the legacy scan-and-fix behavior (chkdsk /f).
- Scan: runs the scan only; all detected corruptions are added to the $corrupt system file (chkdsk /scan).
- SpotFix: calls the spot-fix functionality by taking the volume offline, then fixes only the issues logged in the $corrupt file (chkdsk /spotfix).
Providing a DriveLetter and the repair type is enough to run it.
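For example (drive D is a placeholder):

```powershell
# Online scan of drive D:, logging corruptions for a later SpotFix
Repair-Volume -DriveLetter D -Scan
```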
DiskPart and WMI objects were never well suited to designing complex scripts, but PowerShell 3.0 now includes all the cmdlets required to manage your local, virtual, or remote disks.
Anıl ERDURAN
Here is part 4 of the System Center 2012 Orchestrator runbook concept series.
In this blog post we’ll start to cover designing basic and complex runbooks.
Previous parts:
Part1 - System Center Orchestrator 2012 – Test & Start RunBooks
Part2 - System Center 2012 Orchestrator–RunBook Activities
Part3 - System Center 2012 Orchestrator–RunBook Basics
Orchestrator lets you configure conditions with smart links. Smart links connect individual activities in a runbook and support precedence between two activities.
As soon as the previous activity finishes, the link invokes the next activity. The best part is that you can set link conditions to determine the direction of the workflow.
For example, you can monitor a folder and trigger the next activity for each changed or deleted file within that folder. In that situation you can split your workflow in two, configuring separate activities for the change and delete triggers:
If the Monitor File activity notices a change event, the smart link redirects the workflow through the upper activities.
If the Monitor File activity notices a delete event, the smart link redirects the flow through the lower activities.
Look at the simple design;
For such a scenario, here are the basic configuration steps;
The Monitor File activity monitors the C:\Test folder for changed and deleted items.
If you click the first link,
you'll notice that the link will invoke the next activity (Append Line) if the Monitor File activity returns a Success value. Please note that this is the default condition for every smart link.
Include tab specifies conditions that enable data to flow to the next activity.
The Exclude tab specifies the conditions that cause data to be excluded from the next activity. By default, if an activity fails, the link will not invoke the next activity.
Also, on the Options tab you can configure link width and color. This is especially important in complex runbooks to highlight failover scenarios.
Trigger Delay sets the number of seconds that you want the smart link to wait before invoking the next step in the runbook.
In this runbook example the Append Line activity is not mandatory; you could carry the changed and deleted values from the monitor activity to the links directly. I just used it to show the triggered events.
It appends a line to the C:\Status.txt file with the value of Change Type from the Monitor File activity.
Published data is also covered in the following article:
http://blogs.technet.com/b/meamcs/archive/2012/03/09/system-center-2012-orchestrator-runbook-activities.aspx
Now here is where the magic starts. Click the smart link to set a custom condition.
I want to set this condition: Status.txt from the Append Line activity contains a line reading "Changed". This ensures the link will invoke the next activity (Send Event Log Message) only if the condition is true.
A similar condition goes on the other smart link, so that it invokes its next activity (Send Email) only if its condition is true.
So if I change any file within C:\Test folder, smart link condition will redirect flow through the upper link and send an event log.
Let’s test with RunBook Tester.
Monitor file checks for changed and deleted files in C:\Test
I changed a file within C:\Test folder and saved it.
Orchestrator detects the change and hops to the next activity.
The Append Line activity appends the string "Changed" to the status.txt file.
Now only the matched link condition continues on its way.
As you see below, Send Event Log Message is invoked by the upper smart link.
Here is the log file created by Orchestrator.
Because filtering in smart links is based on data published by previous activities, you can also use smart link conditions with PowerShell-based runbooks.
Here is another example. It simply reads a text file to get the required links, checks the links with the Select-String cmdlet, and directs the flow to the related branch. The rest of the workflow copies files to remote servers and deletes the downloaded source files.
Read Line activity just reads two lines.
The Links.txt file includes two different URLs to download FileA or FileB from the Internet.
To find out which link (FileA or FileB) is provided within the txt file, I wrote a little custom PowerShell script.
It scans the text file and searches for the DownloadFileA or DownloadFileB strings. If there is a line that includes one of those strings (the match is not null), the StatusA or StatusB variable gets the value filea or fileb.
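A hypothetical reconstruction of that script; the file path and variable names are assumptions based on the description above:

```powershell
# Read the links file and look for the two download markers
$links = Get-Content "C:\Links.txt"
$matchA = $links | Select-String -Pattern "DownloadFileA"
$matchB = $links | Select-String -Pattern "DownloadFileB"

# Set the status variables consumed by the smart link conditions
if ($matchA -ne $null) { $StatusA = "filea" }
if ($matchB -ne $null) { $StatusB = "fileb" }
```

StatusA and StatusB are then configured as Published Data so the smart links can filter on them.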
To pass these variable values to the next activity and links, simply configure them as Published Data.
Now I can tell my smart links to filter only on the static StatusA and StatusB values.
Finally, if there is a URL in the Links.txt file containing the string "downloadfilea", the smart link redirects the flow through the upper activities; links that include the "downloadfileb" string flow through the lower activities.
Before finishing this blog post, one final important activity: Junction. It allows you to wait for multiple branches in a runbook to complete before continuing past the junction.
The Junction activity can also republish data from any one branch so that downstream activities past the Junction can consume it. Data from branches other than the one you selected will not be available.
On many occasions you will be faced with the need to move workflows you developed using SharePoint Designer from one site to another. This might be the case if, for example, you developed and tested the workflow in a test environment and now want to move it to production without re-developing it.
The steps to perform this are rather simple:
Open SharePoint Designer and open the source site. Click the Workflows link on the left, then the workflow you want to move, and click the Export to Visio button on the ribbon.
Now rename the exported file (a VWI file) to a ZIP file and open it with Windows Explorer. You will find a file named "workflow.xoml.wfconfig.xml".
Just delete this file, then rename the archive back to a VWI file.
Now open the destination site, click Import from Visio, and browse to the edited VWI file. This re-associates your workflow as if it had been created in or exported from Visio rather than the Designer, while preserving all development made in the workflow itself.
In part 1 of this series I showed you how to configure the virtual storage required for the cluster. In this part I will show you how to create the SQL cluster as an Active/Passive cluster and in the next part I will show how to convert it to an Active/Active cluster.
Now that we have configured the storage, we can start the Windows failover cluster configuration.
1- Install the Windows failover clustering feature on both nodes from the Add Features wizard.
2- Bring all shared storage online on the current node.
3- Open the cluster management console and click Create Cluster. Note that at this stage it is preferable to disable all disks on the iSCSI target except the one that will be used as the quorum.
4- On the Select Servers page, click Browse and select the two nodes.
5- Perform the cluster validation by choosing to run the cluster validation wizard.
6- Select all tests.
7- Review the validation report and make sure there are no validation errors.
8- Back in the Create Cluster wizard, give the new cluster a name and an unused IP.
9- The cluster is created, and the disk assigned to the first LUN is treated as the quorum disk of the cluster.
10- If you disabled all disks except the quorum disk on the iSCSI target, you will need to add each of them to the cluster as new storage once it is needed. It is advisable to add each disk only when you need it.
11- Enable the first disk that will be used for the first clustered DTC.
12- In the cluster management console, add the new storage.
13- Go to the Services and Applications node, click Configure a Service or Application, select the DTC service, and click Next.
14- You can change the resource name if you want, but you have to give it an unused IP.
15- Select the disk.
16- Click Finish on the confirmation screen.
17- The Windows cluster is now prepared and ready for SQL Server installation, with one clustered DTC instance.
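On Windows Server 2008 R2 the same cluster can also be created from the FailoverClusters PowerShell module; a hedged sketch, where the cluster name and IP address are placeholders not taken from this walkthrough:

```powershell
# Create the two-node cluster from either node
Import-Module FailoverClusters
New-Cluster -Name "UK-LIT-CLU" -Node UK-LIT-DB1,UK-LIT-DB2 -StaticAddress 192.168.1.50
```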
1- Go to the iSCSI target and create or add the shared disk to be used by the SQL cluster.
2- Open the SQL Server setup and click New SQL Server failover cluster installation.
3- Go through the normal setup process.
4- Enter the SQL cluster name and leave it as the default instance (or name the instance if you require).
Please note that if you are using any virtualization technology other than Hyper-V and have installed the guest additions, you will need to uninstall these additions and restart the servers, or the above step will fail and report that it cannot validate the settings.
5- This completes the installation of the first SQL cluster on the first node.
6- Log on to the second SQL node, start the SQL setup, and choose to add a node to a SQL Server failover cluster.
7- That completes setting up the second node for this SQL cluster.
Now we will go through the installation of a second clustered SQL instance, to be made active on the passive node later.
1- Go to the iSCSI target and create or add another shared disk to be used by the second SQL cluster.
2- On one of the nodes, open the iSCSI Initiator and click Auto Configure on the Volumes and Devices tab again.
3- Open the Disk Management utility, create the primary partition on this disk, and format it using NTFS.
4- Open the Windows cluster management console and add this disk to the cluster.
5- Open the SQL Server setup and click New SQL Server failover cluster installation.
6- Go through the normal setup process.
7- Enter the SQL cluster name and the instance name as BCInst.
8- Choose the already added disk.
9- Choose a unique IP for this cluster.
10- This completes the installation of the second SQL cluster on the first node.
11- Log on to the second SQL node, start the SQL setup, and choose to add a node to a SQL Server failover cluster.
12- Choose the new cluster BCInst.
13- That completes setting up the second node for this SQL cluster.
In the next part I will show you how to configure the two created SQL instances in an Active/Active SQL configuration.
In part 1 of this series I showed you how to configure the virtual storage required for the cluster, and in part 2 how to configure two SQL instances on the Windows cluster. In this part I will show you how to configure these two SQL instances in an Active/Active configuration.
Since we need an Active/Active configuration, and we do not want either instance to depend on components from the other, we will add a second clustered DTC service to the Windows cluster. This allows the DTC service to be separated between the two instances. I will also show you how to make each SQL service depend on its related DTC instance, so that the DTC moves along with the SQL instance.
1- Right-click the Services and Applications node and choose to create a new one.
2- Choose DTC.
3- Give it a name and a unique IP.
4- Select the available storage.
5- The second clustered DTC instance is created.
1- Move one SQL instance and one DTC to the server UK-LIT-DB1.
2- Make sure the other SQL instance and the other DTC are moved to the other server, UK-LIT-DB2.
1- Right click on the first SQL instance and click add resource
2- Select the available DTC (with GUID) service
3- Click next and finish
4- Bring the new resource online
5- Create a dependency between the SQL Server service and the newly added DTC resource.
6- Create a dependency between the newly added DTC resource and the SQL Server cluster name and disk, to make sure it moves with them.
7- Right click on the second SQL instance and click add resource
8- Select the available DTC (with GUID) service
9- Click next and finish
10- Bring the new resource online
11- Create a dependency between the SQL Server service and the newly added DTC resource.
12- Create a dependency between the newly added DTC resource and the SQL Server cluster name and disk, to make sure it moves with them.
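The dependency steps above can also be scripted with the FailoverClusters module on Windows Server 2008 R2. The resource names below are assumptions; list the real names on your cluster first:

```powershell
Import-Module FailoverClusters

# Find the exact resource names on this cluster
Get-ClusterResource

# Make the SQL Server resource depend on its DTC resource
# ("SQL Server (BCInst)" and "MSDTC-BCInst" are example names)
Add-ClusterResourceDependency -Resource "SQL Server (BCInst)" -Provider "MSDTC-BCInst"
```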
The dependency report for one of the SQL server clusters should look something like the below diagram.
Now you need to make sure that the preferred owner for each pair of SQL and DTC instances is one of the two nodes:
- services that have UK-LIT-DB1 as the preferred owner
- services that have UK-LIT-DB2 as the preferred owner
This makes the two nodes work together as an Active/Active SQL cluster, with the appropriate services running on both. If you open the first node you will find one clustered SQL Server instance and one clustered DTC running; on the second node you will find the other clustered SQL Server instance and its associated clustered DTC.
Happy clustering
Network adapter teaming, also known as load balancing and failover (LBFO), allows multiple network adapters on a computer to be placed into a team for the following purposes:
§ Bandwidth aggregation
§ Traffic failover to prevent connectivity loss in the event of a network component failure
This feature has been a requirement for independent hardware vendors (IHVs) to enter the server network adapter market, but network adapter teaming has not been included in Windows Server operating systems before. Windows Server 8 Beta now has built-in NIC teaming support across different NIC hardware types and makers.
Network adapter teaming requires the presence of a single Ethernet network adapter, which can be used for separating traffic that is using VLANs. All modes that provide fault protection through failover require at least two Ethernet network adapters. Windows Server “8” Beta supports up to 32 network adapters in a team.
Teaming Modes:
The server used in this post has four network interfaces with the following characteristics:
- LAN01 (1 Gbps)
- LAN02 (1 Gbps)
- LAN03 (1 Gbps)
- LAN04 (1 Gbps)
By the end of this post we should have two network teams configured as per the table below:

Name      NIC Members      Teaming Mode
Team01    LAN01, LAN02     Static teaming
Team02    LAN03, LAN04     Switch independent
Here's a screenshot of the server NICs before team creation.
To configure NIC teaming using PowerShell:
New-NetLbfoTeam -Name "Team01" -TeamMembers LAN01,LAN02 -TeamingMode Static
New-NetLbfoTeam -Name "Team02" -TeamMembers LAN03,LAN04 -TeamingMode SwitchIndependent
To get the teaming properties and settings in PowerShell:
Get-NetLbfoTeam
Name : Team01
Members : {LAN02, LAN01}
TeamNics : Team01 - Default
TeamingMode : Static
LoadBalancingAlgorithm : TransportPorts
Status : Up
Name : Team02
Members : {LAN04, LAN03}
TeamNics : Team02 - Default
TeamingMode : SwitchIndependent
After successful creation of the teams, new team adapter icons will appear under Network Connections.
To get all of the PowerShell commands available for NetLBFO
Get-Command -Module NetLbfo
Capability Name ModuleName
---------- ---- ----------
CIM Add-NetLbfoTeamMember NetLbfo
CIM Add-NetLbfoTeamNic NetLbfo
CIM Get-NetLbfoTeam NetLbfo
CIM Get-NetLbfoTeamMember NetLbfo
CIM Get-NetLbfoTeamNic NetLbfo
CIM New-NetLbfoTeam NetLbfo
CIM Remove-NetLbfoTeam NetLbfo
CIM Remove-NetLbfoTeamMember NetLbfo
CIM Remove-NetLbfoTeamNic NetLbfo
CIM Rename-NetLbfoTeam NetLbfo
CIM Set-NetLbfoTeam NetLbfo
CIM Set-NetLbfoTeamMember NetLbfo
CIM Set-NetLbfoTeamNic NetLbfo
To manage teaming from Server Manager:
Team creation, configuration, and management can also be done through the Windows Server 8 Beta Server Manager, as shown below.
Windows Server “8” Beta introduces Scale-Out File Server with features that let you store server application data, such as a Hyper-V virtual machine files, on file shares, and obtain a similar level of reliability, availability, manageability, and high performance that you would expect from a storage area network.
In Windows Server “8” Beta, Scale-Out File Server is designed to provide scale-out file shares that are continuously available for file-based server application storage. Scale-out file shares provide the ability to share the same folder from multiple nodes of the same cluster. For instance, if you have a four-node file server cluster that is using Server Message Block (SMB) scale-out, a computer running Windows Server “8” Beta can access file shares from any of the four nodes. This is achieved by leveraging new features in the Windows file server protocol, failover clusters in Windows Server, and SMB 2.2. Administrators can provide scale-out file shares and continuously available file services to server applications and respond to increased demands quickly by simply bringing more servers online. All of this can be done in a production environment, and it is completely transparent to the server application.
Key benefits provided by Scale-Out File Server in Windows Server “8” Beta include:
In this post we will walk through building and configuring a transparent-failover file server cluster composed of two nodes, ServerA.foresta.local and ServerB.foresta.local.
Note: Storage LUNs used in this post are created on Windows Server 8 Beta iSCSI target server that was detailed in previous post.
ServerB will then be shown on the Select destination server page; complete the installation as per the previous steps.
Active Directory Recycle Bin helps minimize directory service downtime by enhancing your ability to preserve and restore accidentally deleted Active Directory objects without restoring Active Directory data from backups, restarting Active Directory Domain Services (AD DS), or rebooting domain controllers.
When you enable Active Directory Recycle Bin, all link-valued and non-link-valued attributes of the deleted Active Directory objects are preserved and the objects are restored in their entirety to the same consistent logical state that they were in immediately before deletion. For example, restored user accounts automatically regain all group memberships and corresponding access rights that they had immediately before deletion, within and across domains.
Many people have been looking for a simplified GUI to restore deleted objects, and it is now available in Windows Server 8 Beta.
In this post we will walk through configuring Active Directory Recycle Bin, then deleting and recovering a test user.
Environment details:
To enable Active Directory Recycle Bin using the Enable-ADOptionalFeature cmdlet
Important note:
To enable Active Directory Recycle bin the AD forest functional level has to be Windows Server 2008 R2 or later.
Note: in this post we are using Windows PowerShell ISE
2. Type the following cmdlet
PS C:\> Enable-ADOptionalFeature -Identity 'CN=Recycle Bin Feature,CN=Optional Features,CN=Directory Service,CN=Windows NT,CN=Services,CN=Configuration,DC=xyz,DC=local' -Scope ForestOrConfigurationSet -Target 'xyz.local'
3. Once Active Directory Recycle Bin is enabled, create a test01 user and delete it.
To recover a deleted object:
1. Open Server Manager, go to AD DS, right-click the domain controller, and open Active Directory Administrative Center.
2. Click the domain name and then select Deleted Objects.
The deleted user test01 will appear under the Deleted Objects container. Right-click the deleted user and two restore options will appear:
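The same restore can also be done from the Active Directory PowerShell module; a sketch assuming the test01 account deleted above:

```powershell
# Find the deleted object (searching tombstoned objects too) and restore it
Get-ADObject -Filter 'samAccountName -eq "test01"' -IncludeDeletedObjects |
    Restore-ADObject
```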
Another great feature of Windows Server 8 is the ability to run your PowerShell commands in your browser from anywhere. This is a really amazing feature: while PowerShell is becoming a must-have tool for almost all Microsoft products, using it from your mobile phone or browser adds great value to day-to-day tasks.
This feature is called Windows PowerShell Web Access, and it ships as a Windows Server 8 feature.
In this blog post we’ll discuss how to install and configure this amazing feature.
PowerShell Web Access is not enabled by default in Windows Server 8, so first you have to add the feature using the newly designed Server Manager.
Open your Server Manager and click Add Roles and Features
Click Next on the Welcome page.
On the Features page, select Windows PowerShell Web Access.
Some IIS features are required to manage PowerShell over IIS.
Finish the installation.
As you may notice, setup warns us about additional configuration requirements.
These are:
For testing purposes, you can use one simple PowerShell command to complete all of the above steps.
Install-PswaWebApplication -UseTestCertificate
This command creates required application pool and web application. Also it creates and binds a self-signed certificate.
Please note that using a test certificate in a production environment is not recommended due to security reasons.
Here is the virtual directory that is created by command.
In order to connect to the /pswa virtual directory remotely, we have to create an authorization rule. By default no authorization rule is defined; you can check this with the Get-PswaAuthorizationRule cmdlet.
To create an authorization rule, the Add-PswaAuthorizationRule command can be used. Beforehand, I created an Active Directory group called PSAdmins, added the administrator account as a member, and then used the -UserGroupName parameter.
You can also use the -UserName parameter to create an authorization rule for an individual user object.
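A sketch of the rule creation; the group, computer, and session configuration names here are examples, not values from this environment:

```powershell
# Allow members of XYZ\PSAdmins to run the default session configuration on Server01
Add-PswaAuthorizationRule -UserGroupName "XYZ\PSAdmins" -ComputerName "Server01" -ConfigurationName "Microsoft.PowerShell"
```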
Authorization rules are XML files stored in the Windows\Web\PowerShellWebAccess\Data directory; you can manually edit and configure these files.
Now, on my Windows 8 client machine, I browse to the related server with the /pswa directory.
It gives a certificate error, as expected with a self-signed certificate.
I provide my credentials.
And here is the web-based PowerShell. It is connected to the remote machine, and all executed PS commands run on that machine.
Now you can manage your systems with PowerShell from anywhere; the only requirement is access to the /pswa directory over HTTPS. That's really cool!
Anıl Erduran.