Streaming live from San Francisco on June 7th, 2012
An innovative online and in-person event introducing cloud-based development technologies from Windows Azure. Make sure you watch the event live online from 21:00 to 00:00 BST.
Chris Testa-O'Neill is a Senior Consultant for Coeo Ltd, a leading provider of SQL Server Managed Support and Consulting in the UK and Europe. He is a Microsoft Most Valuable Professional (MVP) for SQL Server, sole author of the MCTS SQL Server 2008 Microsoft E-Learning courses and technical reviewer for the SQL Server 2012 BI Official Microsoft courses for Microsoft Learning. He is heavily involved with the SQL Server community as a speaker and an organiser of SQLBits, a Regional Mentor for SQLPASS, and he runs his own user group in Manchester, UK. He is also certified as a SQL Server MCDBA, MCTS and MCITP in all tracks.
This blog is part two of a series of blogs about implementing a SQL Server 2012 highly available BI environment. Part one can be found here
Three areas to consider when embarking on this journey are:
To set up a SQL Server 2012 highly available BI environment you should establish the hardware and software required to support the architecture. As stated in the first blog, you may have to liaise with another team in order to resource these servers. Understanding the requirements will help you have meaningful conversations with the right people. The software includes:
So what is the end game? In this example we are assuming that your organisation has a requirement to use Power View and/or PowerPivot for SharePoint. This software and hardware are used together to create the following environment.
Network Load Balancer Layer
The Network Load Balancer (NLB) layer is there to accept client requests for a SQL Server 2012 Business Intelligence feature, such as Power View or PowerPivot for SharePoint, that is hosted within a SharePoint 2010 farm. These requests are then directed to the first available web front end server to service the request. You should first establish with the network infrastructure team whether an NLB already exists. Making use of an existing NLB will speed up the deployment of this infrastructure, and it can be managed by the infrastructure team. Should you require an NLB, Windows Server 2008 R2 or a hardware-based solution can be used to fulfil this requirement. It is important to ensure that the NLB contains dual network cards that provide resilience should one card fail.
Conversations with the infrastructure team should also establish the IP address of the NLB. This is important as this address will be used to map to the hostname address that is decided for the SharePoint Farm which is discussed later in this article.
A second aspect that should be explored is the capability of the NLB to “probe” the web front end servers. “Probing” is a feature of an NLB that is used to check the availability of the servers that the NLB is directing requests to. The NLB will check to see if a server is available before forwarding requests to it. Should “probing” be employed on the NLB, make sure that the NLB does not just check for the availability of the Windows server. There could be a situation where Windows is working fine, but the SharePoint software installed on the web front end server has stopped. Check with the infrastructure team to see if the NLB can probe for running services such as the SharePoint Foundation Service so that there is a more realistic test of availability.
Web Front End Layer
The SharePoint Web Front End (WFE) is the layer between the applications that are stored in the SharePoint farm and the client requests to the applications. The WFE will direct requests for SharePoint services to the appropriate application server. A minimum of two WFE servers are required to facilitate a High Availability BI solution and the recommended hardware requirements for the WFE layer are:
Ensure sufficient space for the base installation and additional space for diagnostics such as logging, debugging and creating memory dumps.
The following software should be installed on each of the Web Front End servers:
Additional WFE servers can be added to support higher concurrent connections to the SharePoint Farm or to provide dedicated servers for specific services such as Reporting Services or SharePoint Search Service.
Although WFE servers can be set up on dedicated hardware, they are typically set up within Hyper-V. The key point when virtualising the servers is to ensure there are enough resources to host the virtual machines that are to be used within the virtualised environment. The virtualised environment can also have its own availability features to provide further resilience. Discuss these options with the team responsible for managing virtualised environments.
Application Layer

The application layer hosts the applications that are consumed by users of the SharePoint farm. From a SQL Server perspective this will include SQL Server Reporting Services, PowerPivot for SharePoint and Power View. For the purpose of this article, a minimum of two application servers will be required to provide high availability. This is achieved by duplicating services on each server. It should be noted that you can add additional application servers for reasons other than high availability, for example offloading an application that requires more power than other services onto a dedicated server. So you should consider which services require high availability and which do not.
The recommended hardware requirements for the application layer are the same as the WFE:
Note that you may require additional resources dependent upon the services you run on the application servers. As an example, the Visio Graphics Service in SharePoint may require additional memory should this be extensively used.
The following software should be installed on each of the application servers:
Like the Web Front End layer, the application layer can also be virtualised, and it should be resourced appropriately to ensure the application servers do not suffer from resource contention.
SharePoint Database Backend Layer
The SharePoint Database backend layer is used to host the SharePoint configuration databases for the SharePoint Farm. At the time of writing it is not clear whether these databases can be stored or supported using SQL Server 2012 AlwaysOn Availability Groups. As a result, the SQL Server instance on which the SharePoint databases are installed can use an Active/Passive cluster to ensure high availability. Keep an eye out for developments in this area.
Also ensure that the firewall ports are opened up to enable inbound communication from the SharePoint Service. As applications are installed and enabled within the SharePoint farm, SharePoint will create databases within the SharePoint backend layer.
While not strictly within the scope of the high availability BI infrastructure, you should make a note of all the Database Engine, Multi-Dimensional and Tabular SQL Server instances and the associated service accounts that Reporting Services, PowerPivot and PowerView will use as data sources for the creation of reports. This will become important should you enable Kerberos authentication and delegation within the SharePoint farm, discussed in part three of this series.
For a complete high availability solution, you should also ensure that the identified data sources are themselves configured in a highly available manner. Otherwise the data sources could be the source of a loss of service that reflects badly on the BI environment.
What’s in a name?
To use features such as SQL Server 2012 Power View, there is a requirement to set up a SharePoint 2010 SP1 Farm. If your organisation does not have one already, give serious consideration to the URL that users will type into a web browser to access the SharePoint environment. This is a decision that you will want to involve the business in as well. The name should reflect the purpose of the environment. As many of my engagements involve setting up a BI platform, customers will choose a name such as CompanyBI, an example being coeobi. Ultimately, this means that users will type http://coeobi in order to access the SharePoint 2010 platform.
From a technical perspective, the name must be registered in Domain Name Services (DNS) so that communication can happen seamlessly within your infrastructure. DNS is a name resolution service. Its primary purpose is to resolve a hostname to an IP address, similar to searching for the telephone number for an individual within a telephone directory. There are many other features that can be discussed about DNS. There is however a key point that is very important to the setup of your SharePoint name in DNS.
Ultimately the name you have chosen for your SharePoint Farm, in our example coeobi, must be mapped to the IP address of the Network Load Balancer. When discussing the creation of the DNS record, stress to the infrastructure team that the DNS record registered is either an A Resource Record that maps the name to the IP address if using IP version 4, or an AAAA Resource Record that maps the name to the IP address if using IP version 6. Experience has shown that a CNAME record is often created in DNS instead. This causes problems within the SharePoint farm in terms of authenticating users, as the Kerberos protocol does not work well with CNAME records.
So, in summary, the following points should be established to get the foundation of the high availability BI farm up and running:

In part three we will look at the impact of Active Directory and Kerberos on a high availability architecture.
Internet Explorer 9 – it’s blisteringly fast, elegantly designed and the most secure browser yet. Now you have two ways to win by voting for your favourite IE9 videos.
We’ve launched a competition to ask people what they would do to get IE9 at work and we’ll be posting the best examples of how far they’d go on TechNet from 28 May to 22 July. You’ll then be able to vote for your favourite between 9-22 July and be in with a chance to win a fantastic prize - £500 lastminute.com and £500 of Virgin Experience Days vouchers, plus a limited edition Xbox 360 320GB Star Wars Kinect Console!
In addition, we’ve produced three videos of our own to show just how far some people are prepared to go - if you vote for the one that ends up being the most popular, you’ll be entered into another prize draw. Head to TechNet now to vote for your favourite Microsoft video before 22 July and you could win £500 of lastminute.com and £500 of Virgin Experience Days vouchers.
The Microsoft Trustworthy Computing team has recently published the Microsoft Security Intelligence Report (SIR), which provides information about the changing global threat landscape of exploits, vulnerabilities, and malware.
This special edition of the SIR provides summarized information about how malware (and other forms of potentially unwanted software) has evolved over the last 10 years:
You can download either the full report or the summary report here: http://www.microsoft.com/security/sir/story/default.aspx#!10year
This month our newsletter was all about cloud services and learning. We brought you content on subjects such as InfoPath, Office 365, Windows Azure and the latest update on our Training and Certification programme.
We publish our newsletter every other week and it’s a great way to keep on top of the latest Microsoft news relating to your work. If you haven’t already registered for our newsletter you can do so by going here.
You may be aware that I am still keen on SQL Server and despite the changing nature of my role at Microsoft I still try and keep my hand in with the latest version. One thing I overlooked until I did SQL Relay was contained database security which is easy to use and can make your life a lot easier.
In previous versions of SQL Server, logins are stored in the master database whether you use Windows or SQL Server authentication. That’s fine until you want your database to move to another instance of SQL Server. For example, if you enable mirroring, your database could end up running on a secondary server, and if you haven’t got a separate process to move the logins across to the secondary this could cause problems if those logins have changed or new ones have been created.
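As an illustrative sketch of the traditional approach (the names [AppLogin] and [AppDb] here are hypothetical), the login lives in master while the user lives in the application database, which is exactly what breaks on failover:

```sql
-- Traditional (non-contained) security: the login is stored in master,
-- the user in the application database.
USE master
CREATE LOGIN [AppLogin] WITH PASSWORD = 'Str0ng!Passw0rd'
GO
USE [AppDb]
CREATE USER [AppLogin] FOR LOGIN [AppLogin]
GO
-- After a failover to an instance where [AppLogin] was never created,
-- the database user is orphaned. One manual fix is to recreate the
-- login on the new instance and remap the user:
-- ALTER USER [AppLogin] WITH LOGIN = [AppLogin]
```

It is this manual remapping step that contained databases are designed to remove.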
SQL Server 2012 fixes this with contained databases, making deployment easier by enabling you to put the logins into the application database. As well as making the database more portable, it also means that users connect to a specific database rather than to the whole instance, thus limiting what they can do to just that database. For example, they won’t even be aware that there are other databases in that instance.
For this to work you need to alter the properties of the instance, either in the options from the GUI:
or using the following TSQL:
EXEC sys.sp_configure 'contained database authentication', 1
GO
RECONFIGURE WITH OVERRIDE
GO
I would then configure my database for contained security, again either from the UI…
or in TSQL..
USE master
GO
ALTER DATABASE [DeepFat] SET CONTAINMENT = PARTIAL WITH NO_WAIT
GO
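As a quick sanity check (a small sketch using the sys.databases catalog view), you can confirm the database picked up the setting:

```sql
-- Verify the containment setting; containment_desc should read PARTIAL.
SELECT name, containment, containment_desc
FROM sys.databases
WHERE name = N'DeepFat';
```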
If I then create a contained user…
USE [DeepFat]
GO
CREATE USER [Deepfat] WITH PASSWORD = '********'
GO
I can do a simple test to see how this works. I open Management Studio and try and use this login to connect to my instance and it will fail..
because I also need to specify the database I am going to (by expanding the options and selecting the database to connect to) …
If I do that I will be able to connect but all I will see in management studio is my database..
This will work for all the other kinds of logins, such as Windows authentication. So it’s a nice, simple way to contain the enthusiasm of your users to wander around your SQL Servers, as well as making the database more portable.
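As a sketch of the Windows authentication case (CONTOSO\Jane is a hypothetical domain account), a contained Windows user is created directly in the database, with no login in master at all:

```sql
-- A contained Windows user: no corresponding login in master is needed.
USE [DeepFat]
GO
CREATE USER [CONTOSO\Jane]
GO
```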
If you want to try this you’ll need to use SQL Server 2012, and you might want to see what else is new in this release on the SQL Server track in the Microsoft Virtual Academy
I wanted to do something different for SQL Relay last week, as this is a community-driven event and I didn’t want to cover areas that the other experts on the Relay team would cover. This left some gaps, and I have had several follow-up questions via Twitter and e-mail which I want to cover in this post.
I covered several uses of the in-memory column-based technology in my session, two of which are in SQL Server 2008 R2 (PowerPivot for Excel and PowerPivot for SharePoint) and two of which are new for SQL Server 2012: Tabular Analysis Services and columnstore indexes. Columnstore indexes are part of the database engine rather than part of Analysis Services and are created much in the same way as other indexes:
CREATE NONCLUSTERED COLUMNSTORE INDEX [IX_CS_FactProductInventory]
ON dbo.FactProductInventory
(
    ProductKey, DateKey, UnitCost, UnitsIn, UnitsOut, UnitsBalance
)
They can speed up a query by 10-100x compared to a normal index; however, you can’t update a table with a columnstore index on it. You have to disable the index and then rebuild it after you have made your changes. There’s a good guide on its usage on the SQL Server wiki.
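The disable-then-rebuild cycle described above can be sketched like this (using the same index and table as the earlier example):

```sql
-- A nonclustered columnstore index must be disabled before the
-- underlying table can be modified...
ALTER INDEX [IX_CS_FactProductInventory]
    ON dbo.FactProductInventory DISABLE;
GO

-- ...perform the INSERT/UPDATE/DELETE changes here...

-- ...then rebuild the index to make it usable again.
ALTER INDEX [IX_CS_FactProductInventory]
    ON dbo.FactProductInventory REBUILD;
GO
```

This makes nonclustered columnstore indexes in SQL Server 2012 a natural fit for data that is loaded in batches, such as a data warehouse fact table, rather than for tables with continuous updates.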
I would also like to have covered contained database security, which, as the name suggests, means that the login credentials of users are stored in a given database rather than in master; for SQL Server authentication this means that the password is stored there as well. As well as making the database more portable, a user who only has credentials in that database can’t change to another database and has little or no permissions outside it. Therefore, when you connect to a contained database you need to specify the database as well as the server/instance (for example in Management Studio). I can see huge advantages in this approach for developers wishing to make applications cloud-ready, and ISVs can deploy their applications more easily.
I also got asked about crypto enhancements in SQL Server and these are:
I have to admit to glossing over this, as I am not primarily a developer and I rarely play with SQL Server Express. LocalDB is an installation option in SQL Server Express that allows a cut-down version of SQL Server to run against a local database. Note that LocalDB doesn’t support FILESTREAM and cannot be a merge replication subscriber; for more on this refer to LocalDB on MSDN.
I have now checked and there is currently no drill-down capability in Power View. If this is important to your business you can register your interest in this feature on Connect (the Microsoft portal for feedback on products).
SQL Server Guest Clustering/Always On
No matter how good your virtualisation stack is, if you want to make SQL Server highly available you need some form of solution whereby a virtual machine can hand off the running of a SQL Server database instance to another virtual machine for planned and unplanned downtime. Correct me if I am wrong, but VMware DRS simply doesn’t do this (neither does Hyper-V, so I am not trying to criticise VMware per se). Before the launch of SQL Server 2012 your choice was mirroring, and now that’s out you have AlwaysOn. This should work well on VMware as well as Hyper-V, as there is no dependency on shared storage and hence no need for iSCSI support in your VMs.
So hopefully that helps clear up a few things I had to skate over in the interest of time. Do ping me if I have missed your query, and I look forward to chatting with you all again at SQLBits or some other community event soon.
We have had range of content that is just waiting to be delved into when you have a spare moment. If you’re into cloud content then this week is the week for you!
Pop over to the insights video; it’s perfect for you. It’s also worth looking at our Cloud Ninja post, which provides the details of our certification programme. We’ve also provided a set of details around cloud-based labs and evaluation software.
As an added bonus anyone with children or friends and family in full time education will be interested in our DreamSpark article. If you would like to get this news direct to your inbox why not sign up to our newsletter, follow us on Twitter or like us on Facebook.
Steve Plank sat down with Jim Chapman, CEO of Retail Manager Solutions, who talks about their move from on-premises software to the cloud and then tells us how Windows Azure has changed their business. Sales cycle times have been dramatically reduced, customers save money and there is less work to do to get a solution up and running for one of their customers.
To find out more about Windows Azure and to take a free trial of the platform just pop over to the website: http://www.windowsazure.com/en-us/
Every couple of weeks we bring out the book of the fortnight in our TechNet Flash Newsletter. What you may not have noticed is that we usually offer it at a significant discount. So it’s worth signing up to the newsletter to get these discounts, as well as the server and tools news, direct to your inbox.
This fortnight we’re sticking with our cloud theme and the book of the fortnight is Working with Microsoft Office 365.
Take control of your small business communication and increase your productivity with Microsoft Office 365 cloud services. Through extensive walkthroughs, easy-to-follow procedures, and real-world tips, you’ll learn how to implement Office 365 for professionals and small businesses without having to be an IT expert. You’ll also gain valuable knowledge on configuring, deploying, and maintaining Office 365 no matter which plan you choose.
Discover how to:
Sign up to this month’s TechNet Flash Newsletter to gain access to our 40% discount offer! You can get the print version of this book with free UK P & P for £18.59 (Regular Price: £30.99) and you can also get the E-book version at a discount too!