Just after a new release of SQL Server, I often get e-mails and calls from folks with this question: “Can I upgrade from Community Technology Preview (CTP) x, Beta x, or Release Candidate (RC) x to the Released to Manufacturing (RTM) version?”
Unfortunately, no. Right up until the last minute, things are changing in the code – and you want that to happen. Our internal testing runs right up until the second we lock down for release, and we watch the CTP/RC/Beta reports to make sure there are no show-stoppers, and fix what we find. And it’s not just “big” changes you need to worry about – a simple change in one line of code can have a massive effect.
Even if you've done this before and things seemed to go well, you may still end up in a difficult situation because of it. I’ve dealt with someone who faced this exact situation in SQL Server 2008. Over a year ago they upgraded from a CTP to the RTM version (which is clearly prohibited in the documentation). Everything was working fine.
But then…one day they had an issue. They couldn’t fix it themselves, we took a look, days went by, and we finally had to call in the big guns for support. Turns out, the upgrade was the problem. So we had to come up with some elaborate schemes to get the system migrated over while they were in production. This was painful for everyone involved. So in general, it’s just not a good idea.
There is one caveat to this story – if you are a “TAP” customer (you’ll know if you are), we help you move from the CTP products to RTM, but that’s a special case that we track carefully and send along special instructions and tools to help you along. That level of effort isn’t possible on a large scale, so it’s not just a magic tool that we run to upgrade from CTP to RTM. So again, unless you’re a TAP customer, it’s a no-no.
This past week we released SQL Server 2008 R2 to manufacturing. This is a huge accomplishment for the team and our customers are anxious to get their hands on it. I came across one blog post that expressed disappointment that the only thing they could download was the evaluation edition – they couldn’t wait to get their hands on a fully licensed edition, which will be available shortly.
Rather than go into a laundry list of what’s in the release, here are links to a few of the RTM stories:
Even though I’m a Manageability Guy and there are some terrific manageability features in R2, the most important feature, in my opinion, is PowerPivot. PowerPivot is going to change everything about business intelligence for IT and information workers. Early in my career as an IT Pro, I designed a system that used Excel Pivot Tables loaded with massive volumes of sales data. Unfortunately I had to have tens of Pivot Tables spread out across an almost equal number of Excel Workbooks. Since there were so many files and tables, I had to build a monthly process for refreshing the data. Plus, if one of the users wanted a new view of the data, I had to craft it by hand for them. It’s an understatement to say this was a pain. If I had PowerPivot back then it would have greatly simplified my life and better supported the needs of my users. As you read up on PowerPivot you’re going to think it’s too good to be true. Take it for a test drive to convince yourself how truly remarkable this technology is and how it’ll transform the way you think about BI.
First, the reason for the release schedule was to properly align Microsoft's flagship database product with Microsoft Office, and with Microsoft's "cloud" strategy. One of the strengths of the SQL Server platform is that it works well with our other products, and in Microsoft Office 2010 and the latest release of SharePoint we have included an amazing array of Business Intelligence features for the "non-IT" worker. This means your business users can get at the data they need and want, and the IT department can still control and protect the data the way it should be. It's the best of all worlds.
But it doesn't stop there. As you may have heard, Microsoft is "all in", with a comprehensive cloud strategy. We have not only a complete cloud development platform (Azure) but also a relational database offering (SQL Azure) that goes beyond just hosting a SQL Server Instance in a rack somewhere. SQL Server 2008 R2 allows you to connect to SQL Azure like you're connecting to a local server. You now have capacity on demand, without losing any of your local systems or control.
And there's more - this release also includes the "Datacenter" edition, with support for up to 256 logical processors, data and backup compression (from SQL Server 2008) and the ability to use SQL Server with "Live Migration" - a virtualization technology that lets you move virtualized servers without downtime. These features, along with rapid adoption in the most mission-critical, enterprise-class environments means that you should consider SQL Server as a "Tier 1" application platform.
These are indeed exciting times for the data professional. Make sure you hit these links to learn more - your organization is counting on you as the data professional to know what's new and useful in the data world. You can also post any questions you have on this post - I'll try and make sure someone gets back to you:
SQL Server 2008 R2 Launch Site: http://www.sqlserverlaunch.com/
Official Microsoft Site for SQL Server 2008 R2: http://www.microsoft.com/sqlserver/2008/en/us/R2.aspx
If you want to learn more about SQL Server 2008 R2 this collection of videos is a terrific resource. There’s one covering an overview of the release and subsequent videos that drill in to specific feature areas. Each video is between 3 and 6 minutes long. The site also contains links to other resources and a listing of SQL Server 2008 R2 events across the globe.
I originally posted this on my blog but I think this audience might find it interesting. Yesterday I was having a conversation with the User Experience Program Manager on my team regarding two icons, where one is supposed to represent an override of the other. It’s a deceptively simple problem. First, it seems there is no standard glyph for override. This means we have to invent something. As we set off doing this we have many things to consider. First, the icons have to be the same but different. Second, the differences must have meaning; we want them easy to understand and remember. For an expert user this is pretty easy. They’ll use the system every day and quickly become comfortable with the icons. However, the novice user (or casual user) could end up getting lost and, worse, could make a bad decision if they don’t understand the differences between the icons. Most of the time it’s not readily apparent to customers or end users how much internal discussion goes into every detail of every feature; you just see the final result. If you could be a fly on the wall for a day, week, or month I think you’d walk away with a new appreciation of how difficult it is to develop great software and how much the SQL Server team loves doing it!
<Begin Original Posting>
In my day to day work I interact mostly with experts in the field of database technology and database administration. Every once in a while I get a gentle reminder that not everyone is an expert. Those reminders often come from forum questions from newbie and intermediate DBAs. When I come across one of these questions I’m thankful for it. It’s a great reminder that there is a spectrum of experience level out there.
Believe me, it’s hard work to make a feature easy for a newbie while at the same time powerful for an expert. One of the tricks we use to accomplish this is the script option on dialogs in SSMS. The dialog supports newbies (either new to SQL Server or new to the feature) whereas the scripting support gives them the option to learn what’s happening behind the scenes and build their expertise.
As we develop a new feature we’re intrinsically the resident expert. We take for granted certain knowledge of the internals. If we don’t challenge this we can easily end up shipping the wrong experience, which can result in a few bad things, such as delaying the adoption of a new feature. One of the easiest ways to avoid this trap is to conduct usability studies early and often. We recruit DBAs of all experience levels and have them run through various tasks. Sometimes we get it right (when this happens we like to high-five each other and talk about how awesome we are) and other times we have to go back to the drawing board and make corrections in the experience (these are far more somber moments).
As a DBA I’m sure you encounter newbie through to expert users (other DBAs, developers, and end users). Do you change the way you interact with each person based on their experience level? Next time you have a newbie come to you with a question, think about how you can help them become an expert.
Ross Mistry, a noted author on many Microsoft topics, has his latest book out, and it's on SQL Server 2008 R2. We're able to make that available to you for free in electronic form - just go here to pick up your copy: http://blogs.msdn.com/microsoft_press/archive/2010/04/14/free-ebook-introducing-microsoft-sql-server-2008-r2.aspx
One of the most exciting times in a product release cycle is when the launch festivities begin. In the product development team it means that we get to start the celebration! That is until we start work on the next release, which for us has already begun. The SQL PASS European conference in Neuss, Germany will feature a special full-day SQL Server 2008 R2 Launch Event. Donald Farmer is the keynote speaker! You can find more information about the event here.
I saw a post the other day that you should definitely go check out. It’s a cost/benefit decision, and although the author gives it a quick treatment and doesn’t take all points in the decision into account, you should focus on the process he follows. It’s a quick and simple example of the kind of thought process we should have as data professionals when we pick a server, a process, or application and even platform software.
The key is to include more than just the price of a piece of software or hardware. You need to think about the “other” costs in the decision, and then make the right one. Sometimes the cheapest option is the cheapest, and other times, well, it isn’t. I’ve seen this played out not only in the decision to go with a certain selection, but in the options or editions it comes in. You have to put all of the decision points in the analysis to come up with the right answer, and you have to be able to explain your logic to your team and your company. This is the way you become a data professional, not just a DBA.
You can check out the post here – it deals with Azure, but the point is the process, not Azure itself: http://blogs.msdn.com/eugeniop/archive/2010/03/19/windows-azure-guidance-a-simplistic-economic-analysis-of-a-expense-migration.aspx
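The kind of analysis described above can be sketched in a few lines. This is a hypothetical example with made-up figures, not the numbers from the linked post: the point is that once you fold ongoing costs like administration time into the total, the option with the cheapest sticker price isn't always the cheapest option.

```python
# Hypothetical cost/benefit sketch - all figures are invented for illustration.
def total_cost(license, hardware, admin_hours_per_month, hourly_rate, months):
    """Total cost of ownership: purchase costs plus ongoing administration."""
    return license + hardware + admin_hours_per_month * hourly_rate * months

# Option A: free license, but more hands-on administration each month.
option_a = total_cost(license=0, hardware=8000,
                      admin_hours_per_month=40, hourly_rate=50, months=36)

# Option B: pricier license, but far less administration time.
option_b = total_cost(license=15000, hardware=8000,
                      admin_hours_per_month=10, hourly_rate=50, months=36)

print(option_a)  # 80000 - the "cheap" option
print(option_b)  # 41000 - the "expensive" option wins over three years
```

The specific inputs will differ for every decision; the process of enumerating them and doing the arithmetic is what matters.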
OK, sort of.
I've been a fan of the Economist magazine since a friend of mine in Florida introduced me to it years ago. Imagine my shock when I check out the latest issue and find a story about - Big Data! This is the kind of thing we think about all the time in the SQL Server group here at Microsoft. Then when I read the article, I find this stunning bit of information: did you know that Wal-Mart handles more than 1m customer transactions every hour, feeding databases at more than 2.5 petabytes – the equivalent of 167 times the books in America’s Library of Congress?
And Wal-Mart uses SQL Server. And SQL Server 2008, no less - with Policy Based Management at the forefront of helping them track this all. Also, if you read the article, you'll see that Craig Mundie (Microsoft) and Eric Schmidt (yeah, the boss of Google) both sit on a presidential task force dealing with health care data.
This is heady stuff. Sometimes I get asked if SQL Server can scale, if it's really different than it used to be - and since I live and breathe the database world, I'm surprised by the question. But when I think how quickly we've moved into the upper tiers of the enterprise, I'm not as surprised.
Definitely check out the article here: http://www.economist.com/specialreports/displayStory.cfm?story_id=15557443. For more about a REALLY big SQL Server, check this out: http://msdn.microsoft.com/en-us/library/aa226316%28SQL.70%29.aspx. And Gizmodo (a must-follow blog) has a good story on Pivot (hit the link if you don't know what that is) here: http://gizmodo.com/5488641/. Awesome.
There has been a lot of buzz lately about "the cloud" and what it means for IT, and the organizations they serve. I just finished watching Steve Ballmer deliver Microsoft's vision for the cloud, wondering if it would be different than the offerings of other firms. It feels like it is different. So what is that strategy, and more importantly, what does that mean to the data professional?
First, let's recap what Steve mentioned for Microsoft's strategy. The word "cloud" can mean a lot of things, from hosting data (often called "somebody else's hard drives") to running software where the user logs into an application hosted on a web server somewhere (called Software as a Service, or SaaS). Microsoft actually has both. In fact, we've had things like Hotmail (which is a service) and the various Live offerings for quite some time. And we've had Xbox Live, which is a hosted environment that is kind of a hybrid - there's a "fat" or hardware client that talks to the main server out on the web. There's also an offering of a complete Microsoft Office, SharePoint services and even LiveMeeting in the cloud - nothing to install, runs on lots of platforms, and more (details here).
But there's something new. Microsoft has two new offerings, called Azure, and SQL Azure. Azure is more of a programming platform that can host data, and SQL Azure is SQL Server in the cloud. But SQL Azure isn't just a server you rent - we actually maintain the systems, handle the optimizations and so on. You create databases and database objects. You can take the data from SQL Azure and send it to a local SQL Server, and vice-versa. There's also the ability (through something called Sync Services) to replicate data between the two.
So how does the Data Professional meld in the "cloud" to our day-to-day systems? How do we use Microsoft's strategy in our own strategies? I've seen a couple of interesting uses so far from those that are trying it out:
Front-End Start, Back-End Archive: In this mode, companies spin up an application quickly (with no server build) into Azure, backed by SQL Azure. They are able to quickly deploy the app, and if it grows they can bring that data in-house or even bring a subset of that data locally for reporting, keeping a smaller data-set up in SQL Azure.
Start There, Stay There: Some of the companies I've seen are taking the application ideas and starting them in SQL Azure or Azure (or both). No capital expense, no hardware purchases, no installs, nothing to deploy. When the app is developed, you just point your clients there and off they go. It's a pay-as-you-use system, so the costs mimic the profit.
Start There, Come Here: In other cases, organizations want to start projects quicker than they can get hardware and software installed. So they follow the previous process, and then bring the code and the database in-house when they are ready.
There are other strategies, such as using the cloud as part of High Availability/Disaster Recovery or a remote-office access system and so on, but whatever your reasons, you need to spend a little time getting familiar with the cloud - and Azure and SQL Azure should be high on your research list. Here's some places to get started:
Azure Overview: http://tinyurl.com/y8s52vh
Windows Azure Training Kit: http://tinyurl.com/5vrt7q
Writing an Azure Program in 5 Steps: http://tinyurl.com/ye6chog
Azure Data Sync: http://tinyurl.com/ylsykfb
Future of Programming with Azure (video): http://tinyurl.com/ykqsbsf
Reference List for Windows Azure: http://tinyurl.com/yze9azr
In my last posting I spoke about changing databases. In this posting I want to jump into a little more detail on some things to improve the success of your migration project. I’ll spare you the lecture on planning – but remember, the best way to improve the success of any migration project is to ensure you spend time planning. So here are my tips for what I’ve seen work:
First, as part of the planning phase be sure to inventory all upstream and downstream dependencies. What systems feed into the one being replaced? What assumptions are these systems making about the system being replaced? This shouldn’t be too difficult but it’ll take a little time. Downstream dependencies are far harder and are going to require significant detective work. The main challenge here is degrees of separation. You should be able to find those systems that are one degree away by analyzing connections and SSIS/DTS packages. This won’t be straightforward but you should be able to account for the majority of cases. The farther you move down the dependency chain, the harder it’ll be and the longer it’ll take.
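A hypothetical first pass at that one-degree scan might just search a folder of SSIS/DTS package and configuration files for connection strings that mention the server being replaced. The file names, server names, and layout below are invented for the sketch; real packages would be read from disk.

```python
import re

def find_references(files, server_name):
    """Map each file name to whether its text references the old server."""
    pattern = re.compile(re.escape(server_name), re.IGNORECASE)
    return {name: bool(pattern.search(text)) for name, text in files.items()}

# Invented package/config contents standing in for files read from disk.
packages = {
    "LoadSales.dtsx": "Data Source=OLDSQL01;Initial Catalog=Sales;",
    "Nightly.config": "Server=Reporting02;Database=Marts;",
}

print(find_references(packages, "oldsql01"))
# {'LoadSales.dtsx': True, 'Nightly.config': False}
```

A real inventory would also pull login and connection history from the server itself, but a text scan like this is a cheap way to surface the obvious one-degree dependencies.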
Second, you’ll need robust test data. You won’t be able to account for every scenario so you need to think through the interesting cases. Hopefully you have a starting point, what you currently use to validate new releases of the existing system. You may need to expand this set to be more encompassing given you’re changing the entire system. When you create your test data think through any special processing windows, end of month, end of quarter, end of year, that may have different requirements. You may need different data sets to test each of these special processing windows. To create the test data you may want to use existing tools that capture data/transactions, some of which can obfuscate data.
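One small, concrete piece of that: making sure your test data actually lands on the special processing windows. A minimal sketch, using invented field names, that enumerates the month-end dates for a year and tags the quarter-end and year-end windows:

```python
import calendar
from datetime import date

def window_dates(year):
    """Last day of each month, tagged with quarter-end and year-end windows."""
    rows = []
    for month in range(1, 13):
        last_day = calendar.monthrange(year, month)[1]
        rows.append({
            "date": date(year, month, last_day),
            "quarter_end": month in (3, 6, 9, 12),
            "year_end": month == 12,
        })
    return rows

dates = window_dates(2010)
print(dates[1]["date"])                       # 2010-02-28
print(sum(r["quarter_end"] for r in dates))   # 4
```

You would feed transactions dated on these boundaries (and just before and after them) through the system to exercise each window's special-case logic.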
Third, running test data through the system isn’t enough. You will need tools to compare the results from each system to ensure the new system yields the same results. Depending on your situation you may have to build custom tools. This may not seem like a good investment but it’ll pay dividends.
Fourth, running test data through each system is necessary and once you’ve achieved parity an even better validation is to run the systems in parallel. The old system should stay as primary but each transaction should secondarily be routed to the new system. For a batch processing system this will be easy. For on-line systems you’ll need a capture/replay tool. Though usually minimal, these tools do add overhead to the system so be sure to plan accordingly.
Fifth, create a bridge between the new and old system. This means, once the new system is the primary for upstream systems you'll want to load the old system to lessen the urgency of migrating downstream dependencies. This is a temporary solution until all downstream dependencies are migrated to the new system.
Lastly, once all dependencies have been migrated to the new system you can unplug the old. The team has worked extremely hard to reach this milestone and you don’t want to let it pass by without celebration.
Every migration will be a little bit different. Some will be easy, taking as little as a weekend, while others will be extremely complex taking a year or more. The most important advice I can offer is break it into multiple phases and validate at the end of each phase before moving to the next!
I’m not sure why but there seems to be a lot of chatter lately about changing DB platforms. Information Technology Intelligence Corp (ITIC) recently ran a survey about DB migrations. You can find the result here (search the page for “ITIC Sunbelt 2010 SQL Server Survey Results”). At the risk of sounding like an advertisement Microsoft has provided a SQL Server Migration Tool Kit for some time. It aids the migration to SQL Server from Oracle, Sybase, MySQL and even Access. But this is never a slam dunk. It takes careful planning and testing to ensure the applications that use the data source continue to function properly.
I’ve never heard of a DBA waking up one morning and deciding to migrate from one DB platform to another. DB platforms are pretty sticky and there is almost always some catalyst for the change: license/contract renewal, change in supported platforms by a packaged application, vendor consolidation, etc. Let’s be honest, migrations are scary projects, and you want to be sure the drivers are well understood by the project team and the stakeholders. Then you want to be sure you take your time to plan and analyze the situation.
Over the course of my career I’ve worked on a few very large migration projects that involved more than just the database platform. These had large budgets and were extremely stressful. It would have been very easy to become complacent and ignore the need to migrate, but opportunity would have been lost. Most of the projects were successful (even the greatest baseball player of all time doesn’t bat 1.000). They resulted in cost savings, staying in a supported environment, or creating more flexibility for the business. The reward was worth the risk. I’d say the two key things that contributed to the success were: breaking the project into meaningful stages and a comprehensive set of regression tests (running against old and new and comparing the results).
I’ll have more to share about migrations in my next posting. Bottom line, don’t let FUD keep you from doing what you need to do to achieve your business goals.
Donald Farmer, he of the BI fame, has been on a roll lately. He and I have something of a competition going most of the time, but it's a "friendly" competition. See, my audience is usually more administration and programming focused; his more BI focused. And no one can deny that BI has a lot more eye-candy. Of course, I explain to Donald that I actually look better than he does, so we're even on the eye-candy part.
OK, enough of that. BI is a hot topic, and people are asking me about it quite a bit, much to Donald's amusement. Since he held a great chat on BI a while back, I thought I would share the link: The Business Intelligence Agenda for 2010. It's a free listen, but you do have to register.
OK, Donald. You owe me one.
Some things just really work well together. Peanut butter and chocolate. Abbott and Costello. Coffee and...well...anything. You get the idea. But there are some real advantages in using SQL Server 2008 (and of course the upcoming SQL Server 2008 R2 release) with Windows Server 2008 and higher. It's not just that they are both better than their predecessors, SQL Server actually takes advantage of the improvements in Windows Server 2008.
One practical example is in how Windows Server 2008 handles the infamous "drive offset". This is a small block size movement from the first part of the hard drive sectors - it's an internal thing - but it causes real issues with software that exercises the I/O subsystem, and makes its own calls there. Like SQL Server. In the past, the data professional had to follow a process called "Partition Alignment", and this had to be done when the system was set up. That's all now a thing of the past - with SQL Server 2008 and Windows Server 2008, this just happens.
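The alignment math behind this is simple to sketch. A partition's starting offset should divide evenly by the storage stripe unit size (64 KB is a common value). Older Windows versions started the first partition at sector 63 (63 × 512 bytes), which never aligns; Windows Server 2008 defaults to a 1 MB offset, which always does for common stripe sizes. A quick check, assuming those typical values:

```python
def is_aligned(offset_bytes, stripe_unit_bytes=65536):
    """True when the partition offset falls on a stripe-unit boundary."""
    return offset_bytes % stripe_unit_bytes == 0

print(is_aligned(63 * 512))   # False - the legacy sector-63 start (32,256 bytes)
print(is_aligned(1048576))    # True  - the Windows Server 2008 1 MB default
```

Misaligned partitions can cause a single logical I/O to straddle two stripe units, doubling physical I/O for some workloads, which is why this mattered so much for SQL Server data files.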
Another example is in how Windows Server 2008 deals with the "sliding TCP/IP window". This enhancement directly affects how fast SQL Server can send large frames of data - especially with Replication and large binary objects. At Microsoft we noticed tremendous speed gains just by moving to Windows Server 2008.
There are lots of other examples - from new virtualization and consolidation changes in both products to clustering enhancements, and now in SQL Server 2008 R2 the ability to run the "sysprep" utility after SQL Server has been installed.
You can read more about this "pairing effect" in this White Paper.
And be sure to check out John Kelbley's Post on the Windows Server blog where he also talks about ways that SQL Server and Windows Server work "Better Together".
Most data professionals are familiar with the terms "planned downtime" and "unplanned downtime". The first is painful to ask for, and the second is painful to explain. We strive not to have either. SQL Server 2008 and SQL Server 2008 R2 have introduced features, such as better recovery from corrupt pages in a Database Mirroring session and so forth, that attempt to keep the problems down. But "planned" and "unplanned" can also be used to describe our daily work - and we don't have a choice most of the time for how we deal with either kind.
Planned work is the task we do because it has a schedule, or at least *could* be scheduled. Backups, building a server, applying a service pack, reviewing the logs - all of these could be things that we can schedule. Looking at my "Tasks" in Microsoft Outlook, I have a lot of things that I have scheduled for today, this week, and this month. I never really close or complete some of them, I just change the due date to the next period of time when I need to deal with that task again.
Other work is very "unplanned". This kind of work can come from anywhere - from a co-worker who needs help, a manager with an emergency request, and most of the time from a server that has a problem, with anything from issues with Replication to a failed backup.
It's kind of difficult to meld these two together. When you're in the middle of building a server, it's hard to leave the server room, run downstairs to talk with an irate manager and then fix the issue with the system's database so that her application can still run. Even worse, for data professionals it's often a case of having to prove it *isn't* the database that is causing the performance problem - but that's another post.
There are, however, tools and processes that can help you deal with both planned and unplanned work. As I mentioned, I use Outlook for just about everything, since I can access it from many locations (even my Windows Mobile phone) and it combines my calendar, tasks, contacts and of course e-mail in one place.
Another tool I've come to rely on is OneNote. Of course you can just use notepad or a Word-processor to take notes, but OneNote is integrated with Outlook (and just about every other Microsoft program), it can "share" notebooks between teams and has a rich set of tags to help qualify what I need to know visually and quickly.
But tools aren't the whole story. First, I try to keep a level head during the interruptions. I've been a data professional for a really long time, so I've gotten over the panic stage. It also helped that at one point in my career I volunteered as an Emergency Medical Technician (EMT) on an ambulance, which of course *really* puts you in life and death situations. After that, a server crash isn't cause for complete panic.
I have developed a process to deal with both of these kinds of work. I plan what I can - trying to look out as far as possible, creating checklists, and coordinating with the rest of my team and my organization. I try to get the most important planned work done as soon as possible - first thing in the week, first thing in the morning. That way, if I get an unplanned event, as much as possible of the planned work is complete. In a way, I'm planning for unplanned work!
For the truly unplanned work, such as an emergency, I keep a OneNote page nearby with links that are categorized by the type of issues I think I might face. I document each step I follow to correct the issue, even if I have to wait until later. I try and keep the energy from all of the emotions low, and work on the problem as systematically as it will allow. Above all, I communicate constantly, letting the right people know what has happened, what is happening now, and what I'm doing about it. That OneNote document comes in really handy here.
So how are you doing it? How do you handle the work that comes at you from all sides?
We discussed consolidation and virtualization - one of the hottest topics customers raise with us these days. CTOs are typically looking to control costs, but cost savings are hardly automatic: you must choose the right strategy. In some (notorious) cases, I have seen CPU utilization actually drop on consolidated boxes; typically because the servers were IO-bound.
Given these potential pitfalls, how are you to go about setting a strategy? A great starting point is our server consolidation guidance, including a flowchart to walk you through the right choices. For example, for many folks considering virtualization, security is a real concern - a flowchart helps you choose between virtualization, and instance or database consolidation. Similarly, for high availability, manageability and other potential concerns, we can walk you through the options.
Other resources include our all-up Virtualization Brief - at two pages it's certainly brief, but it will give you a pretty good high-level overview of the options and benefits for SQL Server, Microsoft SharePoint and Microsoft Exchange. We also have a rather good customer case study with Avanade, where you can see how the right choices enabled them to cut physical servers from 136 to 20, while increasing their database performance by 50%. (I can tell you now, they were not IO bound!)
In our video conversation, Ted also makes it clear that partners will be important in guiding customers. Our generic guidance and tools are invaluable for understanding our architectures and capabilities, but partners are especially well placed to get into specifics, whether hardware or integrated systems.
This conversation about virtualization and consolidation is one we often have directly with customers: it was great to be able to have it with Ted before the cameras for a wider audience.
In a recent episode of The Big Bang Theory, Sheldon was excited to spend the evening reinstalling the OS and apps on his laptop. My wife asked me if I would find that fun. I had to admit that it wouldn’t suck, but there were a number of other things I’d rather do. Last week I reviewed a soon-to-be-published report on the cost of administering various database platforms, including SQL Server. While there is lots of interesting data in the report, what I found most interesting was the breakdown of where DBAs spend their time. Since the report hasn’t been published I can’t give out any specific findings, but I will share this: according to the report, the activity DBAs spend the most time on is deploying new database servers, accounting for about 40% of their time. I almost fell out of my chair when I read that statistic. Let me say that again: about 40% of a DBA’s time is spent deploying new database servers. Really? See the end of this posting for the activities included in deployment.
It’s imperative that managers of DBAs perform a regular audit of where their team spends its time, then use this data to drive decisions on where to standardize and/or automate tasks, with the overarching goal of reducing the time spent on each one. In general DBAs make a good salary and the job requires a specialized skill set. Ensuring your DBAs are spending their time on the most important tasks is not only good for your business, it’ll be good for the morale of your DBA team. In my experience DBAs don’t mind working hard; they just want to be working on important and interesting projects, and I don’t know that deploying database servers qualifies.
Activities included in deploying new database servers:
* If you find that your team is spending significant time deploying database servers you should take a hard look at the sysprep feature new in SQL Server 2008 R2. It just might be the silver bullet you’re looking for.
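The audit itself can start very small: log hours against a handful of activity buckets and see where the time actually goes. The categories and numbers below are invented for illustration (chosen to echo the report's roughly 40% deployment figure), not data from the report:

```python
def time_breakdown(hours_by_activity):
    """Convert logged hours per activity into whole-number percentages."""
    total = sum(hours_by_activity.values())
    return {activity: round(100 * hours / total)
            for activity, hours in hours_by_activity.items()}

# Hypothetical month of logged hours for a DBA team.
logged = {
    "deployments": 64,
    "backups/restores": 24,
    "performance tuning": 40,
    "security": 16,
    "user requests": 16,
}

print(time_breakdown(logged))
# {'deployments': 40, 'backups/restores': 15, 'performance tuning': 25,
#  'security': 10, 'user requests': 10}
```

Even a rough breakdown like this makes the standardize-or-automate conversation concrete: the biggest bucket is the first candidate.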
Well, just look again. $250 million is a helluva partnership investment, even for giants in the industry. It's always the case that press releases tend to be somewhat "glossy" - frankly, they all sound the same to me - so again, it's easy to overlook the details. Better than the press release, look here, to some of the applications that are already available, some of the case studies that already prove the value of the collaboration, and some of the details of hardware, software and services that are being integrated: http://bit.ly/5VTD4k
From a purely personal viewpoint, I have to say that the people I work with at Microsoft and HP are genuinely excited about the extended partnership. My friend at HP, John Santaferraro, has been tweeting like crazy! For us it's not just marketing, and that's particularly true in the database and business intelligence fields. SQL Server is the most rapidly growing database platform, our BI is making strides in the market, and our partners at HP have also made some very compelling business intelligence investments.
So what is it, that we are offering?
For one thing, pre-configured, packaged solutions for OLTP, BI and DW workloads for different sizes of business. For SQL Server, this is great. Today, our customers see the cost-effectiveness, ease-of-use and power of the SQL Server and Microsoft BI platforms, but they often see only that. We are, after all, a platform company (having, I suspect, more platforms than the Jackson Five.) The new solutions build on the platform with HP's services and hardware experience, which are, of course, first class. If this were just a case of packaging up a services, software and hardware offering to make it easy to market, I would be unimpressed myself, but there is more to it. We are developing tools and services specifically for these offerings - from Microsoft, tools to make virtualization easier; from HP, customized BI professional services in information governance, master data management and so on. These professional services from HP are critical - they have over 11,000 certified Microsoft Professionals, so building out a portfolio of service offerings specifically for them greatly increases our clout.
For the mid-market, I can see some of our other partners being concerned at first reading of the announcement. However, I see good news for them, too. Naturally, the noise has been about the big things HP and Microsoft can do together, even if somewhat exclusively, but the channel remains very important to us. ("Super-important" as Microsoft execs are wont to say.) After all, we have 32,000 HP and Microsoft channel partners.
They will see much larger investment in our marketing programs - something like ten times the current spend, I believe. The investment will go into bundled software and hardware packages that should reduce sales cycles, new financing options to make integrated solutions easier to acquire, and - hugely important and hopefully worthy of a cheer - integrated support from dedicated field engineers.
I hope this gives some impression of why we are excited by the announcement. Of course, the proof of the pudding will be in the eating, and we'll be watching over the next year and more for the restaurant reviews of this particular dessert to come rolling in. I'm looking forward to them.
Meanwhile, it's time for me to get out of my hotel room, away from this laptop, and out into the Arizona sun. Even after so many visits, I'm not going to take it for granted. Look at the HP / MSFT announcement in that light.
For the last few days my inbox, Twitter and Facebook feeds have been full of advice about which words I should stop misspelling. To be fair, in English, I have relatively little problem, and any misspellings I do make may be sloppy, but rarely result in misunderstandings. On the other hand, there are some T-SQL usages that really do cause problems, for myself and others ...
SELECT
This is perhaps the first word we learn as T-SQL newbies, but there is still some confusion. Some people spell this with a star on the end - this is easy and natural, but it is often wrong and will only help you if you are either too lazy to write out a list of columns, or too intellectually incurious to care about performance.
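The difference is easy to demonstrate. A minimal sketch, in which the table and column names are hypothetical:

```sql
-- Reads every column, even ones the caller never uses,
-- and ties the query's shape to the current table definition.
SELECT * FROM dbo.Orders;

-- An explicit column list reads less data and lets the optimizer
-- satisfy the query from a narrower (covering) index.
SELECT OrderID, OrderDate, CustomerID
FROM dbo.Orders;
```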
AUTO_SHRINK
Actually, this is not so much a misspelling as a weirdly archaic word that is simply not acceptable in polite DBA society. Using it will fragment your indexes and your chances of social and professional success with equally devastating effects.
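If you suspect the word has crept into your vocabulary, a quick check and cure might look like this sketch (the database name is a placeholder):

```sql
-- List databases where auto-shrink is currently enabled.
SELECT name
FROM sys.databases
WHERE is_auto_shrink_on = 1;

-- Turn it off; a deliberate, scheduled shrink (if ever) is preferable.
ALTER DATABASE [MyDatabase] SET AUTO_SHRINK OFF;
```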
IN
I know what you're thinking. How could someone possibly misspell IN? However, as with English, there are some weird and wonderful things about T-SQL. In some circumstances you would be better to spell IN as EXISTS (especially when preceded by NOT.) The problem is that IN and EXISTS handle NULL values differently.
Jens Suessmeyer, from Microsoft in Germany, came across the problem and gives a good example here: http://bit.ly/520pQM
Nor is this peculiar to T-SQL: it's the same for those in Oracle-land too: http://bit.ly/6fMRP5
In practice, I nearly always come across this problem when someone has changed a column to allow NULLS - they can then discover to their consternation that queries which "worked" previously now return no rows at all.
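Here is the trap in miniature, sketched with two throwaway tables:

```sql
CREATE TABLE #orders    (cust_id INT NOT NULL);
CREATE TABLE #customers (cust_id INT NULL);

INSERT INTO #orders    VALUES (1), (2);
INSERT INTO #customers VALUES (1), (NULL);

-- Returns no rows at all: comparing against a list that contains
-- NULL makes NOT IN evaluate to UNKNOWN for every row.
SELECT o.cust_id
FROM #orders AS o
WHERE o.cust_id NOT IN (SELECT c.cust_id FROM #customers AS c);

-- Returns cust_id = 2, as intended: NOT EXISTS ignores the NULL row.
SELECT o.cust_id
FROM #orders AS o
WHERE NOT EXISTS (SELECT 1 FROM #customers AS c
                  WHERE c.cust_id = o.cust_id);
```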
REPAIR_ALLOW_DATA_LOSS
You could be forgiven for using this strange spelling, as the word has indeed found its way into the language in this form. Although this spelling is correct, it is pronounced REPAIR_ENSURE_DATA_LOSS, as you will indeed lose data if you use it. Please note that using this word in the same sentence as "msdb" is a desperate faux pas, resulting only in pain and embarrassment.
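For the record, the safer sequence is to let DBCC CHECKDB report the minimum repair level first, and to treat the destructive option as a last resort once restoring from backup has been ruled out. A sketch, with a placeholder database name:

```sql
-- Report corruption and the minimum repair level required.
DBCC CHECKDB (N'MyDatabase') WITH NO_INFOMSGS;

-- Last resort only: this option can delete data to restore consistency.
ALTER DATABASE [MyDatabase] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DBCC CHECKDB (N'MyDatabase', REPAIR_ALLOW_DATA_LOSS);
ALTER DATABASE [MyDatabase] SET MULTI_USER;
```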
And finally ...
I really could not let this article pass without recording my favorite misspelling, even though it has nothing to do with T-SQL. I once visited a financial services customer who had, just that morning, discovered a small typo in code that was re-implementing a legacy application. After a whiteboard session, where the notes had been left scrawled in an awkward hand, a developer had boldly sallied forth and coded up using RAND in place of ROUND. The result was a series of credit forecasts using a random number with the customers' closing balances as the seed, rather than using their rounded balance. Strange to say, nobody had noticed for ... well, let's just say for long enough. Even stranger, when the error was fixed, several of the financial wonks complained that the numbers were no longer so useful!
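For the curious, the bug in miniature:

```sql
-- What was intended: the closing balance rounded to two decimal places.
SELECT ROUND(1234.5678, 2);

-- What was shipped: a pseudo-random float in [0, 1),
-- seeded by the balance.
SELECT RAND(1234);
```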
I think I may have set myself an all-but-impossible task: to choose ten bloggers who write about SQL Server, and who have been outstanding in the last year. Nearly impossible, not because I can't find ten, but because there are so many more worthy of recognition. In addition, many of those I will not be including are friends and colleagues, so the task may be as thankless as it is difficult.
Nevertheless, having set myself the goal, I may as well get on with it. My method was simple enough. I started with those blogs I subscribe to, and, of those, found the ones I bookmark most often. These were neatly objective measures, but I was still left with about 20 blogs to consider. Then I had to find some more subjective criteria: are the blogs helpful, insightful, original, well written, newsworthy, and so on. I excluded official Microsoft blogs, focusing instead on the community blogs, so there is only one Microsoft team member on the list.
Here then are My Top 10 for this last year. To be fair to the others who so narrowly missed out, I'll publish a longer blogroll later of those who I consider to be essential reading. For now, let me know what you think of my top ten, in strictly alphabetical order.
Bob Beauchemin
http://www.sqlskills.com/BLOGS/BOBB/
Like most of the bloggers in this top ten, Bob is an active and excellent speaker and writer. Bob is notably excellent when writing about data access and programmability, areas which require both sound understanding of the database technology and the ability to work with, and explain, the latest programming models. If you're an application developer working with SQL Server, then Bob is essential reading ... and don't miss his conference sessions either!
Rob Collie
http://powerpivotpro.com/
Rob is the only Microsoft employee on My Top 10 list because his blog is really very independent and hosted with a quite separate presence and identity. Rob has set out to create a compelling blog for the new PowerPivot product and he does a great job synthesizing his years of experience in the Excel world with his detailed knowledge of the PowerPivot technology. Even better, Rob presents compelling, easy-to-understand scenarios with a great sense of humor. If you're interested in PowerPivot, you need to follow this blog.
Kasper de Jonge
http://business-intelligence.kdejonge.net/
This blog has been a revelation to me this year. Kasper works in the Netherlands and blogs on BI topics. One outstanding feature of his blog is his use of copious screenshots. Often, with a new product just out in public like PowerPivot or the new Report Builder, Kasper sedulously records his experience with setup and first impressions, all captured with useful screens and comments. Even I learn stuff about setting up our BI products here! It's not just about installation either: Kasper explores many new features with the same careful approach.
Andy Leonard
http://sqlblog.com/blogs/andy_leonard/default.aspx
I really enjoy Andy's blog, not just for the technical posts (especially about SSIS), but for the way he writes with a perceptiveness and passion about the community of SQL users. Andy persuades, cajoles and encourages SQL Server users to get out and be part of something bigger: whether blogging, or simply attending a conference or event. Even better, Andy is always very clear about how community support fits in to an often challenging and difficult career path.
Sean McCown
http://www.infoworld.com/blogs/sean-mccown
Now this is a kick-ass blog. In fact, you often get the impression that Sean's key motivation in sitting down to blog for the day is just to kick some ass. But he chooses his victims well! Whether it is Microsoft's product teams, officious auditors, or even himself (for delivering a bad presentation), Sean is typically forthright and on target. Sean is also, like Andy Leonard, excellent at supporting DBAs in their career and personal development, with advice in the last year on technical skills, interview techniques and even office politics!
Adam Machanic
http://sqlblog.com/blogs/adam_machanic/default.aspx
The sheer breadth and depth of Adam's posts are testaments to his knowledge of SQL Server. I have only two things to say: read this blog, and try every code sample Adam posts. You'll be better for it.
Paul Randal
http://www.sqlskills.com/BLOGS/paul/
Paul is a former Microsoftie who often draws on his detailed understanding of the relational engine's internals to give unique insights on his blog. Paul is an expert on DBCC, and his blog is simply essential reading if you are interested in recovery or repair - its title is, in fact, "In Recovery." Even better, if you want to avoid recovery and repair, you need to read this. Paul also writes very entertainingly, which really helps with the often deeply technical matter.
Jamie Thomson
http://sqlblog.com/blogs/jamie_thomson/
Jamie's previous blog used to be called "SSIS Junkie." I don't think he has quite kicked the habit, as his technical posts about SSIS are always excellent, but there is certainly a wider range of interests on display here, from data warehousing to SQL Azure.
Kimberly Tripp
http://www.sqlskills.com/blogs/kimberly/
Kimberly is inimitable, both on stage and in her blog. I wouldn't know where to start recommending her work - and if I started I could hardly stop. Let me take one example. Want to know about indexing? Read this blog - for the examples, the technical detail, the good humour, and the sheer practicality of the advice. And that's only one topic. Read the blog, every post.
Chris Webb
http://cwebbbi.spaces.live.com/
Chris is an OLAP guy, and if you know OLAP (whether in the form of SQL Server Analysis Services or any other vendor) you really should subscribe to Chris's blog for its breadth. For those specifically in the SQL Server sphere, Chris's posts on the MDX query language, and more recently on the use of PowerPivot DAX, are not only practical and perceptive, but help to stretch your skills and cover challenging scenarios.
So that's the list. What do you think? Anyone I missed out that you feel really needs to be there? And if so, who would you remove? I'd be fascinated to hear from you.
In the coming year, you'll see a new release, SQL Server 2008 R2, which is, we hope, full of goodies for your DBAs. As ever, there will be some features immediately relevant to your business, some that will enable you to do new things over time, and some that you may not plan to use but which may yet be of interest. So, in general, I recommend using that extra training time to do three things: extend your current skills; expand your range with new skills; and explore and incubate some experimental projects.
Extend your current skills
I have rarely met a DBA with time on their hands, and I know that in your business they now manage more physical servers than ever - and, with virtualization and consolidation, more instances and more databases, with more data, than ever. So, if I may suggest one feature that you need to learn thoroughly in SQL Server 2008 R2, it is our multi-server and application management improvements. There is a great whitepaper from our team on this topic here: http://bit.ly/6yVmOL. You'll find this really is an essential feature to save money and to manage a healthy environment in the coming years.
Expand with new skills
When we first talked about SQL Server together, back in 2005, we remarked then on how the scope of the DBA's role was changing. Not only did they manage databases, but with SQL Server your DBAs were also managing reporting systems, OLAP servers, and the ETL process. I know that your DBAs thought this was, and is, a good thing. They not only "owned the data" but all the surrounding services that integrated, enhanced and gave meaning to the data. In SQL Server this was relatively easy, as the development and management environments for all these services are highly integrated. However, there is another area which I suggest your DBAs should delve into: SharePoint. That is a new administration experience, so there is more to learn. But it will be a worthwhile investment of time. Here's why ...
Not only is SharePoint our fastest growing server product, it is also the heart of our collaboration platform; and, as such, SharePoint is fast becoming critical to Business Intelligence. You know about PowerPivot, of course. (See www.powerpivot.com for more on that; in particular, try the hands-on lab.) I expect that in your organization, your adoption of PowerPivot will be departmental - I don't think you can hold the marketing guys back! In this case, I can see your DBAs getting very involved, not only provisioning data, but managing the infrastructure. There are a couple of great blogs out there already exploring PowerPivot for SharePoint: www.powerpivotgeek.com and www.powerpivottwins.com - and if you want to give your DBAs a head start on SharePoint, there are excellent training links on Arpan Shah's blog: http://bit.ly/5Ez7xT
Explore and incubate
Finally, I always think it is good to experiment. Even if you have no immediate plans to use a technology, learning more about it can often uncover useful cases, and it prepares the team well for the day when the CEO, fresh from reading his latest business magazine, asks "Shouldn't we be doing this?" This year, he'll be asking about the cloud. I can just about guarantee it. Fortunately, SQL Azure, the first significant relational database technology for the cloud, is easy to experiment with - in fact, the development and admin tools are basically the same as the ones you are used to. See the team site for more information: http://bit.ly/7zdfAJ. I'm not suggesting yet that you port any applications to the cloud - but we'll help all we can if you want to; just let me know. However, I am sure you and your team will find plenty of opportunities to host experimental applications and incubations. We'll be pleased to help with that too.
So, in short, those are my recommendations for those extra training hours in 2010. It is going to be a good year for SQL Server, and it's great to have you aboard.
It's getting close to that time of year when you're going to start seeing lots of "the year in review" specials on television. I started in my new role working with our customers last December, so it seems only fitting that I take a moment and go over some of the highlights in the SQL Server community in the last year - what I've seen, what I've learned, and what has hit the headlines. I have a wonderful vantage point, working with our partners, our clients, and with the SQL Server team here in Redmond. I've traveled to several states, participated in lots of user groups, presentations and conferences, and I've learned a lot about how people use SQL Server in their organizations and what we've done to make that a better experience.

Most companies started the year with a big emphasis on cost-saving and getting the most value out of SQL Server. I've helped lots of organizations figure out how they can migrate applications to SQL Server, and how to consolidate those servers onto fewer Instances - saving on hardware and software costs. This is a two-edged sword - you have to plan these migrations and consolidations carefully, and understanding the right process to use (database stacking, Instance stacking and virtualization) is vital to keeping the organization happy. Microsoft announced it would support using SQL Server in a virtualized environment, and also began work on SQL Server 2008 R2 - which has even more options for consolidation.
And some organizations wanted even more flexibility, so 2009 saw the release of SQL Azure, the "database in the cloud". Each month I've seen more and more chatter on this offering, from small organizations that don't want to manage a server all the way up to huge companies that want the flexibility to rapidly create, deploy and manage their databases. Far from removing the need for a DBA, data professionals are finding that their role is to help with their organization's data strategy, explaining how and when to use these kinds of offerings to reach the business goal.
This year has also been called "the year of the community", with the SQL Saturday movement becoming wildly popular, as well as an amazing turnout at the PASS conference. Almost 40% of the attendees at PASS this year were first-timers - and from the comments I heard, it won't be their last time either. At PASS the SQL Server Most Valuable Professionals (MVPs) wrote a book (which I'm still reading) called "Deep Dives" - with all of the proceeds donated to War Child, an international charity. They literally took Bill Gates at his word when he said to "give back". Amazing.
Along with consolidation, many data professionals are focusing on performance tuning. They need to get the most out of the systems they already have. I predict that the consolidation efforts will continue, as well as the emphasis on perf tuning. I've taught several performance tuning seminars this year, and I've been asked to do several more next year as well.
So where will 2010 take us? Well, a new release of SQL Server, Visual Studio, new modeling languages, developer tools and administration needs. Look for a bigger emphasis on PowerShell - it allows you to manage almost any Microsoft product, and talks equally well to other platforms and database systems. I also think that you'll see a pent-up demand for new projects as inventories run low and companies ramp up to meet demand. So buckle in. It's going to be a busy time.
One evening last week I was hanging out with a friend who is a professional photographer. As so often happens on such occasions, we whiled away some time comparing new toys, for we both had new cameras. Mine is small and perfectly formed (an Olympus, if you must know) and he had a high-end Nikon of such weight that I suspect it is mostly recommended by chiropractors looking for new work. However, my friend always carries a small point-and-shoot in his pocket, because, as he always reminds me: "The best camera is the one you have in your hand." It's no use having a great camera at home, if it's not with you when an opportunity arises; and, when the opportunity does arise, the camera to hand is indeed best.
Last week I also had six separate customer briefings in the Executive Briefing Center at Redmond. Now that the SQL Server and Office teams have just released their November CTPs, these were great opportunities to advise customers on what is coming in our next release and how to prepare for it. PowerPivot is far and away the most popular feature, but I also had some surprising discussions around Master Data Services, our first foray into Master Data Management.
What surprised me was that two of my customers, independently, said "We have needed Master Data for a while, but we could not find tools that we like. We'll certainly wait for Microsoft's solution."
Now, I'm flattered that they want to see Microsoft's offering, but really, if you have problems with master data you need to be looking at a solution, tools or no tools. (If you're new to the concept of Master Data Management or MDM, William McKnight has a couple of great articles, here and here.)
Fortunately, even these particular customers can get started with MDM in the November CTP. For all SQL Server Enterprise Edition customers, Microsoft's MDS will be the tool to hand for Master Data, and therefore, as the photographers would say, the best tool for the job. Indeed, MDS is quite a comprehensive toolset, featuring: a master data hub based on the SQL Server relational engine; a thin-client stewardship portal for managing master data entities, and all their related hierarchies and versioning requirements; workflow integration and extensible business rules; and role-based security.
During the briefings, we all agreed that we would start to review the tools technically, and to review the customers' systems and governance needs, as a matter of urgency.
To understand just how urgent the need is, I must return to my photographer friend. After comparing notes on cameras, our conversation turned to his finances. In particular, he was fuming about the confused, duplicate and sometimes outdated information he was getting from a service provider following a merger: a classic master data problem. I am sure you have guessed already: he is an unhappy customer of one of my customers, and I know the advice he would give them. "The best tool for the job is the tool you have. Just get on with it!"
A few countries around the world have a day set aside for giving thanks - and some do it all year long. We stop to give thanks to those who have made us what we are, and those in the past and present who have given us the benefits that we enjoy. From teachers to family, we owe them a lot.
I've noticed that several technology specialists, especially those that work with SQL Server, go even further. They practice "Active Thankfulness", donating their own time, money and effort to give back to the community. The Professional Association of SQL Server, or PASS, is staffed with volunteers, and at the recent PASS conference you could see this spirit of giving all over the event. People lent a helping hand setting up, organizing and staffing many different activities, including those to help folks with their SQL Server questions. Microsoft donated the entire Customer Advisory Team (CAT) to the event, answering questions and delving deep into technical issues for free. Many of the SQL Server product team members came over to staff the "Ask the Experts" area, and other database professionals gave of their time to handle the "Birds of a Feather" tables - again, all free, all volunteer.
And then there were the SQL Server "Most Valuable Professionals", or MVPs, who donated not only their time but an amazing amount of effort to create a huge book called "Deep Dives", with all of the chapters and even the production costs donated. The money raised by this book goes straight to "War Child International", a charity that aids children whose lives have been devastated by war. In the introduction, the MVPs explain that the impetus for this book was a response to Bill Gates' challenge to "do philanthropy where you are." And that's good advice - each of us has learned from someone, whether that's a college teacher or a data professional who took the time to show us the ropes. You might not be able to find that person, but you can pay it forward by helping someone else. It can be volunteering at a SQL Saturday event, helping out at your local user group, or volunteering your time to help a charity with their technology needs. When you practice this kind of active thankfulness, you'll find the rewards far outweigh the level of work you put in.