With SQL Azure, you're no longer in charge of the Instance of SQL Server that you're running on. You're dropped into your environment at the database level. Also, the size of those databases is more restricted than you might be used to on your on-premise system (keep in mind that SQL Azure has a set of use-cases, and those aren't always the same as for an on-premise installation of SQL Server or another RDBMS). For these two reasons, you want to start "thinking at the database level". In one case that means shifting your thinking down a level, and in the other, up a level.

First, because you enter the system at the database level, you don't need to control Instance-level settings, or work with security and so on at that higher level. That means you should focus "down-level" on the database settings you do control.
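Security is a quick example of that split. Here's a rough sketch (the login, user, and password names are just placeholders): the login gets created in the master database, but the user and its role membership - the parts you manage day-to-day - live down in the user database:

-- Connect to the master database to create the server-level login
-- (the login name and password below are hypothetical placeholders)
CREATE LOGIN ReportingLogin WITH PASSWORD = 'Str0ng!Passw0rd#2010';

-- Then connect to your user database and work "down-level" from there
CREATE USER ReportingUser FOR LOGIN ReportingLogin;
EXEC sp_addrolemember 'db_datareader', 'ReportingUser';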

Second, because of those size limits, you need to think differently about your strategies for dealing with "big" data - in this case, as of this writing, the 1-50GB databases you can create on SQL Azure. In an on-premise SQL Server installation, you can partition large sets of data by breaking them out using a Partition Scheme and a Partition Function - more on that here, with a great explanation of partitioning in previous versions of SQL Server. You also have access to FileGroups, which point to files that can be placed on different physical devices. In SQL Azure, you can think of the database itself as the container, the way tables are the containers in on-premise systems - in effect, "thinking up" to the database level.
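To make the contrast concrete, here's a rough on-premise sketch (the table, filegroup names and boundary dates are just examples, and the filegroups are assumed to already exist): a Partition Function defines the boundary values, a Partition Scheme maps each range to a FileGroup, and the table is created on the scheme:

-- On-premise only: spread a table across filegroups by year
CREATE PARTITION FUNCTION pfOrderYear (datetime)
    AS RANGE RIGHT FOR VALUES ('2009-01-01', '2010-01-01');

CREATE PARTITION SCHEME psOrderYear
    AS PARTITION pfOrderYear
    TO (fgOrders2008, fgOrders2009, fgOrders2010);

CREATE TABLE dbo.Orders
(
    OrderID   int      NOT NULL,
    OrderDate datetime NOT NULL
) ON psOrderYear (OrderDate);

None of that machinery is there in SQL Azure - the unit you get to work with is the database itself.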

That database-level approach actually has some advantages - by placing the data sets you might otherwise partition by date, customer ID and so on into separate databases, each of those databases runs on a different logical system, gaining you CPU, memory and so on. My friend Wayne Berry has an excellent series of articles dealing with this, starting here; he develops a strategy for partitioning in SQL Azure in that series.
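As a rough sketch of what that might look like (the database names, sizes, and edition are placeholders, and the CREATE DATABASE options reflect SQL Azure as of this writing), you create one database per customer range and let the application route connections by customer ID:

-- Run against the master database on your SQL Azure server;
-- each database gets its own resources (names and sizes are hypothetical),
-- and each CREATE DATABASE runs in its own batch
CREATE DATABASE OrdersCustomers_A_to_M (MAXSIZE = 50 GB, EDITION = 'business');
GO
CREATE DATABASE OrdersCustomers_N_to_Z (MAXSIZE = 50 GB, EDITION = 'business');
GO

You then create the same schema in each database, and the application picks the right connection string based on the customer it's working with. The trade-off is that you can't lean on cross-database queries, so that routing logic has to live in your application or data access layer.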

The point is that using SQL Azure requires understanding the way it holds and processes data. It's ideally suited to data in the sub-50GB range, but that doesn't mean you're without options for working with larger sets.