Official News from Microsoft’s Information Platform
Machine Learning Blog
As we emphasized in previous blogs, other in-memory technologies on the market today either require you to migrate the entire database into memory or are separate in-memory database products, which means learning new APIs and dealing with application compatibility issues. Integrating in-memory technology directly into the SQL Server engine might sound trivial at first glance, but this architectural approach carries significant customer benefits, which we will elaborate on in this post.
First, there are no special installation requirements for In-Memory OLTP. In fact, when you install SQL Server 2014 you don’t even see an option referring to In-Memory OLTP, as it is installed transparently with the SQL engine. There is also no special hardware requirement to start development and testing on In-Memory OLTP.
To summarize, the steps to leverage In-Memory OLTP are:
a) Use the AMR tool to identify the hot tables and stored procedures
b) Migrate the hot tables to memory_optimized tables and access those tables through inter-op (term defined below)
c) Migrate the hot stored procedures to natively compiled
d) Iterate from a), since the performance signature has changed
Inter-op’ed access is an important concept in In-Memory OLTP. As pointed out in the architecture blog, the In-Memory OLTP storage engine can inter-operate with the traditional SQL Server relational engine (that is, the Query Optimizer and Query Processor). This allows an existing database workload to treat memory-optimized tables as if they were traditional SQL tables in select, insert, delete, update, and join operations. Another key concept is that the entire database does not need to be migrated to reside in memory; instead, selected hot tables (from a data access and contention point of view) can be migrated to In-Memory OLTP, which requires additional memory only for those tables — for example, 128 GB of memory while the whole database has a 2 TB footprint on disk. This means you can optimize in-memory usage to fit your existing hardware, improving TPS without increasing hardware costs.
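As a quick illustration of inter-op access (the Customers table and its columns are hypothetical, added only for this sketch), a single T-SQL statement can join a memory-optimized table with an ordinary disk-based table through the standard query processor:

```sql
-- Hypothetical sketch: TicketReservationDetail is memory-optimized,
-- dbo.Customers remains a traditional disk-based table.
SELECT c.CustomerName, d.iteration, d.lineId
FROM dbo.TicketReservationDetail AS d      -- memory-optimized table
JOIN dbo.Customers AS c                    -- disk-based table
    ON c.CustomerID = d.ThreadID
WHERE d.iteration = 42;
```

No special syntax is needed; the optimizer handles the inter-op access path transparently.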
The AMR tool includes a lightweight performance gathering component that you can set up to collect the performance signature of either a live production system or a test system. It queries DMVs such as sys.dm_db_index_operational_stats and sys.dm_exec_query_stats to detect the system hot spots over a period of time, aggregates the information using the data collector and the Management Data Warehouse (MDW), and finally presents the user with a set of reports highlighting the top candidate tables and stored procedures for migration into In-Memory OLTP. We will provide more details on how to use the AMR tool in an upcoming blog.
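While the AMR reports do this aggregation for you, a rough manual look at index hot spots is possible with a query along these lines (a sketch only; the AMR tool applies more sophisticated weighting over time):

```sql
-- Sketch: find the most heavily accessed indexes in the current database.
SELECT TOP (10)
    OBJECT_NAME(s.object_id) AS table_name,
    s.index_id,
    s.range_scan_count,
    s.singleton_lookup_count,
    s.row_lock_wait_in_ms + s.page_lock_wait_in_ms AS lock_wait_ms
FROM sys.dm_db_index_operational_stats(DB_ID(), NULL, NULL, NULL) AS s
ORDER BY s.range_scan_count + s.singleton_lookup_count DESC;
```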
Once you have determined the table(s) for migration, there are two steps needed before you can declare a table as memory_optimized:
a) Required: declare a filegroup to contain the checkpoint files for In-Memory OLTP — for example
ALTER DATABASE TicketReservations
ADD FILEGROUP [TicketReservations_mod_fg] CONTAINS MEMORY_OPTIMIZED_DATA;
ALTER DATABASE TicketReservations
ADD FILE (NAME = [TicketReservations_fs_dir],
FILENAME = 'C:\mounthead\Data\TicketReservations_fs_dir')
TO FILEGROUP [TicketReservations_mod_fg];
b) Optional: configure the in-memory OLTP memory usage limit (coming in CTP2) to provide performance stability for the traditional disk-based workloads
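In later builds, this memory limit is configured by binding the database to a Resource Governor pool; the sketch below shows the general shape (the pool name is hypothetical, and sys.sp_xtp_bind_db_resource_pool is not available in CTP1):

```sql
-- Sketch (later builds): cap In-Memory OLTP memory via Resource Governor.
CREATE RESOURCE POOL mem_oltp_pool WITH (MAX_MEMORY_PERCENT = 50);
ALTER RESOURCE GOVERNOR RECONFIGURE;
EXEC sys.sp_xtp_bind_db_resource_pool
    @database_name = N'TicketReservations',
    @pool_name     = N'mem_oltp_pool';
-- The database must be taken offline and back online for the
-- binding to take effect.
```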
There are two ways to migrate a table to be memory_optimized:
a) Use Transact-SQL to rename the original table, recreate the table as memory_optimized, and then run “insert into table_in_mem select * from table_on_disk” to migrate the data. If there are incompatible column data types (for example, LOBs), a workaround is required; the most common method is to split the incompatible columns into a separate table and join them through inter-op. The existing indexes might need to be migrated as well. Please refer to this section of the BOL for unsupported data types and some workarounds. A hash index works well for point lookups, but if you need range scans or ordered scans, you should consider the range index coming in CTP2. One migration example is below:
--Original disk-based table:
CREATE TABLE TicketReservationDetail (
iteration int not null,
lineId int not null,
col3 nvarchar(1000) not null,
ThreadID int not null,
constraint sql_ts_th primary key clustered (iteration, lineId))
--In-memory OLTP table (columns mirror the disk-based table above):
CREATE TABLE TicketReservationDetail (
iteration int not null,
lineId int not null,
col3 nvarchar(1000) not null,
ThreadID int not null,
constraint sql_ts_th primary key
nonclustered hash (iteration, lineId) with (bucket_count = 100000)
) with (memory_optimized = on)
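The rename-and-copy step described in a) can then be completed along these lines (object names follow the example above; the intermediate “_disk” name is hypothetical):

```sql
-- Sketch: rename the original disk-based table, then copy its rows
-- into the memory-optimized replacement of the original name.
EXEC sp_rename 'dbo.TicketReservationDetail', 'TicketReservationDetail_disk';
-- (create the memory-optimized dbo.TicketReservationDetail here)
INSERT INTO dbo.TicketReservationDetail (iteration, lineId, col3, ThreadID)
SELECT iteration, lineId, col3, ThreadID
FROM dbo.TicketReservationDetail_disk;
```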
b) Coming in CTP2 – there will be a Memory Optimization Advisor wizard in SSMS to migrate the table. The wizard will advise on incompatible data types and proceed with the actual migration if the column types and other attributes are compatible with In-Memory OLTP. It will also create the filegroup if one is not already in place. More details to come with CTP2.
Stored Procedure Migration
Even though table migration alone, with query inter-op access, is a good stopping point and solves common data access and contention problems at the storage engine level, In-Memory OLTP also provides native compilation, which optimizes the relational engine layer as well. The AMR tool provides insight into the top candidate stored procedures (SPs) to migrate. There is no wizard available yet to migrate SPs, so this will be a more manual process. Please note that natively compiled SPs can only operate on memory-optimized tables, but if the T-SQL used in the current procedure is fully compatible with In-Memory OLTP, the change is trivial; also note the ease of using the atomic block. For example:
--Original interpreted SP:
CREATE PROC InsertReservationDetails(@Iteration int, @LineCount int, @CharDate NVARCHAR(23), @ThreadID int)
AS
BEGIN
DECLARE @loop int = 0;
WHILE (@loop < @LineCount)
BEGIN
INSERT INTO dbo.TicketReservationDetail VALUES(@Iteration, @loop, @CharDate, @ThreadID);
SET @loop += 1;
END
END
--Natively compiled SP (the declarative settings in the header aggressively push optimization to compilation time to reduce runtime costs):
CREATE PROC InsertReservationDetails(@Iteration int, @LineCount int, @CharDate NVARCHAR(23), @ThreadID int)
with native_compilation, schemabinding, execute as owner
AS
BEGIN ATOMIC with (transaction isolation level = snapshot, language = N'us_english')
DECLARE @loop int = 0;
WHILE (@loop < @LineCount)
BEGIN
INSERT INTO dbo.TicketReservationDetail VALUES(@Iteration, @loop, @CharDate, @ThreadID);
SET @loop += 1;
END
END
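Either version of the procedure is invoked the same way; a hypothetical call with sample argument values might look like:

```sql
-- Hypothetical invocation with sample argument values.
EXEC dbo.InsertReservationDetails
    @Iteration = 1,
    @LineCount = 100,
    @CharDate  = N'2013-11-08 10:00:00.000',
    @ThreadID  = 7;
```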
After each major step of migrating tables and associated SPs, we recommend that you rerun the AMR tool and migrate additional tables and SPs iteratively. There is no fixed rule on how much of the database to migrate into In-Memory OLTP, since it depends on how concentrated the workload is and how much ROI you want to achieve.
In addition, from the DBA’s perspective, existing tools such as SSMS will continue to work with full support for memory-optimized tables and natively compiled SPs. Backup of in-memory objects is fully integrated into SQL Server backup, and HADR will be fully integrated into AlwaysOn by CTP2. There are also new DMVs, such as sys.dm_db_xtp_index_stats and sys.dm_db_xtp_checkpoint_files, to monitor memory consumption and storage activities, as well as stock reports.
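For example, a quick look at per-index activity and checkpoint file state might use queries along these lines (the column selection is illustrative):

```sql
-- Sketch: inspect memory-optimized index activity and checkpoint files.
SELECT OBJECT_NAME(object_id) AS table_name, index_id,
       scans_started, rows_returned
FROM sys.dm_db_xtp_index_stats;

SELECT state_desc, file_type_desc, COUNT(*) AS file_count
FROM sys.dm_db_xtp_checkpoint_files
GROUP BY state_desc, file_type_desc;
```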
For more information, download SQL Server CTP1 and get started today, or see the series introduction and index to read more posts in this series.
Comments in this blog are open and monitored for each post for a period of one week after the posting date. If you have a specific question about a blog post that is older than one week, please submit your question via our Twitter handle @SQLServer.