Memory-optimized tables must fully reside in memory and cannot be paged out. Unlike disk-based tables, where insufficient memory can slow down an application, the impact of an out-of-memory condition on memory-optimized tables can be severe, causing DML (i.e. delete, insert, or update) operations to fail. While this adds a new dimension to managing memory, application failures due to resource errors are nothing new. For example, applications using disk-based tables can fail with resource errors such as running out of transaction log, TempDB, or storage. It is the responsibility of DBAs/administrators to make sure resources are provisioned and managed appropriately to avoid such failures. SQL Server provides a rich set of monitoring tools, including DMVs, PerfMon, and XEvents, to help administrators identify problems early so that corrective action can be taken. Similarly, for memory-optimized tables, SQL Server provides a rich set of monitoring capabilities and configuration options so that you can manage your database/instance well and keep your application running smoothly. The remainder of this blog walks through each of these challenges and details how it can be addressed.

How do I estimate the memory needed?

This is the first question you need to consider when migrating existing tables to memory-optimized tables, or when designing a new application that uses memory-optimized tables. When migrating a disk-based table, you know its current size (for example, using sp_spaceused as described in http://technet.microsoft.com/en-us/library/ms188776.aspx), so it is a simple calculation to find the corresponding size for the memory-optimized table. The key differences to be aware of are that memory-optimized tables cannot be compressed like disk-based tables with ROW and PAGE compression, so the memory-optimized table will likely be bigger. However, unlike indexes on disk-based tables, the indexes on memory-optimized tables are much smaller: the index key is not stored with hash indexes, and all indexes are, by definition, covering indexes. Please refer to http://msdn.microsoft.com/en-us/library/dn247639(v=sql.120).aspx for details. A more challenging task is to estimate data growth. While you can make a reasonable guess, the best approach is to continuously monitor the table size and the memory consumed by memory-optimized tables in your database and instance. The same monitoring approach holds for new applications that are created with in-memory OLTP in mind.
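For ongoing monitoring, the per-table memory breakdown is exposed through the sys.dm_db_xtp_table_memory_stats DMV. A minimal sketch (column names as documented for SQL Server 2014):

-- Memory allocated/used per memory-optimized table in the current database
select object_name(object_id) as table_name,
       memory_allocated_for_table_kb,
       memory_used_by_table_kb,
       memory_allocated_for_indexes_kb,
       memory_used_by_indexes_kb
from sys.dm_db_xtp_table_memory_stats
order by memory_allocated_for_table_kb desc;

Tracking these numbers over time, during representative peak load, gives you the growth trend that a one-off estimate cannot.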

How does SQL Server manage memory for memory-optimized tables?

The in-memory OLTP engine is integrated with SQL Server. Memory allocations for memory-optimized tables are managed by the SQL Server memory manager, and the allocated memory is tracked using familiar constructs and tools such as memory clerks and DMVs. The following query shows the XTP memory clerks. The first row shows the memory allocated by system threads, the second row, with name DB_ID_5, represents the consumers in the database objects, and the third row, with memory node id 64, represents memory allocated to the DAC (Dedicated Admin Connection).

-- Show the memory allocated to in-memory OLTP objects
select type, name, memory_node_id, pages_kb/1024 as pages_MB
from sys.dm_os_memory_clerks where type like '%xtp%'

type                 name       memory_node_id pages_MB
-------------------- ---------- -------------- --------------------
MEMORYCLERK_XTP      Default    0              18
MEMORYCLERK_XTP      DB_ID_5    0              1358
MEMORYCLERK_XTP      Default    64             0

Also, there are new DMVs that can be used to monitor the memory consumed by the in-memory OLTP engine and memory-optimized tables. Please refer to http://msdn.microsoft.com/en-us/library/dn133203(v=sql.120).aspx for details.
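One of those new DMVs, sys.dm_db_xtp_memory_consumers, breaks memory down by individual consumer within the current database. A minimal sketch (column names as documented for SQL Server 2014):

-- Per-consumer memory breakdown for in-memory OLTP in the current database
select memory_consumer_type_desc, object_id, index_id,
       allocated_bytes/1024/1024 as allocated_MB,
       used_bytes/1024/1024 as used_MB
from sys.dm_db_xtp_memory_consumers;

A large gap between allocated and used bytes can be a hint that stale row versions are awaiting garbage collection.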

Like any other memory consumer, the in-memory OLTP engine responds to memory pressure, but only to a limited degree. For example, the memory consumed by data and indexes cannot be released even under memory pressure. This is different from disk-based tables, where external memory pressure may cause the buffer pool to shrink, which simply means there will be fewer data/index pages in memory. For this reason, it is all the more important to provision memory for memory-optimized tables appropriately; otherwise the in-memory OLTP engine can starve other memory consumers, including the memory needed by SQL Server for its own operations, which can ultimately lead to a slow or unresponsive application. To address this, SQL Server provides a configuration option to limit the memory consumed by memory-optimized tables.

How can I limit memory consumed by memory-optimized tables?

Starting with SQL Server 2014, you can bind a database to a resource pool. This binding is only relevant when the database has one or more memory-optimized tables. The memory available in the resource pool controls the total memory available to memory-optimized tables in the database.

For example, create a resource pool, mem_optpool, as follows:

CREATE RESOURCE POOL mem_optpool WITH (MAX_MEMORY_PERCENT = 40);

Now map the database, mydatabase, to this resource pool by executing the following command. With this command, you are specifying that the total memory taken by memory-optimized tables and indexes cannot exceed the limit in the resource pool; in this case, the remaining 60% of memory is available to other consumers. Note that the binding takes effect only after the database is taken offline and brought back online.

EXEC sp_xtp_bind_db_resource_pool 'mydatabase', 'mem_optpool'

When configuring memory for memory-optimized tables, capacity planning should be done based on MIN_MEMORY_PERCENT, not on MAX_MEMORY_PERCENT. This provides more predictable memory availability for memory-optimized tables, because pools that have MIN_MEMORY_PERCENT set can cause memory-pressure notifications against other pools to ensure the minimum percentage is honored. To ensure that memory is available for the in-memory OLTP database and to help avoid OOM (out-of-memory) conditions, the values for MAX_MEMORY_PERCENT and MIN_MEMORY_PERCENT should be the same. The SQL Server memory target is dynamic relative to the OS, so setting a minimum memory is recommended only if the server is not dedicated to SQL Server. For details, please refer to http://msdn.microsoft.com/en-us/library/dn465873(v=sql.120).aspx.
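Putting this recommendation together, a pool for an in-memory OLTP database would set both limits to the same value. A sketch (the pool name mem_optpool_minmax is illustrative; the RECONFIGURE step makes the pool effective before binding):

-- Create a pool with MIN_MEMORY_PERCENT = MAX_MEMORY_PERCENT, then bind the database
CREATE RESOURCE POOL mem_optpool_minmax
    WITH (MIN_MEMORY_PERCENT = 40, MAX_MEMORY_PERCENT = 40);
ALTER RESOURCE GOVERNOR RECONFIGURE;
EXEC sp_xtp_bind_db_resource_pool 'mydatabase', 'mem_optpool_minmax';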

How does SQL Server reclaim memory taken by deleted/updated rows?

The rows of memory-optimized tables are stored in memory and are linked through hash and nonclustered indexes, as described at http://msdn.microsoft.com/en-us/library/dn133190(v=sql.120).aspx. Concurrent access to memory-optimized tables uses optimistic concurrency control based on row versions. Over time, existing rows may get updated (an update operation generates a new row version) or deleted, but these rows cannot immediately be removed because concurrent transactions may still need those row versions. The older row versions are garbage collected (GC'd) asynchronously when it is determined, based on the active transactions, that they are no longer needed. A GC system thread shares the row-version cleanup (i.e. GC) with user transaction activity to ensure that SQL Server is able to keep up with the GC. When you configure the memory for your workload, you must account for the additional memory needed for stale row versions. You can roughly estimate the memory needed for stale row versions using the following steps:

  1. Find the number of rows updated/deleted by querying the DMV dm_db_xtp_object_stats at the beginning and end of a desired duration for your peak workload, and then compute the average number of rows updated/deleted per second.
  2. Estimate the number (NR) of row versions = (duration of the longest transaction accessing memory-optimized tables, in seconds) * (average number of rows updated/deleted per second).
  3. Assuming that delete/update operations were uniformly distributed across all memory-optimized tables, find the average row length, RLen (including the row header and index pointers). You can now estimate the memory for row versions as (NR * RLen).
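The first step above can be sketched with two snapshots of sys.dm_db_xtp_object_stats taken a fixed interval apart (the one-minute interval and temp-table name are illustrative; row_update_attempts and row_delete_attempts are the documented per-object counters):

-- Snapshot 1: capture update/delete counters for memory-optimized tables
select object_id, row_update_attempts, row_delete_attempts
into #xtp_snap
from sys.dm_db_xtp_object_stats;

waitfor delay '00:01:00';   -- sample over one minute of peak workload

-- Snapshot 2: compute the average rows updated/deleted per second
select s2.object_id,
       (s2.row_update_attempts - s1.row_update_attempts
        + s2.row_delete_attempts - s1.row_delete_attempts) / 60.0
           as avg_rows_changed_per_sec
from sys.dm_db_xtp_object_stats s2
join #xtp_snap s1 on s1.object_id = s2.object_id;

Multiplying the total of avg_rows_changed_per_sec by the longest transaction duration and the average row length then gives the rough row-version memory estimate from steps 2 and 3.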

You can use DMVs and PerfMon counters to monitor the progress of garbage collection. Please refer to http://msdn.microsoft.com/en-us/library/dn133203(v=sql.120).aspx for details.
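For example, the activity of the GC main thread is exposed through sys.dm_xtp_gc_stats, and the XTP garbage-collection PerfMon counters can also be read from within SQL Server via sys.dm_os_performance_counters (the LIKE filter below is an assumption about the exact counter object name on your instance):

-- Activity of the garbage-collection main thread
select * from sys.dm_xtp_gc_stats;

-- XTP garbage-collection PerfMon counters
select object_name, counter_name, cntr_value
from sys.dm_os_performance_counters
where object_name like '%XTP Garbage Collection%';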