By Steve Willson, VP Technology Services EMEA at Violin Memory.
SQL Server 2014’s new in-memory OLTP engine takes several leaps forward in performance and control. Application designers can now control which tables are permanently staged in memory (thus eliminating I/O waits on reads) and have their T-SQL compiled to native machine code (for dramatically faster execution). SQL Server 2014 also introduces lock- and latch-free concurrency control for the in-memory tables, which reduces inter-transaction contention and allows for far greater concurrency. This is especially helpful as the number of users per system continues to grow. So, with the data now in memory, what does this mean for storage?
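As a sketch of how these features surface in T-SQL (assuming a database already configured with a MEMORY_OPTIMIZED_DATA filegroup; the table and procedure names here are purely illustrative):

```sql
-- Memory-optimized table: rows live in memory; DURABILITY controls logging
CREATE TABLE dbo.SessionState
(
    SessionId INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    UserId    INT NOT NULL,
    Payload   NVARCHAR(4000)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO

-- Natively compiled stored procedure: T-SQL compiled to machine code
CREATE PROCEDURE dbo.UpdateSession @SessionId INT, @Payload NVARCHAR(4000)
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH
    (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    UPDATE dbo.SessionState
    SET Payload = @Payload
    WHERE SessionId = @SessionId;
END;
GO
```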
If we could hold all SQL Server data in memory using server DRAM, performance would be amazing, servers would be well utilised and applications would fly. But businesses have requirements beyond raw performance: the data also has to survive restarts, power loss and hardware failure.
Those requirements demand a persistent storage solution that complements the high performance of in-memory computing. In-memory database processing is only viable when the data can get into the system quickly, be changed by the system and remain intact between events.
If an application can write 20x faster, it will need a persistent storage system that can ingest sustained writes 20x faster. In-memory processing allows a transaction to start immediately (the data is already in DRAM), but the transaction still has to write the change to storage before it can commit (there’s still a transaction log).
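The durability trade-off is explicit in the T-SQL syntax. A hedged sketch (table name illustrative, and assuming a memory-optimized filegroup exists):

```sql
-- DURABILITY = SCHEMA_AND_DATA (the default) means every commit still
-- hardens log records to storage, so write latency stays on the critical
-- path. SCHEMA_ONLY skips logging entirely but loses all rows on restart,
-- so it suits only transient data such as staging or session tables.
CREATE TABLE dbo.StagingRows
(
    RowId INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 100000),
    Value NVARCHAR(100)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);

-- SQL Server 2014 also adds delayed durability, which batches log flushes
-- and acknowledges commits before the log write completes, trading a small
-- window of potential data loss for lower commit latency.
ALTER DATABASE CURRENT SET DELAYED_DURABILITY = ALLOWED;
```

For fully durable tables, none of these options removes the need for storage that can absorb the log write stream at the new, higher transaction rate.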
To ensure that the new high-powered performance engine in SQL Server 2014 is properly fed, Microsoft has been working on the next generation of storage transport: SMB 3.0 and SMB Direct (SMB over RDMA). By adding multi-channel I/O, transparent failover and RDMA support to SMB, Microsoft has created a file protocol that can match or exceed block protocols such as Fibre Channel while running over cheaper IP networking (10GbE, InfiniBand and the like). Via UNC naming and SOFS (Scale-Out File Server), it also enables a next-generation architecture in which everything is referenced by shares (no more static NTFS LUNs or drive mappings), secured and controlled by Active Directory, scaled and migrated live, monitored and provisioned through System Center and PowerShell, and complete with favourite features like mirroring, compression, encryption and deduplication.
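SQL Server (2012 and later) can place database files directly on an SMB 3.0 share, addressed by UNC path rather than a drive letter. A minimal sketch, where the server and share names are hypothetical:

```sql
-- Database files on a continuously available SMB 3.0 share exposed by a
-- Scale-Out File Server, referenced by UNC path instead of an NTFS LUN.
-- \\sofs1\sqldata and \\sofs1\sqllog are illustrative share names.
CREATE DATABASE Sales
ON PRIMARY
    (NAME = Sales_data, FILENAME = '\\sofs1\sqldata\Sales.mdf')
LOG ON
    (NAME = Sales_log,  FILENAME = '\\sofs1\sqllog\Sales.ldf');
```

Because the path is a share rather than a mapped LUN, the database can follow the file server through failover and live migration without any reconfiguration on the SQL Server side.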
IT departments are anything but one-dimensional, so Microsoft has been working with many partners to ensure that all classes of storage work within the same ecosystem. For branch-office kits, archive targets or moderate workloads there are Cluster-in-a-Box products; for high-end performance systems (SQL Server, SharePoint, Hyper-V farms, custom applications and so on) there are all-flash arrays such as the new Violin Memory Windows Flash Array (WFA). All of these follow the same principle: Microsoft storage services running on Windows Failover Clusters embedded inside the storage appliances. In the case of the WFA, Microsoft spent over a year tuning the kernel and I/O stack, optimising for the world’s fastest storage experience.
In-memory processing can now be backed by a natively Windows-embedded storage appliance that handles extreme write loads, provides high availability and scalability, reduces administration time, and delivers high-speed data access to the large reporting systems sitting alongside the high-end transactional systems. Storage is no longer the bottleneck holding applications back; instead it empowers the new high-performance engines like the one in SQL Server 2014. Also check out our recent announcements on SQL Server 2014.
Did you find this article helpful? Let us know by commenting below, or reaching out to us via @TechNetUK.