Previously, we discussed the new features in SQL 2014. In this article, I will go a bit further. What else is interesting in the newest SQL 2014?
Cache frequently used data on SSDs
You can specify an SSD (or an SSD array) to be used to extend memory. SQL Server 2014 will automatically cache data there with zero risk of data loss. (Only clean pages, not dirty pages, are stored there.) The best use case is for read-heavy OLTP workloads. This works with local SSDs in clusters, too – each node can have its own local SSDs (just like you would with TempDB) and preserve the SAN throughput for the data and log files.
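The feature behind this is called Buffer Pool Extension, and turning it on is a one-liner. Here's a minimal sketch, assuming an SSD volume mounted as E: – the file path and the 64 GB size are just placeholders for whatever fits your server:

    -- Point the buffer pool extension at a file on the local SSD
    ALTER SERVER CONFIGURATION
    SET BUFFER POOL EXTENSION ON
        (FILENAME = 'E:\SSDCACHE\BufferPoolCache.BPE', SIZE = 64 GB);

    -- Check its state and size
    SELECT path, state_description, current_size_in_kb
    FROM sys.dm_os_buffer_pool_extension_configuration;

    -- Turn it off again if you change your mind
    ALTER SERVER CONFIGURATION SET BUFFER POOL EXTENSION OFF;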
AlwaysOn Availability Groups get more secondaries
If you really need to scale out your reads, SQL 2014 gives you up to 8 secondaries (up from 4). Of course, you’ll be paying for Enterprise Edition licensing on these, but if you were already going to replicate data out to various reporting or BI servers, now your life is easier.
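Adding that extra read-only secondary looks just like it did in 2012 – here's a rough sketch, with made-up AG, server, and domain names:

    -- Run on the primary: add another asynchronous, read-only secondary
    ALTER AVAILABILITY GROUP [SalesAG]
    ADD REPLICA ON 'SQLREPORT05'
    WITH (
        ENDPOINT_URL = 'TCP://sqlreport05.mydomainname.com:5022',
        AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
        FAILOVER_MODE = MANUAL,
        SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY)
    );

You still have to join the availability group from the new replica and restore the databases there, same as before – the only thing that changed is how many of these you're allowed to stack up.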
AlwaysOn AG readable secondaries will be more reliable
In SQL 2012, if your primary drops offline or the cluster loses quorum, the readable replica databases drop offline. In SQL 2014, the secondaries remain online and readable when the primaries aren’t available. However, keep in mind that typical AlwaysOn AG connections go through the AG listener name and then fetch the list of readable replicas from the primary. This means that in order to keep your report queries online, you can’t use the AG listener – you have to connect directly to the replica’s server name. I like using a separate set of DNS records for readable replicas, like readonly.mydomainname.com, and pointing my report servers at those.
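For example, a reporting connection string that skips the listener and goes straight at the read-only DNS name might look like this (the names are placeholders, and ApplicationIntent=ReadOnly is only required if the replica is configured with ALLOW_CONNECTIONS = READ_ONLY):

    Server=readonly.mydomainname.com;Database=SalesDB;Integrated Security=SSPI;ApplicationIntent=ReadOnly;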
Failover Cluster Support for Cluster Shared Volumes
With regular volumes, only one node can own the volume at any given time. It owns the entire volume, and no other node can see/read/write files on that volume. However, Windows Server clusters have a type of drive volume called Cluster Shared Volumes (CSV) with much more flexibility. Multiple cluster nodes can be connected to the same volume at the same time, but each node can access different files on the drive independently. Windows and Hyper-V have supported this for a while (see the BOL section on the benefits), and now SQL Server supports it too. The big benefit here is that if one node of the SQL Server cluster loses connectivity to the storage, it can still read and write data over the network through a different node’s SAN connection.
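CSVs show up on every node under C:\ClusterStorage, so putting a clustered instance's databases on one is just a matter of pointing the file paths there. A rough sketch, assuming your CSV is mounted as Volume1 and the folders already exist (all names and paths are placeholders):

    CREATE DATABASE SalesDB
    ON PRIMARY
        (NAME = SalesDB_data, FILENAME = 'C:\ClusterStorage\Volume1\SQLData\SalesDB.mdf')
    LOG ON
        (NAME = SalesDB_log,  FILENAME = 'C:\ClusterStorage\Volume1\SQLLogs\SalesDB_log.ldf');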
Hekaton: specialized in-memory OLTP tables
If your application is facing serious concurrency issues with thousands of simultaneous connections trying to lock data, Hekaton offers an intriguing solution. I’m not even going to try to type out an explanation here, but I’ll point out a few challenges with it:
You’ll probably need to change your data model. For example, identity fields aren’t supported – you’ll need to use something like a GUID as the primary key, and that key has to be a nonclustered (hash or range) index rather than a clustered one.
You’ll also probably need to change your code. Hekaton works best with stored procedures – specifically, stored procs that it can compile into native code. There’s a quick sketch of both the table and the proc after this list.
It’s memory-only. If you experience sudden data growth in your Hekaton tables, that means you can cache less of your other tables. If you run out of memory – well, let’s just say you’d better pray you don’t run out of memory, because hello, downtime.
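To make those data model and code changes a little more concrete, here's a bare-bones sketch (all names, sizes, and paths are made up): a memory-optimized filegroup, a table keyed on a GUID with a nonclustered hash index, and a natively compiled proc that inserts into it.

    -- The database needs a memory-optimized filegroup and container first
    ALTER DATABASE SalesDB ADD FILEGROUP SalesDB_mod CONTAINS MEMORY_OPTIMIZED_DATA;
    ALTER DATABASE SalesDB ADD FILE
        (NAME = SalesDB_mod1, FILENAME = 'E:\SQLData\SalesDB_mod1')
        TO FILEGROUP SalesDB_mod;
    GO

    -- A memory-optimized table: GUID key, nonclustered hash index, no IDENTITY
    CREATE TABLE dbo.ShoppingCart
    (
        CartId      UNIQUEIDENTIFIER NOT NULL
            PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
        CustomerId  INT NOT NULL,
        CreatedDate DATETIME2 NOT NULL
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
    GO

    -- A natively compiled stored procedure that works against it
    CREATE PROCEDURE dbo.InsertCart
        @CartId UNIQUEIDENTIFIER, @CustomerId INT
    WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
    AS
    BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
        INSERT INTO dbo.ShoppingCart (CartId, CustomerId, CreatedDate)
        VALUES (@CartId, @CustomerId, SYSDATETIME());
    END;
    GO

Note that DURABILITY = SCHEMA_AND_DATA keeps the rows recoverable after a restart; SCHEMA_ONLY is faster still, but the data vanishes when the instance goes down.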
Conclusion
Any questions about the newest SQL 2014? Want to test the new SQL 2014 features? Looking for SQL 2014 hosting? Try us as your SQL 2014 hosting solution.