European Windows 2012 Hosting BLOG

BLOG about Windows 2012 Hosting and SQL 2012 Hosting - Dedicated to European Windows Hosting Customers

SQL Server 2014 Hosting Russia - HostForLIFE.eu :: Restart Interrupted Restore Operations

clock January 8, 2015 07:19 by author Peter

Situations in which a restore operation is interrupted are not especially rare. That is why, in this post, we will show you how to restart an interrupted restore operation using T-SQL in SQL Server 2014.

If your restore operation was interrupted, you can still restart the process and continue from the point where it stopped.

This feature can be very valuable when you have large databases to restore. If the restore process fails near the end, most of the time you can restart the operation from the point where it left off, rather than restarting the entire restore of the database.

Specifically, when you restore from tape, you can restart from the current tape rather than from the first one. However, if the restore had already reached the roll-forward stage, no data will be copied from that backup set.

Restart an interrupted restore with T-SQL
If you used a T-SQL statement to restore your database and the process was interrupted for any reason, known or unknown, all you need to do is append the WITH RESTART clause to the same statement and run it again.

Let’s assume that you have the following statement, or something similar, and it was interrupted during execution.
-- Restore a full database backup of myDB database
RESTORE DATABASE myDB
FROM DISK = 'C:\myDB.bak'
GO

Now, in order to continue the interrupted restore operation – applied to our situation – we are going to run the query from above with the WITH RESTART clause appended.

-- Just run the initial RESTORE statement specifying WITH RESTART in order to restart interrupted restore operations
RESTORE DATABASE myDB
FROM DISK = 'C:\myDB.bak'
WITH RESTART
GO
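
Before restarting, it can be useful to check how far the original restore got. One way to do this (a minimal sketch; run it in a separate session while the restore is in progress) is to query the sys.dm_exec_requests DMV:

-- Check the progress of a running RESTORE from another session
SELECT session_id,
       command,
       percent_complete,
       estimated_completion_time / 60000 AS estimated_minutes_remaining
FROM sys.dm_exec_requests
WHERE command LIKE 'RESTORE%'
GO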



European SQL 2014 Hosting - UK :: SQL Server 2014’s In-Memory OLTP

clock October 31, 2014 08:53 by author Scott

Transactions in SQL Server’s In-Memory OLTP are rather straightforward. While there are probably optimizations that are not discussed here, the basic design pattern is fairly easy to follow and could probably be reused in other projects.

Transactions in SQL Server’s In-Memory OLTP rely on a timestamp-like construct known as a Transaction ID. A transaction uses two timestamps: one for the beginning of the operation and one that is assigned when the transaction is committed. Multiple transactions can share the same start value, but each committed transaction receives its own unique commit value.

Likewise, each version of a row in memory has a starting and an ending transaction id. The basic rule is that a transaction can only read a row version when RowVersion.StartingId <= Transaction.StartingId < RowVersion.EndingId.
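
As a rough illustration of that visibility rule (a hypothetical sketch using an ordinary table variable, not the real internal row-version structures), consider the following:

-- Simulate the row-version visibility check; 9223372036854775807
-- (the largest bigint) stands in for "infinity"
DECLARE @TxStartingId bigint = 105;

DECLARE @RowVersions TABLE (
    RowKey     int,
    Payload    varchar(20),
    StartingId bigint,
    EndingId   bigint
);

INSERT INTO @RowVersions VALUES
    (1, 'old version', 100, 105),                  -- not visible: 105 is not < 105
    (1, 'new version', 105, 9223372036854775807),  -- visible
    (2, 'unchanged',    90, 9223372036854775807);  -- visible

-- A transaction may read a row version only when
-- StartingId <= @TxStartingId < EndingId
SELECT RowKey, Payload
FROM @RowVersions
WHERE StartingId <= @TxStartingId
  AND @TxStartingId < EndingId;
GO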

For a DELETE operation, the row version’s ending id is initially set to the transaction’s starting id. Then a flag is set to indicate that a transaction is in process.

UPDATE operations begin like DELETE operations with the setting of an ending transaction id on the previous row version. Then a new row version is created with a starting transaction id that is equal to the transaction’s starting id. The ending id is initially set to infinity and again an active transaction flag is set. The old row version also gets a pointer to the new row version.

An INSERT operation is the same as an UPDATE without the need to delete the previous row version.

Commit and Validation

The commit phase starts by assigning a unique transaction id to the current transaction. Then a validation process begins, in which the affected records are checked for isolation errors. The type of error depends on the transaction isolation level requested. Only three levels are supported by memory optimized tables: Snapshot, Repeatable Read, and Serializable.
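
On a memory optimized table, the isolation level is typically requested with a table hint (a minimal sketch; dbo.Accounts is an assumed table, and in an explicit transaction either the hint or the MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT database option is required):

-- Request an isolation level per table via a hint
BEGIN TRAN;
SELECT Balance
FROM dbo.Accounts WITH (SNAPSHOT)  -- or REPEATABLEREAD / SERIALIZABLE
WHERE AccountId = 1;
COMMIT TRAN;
GO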

Snapshot

Just like with normal tables, inserts into memory optimized tables can fail if another transaction attempts to insert a row with the same key at the same time. But the way it fails is a bit different. Normally one transaction has to wait for the other to complete, after which the losing transaction simply sees the duplicate row and never makes the insertion attempt.

Here we instead see both transactions insert their rows, after which each reads the data back to see if it won the race. If it didn’t, error 41325 is raised with the message “The current transaction failed to commit due to a serializable validation failure on table [name].”

Repeatable Read Transactions

MSDN has this warning about the repeatable read isolation level, “One important thing to note is that, because the repeatable read isolation level is achieved using blocking of the other transaction, the use of this isolation level greatly increases the number of locks held for the duration of the transaction.”

Since memory optimized tables don’t have locks, repeatable read works very differently for them. Rather than blocking other transactions, it rereads the rows at the end of the transaction. If any of them have changed, the transaction is aborted. This is reflected in error code 41305 with the message “The current transaction failed to commit due to a repeatable read validation failure on table [name].”
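
A hypothetical two-session sketch of that validation failure (dbo.Accounts is again an assumed memory optimized table):

-- Session 1: read a row under repeatable read
BEGIN TRAN;
SELECT Balance FROM dbo.Accounts WITH (REPEATABLEREAD) WHERE AccountId = 1;

-- Session 2: change the same row and commit while session 1 is still open
UPDATE dbo.Accounts SET Balance = Balance + 10 WHERE AccountId = 1;

-- Session 1: validation rereads the row at commit time, sees the
-- change, and aborts the transaction with error 41305
COMMIT TRAN;
GO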

Serializable Transactions

Like Repeatable Read, Serializable transactions traditionally relied on locks to keep other transactions from interfering with the data being examined. Since memory optimized tables have no locks, the engine instead checks whether the transaction failed to read any valid rows or encountered phantom rows. If either occurred, the transaction is aborted.

Post Processing

If validation is successful, the ending transaction id of each affected row version is set to the transaction’s ending id. Likewise the starting id for new row versions (e.g. from inserts and updates) is set to the transaction’s ending id. The active flags are cleared and the indexes are updated to point to the new records.

Garbage Collection

It should be noted that the indexes are not necessarily updated to remove pointers to the old row versions. Nor are the old versions deleted immediately.

Instead, memory optimized tables require the use of a reference-counted garbage collector. Details are not available yet, but based on the rest of the design its behavior is predictable. The GC will need to start at the indexes and check which of them point to out-of-date rows. When one is detected, it can decrement the reference counter and update the index to point to the most recent version of the row. If the counter reaches zero, the row version is deleted.

The tricky part with the garbage collector is to know which rows to look at in the first place. One would speculate that simply iterating over all the rows of each index would be rather cost prohibitive.

Design Notes

When using In-Memory OLTP, developers need to be much more aware of their access patterns. If code isn’t written to avoid overlapping transactions then the resulting isolation level violations will make aborted transactions much more common than they would be using traditional tables.
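
The usual mitigation is to wrap such transactions in retry logic. A minimal sketch (dbo.usp_DoWork and the retry count are assumptions):

-- Retry conflicts instead of surfacing them to the caller
DECLARE @retries int = 5;
WHILE @retries > 0
BEGIN
    BEGIN TRY
        EXEC dbo.usp_DoWork;    -- hypothetical procedure doing the real work
        SET @retries = 0;       -- success: leave the loop
    END TRY
    BEGIN CATCH
        -- 41302, 41305, 41325 and 41301 are conflict/validation errors
        -- that are safe to retry; anything else is rethrown
        IF ERROR_NUMBER() IN (41302, 41305, 41325, 41301) AND @retries > 1
            SET @retries -= 1;
        ELSE
            THROW;
    END CATCH;
END;
GO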

 



European SQL 2014 Hosting - UK :: Depth Analysis SQL 2014

clock October 15, 2014 12:15 by author Scott

Previously, we discussed the new features in SQL 2014. Now, in this article, I will look at SQL 2014 in more depth. What is most interesting in the newest SQL 2014?

Cache frequently used data on SSDs

You can specify an SSD (or an SSD array) to be used to extend memory. SQL Server 2014 will automatically cache data there with zero risk of data loss. (Only clean pages, not dirty pages, are stored there.) The best use case is for read-heavy OLTP workloads. This works with local SSDs in clusters, too – each node can have its own local SSDs (just like you would with TempDB) and preserve the SAN throughput for the data and log files.
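
Enabling it is a single statement (the file path and size below are illustrative assumptions; point FILENAME at the SSD volume):

-- Turn on the buffer pool extension (SQL Server 2014)
ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION ON
    (FILENAME = 'S:\SSDCACHE\bufferpool.bpe', SIZE = 50 GB);
GO

-- To turn it back off:
-- ALTER SERVER CONFIGURATION SET BUFFER POOL EXTENSION OFF;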

AlwaysOn Availability Groups get more secondaries 

If you really need to scale out your reads, SQL 2014 gives you up to 8 secondaries (up from 4). Of course, you’ll be paying for Enterprise Edition licensing on these, but if you were already going to replicate data out to various reporting or BI servers, now your life is easier.

AlwaysOn AG readable secondaries will be more reliable

In SQL 2012, if your primary drops offline or the cluster loses quorum, the readable replica databases drop offline. In SQL 2014, the secondaries remain online and readable when the primaries aren’t available. However, keep in mind that typical AlwaysOn AG connections go through the AG listener name and then fetch the list of readable replicas from the primary. This means that in order to keep your report queries online, you can’t use the AG listener – you have to connect directly to the replica’s server name. I like using a separate set of DNS records for readable replicas, like readonly.mydomainname.com, and having my report servers point at those.

Failover Cluster Support for Clustered Shared Volumes

With regular volumes, only one node can own the volume at any given time. It owns the entire volume, and no other node can see/read/write files on that volume. However, Windows Server clusters have a type of drive volume called Clustered Shared Volumes with much more flexibility. Multiple cluster nodes can be connected to the same volume at the same time, but each node can access different files on the drive independently. Windows and Hyper-V have supported this for a while (see the BOL section on the benefits), and now SQL Server supports it too. The big benefit here is that if one node of the SQL Server cluster loses connectivity to the storage, it can still read and write data over the network through a different node’s SAN connection.

Hekaton: specialized in-memory OLTP tables

If your application is facing serious concurrency issues with thousands of simultaneous connections trying to lock data, Hekaton offers an intriguing solution. I’m not even going to try to type out an explanation here, but I’ll point out a few challenges with it:

You’ll probably need to change your data model. For example, identity fields aren’t supported – you’ll need to use a GUID as a primary clustered key.

You’ll also probably need to change your code. Hekaton works best with stored procedures, and specifically stored procs that it can compile into native code (there’s a sketch of both after this list).

It’s memory-only. If you experience sudden data growth in your Hekaton tables, that means you can cache less of your other tables. If you run out of memory – well, let’s just say you’d better pray you don’t run out of memory, because hello, downtime.
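
To make the required changes concrete, here is a minimal sketch of a memory-optimized table and a natively compiled procedure (all names and sizes are illustrative, and a MEMORY_OPTIMIZED_DATA filegroup is assumed to already exist):

-- A memory-optimized table keyed on a GUID
CREATE TABLE dbo.ShoppingCart (
    CartId      uniqueidentifier NOT NULL
                PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    UserId      int NOT NULL,
    CreatedDate datetime2 NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO

-- A natively compiled procedure that works against it
CREATE PROCEDURE dbo.usp_AddCart
    @CartId uniqueidentifier,
    @UserId int
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    INSERT INTO dbo.ShoppingCart (CartId, UserId, CreatedDate)
    VALUES (@CartId, @UserId, SYSUTCDATETIME());
END
GO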

Conclusion

Any questions about the newest SQL 2014? Want to test the new SQL 2014 features? Looking for SQL 2014 hosting? Try us as your SQL 2014 hosting solution.



About HostForLIFE.eu

HostForLIFE.eu is a European Windows hosting provider which focuses on the Windows platform only. We deliver on-demand hosting solutions including Shared Hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.

We offer the latest Windows 2016 Hosting, ASP.NET Core 2.2.1 Hosting, ASP.NET MVC 6 Hosting and SQL 2017 Hosting.

