European Windows 2012 Hosting BLOG

BLOG about Windows 2012 Hosting and SQL 2012 Hosting - Dedicated to European Windows Hosting Customers

SQL Server 2019 Hosting - HostForLIFE.eu :: Resizing Tempdb In SQL Server

clock October 21, 2020 08:05 by author Peter

Occasionally, we must resize or realign our tempdb log file (.ldf) or data files (.mdf or .ndf) because a growth event has pushed the file sizes out of whack. To resize, we have three options: restart the SQL Server service, add additional files, or shrink the current file. Most of us have been faced with runaway log files, and in an emergency restarting the SQL Server service may not be an option, yet we still need to get the log file smaller before we run out of disk space. The process of shrinking down that file can get tricky, so I created this flow chart to help you out if you ever get into this situation.

Now it's very important to note that many of these commands will clear your cache and will greatly impact your server performance while the cache warms back up. In addition, you should not shrink your database data or log files unless absolutely necessary: shrinking tempdb while it is in use can result in a corrupt tempdb.
 
Let's walk through it and explain some things as we go along. The first thing you must do is issue a CHECKPOINT. A checkpoint marks the log as a “good up to here” point of reference. It lets the SQL Server Database Engine know it can start applying changes contained in the log after this point during recovery if an unexpected shutdown or crash occurs. Anything prior to the checkpoint is what I like to call “hardened”: all the dirty pages in memory have been written to disk, specifically to the .mdf and .ndf files. So, it is important to make that mark in the log before you proceed. Now, we know tempdb is not recovered during a restart, it is recreated; however, this is still a requirement.
    USE TEMPDB;    
    GO    
    CHECKPOINT;  


Next, we try to shrink the log by issuing a DBCC SHRINKFILE command. This is the step that frees the unallocated space from the database file if there is any unallocated space available. You will note the Shrink? decision block in the diagram after this step. It is possible that there is no unallocated space and you will need to move further along the path to free some up and try again.
    USE TEMPDB;    
    GO   
    DBCC SHRINKFILE (templog, 1000);   --Shrinks it to 1,000 MB (about 1 GB)
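
Before and after the shrink, it helps to know how much free space the file actually has. Here is a quick check using the sys.database_files catalog view (run inside tempdb); sizes are reported in 8 KB pages, so dividing by 128 converts them to MB.
    USE TEMPDB;
    GO
    SELECT name,
           size / 128 AS size_mb,
           (size - FILEPROPERTY(name, 'SpaceUsed')) / 128 AS free_mb
    FROM sys.database_files;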


If the database shrinks, great, congratulations; however, some of us still might have work to do. Next up is to try to free up some of that allocated space by running DBCC DROPCLEANBUFFERS and DBCC FREEPROCCACHE.
 
DBCC DROPCLEANBUFFERS

 
Clears the clean buffers from the buffer pool and columnstore object pool. This will flush cached indexes and data pages.
    DBCC DROPCLEANBUFFERS WITH NO_INFOMSGS;  

DBCC FREEPROCCACHE
 
Clears the procedure cache, which you are probably familiar with as a performance tuning tool in development. It will clean out all your execution plans from cache, which may free up some space in tempdb. As we know, though, this will create a performance hit, as your execution plans have to make it back into cache on their next execution and will not get the benefit of plan reuse. Now, it's not really clear why this works, so I asked tempdb expert Pam Lahoud (B|T) for clarification as to what this has to do with tempdb. Both of us are diving into this to understand exactly why it works. I believe it to be related to tempdb using cached objects and memory objects associated with stored procedures, which can have latches and locks on them that need to be released by running this. Check back for further clarification; I'll be updating this as I find out more.

    DBCC FREEPROCCACHE WITH NO_INFOMSGS;  

Once these two commands have been run and you have attempted to free up some space, you can try the DBCC SHRINKFILE command again. For most, this should make the shrink possible and you will be good to go. Unfortunately, a few of us may have to take a couple more steps to get to that point.
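
If the shrink still fails at this point, it can also help to see which sessions are holding on to tempdb space. A quick sketch using the sys.dm_db_session_space_usage DMV (page counts are in 8 KB pages):
    SELECT session_id,
           user_objects_alloc_page_count,
           internal_objects_alloc_page_count
    FROM sys.dm_db_session_space_usage
    ORDER BY internal_objects_alloc_page_count DESC;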
 
The last two things I do, when I have no other choice to get my log file smaller, are to run the last two commands in the process. These should do the trick and get the log to shrink.
 
DBCC FREESESSIONCACHE
 
This command will flush the distributed query connection cache, meaning cached connections for queries distributed between two or more servers.
    DBCC FREESESSIONCACHE WITH NO_INFOMSGS;  

DBCC FREESYSTEMCACHE
 

This command will release all unused cache entries from all cache stores, including the temp table cache. This covers any temp tables or table variables remaining in cache that need to be released.
    DBCC FREESYSTEMCACHE ('ALL');  

In my early days as a database administrator I would have loved to have this diagram. Having some quick steps during stressful situations such as tempdb’s log file filling up on me would have been a huge help. So hopefully someone will find this handy and will be able to use it to take away a little of their stress.



SQL Server 2019 Hosting - HostForLIFE.eu :: SQL Injection Protection Methods

clock September 22, 2020 09:33 by author Peter

In this article, we are going to look at SQL Injection Protection Methods. SQL Injection is one of the most dangerous issues any organization can ever go through because it threatens organizational and customer confidentiality and integrity.

For us to be able to counter or prevent SQL Injection, we need to look closely at how each type of injection is implemented to figure out how best to avoid that method. To start with, we will look at the means of injection, which in this case is SQL statements. Every type of injection manipulates a SQL statement to pass a malicious statement. So can this be avoided? Yes. It is very true that SQL injections have remained a huge threat to web development, and preventing them has not yet proven to be 100% effective, but in the meantime there are some measures and vulnerabilities that we can handle and still be able to limit the exposure to injection attacks.
 
Prevention of SQL Injections
In-band (Classic) Injection includes error-based injection and union-based injection. With this type of injection, the attacker uses the same channel to launch the attack and gather information. The biggest vulnerability here is dynamic SQL statements, whether in a simple statement or a stored procedure. Developers often assume that stored procedures are not injection targets, but they are if they are implemented dynamically. In a secure application, it is handy to use a parameterized query, which behaves as if it pre-runs the query with placeholder data in the input field; this way the server knows what command to expect, and the query cannot be altered outside the expected variables (i.e. a SELECT statement cannot be changed to a DROP statement).
 
Example
    EXEC SelectAllCustomers @FullName = '" + parameter1 + "', @PostalCode = '" + parameter2 + "'
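
For contrast, here is a minimal sketch of making the same call safely on the T-SQL side: the values travel as real parameters through sp_executesql, so the input can never change the shape of the statement (the procedure and parameter names are taken from the example above).
    DECLARE @fullName NVARCHAR(100) = N'Ren''ee O''Hara',   -- user input, quotes and all
            @postalCode NVARCHAR(10) = N'90210';

    EXEC sp_executesql
         N'EXEC SelectAllCustomers @FullName = @p1, @PostalCode = @p2',
         N'@p1 NVARCHAR(100), @p2 NVARCHAR(10)',
         @p1 = @fullName, @p2 = @postalCode;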

A stored procedure implemented with string concatenation, as in the first example above, will still be open to injection attack because of its dynamic nature. Since a lot has been tried and has failed, the most secure way to protect our web applications from Classic injection is to use strict validation on user input. This has its disadvantages, because it might restrict users when they use the application, but it is the safest way to avoid Classic injection.
 
In Asp.net applications one can use the following Validation Control to monitor user input.
<asp:RegularExpressionValidator ID="RegularExpressionValidator1" ControlToValidate="TextBox1" ValidationExpression="[a-zA-Z0-9]*[^!@%~?:#$%^&*()0']*" runat="server" ErrorMessage="Invalid Input" ForeColor="Red" Font-Size="XX-Small" Display="dynamic" Font-Italic="true"></asp:RegularExpressionValidator>  

The above control will ensure that no special characters are included in the user input and will show a client-side response to inform the user that the input cannot be accepted.

And this will only allow letters and numbers, which basically cannot be used in an injection. This may be seen as a disadvantage, given that there are some unique names, such as ‘Ren’ee’, with special characters in them, and this might limit user flexibility when using the web application.

Other than this, we should also bear in mind that databases have their own security features, which include READ and WRITE permissions, so it is very important to ensure that our database does not grant READ and WRITE permissions to unknown logins. You can find more on MS SQL permissions via this link.
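
As an illustration of least privilege, a sketch along these lines limits an application login to executing stored procedures only, with no direct table access (the role and user names here are hypothetical):
    CREATE ROLE app_executor;
    GRANT EXECUTE ON SCHEMA::dbo TO app_executor;
    DENY SELECT, INSERT, UPDATE, DELETE ON SCHEMA::dbo TO app_executor;
    -- Assumes a database user named app_user already exists for the application's login
    ALTER ROLE app_executor ADD MEMBER app_user;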
 
Microsoft also put out a SQL Injection Inspection Tool that sniffs out unvalidated input contained in a query. There are also other tools available online to search and scan your web application or website for vulnerabilities. They test your application using different types of SQL injection techniques. This will allow you to know the weaknesses and fix them beforehand.
 
The use of a Web Application Firewall for web applications that access databases can help identify SQL injection attempts and may help prevent SQL injection attempts from reaching the application.
 
Another safety precaution would be to hash passwords in the database. Password hashing is a safe way of ensuring that confidential passwords are not stored in the database as plain text, which could otherwise become an internal threat if an untrusted member of the organization has access to the database. Hashing passwords on insertion is one way to ensure that a malicious attacker will not gain access to sensitive information.
 
The following snippet shows an insert statement that makes use of one-way hashing; it hashes just the password input given by the user, which makes the database safer. There are many other algorithms supported by MS SQL Server.
    INSERT INTO dbo.[tblUser] (user_login, PasswordHash, FullName, Dept)  
            VALUES(@user_login, HASHBYTES('SHA2_512', @PWD), @pFullName, @Dept)  

Data in the PasswordHash column will then contain the hash bytes rather than the plain-text password.
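
The matching login check then hashes the supplied password with the same algorithm and compares it to the stored hash; a sketch against the same table:
    SELECT user_login, FullName
    FROM dbo.[tblUser]
    WHERE user_login = @user_login
      AND PasswordHash = HASHBYTES('SHA2_512', @PWD);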

Another tip is to use error messages that reveal little of what is happening internally. In Try...Catch statements it is wiser to show a custom message once an exception has been encountered.
 
Example
    Try
        Using con As New SqlConnection(ConfigurationManager.ConnectionStrings("Constring").ConnectionString)
            Using cmd = New SqlCommand("SaveBeneficiary", con)
                cmd.CommandType = CommandType.StoredProcedure
                ' All values travel as parameters, never concatenated into the SQL text
                cmd.Parameters.AddWithValue("@Surname", txtBenSurname.Text)
                cmd.Parameters.AddWithValue("@firstName", txtBenfName.Text)
                cmd.Parameters.AddWithValue("@sex", cmbSex.SelectedValue)
                cmd.Parameters.AddWithValue("@IDNum", txtBenIDNO.Text)
                cmd.Parameters.AddWithValue("@Marital_Status", cmbBenMaritalStatus.SelectedValue)
                cmd.Parameters.AddWithValue("@DOB", txtBenDOB.Text)
                cmd.Parameters.AddWithValue("@Address", txtBenAddress.Text)
                cmd.Parameters.AddWithValue("@Phone", txtBenContact.Text)
                cmd.Parameters.AddWithValue("@Employer", txtBenEmployer.Text)
                cmd.Parameters.AddWithValue("@Relationship", cmbRelationship.SelectedValue)
                cmd.Parameters.AddWithValue("@PolicyNum", txtPolicyNo.Text)
                cmd.Parameters.AddWithValue("@isDeceased", 0)
                con.Open()
                cmd.ExecuteNonQuery()
                Response.Write("<script>alert('Beneficiary Details Successfully Saved') ; location.href='customer_registration.aspx'</script>")
            End Using
        End Using
    Catch ex As Exception
        ' Log ex.Message internally; show the user only a generic message
        Response.Write("<script>alert('An error occurred while saving. Please try again.')</script>")
    End Try


It is wiser to use ex.Message for internal debugging only and show little information to the users, for protection.
 
For attacks such as Out-of-band injection, you would want to ensure that your application does not have the following weaknesses:
    No network security parameters to restrict DNS or HTTP outbound requests.
    Database privileges sufficient to execute the functions that initiate outbound requests.
    Lack of input validation on the web application.

Once these vulnerabilities are taken care of, it will ensure that your data is not attacked and accessed through an unknown domain, and your application will be safe from Out-of-band injection.
 
SQL injection can also be countered through the use of modern and trusted technologies, but all the information above can help you protect your application from unauthorized and unwanted access.



SQL Server 2019 Hosting - HostForLIFE.eu :: Using Magic Tables Without A Trigger

clock September 8, 2020 09:36 by author Peter

Today, I am going to show how to use magic tables in SQL Server without using a trigger.

In our daily working environments, we need to maintain history, or a track of changes to master data, in transaction tables for a clear understanding of how data changes day by day. We have different approaches to achieve this, like creating triggers, temporal tables, and some other things. But here we achieve it without creating triggers or temporal tables in our database. As we all know, there are two magic tables, i.e., inserted and deleted, which we normally use in our triggers. In many scenarios triggers are not preferable, because they hold and block a full table when they fire. Today, I came up with a good approach for using those beautiful magic tables in our SQL Server database without creating a trigger. For example, I want to insert all values into table2 from table1 right after inserting the values in table1, by using a single insert statement.
 
First, I am creating two tables in my database with the following code:
    CREATE TABLE Table1 (ID1 INT, Col1 VARCHAR(100))  
    GO  
    CREATE TABLE Table2 (ID2 INT, Col2 VARCHAR(100))  
    GO  


After that, I insert values into table1 using an insert statement and, at the same time, insert them into table2 without another insert statement, using the inserted magic table via the OUTPUT clause, with the code below:
    INSERT INTO Table1 (ID1, Col1)  
    OUTPUT inserted.ID1, inserted.Col1  
    INTO Table2  
    VALUES(1,'Col'), (2, 'Col2')  
    GO  

Check the result after executing the above code, to see whether the data was inserted into table2 or not, using the following select statements.
    SELECT *  
    FROM Table1  
    GO  
    SELECT *  
    FROM Table2  
    GO  


I got the output as follows:

In the same way, we can insert records fetched from another table by using a select statement. In general, we insert data into a table that has been fetched using a select command. You can observe the code below for a better understanding.
    -- Assumes #temp1 and #temp2 already exist, each with a BankAccountTransactionId column
    insert into #temp1(BankAccountTransactionId)
    output inserted.BankAccountTransactionId into #temp2
    select BankAccountTransactionId
    from Bankaccounttransactions with (nolock) where transactiondate = '2020-08-01'
    go

In the above query, I inserted data into #temp1 that was fetched from a table in my database, and the OUTPUT clause copied the same rows into #temp2. You can observe the output by selecting records from both the #temp1 and #temp2 tables.
 
In the same way, we can use our other magic table, deleted, to track old data. Here I am updating the data in the col1 column against id1 = 1 in table1, and you can observe in table2 that a record was inserted with the old value from table1. For a clear understanding, truncate the data in table2 before updating the data in table1. Execute the code below.
    update  Table1 set col1 = 'col3'   
    OUTPUT deleted.ID1, deleted.Col1  
    INTO Table2  
    where ID1 = 1  


I am updating the record in the col1 column against id1 = 1. For a better understanding of the data changes in the table, I want to track history in table2, and I can store the old data in table2 by using the above code. The output is as follows:

 
We can also store deleted records in log tables by using the deleted magic table; a common scenario is keeping a track of deleted records. Here I want to delete records from Table1, but I need to store those records in my audit table, i.e., Table2. The code below will help in this scenario.
    delete from Table1  
    OUTPUT deleted.ID1, deleted.Col1  
    INTO Table2  


See the output below by executing the following command:
    SELECT *  
    FROM Table1  
    GO  
    SELECT *  
    FROM Table2  
    GO  


Notice that I can store old data and insert data into multiple tables at a time using a single statement, without using triggers or temporal tables.



SQL Server 2019 Hosting - HostForLIFE.eu :: How To Find A String Repeated In Paragraph By Using SQL Server?

clock September 1, 2020 08:59 by author Peter

I went for a SQL Developer interview and the interviewer asked me to write code to find how many times a string appeared in a paragraph. For example, consider the below string.
    set @string = 'A stored procedures is a subroutine available to applications that access a relational database management system.     
    Such procedures are stored in the database data dictionary.     
    Uses for stored procedures include data-validation or access-control mechanisms'   


In the above example, I want to find how many times "procedures" is repeated in the paragraph. The word is repeated 3 times, so the output should be 3. Please have a look at the steps below to see how I achieved the desired output.
 
In general, we have heard this type of question in a different form, like how many times a letter is repeated in a given string. For example, I want to find how many times the letter "a" is repeated in the word "database".
select LEN('database')-len(REPLACE('database','a',''))
 
After executing the above command we get the output as 3. But that alone will not work in my scenario, because my paragraph has around 262 characters, the "procedures" string is repeated 3 times, and the length of my word is 10; if I run the same LEN minus REPLACE calculation against the paragraph, I get the result 30. So we need to go a bit further to get the correct result. If I divide that output by the length of the word I need to find, I get the correct count. I am attaching the sample code below; execute it for a better understanding.
    Declare @string varchar(max)    
        
    set @string = 'A stored procedures is a subroutine available to applications that access a relational database management system.     
    Such procedures are stored in the database data dictionary.     
    Uses for stored procedures include data-validation or access-control mechanisms'    
        
    select (LEN(@string)-len(REPLACE(@string,'procedures','')))/LEN('procedures')  


After executing the above code I got the output 3.
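
If you need this often, the same logic can be wrapped in a scalar function. Here is a minimal sketch (the function name is my own):
    CREATE FUNCTION dbo.CountOccurrences (@text VARCHAR(MAX), @word VARCHAR(100))
    RETURNS INT
    AS
    BEGIN
        RETURN (LEN(@text) - LEN(REPLACE(@text, @word, ''))) / LEN(@word);
    END
    GO
    SELECT dbo.CountOccurrences('database', 'a');  -- returns 3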

For better understanding please try this with your own examples.




SQL Server 2019 Hosting - HostForLIFE.eu :: Database Connection String MS-SQL Server - Easy Steps

clock August 28, 2020 07:42 by author Peter

This is the easiest way to create a connection string to a database without spending too much time. Many times, at the beginner level, I faced problems with the connection string and the providers associated with it. Then I found a way to handle it simply by following the steps below, and I think it should be shared with you all.
 
Hope you like this simple tutorial!
 
Let's begin with the steps:
 
Step 1
Open Notepad on your PC.
 
Step 2
Now save the Notepad file with Ctrl+Shift+S as Connection.udl on the desktop.
 
Step 3
Now go to the desktop and double-click on the Connection.udl file. The opened file looks like this:

Step 4
Now select or enter the server name.
 
Step 5
Now select the database name.
 
Step 6
Now Press Test Connection.
 
Step 7
Click OK on the Test Connection popup and click OK on the Connection.udl file.
 
Step 8

Then open the Connection.udl file with Notepad, and you will get the database connection string.
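
For a server named MYSERVER and a database named MyDatabase using Windows authentication, the saved file contains something like the following (the server and database names here are placeholders):
    [oledb]
    ; Everything after this line is an OLE DB initstring
    Provider=SQLOLEDB.1;Integrated Security=SSPI;Persist Security Info=False;Initial Catalog=MyDatabase;Data Source=MYSERVER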

Note
You only need to create the Connection.udl file (Steps 1 and 2) when it does not already exist on your system.
Otherwise, you can continue from Step 3 to Step 8. Those who are unable to create a Connection.udl file, please find the attachment and continue with Steps 3 to 8.




SQL Server 2019 Hosting - HostForLIFE.eu :: Using LAST_QUERY_PLAN_STATS In SQL Server

clock August 14, 2020 11:00 by author Peter

In my opinion, SQL Server 2019 is one of the greatest releases of the product that Microsoft has ever produced. The amount of improvements across the platform really allows data professionals to have better and faster queries while simultaneously minimizing effort. One of the many things that it fixes is the ability to get an actual plan without having to re-run the query.

Currently, if you are not on SQL Server 2019 and want to see an execution plan, you would dive into the execution plan cache to retrieve an estimated plan. Keep in mind that this is just an estimated plan; the actual plan, while the shape should be the same, will have runtime metrics. These actual runtime metrics could be different than what is shown in the estimated plan, so it is important to get the actual plan whenever possible.
 
With the introduction of lightweight statistics, SQL Server can retain the metrics of the actual execution plan if enabled. Note that this could introduce a slight increase in overhead; however, I haven't yet seen it be detrimental. These metrics are vastly important when performing query tuning.
 
Metrics
    Actual number of rows per operator
    Total CPU time
    Total execution time
    Actual maximum degree of parallelism
    Memory granted and subsequently used

Two new items are also introduced for this particular feature, one being a new database scoped configuration, LAST_QUERY_PLAN_STATS, and the other a new dynamic management function, sys.dm_exec_query_plan_stats.
 
LAST_QUERY_PLAN_STATS
Introduced in SQL Server 2016, database scoped configurations allow you to have more granular control of how the database is going to behave.
 
LAST_QUERY_PLAN_STATS can either be set globally with trace flag 2451 or on each individual database. It does require compatibility level 150, so you must be on SQL Server 2019 to take advantage of this new ability. You can enable this database scoped configuration with this command.
    ALTER DATABASE SCOPED CONFIGURATION SET LAST_QUERY_PLAN_STATS = ON 

Like with anything else, there is a minor overhead with enabling this. SQL Server does have to work just a tad bit harder to capture and store the runtime metrics for the plans. This overhead should be minimal but as always, proper testing should be completed against your given workloads.
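
To confirm the setting took effect, you can query the sys.database_scoped_configurations catalog view:
    SELECT name, value
    FROM sys.database_scoped_configurations
    WHERE name = 'LAST_QUERY_PLAN_STATS';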
 
SYS.DM_EXEC_QUERY_PLAN_STATS
Along with the new database scoped configuration comes a new dynamic management function, or DMF. This DMF requires a plan_handle, which is obtained by cross applying it with one of the other DMVs, such as:
    dm_exec_cached_plans
    dm_exec_query_stats
    dm_exec_requests
    dm_exec_procedure_stats
    dm_exec_trigger_stats


Here is an example query,

    SELECT * FROM SYS.DM_EXEC_CACHED_PLANS CP 
    CROSS APPLY SYS.DM_EXEC_QUERY_PLAN_STATS(CP.PLAN_HANDLE)  PS 
    WHERE PS.DBID = 10 


Let’s take a look at this feature in action.
 
Finding the Last Known Actual Plan
Using WideWorldImporters, let's first ensure that the database is at compatibility level 150 and that the new database scoped configuration is enabled.

 
We can see that it is now at the correct compatibility level and that the database scoped configuration has been enabled. Now we can run a query and go find the actual plan utilizing the new DMF.
 
I am going to do a simple join between the Sales.Orders and Sales.OrderLines tables. After the results are returned, I'll interrogate the new DMF, using sys.dm_exec_cached_plans as a way to get the plan handle.
    select * from sales.orders o 
           inner join sales.OrderLines ol on o.OrderID=ol.OrderID; 
    select size_in_bytes,cacheobjtype,objtype,plan_handle,ps.dbid, ps.query_plan 
    from sys.dm_exec_cached_plans cp 
           cross apply sys.dm_exec_query_plan_stats(cp.plan_handle) ps 
    where ps.dbid = 10 


 
The query_plan column in the result is derived from the new DMF. If you click on the hyperlink in SQL Server Management Studio, it will open the query plan. Once opened, hit F4 or right-click to show the properties of the plan, and select the Clustered Index Scan of the Sales.OrderLines table.

You can now see the actual runtime statistics that SQL Server captured when the query was executed.
 
As I mentioned at the beginning of this post, SQL Server 2019 is one of the best releases of the product that Microsoft has ever produced. Features such as lightweight query profiling and LAST_QUERY_PLAN_STATS just continue to make the data professional's life that much easier.
 
Have you migrated to SQL Server 2019 yet? Are you using this feature? If you are, drop me a note! I’d love to hear about your experience with it!



SQL Server 2019 Hosting - HostForLIFE.eu :: New Resumable Online Index Create SQL Server 2019

clock August 5, 2020 13:32 by author Peter

SQL Server 2019 brings a very exciting new feature that was long overdue. Resumable Online Index Creation is one of my favorite new things. This, when paired with Resumable Index Rebuilds introduced with SQL Server 2017, really gives database administrators much more control over the index processes.

Have you ever started to build a new index on a very large table, only to have users call and complain that their process is hung, not completing, or the system is slow? That's when you realize you're the cause, because you tried to sneak in a new index. I have, many times; creating a new index can impact performance and can be a problematic process for users when you have little or no downtime window available. When you kill the create process, it rolls back, requiring you to start from the beginning next time. With Resumable Online Index Creation, you now have the ability to pause and restart the build at the point it was paused. You can see where this can be very handy.

To use this option for creating the index, you must include "RESUMABLE=ON".

CREATE INDEX MyResumableIndex on MyTable (MyColumn) WITH (ONLINE=ON, RESUMABLE=ON, MAX_DURATION=30)  

Let’s say you have only two 30-minute windows available to create this new index over the next two days. You could use the MAX_DURATION option with the new RESUMABLE=ON to specify the time interval for the index build. Once the 30 minutes are up, the index build automatically gets paused if it has not completed. When you’re ready the next day, you can RESUME right where it left off, allowing you to complete the process. Very cool.

Another added benefit is managing transaction log growth. As we all know, creating indexes, especially large ones, can cause hefty log growth events and can unfortunately lead to running out of disk space. This new functionality allows us to better manage that: we can now pause the process and truncate or back up the log mid-process, building the index in chunks.

In the case where you create an index only to get complaints from users, or need to manage your log growth, you can simply do the below to PAUSE the build and restart it when the time is better or your transaction log maintenance has completed.

You can KILL the SPID creating the index or run the below.
ALTER INDEX MyResumableIndex ON MyTable PAUSE; 

To restart, run the following, or simply re-execute your CREATE INDEX statement:

ALTER INDEX MyResumableIndex ON MyTable RESUME;

According to MSDN, resumable online index create supports the following scenarios:

  • Resume an index creation operation after an index create failure, such as after a database failover or after running out of disk space.
  • Pause an ongoing index creation operation and resume it later, allowing you to temporarily free system resources as required.
  • Create large indexes without using as much log space or a long-running transaction that blocks other maintenance activities, while allowing log truncation.

Note
SORT_IN_TEMPDB=ON is not supported when using RESUMABLE=ON

Once you pause it, how do you know how far the index build got and how much is left? Along with the resumable REBUILD index feature added in SQL Server 2017, we also got a new sys.index_resumable_operations system view. This view shows us the percentage complete, current state, start time, and last pause time. I am very excited about this new index create feature. I think this is a big WIN for SQL Server 2019.
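
Here is a quick sketch of checking on a paused (or running) resumable operation through that view:

SELECT name, percent_complete, state_desc, start_time, last_pause_time
FROM sys.index_resumable_operations;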




SQL Server 2019 Hosting - HostForLIFE.eu :: Using JSON Support in SQL Server 2019

clock April 22, 2020 10:03 by author Peter

There are so many new capabilities to play around with and explore in SQL Server 2019. One of the capabilities that has caught our attention is JSON support. In this post, I will get started with creating a JSON document using SQL Server 2019.

SQL Server has supported JSON (JavaScript Object Notation) since SQL Server 2016. JSON is an open, text-based exchange format based on JavaScript's object literal notation, and a popular data-interchange format used in modern web and mobile applications, as well as for storing unstructured data. It is supported in several DB/NoSQL engines, and this capability enables developers to put JSON processing logic in SQL Server to parse, query, analyze, and update JSON data.

Step 1

In this post, you will query a standard table and then convert it into JSON format. For this example, you are querying the standard AdventureWorks database:

SELECT TOP(2) PP.[PersonType], PP.[FirstName], PP.[MiddleName], PP.[LastName], EA.[EmailAddress]
FROM Person.Person PP
INNER JOIN [Person].[EmailAddress] EA
ON EA.[BusinessEntityID] = PP.[BusinessEntityID]

The sample output inside SQL Server Management Studio is an ordinary rowset.

Step 2

Now, you will add the FOR JSON AUTO support to get the same output in JSON format.

SELECT TOP(2) PP.[PersonType], PP.[FirstName], PP.[MiddleName], PP.[LastName], EA.[EmailAddress]
FROM Person.Person PP
INNER JOIN [Person].[EmailAddress] EA
ON EA.[BusinessEntityID] = PP.[BusinessEntityID]
FOR JSON AUTO

The output looks like below:

[
{
"PersonType":"EM",
"FirstName":"X",
"MiddleName":"Y",
"LastName":"Z",
"EA":[{"EmailAddress":"[email protected]"}]
},
{"PersonType":"EM",
"FirstName":"A",
"MiddleName":"B",
"LastName":"C",
"EA":[{"EmailAddress":"[email protected]"}]
}
]

Step 3

You can also generate JSON using the PATH option like:

SELECT TOP(2) PP.[PersonType], PP.[FirstName], PP.[MiddleName], PP.[LastName], EA.[EmailAddress]
FROM Person.Person PP
INNER JOIN [Person].[EmailAddress] EA
ON EA.[BusinessEntityID] = PP.[BusinessEntityID]
FOR JSON PATH

The output for this query looks a little different:

[
{"PersonType":"EM", "FirstName":"X", "MiddleName":"Y", "LastName":"Z", "EmailAddress":"[email protected]"},
{"PersonType":"EM", "FirstName":"A", "MiddleName":"B", "LastName":"C", "EmailAddress":"[email protected]"}
]

Now you are able to generate simple JSON output using the FOR JSON clause.
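
Going the other way, you can also read values back out of a JSON string. A small sketch using JSON_VALUE and OPENJSON (both available since SQL Server 2016):

DECLARE @json NVARCHAR(MAX) = N'[{"PersonType":"EM","FirstName":"X","LastName":"Z"}]';

-- Extract a single scalar value
SELECT JSON_VALUE(@json, '$[0].FirstName') AS FirstName;

-- Shred the array back into rows and columns
SELECT PersonType, FirstName, LastName
FROM OPENJSON(@json)
     WITH (PersonType VARCHAR(2), FirstName VARCHAR(50), LastName VARCHAR(50));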




SQL Server 2019 Hosting - HostForLIFE.eu :: Creating a custom sequence in MS-SQL Server

clock December 13, 2019 11:56 by author Peter

A sequence object in MS-SQL Server is designed to define and return only integer values, such as int, bigint, smallint, and tinyint. However, if we want to generate sequence values that are alpha-numeric, we can define a stored procedure that combines a prefix with the sequence to produce an alpha-numeric sequence value. This blog gives a complete idea of how this can be implemented.

CREATE DATABASE sampdb1
GO

USE sampdb1
GO

--First create a sequence object s3 which will generate numbers from 1 to 5 and cycle
CREATE SEQUENCE s3
START WITH 1
INCREMENT BY 1
MINVALUE 1
MAXVALUE 5
CYCLE
GO

--Ensure that the sequence object is defined properly and generating the defined values
SELECT NEXT VALUE FOR s3
GO

--Define a stored procedure that generates a custom sequence of values.
--This stored procedure just prints the generated alpha-numeric combination of the sequence.
CREATE PROCEDURE genSeqVals AS
BEGIN
    DECLARE @n AS int
    SELECT @n = NEXT VALUE FOR s3
    PRINT 'A' + CAST(@n AS varchar(2))
END
GO

--Execute the following code and check.
EXEC genSeqVals
GO

--Define another stored procedure that generates a custom sequence of values.
--This one returns the alpha-numeric combination of the generated sequence value through an OUTPUT parameter.
CREATE PROCEDURE getSeqVals(@res varchar(4) OUTPUT) AS
BEGIN
    DECLARE @n AS int
    SELECT @n = NEXT VALUE FOR s3
    SET @res = 'A' + CAST(@n AS varchar(2))
END
GO

--Execute the following code and check.
DECLARE @seqnum varchar(4)
EXECUTE getSeqVals @seqnum OUTPUT
PRINT @seqnum


This is one way in which sequence objects can be custom-implemented per your requirements. I hope that the above lines of code have given you deeper insight into T-SQL for custom implementations. Happy coding!

 



About HostForLIFE.eu

HostForLIFE.eu is a European Windows Hosting Provider which focuses on the Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.

We offer the latest Windows 2016 Hosting, ASP.NET Core 2.2.1 Hosting, ASP.NET MVC 6 Hosting, and SQL 2017 Hosting.

