European Windows 2012 Hosting BLOG

BLOG about Windows 2012 Hosting and SQL 2012 Hosting - Dedicated to European Windows Hosting Customers

SQL Server Hosting - HostForLIFE :: Resizing Tempdb In SQL Server

January 8, 2021 08:57 by author Peter

Occasionally, we must resize or realign our tempdb log file (.ldf) or data files (.mdf or .ndf) after a growth event forces the file sizes out of balance. To resize, we have three options: restart the SQL Server service, add additional files, or shrink the current file. Most of us have been faced with runaway log files, and in an emergency, restarting the SQL Server service may not be an option, yet we still need to make the log file smaller before we run out of disk space. The process of shrinking down that file can get tricky, so I created this flow chart to help you out if you ever get into this situation.
 
Now it's very important to note that many of these commands will clear your cache and will greatly impact your server performance while the cache warms back up. In addition, you should not shrink your database data or log files unless absolutely necessary; shrinking tempdb while there is an active workload can even result in a corrupt tempdb.
 
Let’s walk through it and explain some things as we go along.
The first thing you must do is issue a CHECKPOINT. A checkpoint marks a “good up to here” point of reference in the log. It lets the SQL Server Database Engine know that, during recovery after an unexpected shutdown or crash, it only needs to start applying the changes contained in the log from this point forward. Anything prior to the checkpoint is what I like to call “hardened”: all the dirty pages in memory have been written to disk, specifically to the .mdf and .ndf files. So it is important to make that mark in the log before you proceed. Now, we know tempdb is not recovered during a restart, it is recreated; however, this is still a requirement.
    USE TEMPDB;    
    GO    
    CHECKPOINT;  

Next, we try to shrink the log by issuing a DBCC SHRINKFILE command. This is the step that frees the unallocated space from the database file, if there is any unallocated space available. You will note the Shrink? decision block in the diagram after this step. It is possible that there is no unallocated space, in which case you will need to move further along the path to free some up and try again.
    USE TEMPDB;    
    GO   
    DBCC SHRINKFILE (templog, 1000);   --Shrinks it to 1GB  
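
Before and after the shrink, it helps to check how much of each tempdb file is actually in use. Here is a minimal sketch (my addition, not part of the original flow chart) that reports size and used space per file; both figures come back in 8 KB pages, so dividing by 128 gives MB:
    USE tempdb;
    GO
    -- size and FILEPROPERTY(..., 'SpaceUsed') are both counted in 8 KB pages
    SELECT name,
           size / 128 AS size_mb,
           FILEPROPERTY(name, 'SpaceUsed') / 128 AS used_mb,
           (size - FILEPROPERTY(name, 'SpaceUsed')) / 128 AS free_mb
    FROM sys.database_files;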

If the file shrinks, congratulations! However, some of us still have work to do. Next up is to try to free up some of that allocated space by running DBCC DROPCLEANBUFFERS and DBCC FREEPROCCACHE.
DBCC DROPCLEANBUFFERS
 
Clears the clean buffers from the buffer pool and columnstore object pool. This will flush cached indexes and data pages.
    DBCC DROPCLEANBUFFERS WITH NO_INFOMSGS;  

DBCC FREEPROCCACHE
Clears the procedure cache, which you are probably familiar with as a performance-tuning tool in development. It cleans all of your execution plans out of cache, which may free up some space in tempdb. As we know, though, this creates a performance hit: your execution plans now have to make it back into cache on their next execution and will not get the benefit of plan reuse. It's not really clear why this works, so I asked tempdb expert Pam Lahoud (B|T) for clarification as to what this has to do with tempdb. Both of us are diving into this to understand exactly why it works. I believe it is related to tempdb's use of cached objects and the memory objects associated with stored procedures, which can have latches and locks on them that need to be released by running this. Check back for further clarification; I'll update this as I find out more.
    DBCC FREEPROCCACHE WITH NO_INFOMSGS;  

Once these two commands have been run and you have attempted to free up some space, you can try the DBCC SHRINKFILE command again. For most, this should make the shrink possible and you will be good to go. Unfortunately, a few of us may have to take a couple more steps to get to that point.
 
The last two things I do, when I have no other choice left to get my log file smaller, are to run the final two commands in the process. These should do the trick and get the log to shrink.
 
DBCC FREESESSIONCACHE
This command will flush the distributed query connection cache, used by queries that span two or more servers.
    DBCC FREESESSIONCACHE WITH NO_INFOMSGS;  

DBCC FREESYSTEMCACHE
This command will release all unused cache entries from all cache stores, including the temp table cache. This covers any temp tables or table variables remaining in cache that need to be released.
    DBCC FREESYSTEMCACHE ('ALL');  
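
For reference, here is the whole flow collapsed into a single script; treat it as a sketch to adapt rather than run blindly, since it flushes every cache on the instance. The 1000 MB target is just the example value used above:
    USE tempdb;
    GO
    CHECKPOINT;
    GO
    -- free allocated space held by the caches, then retry the shrink
    DBCC DROPCLEANBUFFERS WITH NO_INFOMSGS;
    DBCC FREEPROCCACHE WITH NO_INFOMSGS;
    DBCC FREESESSIONCACHE WITH NO_INFOMSGS;
    DBCC FREESYSTEMCACHE ('ALL');
    GO
    DBCC SHRINKFILE (templog, 1000);  -- target size in MB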

In my early days as a database administrator, I would have loved to have had this diagram. Having some quick steps during stressful situations, such as tempdb's log file filling up on me, would have been a huge help. Hopefully someone will find this handy and will be able to use it to take away a little of their stress.

HostForLIFE SQL Server 2019 Hosting
HostForLIFE is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.

 



SQL Server Hosting - HostForLIFE.eu :: How To Get The Entire Database Size Using C# In SQL Server?

December 18, 2020 06:53 by author Peter

This article shows how we can determine the size of an entire database using C#, and the size of each and every table in the database, using a single SQL command. To explain this, it is divided into three sections:

  • Getting Size of one Table in the Database using single SQL Command
  • Getting Size of each Table in the Database using single SQL Command
  • Getting Size of the entire Database using C#

Getting Size of one Table in the Database using single SQL Command
To get a table's size in SQL Server, we use the system stored procedure sp_spaceused. If we pass a table name as an argument, it returns the disk space used by the table along with some other information: the number of rows in the table, the total amount of space reserved for the table, the total amount of space reserved for the table but not yet used, and the total amount of space used by the table's indexes.

Example
For the address table (TADDRS) in our database, if we run,
sp_spaceused 'TADDRS'

It will give a result with the following shape.
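
The result screenshot is not reproduced here, but sp_spaceused always returns the same six columns; the row values below are illustrative, not taken from the original database:

    name      rows    reserved    data      index_size    unused
    TADDRS    1500    392 KB      320 KB    48 KB         24 KB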

Getting Size of each Table in the Database using single SQL Command

We have seen how to determine the size of one table. Now suppose we want to determine the size of each table in the entire database. We could find the size of any table using the command above just by changing the table name in the parameter, but wouldn't it be much better if a single SQL command gave us the size of every table?
 
Fortunately, SQL Server provides a way to do this: the stored procedure "sp_MSforeachtable" can do it easily for us!
 
The sp_MSforeachtable stored procedure is one of many undocumented stored procedures tucked away in the depths of SQL Server; it is not listed in MSDN Books Online, but it can be used to run a query against each table in the database. In short, you can use it as,
EXEC sp_MSforeachtable @command1="command to run"
 
In the "command to run" put a "?" , where you want the table name to be inserted. For example, to run the sp_spaceused stored procedure for each table in the database, we'd use,

It will give the size of each table (including other details) like,

Getting the Size of the entire Database
Now we want to get the total size used by a database, i.e., the sum of the space used by each table in it. We saw in the previous section how to get the size of each table. Here is sample C# code that calculates the size of the entire database.
 
Here, we execute the same command discussed in the section above and query the database using plain ADO.NET. We get the result in a DataSet and then iterate over each result table to read its size. The table size is stored in the "data" column of each result table (see the sample output shape above). We simply add up the "data" column value of each table to get the size of the entire database. Sample code is also attached with this article.
    class MemorySizeCalculator  
    {  
        public void GetDbSize()  
        {  
            int sum = 0;  
 
            // Database connection string  
            string sConnectionString = "Server = .; Integrated Security = true; database = HKS";  
 
            // SQL command [same command discussed in section B of this article]  
            string sSqlquery = "EXEC sp_MSforeachtable @command1=\"EXEC sp_spaceused '?'\" ";  
 
            DataSet oDataSet = new DataSet();  
 
            // Executing the SQL command using ADO.NET; the adapter opens the
            // connection and executes the command itself, so no separate
            // ExecuteNonQuery call is needed  
            using (SqlConnection oConn = new SqlConnection(sConnectionString))  
            using (SqlCommand oCmdGetData = new SqlCommand(sSqlquery, oConn))  
            using (SqlDataAdapter executeAdapter = new SqlDataAdapter(oCmdGetData))  
            {  
                executeAdapter.Fill(oDataSet);  
            }  
 
            // Iterating each result table; sp_spaceused reports sizes like
            // "320 KB", so strip the unit before converting  
            for (int i = 0; i < oDataSet.Tables.Count; i++)  
            {  
                // We only want the "data" column value of each table  
                sum += Convert.ToInt32(  
                    oDataSet.Tables[i].Rows[0]["data"].ToString().Replace("KB", "").Trim());  
            }  
            Console.WriteLine("Total size of the database is : " + sum + " KB");  
        }  
    } 
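
If you only need the total and not the C# plumbing, a rough T-SQL-only sketch (my addition, not part of the original sample) sums the used pages of every user table; pages are 8 KB, so multiplying by 8 yields KB:

    SELECT SUM(ps.used_page_count) * 8 AS used_kb
    FROM sys.dm_db_partition_stats AS ps
    INNER JOIN sys.tables AS t
        ON t.object_id = ps.object_id;  -- restrict the count to user tables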




SQL Server Hosting - HostForLIFE.eu :: How to Sort Numbers in SQL Server Without A Sorting Function

November 25, 2020 08:18 by author Peter

Today, I'm gonna show you how to sort numbers in SQL Server. The task is not difficult, but there is no easy shortcut either: front-end languages have many built-in functions for sorting values, but SQL Server has no predefined function for sorting a delimited string of numbers.

For example, I will sort the numbers 12,5,8,64,548,987,6542,4,285,11,26. SQL Server has no array list or array, so how can we hold the values while sorting the numbers? SQL Server has temporary tables, and a temporary table is automatically dropped when the session that created it ends.

First of all, create a temporary table. Suppose a problem occurs in SQL Server or during program execution and the temporary table is not dropped properly. When we then try to create the table a second time, an error occurs, as in the following:

There is already an object named '#temp' in the database.

So this type of problem is avoided by checking first if the table exists like this:

    IF OBJECT_ID('tempdb..#temp') IS NOT NULL --check whether #temp already exists in tempdb 
    begin 
    drop table #temp 
    end 

If the table already exists in the database then drop the table #temp.

My values are 12,5,8,64,548,987,6542,4,285,11,26. They need to be split up before the sort. How can we split the numbers? Of course, at the comma (,). If I split the string at the commas, then I get the numbers like this: 12 5 8 64 548 and so on. One question then arises: how do we split out each value? Don't worry, I have done that.

    select left('12,45,18,95', CHARINDEX(',','12,45,18,95') - 1)

If I run this query, it returns the value 12.

After that, everything is fine: we read the values back from the #temp table.

    select ROW_NUMBER() over (order by value) 'srNo', value from #temp order by value

The following is a complete Stored Procedure to sort the numbers. 

    ALTER proc [dbo].[Porc_sortnumber] 
    as 
    begin 
    DECLARE @value VARCHAR(MAX) = '1,2,5,6,12,88,47,95,56,20' 
    DECLARE @length INT = 1 
    IF OBJECT_ID('tempdb..#temp') IS NOT NULL --check whether #temp already exists in tempdb 
    begin 
    drop table #temp 
    end 
    create table #temp (id int identity(1,1), value int)  
    while (@length != 0) 
    begin 
    -- take the value before the first comma, then strip it off the string 
    insert into #temp(value) values (left(@value, (CHARINDEX(',', @value) - 1))) 
    set @value = right(@value, len(@value) - ((CHARINDEX(',', @value)))) 
    set @length = CHARINDEX(',', @value) 
    end 
    insert into #temp(value) values (@value) -- the last number has no trailing comma 
    select ROW_NUMBER() over (order by value) 'srNo', value from #temp order by value 
    end 
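
Worth noting: on SQL Server 2016 and later, the manual split loop can be replaced with the built-in STRING_SPLIT function. A sketch of the same sort using it:

    SELECT ROW_NUMBER() OVER (ORDER BY CAST(value AS int)) 'srNo', 
           CAST(value AS int) AS value 
    FROM STRING_SPLIT('12,5,8,64,548,987,6542,4,285,11,26', ',') 
    ORDER BY CAST(value AS int); 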

Output

Running the procedure returns the numbers in ascending order, numbered 1 through 10 in the srNo column.

I hope this article will be helpful!




SQL Server Hosting - HostForLIFE.eu :: Cascading Deletes in LINQ to SQL

November 11, 2020 07:34 by author Peter

This article will discuss alternative methods for performing cascading deletes using LINQ to SQL. A cascading delete refers to the action of removing records, associated by a foreign key relationship with a record targeted for deletion, along with that record. LINQ to SQL does not specifically handle cascading deletes; it is up to the developer to determine whether that action is desired and, if so, how to go about accomplishing it.
 
Problem
The problem with performing a cascading delete is not new to LINQ to SQL, and one has essentially the same alternatives for performing such a delete. The issue is determining how to handle the deletion or retention of records associated with a record targeted for deletion, where that record maintains a foreign key relationship with records contained in other tables in the database, and more specifically where the foreign key fields are not nullable.
 
As an example, consider the Customers table in the Northwind database. The Customers table has a foreign key relationship with the Orders table (which in turn maintains a foreign key relationship with the Order Details table). In order to delete a customer who has associated orders, one needs to dispose of or otherwise handle the associated records in both the Orders and Order Details tables. In LINQ to SQL jargon, the associated tables are referred to as entity sets.
 
LINQ to SQL will not violate the foreign key relationships and if an application attempts to delete a record with such relationships in place, the executing code will throw an exception.
 
Using the Northwind example, if one were to attempt to delete a customer with associated orders, an exception would occur. That is not really a problem; that is how it should be, otherwise why have foreign key relationships at all? The issue is really one of determining whether you would really want to delete records with associated entity sets, and if you do, how you want to handle it: do you want to keep the associated records or delete them right along with the targeted record?

There are several possible alternatives at your disposal.  You can handle the cascading deletes using LINQ to SQL from within your code, or you can handle the foreign key relationships from within SQL Server.
 
If you were to execute this code against the Northwind database, it would create a customer with an associated order and order details.
    try  
    {  
              Customer c = new Customer();  
              c.CustomerID = "AAAAA";  
              c.Address = "554 Westwind Avenue";  
              c.City = "Wichita";  
              c.CompanyName = "HostForLIFE.eu";  
              c.ContactName = "Peter";  
              c.ContactTitle = "Boss";  
              c.Country = "USA";  
              c.Fax = "888-335-5933";  
              c.Phone = "888-225-4934";  
              c.PostalCode = "88888";  
              c.Region = "EA";  
       
              Order_Detail od = new Order_Detail();  
              od.Discount = .25f;  
              od.ProductID = 1;  
              od.Quantity = 25;  
              od.UnitPrice = 25.00M;  
       
              Order o = new Order();  
              o.Order_Details.Add(od);  
              o.Freight = 25.50M;  
              o.EmployeeID = 1;  
              o.CustomerID = "AAAAA";  
       
              c.Orders.Add(o);  
       
              using (NWindDataContext dc = new NWindDataContext())  
              {  
                       var table = dc.GetTable<Customer>();  
                       table.InsertOnSubmit(c);  
                       dc.SubmitChanges();  
              }  
    }  
    catch (Exception ex)  
    {  
              MessageBox.Show(ex.Message);  
    }


But if you then tried to delete the customer without handling the entity sets, the SubmitChanges call in code like this would throw an exception reporting the foreign key conflict:
    using (NWindDataContext dc = new NWindDataContext())  
    {  
       
              var q =  
                       (from c in dc.GetTable<Customer>()  
                       where c.CustomerID == "AAAAA"  
                       select c).Single<Customer>();  
       
                       dc.GetTable<Customer>().DeleteOnSubmit(q);  
                       dc.SubmitChanges();  
    }


Handling the Delete with LINQ to SQL

You can handle the cascading deletes by manually deleting all of the related entities in the associated entity sets; here is a simple approach to doing that:
    try  
    {  
              using (NWindDataContext dc = new NWindDataContext())  
              {  
       
                       var q =  
                       (from c in dc.GetTable<Customer>()  
                       where c.CustomerID == "AAAAA"  
                       select c).Single<Customer>();  
       
                       foreach (Order ord in q.Orders)  
                       {  
                                 dc.GetTable<Order>().DeleteOnSubmit(ord);  
       
                                 foreach (Order_Detail od in ord.Order_Details)  
                                 {  
                                           dc.GetTable<Order_Detail>().DeleteOnSubmit(od);  
                                 }  
                        }  
                        dc.GetTable<Customer>().DeleteOnSubmit(q);  
                        dc.SubmitChanges();  
              }  
              UpdateDataGrid();  
    }  
    catch (Exception ex)  
    {  
              MessageBox.Show(ex.Message);  
    }


In looking at this example, to delete the customer along with the related orders and order details, the code first selects the matching customer by the CustomerID field (the primary key). Once a match is found, the code loops through the orders related to the customer and marks them for deletion using the DeleteOnSubmit call.
 
Further, since another relationship exists between the order and order details, the code loops through all of the order details associated with each order and marks them for deletion as well. Lastly, the customer itself is marked for deletion, and then SubmitChanges is called on the data context. The order in which the entities are marked for deletion does not matter; LINQ to SQL sorts that out during the execution of the SubmitChanges call based upon the configuration of the foreign keys.

Handling the Cascading Delete from within SQL Server
It is possible to manage the cascading deletes entirely from within SQL Server.  To do this, one need only set the delete rule for the foreign key relationship to cascade.

If you have a database diagram built, the easiest way to set the delete rule is to open the diagram, click the foreign key relationship, expand the INSERT And UPDATE Specification property to expose the Delete Rule property, and set the Delete Rule property to Cascade.
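
If you prefer T-SQL over the diagram designer, the same rule can be set by recreating the constraint. A sketch, assuming the default Northwind constraint name FK_Orders_Customers (adjust if yours differs):

    ALTER TABLE dbo.Orders DROP CONSTRAINT FK_Orders_Customers;  
    ALTER TABLE dbo.Orders ADD CONSTRAINT FK_Orders_Customers  
        FOREIGN KEY (CustomerID) REFERENCES dbo.Customers (CustomerID)  
        ON DELETE CASCADE;  -- deleting a customer now deletes their orders  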
 
To repeat the example of deleting a customer with related orders, if we were to set all of the constraint delete rules to cascade we could delete a customer with this bit of code:
    try  
    {  
              using (NWindDataContext dc = new NWindDataContext())  
              {  
                       var q =  
                       (from c in dc.GetTable<Customer>()  
                       where c.CustomerID == "AAAAA"  
                       select c).Single<Customer>();  
       
                       dc.GetTable<Customer>().DeleteOnSubmit(q);  
                       dc.SubmitChanges();  
              }  
              UpdateDataGrid();  
    }  
    catch (Exception ex)  
    {  
              MessageBox.Show(ex.Message);  
    }


As you can see in this example code, there was no need to bother with marking each member of an entity set for deletion, as in this case SQL Server was instructed how to deal with a deletion of the customer or order records. As a result, deleting the customer also deletes the related records contained in the Orders and Order Details tables.
 
Handling the Cascading Delete with Set Null or Set Default
It is also possible to set the foreign key field in the entity sets to nullable and then set the delete rule for that field to "Set Null".  One could also set a default value for the field and set the delete rule to "Set Default".
 
Either approach could be useful if there were a need to delete (in this example) a customer record but retain the order and order detail records, and either could be handled in a manner similar to the previous alternative. Setting the foreign key value to nullable is probably not advisable, but it is a workable alternative.
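
As a sketch, the Set Null variant looks like this in T-SQL; note that Orders.CustomerID must be made nullable first, which is not the case in a stock Northwind database:

    ALTER TABLE dbo.Orders ADD CONSTRAINT FK_Orders_Customers  
        FOREIGN KEY (CustomerID) REFERENCES dbo.Customers (CustomerID)  
        ON DELETE SET NULL;  -- orders survive, but lose their customer reference  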
 
Handling the Cascading Delete With a Stored Procedure
One may create or add a stored procedure that accomplishes the cascading delete and invoke that stored procedure using LINQ to SQL. Stored procedures added to the designer may be invoked directly from the data context. For example, if we had a stored procedure called DeleteCustomer that took the customer ID as an argument and handled the cascading deletes, we could do something like this:
    using (NWindDataContext dc = new NWindDataContext())  
    {  
         dc.DeleteCustomer("AAAAA");  
    }
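
The article does not show the body of DeleteCustomer, so the procedure below is only one plausible implementation, deleting the children before the parent; in production you would likely wrap the three deletes in a transaction:

    CREATE PROCEDURE dbo.DeleteCustomer  
        @CustomerID nchar(5)  
    AS  
    BEGIN  
        SET NOCOUNT ON;  
        -- delete grandchildren, then children, then the parent row  
        DELETE od  
        FROM dbo.[Order Details] AS od  
        INNER JOIN dbo.Orders AS o ON o.OrderID = od.OrderID  
        WHERE o.CustomerID = @CustomerID;  
  
        DELETE FROM dbo.Orders WHERE CustomerID = @CustomerID;  
        DELETE FROM dbo.Customers WHERE CustomerID = @CustomerID;  
    END  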


Cascading deletes are not new to LINQ to SQL; the issue is the same as it has always been. In this article, I have described a few approaches to dealing with cascading deletes from within code and from the SQL Server side, but as is true with many things in .NET, there are several other ways to accomplish this from within LINQ to SQL.




SQL Server Hosting - HostForLIFE.eu :: Multiple Active Result Sets (MARS)

November 5, 2020 08:10 by author Peter

MARS (Multiple Active Result Sets) is a feature introduced in SQL Server 2005. With MARS, an application can execute multiple queries and stored procedures over a single connection to the database, with each result set treated as an active result. These results are forward-only, read-only data. Prior to MARS, multiple database connections were required to retrieve data for separate queries or stored procedures; now all the data can be retrieved over a single connection. This helps your application performance and code management greatly.

In the earlier model, each SqlConnection object and the DataReaders associated with each query must be disposed of correctly. Otherwise, you run into errors like "There is already an open DataReader associated with this Connection".

With the introduction of MARS in SQL Server 2005, we can say goodbye to all these problems and performance bottlenecks.

Let's start the Implementation

Settings

You must change your connection string as follows :

String connectionString = "Data Source=TestServer;" + "Initial Catalog=Pubs;Integrated Security=SSPI;" + "MultipleActiveResultSets=True";

MARS is disabled by default on the Connection object. You have to enable it with the addition of MultipleActiveResultSets=true in your connection string.

Program:

This C# code snippet demonstrates the use of MARS.

using System;  
using System.Data;  
using System.Data.SqlClient;  

public class MARSdemo {  
    public static void Main() {  
        string MarsConnectionStr = "Data Source=TestServer;" + "Initial Catalog=Pubs;Integrated Security=SSPI;" + "MultipleActiveResultSets=True";  
        string Mars_Command_Str1 = "select * from Salary;";  
        string Mars_Command_Str2 = "select * from Employees;";  

        // One MARS-enabled connection shared by both commands  
        SqlConnection con = new SqlConnection(MarsConnectionStr);  
        SqlCommand cmd1 = new SqlCommand(Mars_Command_Str1, con);  
        SqlCommand cmd2 = new SqlCommand(Mars_Command_Str2, con);  
        SqlDataReader rdr1 = null;  
        SqlDataReader rdr2 = null;  
        try {  
            con.Open();  
            // Two readers open at the same time on one connection  
            rdr1 = cmd1.ExecuteReader();  
            rdr2 = cmd2.ExecuteReader();  
        } catch (SqlException ex) {  
            Console.WriteLine(ex.Message);  
        } finally {  
            if (rdr1 != null) rdr1.Close();  
            if (rdr2 != null) rdr2.Close();  
            if (con.State == ConnectionState.Open) {  
                con.Close();  
            }  
        }  
    }  
}

If you try to execute this code against SQL Server 2000 or lower versions, you will get an InvalidOperationException with the following error message:

"There is already an open DataReader associated with this Command which must be closed first."

MARS works on the concepts of "multiplexing" and "interleaving". SQL's SELECT statement is a multiplex-enabled statement: it can pause its execution in between and allow a non-multiplexed statement, such as an INSERT, to execute.

For example, suppose you are retrieving 1 million records using a SELECT statement and, in between, an INSERT query arrives via a MARS-enabled application under the same connection object. The SELECT statement pauses until the INSERT operation completes and afterward resumes execution.

Note: Managed code (stored procedures, functions, triggers) can't be multiplexed.




SQL Server Hosting - HostForLIFE.eu :: INTERSECTION in SQL Server

February 26, 2020 11:06 by author Peter

Sometimes, we have a scenario where we want the common set of rows from 2 different result sets. For example, we have 2 queries that both return employee records. If we want to find which employees are present in both result sets, we can use INTERSECT to get the result. A Venn diagram is the classic graphical representation of how INTERSECT works.

Picture two overlapping circles of letters: if the letters B and C sit in the overlap of the two circles, that common area is the intersection, and those are the rows INTERSECT returns.

Now let's see how it works in the database.

So we are going to create 2 tables, EmpTable and ManagerTable, and then insert records into them.
CREATE TABLE EmpTable(EmpName   VARCHAR(50),City    VARCHAR(50),Title   VARCHAR(50)) 
CREATE TABLE ManagerTable(EmpName   VARCHAR(50),City    VARCHAR(50),Title   VARCHAR(50)) 
 
INSERT INTO EmpTable 
SELECT EmpName='John',City='Stamford',Title='Operator' 
 
INSERT INTO EmpTable 
SELECT EmpName='Luis',City='Danbury',Title='Electrical Engineer' 
 
INSERT INTO EmpTable 
SELECT EmpName='Smith',City='Wilton',Title='Driver' 
 
 
INSERT INTO ManagerTable 
SELECT EmpName='Mike',City='Wilton',Title='Driver' 
 
INSERT INTO ManagerTable 
SELECT EmpName='Smith',City='Wilton',Title='Driver' 
 
INSERT INTO ManagerTable 
SELECT EmpName='Jonathan',City='Armonk',Title='Accountant' 
 
INSERT INTO ManagerTable 
SELECT EmpName='Warner',City='Stamford',Title='Customer Service' 
 
INSERT INTO ManagerTable 
SELECT EmpName='Luis',City='Danbury',Title='Electrical Engineer' 


Now run the query below to find the common employees in both tables.
SELECT * FROM EmpTable 
 
SELECT * FROM ManagerTable 
 
-- INTERSECTION 
SELECT * FROM EmpTable 
INTERSECT 
SELECT * FROM ManagerTable
 

Here is the output of the INTERSECT query.
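
The screenshot is not reproduced here; based on the rows inserted above, the INTERSECT returns exactly these two rows (in no guaranteed order):

    EmpName    City       Title 
    Luis       Danbury    Electrical Engineer 
    Smith      Wilton     Driver 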

As you can see, Luis and Smith appear as employees in both the "EmpTable" and "ManagerTable" tables, so joining the 2 queries with INTERSECT returns exactly those 2 names.

One thing to remember here is that both tables/result sets should have the same columns, with the same data types for those columns; otherwise, you may get a "Conversion failed when converting...." error if a data type does not match.

 



SQL Server Hosting - HostForLIFE.eu :: Rethinking Sorting In SQL Server Stored Procedures

February 5, 2020 11:03 by author Peter

We know that sorting can be one of the most expensive operations in an execution plan. However, we continue to write ORDER BYs repeatedly. Yes, I 100% agree that there are result sets that need to be sorted, and that for good reason this is sometimes done in the procedure; my concern is the multiple sorts, erroneous sorts, and sorts that can be done elsewhere. These are the ones that waste resources and can stifle performance.

Many of us writing procedures tend to write in code blocks. We write the SELECT, JOINs, FROMs and WHEREs, then immediately follow it up with an ORDER BY as a way to check result sets before moving on to the next block of code. I admit I do this almost every time. But what most developers do not do is remove the ORDER BYs that are not required. This is very costly and can lead to suboptimal performance, not only for your procedure but also for tempdb, as this is where all sorting takes place.
Do you sort in procedures that feed data consumers like reports, ETL or an application? If you do, I ask: why are you sorting in the procedure and not in the consumer? Many report end users will re-sort the data in Excel, or the report itself offers parameters for custom sorts, or the data doesn't need a sort at all. Why waste resources on the SQL Server side just for the sort to be nullified? By removing unneeded sorts, or performing the sort in the application tier, you can see big performance gains. I would rather have a report, ETL process or application take the performance hit than a procedure.
 
Let’s look at one of the procedures available in AdventureWorks2016CTP3, called uspGetOrderTrackingBySalesOrderID. We will run it using the example execution in the code, then remove the ORDER BY, recompile and rerun. We will clearly see the difference.
    USE [AdventureWorks2016CTP3] 
    GO 
    /****** Object:  StoredProcedure [dbo].[uspGetOrderTrackingBySalesOrderID]    Script Date: 1/28/2020 11:31:16 AM ******/ 
    SET ANSI_NULLS ON 
    GO 
    SET QUOTED_IDENTIFIER ON 
    GO 
     
    ALTER PROCEDURE [dbo].[uspGetOrderTrackingBySalesOrderID] 
       @SalesOrderID [int] NULL 
    AS 
    BEGIN 
    /* Example: 
          exec dbo.uspGetOrderTrackingBySalesOrderID 53498 
    */ 
       SET NOCOUNT ON; 
       SET STATISTICS IO, TIME ON 
     
       SELECT 
          ot.SalesOrderID, 
          ot.CarrierTrackingNumber, 
          ot.OrderTrackingID, 
          ot.TrackingEventID, 
          te.EventName, 
          ot.EventDetails, 
          ot.EventDateTime 
       FROM 
          Sales.OrderTracking ot, 
          Sales.TrackingEvent te 
       WHERE 
          ot.SalesOrderID = @SalesOrderID AND 
          ot.TrackingEventID = te.TrackingEventID 
       --ORDER BY 
       --   ot.SalesOrderID, 
       --   ot.TrackingEventID; 
    END; 

Plan with ORDER BY

Plan without ORDER BY, and the Query Store graph showing the difference in duration between the two. You can clearly see the performance improvement, and this was just one sort in a very simple procedure. Take a moment and consider the sorting that happens in your code. I'd ask that when writing stored procedures, doing code reviews or performance tuning, you take a second to ask why the sorts are being done in the data tier and whether they can be performed elsewhere. You can see performance gains not only in your code but in tempdb as well when sorting is reined in.



SQL Server Hosting - HostForLIFE.eu :: SQL Server In Memory Table Indexes

January 29, 2020 10:49 by author Peter
Now that I have written about In-Memory tables and migrating to In-Memory tables, let's look at indexes: how they are created and how they work within those tables. As you can imagine, the indexes, called memory-optimized indexes, are different for these types of tables, so let's see just how different they are from regular indexes. Before we dive into this subject, it is VERY important to note the biggest differences.

First, ALL memory-optimized indexes MUST be created when the table is created or migrated; you cannot add indexes to an existing table without dropping and recreating the table. Secondly, currently you can only have 8 indexes per table, including your primary key, and remember that every durable In-Memory table must have a primary key. This means you can only really add 7 additional indexes, so be sure to understand your workloads and plan indexing accordingly.

Third, memory-optimized indexes exist only in memory; they are not persisted to disk and are not logged in the transaction log. This means they are recreated upon database startup, and they do incur a performance hit as they are rebuilt.

Next, there is no such thing as a key lookup against an In-Memory table, since every index is by nature a covering index. The index uses a pointer to the actual row to get the needed fields, instead of using a primary key the way indexes on disk-based tables do. Therefore, these are much more efficient at returning the proper data.

Lastly, there is also no such thing as fragmentation for these indexes, since they are not read from disk. Unlike on-disk indexes, they do not have a fixed page length. On-disk indexes use physical page structures within the B-tree, and the Fill Factor determines how much of each page is filled. Since pages are not used here, fragmentation does not exist.

Ok now that we made it through all of that, let’s look at the types of indexes you can create and gain an understanding of what they are and how they are created.

Nonclustered HASH Index
This index is used to access the In-Memory version of the table, called a hash. Hash indexes are great for predicates that are singleton lookups, not ranges of values; they are optimized for seeks on equality values, for example WHERE Name = 'Joe'. Something to keep in mind when determining what to include in your indexes: if your query has two or more fields in the predicate and your index consists of only one of those fields, you will get a scan; it will not seek on the one field that was included.

Understanding your workloads and indexing on the appropriate fields (or combinations thereof) is important since you are limited to only 7 additional indexes. Given that In-Memory OLTP is mainly focused on heavy insert/update workloads, and less so on reads, this should be less of a concern.

These types of indexes are highly optimized and do not work very well if there are a lot of duplicate values in the index key; the more unique your values, the better the index performance you will get. It is always important to know your data, and when it comes to these indexes, knowing your memory consumption plays a part. The hash index type is a fixed length and consumes a fixed amount of memory determined upon creation; the amount of memory is determined by the Bucket Count value. It is extremely important to make this value as accurate as possible. Right-sizing this number can make or break your performance; too low a number, according to Microsoft, "can significantly impact workload performance and recovery time of a database."

Look for my upcoming blog on determining bucket counts for more information. Meanwhile, you can learn more about hash indexes in the Microsoft documentation.
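
Until that post arrives, here is a sketch for sanity-checking existing hash indexes: the sys.dm_db_xtp_hash_index_stats DMV shows how full the buckets are. Many empty buckets suggest an oversized count; long average chains suggest an undersized one:

SELECT OBJECT_NAME(s.object_id) AS table_name,  
       i.name AS index_name,  
       s.total_bucket_count,  
       s.empty_bucket_count,  
       s.avg_chain_length  
FROM sys.dm_db_xtp_hash_index_stats AS s  
INNER JOIN sys.indexes AS i  
    ON i.object_id = s.object_id  
   AND i.index_id = s.index_id;  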

Using T-SQL (both methods give the same result)
Example One (Note the index comes after the table fields)

CREATE TABLE [Sales]   
([ProductKey] INT NOT NULL,  
[OrderDateKey] [int] NOT NULL,  
INDEX IDX_ProductKey HASH ([ProductKey]) WITH (BUCKET_COUNT = 100))  
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY)  

Example Two (Note the index comes after the field)
CREATE TABLE [Sales]  
([ProductKey] INT NOT NULL INDEX IDX_ProductKey HASH WITH (BUCKET_COUNT = 100),  
[OrderDateKey] [int] NOT NULL)  
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY)  

Nonclustered Index
These are also used to access the In-Memory version of the table; however, they are optimized for range predicates such as less than or equal to, inequality predicates, and sort orders. Examples are WHERE DATE BETWEEN '20190101' AND '20191231' and WHERE DATE <> '20191231'. These indexes do not require a bucket count or a fixed memory amount; the memory consumed is determined by the actual row count and the size of the indexed key columns, which makes them simpler to create.

Moreover, in contrast to hash indexes, which need all the fields of your predicate to be part of the index to get a seek, these do not: if your predicate has more than one field and your index has one of those fields as its leading index key value, you can still attain a seek.

Using T-SQL (both methods give the same result)
Example One (Note the index comes after the table fields)

CREATE TABLE [Sales]   
([ProductKey] INT NOT NULL,  
[OrderDateKey] [int] NOT NULL,  
INDEX IDX_ProductKey ([ProductKey]))  
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY)

Example Two (Note the index comes after the field)

CREATE TABLE [Sales]  
([ProductKey] INT NOT NULL INDEX IDX_ProductKey,  
[OrderDateKey] [int] NOT NULL)  
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY) 

Determining which index type to use can be tricky, but Microsoft has provided a great comparison chart in its documentation. As you can see, there are some key differences in how In-Memory table indexes (memory-optimized indexes) work compared to the normal disk-based indexes we are used to. As with any other table design, it is important to consider your index needs before you embark on creating or migrating to memory-optimized tables. You'll be happy you did.



SQL Server Hosting - HostForLIFE.eu :: SQL Comments Statement

January 20, 2020 11:15 by author Peter

SQL comment statements can make your application easier to read and maintain. For example, we can include a comment in a statement that describes the purpose of the statement within your application. With the exception of hints, comments within a SQL statement do not affect the statement execution. Please refer to the documentation on hints for using that particular form of comment.
 
A comment can appear between any keywords, parameters, or punctuation marks in a statement. You can include a comment in a statement in two ways:

  • Begin the comment with a slash and an asterisk (/*) and proceed with the text of the comment. This text can span multiple lines. End the comment with an asterisk and a slash (*/). The opening and terminating characters need not be separated from the text by a space or a line break.
  • Begin the comment with -- (two hyphens) and proceed with the text of the comment. This text cannot extend to a new line; the comment ends at the line break.

Some of the tools used to enter SQL have additional restrictions. For example, if you are using SQL*Plus, by default you cannot have a blank line inside a multiline comment.
 
For more information, please refer to the documentation for the tool you use as an interface to the database. A SQL statement can contain multiple comments of both styles. The text of a comment can contain any printable characters in your database character set.
 
A comment statement contains user-provided text. Comments can be inserted on a separate line, nested at the end of a SQL command line, or within a SQL statement. The server does not evaluate the comment.
SQL uses two hyphens (--) for single-line or nested comments. A comment inserted with -- is terminated by a newline, specified with a carriage return character (U+000D), a line feed character (U+000A), or a combination of the two.
 
There is no maximum length for comments.
 
Syntax
    -- text_of_comment    

Examples
The following example uses the -- commenting characters.
 
Syntax
    -- Choose the sample database.     
    USE sample;     
    GO     
    -- Choose all columns and all rows from the Address table.     
    SELECT *     
    FROM OrderDetails     
    ORDER BY OrderId  ASC; -- We do not have to specify ASC because      
    -- that is the default.     


SQL Single Line Comments
Single line comments start with --. Any text between -- and the end of the line will be ignored (will not be executed). The following example uses a single-line comment as an explanation.
 
Syntax 
   --Select all:   
    SELECT * FROM OrderDetails ; 


The following example uses a single-line comment to ignore the end of a line.
 
Syntax
    SELECT * FROM OrderDetails -- WHERE OrderName='Coffee';  

The following example uses a single-line comment to ignore a statement.
 
Syntax
    --SELECT * FROM OrderDetails;   
    SELECT * FROM OrderDetails ;  


SQL Multi-line Comments
SQL Multi-line comments start with /* and end with */. Any text between /* and */ will be ignored. The following example uses a multi-line comment as an explanation.
 
Syntax 
    /*Select all the columns   
    of all the records   
    in the OrderDetails table:*/   
    SELECT * FROM OrderDetails;  


 The following example uses a multi-line comment to ignore many statements.
 
Syntax
    /*SELECT * FROM Customers;   
    SELECT * FROM Products;   
    SELECT * FROM Orders;   
    SELECT * FROM Categories;*/   
    SELECT * FROM OrderDetails;  


To ignore just a part of a statement, also use the /* */ comment. The following example uses a comment to ignore part of a line.
 
Syntax
    SELECT CustomerName, /*City,*/ Country FROM Customers;  

The following example uses a comment to ignore part of a statement
 
Syntax
    SELECT * FROM OrderDetails WHERE (OrderName LIKE 'L%'   
    OR OrderName LIKE 'R%' /*OR OrderName  LIKE 'S%'   
    OR OrderName LIKE 'T%'*/ OR OrderName LIKE 'W%')   
    AND OrderName ='Mango'   
    ORDER BY OrderrAddress;  


Summary
In this article, you learned how to use SQL comment statements with various options.

 



SQL Server Hosting - HostForLIFE.eu :: Transaction Explained in SQL Server

December 18, 2019 11:56 by author Peter

In this blog, I will explain SQL transactions. A transaction is a logical unit of work. Each transaction begins with a specific task and ends when all the tasks in the group successfully complete; if any task fails, the transaction fails. All steps must be committed (transaction success) or rolled back (transaction failure). A transaction begins with the execution of its first SQL statement and must end with either a commit or a rollback: if all of its operations succeed, the transaction succeeds and is committed to the database; if any operation fails, the transaction fails and must be undone, i.e., rolled back. The following are the properties of a transaction (the ACID properties).
 
1. Atomicity
2. Consistency
3. Isolation
4. Durability

Transaction Process

  • BEGIN TRANSACTION - Starts the transaction
  • ROLLBACK - If an error occurred, reverts the existing transaction changes
  • COMMIT - No error occurred, then it saves all transaction states
  • SAVE TRANSACTION - Sets a named savepoint so that part of a transaction can be rolled back

Example
    BEGIN TRANSACTION T1 
    UPDATE TB_NAME SET FIRST_NAME = 'R' WHERE ID = 1 
    COMMIT; 
     
    BEGIN TRANSACTION T1 
    UPDATE TB_NAME SET FIRST_NAME = 'R' WHERE ID = 1 
    ROLLBACK; 
     
    BEGIN TRANSACTION 
    SAVE TRANSACTION T1 
    UPDATE TB_NAME SET FIRST_NAME = 'R' WHERE ID = 1 
    ROLLBACK TRANSACTION T1 -- undoes only the work done after the savepoint 
    COMMIT; 
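
In practice, the commit-or-rollback decision is usually automated with TRY/CATCH. A minimal sketch using the same illustrative table:

    BEGIN TRY 
        BEGIN TRANSACTION; 
        UPDATE TB_NAME SET FIRST_NAME = 'R' WHERE ID = 1; 
        COMMIT TRANSACTION; 
    END TRY 
    BEGIN CATCH 
        IF @@TRANCOUNT > 0 
            ROLLBACK TRANSACTION; -- undo everything on any error 
        THROW; -- re-raise so the caller sees the failure 
    END CATCH; 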


 


