European Windows 2012 Hosting BLOG

BLOG about Windows 2012 Hosting and SQL 2012 Hosting - Dedicated to European Windows Hosting Customer

European SQL Hosting :: Performance Between CLR vs T-SQL

March 6, 2019 08:17 by author Scott

I am pretty sure that all of us have read, or even participated in, quite a few heated discussions about Common Language Runtime (CLR) code in Microsoft SQL Server. Some people state that CLR code works faster than T-SQL; others disagree. As with other SQL Server technologies, there is no simple answer to that question. The two technologies are different in nature and should be used for different tasks. T-SQL is an interpreted language optimized for set-based logic and data access. CLR, on the other hand, produces compiled code that works best for imperative, procedural-style logic.

Even with imperative code, we need to decide whether to implement it in CLR or as client-side code, perhaps running on application servers. CLR code runs within the SQL Server process. While that eliminates network traffic and can provide the best performance due to its “closeness” to the data, it also adds load to SQL Server. It is usually easier and cheaper to scale out application servers than to upgrade the SQL Server box.

There are some cases when we must use CLR code, though. For example, think about queries that perform regular-expression evaluations as part of the WHERE clause. It would be inefficient to move such evaluations to client code, and there is no native regular-expression support in SQL Server, so CLR is the only choice we have. In other cases, when procedural-style logic can be moved to the application servers, we should consider that option, especially when the application servers reside close to SQL Server and network latency and throughput are not an issue.

Today we will compare the performance of CLR and T-SQL in a few different areas. I am not trying to answer the question “which technology is better” – as usual, it falls into the “it depends” category. What I want to do is look at how the technologies behave in similar tasks where they can be interchanged.

Before we begin, let’s create the table and populate it with some data.
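Below is a minimal sketch of what such a setup might look like. The dbo.Numbers table and its Num column match the references later in this article; the exact schema and the row count of roughly one million rows are assumptions for illustration only.

CREATE TABLE dbo.Numbers
(
    Num INT NOT NULL,
    CONSTRAINT PK_Numbers PRIMARY KEY CLUSTERED (Num)
);

-- Populate the table with sequential integers built from cross-joined CTEs.
;WITH N1(C) AS (SELECT 0 UNION ALL SELECT 0),                -- 2 rows
      N2(C) AS (SELECT 0 FROM N1 AS a CROSS JOIN N1 AS b),   -- 4 rows
      N3(C) AS (SELECT 0 FROM N2 AS a CROSS JOIN N2 AS b),   -- 16 rows
      N4(C) AS (SELECT 0 FROM N3 AS a CROSS JOIN N3 AS b),   -- 256 rows
      N5(C) AS (SELECT 0 FROM N4 AS a CROSS JOIN N4 AS b),   -- 65,536 rows
      N6(C) AS (SELECT 0 FROM N5 AS a CROSS JOIN N3 AS b),   -- 1,048,576 rows
      Nums(Num) AS (SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) FROM N6)
INSERT INTO dbo.Numbers(Num)
SELECT Num FROM Nums;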


As the first step, let’s compare the invocation cost of user-defined functions. We will use a simple function that accepts an integer value as a parameter and returns 1 if that value is even. The CLR C# implementation is shown below.

As we can see, attributes are specified for each function. Those attributes describe different aspects of the UDF’s behavior and can help the Query Optimizer generate more efficient execution plans. I would recommend specifying them explicitly rather than relying on default values.

One of the attributes – DataAccess – indicates whether the function performs any data access. When it does, SQL Server calls the function in a different context that allows access to the data. Setting up that context introduces additional overhead during the function call, which we will see in a few minutes.

The T-SQL implementation of those functions would look like this:
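A minimal sketch of the two T-SQL variants is shown below; the function names are hypothetical, since the original listing is not reproduced here.

-- Scalar UDF: returns 1 when the value is even, 0 otherwise.
CREATE FUNCTION dbo.udfIsEven(@Value INT)
RETURNS INT
WITH SCHEMABINDING
AS
BEGIN
    RETURN CASE WHEN @Value % 2 = 0 THEN 1 ELSE 0 END;
END
GO

-- Inline table-valued function performing the same check.
CREATE FUNCTION dbo.tvfIsEven(@Value INT)
RETURNS TABLE
AS
RETURN
(
    SELECT CASE WHEN @Value % 2 = 0 THEN 1 ELSE 0 END AS IsEven
);
GO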


Let’s measure the average execution time of the statements shown below. Obviously, different hardware leads to different execution times, although the trends would be the same.
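The statements might look roughly like the following, using the hypothetical function names from the sketches above; the CLR UDFs would be invoked in the same way as the T-SQL scalar function.

-- T-SQL (or CLR) scalar UDF called for every row.
SELECT COUNT(*) FROM dbo.Numbers WHERE dbo.udfIsEven(Num) = 1;

-- Inline table-valued function, expanded into the query by the optimizer.
SELECT COUNT(*)
FROM dbo.Numbers AS n
    CROSS APPLY dbo.tvfIsEven(n.Num) AS f
WHERE f.IsEven = 1;

-- Inline calculation without any function call.
SELECT COUNT(*) FROM dbo.Numbers WHERE Num % 2 = 0;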

Each statement performs a clustered index scan of the dbo.Numbers table and checks whether the Num column is even for every row in the table. For the CLR and T-SQL scalar user-defined functions, that introduces an actual function call. The inline table-valued function, on the other hand, performs the calculation inline without function-call overhead.

As we can see, the CLR UDF without a data-access context performs about four times faster than the T-SQL scalar function. Even though establishing the data-access context introduces additional overhead and increases execution time, it is still faster than the T-SQL scalar UDF implementation.

The key point here, though, is that in this particular example the best performance is achieved by not using functions at all rather than by converting the T-SQL implementation to a CLR UDF. Even with the CLR UDF, the overhead of the function call is much higher than that of inline calculations.

Unfortunately, this is not always the case. While we should always consider code refactoring as an option, there are cases when a CLR implementation can outperform inline calculations even with all the overhead it introduces – mathematical calculations, string manipulation, XML parsing and serialization, to name just a few. Let’s test the performance of functions that calculate the distance between two points defined by latitude and longitude.
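The function bodies are not reproduced here; a T-SQL sketch of such a calculation, based on the haversine formula, might look like the one below. The function name, parameter names, and the result in meters are assumptions for illustration; the CLR version would implement the same math in C#.

CREATE FUNCTION dbo.CalcDistance
(
    @Lat1 FLOAT, @Lon1 FLOAT,
    @Lat2 FLOAT, @Lon2 FLOAT
)
RETURNS FLOAT
WITH SCHEMABINDING
AS
BEGIN
    -- Haversine formula; returns the distance in meters.
    DECLARE
        @EarthRadius FLOAT = 6371000,
        @DLat FLOAT = RADIANS(@Lat2 - @Lat1),
        @DLon FLOAT = RADIANS(@Lon2 - @Lon1);

    DECLARE @A FLOAT =
        SIN(@DLat / 2) * SIN(@DLat / 2) +
        COS(RADIANS(@Lat1)) * COS(RADIANS(@Lat2)) *
        SIN(@DLon / 2) * SIN(@DLon / 2);

    RETURN @EarthRadius * 2 * ATN2(SQRT(@A), SQRT(1 - @A));
END
GO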

 



We can see that the CLR UDF runs almost twice as fast as the inline table-valued function and more than five times faster than the T-SQL scalar UDF, even with all the calling overhead involved.

Now let’s look at data access performance. The first test compares the performance of separate DML statements issued from T-SQL and CLR stored procedures. In that test we create procedures that count the rows in the dbo.Numbers table for a specific Num interval provided as parameters. We can see the implementation below.
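A rough T-SQL sketch of such a procedure is shown below. The procedure and parameter names are hypothetical, and the one-SELECT-per-value loop is an assumption based on the 50,000 individual SELECT statements mentioned in the next paragraph; the CLR version would issue the same statements through the in-process context connection.

CREATE PROCEDURE dbo.CountRowsTSQL
(
    @MinNum INT,
    @MaxNum INT
)
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @Current INT = @MinNum, @Cnt INT, @Total INT = 0;

    WHILE @Current <= @MaxNum
    BEGIN
        -- Each iteration issues a separate SELECT statement.
        SELECT @Cnt = COUNT(*) FROM dbo.Numbers WHERE Num = @Current;
        SET @Total += @Cnt;
        SET @Current += 1;
    END;

    SELECT @Total AS TotalRows;
END
GO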

The table below shows the average execution time for the stored procedures with parameters that lead to 50,000 individual SELECT statements. As we can see, data access from CLR code is much less efficient and runs about five times slower than data access from T-SQL.

Now let’s compare the performance of row-by-row processing using a T-SQL cursor and the .NET SqlDataReader class.
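The test code is not reproduced here; a minimal T-SQL cursor sketch that walks dbo.Numbers row by row might look like this (the per-row work, a running sum, is an assumption for illustration):

DECLARE @Num INT, @Sum BIGINT = 0;

DECLARE NumCursor CURSOR FAST_FORWARD FOR
    SELECT Num FROM dbo.Numbers;

OPEN NumCursor;
FETCH NEXT FROM NumCursor INTO @Num;

WHILE @@FETCH_STATUS = 0
BEGIN
    SET @Sum += @Num;   -- per-row processing
    FETCH NEXT FROM NumCursor INTO @Num;
END;

CLOSE NumCursor;
DEALLOCATE NumCursor;

SELECT @Sum AS SumOfNumbers;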

As we can see, the SqlDataReader implementation is faster.

Finally, let’s look at the performance of CLR aggregates. We will use a standard implementation of an aggregate that concatenates values into a comma-separated string.


As with user-defined functions, it is extremely important to set the attributes that tell the Query Optimizer about the CLR aggregate’s behavior and implementation. This helps generate more efficient execution plans and prevents incorrect results due to optimization. It is also important to specify the MaxByteSize attribute, which defines the maximum size of the aggregate output. In our case, we set it to -1, which means the aggregate can hold up to 2GB of data.

Speaking of the T-SQL implementation, let’s first look at the approach that uses a SQL variable to hold intermediate results. That approach implements imperative row-by-row processing under the hood.
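A minimal sketch of the variable-based approach over dbo.Numbers is shown below; the variable name and the VARCHAR conversion are assumptions for illustration.

DECLARE @Result VARCHAR(MAX) = '';

SELECT @Result = @Result +
    CASE WHEN @Result = '' THEN '' ELSE ',' END +
    CONVERT(VARCHAR(10), Num)
FROM dbo.Numbers;

SELECT @Result AS ConcatenatedValues;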

As another option, let’s use the FOR XML PATH technique. It is worth mentioning that this technique can produce different results because it replaces XML special characters with character entities. For example, if our values contain the < character, it is replaced with the &lt; entity.
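A sketch of the FOR XML PATH approach over the same data might look like this; STUFF simply removes the leading comma.

SELECT STUFF(
    (
        SELECT ',' + CONVERT(VARCHAR(10), Num)
        FROM dbo.Numbers
        ORDER BY Num
        FOR XML PATH('')
    ), 1, 1, '') AS ConcatenatedValues;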

Our test code would look like this:


When we compare performance across different rowset sizes, we see the results below.

As we can see, the CLR aggregate has a slightly higher startup cost than the T-SQL variable approach, although that difference quickly disappears on larger rowsets. The performance of both the CLR aggregate and the FOR XML PATH method depends linearly on the number of rows to aggregate, while the performance of the SQL variable approach degrades much faster than linearly: SQL Server creates a new instance of the string every time it concatenates a new value, which becomes increasingly inefficient as the string grows large.

The key point I would like to make with that example is that we always need to look for opportunities to replace imperative code with declarative, set-based logic. While CLR usually outperforms procedural-style T-SQL code, set-based logic can outperform both of them.

While there are cases when the choice between the technologies is obvious, there are others when it is not clear. Think about a scalar UDF that needs to perform some data access: the lower invocation cost of a CLR function can be offset by its higher data-access cost. Similarly, inline mathematical calculations in T-SQL could be slower than the CLR equivalents even with all the invocation overhead involved. In those cases, we must test the different approaches and find the one that works best in that particular case.



European SQL Hosting - Amsterdam :: How to Truncate Log File in SQL Server 2008/2008R2/2012

February 22, 2013 07:17 by author Scott

When we create a new database in SQL Server, SQL Server typically creates two physical files in the operating system: one with the .MDF extension and another with the .LDF extension.
* The .MDF file is the primary data file.
* The .LDF file is the transaction log file.
 
Sometimes it seems impossible to shrink the transaction log file. The following code shrinks the log file to the minimum size possible.

USE [DBName]
GO

ALTER DATABASE [DBName] SET RECOVERY SIMPLE WITH NO_WAIT

DBCC SHRINKFILE([DBName_log], 1)

ALTER DATABASE [DBName] SET RECOVERY FULL WITH NO_WAIT
GO

 

 



European SQL 2012 Hosting - Amsterdam :: How to Transfer Data Between Two SQL Server Databases

January 30, 2013 07:42 by author Scott

Sometimes we need to transfer database data and objects from one server to another. In this article I present a tool that can transfer data and database objects, such as tables and stored procedure scripts, to another SQL Server database. In practice, I've transferred database objects from SQL Server 2005 to SQL Server 2012. Then I thought it could help others who face the same kind of problem!

Background 

Once I had a requirement to transfer data between two online databases. I saw many tools on the internet for transferring data between two SQL Server databases, but I decided to develop this kind of tool myself because I believe that when you write some code you learn something new...

In this article we learn the following points: 

- How to connect to a SQL Server database. 
- How to generate a Table and SP script programmatically. 
- How to copy data between two tables using Bulk Copy.
- How to insert data in an Identity column manually.  

Using the code 

The name of this project is DataTransfer. I used a Windows Forms application in Visual Studio 2008 to develop this project. The UI is divided into three sections. See the image below, which points out the five main functionalities:



1. Point-1 in section-1: this section captures the source server information. The source server is the SQL Server database that holds the data and objects to be transferred.

2. Point-2 in section-2: this section captures the destination server information. The destination server is the SQL Server database where we place the transferred objects and data.

3. Point-3 in sections 1 and 2: sets the SQL Server connection authentication, because when we connect to a database we use either SQL Server Authentication or Windows Authentication.

Before we transfer an object or any data, we first build the source and destination server connection strings using the information above.

Connection string build code:

public void BuildConnectionString()
{
    if (chkSAuthentication.Checked)
    {
        strSrc = "Data Source=" + txtSServerName.Text + ";Initial Catalog=" +
          txtSDatabase.Text + ";User Id=" + txtSLogin.Text +
          ";Password=" + txtSPassword.Text;
    }
    else
    {
        strSrc = "Data Source=" + txtSServerName.Text +
          ";Initial Catalog=" + txtSDatabase.Text + ";Integrated Security=True";
    }

    if (chkDAuthentication.Checked)
    {
        strDst = "Data Source=" + txtDServerName.Text + ";Initial Catalog=" +
          txtDDatabase.Text + ";User Id=" + txtDLogin.Text +
          ";Password=" + txtDPassword.Text;
    }
    else
    {
        strDst = "Data Source=" + txtDServerName.Text +
          ";Initial Catalog=" + txtDDatabase.Text +
          ";Integrated Security=True";
    }
}


1. Point-4 in section-3: controls the transfer behavior. There are two main options: one for a Table object and another for a Stored Procedure object. When we select Table, the text box accepts the name of the table to transfer; when we select SP, the text box accepts the SP name.

2. Point-5 in section-3: used mainly when we transfer a Table object from one database to another. When we transfer a table there are two transferable options: the table script and the table data. This application creates a table script from the source database and executes that script to create the table in the destination database.

To create a table script we use a T-SQL statement. When we execute this T-SQL statement it returns a DataTable with several rows. Together, those rows contain the complete table script, including the identity and primary key definitions.

Table script generate code: 

public string GetTableScript(string TableName, string ConnectionString)
{
    string Script = "";

    string Sql = "declare @table varchar(100)" + Environment.NewLine +
    "set @table = '" + TableName + "' " + Environment.NewLine +
        //"-- set table name here" +
    "declare @sql table(s varchar(1000), id int identity)" + Environment.NewLine +
    " " + Environment.NewLine +
        //"-- create statement" +
    "insert into  @sql(s) values ('create table [' + @table + '] (')" + Environment.NewLine +
    " " + Environment.NewLine +
        //"-- column list" +
    "insert into @sql(s)" + Environment.NewLine +
    "select " + Environment.NewLine +
    "    '  ['+column_name+'] ' + " + Environment.NewLine +
    "    data_type + coalesce('('+cast(character_maximum_length as varchar)+')','') +
    ' ' +" + Environment.NewLine +
    "    case when exists ( " + Environment.NewLine +
    "        select id from syscolumns" + Environment.NewLine +
    "        where object_name(id)=@table" + Environment.NewLine +
    "        and name=column_name" + Environment.NewLine +
    "        and columnproperty(id,name,'IsIdentity') = 1 " + Environment.NewLine +
    "    ) then" + Environment.NewLine +
    "        'IDENTITY(' + " + Environment.NewLine +
    "        cast(ident_seed(@table) as varchar) + ',' + " + Environment.NewLine +
    "        cast(ident_incr(@table) as varchar) + ')'" + Environment.NewLine +
    "    else ''" + Environment.NewLine +
    "   end + ' ' +" + Environment.NewLine +
    "    ( case when IS_NULLABLE = 'No' then 'NOT ' else '' end ) + 'NULL ' + " + Environment.NewLine +
    "    coalesce('DEFAULT '+COLUMN_DEFAULT,'') + ','" + Environment.NewLine +
    " " + Environment.NewLine +
    " from information_schema.columns where table_name = @table" + Environment.NewLine +
    " order by ordinal_position" + Environment.NewLine +
    " " + Environment.NewLine +
        //"-- primary key" +
    "declare @pkname varchar(100)" + Environment.NewLine +
    "select @pkname = constraint_name from information_schema.table_constraints" + Environment.NewLine +
    "where table_name = @table and constraint_type='PRIMARY KEY'" + Environment.NewLine +
    " " + Environment.NewLine +
    "if ( @pkname is not null ) begin" + Environment.NewLine +
    "    insert into @sql(s) values('  PRIMARY KEY (')" + Environment.NewLine +
    "    insert into @sql(s)" + Environment.NewLine +
    "        select '   ['+COLUMN_NAME+'],' from information_schema.key_column_usage" +
Environment.NewLine +
    "        where constraint_name = @pkname" + Environment.NewLine +
    "        order by ordinal_position" + Environment.NewLine +
        //"    -- remove trailing comma" +
    "    update @sql set s=left(s,len(s)-1) where id=@@identity" + Environment.NewLine +
    "    insert into @sql(s) values ('  )')" + Environment.NewLine +
    "end" + Environment.NewLine +
    "else begin" + Environment.NewLine +
        //"    -- remove trailing comma" +
    "    update @sql set s=left(s,len(s)-1) where id=@@identity" + Environment.NewLine +
    "end" + Environment.NewLine +
    " " + Environment.NewLine +
    "-- closing bracket" + Environment.NewLine +
    "insert into @sql(s) values( ')' )" + Environment.NewLine +
    " " + Environment.NewLine +
        //"-- result!" +
    "select s from @sql order by id";
    DataTable dt = GetTableData(Sql, ConnectionString);
    foreach (DataRow row in dt.Rows)
    {
        Script += row[0].ToString() + Environment.NewLine;
    }

    return Script;
}

To create an SP script we use the built-in SQL Server stored procedure sp_helptext, which takes the SP name as a parameter.

SP script generate code:

public string GetSPScript(string SPName, string ConnectionString)
{
    string Script = "";

    string Sql = "sp_helptext '" + SPName + "'";

    DataTable dt = GetTableData(Sql, ConnectionString);
    foreach (DataRow row in dt.Rows)
    {
        Script += row[0].ToString() + Environment.NewLine;
    }

    return Script;
}

Once we get both scripts from the source database, we simply execute them against the destination database to transfer the objects.

Now we transfer the data between the two servers. In this project we use two options to transfer data: the bulk copy method, or generating INSERT statements from the source table and data and executing those statements on the destination server.

Bulk data copy code:

void TransferData()
{
    try
    {
        DataTable dataTable = new Utility().GetTableData("Select * From " + txtTableName.Text, strSrc);

        SqlBulkCopy bulkCopy = new SqlBulkCopy(strDst, SqlBulkCopyOptions.TableLock)
        {
            DestinationTableName = txtTableName.Text,
            BatchSize = 100000,
            BulkCopyTimeout = 360
        };
        bulkCopy.WriteToServer(dataTable);

        MessageBox.Show("Data Transfer Succesfull.");
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.Message);
    }
}

Copy data using Insert statements code:  

void TransferDataWithTableScript()
{
    try
    {       

        DataTable dataTable = new Utility().GetTableData("Select * From " + txtTableName.Text, strSrc);

        if (!string.IsNullOrEmpty(new Utility().GetIdentityColumn(txtTableName.Text, strSrc)))
        {
            string InsertSQL = "";
            InsertSQL += "SET IDENTITY_INSERT [" + txtTableName.Text + "] ON " + Environment.NewLine;

            string ColumnSQL = "";
            foreach (DataColumn column in dataTable.Columns)
            {
                ColumnSQL += column.ColumnName + ",";
            }
            ColumnSQL = ColumnSQL.Substring(0, ColumnSQL.Length - 1);

            foreach (DataRow row in dataTable.Rows)
            {
                string ColumnValueL = "";
                foreach (DataColumn column in dataTable.Columns)
                {
                    ColumnValueL += "'" + row[column.ColumnName].ToString().Replace("''", "'") + "',";
                }
                ColumnValueL = ColumnValueL.Substring(0, ColumnValueL.Length - 1);

                InsertSQL += "Insert Into " + txtTableName.Text +
                  " (" + ColumnSQL + ") Values(" +
                  ColumnValueL + ")" + Environment.NewLine;
            }

            InsertSQL += "SET IDENTITY_INSERT [" + txtTableName.Text + "] OFF " + Environment.NewLine;

            new Utility().ExecuteQuery(InsertSQL, strDst);
        }       

        MessageBox.Show("Data Transfer Succesfull.");
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.Message);
    }
}

We need the second data copy option because when a table has an identity column and we copy data into it, the destination table generates new IDs for the identity column instead of using the existing identity values. For this situation we use this option to copy the data. We also execute an extra statement before the INSERT statements on the destination server to allow explicit identity values, and another statement after the INSERT statements to turn that setting off again.

Identity column code:  

SET IDENTITY_INSERT [" + txtTableName.Text + "] ON // before execute insert statement
SET IDENTITY_INSERT [" + txtTableName.Text + "] OFF // after execute insert statement

 



European SQL 2012 Hosting - Amsterdam :: Data Alerting in SQL Server 2012

September 18, 2012 08:43 by author Scott

You can create a data alert to send an e-mail notification in SQL Server 2012 Reporting Services. The data alert sends an e-mail notification only when specific conditions in the data are true at a scheduled time. This Data Alerting feature is available only when Reporting Services runs in SharePoint integrated mode. It works only with reports designed using Report Designer or Report Builder; you cannot create alerts for Power View reports.

Data Alert Designer

You can create one or more data alerts for any report, provided the report returns data at the time you create the data alert.


Steps to create a Data Alert

Open the report to which you want to add the data alert, then select New Data Alert from the Actions menu in the report toolbar.



Note: You must have permission in SharePoint to create an alert.

Using the Data Alert Designer you can define rules, over one or more data regions in the report, that control when Reporting Services sends an alert. When you save the alert definition, Reporting Services stores it in the alerting database and schedules a corresponding SQL Server Agent job.




You can create one or more rules that compare a field value to a value you enter. The Data Alert Designer combines multiple rules for the same data feed using the logical AND operator.


In the schedule section of the Data Alert Designer, you can configure the daily or weekly intervals at which to run the SQL Server Agent job for the data alert.




Finally, you must specify an e-mail address as a recipient for the data alert. The Reporting Services alerting service manages the process of refreshing the data feed and applying the rules in the data alert definition. The alerting service adds an alert instance to the alerting database.




The e-mail for a successful data alert shows the user name of the person who created the alert, a description of the data from the alert, and the rows from the data feed that generated the data alert. A sample alert is shown below.




If an error occurs during alert processing, an alert message describing the error is sent to the recipient.


Data Alert Manager

Data Alert Manager lists all data alerts that you created for the report, as shown below. To open the Data Alert Manager, open the document library that contains the report, click the down arrow, and select Manage Data Alerts.


 



European SQL Hosting - Amsterdam :: Import SQL Server Data to an Access Database

August 28, 2012 07:14 by author Scott

SQL Server data can be transferred to a Microsoft Access database via importing or linking. Importing essentially creates a copy of the SQL Server data in the Access database. Once the data is imported, changes made in either the SQL Server database or the Access database are not reflected in one another. In other words, think of it as a one-way street. Linking, on the other hand, provides a direct connection between the two entities that will reflect any changes made to the data in either entity.

Importing comes in handy when you have data in your SQL Server database that you want to transfer to your Access database permanently. This tutorial will explain how to import SQL Server data into an Access database. An upcoming tutorial detailing the linking process will be posted in the future.


How to Get Your Data Ready Before Import


Before you begin preparing for the import process, you will obviously need to have the proper information in place to connect to your SQL Server database. If you are not the database administrator, contact them and secure the relevant login information first.


Once you have the necessary information, connect to the SQL Server database containing the data you are planning to import to Access.


You will have the option to import several objects (tables or views) at the same time. As is usually the case when importing data from different programs, errors can occur if the formatting is not correct before the import begins. To ensure that your database’s source data gets imported without problems, follow the guidelines in this checklist:


1. Your data must be limited to a maximum of 255 columns, as that is all that Access will support.


2. Your source data cannot exceed 2GB in size. Access limits database size to 2GB, and you will also have to concede some storage for system objects as well. If your SQL Server source data is very large due to having multiple tables, you will probably encounter an error when trying to import them via an .accdb file. Should this apply to you, linking your SQL Server data to Access will provide a solution to your woes. You will be shown how to link the data in an upcoming article in this series.


3. It will be necessary to manually create relationships between new and existing tables due to the fact that Access does not create them automatically once the data is imported. You can create the table relationships by selecting the File tab, clicking Info, and selecting Relationships.


Now that you have looked over your SQL Server source data and checked that it meets the prerequisites, go to the Access database that will be the destination for the data. Before you can import anything to the Access database, you will need the appropriate permissions to do so.


You can either add data to an existing database, or add it to a fresh, blank database. To add data to a blank database, select File, click New, and select the Blank Database option. If you are adding data to an existing database, check over the tables to see that there are no errors.


How to Import Your Data


With your data prepared, it’s now time to finally import it. Open up your Access destination database, and click on the External Data tab. Go to Import & Link, and select the ODBC Database option. Select the Import the source data into a new table in the current database option. Click OK.


You must now select your data source, or .dsn file. If you see your .dsn file, click on it and select OK. If you need to create a new .dsn file, clicking New will bring up the Create New Data Source wizard, which is detailed in the next paragraph of this tutorial. If you already selected your data source, skip the next two paragraphs.


In the Create New Data Source wizard, select SQL Server from the driver list. Click Next. Enter a name for the .dsn file. You’ll need write permissions for the folder where you are saving the .dsn file. Click Next, followed by Finish.


You should now see the Create a New Data Source to SQL Server window. You have the option of entering a description of the data source in the Description box. Enter one in, or leave it blank. Enter the name of the SQL Server you want to connect to in the corresponding box, and click Next. Choose whether to use Windows NT authentication or SQL Server authentication, and click Next. Check the box next to the Change the default database to option if you plan on connecting to a specific database. Click Next, followed by Finish. Check over the summary, and select Test Data Source. If your test results are fine, click OK twice. Click OK once again until the Select Data Source window is closed.


In the Import Objects window, go to the Tables tab. This is where you click the tables or views you want to import. Click OK to begin importing the data.


It is recommended that you save the steps you just performed as an import specification. This will save you time as you repeat the process in the future. In the Get External Data – ODBC Database window, look for and add a check in the box next to the Save Import Steps option. Enter a name for the import specification. You can optionally enter a description in the appropriate box as well. If you have Microsoft Outlook 2010 installed and want to execute the data import during specific times, check the box next to the Create Outlook Task option. Otherwise, you can run the import at your own discretion without the help of Outlook. Click Save Import to save the specification. You will now have the import specification at your fingertips the next time you want to import data between SQL Server and Access.



About HostForLIFE.eu

HostForLIFE.eu is a European Windows Hosting Provider which focuses on the Windows Platform only. We deliver on-demand hosting solutions including Shared Hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.

We offer the latest Windows 2016 Hosting, ASP.NET Core 2.2.1 Hosting, ASP.NET MVC 6 Hosting and SQL 2017 Hosting.

