European Windows 2012 Hosting BLOG

BLOG about Windows 2012 Hosting and SQL 2012 Hosting - Dedicated to European Windows Hosting Customers

European SQL 2016 Hosting - HostForLIFE.eu :: Indexes In SQL Server

clock August 7, 2018 09:19 by author Peter

One of the most important routes to high performance in a SQL Server database is an index. An index is a database object used to speed up querying by providing quick access to rows in a table. By using indexes we save time and improve the performance of database queries and applications. An index contains keys built from one or more columns in the table, mapped to the storage location of the specified data. When we create an index on a column, SQL Server internally maintains a separate index structure, so that whenever a user retrieves data from the table, SQL Server can use that index to locate the required rows very quickly.

Since SQL Server 2008, a table can have up to 1,000 indexes: 1 clustered and 999 nonclustered (earlier versions were limited to 250). The index type refers to the way the index is stored internally by SQL Server. A table can contain two types of indexes:

  • Clustered Index
  • Non-clustered Index

Clustered Indexes
The only time the data rows in a table are stored in a sorted order is when the table contains a clustered index. When a table has a clustered index, it is called a clustered table. If a table has no clustered index, its data rows are stored in an unordered structure called a heap. A table can have only one clustered index, and one is created automatically when a primary key constraint is defined on the table (unless a clustered index already exists).

Non-Clustered Indexes
A nonclustered index does not impose any order on the data rows in the table; it is a separate structure that points back to the rows. In a table, we can create up to 999 nonclustered indexes (249 in SQL Server 2005 and earlier). If we create an index without specifying CLUSTERED, it is created as a nonclustered index by default.
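For example, here is a minimal sketch using a hypothetical Customers table: the primary key constraint creates the clustered index by default, and a separate nonclustered index is added for lookups by city.

CREATE TABLE dbo.Customers
(
    CustomerID INT NOT NULL CONSTRAINT PK_Customers PRIMARY KEY CLUSTERED,
    LastName   NVARCHAR(50) NOT NULL,
    City       NVARCHAR(50) NOT NULL
);

-- A nonclustered index to speed up lookups by city
CREATE NONCLUSTERED INDEX IX_Customers_City
    ON dbo.Customers (City)
    INCLUDE (LastName);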

European SQL 2016 Hosting
European best, cheap and reliable ASP.NET hosting with instant activation. HostForLIFE.eu is #1 Recommended Windows and ASP.NET hosting in the European continent, with a 99.99% uptime guarantee for reliability, stability and performance. The HostForLIFE.eu security team is constantly monitoring the entire network for unusual behaviour. We deliver hosting solutions including Shared hosting, Cloud hosting, Reseller hosting, Dedicated Servers, and IT as a Service for companies of all sizes.



European SQL 2016 Hosting - HostForLIFE.eu :: Using UNION With Different Tables, Fields And Filtering

clock July 31, 2018 09:22 by author Peter

The "UNION" command is a handy way to combine rows from two tables that share the same data context. Whether or not they have the same fields, you may also need to classify where each row came from.
Look at this example query.
Select t.* From ( 
(Select 1 as typePerson, tenName as namePerson, tenSalary as payMoney From Teachers Where tenAge>30) 
UNION 
(Select 0 as typePerson, stdName as namePerson, 0 as payMoney From Students Where stdYearFinish=2016) 
) t 
Where t.namePerson like 'Maria%' 
Order By t.namePerson, t.typePerson;


It creates a temporary alias "t";
It classifies each data row with "typePerson": 1 (true) for Teachers and 0 (false) for Students;
It filters the Teachers by age;
It filters the Students by the year they finish school (2016);
After the UNION, it filters by the namePerson field, keeping names that begin with "Maria".

Observations
All fields must be in the same position and of the same data type; within each branch you can use all kinds of selects, joins, WHERE clauses, etc. The "UNION ALL" command performs better than UNION when you simply want all rows, because UNION also removes duplicate rows between the two result sets. In this sample, you could also turn the query into a view.

CONCLUSION

You must be careful with the position and data type of the fields; you can use CAST to align them, too.

This selection is just an example.
The SQL UNION ALL operator is used to combine the result sets of 2 or more SELECT statements. It does not remove duplicate rows between the various SELECT statements (all rows are returned).
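For instance, here is a minimal sketch reusing the fictitious Teachers and Students tables from above, with UNION ALL and CAST used to align the payMoney column types:

Select 1 as typePerson, tenName as namePerson, Cast(tenSalary as decimal(10,2)) as payMoney 
From Teachers 
UNION ALL 
Select 0 as typePerson, stdName as namePerson, Cast(0 as decimal(10,2)) as payMoney 
From Students;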

European SQL 2016 Hosting
European best, cheap and reliable ASP.NET hosting with instant activation. HostForLIFE.eu is #1 Recommended Windows and ASP.NET hosting in the European continent, with a 99.99% uptime guarantee for reliability, stability and performance. The HostForLIFE.eu security team is constantly monitoring the entire network for unusual behaviour. We deliver hosting solutions including Shared hosting, Cloud hosting, Reseller hosting, Dedicated Servers, and IT as a Service for companies of all sizes.



SQL Server 2016 Hosting - HostForLIFE.eu :: Auto Query Generator In MSSQL Server

clock July 26, 2018 08:01 by author Peter

If you're a developer, irrespective of the platform, you have to work with databases. Creating SQL statements for tables is quite often a monotonous job, and it gets hectic especially when dealing with gigantic tables that have hundreds of columns. Writing SQL statements manually every time becomes a tiresome process. Before explaining the script, I want to share the reason I wrote it and how it is helping my peers. We have code standards on the database side. The points below are our standards.

  • Need to maintain a separate stored procedure to every table
  • Don’t use * in the query instead specify the column
  • Use the correct data type and size of a column
  • Every parameter should be nullable in a stored procedure.

I am developing an application related to machines using .NET and SQL Server. The database design consists of some master tables and transactional tables. All the transactional tables have more than 30 columns.

To meet my code standards, I need to mention all columns with correct data type and size in stored procedure parameters like below,
CREATE PROC [dbo].[USP_PCNitemCreation] 
( 
    @Id int, 
    @machineName varchar(50) = NULL, 
    @furnacename varchar(50) = NULL, 
    @minValue int = NULL, 
    @maxValue int = NULL, 
    @createdDate datetime = NULL, 
    @remarks varchar(100) = NULL  -- the name of this parameter was lost in the original text; placeholder
) 

All the queries should specify the columns instead of using the star (*).
select machineName,furnacename from trn_furnace where Id=@Id 

It consumes more time and is a boring task. So, I planned to write a script to cut down on the time it takes and the boring, repetitive work. We cannot automate the logic, but we can automate the repeated tasks.

So I wrote the script below, which really cuts down on all of the pain points above.

Auto Query Generator Stored Procedure for MSSQL Server,
CREATE PROC [dbo].[USP_QuerycreationSupport]
(
    @table_Name varchar(100) = NULL
)
AS
BEGIN
    DECLARE @InserCols   NVARCHAR(MAX)
    DECLARE @Inserparam  NVARCHAR(MAX)
    DECLARE @Insertquery NVARCHAR(MAX)
    DECLARE @Selectquery NVARCHAR(MAX)
    DECLARE @Update      NVARCHAR(MAX)
    DECLARE @DeleteQuery NVARCHAR(MAX)

    -- Stored procedure parameter list (note: max_length is in bytes, so nvarchar columns show twice the character count)
    SELECT '@' + c.NAME + SPACE(1) +
           CASE CAST(t.NAME AS nvarchar(40))
               WHEN 'nvarchar' THEN t.NAME + '(' + CAST(c.max_length AS nvarchar(30)) + ')'
               WHEN 'varchar'  THEN t.NAME + '(' + CAST(c.max_length AS nvarchar(30)) + ')'
               WHEN 'char'     THEN t.NAME + '(' + CAST(c.max_length AS nvarchar(30)) + ')'
               WHEN 'decimal'  THEN t.NAME + '(18,2)'
               ELSE t.NAME
           END + '=null,' AS colss
    FROM sys.columns c
    INNER JOIN sys.types t ON c.user_type_id = t.user_type_id
    LEFT OUTER JOIN sys.index_columns ic ON ic.object_id = c.object_id AND ic.column_id = c.column_id
    LEFT OUTER JOIN sys.indexes i ON ic.object_id = i.object_id AND ic.index_id = i.index_id
    WHERE c.object_id = OBJECT_ID(@table_Name)

    SELECT 'Insert query'

    -- Comma-separated column list
    SET @InserCols = (SELECT DISTINCT
        (SELECT sc.NAME + ','
         FROM sys.tables st
         INNER JOIN sys.columns sc ON st.object_id = sc.object_id
         WHERE st.NAME = @table_Name
         FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'))
    -- Remove the trailing comma
    SELECT @InserCols = LEFT(@InserCols, LEN(@InserCols) - 1)

    -- Comma-separated parameter list
    SET @Inserparam = (SELECT DISTINCT
        (SELECT '@' + sc.NAME + ','
         FROM sys.tables st
         INNER JOIN sys.columns sc ON st.object_id = sc.object_id
         WHERE st.NAME = @table_Name
         FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'))
    -- Remove the trailing comma
    SELECT @Inserparam = LEFT(@Inserparam, LEN(@Inserparam) - 1)

    SET @Insertquery = 'insert into ' + @table_Name + '(' + @InserCols + ') values (' + @Inserparam + ')'
    SELECT @Insertquery

    SELECT 'Update Query'
    SET @Update = (SELECT DISTINCT
        (SELECT sc.NAME + '=@' + sc.NAME + ','
         FROM sys.tables st
         INNER JOIN sys.columns sc ON st.object_id = sc.object_id
         WHERE st.NAME = @table_Name
         FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'))
    -- Remove the trailing comma
    SELECT @Update = LEFT(@Update, LEN(@Update) - 1)
    SET @Update = 'update ' + @table_Name + ' set ' + @Update
    SELECT @Update

    -- Select query
    SELECT 'Select Query'
    SET @Selectquery = 'select ' + @InserCols + ' from ' + @table_Name
    SELECT @Selectquery

    -- Delete query
    SELECT 'Delete Query'
    SET @DeleteQuery = 'delete from ' + @table_Name
    SELECT @DeleteQuery
END


How to use this script,
Step 1 - Create the stored procedure using the above code or attached code.
Step 2 - Execute the stored procedure and pass your table name as a parameter.
Exec USP_QuerycreationSupport @table_Name = 'mstCustomer' 

Do not pass the schema-qualified database object name as the table name:
Exec USP_QuerycreationSupport @table_Name = '[dbo].[mstCustomer]' 

Once you execute the Stored Procedure as mentioned above, you get all the SQL statements as shown here. You could easily use the generated SQL statements elsewhere. You get all basic SQL statements like Select, Insert, Update & Delete.
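As a rough illustration, for a hypothetical mstCustomer table with columns Id, Name and City (not part of the original article), the generated statements would look something like this:

-- Hypothetical output for a table mstCustomer(Id, Name, City)
insert into mstCustomer(Id,Name,City) values (@Id,@Name,@City)
update mstCustomer set Id=@Id,Name=@Name,City=@City
select Id,Name,City from mstCustomer
delete from mstCustomer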


HostForLIFE.eu SQL Server 2016 Hosting
HostForLIFE.eu is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.



European SQL 2016 Hosting - HostForLIFE.eu :: FOR JSON Clause With AUTO Mode In SQL Server 2016

clock July 24, 2018 07:26 by author Peter

In the release of SQL Server 2016 CTP 2, one of the features introduced is the FOR JSON clause. So, the first question that comes into everyone's mind is: what is JSON? JSON stands for JavaScript Object Notation. JSON is a lightweight format used for storing and interchanging data, and it uses standard JavaScript functions to convert JSON data into native JavaScript objects. The FOR JSON clause is used to format query results as JSON. We can format the query results using the FOR JSON clause in these ways:
  • With AUTO mode
  • With PATH mode
  • With ROOT option
  • Output with INCLUDE_NULL_VALUES option


In this blog, we will discuss the query formatting using FOR JSON clause with AUTO mode option.

Syntax for FOR JSON clause with AUTO option is like this:
FOR JSON AUTO

When the AUTO option is used, the format of the JSON is determined automatically based on the number of columns present in the SELECT list. A FROM clause is required in a query that uses the FOR JSON AUTO option.

When you join tables, columns present in the first table are used as properties of the root object in JSON array while columns present in the second table will be automatically formatted as a nested object within the root object.

Let’s execute the below query and see the JSON output.
SELECT sp.BusinessEntityID, 
   sp.TerritoryID, 
   st.CountryRegionCode, 
   st.[Group] TerrritoryGroup 
   FROM sales.salesperson sp 
   JOIN Sales.SalesTerritory st ON sp.TerritoryID = st.TerritoryID 
   WHERE sp.TerritoryID = 10 
FOR JSON AUTO 

After executing the above query, we get the output in this format.
[{ 
    "BusinessEntityID": 289, 
    "TerritoryID": 10, 
    "st": [{ 
        "CountryRegionCode": "GB", 
        "TerrritoryGroup": "Europe" 
    }] 
}]

Brackets [ ] represent a JSON array in the output.

Here, in the output, we can see that the table Sales.SalesTerritory is automatically formatted as a nested object under the parent object.

So we have generated a formatted query output using JSON clause. I will continue with other ways of formatted output using JSON clause in my next blogs.

European SQL 2016 Hosting
European best, cheap and reliable ASP.NET hosting with instant activation. HostForLIFE.eu is #1 Recommended Windows and ASP.NET hosting in the European continent, with a 99.99% uptime guarantee for reliability, stability and performance. The HostForLIFE.eu security team is constantly monitoring the entire network for unusual behaviour. We deliver hosting solutions including Shared hosting, Cloud hosting, Reseller hosting, Dedicated Servers, and IT as a Service for companies of all sizes.



European SQL 2016 Hosting - HostForLIFE.eu :: Using Window Ranking Function With UPDATE Statement

clock July 17, 2018 07:59 by author Peter

I was working on a scenario in which I needed to assign a unique value to a column. I was trying to update the column using the window ranking function ROW_NUMBER(), but I got an error saying that a windowed function can only appear in the SELECT or ORDER BY clause:

Then I did a workaround and used the windowed function indirectly through a CTE (Common Table Expression). I will show you step by step how I accomplished this using a CTE.

Let’s first create a table with some test data,
    CREATE TABLE Test 
    ( 
        ID INT, 
        Value VARCHAR(10) NOT NULL 
    ) 
    GO 
     
    INSERT INTO Test (Value) VALUES('Sunday'),('Monday'),('Tuesday'),('Wednesday'),('Thursday'),('Friday'),('Saturday') 
    GO 

As we can see, NULL values get inserted into the ID column because we didn't specify any values for this column in the INSERT statement. So, when I tried this UPDATE statement using ROW_NUMBER() with SET, I got the following error:
    UPDATE Test 
    SET ID = ROW_NUMBER() OVER(ORDER BY Value) 
    GO 

Then I used CTE to update the ID values in table test in this manner,
    ;WITH CTE AS 
    ( 
        SELECT Value, ROW_NUMBER() OVER(ORDER BY Value) as RN 
        FROM Test 
    ) 
     
    UPDATE T 
    SET ID = RN 
    FROM CTE C JOIN Test T ON T.value = C.Value 


When I ran this SQL code I didn't get any error, and the ID column got updated with unique values.
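As a side note, a slightly shorter variant is to make the CTE itself the target of the UPDATE, which works here because the CTE is based on a single table (a sketch against the same Test table):

    ;WITH CTE AS 
    ( 
        SELECT ID, ROW_NUMBER() OVER(ORDER BY Value) AS RN 
        FROM Test 
    ) 
    UPDATE CTE 
    SET ID = RN 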

European SQL 2016 Hosting
European best, cheap and reliable ASP.NET hosting with instant activation. HostForLIFE.eu is #1 Recommended Windows and ASP.NET hosting in the European continent, with a 99.99% uptime guarantee for reliability, stability and performance. The HostForLIFE.eu security team is constantly monitoring the entire network for unusual behaviour. We deliver hosting solutions including Shared hosting, Cloud hosting, Reseller hosting, Dedicated Servers, and IT as a Service for companies of all sizes.



European SQL 2016 Hosting - HostForLIFE.eu :: Alternative To SQL "In" Condition When Working With Many Values

clock July 10, 2018 06:58 by author Peter
In this post, we will be discussing an alternative to using a SQL "IN" condition when many values need to be passed to the SQL query, through the use of a User Defined Data Type, to get exact records back based on a list of ID's. There will probably be a very small number of cases where the additional steps detailed in this post are needed; after all, SQL Server has a very large limit on the number of values the "IN" condition can handle, based on the length of the instruction (max 65k).

However, based on personal experience where there is a bigger need for dealing with large volumes of data, investing extra time in database architecture is always good.

The “IN” Condition

For those who are not familiar, the "IN" condition is probably one of the most flexible methods of returning specific records from a table based on a list of values. The "IN" condition is a nice shorthand way to carry out multiple OR conditions. A basic example of the condition in use is:

SELECT *  
FROM Products  
WHERE ID IN (1, 2, 3, 4)  

User Defined Data Type

First introduced in SQL Server 2008, User Defined Data Types define a way of storing temporary data in a structure similar to a table. The User Defined Data Type can be populated through a Table-Valued Parameter within a stored procedure. This will be the key to dealing with many values in our alternative to using the IN condition.

Scenario: Selecting List of Products

I encountered a scenario that required a flexible way to get back a number of specific records from my database when I received a list of product ID's from an external API for cross-referencing. Normally, I would just grab all the ID's and pass them directly into the IN condition. It was unknown how many ID's would be received from the external API, so a different approach needed to be considered to ensure the application would be scalable.

Create A New User Defined Data Type

The first thing we need to do is to create a User Defined Data Type that will allow us to pass the list of our product ID’s in a table structure. The structure itself is very simple and contains just one column called ProductID.
CREATE TYPE Type_ApiProductIDs AS TABLE    
(  
    ProductID int  
)

Create Stored Procedure

Next, we create a stored procedure called "spGetAllProductsByIDs" that contains a parameter based on the data type we created above. In this case, the parameter is @ProductsTVP. Note that when using a table-valued parameter in a stored procedure, it must be declared as READONLY.

CREATE PROCEDURE spGetAllProductsByIDs  
(  
    @ProductsTVP Type_ApiProductIDs READONLY  
)  
AS  
BEGIN  
    SELECT  
        ID,  
        ProductName,  
        ProductPrice,  
        ProductDescription  
    FROM   
        CMS_Products p  
    INNER JOIN @ProductsTVP AS tvp ON p.ID = tvp.ProductID   
END 
The stored procedure simply gets back all products from the CMS_Products table based on the values stored in the @ProductsTVP parameter through a join. Now all that needs to be done is to use this stored procedure in code.
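Before moving to the C# side, the table type and stored procedure can also be exercised directly from T-SQL; a minimal sketch (the ID values are arbitrary and assume matching rows exist in CMS_Products):

DECLARE @ids Type_ApiProductIDs  

INSERT INTO @ids (ProductID) VALUES (1), (2), (3)  

EXEC spGetAllProductsByIDs @ProductsTVP = @ids  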

Passing User Defined Data Type In Code

A method called GetProductsBySearchTerm() gets a list of product ID's based on search terms from a call to the SearchApiHelper class, and then transforms this output into the "prodIdsTable" DataTable, a structure similar to our Type_ApiProductIDs data type. To use this DataTable with our stored procedure, we'll be using a SqlDbType.Structured SQL parameter.

In addition to a DataTable, an IEnumerable<SqlDataRecord> and a DbDataReader can be used in conjunction with the SqlDbType.Structured parameter.  
public static void GetProductsBySearchTerm(string searchTerm) 
{ 
    #region Get Product IDs from API Based On Search Term 
      List<int> searchProductIds = SearchApiHelper.GetResults(searchTerm); // Get list of product ID's. 
      // Create a Data Table in the same structure to User Data-Type. 
    DataTable prodIdsTable = new DataTable(); 
    prodIdsTable.Columns.Add(new DataColumn("ProductID", Type.GetType("System.Int32"))); 
      // Populate "prodIdsTable" datatable with ID's from SearchApiHelper. 
    if (searchProductIds?.Count > 0) 
    { 
        foreach (int id in searchProductIds) 
        { 
            DataRow newRow = prodIdsTable.NewRow(); 
            newRow["ProductID"] = id; 
            prodIdsTable.Rows.Add(newRow); 
        } 
    }  
    #endregion 
      if (prodIdsTable?.Rows.Count > 0) 
    { 
        DataSet dsProducts = new DataSet(); 
          using (SqlConnection sqlConn = new SqlConnection(ConfigurationManager.ConnectionStrings["ConnectionString"].ToString())) 
        { 
            try 
            { 
                sqlConn.Open(); 
 
                using (SqlCommand sqlCmd = new SqlCommand("spGetAllProductsByIDs", sqlConn)) 
                { 
                    sqlCmd.CommandType = CommandType.StoredProcedure; 
 
                    SqlParameter tvpParam = sqlCmd.Parameters.AddWithValue("@ProductsTVP", prodIdsTable); 
                    tvpParam.SqlDbType = SqlDbType.Structured; 
 
                    SqlDataAdapter da = new SqlDataAdapter(); 
                    da.SelectCommand = sqlCmd; 
                    da.Fill(dsProducts); 
                } 
 
                // Do something with the data returned from dsProducts DataTable... 
            } 
            catch (Exception e) 
            { 
                throw e; 
            } 
        } 
    } 
}

European SQL 2016 Hosting
European best, cheap and reliable ASP.NET hosting with instant activation. HostForLIFE.eu is #1 Recommended Windows and ASP.NET hosting in the European continent, with a 99.99% uptime guarantee for reliability, stability and performance. The HostForLIFE.eu security team is constantly monitoring the entire network for unusual behaviour. We deliver hosting solutions including Shared hosting, Cloud hosting, Reseller hosting, Dedicated Servers, and IT as a Service for companies of all sizes.



European SQL 2016 Hosting - HostForLIFE.eu :: Warehousing JSON formatted data in SQL Server 2016

clock March 8, 2017 10:34 by author Scott

In this article, I continue to review the exciting features available in SQL Server 2016. One such feature is the long awaited T-SQL support for JSON formatted data. In this article we take a look at how JSON support will impact data warehouse solutions.

Background

Since the advent of EXtensible Markup Language (XML), many modern web applications have focused on providing data that is both human-readable and machine-readable. From a relational database perspective, SQL Server kept up with these modern web applications by providing support for XML data in the form of an XML data type and several functions that can be used to parse, query and manipulate XML formatted data.

As a result of being supported in SQL Server, data warehouse solutions based off SQL Server were then able to source XML-based OLTP data into a data mart. To illustrate this point, let’s take a look at the XML representation of our fictitious Fruit Sales data shown in figure below.

To process this data in a data warehouse, we would first have to convert it into a relational format of rows and columns using T-SQL XML built-in functions such as the nodes() function.
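A rough sketch of such a conversion, assuming a hypothetical fruit sales XML layout (the element and attribute names are illustrative only):

DECLARE @xml XML = N'
<FruitSales>
  <Sale Fruit="Apple"  Quantity="10" SaleDate="2016-01-15" />
  <Sale Fruit="Banana" Quantity="25" SaleDate="2016-01-16" />
</FruitSales>';

SELECT s.value('@Fruit',    'nvarchar(50)') AS Fruit,
       s.value('@Quantity', 'int')          AS Quantity,
       s.value('@SaleDate', 'date')         AS SaleDate
FROM @xml.nodes('/FruitSales/Sale') AS t(s);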

The results of the above script are shown in the figure below, in a format recognisable for a data warehouse.

Soon after XML became a dominant language for data interchange for many modern web applications, JavaScript Object Notation (JSON) was introduced as a lightweight data-interchange format that is more convenient for web applications to process than XML. Likewise most relational database vendors released newer versions of their database systems that included the support for JSON formatted data. Unfortunately, Microsoft SQL Server was not one of those vendors and up until SQL Server 2014, JSON data was not supported. Obviously this lack of support for JSON, created challenges for data warehouse environments that are based off SQL Server.

Although there were workarounds (i.e. using Json.Net) to address the lack of JSON support in SQL Server, there was always a sense that these workarounds were inadequate, time-wasting, and forced data warehouse development teams to pick up a new skill (i.e. learn .NET). Fortunately, the release of SQL Server 2016 has ensured that development teams can throw away their JSON workarounds, as JSON is supported in SQL Server 2016.

Parsing JSON Data into Data Warehouse

Similarly to its XML support, SQL Server's support for JSON can be classified in two ways:

  • Converting Relational dataset into JSON format
  • Converting JSON dataset into relational format

However, for the purposes of this discussion we are focusing primarily on the second part, which is converting JSON formatted data (retrieved from OLTP sources) into a relational format of rows and columns. To illustrate our discussion points we once again make use of the fictitious fruit sales dataset. This time around, the fictitious dataset has been converted into a JSON format as shown below.
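As a stand-in for the original sample, here is what such a JSON document might look like, mirroring the hypothetical fruit sales layout used earlier (the field names are illustrative only):

DECLARE @json NVARCHAR(MAX) = N'{
  "FruitSales": [
    { "Fruit": "Apple",  "Quantity": 10, "SaleDate": "2016-01-15" },
    { "Fruit": "Banana", "Quantity": 25, "SaleDate": "2016-01-16" }
  ]
}';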

ISJSON function

As part of supporting JSON formatted data, other relational databases such as MySQL and PostgreSQL 9.2 introduced a separate JSON data type. Amongst other things, a JSON data type conducts validation checks to ensure that the values being stored are indeed valid JSON.

Unfortunately, SQL Server 2016 (and Oracle 12c) does not have a special data type for storing JSON data; instead, a variable character (varchar/nvarchar) data type is used. Therefore, a recommended practice when dealing with JSON data in SQL Server 2016 is to first ensure that you are indeed dealing with valid JSON. The simplest way to do so is to use the ISJSON function. This is a built-in T-SQL function that returns 1 for a valid JSON dataset and 0 for an invalid one.
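A minimal sketch, assuming the @json variable declared above is still in scope (same batch):

SELECT ISJSON(@json) AS IsValidJson;                 -- returns 1: valid JSON
SELECT ISJSON(N'this is not json') AS IsValidJson;   -- returns 0: invalid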

The image below shows the use of the ISJSON function to validate our fictitious sample dataset.

OPENJSON function

Now that we have confirmed that we are working with a valid JSON dataset, the next step is to convert the data into a table format. Again, we have a built-in T-SQL function to do this, in the form of OPENJSON. OPENJSON works similarly to OPENXML in that it takes in an object and converts its data into rows and columns.

The figure below shows a complete T-SQL script for converting a JSON object into rows and columns.
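A sketch of what such a script might look like, run against the hypothetical @json document declared earlier (the column names and types are assumptions):

SELECT Fruit, Quantity, SaleDate
FROM OPENJSON(@json, '$.FruitSales')
WITH
(
    Fruit    NVARCHAR(50) '$.Fruit',
    Quantity INT          '$.Quantity',
    SaleDate DATE         '$.SaleDate'
);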

Once we execute the above script, we get relational output shown below.

Now that we have our relational dataset, we can process this data into data warehouse.

JSON_VALUE function

Prior to concluding our discussion of JSON in SQL Server 2016, it is worth mentioning that in addition to OPENJSON, you have other functions such as JSON_VALUE that can be used to query JSON data. However, this function returns a scalar value, which means that unlike the multiple rows and columns returned using OPENJSON, JSON_VALUE returns a single value, as shown below.

If the JSON object that you are querying doesn't have multiple elements, then you don't have to specify the row index (i.e. [0]), as shown below.
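A minimal sketch of both cases, again using the hypothetical fruit sales document:

-- With an array, an index is needed to pick one element
SELECT JSON_VALUE(@json, '$.FruitSales[0].Fruit') AS FirstFruit;

-- With a single object, no index is needed
SELECT JSON_VALUE(N'{ "Fruit": "Apple", "Quantity": 10 }', '$.Fruit') AS Fruit;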

Conclusion

The long wait is finally over: with the release of SQL Server 2016, JSON is now supported. Similarly to XML, T-SQL supports the conversion of a JSON object to a relational format as well as the conversion of relational tables to a JSON object. This support is implemented via built-in T-SQL functions such as OPENJSON and JSON_VALUE. In spite of all the excitement over JSON support in SQL Server 2016, we still don't have a JSON data type; the ISJSON function can be used to validate JSON text stored in ordinary character columns.



SQL Server 2016 Hosting - HostForLIFE.eu :: Isolation Level In SQL Transaction

clock February 28, 2017 08:37 by author Peter

Isolation is one of the properties of a SQL transaction. Isolating (separating) transactions from each other to maintain data integrity in the database is called isolation. Before going to the implementation part of isolation, we will understand why isolation is required in a database.

Why Is Isolation Level Required ?
While developing large enterprise or public networking kinds of applications, where a huge number of users access the same database, the same table, and at the same time, data concurrency situations may occur. We will discuss this situation in four parts:

  • Loss of Data
  • Dirty Read
  • Phantom Read
  • Inconsistency Analysis

Loss of Data
Let's take an example. Suppose there are 2 users accessing the same table, at the same moment, to update the same row. Each transaction is unaware of the other transaction. User A updates the row and then User B updates the same row. What happened here is that the last transaction, made by User B, overwrites the updated record of User A, and User A loses his/her data in the table.

Dirty Read
This is otherwise known as an Uncommitted Dependency. Let's take another example. Suppose User A and User B are accessing a table row at the same time: User A wants to read and User B wants to update the row. In the fraction of time between them, the transactions are executed. So, while User B's update is still in progress and not yet committed, User A reads that row and may get data that is not yet committed (and could still be rolled back), which may not be correct for his/her operation. This situation is known as a Dirty Read.

Phantom Read
This is also known as the Phantom Problem. Let's take yet another example. Suppose User A is told that a row is available for him to insert, but at the same time User B inserts that row. Now, when User A tries to insert, he/she can't. Then, he/she will get angry and say, "Hey, you committed that this was available for me to insert, but you cheated on me and granted someone else to do so!". You might hit this problem while reserving a train or movie ticket.

Inconsistency Analysis
This is also known as the Non-Repeatable Read problem. Let's take the same example of User A and User B. Suppose User A executes a transaction having three queries: a stored procedure, a transaction, or individual queries in a batch. The 1st query reads a table row, the 2nd query updates it, and the 3rd query reads it again. By doing this, User A wants to generate a report. As we know, User B is not a gentleman; he always spoils User A's intentions. He accessed the table row in between the two read queries of User A and did some operation like a delete! Now, User A has already modified the data, and when he wants to read it again, he is surprised: he gets inconsistent data.

From the points mentioned above, we understood that because User A is the weaker party, User B always plays with User A and forcefully does its job by dominating him. But as a database administrator or developer, we need to help User A. We have a weapon called the "Isolation Level", by using which we can help User A maintain data integrity.

Isolation Levels
SQL Server provides 5 Isolation levels to implement with SQL Transaction to maintain data concurrency in the database.

An isolation level is nothing but locking the row while performing some task, so that other transactions cannot access it or will wait for the current transaction to finish its job.
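As a quick aside, you can check which isolation level the current session is running under; a minimal sketch using the sys.dm_exec_sessions DMV:

SELECT CASE transaction_isolation_level 
           WHEN 0 THEN 'Unspecified' 
           WHEN 1 THEN 'Read Uncommitted' 
           WHEN 2 THEN 'Read Committed' 
           WHEN 3 THEN 'Repeatable Read' 
           WHEN 4 THEN 'Serializable' 
           WHEN 5 THEN 'Snapshot' 
       END AS CurrentIsolationLevel 
FROM sys.dm_exec_sessions 
WHERE session_id = @@SPID 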

Let's write a transaction without Isolation level.
BEGIN TRANSACTION MyTransaction 
BEGIN TRY 
UPDATE Account SET Debit=100 WHERE Name='John Cena' 
UPDATE ContactInformation SET Mobile='1234567890' WHERE Name='The Rock' 
COMMIT TRANSACTION MyTransaction 
PRINT 'TRANSACTION SUCCESS' 
END TRY 
BEGIN CATCH 
ROLLBACK TRANSACTION MyTransaction 
PRINT 'TRANSACTION FAILED' 
END CATCH  


In the above code snippet, we have not implemented Isolation Level. Therefore, there are chances of data concurrency. Now, we will go through all the five Isolation levels and find the solution for our situation.

Read Uncommitted

When this level is set, a transaction can read uncommitted data, resulting in the Dirty Read problem. With this isolation level, we allow a transaction to read data which is being updated by another transaction and is not yet committed. Suppose User A is trying to read a row which is being updated by User B. Here, we are allowing User A to read the uncommitted data, i.e. data that may still be rolled back.

Example
SET TRANSACTION ISOLATION LEVEL  
READ UNCOMMITTED 
BEGIN TRANSACTION MyTransaction 
BEGIN TRY 
UPDATE Account SET Debit=100 WHERE Name='John Cena' 
UPDATE ContactInformation SET Mobile='1234567890' WHERE Name='The Rock' 
COMMIT TRANSACTION MyTransaction 
PRINT 'TRANSACTION SUCCESS' 
END TRY 
BEGIN CATCH 
ROLLBACK TRANSACTION MyTransaction 
PRINT 'TRANSACTION FAILED' 
END CATCH 
Read Committed

This prevents Dirty Reads. When this level is set, a transaction cannot read data that is being modified by another transaction. This forces the user to wait for the other transaction to finish its job. Suppose User A is trying to read a row which is being updated by User B. Here, we are asking User A to wait for User B to finish its update task, and giving the updated/correct data to User A. But the problem with this level is that it can't resolve Phantom Reads or Inconsistency Analysis, i.e. it asks User A to wait for a read, but not for an update or insert.

Example
SET TRANSACTION ISOLATION LEVEL  
READ COMMITTED 
BEGIN TRANSACTION MyTransaction 
BEGIN TRY 
UPDATE Account SET Debit=100 WHERE Name='John Cena' 
UPDATE ContactInformation SET Mobile='1234567890' WHERE Name='The Rock' 
COMMIT TRANSACTION MyTransaction 
PRINT 'TRANSACTION SUCCESS' 
END TRY 
BEGIN CATCH 
ROLLBACK TRANSACTION MyTransaction 
PRINT 'TRANSACTION FAILED' 
END CATCH 
Repeatable Read

This level does everything that Read Committed does, but it has one additional benefit: updates by User B against rows that User A has read must wait until User A's transaction finishes, not just reads. Insert queries don't have to wait, however, so this level still allows the Phantom Read problem.

Example
SET TRANSACTION ISOLATION LEVEL  
REPEATABLE READ
Snapshot

This level takes a snapshot of the current data, and every transaction works on its own copy of the data. When User A reads, he sees the data as it existed at the start of his transaction, so his reads are consistent; if he tries to update a row that another transaction has changed in the meantime, his transaction fails with an update conflict and must be retried. With this level we are not blocking User A, but we are still getting a high level of data integrity.

Example
SET TRANSACTION ISOLATION LEVEL  
SNAPSHOT 
Serializable

This is the highest isolation level provided by SQL Server transactions. We can prevent the Phantom Read problem by implementing this level of isolation. It asks other users to wait for the current transaction to finish before performing any kind of conflicting operation.

Example
SET TRANSACTION ISOLATION LEVEL  
SERIALIZABLE

Isolation levels also bring a problem called a "deadlock": both transactions lock an object and wait for each other to finish the job. A deadlock is very dangerous because it decreases the concurrency and availability of the database and the database objects. We will discuss deadlocks in a later part of this topic.

I hope I covered the necessary points that can help many of us implement isolation levels in a project, along with identifying the real situations where they apply. Believe me, friends, it is good practice to implement transactions with an isolation level in every type of project.

HostForLIFE.eu SQL Server 2016 Hosting
HostForLIFE.eu is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.

 



SQL Server 2016 Hosting - HostForLIFE.eu :: FORMATMESSAGE

clock February 21, 2017 08:25 by author Peter

In this post, I will show you how to use bigint with FORMATMESSAGE. SQL Server 2016 added the FORMATMESSAGE function. According to Books Online, FORMATMESSAGE constructs a message from an existing message in sys.messages or from a provided string. The functionality of FORMATMESSAGE resembles that of the RAISERROR statement. However, RAISERROR prints the message immediately, while FORMATMESSAGE returns the formatted message for further processing.

So let's take a look at this new function, run the following
SELECT FORMATMESSAGE('Signed int %i, %i', 50, -50)
SELECT FORMATMESSAGE('Unsigned int %u, %u', 50, -50);


Here is the output if you run that

--------------------------------------------
Signed int 50, -50
Unsigned int 50, 4294967246

Here is what the type specifications that you can use are

Type specification         Represents
d or i                          Signed integer
o                                Unsigned octal
s                                String
u                               Unsigned integer
x or X                         Unsigned hexadecimal

We used i to denote a signed integer, and u to denote an unsigned integer.

Let's look at another example, this time using a variable. The variable will be an integer, and we are using i as the type specification.

DECLARE @Val int = 1
SELECT FORMATMESSAGE('The value you supplied %i is incorrect!', @Val);


Here is the output
---------------------------------------
The value you supplied 1 is incorrect!

That worked without a problem. Now let's use a variable of the bigint data type, with the same type specification as before.

DECLARE @Val bigint = 1
SELECT FORMATMESSAGE('The value you supplied %i is incorrect!', @Val);

Here is the output
---------------------------------------------------------------------------
Error: 50000, Severity: -1, State: 1. (Params:).


The error is printed in terse mode because there was error during formatting.
Tracing, ETW, notifications etc are skipped.

As you can see that did not work, so what can we do?
One thing we can do is convert the value to a varchar and then use s as the type specification.
DECLARE @Val bigint = 1
SELECT FORMATMESSAGE('The value you supplied %s is incorrect!',
   CONVERT(VARCHAR(100),@Val));

You will again get this as output
---------------------------------------

The value you supplied 1 is incorrect!
So converting to varchar worked, but what if we want to use a bigint data type without converting to a varchar?

Another way is to use I64d as the type specification
DECLARE @Val bigint = 1
SELECT FORMATMESSAGE('The value you supplied %I64d is incorrect!', @Val);


You will get this
---------------------------------------

The value you supplied 1 is incorrect!
So there you have it: if you want to use bigint with FORMATMESSAGE, use I64d as the type specification, or convert to varchar and use s as the type specification.

HostForLIFE.eu SQL Server 2016 Hosting
HostForLIFE.eu is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.

 



SQL Server 2016 Hosting - HostForLIFE.eu :: How to Auto Increment ID Column Using nvarchar DataType In SQL Server?

clock February 14, 2017 07:53 by author Peter

This article is about an auto-increment ID column using the nvarchar datatype in SQL Server 2016. SQL Server provides the identity property to get auto-increment record values where the datatype used is "int". This article explains a way to use an nvarchar datatype column to get auto-incremented record values.
e.g. S00001, E00001, INV00001. (Here there are prefix like S, E, INV used to denote the record ids.)

This article provides a solution to use alphanumeric record ids incremented automatically.

Solution
Create an Employee table as below,
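A minimal sketch of what such a table might look like (the exact column sizes are assumptions):

CREATE TABLE employee 
( 
    ID         INT NOT NULL, 
    EmployeeNo NVARCHAR(10) NOT NULL, 
    Name       NVARCHAR(50) NOT NULL 
)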

Here, the "ID" column stores the record id, similar to a serial number for each entry. The "EmployeeNo" column stores the alphanumeric auto-increment id, and "Name" stores the name of the employee. Create a Windows Form with a TextBox and a Button control.


The above figure shows the use of a textbox to enter name in the ”employee” table where the id is auto-incremented.
Code
    Import the namespace System.Data.SqlClient, which provides SqlConnection, SqlCommand and SqlDataReader.

Create 2 methods OpenConnection() and CloseConnection() to handle the SQL Server Connection object.
        private void OpenConnection(SqlConnection con) { 
            con.ConnectionString = "Data Source=.; Initial Catalog=demodb; Integrated Security=True"; 
            if (con.State == ConnectionState.Closed) { 
                con.Open(); 
            } 
        } 
        private void CloseConnection(SqlConnection con) { 
            con.Close(); 
        } 


On the Insert button click, the id is auto-incremented and the relevant data (id, employeeno and name) is inserted into the employee table.
        SqlConnection con = new SqlConnection(); 
        OpenConnection(con); 
        string id = AutoIncrementID(); 
        int idLimit = 7; 
        string eno = "E" + ZeroAppend("0000000" + id, idLimit); 
        string query = "insert into employee values ('" + id + "', '" + eno + "', '" + txtName.Text + "')"; 
        SqlCommand cmd = new SqlCommand(query, con); 
        int i = cmd.ExecuteNonQuery(); 
        if (i > 0) { 
            MessageBox.Show("Data Added", "Add", MessageBoxButtons.OK, MessageBoxIcon.Information); 
        } 
        CloseConnection(con); 

Here, the method AutoIncrementID() computes the next id based on the maximum ID already present in the table.
        private string AutoIncrementID() { 
            SqlConnection con = new SqlConnection(); 
            OpenConnection(con); 
            SqlCommand cmd = new SqlCommand("SELECT ISNULL(MAX(ID),0) + 1 from employee", con); 
            SqlDataReader dr = cmd.ExecuteReader(); 
            string id = null; 
            if (dr.Read()) { 
                id = dr[0].ToString(); 
            } 
            CloseConnection(con); 
            return id; 
        } 
        //The function ZeroAppend is used to append “0” after the prefix to the code. 
        public static string ZeroAppend(string data, int idLimit) { 
            return data.Substring(data.Length - idLimit); 
        } 

Output,

HostForLIFE.eu SQL Server 2016 Hosting
HostForLIFE.eu is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.

 



