European Windows 2012 Hosting BLOG

BLOG about Windows 2012 Hosting and SQL 2012 Hosting - Dedicated to European Windows Hosting Customers

SQL Server 2016 Hosting - HostForLIFE.eu :: Using Materialized Views Effectively

September 18, 2018 09:08 by author Peter

Materialized views have served me well in day-to-day life – both as an application developer and business intelligence developer. In this article, I’ll explain some of the “why’s,” along with a specific example that takes advantage of the SQL MERGE statement to incrementally reconcile changes to a persistent data store, optimized for reads.

Motivations and Mechanics
There is plenty of good detail elsewhere on why and how we might use materialized views, but I can offer some practical examples, too.

We had a system with an event table; we could determine a lot of detail from events, but querying against them was complex, and ultimately the product of those queries was heavily used. Simply putting those queries into views was not ideal for performance (it was tried!). Instead, we used triggers on the event table to calculate and store the resulting query values. The rule was that the only way they were updated was through event changes, and this was an efficient update, based on the row-by-row feed (leveraging indexes). In keeping with the principle of this being a materialized view, we could have lost this persisted table and subsequently fully recovered it from the original view that served as the source for the drip-feed. These persisted values were used extensively by application and report alike, but this was not a “free” process: OLTP transactions bore the cost of maintaining the table, but the benefits far outweighed that cost.

In a BI setting, we had a report that became extremely slow. The reporting tool (out of our control) had constructed some ugly SQL, largely due to the natural complexity and multiple layers of views. The results were correct – but a correct report isn’t useful if it never completes! I identified a set of three attributes that, if persisted in an additional table, would likely act as a core filter for the report (based on what I expected the reporting tool and SQL Server to do). That set of attributes could be persisted as a read-only materialized view, and the result was exactly as desired: the report became fast with no other changes.

What about alternatives such as indexed views? They are a very real option – but only when the limitations of indexed views don’t become a problem. In both situations described above, the need for OUTER JOINs and subqueries automatically ruled out indexed views. For materialized views, there are really no limitations on the complexity of source queries, and since the target is a physical table, we can use all available tools such as partitioning, COLUMNSTORE indexes, etc.

A consideration when using materialized views is latency. In my first example, using a trigger meant the materialized view was always in sync with source table updates. In the second example, the refresh of the materialized view could occur as a final step of the process that populated the source tables (effectively ETL). In many other cases, it’s not practical to keep your materialized view in sync with your source in real time. If that’s the case, you’ll need to decide how current your view needs to be: is one minute good enough? An hour? A day? At that point, SQL Agent or your scheduler of choice can be used.
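If SQL Agent is the scheduler, a refresh job can be created in script. Here is a hedged sketch (the job, procedure, and database names are illustrative; the same setup is available through the SSMS Agent UI):
    EXEC msdb.dbo.sp_add_job @job_name = N'Refresh materialized view';
    EXEC msdb.dbo.sp_add_jobstep @job_name = N'Refresh materialized view',
        @step_name = N'Run MERGE refresh', @subsystem = N'TSQL',
        @command = N'EXEC dbo.up_RefreshMatView;', @database_name = N'MyDatabase';
    -- freq_type 4 = daily, freq_subday_type 4 = minutes: run every 5 minutes
    EXEC msdb.dbo.sp_add_jobschedule @job_name = N'Refresh materialized view',
        @name = N'Every 5 minutes', @freq_type = 4, @freq_interval = 1,
        @freq_subday_type = 4, @freq_subday_interval = 5;
    EXEC msdb.dbo.sp_add_jobserver @job_name = N'Refresh materialized view';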

I’ve already alluded to a few ways to update a materialized view – one of which is triggers – but when you’re not using triggers, what are your options?

  1. TRUNCATE / INSERT. This tends to be straightforward, but it has limitations, including the fact that you’ll have some period when your table is empty before it’s refreshed. For cases where you have no good time window to filter an incremental update, or where most of your data can change, this approach lets you effectively rebuild your entire table from scratch.
  2. Using sp_rename (or ALTER SCHEMA). This approach is like TRUNCATE/INSERT, but you fill a work table, then swap out your “real” materialized view and swap in the work table to replace it, with the swap itself in a transaction (see the sketch after this list). The advantage is that the period when the table appears empty can be eliminated. The problem tends to be with blocking: a schema lock is required to do the swap operation, and although the swap itself can be fast, it can be blocked by readers and writers. (With a heavily used table, this can become a problem.)
  3. Using a MERGE statement. This tends to be a good choice, especially if only a small subset of your rows will change during each refresh.
  4. Using SSIS. This approach is particularly suited to BI solutions, most often when combining data from disparate sources. Ultimately, though, with something like materialized view population, the final step can still involve a MERGE statement, called through a SQL Task.
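Here is a minimal sketch of the swap in option 2, assuming a work table dbo.MatView_Work has just been fully populated (all object names are illustrative):
    BEGIN TRAN;
        -- sp_rename takes the schema-qualified old name; the new name is unqualified.
        EXEC sp_rename 'dbo.MatView', 'MatView_Old';
        EXEC sp_rename 'dbo.MatView_Work', 'MatView';
    COMMIT TRAN;
    -- The old table can then be truncated and reused as the next refresh's work table.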


A More Detailed Example

Let’s look at a contrived example that contains a single core event table and a code table:
    CREATE SCHEMA [Source] AUTHORIZATION dbo 
    GO 
    CREATE SCHEMA [Dest] AUTHORIZATION dbo 
    GO 
     
    CREATE TABLE [Source].[EventType] ( 
    EventTypeID tinyint NOT NULL IDENTITY PRIMARY KEY, 
    EventTypeCode varchar(20) NOT NULL, 
    EventTypeDesc varchar(100) NOT NULL); 
     
    INSERT [Source].[EventType] (EventTypeCode, EventTypeDesc) VALUES ('ARRIVE', 'Widget Arrival'); 
    INSERT [Source].[EventType] (EventTypeCode, EventTypeDesc) VALUES ('CAN_ARRIVE', 'Cancel Widget Arrival'); 
    INSERT [Source].[EventType] (EventTypeCode, EventTypeDesc) VALUES ('LEAVE', 'Widget Depart'); 
    INSERT [Source].[EventType] (EventTypeCode, EventTypeDesc) VALUES ('CAN_LEAVE', 'Cancel Widget Depart'); 
     
    CREATE TABLE [Source].[Event] ( 
    WidgetID int NOT NULL, 
    EventTypeID tinyint NOT NULL REFERENCES [Source].[EventType] (EventTypeID), 
    TripID int NOT NULL, 
    EventDate datetime NOT NULL, 
    PRIMARY KEY (WidgetID, EventTypeID, TripID, EventDate)); 
       
    CREATE INDEX IDX_Event_Date ON [Source].[Event] (EventDate, EventTypeID) INCLUDE (WidgetID); 
    CREATE INDEX IDX_Event_Widget ON [Source].[Event] (WidgetID, TripID, EventDate) INCLUDE (EventTypeID); 
    CREATE INDEX IDX_Event_Trip ON [Source].[Event] (TripID, WidgetID); 

The nature of this scenario is that we record the dates of arrival and departure for widgets (key: WidgetID), which can come and go multiple times based on a “trip” (key: TripID). If we incorrectly recorded an arrival or departure, we use a cancellation event that ties back to the widget and trip in question. This fits in with the idea of a log table that only allows insertion. (Truthfully, we could also have done this via “canceled” attributes, but let’s assume this is our standard.)
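To make the event model concrete, here are a few illustrative rows (the values are hypothetical; the EventTypeIDs follow the identity order above, so 1 = ARRIVE, 2 = CAN_ARRIVE, 3 = LEAVE, 4 = CAN_LEAVE):
    INSERT [Source].[Event] (WidgetID, EventTypeID, TripID, EventDate) VALUES (1, 1, 100, '20180901 08:00'); -- widget 1 arrives on trip 100
    INSERT [Source].[Event] (WidgetID, EventTypeID, TripID, EventDate) VALUES (1, 3, 100, '20180902 17:30'); -- ...and later departs
    INSERT [Source].[Event] (WidgetID, EventTypeID, TripID, EventDate) VALUES (2, 1, 101, '20180901 09:15'); -- widget 2 arrives on trip 101
    INSERT [Source].[Event] (WidgetID, EventTypeID, TripID, EventDate) VALUES (2, 2, 101, '20180901 09:45'); -- ...but that arrival is cancelled
    INSERT [Source].[Event] (WidgetID, EventTypeID, TripID, EventDate) VALUES (2, 1, 102, '20180903 10:00'); -- widget 2 arrives again on trip 102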

Let’s say our query of interest is this, placing it in a view for convenience,
    CREATE VIEW [Dest].[uv_WidgetLatestState] 
    AS 
    SELECT 
        lw.WidgetID 
        , la.LastTripID 
        , lw.LastEventDate 
        , la.ArrivalDate 
        , (SELECT MAX(de.EventDate) 
            FROM [Source].[Event] de 
            WHERE de.EventTypeID = 3 
            AND de.WidgetID = lw.WidgetID 
            AND de.TripID = la.LastTripID 
            AND NOT EXISTS 
                (SELECT 0 
                FROM [Source].[Event] dc 
                WHERE lw.WidgetID = dc.WidgetID 
                AND la.LastTripID = dc.TripID 
                AND dc.EventTypeID = 4 
                AND dc.EventDate > de.EventDate)) AS DepartureDate 
    FROM 
        (SELECT 
            e.WidgetID 
            , MAX(e.EventDate) AS LastEventDate 
        FROM 
            [Source].[Event] e 
        GROUP BY 
            e.WidgetID) lw 
        LEFT OUTER JOIN 
        (SELECT 
            ae.WidgetID 
            , ae.TripID AS LastTripID 
            , ae.EventDate AS ArrivalDate 
        FROM 
            [Source].[Event] ae 
        WHERE 
            ae.EventTypeID = 1 
        AND ae.EventDate = 
            (SELECT MAX(la.EventDate) 
            FROM [Source].[Event] la 
            WHERE la.EventTypeID = 1 
            AND la.WidgetID = ae.WidgetID 
            AND NOT EXISTS 
                (SELECT 0 
                FROM [Source].[Event] ac 
                WHERE la.WidgetID = ac.WidgetID 
                AND la.TripID = ac.TripID 
                AND ac.EventTypeID = 2 
                AND ac.EventDate > la.EventDate))) AS la ON lw.WidgetID = la.WidgetID 


This tells us the most recent arrival and departure date for a widget, along with the trip identifier for the last trip, based on event dates. (This supports the possibility that an arrival could get cancelled, although our standard will be to only store cases where we have a non-NULL ArrivalDate.) This type of query can clearly be useful for managing the current business process for our hypothetical widgets - our events remain as detail to document history.

Ignoring the fact that we might be okay with performance achieved only by indexing on the source tables, suppose it’s important we persist the results of this query. Our materialized view table could be defined as,
    CREATE TABLE [Dest].[WidgetLatestState] (   
    WidgetID int NOT NULL PRIMARY KEY,   
    LastTripID int NOT NULL,   
    LastEventDate datetime NOT NULL,   
    ArrivalDate datetime NOT NULL,   
    DepartureDate datetime NULL); 
       
    CREATE INDEX IDX_WidgetLatestState_LastTrip ON [Dest].[WidgetLatestState] (LastTripID); 
    CREATE INDEX IDX_WidgetLatestState_Arrival ON [Dest].[WidgetLatestState] (ArrivalDate); 
    CREATE INDEX IDX_WidgetLatestState_Departure ON [Dest].[WidgetLatestState] (DepartureDate); 


In this case, our key for the query would be the WidgetID – all other values are derived details about a given widget. We’ve defined this as the primary key, and we’ve added a couple of additional non-clustered indexes on the target.

Let’s also assume we’re fine with having the target data updated on a schedule. We can do an effective “full refresh” using a single MERGE statement,
    MERGE [Dest].[WidgetLatestState] AS a 
     USING ( 
     SELECT 
       v.[WidgetID] 
        , v.[LastTripID] 
        , v.[LastEventDate] 
        , v.[ArrivalDate] 
        , v.[DepartureDate] 
     FROM 
       [Dest].[uv_WidgetLatestState] v 
     ) AS T 
     ON 
     ( 
       a.[WidgetID] = t.[WidgetID] 
     ) 
    WHEN MATCHED AND t.ArrivalDate IS NOT NULL THEN 
         UPDATE 
          SET LastTripID = t.LastTripID 
        , LastEventDate = t.LastEventDate 
        , ArrivalDate = t.ArrivalDate 
        , DepartureDate = t.DepartureDate 
    WHEN NOT MATCHED BY TARGET AND t.ArrivalDate IS NOT NULL THEN 
          INSERT ( 
            WidgetID 
        , LastTripID 
        , LastEventDate 
        , ArrivalDate 
        , DepartureDate 
          ) VALUES ( 
            t.[WidgetID] 
        , t.[LastTripID] 
        , t.[LastEventDate] 
        , t.[ArrivalDate] 
        , t.[DepartureDate] 
          ) 
    WHEN MATCHED AND t.ArrivalDate IS NULL THEN 
         DELETE; 

My preference in this situation is to deal in three primary objects: the target table (the materialized view), a view that expresses the desired query, and a procedure that houses our MERGE (plus adds some logging). With the hard logic contained in the view, the MERGE is largely boilerplate code, which makes it easy to templatize.
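A hedged sketch of that wrapper procedure, wrapping the MERGE shown earlier (the procedure name and the logging approach are illustrative):
    CREATE PROCEDURE [Dest].[up_WidgetLatestState_Refresh]
    AS
    BEGIN
        SET NOCOUNT ON;
        DECLARE @start datetime2 = SYSDATETIME();
        DECLARE @rows int;

        MERGE [Dest].[WidgetLatestState] AS a
        USING (SELECT v.WidgetID, v.LastTripID, v.LastEventDate, v.ArrivalDate, v.DepartureDate
               FROM [Dest].[uv_WidgetLatestState] v) AS t
           ON (a.WidgetID = t.WidgetID)
        WHEN MATCHED AND t.ArrivalDate IS NOT NULL THEN
            UPDATE SET LastTripID = t.LastTripID, LastEventDate = t.LastEventDate,
                       ArrivalDate = t.ArrivalDate, DepartureDate = t.DepartureDate
        WHEN NOT MATCHED BY TARGET AND t.ArrivalDate IS NOT NULL THEN
            INSERT (WidgetID, LastTripID, LastEventDate, ArrivalDate, DepartureDate)
            VALUES (t.WidgetID, t.LastTripID, t.LastEventDate, t.ArrivalDate, t.DepartureDate)
        WHEN MATCHED AND t.ArrivalDate IS NULL THEN
            DELETE;

        SET @rows = @@ROWCOUNT;
        -- Minimal logging; substitute your logging table or framework of choice.
        PRINT CONCAT('WidgetLatestState refresh: ', @rows, ' row(s) in ',
                     DATEDIFF(ms, @start, SYSDATETIME()), ' ms.');
    END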

I’ve created a test harness available on GitHub that demonstrates the performance of doing a TRUNCATE/INSERT using 2,150,000 source events. (Note: if you clone this repository to try it out, search for the “TODO” and change the file path of the data file to match your local directory for where you extract the Event20180903-1336.dat file.) The result of 4.15 seconds (3 trials, averaged) becomes a baseline on top of which we can add tweaks. The most obvious is to see what the performance is by doing a MERGE with no data changes, post-initial-population. This is 3.40 seconds, and checking @@ROWCOUNT after the operation, we see that it’s updated 196,291 records – which is basically everything in the materialized view. In this case, there was a mild performance gain over TRUNCATE/INSERT and we didn't spend any time with no data in the target table.

The first real tweak is to filter our first WHEN MATCHED clause to check for material changes on non-key columns, as accomplished by,
    WHEN MATCHED  
         AND t.ArrivalDate IS NOT NULL 
         AND ((a.[LastTripID] <> CONVERT(int, t.[LastTripID])) 
              OR (a.[LastEventDate] <> CONVERT(datetime, t.[LastEventDate])) 
              OR (a.[ArrivalDate] <> CONVERT(datetime, t.[ArrivalDate])) 
              OR (a.[DepartureDate] <> CONVERT(datetime, t.[DepartureDate]) OR (a.[DepartureDate] IS NULL AND t.[DepartureDate] IS NOT NULL) OR (a.[DepartureDate] IS NOT NULL AND t.[DepartureDate] IS NULL))) THEN 


This costs something, namely the need to read and compare data, but saves on the need to write back rows where nothing changed. In the test harness, this shows as a net benefit with a result of 1.14 seconds, and @@ROWCOUNT shows zero rows affected.

Another tweak is to use a control date to limit the scope of change checks to a certain timeframe. That timeframe could be fixed (e.g., the last month) or tied to when the process last ran. The latter obviously leads to the minimum necessary range but does require some extra infrastructure, including maintaining the control date in a persisted form. For storage of this date, I prefer a name/value pair table that can support multiple such needs within a single (BI) database. Doing it this way also requires a way to identify “change dates” in the source data, something I discuss in another recent C# Corner article. (In my example here, LastEventDate is assumed to be monotonically increasing.)
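One possible shape for that name/value pair table, plus reading and updating the control date (all names here are illustrative):
    CREATE TABLE [Dest].[ProcessControl] (
    ControlName varchar(100) NOT NULL PRIMARY KEY,
    ControlValue varchar(100) NOT NULL);

    -- Read the last-processed date, defaulting to a very early date on the first run:
    DECLARE @lastprocessed datetime =
        COALESCE((SELECT CONVERT(datetime, ControlValue, 120)
                  FROM [Dest].[ProcessControl]
                  WHERE ControlName = 'WidgetLatestState_LastProcessed'), '19000101');

    -- ...run the filtered MERGE, then persist the new high-water mark
    -- (@newlastdate would be the MAX(LastEventDate) captured during the refresh):
    UPDATE [Dest].[ProcessControl]
    SET ControlValue = CONVERT(varchar(100), @newlastdate, 120)
    WHERE ControlName = 'WidgetLatestState_LastProcessed';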

Using a control date is something I demonstrate in the GitHub script, with an important change being to filter the query of the source view itself,
    USING ( 
    SELECT 
      v.[WidgetID] 
    , v.[LastTripID] 
    , v.[LastEventDate] 
    , v.[ArrivalDate] 
    , v.[DepartureDate] 
    FROM 
      [Dest].[uv_WidgetLatestState] v 
    WHERE 
      v.LastEventDate > @lastprocessed 


The remainder of the query already has the necessary bits to insert, update or delete depending on what the final state of the data is, as returned by the view.

The performance of this final version is 0.46 seconds - 89% better than our baseline! (That includes all the overhead of dealing with loading and storing the control date.) Doing a similar test with some added new events shows similar performance.

Our non-clustered indexes are there to support efficient lookup against the calculated fields (e.g. LastEventDate, DepartureDate). I also tried running a query against the source view as opposed to the materialized view as another benchmark,
    SELECT @temp = COUNT(*) 
    FROM [Dest].[uv_WidgetLatestState] w 
    WHERE w.DepartureDate IS NULL 
    AND w.ArrivalDate IS NOT NULL; 

Unsurprisingly the materialized view performance is 94% better than using the source tables through the “non-materialized” view.

Conclusion
Materialized views can be a life-saver for both applications and BI solutions – but they should follow the design principle of being a read-only, fully-computed product of other data.

I showed how a simple MERGE statement supported the incremental population of a materialized view, with good performance. My example was highly simplistic, and you should evaluate your workload based on what’s required and what’s acceptable, considering performance, complexity, data integrity, etc. Obviously doing this for every query in your system is impractical, so pick and choose wisely. I have another blog article that expands on some of what I’ve discussed here and offers some ideas on how to templatize the building of materialized views, significantly cutting down on the coding effort.

European SQL 2016 Hosting
European best, cheap and reliable ASP.NET hosting with instant activation. HostForLIFE.eu is the #1 recommended Windows and ASP.NET hosting in the European continent, with a 99.99% uptime guarantee of reliability, stability and performance. The HostForLIFE.eu security team is constantly monitoring the entire network for unusual behaviour. We deliver hosting solutions including Shared hosting, Cloud hosting, Reseller hosting, Dedicated Servers, and IT as a Service for companies of all sizes.

 

 



SQL Server 2016 Hosting - HostForLIFE.eu :: 10 SQL Server Shortcuts You Must Know

September 6, 2018 08:30 by author Peter
SQL Server is a relational database management system (RDBMS) developed by Microsoft. As a database server, it is a software product whose primary function is storing and retrieving data as requested by other software applications, which may run either on the same computer or on another computer across a network. Many developers are familiar with some of the SQL Server Management Studio shortcuts listed below. Using the keyboard is always a preferred way of working, as it boosts working speed tremendously. So I thought I would share my experience by listing the shortcuts I usually find helpful while working with SQL Server Management Studio.

New Window
CTRL + N: Open up a new query Window in SQL Server Management Studio (SSMS).

Comment Code
CTRL + K, CTRL + C: Comment the selected text.
CTRL + K, CTRL + U: Uncomment the selected text.

Go to Line
CTRL + G: Go to specified line number in the current query window.

Result Pane
CTRL + R: Shows/hides the results pane (toggles the query results).
CTRL + T: Display results to Text
CTRL + D: Display results to Grid
CTRL + SHIFT + F: Display results to File

Change Case

CTRL + SHIFT + U: Change the selected text to UPPER CASE.
CTRL + SHIFT + L: Change the selected text to lower case.
 
IntelliSense
CTRL + SPACE, TAB: Ctrl + Space shows IntelliSense suggestions, and Tab completes the selected suggestion.
Query Execution
F5 or ALT + X or CTRL + E: Execute all the queries written in the query window.
CTRL + F5: Parse the query to check if there are any syntax errors.

Profiler
CTRL + ALT + P: Open up SQL Server Profiler. Profiler is generally used for tracing and analysing.

System SP
ALT + F1 (select any object name in the query editor and press ALT + F1): It runs the sp_help system stored procedure against that object.
CTRL + 1: In the same way, it runs the sp_who system stored procedure. It provides details about current connections and sessions: spid, login name, host name, the database in use, and so on.
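For reference, the same information is available by running the procedures directly (the table name is illustrative):
EXEC sp_help 'dbo.Orders';  -- what ALT + F1 runs against the selected object
EXEC sp_who;                -- what CTRL + 1 runs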
Screen
SHIFT + ALT + ENTER: Toggle full screen mode.
I hope you found the post "Ten SQL Server Shortcuts You Must Know" useful and worth reading.

What do you think?
If you have any questions or suggestions, please feel free to email us or put your thoughts in the comments below. We would love to hear from you. If you found this post useful, please share it with your friends and help them learn.

HostForLIFE.eu SQL Server 2016 Hosting
HostForLIFE.eu is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.



European SQL 2016 Hosting - HostForLIFE.eu :: SQL QUERY With CONVERT And VARCHAR

August 23, 2018 09:05 by author Peter

Yesterday, I faced a problem which I would like to highlight for you. One of my testing users generated an RDLC report and was very shocked to find that the email address he was testing was truncated, both when viewed in the report and in the "Export to Excel" output.

As this report had been working for the last 2 to 3 years and for multiple users, I thought the following points would help me find the cause:

  • The email address might be wrong for this employee.
  • There might be a substring function applied to the Email Address column in the RDLC report.
  • Debugging at the code level to check whether a substring was used.

But I was surprised to identify the root cause. For the email address, the SQL code was written as SELECT CONVERT(VARCHAR, EmailAddress), which was truncating it and giving us the wrong result.

I've written a dummy email address for illustration.

If you try out -
select CONVERT(VARCHAR, '987654321.987126515151@abcdef.com')  -- a long, illustrative dummy address

It will return the output "987654321.987126515151@abcdef." - the value is cut at 30 characters, because when no length is specified, CONVERT(VARCHAR, ...) defaults to a maximum of 30 characters.
So, just for information: always use CONVERT(VARCHAR(MAX), <ColumnName>) or CONVERT(VARCHAR(<size>), <ColumnName>) to get the correct result.
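A quick side-by-side demonstration (the variable is illustrative):
DECLARE @email varchar(100) = '987654321.987126515151@abcdef.com';
SELECT CONVERT(VARCHAR, @email)      AS Truncated30,  -- no length given, defaults to VARCHAR(30)
       CONVERT(VARCHAR(MAX), @email) AS FullValue;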
Hope it will help you out.


 



SQL Server 2016 Hosting - HostForLIFE.eu :: Contained Database : No Need For Server Level Logins Anymore

August 21, 2018 11:06 by author Peter

Starting in SQL Server 2012 and in Azure SQL Database, Microsoft introduced the concept of a contained database. A contained database is a database that is completely independent of the instance of SQL Server that hosts it, including the master database. Theoretically, this makes a database much easier to move between servers. (You’ll note the absence of SQL Agent jobs from this post; that’s a different problem.) One of the biggest benefits is that it allows database-level authentication, so there is no need to have user logins at the server level.

Contained databases make a database more portable. I can back up and restore it to any instance of SQL Server, and the database will carry all its logins with it. There is no longer a need to script out all logins and create them at the server instance level for a user to connect to that restored database. I personally have run into the issue of missing logins at the instance level when restoring to another server. In those cases, I had to go back, script out the logins, and apply them to the new instance. You can see how, in an emergency where the source server may not be available, not having access to those logins could present a real issue. This is also beneficial for databases that are members of Always On Availability Groups - you don’t have to create logins on each server.

In addition to portability, contained databases expand control of user creation beyond just the database administrator or highly privileged accounts. Traditional databases require server-level logins and server-level permissions in order to grant database rights to a user. With contained databases, you avoid this: the database owner and users with the ALTER ANY USER permission can control access to the database. One drawback is that the database user account must be independently created in each database the user needs, which adds a little more maintenance.

Below, I will show you how to enable this option at both the server and database levels. From there, I will show you how to create user logins and what the difference is between traditional (non-contained) login accounts and contained users.

Enable at Server level

Script
EXEC sys.sp_configure N'contained database authentication', N'1' 
GO 
RECONFIGURE WITH OVERRIDE 
GO 

GUI

Enable at database level

Note the word “Partial” in the dropdown and script.

PER MSDN

The contained database feature is currently available only in a partially contained state. A partially contained database is a contained database that allows the use of uncontained features.

Use the sys.dm_db_uncontained_entities and sys.sql_modules (Transact-SQL) view to return information about uncontained objects or features. By determining the containment status of the elements of your database, you can discover what objects or features must be replaced or altered to promote containment.
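For example, a quick check run inside the database lists anything that would block full containment (the output columns are assumed from the documented shape of the DMV):
SELECT ue.class_desc, ue.feature_name, OBJECT_NAME(ue.major_id) AS object_name
FROM sys.dm_db_uncontained_entities AS ue;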

Script
USE [master] 
GO 
ALTER DATABASE [AdventureWorks2016CTP3] SET CONTAINMENT = PARTIAL WITH NO_WAIT 
GO 


GUI


To Add a User
Below, you will note a few differences in syntax. Traditionally we create a server-level LOGIN and then a database USER mapped to it, while a contained database needs only the USER, created directly in the database with its own password. Granting database role membership works the same way in both cases, using ALTER ROLE ... ADD MEMBER.

Traditional NON-Contained, adding user and granting READ/WRITE to a database,
CREATE LOGIN JoeShmo WITH PASSWORD = '1234Password'; 
 
USE [AdventureWorks2016CTP3] 
GO 
CREATE USER [JoeShmo] FOR LOGIN [JoeShmo] 
GO 
USE [AdventureWorks2016CTP3] 
GO 
ALTER ROLE [db_datareader] ADD MEMBER [JoeShmo] 
GO 
USE [AdventureWorks2016CTP3] 
GO 
ALTER ROLE [db_datawriter] ADD MEMBER [JoeShmo] 
GO 


Contained database, adding a user and granting READ/WRITE to a database -- this works for both SQL Authentication and Windows. Note that the user must be created inside the contained database, so switch to it first.
USE [AdventureWorks2016CTP3] 
GO 
CREATE USER JoeShmo WITH PASSWORD = '1234strong_password'; 
GO 
ALTER ROLE [db_datareader] ADD MEMBER [JoeShmo] 
GO 
ALTER ROLE [db_datawriter] ADD MEMBER [JoeShmo] 
GO 


If you are changing to a contained database and want to convert all your server logins to contained database users, Microsoft has given us a great script to use, reposted below. It must be executed in the contained database.
DECLARE @username sysname ;  
DECLARE user_cursor CURSOR  
    FOR   
        SELECT dp.name   
        FROM sys.database_principals AS dp  
        JOIN sys.server_principals AS sp   
        ON dp.sid = sp.sid  
        WHERE dp.authentication_type = 1 AND sp.is_disabled = 0;  
OPEN user_cursor  
FETCH NEXT FROM user_cursor INTO @username  
    WHILE @@FETCH_STATUS = 0  
    BEGIN  
        EXECUTE sp_migrate_user_to_contained   
        @username = @username,  
        @rename = N'keep_name',  
        @disablelogin = N'disable_login';  
    FETCH NEXT FROM user_cursor INTO @username  
    END  
CLOSE user_cursor ;  
DEALLOCATE user_cursor ; 


Aside from the lack of support for msdb, the one other issue I’ve run into with contained databases was an application that spanned multiple databases but used SQL logins. In this case, it was a version of Dynamics. With Windows logins this is easy: you simply create the user in each database and let Active Directory deal with the passwords. With contained databases, however, the passwords are local to each database, so it’s a challenge to keep these accounts in sync. With my current customer in this situation, we’ve reverted to server logins and used dbatools to sync the passwords between servers. I can think of many ways contained databases can add benefits; I can’t wait to play around with them more.


 



SQL Server 2016 Hosting - HostForLIFE.eu :: Stuff Function In SQL Server

August 16, 2018 09:23 by author Peter

STUFF is a function in SQL Server used to perform special operations on a string value.
The following operations can be performed:

  • Remove string part from string expression.
  • Insert/Append string at specified index.

Syntax
select STUFF(string_value, start_index, no_of_chars_to_replace, replace_string); 

Remove String Part
select STUFF('hai_hello',0,2,'');  -- returns NULL: the start index 0 is invalid

Important Note
The start index begins at 1 in the STUFF function; passing 0 (or an index beyond the string length) returns NULL.

Proper Index
select STUFF('hai_hello',1,2,'');  -- returns 'i_hello'

Insert String Content
You can insert string content by specifying the index location and setting the number of characters to replace to zero. Note that the third parameter value should be zero.
DECLARE @testString varchar(3) = 'abc'; 
select STUFF('hai_hello', 1, 0, @testString);  -- returns 'abchai_hello'

Replace String Content
Unlike REPLACE, you cannot replace string content by specifying the old characters to find.
Instead, you replace by specifying the start location and the number of characters to replace.

Note
The third parameter value should be the length to replace.

Example 1
DECLARE @testString varchar(3) = 'abc'; 
select STUFF('hai_hello', 1, DATALENGTH(@testString), @testString);  -- length 3, so returns 'abc_hello'

Example 2
select STUFF('hai_hello',1,2,'abc');  -- returns 'abci_hello'
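A common practical use of STUFF is masking part of a string, for example hiding most of a card number (the value is illustrative):
SELECT STUFF('4111111111111111', 1, 12, REPLICATE('*', 12));  -- returns '************1111'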




SQL Server 2016 Hosting - HostForLIFE.eu :: How To Replace Newline Character From The SQL Server Field?

August 14, 2018 11:41 by author Peter

In this post, we will learn how to replace newline characters in SQL Server fields. We will use the REPLACE function to do the replacement and the CHAR function to produce the newline character being replaced. Below, I give the syntax of the REPLACE function, show how to use the CHAR function, and explain the meaning of 10 in CHAR(10).
Syntax
REPLACE(string, string_to_replace, replacement_string)

Description of parameter values:

  • string - Source string
  • string_to_replace - String to search for in string
  • replacement_string - Replacement string; every occurrence of string_to_replace in string is replaced with replacement_string

What is CHAR(10) in SQL? Execute the query below to check what CHAR(10) contains. CHAR(10) displays a blank result in the SQL query output because it is the line feed character (\n); its companion, CHAR(13), is the carriage return (\r).
SELECT CHAR(10) 

The query below replaces CHAR(10) with an HTML <br /> tag,
REPLACE(EventNote, CHAR(10),'<br />')

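Since Windows line endings are usually the CHAR(13) + CHAR(10) pair, a nested REPLACE handles both. Here is a hedged sketch against an assumed dbo.Events table (EventNote is the column from the example above):
UPDATE dbo.Events
SET EventNote = REPLACE(REPLACE(EventNote, CHAR(13) + CHAR(10), '<br />'), CHAR(10), '<br />');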




European SQL 2017 Hosting - HostForLIFE.eu :: To Overcome "The Given Key Was Not Present In The Dictionary" Exception In MySQL

August 9, 2018 08:37 by author Peter

In this article, I will give one of the solutions to overcome the "The given key was not present in the dictionary" exception in MySQL. Developers may face this error in many situations; I faced it during query execution after migrating an MSSQL database to MySQL.

Recently, I migrated a database from MSSQL to MySQL. After migration, I tried to run a simple select query with joins that had already worked in my application with the MSSQL connection string. When I ran the same query with the MySQL connection string, I got the “The given key was not present in the dictionary” exception. But if I ran the same query in MySQL Workbench, it worked fine.

So, here is my sample query which I tried to execute through MySqlDataAdapter.
Select * from usersdetails; 
nMySqlDataAdapter = new MySqlDataAdapter(xszQuery, zSqlConn); 


When this query is passed to MySqlDataAdapter, I get the exception “The given key was not present in the dictionary”. But the same query works fine in MySQL Workbench.

The same process works, however, if I pass the query like this,
Select usersID,usersname from usersdetails; 
nMySqlDataAdapter = new MySqlDataAdapter(xszQuery, zSqlConn); //Working fine without any exception. 

I searched for many solutions on Google, but none worked for me. Some said the connection string needed to include the charset=utf8 parameter, but that didn't work for me either.

Then I found something:
Select usersID,usersname from usersdetails; // this query working fine, no error 
Select usersID,usersname,usersAddress from usersdetails; // this query returns exception 
Select usersID,userGUID from usersdetails; // this query working fine, no error 
Select usersID,userGUID,userDepartment from usersdetails;  // this query returns exception 


So, the problem was something to do with the columns I was trying to select; then I looked for what made each column differ from the others.

What I found is that the “Collation” value of each column was different based on the column datatype; I don’t know on what basis these values were assigned during migration.
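A quick way to inspect the per-column collations (the table name is from the example above):
SELECT column_name, character_set_name, collation_name
FROM information_schema.columns
WHERE table_schema = DATABASE() AND table_name = 'usersdetails';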

What is Collation in MySQL?
A collation is a set of rules that defines how to compare and sort character strings. Each collation in MySQL belongs to a single character set. Every character set has at least one collation, and most have two or more collations.

A collation orders characters based on weights. Each character in a character set maps to a weight. Characters with equal weights compare as equal, and characters with unequal weights compare according to the relative magnitude of their weights.


I followed two different methods to overcome this exception.

Method 1
By selecting the “Table default” value in the “Collation” option for all columns in the table, we can overcome this exception.
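If you prefer a script over the table designer, a hedged equivalent is to normalize every column to a single character set and collation (the collation below is illustrative; use your table's default):
ALTER TABLE usersdetails CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci;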


Method 2
During migration from MSSQL to MySQL, in the manual editing step (step 3 of the wizard), we can see the tables, column names, and respective datatypes. By default, MySQL had taken “CHARACTER SET ‘utf8mb4’” as the charset value for some columns. We can edit this section: just select and delete “CHARACTER SET ‘utf8mb4’” and apply the changes. All columns’ “Collation” values then become “Table default” after migration.

By using these two methods we can overcome this exception. In this article, I have given one way to overcome the “The given key was not present in the dictionary” exception. If anybody knows another way to overcome it, please mention it in the comments. I hope this article is useful.




European SQL 2016 Hosting - HostForLIFE.eu :: Indexes In SQL Server

August 7, 2018 09:19 by author Peter

One of the most important routes to high performance in a SQL Server database is an index. An index is a database object used to speed up querying by providing quick access to rows in database tables. By using indexes we can save time and improve the performance of database queries and applications. An index contains keys built from one or more columns in the table, mapped to the storage location of the specified data. When we create an index on a column, SQL Server internally maintains a separate index structure, so that whenever a user tries to retrieve data from the table, SQL Server can use the index to locate and retrieve the required data very quickly.

The index type refers to the way the index is stored internally by SQL Server. A table can have at most one clustered index and, since SQL Server 2008, up to 999 non-clustered indexes. So a table can contain two types of indexes:

  • Clustered Index
  • Non-clustered Index

Clustered Indexes
The only time the data rows in a table are stored in sorted order is when the table contains a clustered index; the rows are ordered by the clustered index key. When a table has a clustered index, it is called a clustered table. If a table has no clustered index, its data rows are stored in an unordered structure called a heap. A table can have only one clustered index, and one is created by default when a primary key constraint is defined on the table.

Non-Clustered Indexes
Non-clustered indexes do not determine the order of the data rows in the table; the underlying data remains a heap or is ordered by the clustered index. A table can have up to 999 non-clustered indexes, and if we don't specify an index type when creating an index, it is created as non-clustered by default.
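A minimal sketch of both index types (the table and column names are hypothetical):
CREATE TABLE dbo.Orders (
OrderID int NOT NULL,
CustomerID int NOT NULL,
OrderDate datetime NOT NULL);

-- One clustered index per table: the data rows themselves are ordered by OrderID.
CREATE CLUSTERED INDEX IX_Orders_OrderID ON dbo.Orders (OrderID);

-- Non-clustered indexes (up to the limit) provide additional access paths.
CREATE NONCLUSTERED INDEX IX_Orders_Customer ON dbo.Orders (CustomerID, OrderDate);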




European SQL 2016 Hosting - HostForLIFE.eu :: Using UNION With Different Tables, Fields And Filtering

July 31, 2018 09:22 by author Peter

The command "UNION" is the perfect way to combine rows from two tables with the same data context. Whether or not they have the same fields, you need to classify the data.
Look at this select query.
Select t.* From ( 
(Select 1 as typePerson, tenName as namePerson, tenSalary as payMoney From Teachers Where tenAge>30) 
UNION 
(Select 0 as typePerson, stdName as namePerson, 0 as payMoney From Students Where stdYearFinish=2016) 
) t 
Where t.namePerson like 'Maria%' 
Order By t.namePerson, t.typePerson;


It creates a temporary alias “t”;
It classifies each data row via “typePerson”: 1 (true) for teachers and 0 (false) for students;
It filters teachers by age;
It filters students by the school end year 2016;
After the UNION, it filters by the field namePerson for values that begin with “Maria”.

Observations
All fields must be in the same position and of the same data type; within each branch you can use all kinds of selects, joins, where clauses, etc. The "UNION ALL" command performs better than UNION when you want to keep all rows, because UNION also removes duplicate rows between the branches. In this sample, you could also turn the query into a view.

CONCLUSION

You must be careful about the position and type of the fields, and you can use CAST too.

This selection is just an example.
The SQL UNION ALL operator is used to combine the result sets of 2 or more SELECT statements. It does not remove duplicate rows between the various SELECT statements (all rows are returned).
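As a small illustration of both points, matching types with CAST and keeping duplicates with UNION ALL (assuming the same two tables as above):
Select CAST(tenName as varchar(100)) as namePerson From Teachers 
UNION ALL 
Select CAST(stdName as varchar(100)) as namePerson From Students;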




SQL Server 2016 Hosting - HostForLIFE.eu :: Auto Query Generator In MSSQL Server

July 26, 2018 08:01 by author Peter

If you’re a developer, irrespective of the platform, you have to work with databases. Creating SQL statements for tables is quite often a monotonous job, and it gets hectic especially when dealing with gigantic tables that have hundreds of columns. Writing SQL statements manually every time becomes a tiresome process. Before explaining the script, I want to share the reason I wrote it and how it is helping my peers. We have code standards on the database side; the points below are those standards.

  • Need to maintain a separate stored procedure to every table
  • Don’t use * in the query instead specify the column
  • Use the correct data type and size of a column
  • Every parameter should be nullable in a stored procedure.

I am developing an application related to machines using .NET and SQL Server. The database design consists of some master tables and transactional tables. All the transactional tables have more than 30 columns.

To meet my code standards, I need to mention all columns with correct data type and size in stored procedure parameters like below,
CREATE PROC [dbo].[USP_PCNitemCreation] ( @Id int, @machineName varchar(50) = NULL, @furnacename varchar(50) = NULL, @minValue int = NULL, @maxValue int = NULL, @createdDate datetime = NULL ) 

All the queries should specify the columns instead of using the star (*).
select machineName,furnacename from trn_furnace where Id=@Id 

This consumes more time and is a boring task, so I planned to write a script to cut down on the time it takes and on the boring, repeated work. We cannot automate the logic, but we can automate the repeated task.

Then I wrote the below script, which really cuts down on all of the above pain points.

Auto Query Generator Stored Procedure for MSSQL Server,
CREATE PROC [dbo].[USP_QuerycreationSupport] 
( 
    @table_Name varchar(100) = NULL 
) 
AS 
BEGIN 
    DECLARE @InserCols   NVARCHAR(max) 
    DECLARE @Inserparam  NVARCHAR(max) 
    DECLARE @Insertquery NVARCHAR(max) 
    DECLARE @Selectquery NVARCHAR(max) 
    DECLARE @Update      NVARCHAR(max) 
    DECLARE @DeleteQuery NVARCHAR(max) 

    -- Stored procedure parameter list: @ColumnName datatype(size)=null, 
    SELECT '@' + c.name + SPACE(1) + 
           CASE CAST(t.name AS nvarchar(40)) 
               WHEN 'nvarchar' THEN t.name + '(' + CAST(c.max_length AS nvarchar(30)) + ')' 
               WHEN 'varchar'  THEN t.name + '(' + CAST(c.max_length AS nvarchar(30)) + ')' 
               WHEN 'char'     THEN t.name + '(' + CAST(c.max_length AS nvarchar(30)) + ')' 
               WHEN 'decimal'  THEN t.name + '(18,2)' 
               ELSE t.name 
           END + '=null,' AS colss 
    FROM sys.columns c 
    INNER JOIN sys.types t ON c.user_type_id = t.user_type_id 
    LEFT OUTER JOIN sys.index_columns ic ON ic.object_id = c.object_id AND ic.column_id = c.column_id 
    LEFT OUTER JOIN sys.indexes i ON ic.object_id = i.object_id AND ic.index_id = i.index_id 
    WHERE c.object_id = OBJECT_ID(@table_Name) 

    SELECT 'Insert query' 

    -- Comma-separated column list 
    SET @InserCols = (SELECT DISTINCT 
        (SELECT sc.name + ',' 
         FROM sys.tables st 
         INNER JOIN sys.columns sc ON st.object_id = sc.object_id 
         WHERE st.name = @table_Name 
         FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)')) 

    -- Remove the trailing comma 
    SELECT @InserCols = LEFT(@InserCols, LEN(@InserCols) - 1) 

    -- Comma-separated parameter list (@Column1, @Column2, ...) 
    SET @Inserparam = (SELECT DISTINCT 
        (SELECT '@' + sc.name + ',' 
         FROM sys.tables st 
         INNER JOIN sys.columns sc ON st.object_id = sc.object_id 
         WHERE st.name = @table_Name 
         FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)')) 

    SELECT @Inserparam = LEFT(@Inserparam, LEN(@Inserparam) - 1) 

    SET @Insertquery = 'insert into ' + @table_Name + '(' + @InserCols + ') values (' + @Inserparam + ')' 
    SELECT @Insertquery 

    SELECT 'Update Query' 

    -- SET list (Column1=@Column1, ...) 
    SET @Update = (SELECT DISTINCT 
        (SELECT sc.name + '=@' + sc.name + ',' 
         FROM sys.tables st 
         INNER JOIN sys.columns sc ON st.object_id = sc.object_id 
         WHERE st.name = @table_Name 
         FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)')) 

    SELECT @Update = LEFT(@Update, LEN(@Update) - 1) 
    SET @Update = 'update ' + @table_Name + ' set ' + @Update 
    SELECT @Update 

    SELECT 'Select Query' 
    SET @Selectquery = 'select ' + @InserCols + ' from ' + @table_Name 
    SELECT @Selectquery 

    SELECT 'Delete Query' 
    SET @DeleteQuery = 'delete from ' + @table_Name 
    SELECT @DeleteQuery 
END 


How to use this script,
Step 1 - Create the stored procedure using the above code or attached code.
Step 2 - Execute the stored procedure and pass your table name as a parameter.
Exec USP_QuerycreationSupport @table_Name = 'mstCustomer' 

Do not pass the schema-qualified database object as the table name; the procedure matches on the bare table name:
Exec USP_QuerycreationSupport @table_Name = '[dbo].[mstCustomer]' 

Once you execute the stored procedure as mentioned above, you get all the SQL statements, and you can easily use the generated statements elsewhere. You get all the basic SQL statements: Select, Insert, Update & Delete.
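For a hypothetical table mstCustomer(CustomerID int, CustomerName varchar(50)), the output would be along these lines:
-- parameter list: @CustomerID int=null, @CustomerName varchar(50)=null,
-- insert into mstCustomer(CustomerID,CustomerName) values (@CustomerID,@CustomerName)
-- update mstCustomer set CustomerID=@CustomerID,CustomerName=@CustomerName
-- select CustomerID,CustomerName from mstCustomer
-- delete from mstCustomer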





About HostForLIFE.eu

HostForLIFE.eu is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.

We have offered the latest Windows 2016 Hosting, ASP.NET Core 2.2.1 Hosting, ASP.NET MVC 6 Hosting and SQL 2017 Hosting.

