European Windows 2012 Hosting BLOG

BLOG about Windows 2012 Hosting and SQL 2012 Hosting - Dedicated to European Windows Hosting Customers

SQL Server Hosting - HostForLIFE :: Executing Dynamic SQL Queries

August 31, 2023 10:04 by author Peter

Dynamic SQL refers to SQL statements that are constructed at runtime rather than being hardcoded into an application. It allows for more flexibility and dynamism in SQL queries. Here are some key points about Dynamic SQL:

  • It allows you to create SQL statements dynamically based on input parameters or data values that are only known at runtime. The final SQL statement is not known until execution time.
  • It allows you to dynamically build SELECT, INSERT, UPDATE, DELETE statements, etc. at runtime.
  • It allows you to execute SQL statements directly from application code without having to hardcode the SQL.
  • It provides flexibility as the SQL statement can change based on business logic and input parameters.

Here are some examples.

Example 1. Basic Dynamic Query
Suppose you have a simple database table named "Products" with columns "ProductID," "ProductName," and "Price." You want to create a dynamic SQL query to retrieve product information based on user-defined search criteria.
DECLARE @ProductName NVARCHAR(50) = 'Widget';
DECLARE @MinPrice DECIMAL(10, 2) = 10.00;
DECLARE @MaxPrice DECIMAL(10, 2) = 50.00;
DECLARE @SQL NVARCHAR(MAX);

SET @SQL = 'SELECT * FROM Products WHERE 1=1';

IF @ProductName IS NOT NULL
SET @SQL = @SQL + ' AND ProductName = @ProductName';

IF @MinPrice IS NOT NULL
SET @SQL = @SQL + ' AND Price >= @MinPrice';

IF @MaxPrice IS NOT NULL
SET @SQL = @SQL + ' AND Price <= @MaxPrice';

EXEC sp_executesql @SQL, N'@ProductName NVARCHAR(50), @MinPrice DECIMAL(10, 2), @MaxPrice DECIMAL(10, 2)',
                @ProductName, @MinPrice, @MaxPrice;
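When a dynamically built statement misbehaves, it helps to inspect the final string before executing it. A minimal debugging sketch using the variables from the example above (the PRINT step is an addition, not part of the original example):

```sql
-- Print the assembled statement before executing it; in a production
-- procedure you might log this string instead.
PRINT @SQL;
-- With all three variables non-NULL, the printed text is:
-- SELECT * FROM Products WHERE 1=1 AND ProductName = @ProductName AND Price >= @MinPrice AND Price <= @MaxPrice
EXEC sp_executesql @SQL,
     N'@ProductName NVARCHAR(50), @MinPrice DECIMAL(10, 2), @MaxPrice DECIMAL(10, 2)',
     @ProductName = @ProductName, @MinPrice = @MinPrice, @MaxPrice = @MaxPrice;
```

Passing the parameters by name (@ProductName = @ProductName) rather than by position makes the call easier to maintain as the parameter list grows.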

Example 2. Table Name as a Variable
Suppose you need to perform similar operations on different tables based on user input, and the table name itself is a variable.
DECLARE @TableName NVARCHAR(50) = 'Customers';
DECLARE @City NVARCHAR(50) = 'New York';
DECLARE @SQL NVARCHAR(MAX);

SET @SQL = 'SELECT * FROM ' + QUOTENAME(@TableName) + ' WHERE City = @City';

EXEC sp_executesql @SQL, N'@City NVARCHAR(50)', @City;
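QUOTENAME is what keeps this pattern safe: identifiers cannot be passed to sp_executesql as parameters, so they must be concatenated into the string, and unescaped concatenation opens the door to SQL injection. A defensive sketch (the existence check is an added suggestion, not part of the original example):

```sql
DECLARE @TableName NVARCHAR(50) = 'Customers';
DECLARE @City NVARCHAR(50) = 'New York';
DECLARE @SQL NVARCHAR(MAX);

-- Reject names that are not actual user tables before building the statement.
IF OBJECT_ID(@TableName, 'U') IS NULL
BEGIN
    RAISERROR('Unknown table name.', 16, 1);
    RETURN;
END

-- QUOTENAME brackets the identifier and escapes any ] characters inside it.
SET @SQL = 'SELECT * FROM ' + QUOTENAME(@TableName) + ' WHERE City = @City';
EXEC sp_executesql @SQL, N'@City NVARCHAR(50)', @City = @City;
```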


Example 3. Using Dynamic Cursors
Dynamic SQL can also be used to declare and open a cursor. Note that a plain string variable cannot be used with OPEN or FETCH; instead, the cursor created inside the dynamic batch is handed back to the calling batch through an OUTPUT parameter of type CURSOR.
DECLARE @SQL NVARCHAR(MAX);
DECLARE @Cursor CURSOR;
DECLARE @ProductID INT, @ProductName NVARCHAR(100);

-- Declare and open the cursor inside the dynamic batch, then return it
-- through the CURSOR OUTPUT parameter so the outer batch can fetch from it.
SET @SQL = 'SET @CursorOut = CURSOR FOR SELECT ProductID, ProductName FROM Products; OPEN @CursorOut;';

EXEC sp_executesql @SQL, N'@CursorOut CURSOR OUTPUT', @CursorOut = @Cursor OUTPUT;

FETCH NEXT FROM @Cursor INTO @ProductID, @ProductName;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- Process the data
    PRINT 'ProductID: ' + CONVERT(NVARCHAR(10), @ProductID) + ', ProductName: ' + @ProductName;

    FETCH NEXT FROM @Cursor INTO @ProductID, @ProductName;
END;

CLOSE @Cursor;

DEALLOCATE @Cursor;


Example 4. Basic Dynamic Query with Nested Subquery
Suppose you have two tables: "Orders" and "Customers," and you want to retrieve orders for a specific customer based on their name.
DECLARE @CustomerName NVARCHAR(100) = 'John Doe';
DECLARE @SQL NVARCHAR(MAX);

SET @SQL = '
SELECT OrderID, OrderDate, TotalAmount
FROM Orders
WHERE CustomerID IN (
    SELECT CustomerID
    FROM Customers
    WHERE CustomerName = @CustomerName
)';

EXEC sp_executesql @SQL, N'@CustomerName NVARCHAR(100)', @CustomerName;

Example 5. Dynamic Query with Nested Subquery and Conditional Logic
Consider an example where you want to retrieve orders for a specific customer and optionally filter by order status.
DECLARE @CustomerName NVARCHAR(100) = 'Jane Smith';
DECLARE @OrderStatus NVARCHAR(50) = 'Shipped';
DECLARE @SQL NVARCHAR(MAX);

SET @SQL = '
SELECT OrderID, OrderDate, TotalAmount
FROM Orders
WHERE CustomerID IN (
    SELECT CustomerID
    FROM Customers
    WHERE CustomerName = @CustomerName
)';

IF @OrderStatus IS NOT NULL
SET @SQL = @SQL + ' AND OrderStatus = @OrderStatus';

EXEC sp_executesql @SQL, N'@CustomerName NVARCHAR(100), @OrderStatus NVARCHAR(50)',
               @CustomerName, @OrderStatus;


Example 6. Dynamic SQL with Multiple Nested Subqueries
Let's say you want to retrieve a list of products along with their suppliers and categories, filtered by a specified category name.
DECLARE @CategoryName NVARCHAR(50) = 'Electronics';
DECLARE @SQL NVARCHAR(MAX);

SET @SQL = '
SELECT p.ProductID, p.ProductName, s.SupplierName, c.CategoryName
FROM Products p
INNER JOIN Suppliers s ON p.SupplierID = s.SupplierID
INNER JOIN Categories c ON p.CategoryID = c.CategoryID
WHERE p.CategoryID IN (
    SELECT CategoryID
    FROM Categories
    WHERE CategoryName = @CategoryName
)';

EXEC sp_executesql @SQL, N'@CategoryName NVARCHAR(50)', @CategoryName;


Dynamic SQL provides the flexibility to adapt SQL queries to changing requirements or user inputs. Dynamic SQL with nested subqueries allows for the creation of complex and customizable queries based on runtime conditions. However, it also comes with some potential security risks, such as SQL injection if not handled carefully. Proper validation and sanitization of input parameters are crucial when using dynamic SQL to prevent these risks.

HostForLIFE.eu SQL Server 2019 Hosting
HostForLIFE.eu is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.



SQL Server Hosting - HostForLIFE :: How to Calculate Running Total in SQL?

August 28, 2023 08:01 by author Peter

Running totals are an important concept in SQL because they allow you to compute cumulative sums of values in your data. There are two kinds of running totals to consider: unpartitioned and partitioned. In this post, we will look at both types, their applications, and examples to help you understand their importance in data analysis.
Unpartitioned Running Totals

Unpartitioned running totals are calculated across the entire dataset, without any resets or partitioning criteria.

They are useful for tracking a cumulative value over all rows, such as computing cumulative sales over time.

Example
Assume we have the following data in a table called Orders.
--Create Table
CREATE TABLE Orders (
  Order_Date DATE,
  Customer_Id INT,
  Sales INT
);

-- Insert some data into the table
INSERT INTO Orders (Order_Date, Customer_Id, Sales) VALUES
('2023-01-01', 1, 100),
('2023-01-02', 1, 50),
('2023-01-03', 2, 200),
('2023-01-04', 2, 100),
('2023-01-05', 1, 100),
('2023-01-06', 2, 100);

 

To calculate an unpartitioned running total of sales, you can use the following SQL query.
SELECT Customer_Id, Order_Date, Sales,
       SUM(Sales) OVER (ORDER BY Order_Date) AS Running_Total
FROM Orders
ORDER BY Order_Date;


Output
The result accumulates the running total across all dates without any resets.
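Given the six sample rows above, the unpartitioned running total accumulates as follows:

```
Customer_Id  Order_Date  Sales  Running_Total
1            2023-01-01  100    100
1            2023-01-02  50     150
2            2023-01-03  200    350
2            2023-01-04  100    450
1            2023-01-05  100    550
2            2023-01-06  100    650
```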

Partitioned Running Totals
Partitioned running totals are calculated over a subset of the data based on specific criteria or partitions.
They are useful when you want to calculate totals within specific categories or groups, like counting daily sales separately or calculating totals for different customer segments.

Example

Suppose we have the same table called Orders as above in the unpartitioned example.
To calculate a Partitioned running total of sales within each customer_id, you can use the following SQL query.
SELECT Customer_Id, Order_Date, Sales,
       SUM(Sales) OVER (PARTITION BY Customer_Id ORDER BY Order_Date) AS Running_Total
FROM Orders
ORDER BY Customer_Id, Order_Date;


The result accumulates the running total separately for each customer_id.
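Given the same six sample rows, the partitioned running total restarts for each customer:

```
Customer_Id  Order_Date  Sales  Running_Total
1            2023-01-01  100    100
1            2023-01-02  50     150
1            2023-01-05  100    250
2            2023-01-03  200    200
2            2023-01-04  100    300
2            2023-01-06  100    400
```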


Summary
In this article, we learned how to calculate running totals in SQL. Running totals are powerful tools for cumulative calculations, and understanding the difference between unpartitioned and partitioned running totals is essential for effective data analysis.

Unpartitioned running totals accumulate values across the entire dataset, while Partitioned running totals allow you to calculate totals within specific partitions, making them invaluable for segmenting and analyzing data. Incorporate these concepts into your SQL projects to enhance your data analysis capabilities.

If you find this article valuable, please consider liking it and sharing your thoughts in the comments.

Thank you, and happy coding!



SQL Server Hosting - HostForLIFE :: IIF Function In SQL Server

August 24, 2023 08:41 by author Peter

In this article, I will go over the IIF function in SQL Server. First, let's create a database with some dummy data in it. The script below creates the database and the tables holding the records on which the various examples are demonstrated. Let's see what happens.

CREATE DATABASE Peter_OFS
PRINT 'New Database ''Peter_OFS'' Created'
GO

USE [Peter_OFS]
GO

CREATE TABLE [dbo].[Employee] (
    EmployeeID INT IDENTITY (31100,1),
    EmployerID BIGINT NOT NULL DEFAULT 228866,
    FirstName VARCHAR(50) NOT NULL,
    LastName VARCHAR(50) NOT NULL,
    Email VARCHAR(255) NOT NULL UNIQUE,
    DepartmentID VARCHAR(100) NOT NULL,
    Age INT  NOT NULL,
    GrossSalary BIGINT NOT NULL,
    PerformanceBonus BIGINT,
    ContactNo VARCHAR(25),
    PRIMARY KEY (EmployeeID)
);

CREATE TABLE [dbo].[tbl_Orders] (
    OrderId INT IDENTITY (108, 1) PRIMARY KEY,
    FoodieID INT,
    OrderStatus TINYINT NOT NULL, -- ==>> OrderStatus: 4: Cancelled; 3: Pending; 2: Processing; 1: Completed
    OrderDate DATE NOT NULL,
    ShippedDate DATE,
    RestaurantId INT NOT NULL
);

Let's check our following tables by using the following queries.

1) To get the data from the "Employee" table, use the following query.
SELECT * FROM Peter_OFS..Employee

2) To get the data from the "tbl_Orders" table, use the following query.
SELECT * FROM Peter_OFS..tbl_Orders

The IIF Function
IIF is a logical function that returns one of two values based on whether a boolean expression is true or false. In other words, the IIF() function returns "true_value" if the condition is TRUE and "false_value" if it is FALSE.

Important Points
In SQL Server, IIF is a logical function.

  • IIF was introduced in SQL Server 2012.
  • IIF is shorthand for the CASE expression.
  • IIFs can only be nested to a maximum of ten levels.
  • The IIF function returns the data type with the highest precedence from the types "true value" and "false value."

Syntax
IIF(boolean_expression, true_value, false_value)

  • boolean_expression: A syntax error will be thrown if the argument is not a boolean expression.
  • true_value: If boolean_expression evaluates to "TRUE", it returns the value supplied in the "true_value" parameter.
  • false_value: If boolean_expression evaluates to "FALSE," the value specified in the "false_value" parameter is returned.


Examples
The examples in this section demonstrate the IIF Function's capability. Let's see what happens.
1) The IIF function compares integer values.
Because boolean_expression is true, the next example will return true_value.

SELECT IIF( 25 * 10 = 250, 'TRUE', 'FALSE' ) AS 'Result'

2) IIF Function with variables
In the following example, variables are used to calculate two integer values.
DECLARE @a INT = 25, @b INT = 12;
SELECT IIF( @a * @b = 300, 'TRUE', 'FALSE' ) AS 'Result'

3) IIF with String Functions
A) The following example accepts a string with a length greater than 10.
SELECT IIF(LEN('Hello! Scott') > 10, 'StringAccepted', 'StringRejected') AS [Result]

B) The following example checks the ASCII value.

SELECT IIF(ASCII('A') = 65, 'ASCIIAccepted', 'ASCIIRejected') AS [Result]

C) The following example compares string data using the IIF Function.
DECLARE @Person VARCHAR (25) = 'Peter'
SELECT @Person + ' likes ' + IIF(@Person = 'Peter', 'Mercedes-Benz Maybach', 'Audi A8') AS [Result]

4) IIF Function with data type precedence
The following query returns 551.00: the condition is false, and the integer false_value (551) is implicitly converted to DECIMAL because DECIMAL has higher data type precedence.
SELECT IIF(21 < 11, 551.50, 551) Result

5) IIF Function with NULL
A) With NULL Constants
If we specify "NULL" in true_value and false_value, this statement will result in an error.
SELECT IIF( 25 * 12 = 300, NULL, NULL ) Result

B) With NULL Parameters
DECLARE @aa INT = NULL, @bb INT = NULL
SELECT IIF( 25 * 12 = 300, @aa, @bb ) Result

6) IIF Function With Aggregate Function
SUM()

The following example summarizes the total orders along with the order status.
SELECT
   SUM(IIF(OrderStatus = 1, 1, 0)) AS 'Completed',
   SUM(IIF(OrderStatus = 2, 1, 0)) AS 'Processing',
   SUM(IIF(OrderStatus = 3, 1, 0)) AS 'Pending',
   SUM(IIF(OrderStatus = 4, 1, 0)) AS 'Cancelled',
   COUNT(OrderId) AS 'Total Orders'
FROM tbl_Orders
WHERE YEAR(OrderDate) = 2021

7) Nested IIF Function (with GROUP BY Clause)

The following example summarizes the total orders along with the order status.
SELECT
   IIF(OrderStatus = 1, 'Completed',
      IIF(OrderStatus=2, 'Processing',
         IIF(OrderStatus=3, 'Pending',
            IIF(OrderStatus=4, 'Cancelled', '')
            )
         )
      ) AS [Order Status],
   COUNT(OrderId) AS 'Total Orders'
FROM tbl_Orders
GROUP BY OrderStatus
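Since IIF is shorthand for CASE, the nested query above can be written more readably as a single simple CASE expression that produces the same result:

```sql
SELECT
   CASE OrderStatus
      WHEN 1 THEN 'Completed'
      WHEN 2 THEN 'Processing'
      WHEN 3 THEN 'Pending'
      WHEN 4 THEN 'Cancelled'
      ELSE ''
   END AS [Order Status],
   COUNT(OrderId) AS 'Total Orders'
FROM tbl_Orders
GROUP BY OrderStatus
```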

Points To Remember
In the key points, I have already mentioned that the IIF function is the shorthand form of the CASE Expression. And, yes, it's true. Internally, SQL Server converts IIF to CASE Expression and executes it.

Step 1
To check this, execute the following query with the "Actual Execution Plan" (Alternatively, press the "Ctrl + M" to include the Actual Execution Plan).

SELECT EmployeeID, CONCAT(FirstName , ' ' , LastName) AS [Full Name],
      Email, DepartmentID, GrossSalary,
      IIF(ContactNo IS NULL, 'Not Available', ContactNo) AS [Contact Number]
FROM Peter_OFS..Employee

Step 2
Now, right-click on "Compute Scalar" and click on the "Properties" option to proceed.

Step 3
And, you can see that SQL Server converts IIF to CASE expression internally.

Difference Between IIF Function and CASE Expression In SQL Server

Now, let's look at the quick difference between IIF Function and CASE Expression in SQL Server.

Key Points: IIF Function vs. CASE Expression

  • Type: IIF is a function; CASE is an expression.
  • Return value: IIF returns one of two values; CASE returns one of multiple possible result expressions.
  • Return type: both return the data type with the highest precedence among the result expressions.
  • Nesting: both IIF and CASE expressions can be nested only up to a maximum of 10 levels.
  • Portability: IIF is specific to SQL Server 2012 and later; the CASE expression is standard SQL and works on all SQL platforms.

See you in the next article, until then take care and happy learning.




SQL Server Hosting - HostForLIFE :: Usage, Features, and Limitations of SQL Server CTE

August 15, 2023 07:54 by author Peter

In a SQL Server SELECT, INSERT, UPDATE, or DELETE query, you can refer to a temporary named result set known as a common table expression (CTE). CTEs, first made available in SQL Server 2005, are comparable to derived tables and views. A CTE is defined by a WITH clause that specifies the CTE name and a query expression. Once defined, the main query can refer to the CTE as if it were a table or view.

Some important benefits of CTEs

  • CTE queries can be modularized into individual CTE blocks that can then be readily reused in different parts of the query. This increases readability and consistency.
  • CTEs allow you to design recursive searches in which a CTE refers itself. This is beneficial for data that is hierarchical or tree-like.
  • Simpler semantics: CTEs are easier to understand and write since they employ less sophisticated SQL syntax than derived tables.
  • CTEs can perform better than nested views and subqueries: the optimizer can often produce better execution plans for queries written with CTEs.

When Should CTEs Be Used?

Here are some examples of frequent applications for CTEs:

  • Modular logic - as previously stated, CTEs let you break complex logic down into simpler modular parts, boosting readability.
  • Reusable query logic - once a CTE is defined, it can be referenced several times in the query. This reduces the need for repetitious logic.
  • Hierarchical data - CTEs can recursively reference themselves to query hierarchical data such as org charts, folder trees, and so on.
  • Replace views - in some circumstances, a CTE can accomplish the same thing as a view, and the optimizer can tune the CTE query within the statement that uses it.
  • Replace derived tables - CTEs can be used in place of derived tables to simplify query syntax.
  • Data exploration/investigation - because CTE definitions are local to a single statement, they are handy for ad hoc data exploration prior to permanent table storage.

Now let's look at some examples to demonstrate how to write and use CTEs.

Syntax
WITH CTE_Name (Column1, Column2)

AS
(
    -- CTE Definition using SELECT
)

SELECT *
FROM CTE_Name

A WITH clause is used before the primary SELECT statement to define the CTE. Following the WITH keyword, you supply the CTE name and optional column list. The AS keyword denotes the beginning of the CTE definition query.

After you've defined the CTE, you may use it in the primary SELECT query just like any other table or view.

Consider the following example.

WITH Sales_CTE (SalesPerson, SalesAmount)

AS
(
   SELECT SalesPerson, SUM(SalesAmount)
   FROM Sales
   GROUP BY SalesPerson
)

SELECT *
FROM Sales_CTE

This CTE summarizes sales per person into a temporary result set named Sales_CTE. The main query simply selects from the CTE to display the sales summary.

Multiple CTEs

You can define multiple CTEs in a single query by listing them sequentially after the WITH clause:
WITH Sales_CTE (SalesPerson, SalesAmount)

AS
(
SELECT SalesPerson, SUM(SalesAmount)
FROM Sales
GROUP BY SalesPerson
),

TopSales_CTE (TopSalesPerson, TopSalesAmount)

AS
(
SELECT TOP 1 SalesPerson, SalesAmount
FROM Sales_CTE
ORDER BY SalesAmount DESC
)

SELECT *
FROM TopSales_CTE


Here we define two CTEs - Sales_CTE and TopSales_CTE. The second CTE references the first CTE. The main query selects the top salesperson from the second CTE.

Recursive CTE Example

One of the key benefits of CTEs is the ability to write recursive queries. Here is an example to find all managers and employees in a hierarchy.
WITH Managers_CTE (EmployeeID, ManagerID, EmployeeName, ManagerName, Level)

AS
(
-- Anchor member: top of the hierarchy (employees with no manager).
-- The NULL must be cast so its type matches the recursive member
-- (EmployeeName is assumed here to be NVARCHAR(100)).
SELECT e.EmployeeID, e.ManagerID, e.EmployeeName,
       CAST(NULL AS NVARCHAR(100)) AS ManagerName,
       0 AS Level
FROM Employees e
WHERE e.ManagerID IS NULL

UNION ALL

-- Recursive member that references the CTE name
SELECT e.EmployeeID, e.ManagerID, e.EmployeeName, m.EmployeeName,
       m.Level + 1
FROM Employees e
INNER JOIN Managers_CTE m
ON e.ManagerID = m.EmployeeID
)

-- Outer query

SELECT *
FROM Managers_CTE


  • The anchor member defines the root level of the hierarchy.
  • The recursive member joins back to the CTE name to get the next level.
  • UNION ALL combines each round of recursion.
  • The outer query returns the final result set.

This builds the org hierarchy iteratively until all levels are retrieved.
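By default, SQL Server stops a recursive CTE after 100 recursion levels and raises an error. For deeper hierarchies, or to control runaway recursion caused by cycles in the data, the limit can be adjusted with a query hint on the outer statement that consumes the CTE:

```sql
-- MAXRECURSION 1000 raises the default limit of 100; 0 removes the limit entirely.
SELECT *
FROM Managers_CTE
OPTION (MAXRECURSION 1000);
```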
CTE with INSERT Example

In addition to SELECT, CTEs can be used with data modification statements like INSERT.
WITH Sales_CTE (SalesID, SalesPersonID, SalesAmount)

AS
(

SELECT SalesID, SalesPersonID, SalesAmount
FROM Sales
WHERE SalesDate = '20180901'
)

INSERT INTO SalesByDay (SalesDate, SalesPersonID, SalesAmount)
SELECT '20180901', SalesPersonID, SalesAmount

FROM Sales_CTE


This inserts sales for a specific date into a separate SalesByDay table using a CTE as the data source.

CTE with UPDATE Example
You can also leverage CTEs with UPDATE statements.

WITH Sales_CTE (SalesID, SalesAmount)

AS
(

SELECT SalesID, SalesAmount
FROM Sales
WHERE SalesDate = '20180901'

)

UPDATE SalesByDay
SET SalesAmount = Sales_CTE.SalesAmount

FROM SalesByDay
INNER JOIN Sales_CTE
ON SalesByDay.SalesID = Sales_CTE.SalesID
WHERE SalesByDay.SalesDate = '20180901'


Here we populate matching rows in another table using values from the CTE.
CTE with DELETE Example

Similarly, CTEs can be utilized with DELETE statements.

WITH InactiveSales_CTE (SalesID, SalesDate)
AS
(
SELECT SalesID, SalesDate
FROM Sales
WHERE SalesDate < '20180101'
)

DELETE SalesByDay
FROM SalesByDay
WHERE SalesID IN (SELECT SalesID
             FROM InactiveSales_CTE)


This deletes related rows in another table based on inactive sales data from the CTE.

Temporary CTE Benefits

A key benefit of CTEs is that they are temporary named result sets that only exist during query execution. This provides several advantages.

  • No need to persist CTEs in the database, unlike views or permanent tables. This reduces storage overhead.
  • Can reference CTEs multiple times in a statement without repetitive subqueries or joins. Improves maintainability.
  • Optimizer can tailor a temporary CTE query plan, unlike a persisted view which has a fixed query plan.
  • Can replace inline derived tables and views to simplify and improve query semantics.
  • Great for ad hoc data investigation before determining permanent tables.


In summary, CTEs are very useful in SQL Server for simplifying complex logic, improving query readability, handling recursive queries, and temporarily staging data transformations for business reporting and analysis. As you gain more experience with SQL Server, be sure to add CTEs to your development toolbox.

Features of CTEs

  • Improved Readability and Maintainability: CTEs enhance the readability of complex queries by breaking them into smaller logical sections. This is especially useful when dealing with queries involving multiple joins, subqueries, or complex calculations. The segmented structure makes it easier to understand and troubleshoot the query.
  • Modularity and Reusability: CTEs enable the creation of modular SQL code. You can define CTEs for specific tasks or calculations and then reuse them across different parts of the main query. This promotes code reusability, reduces redundancy, and simplifies the modification of specific parts of the query.
  • Recursive Queries: CTEs are ideal for building recursive queries, where a query references itself to traverse hierarchical or recursive data structures. This is commonly used for tasks like navigating organizational charts, product categories, or tree-like data.
  • Self-Joins and Window Functions Simplification: When dealing with self-joins or complex calculations involving window functions, CTEs provide a clearer and more organized way to express the logic. They break down intricate operations into manageable steps, leading to more concise and readable code.
  • Code Organization and Reusability: Complex subqueries can be defined within CTEs, allowing for cleaner code organization. This organization makes it easier to understand the purpose of each part of the query, leading to improved maintainability.
  • Optimization Opportunities: In some cases, SQL Server's query optimizer can optimize CTEs more effectively than equivalent subqueries. This optimization can lead to better execution plans and potentially improved performance.


Limitations of CTEs

  • Single-Statement Scope: CTEs are scoped to a single SQL statement. They cannot be referenced across different statements in the same batch. This limitation can restrict their use in complex scenarios that involve multiple related statements.
  • Performance Considerations: While CTEs enhance query organization, they may not always result in the most optimal execution plans. In certain cases, complex CTEs can lead to performance issues, especially when dealing with large datasets or intricate queries.
  • Memory Usage: Recursive CTEs, which are used for hierarchical or recursive queries, can consume significant memory, particularly when dealing with deep hierarchies. This can lead to performance degradation if memory usage is not managed effectively.
  • Lack of Indexing Support: CTEs do not support indexing. Unlike temporary tables, CTEs do not allow you to create indexes to improve query performance. This can be a limitation when working with large datasets that require efficient access patterns.
  • Nested CTEs and Complexity: Nesting multiple CTEs within each other can lead to complex and challenging-to-maintain code. Overuse of nesting can make the query difficult to understand, debug, and optimize.
  • Limited Use in Stored Procedures: CTEs are more commonly used in ad-hoc queries. While they can be used within stored procedures, their single-statement scope can sometimes be restrictive when working with multiple statements in a procedure.
  • Complexity Management: While CTEs enhance query readability, they can also introduce complexity, especially when dealing with deeply nested or highly recursive queries. Overusing CTEs might lead to code that is harder to understand and maintain.


Conclusion
Common Table Expressions (CTEs) in SQL Server offer valuable features that enhance the readability, modularity, and organization of complex queries. Their ability to handle recursive operations and simplify self-joins and window functions makes them a versatile tool for developers. However, it's crucial to be aware of the limitations, such as single-statement scope, performance considerations, and lack of indexing support. By understanding both the features and limitations of CTEs, developers can leverage them effectively to create optimized and maintainable SQL code. Properly using CTEs requires a balance between leveraging their advantages and mitigating potential drawbacks.




SQL Server Hosting - HostForLIFE :: Working with JSON in SQL Server

August 9, 2023 07:43 by author Peter

JSON (JavaScript Object Notation) has grown in popularity as a simple and versatile format for exchanging and storing information. SQL Server stores JSON as text and provides a range of built-in functions for handling it. This tutorial will show you how to query, update, and work with JSON documents in SQL Server.

Create a table named "Members" to store member information, including locations in JSON format.
CREATE TABLE Members (
    userID INT PRIMARY KEY,
    FirstName NVARCHAR(50),
    LastName NVARCHAR(50),
  Location NVARCHAR(max),
    Salary NVARCHAR(50)
);

How is JSON data stored and inserted in SQL Server?
SQL Server 2016 and later editions provide built-in JSON support. There is no dedicated JSON data type: JSON documents are stored in NVARCHAR columns, such as the Location column above, and are queried and processed with the built-in JSON functions.

INSERT INTO Members (userID, FirstName, LastName, Location, Salary)
VALUES (2, 'Peter', 'Scott',
    '[{"Type": "Home", "Street": "London", "City": "lucknow", "Zip": "111111"},
      {"Type": "Work", "Street": "s1", "City": "Manchester", "Zip": "111111"}]',
    70000);

How to query JSON data in SQL Server?
Using SQL Server to Query JSON Data: SQL Server has multiple techniques for interacting with JSON data. JSON_VALUE, which extracts a scalar value from a JSON string, is one of the most frequently used functions.
SELECT userID, FirstName, LastName,
    JSON_VALUE(addr.value, '$.Street') AS Street,
    JSON_VALUE(addr.value, '$.City') AS City,
    JSON_VALUE(addr.value, '$.Zip') AS Zip,
    Salary
FROM Members
CROSS APPLY OPENJSON(Location) AS addr
WHERE JSON_VALUE(addr.value, '$.Type') = 'Home';

How to extract JSON objects and arrays?
The JSON_QUERY function extracts an object or an array from a JSON string; here it returns each member's whole address array.

SELECT userID, JSON_QUERY(Location) AS Addresses
FROM Members;

How to update JSON data in SQL Server?
SQL Server has functions to add, update, and remove properties from JSON documents.


-- '$[1].Street' targets the second (Work) address; JSON array indexes start at 0.
UPDATE Members
SET Location = JSON_MODIFY(Location, '$[1].Street', 'New area')
WHERE userID = 2;

How to return query results as JSON in SQL Server?
The FOR JSON clause formats a query's result set as JSON text.
SELECT FirstName, LastName
FROM Members
FOR JSON AUTO;
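FOR JSON AUTO shapes the output automatically from the SELECT list, while FOR JSON PATH gives explicit control over property names and nesting. A sketch (the dotted aliases and root name are illustrative):

```sql
SELECT FirstName AS [name.first], LastName AS [name.last]
FROM Members
FOR JSON PATH, ROOT('members');
-- Produces output shaped like:
-- {"members":[{"name":{"first":"Peter","last":"Scott"}}]}
```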

Thanks for reading, and I hope you like it.




SQL Server Hosting - HostForLIFE :: SSAS Tabular Models Documentation Guide

August 1, 2023 09:48 by author Peter

In the labyrinthine world of data modeling, using tools like SQL Server Analysis Services (SSAS) often feels like you're assembling a thousand-piece puzzle. The beauty and intricacy of a tabular model lie in its vast networks of tables, relationships, and measures, all working harmoniously to reveal valuable insights. However, without effective documentation, this puzzle might become a maze for those trying to understand it in the future.

I recently navigated the exciting process of working with a complex SSAS tabular model - an impressive structure of around 20 tables, a web of relationships, and an array of measures. Along the way, I realized the necessity for clear, thorough, and accessible documentation. Not only does it ease the navigation for future explorers of this model, but it also ensures the maintainability and scalability of the work.

Today, I'd like to share the comprehensive approach I used to document my tabular model, the best practices I discovered, and the strategies that made the process smooth and efficient.

Step 1. Harnessing the Power of Descriptions
Every object in SSAS, be it tables, columns, measures, or relationships, has a property known as "Description." I used this feature to provide a meaningful description for every object, allowing anyone reviewing the model to grasp its components' purpose and role quickly. Best practice tip. Always provide concise yet comprehensive descriptions, and maintain a consistent style.

Step 2. The Indispensable Data Dictionary
I created a data dictionary to detail table names, column names, their respective data types, descriptions, and any relevant notes. This serves as a reliable reference point, especially for those new to the model. Best practice tip. Keep the dictionary updated and synchronize it with the model to ensure they're always in alignment.

Step 3. Visualize with Diagrams

I used software like Visio to create comprehensive diagrams to represent the relationships between tables. These diagrams provide an overview of the model's interconnectedness, making it more comprehensible. Best practice tip. Make your diagrams clear and easy to follow, and ensure they represent the model's structure correctly.

Step 4. Detailing DAX Formulas
For each measure in the model, I meticulously documented the DAX formulas, explaining their purpose and logic. This transparency helps future developers understand the model's inner workings. Best practice tip. Be precise and detailed in your formula descriptions, explaining the "what" and the "why."

Step 5. Process and Refresh Strategy
I documented the cube's processing and refresh strategy to ensure reviewers understand how its data stays up to date. This offers insight into when and how the data updates. Best practice tip. Include potential dependencies or bottlenecks in your documentation to provide a complete picture of the data refresh process.

Step 6. Centralizing the Information

With all the information at hand, it was crucial to present everything in a unified, accessible format. Tools like Microsoft Word and SharePoint allowed me to create a central hub of information that could be easily accessed and understood. Best practice tip. Keep your documentation easily accessible and organize it in an intuitive way.

Step 7. Leveraging Automation
Automation can be a game-changer. Third-party tools like Power BI Helper, DAX Studio, or SQL Power Doc for PowerShell can generate documentation automatically, saving you significant time and streamlining the process. Best practice tip. Regularly update and review auto-generated documentation to ensure it accurately reflects the current state of your model.

A crucial lesson from my experience. Effective documentation is a living entity; it grows, adapts, and evolves with your model. As a best practice, always keep it updated whenever you make changes to the model.

As we navigate the exciting world of data modeling, let's aim to make our data puzzles a little less puzzling and a lot more engaging!

Until next time, happy modeling!

HostForLIFE.eu SQL Server 2019 Hosting
HostForLIFE.eu is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.



SQL Server Hosting - HostForLIFE :: How Do I Write CRUD Stored Procedures?

clock July 24, 2023 08:09 by author Peter

To interact with your database tables quickly and securely, you can create a set of SQL Server stored procedures known as CRUD (Create, Read, Update, Delete) stored procedures. CRUD operations are fundamental in database systems because they cover every way data is created, read, and modified.

How can we build a SQL Server insert stored procedure?
The Create operation adds new records to the database. Here is an example insert stored procedure:
CREATE PROCEDURE InsertProject
    @ProjectName NVARCHAR(500),
    @ClientName NVARCHAR(500),
    @ProjectManagerId NVARCHAR(500),
    @ProjectDescription NVARCHAR(500) = NULL,
    @StartDate DATETIME
AS
BEGIN
    SET NOCOUNT ON;

    INSERT INTO Projects
    (
        [ProjectName],
        [ClientName],
        [ProjectManagerId],
        [CreatedDate],
        [ProjectDescription],
        [StartDate]
    )
    VALUES
    (
        @ProjectName,
        @ClientName,
        @ProjectManagerId,
        GETUTCDATE(),
        @ProjectDescription,
        @StartDate
    )
END

EXEC InsertProject
    @ProjectName = 'Management Software',
    @ClientName = 'Peter',
    @ProjectManagerId = 'A51DC085-073F-4D3A-AFC8-ACE61B89E8C8',
    @ProjectDescription = 'This is a sample Management Software.',
    @StartDate = '2023-01-20 12:45:00.000';
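Callers often need the primary key of the row they just inserted. Assuming Projects has an IDENTITY column named Id (as the read procedures below suggest), a hypothetical variant of the insert procedure can return it through an OUTPUT parameter:

```sql
-- Sketch: same insert pattern, but returns the generated identity value.
-- Assumes Projects has an IDENTITY column named Id.
CREATE PROCEDURE InsertProjectReturnId
    @ProjectName NVARCHAR(500),
    @ClientName NVARCHAR(500),
    @ProjectManagerId NVARCHAR(500),
    @StartDate DATETIME,
    @NewProjectId INT OUTPUT
AS
BEGIN
    SET NOCOUNT ON;

    INSERT INTO Projects ([ProjectName], [ClientName], [ProjectManagerId], [CreatedDate], [StartDate])
    VALUES (@ProjectName, @ClientName, @ProjectManagerId, GETUTCDATE(), @StartDate);

    -- SCOPE_IDENTITY() returns the identity value generated in the current scope,
    -- so it is not affected by triggers inserting into other tables.
    SET @NewProjectId = SCOPE_IDENTITY();
END
```

It would be called with a local variable, for example: DECLARE @Id INT; EXEC InsertProjectReturnId @ProjectName = '...', @ClientName = '...', @ProjectManagerId = '...', @StartDate = '2023-01-20', @NewProjectId = @Id OUTPUT;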

How can a Read stored procedure be created in SQL Server?
The Read operation retrieves data from the database. You can create a select stored procedure with a straightforward, unfiltered SELECT statement that returns all rows from a table.

CREATE PROCEDURE GetAllProject
AS
BEGIN
    SET NOCOUNT ON;
    SELECT Id, ProjectName, ClientName, ProjectDescription, StartDate, EndDate, CreatedDate FROM Projects
END

EXEC GetAllProject

Output

Use a WHERE clause with the proper condition to retrieve a specific row from the table using a unique identifier (such as Id) within a stored procedure.
CREATE PROCEDURE GetProjectByProjectId
    @Id INT
AS
BEGIN
    SET NOCOUNT ON;
    SELECT ProjectName, ClientName, ProjectDescription, CreatedDate FROM Projects
    WHERE Id = @Id;
END


EXEC GetProjectByProjectId @Id=2

Output

How can we create an Update stored procedure in SQL Server?

The Update operation modifies existing records in the database.

CREATE PROCEDURE [UpdateProject]
     @Id INT
    ,@ProjectName NVARCHAR(500)
    ,@ClientName NVARCHAR(500)
    ,@ProjectManagerId NVARCHAR(500)
    ,@ProjectDescription NVARCHAR(500)
    ,@StartDate DATETIME = NULL
    ,@EndDate DATETIME = NULL
    ,@UpdatedDate DATETIME = NULL
AS
BEGIN
    UPDATE Projects
    SET ProjectName = @ProjectName
        ,ClientName = @ClientName
        ,ProjectManagerId = @ProjectManagerId
        ,ProjectDescription = @ProjectDescription
        ,StartDate = @StartDate
        ,EndDate = @EndDate
        ,UpdatedDate = getutcdate()
    WHERE Id = @Id
END


EXEC UpdateProject
    @Id=1,
    @ProjectName = 'TimeSystem Software',
    @ClientName = 'Peter',
    @ProjectManagerId = 'A51DC085-073F-4D3A-AFC8-ACE61B89E8C8',
    @ProjectDescription = 'This is a sample TimeSystem Software.',
    @StartDate = '2023-01-20 12:45:00.000',
    @EndDate = '2023-04-20 12:45:00.000';

Note that @UpdatedDate is omitted from the call: T-SQL does not allow a function call such as GETUTCDATE() as an EXEC argument, and the procedure sets UpdatedDate itself.


Output

How can we create a Delete stored procedure in SQL Server?
The Delete operation removes records from the database. Here is a stored procedure for deletion:
CREATE PROCEDURE DeleteProjectById
   @id int
AS
BEGIN
    SET NOCOUNT ON;

    DELETE FROM Projects
    WHERE id = @id;
END


EXEC DeleteProjectById @Id=2

Output

To preserve accurate current and historical information, soft deletion rather than hard deletion is frequently used in database designs. Instead of physically removing records from the database, a soft delete marks them as inactive or deleted. This strategy enables historical tracking and data recovery.
CREATE PROCEDURE [DeleteProject]
    @id INT
AS
BEGIN
    UPDATE Projects SET IsDelete = 1, IsActive = 0
    WHERE Id = @id
END

EXEC DeleteProject @Id=2

Output




SQL Server Hosting - HostForLIFE :: Recursive Queries in SQL

clock July 18, 2023 10:55 by author Peter

What are Recursive queries in SQL?
Recursive queries in SQL are queries that involve self-referential relationships within a table. They allow you to perform operations that require iterative processing, enabling you to traverse and manipulate hierarchical data structures efficiently.

Syntax of Recursive Queries
In SQL Server, a recursive query is written as a common table expression (CTE). Note that T-SQL uses a plain WITH clause; the RECURSIVE keyword required by PostgreSQL and MySQL is not part of SQL Server syntax.
WITH cte_name (column1, column2, ...) AS (
    -- Anchor member
    SELECT column1, column2, ...
    FROM table_name
    WHERE condition

    UNION ALL

    -- Recursive member
    SELECT column1, column2, ...
    FROM table_name
    JOIN cte_name ON join_condition
    WHERE condition
)
SELECT column1, column2, ...
FROM cte_name;


Recursive queries consist of two main components,

1. Anchor Member
The anchor member establishes the base case or initial condition for the recursive query. It selects the initial set of rows or records that serve as the starting point for the recursion. The anchor member is a regular SELECT statement that defines the base case condition.

2. Recursive Member
The recursive member defines the relationship and iteration process in the recursive query. It specifies how to generate new rows or records by joining the result of the previous iteration with the underlying table. The recursive member includes a join condition that establishes the relationship between the previous iteration and the current iteration. It also includes termination criteria to stop the recursion when certain conditions are met.

Example 1. Hierarchical Data - Employee Hierarchy
Consider the "Employees" table, which has the columns "EmployeeID" and "ManagerID." The employees reporting to a certain manager will be retrieved via a recursive query that traverses the employee hierarchy.
-- Create the Employees table
CREATE TABLE Employees (
    EmployeeID INT,
    ManagerID INT
);

-- Insert sample data
INSERT INTO Employees (EmployeeID, ManagerID)
VALUES (1, NULL),
       (2, 1),
       (3, 1),
       (4, 2),
       (5, 2),
       (6, 3),
       (7, 6);

-- Perform the recursive query
WITH EmployeeHierarchy (EmployeeID, ManagerID, Level) AS (
    -- Anchor member: Retrieve the root manager
    SELECT EmployeeID, ManagerID, 0
    FROM Employees
    WHERE ManagerID IS NULL

    UNION ALL

    -- Recursive member: Retrieve employees reporting to each manager
    SELECT e.EmployeeID, e.ManagerID, eh.Level + 1
    FROM Employees e
    JOIN EmployeeHierarchy eh ON e.ManagerID = eh.EmployeeID
)
SELECT EmployeeID, ManagerID, Level
FROM EmployeeHierarchy

Output

Example 2. Hierarchical Data - File System Structure
Consider a table called "Files" that has the columns "FileID" and "ParentID," which describe the hierarchy of a file system. To retrieve all files and their hierarchical structures, we'll utilize a recursive query.
-- Create the Files table
CREATE TABLE Files (
    FileID INT,
    ParentID INT,
    FileName VARCHAR(100)
);


-- Insert sample data
INSERT INTO Files (FileID, ParentID, FileName)
VALUES (1, NULL, 'Root1'),
       (2, NULL, 'Root2'),
       (3, 1, 'Folder1'),
       (4, 2, 'Folder1'),
       (5, 3, 'Subfolder1'),
       (6, 4, 'Subfolder1'),
       (7, 4, 'Subfolder2'),
       (8,5, 'Subfolder1_1'),
       (9,6, 'File1'),
       (10,6, 'File2');


-- Perform the recursive query
WITH FileStructure (FileID, ParentID, FileName, Level) AS (
    -- Anchor member: Retrieve root level files
    SELECT FileID, ParentID, FileName, 0
    FROM Files
    WHERE ParentID IS NULL

    UNION ALL

    -- Recursive member: Retrieve nested files
    SELECT f.FileID, f.ParentID, f.FileName, fs.Level + 1
    FROM Files f
    JOIN FileStructure fs ON f.ParentID = fs.FileID
)
SELECT FileID, ParentID, FileName, Level
FROM FileStructure;

Output
File Structure Hierarchy Output

The recursive query continues to iterate until the termination criteria are satisfied, generating new rows or records in each iteration based on the previous iteration's results. The result set of a recursive query includes all the rows or records generated during the recursion.

Recursive queries are typically used to work with hierarchical data structures, such as organizational charts, file systems, or product categories. They allow you to navigate and analyze the nested relationships within these structures without the need for complex procedural code or multiple iterations.

Recursive queries are supported by several database systems, including PostgreSQL and MySQL 8.0+ (which use the WITH RECURSIVE keyword) and Microsoft SQL Server (where a recursive common table expression is written with a plain WITH clause).

Advantages of Recursive Queries

  • Handling Hierarchical Data: Recursive queries provide a straightforward and efficient way to work with hierarchical data structures, such as organizational charts, file systems, or product categories. They allow you to retrieve and navigate the nested relationships in a concise manner.
  • Flexibility and Adaptability: Recursive queries are adaptable to various levels of depth within a hierarchical structure. They can handle any level of nesting, making them suitable for scenarios where the depth of the hierarchy may vary.
  • Code Reusability: Once you have defined a recursive query, it can be easily reused for different hierarchical structures within the same table, saving development time and effort.
  • Simplified Query Logic: Recursive queries eliminate the need for complex procedural code or multiple iterations to traverse hierarchical relationships. With a single query, you can retrieve the entire hierarchy or specific levels of interest.
  • Improved Performance: Recursive queries are optimized by the database engine, allowing for efficient traversal of self-referential relationships. The engine handles the iterative process internally, leading to better performance compared to manual traversal techniques.

Disadvantages of Recursive Queries

  • Performance Impact on Large Hierarchies: While recursive queries offer performance benefits, they can become slower when dealing with large hierarchies or deeply nested structures. The performance impact increases as the level of recursion, and the number of records involved in the recursion grow.
  • Limited Portability: Recursive queries may not be supported or may have varying syntax across different database systems. This can limit the portability of your SQL code when migrating to a different database platform.
  • Complexity in Maintenance: Recursive queries can be complex to understand and maintain, especially for developers who are not familiar with recursive programming concepts. Code readability and documentation become crucial to ensure clarity and ease of maintenance.
  • Recursive Depth Limitations: Some database systems impose limitations on the maximum recursion depth allowed for recursive queries. This can restrict the usage of recursive queries in scenarios with extremely deep hierarchies.
  • Potential for Infinite Loops: Incorrectly constructed recursive queries can lead to infinite loops, causing the query execution to hang or consume excessive system resources. It is essential to carefully design and test recursive queries to avoid this issue.
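In SQL Server, the last two points can be mitigated with the MAXRECURSION query hint, which caps the number of recursion levels (the default is 100; 0 means unlimited). A sketch against the Employees table from Example 1:

```sql
-- Guard against runaway recursion: SQL Server aborts the query with an
-- error if the hierarchy is deeper than 10 levels, instead of looping.
WITH EmployeeHierarchy (EmployeeID, ManagerID, Level) AS (
    SELECT EmployeeID, ManagerID, 0
    FROM Employees
    WHERE ManagerID IS NULL

    UNION ALL

    SELECT e.EmployeeID, e.ManagerID, eh.Level + 1
    FROM Employees e
    JOIN EmployeeHierarchy eh ON e.ManagerID = eh.EmployeeID
)
SELECT EmployeeID, ManagerID, Level
FROM EmployeeHierarchy
OPTION (MAXRECURSION 10);
```

Hitting the cap raises an error rather than silently truncating results, so choose a limit comfortably above your expected hierarchy depth.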




SQL Server Hosting - HostForLIFE.eu :: SQL Server Useful Queries

clock July 11, 2023 08:34 by author Peter

SQL Server is a widely used database management system in organizations of all sizes. To manage and maintain a database effectively as a database administrator or developer, you need a solid command of SQL. This post covers some useful SQL Server queries that will help you perform a variety of tasks.
1. List all databases on a SQL Server instance

To view an inventory of all databases on a server, use the query below.

SELECT name FROM sys.databases

This will return a list of all databases on the server, including system databases such as 'master', 'model', and 'tempdb'.

2. Viewing the schema of a SQL table
To see the structure of a table, including column names and data types, you can use the following query:
EXEC sp_help 'table_name'

This will return a list of all columns in the table, along with information such as the data type, length, and whether or not the column is nullable.
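A more selective alternative is the standard INFORMATION_SCHEMA.COLUMNS view, which returns one row per column and is easy to filter or join in further queries:

```sql
-- Column names, types, lengths, and nullability for a single table.
SELECT COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH, IS_NULLABLE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'table_name'
ORDER BY ORDINAL_POSITION;
```

Unlike sp_help, this returns a single result set, which makes it convenient for scripting and documentation generation.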

3. Checking the size of a SQL Server database
To see the size of a database, including the amount of used and unused space, you can use the following query:

EXEC sp_spaceused

This will return the number of rows in the database, the amount of reserved space, and the amount of used and unused space.

4. Retrieving the current user
To see the current user that is connected to the database, you can use the following query:

SELECT SUSER_NAME()


This can be useful for auditing purposes, or for determining which user is making changes to the database.
5. Viewing the current date and time

To see the current date and time on the server, you can use the following query:
SELECT GETDATE()

This can be useful for storing timestamps in your database, or for checking the current time on the server.

6. Finding the Total Space of the tables in a database
To see the total space of all the tables in a database, you can use the following query:
SELECT t.NAME
       AS
       TableName,
       s.NAME
       AS SchemaName,
       p.rows,
       Sum(a.total_pages) * 8
       AS TotalSpaceKB,
       Cast(Round(( ( Sum(a.total_pages) * 8 ) / 1024.00 ), 2) AS NUMERIC(36, 2)
       ) AS
       TotalSpaceMB,
       Sum(a.used_pages) * 8
       AS UsedSpaceKB,
       Cast(Round(( ( Sum(a.used_pages) * 8 ) / 1024.00 ), 2) AS NUMERIC(36, 2))
       AS
       UsedSpaceMB,
       ( Sum(a.total_pages) - Sum(a.used_pages) ) * 8
       AS UnusedSpaceKB,
       Cast(Round(( ( Sum(a.total_pages) - Sum(a.used_pages) ) * 8 ) / 1024.00,
            2) AS
            NUMERIC(36, 2))
       AS UnusedSpaceMB
FROM   sys.tables t
       INNER JOIN sys.indexes i
               ON t.object_id = i.object_id
       INNER JOIN sys.partitions p
               ON i.object_id = p.object_id
                  AND i.index_id = p.index_id
       INNER JOIN sys.allocation_units a
               ON p.partition_id = a.container_id
       LEFT OUTER JOIN sys.schemas s
                    ON t.schema_id = s.schema_id
WHERE  t.NAME NOT LIKE 'dt%'
       AND t.is_ms_shipped = 0
       AND i.object_id > 255
GROUP  BY t.NAME,
          s.NAME,
          p.rows
ORDER  BY totalspacemb DESC,
          t.NAME

This can be useful for identifying tables that may be consuming a large amount of space, and determining if any optimization is necessary.

7. Connect two Databases on Different Servers in SQL Server
To connect two databases on different servers in a SQL Server query, you can use a linked server. A linked server allows you to connect to another instance of SQL Server and execute queries against it. First register the remote server with sp_addlinkedserver, then map a login for it with sp_addlinkedsrvlogin:
EXEC sp_addlinkedserver @server = 'Servername';
EXEC sp_addlinkedsrvlogin 'Servername', 'false', NULL, 'userid', 'password';

This links the two servers so their databases can be queried together.

8. Execute a query against the linked server database
To query a table in a database on the linked server, use the four-part name:
select  *  from [Servername].[Databasename].[dbo].[tablename]

This reads data from another server's database as if it were local.
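Four-part names are processed by the local server; when you want the remote server to execute the whole statement itself, OPENQUERY passes the query through as-is (the server, database, and table names below are placeholders):

```sql
-- OPENQUERY sends the inner statement to the linked server for execution
-- and returns its result set; this is often faster for filtered remote
-- queries, because filtering happens on the remote side.
SELECT *
FROM OPENQUERY([Servername], 'SELECT col1, col2 FROM Databasename.dbo.tablename WHERE col1 = 1');
```

Note that the inner query is a literal string, so it cannot be parameterized directly.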

9. Disconnect two Databases with Different Servers in SQL Server
To remove a linked server in SQL Server, you can use the sp_dropserver system stored procedure. Here's the syntax:
EXEC sp_dropserver @server = 'Servername', @droplogins = 'droplogins';

This removes the linked server definition and, with @droplogins, its associated login mappings.


10. Top 20 Costliest Stored Procedures - High CPU
To find the stored procedures that consume the most CPU, you can use the following query:
SELECT TOP (20)
    p.name AS [SP Name],
    qs.total_worker_time AS [TotalWorkerTime],
    qs.total_worker_time/qs.execution_count AS [AvgWorkerTime],
    qs.execution_count,
    ISNULL(qs.execution_count/DATEDIFF(Second, qs.cached_time, GETDATE()), 0) AS [Calls/Second],
    qs.total_elapsed_time,
    qs.total_elapsed_time/qs.execution_count AS [avg_elapsed_time],
    qs.cached_time
FROM    sys.procedures AS p WITH (NOLOCK)
INNER JOIN sys.dm_exec_procedure_stats AS qs WITH (NOLOCK) ON p.[object_id] = qs.[object_id]
WHERE qs.database_id = DB_ID()
ORDER BY qs.total_worker_time DESC OPTION (RECOMPILE);


Output
SP Name: Stored Procedure Name

TotalWorkerTime: Total Worker Time since the last compile time

AvgWorkerTime: Average Worker Time since last compile time

execution_count: Total number of execution since last compile time

Calls/Second: Number of calls/executions per second

total_elapsed_time: total elapsed time

avg_elapsed_time: Average elapsed time

cached_time: Procedure Cached time

11. How to identify DUPLICATE indexes in SQL Server
To find duplicate (overlapping) indexes, you can use the following query:
;WITH myduplicate
     AS (SELECT Sch.[name]                                                 AS
                SchemaName
                ,
                Obj.[name]
                AS TableName,
                Idx.[name]                                                 AS
                IndexName,
                Index_col(Sch.[name] + '.' + Obj.[name], Idx.index_id, 1)  AS
                Col1,
                Index_col(Sch.[name] + '.' + Obj.[name], Idx.index_id, 2)  AS
                Col2,
                Index_col(Sch.[name] + '.' + Obj.[name], Idx.index_id, 3)  AS
                Col3,
                Index_col(Sch.[name] + '.' + Obj.[name], Idx.index_id, 4)  AS
                Col4,
                Index_col(Sch.[name] + '.' + Obj.[name], Idx.index_id, 5)  AS
                Col5,
                Index_col(Sch.[name] + '.' + Obj.[name], Idx.index_id, 6)  AS
                Col6,
                Index_col(Sch.[name] + '.' + Obj.[name], Idx.index_id, 7)  AS
                Col7,
                Index_col(Sch.[name] + '.' + Obj.[name], Idx.index_id, 8)  AS
                Col8,
                Index_col(Sch.[name] + '.' + Obj.[name], Idx.index_id, 9)  AS
                Col9,
                Index_col(Sch.[name] + '.' + Obj.[name], Idx.index_id, 10) AS
                Col10,
                Index_col(Sch.[name] + '.' + Obj.[name], Idx.index_id, 11) AS
                Col11,
                Index_col(Sch.[name] + '.' + Obj.[name], Idx.index_id, 12) AS
                Col12,
                Index_col(Sch.[name] + '.' + Obj.[name], Idx.index_id, 13) AS
                Col13,
                Index_col(Sch.[name] + '.' + Obj.[name], Idx.index_id, 14) AS
                Col14,
                Index_col(Sch.[name] + '.' + Obj.[name], Idx.index_id, 15) AS
                Col15,
                Index_col(Sch.[name] + '.' + Obj.[name], Idx.index_id, 16) AS
                Col16
         FROM   sys.indexes Idx
                INNER JOIN sys.objects Obj
                        ON Idx.[object_id] = Obj.[object_id]
                INNER JOIN sys.schemas Sch
                        ON Sch.[schema_id] = Obj.[schema_id]
         WHERE  index_id > 0)
SELECT MD1.schemaname,
       MD1.tablename,
       MD1.indexname,
       MD2.indexname AS OverLappingIndex,
       MD1.col1,
       MD1.col2,
       MD1.col3,
       MD1.col4,
       MD1.col5,
       MD1.col6,
       MD1.col7,
       MD1.col8,
       MD1.col9,
       MD1.col10,
       MD1.col11,
       MD1.col12,
       MD1.col13,
       MD1.col14,
       MD1.col15,
       MD1.col16
FROM   myduplicate MD1
       INNER JOIN myduplicate MD2
               ON MD1.tablename = MD2.tablename
                  AND MD1.indexname <> MD2.indexname
                  AND MD1.col1 = MD2.col1
                  AND ( MD1.col2 IS NULL
                         OR MD2.col2 IS NULL
                         OR MD1.col2 = MD2.col2 )
                  AND ( MD1.col3 IS NULL
                         OR MD2.col3 IS NULL
                         OR MD1.col3 = MD2.col3 )
                  AND ( MD1.col4 IS NULL
                         OR MD2.col4 IS NULL
                         OR MD1.col4 = MD2.col4 )
                  AND ( MD1.col5 IS NULL
                         OR MD2.col5 IS NULL
                         OR MD1.col5 = MD2.col5 )
                  AND ( MD1.col6 IS NULL
                         OR MD2.col6 IS NULL
                         OR MD1.col6 = MD2.col6 )
                  AND ( MD1.col7 IS NULL
                         OR MD2.col7 IS NULL
                         OR MD1.col7 = MD2.col7 )
                  AND ( MD1.col8 IS NULL
                         OR MD2.col8 IS NULL
                         OR MD1.col8 = MD2.col8 )
                  AND ( MD1.col9 IS NULL
                         OR MD2.col9 IS NULL
                         OR MD1.col9 = MD2.col9 )
                  AND ( MD1.col10 IS NULL
                         OR MD2.col10 IS NULL
                         OR MD1.col10 = MD2.col10 )
                  AND ( MD1.col11 IS NULL
                         OR MD2.col11 IS NULL
                         OR MD1.col11 = MD2.col11 )
                  AND ( MD1.col12 IS NULL
                         OR MD2.col12 IS NULL
                         OR MD1.col12 = MD2.col12 )
                  AND ( MD1.col13 IS NULL
                         OR MD2.col13 IS NULL
                         OR MD1.col13 = MD2.col13 )
                  AND ( MD1.col14 IS NULL
                         OR MD2.col14 IS NULL
                         OR MD1.col14 = MD2.col14 )
                  AND ( MD1.col15 IS NULL
                         OR MD2.col15 IS NULL
                         OR MD1.col15 = MD2.col15 )
                  AND ( MD1.col16 IS NULL
                         OR MD2.col16 IS NULL
                         OR MD1.col16 = MD2.col16 )
ORDER  BY MD1.schemaname,
          MD1.tablename,
          MD1.indexname


This query finds duplicate (overlapping) indexes, so you can evaluate and remove the redundant ones.

In this post, we covered some useful queries for working with Microsoft SQL Server. These queries can help you perform tasks such as listing all databases on a server, viewing the schema of a table, checking the size of a database, viewing the current user and the date and time, linking to another server's database, and finding duplicate indexes.

I hope these queries are useful for you!




SQL Server Hosting - HostForLIFE.eu :: Compare And Identify Data Differences Between Two SQL Server Tables

clock July 7, 2023 07:58 by author Peter

Frequently, systems utilize distributed databases containing distributed tables. Multiple mechanisms facilitate distribution, including replication. In this situation, it is essential to continuously maintain the synchronization of a specific data segment. Additionally, it is necessary to verify the synchronization itself. This is when it becomes necessary to compare data in two tables.

Before comparing data in two tables, you must ensure that their schemas are either identical or acceptably different. Acceptably different means a difference in the definitions of the two tables that still allows correct data comparison. For example, the types of corresponding columns of the compared tables must map without data loss.

Compare the SQL Server schemas of the two Employee tables from the JobEmpl and JobEmplDB databases.
For further work, it is necessary to review the Employee table definitions in the JobEmpl and JobEmplDB databases:
USE [JobEmpl]
GO

SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

CREATE TABLE [dbo].[Employee](
    [EmployeeID] [int] IDENTITY(1,1) NOT NULL,
    [FirstName] [nvarchar](255) NOT NULL,
    [LastName] [nvarchar](255) NOT NULL,
    [Address] [nvarchar](max) NULL,
    [CheckSumVal]  AS (checksum((coalesce(CONVERT([nvarchar](max),[FirstName]),N'')+coalesce(CONVERT([nvarchar](max),[LastName]),N''))+coalesce(CONVERT([nvarchar](max),[Address]),N''))),
    [REPL_GUID] [uniqueidentifier] ROWGUIDCOL  NOT NULL,
    CONSTRAINT [PK_Employee_EmployeeID] PRIMARY KEY CLUSTERED
    (
        [EmployeeID] ASC
    )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 80) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO

ALTER TABLE [dbo].[Employee] ADD CONSTRAINT [Employee_DEF_REPL_GUID]  DEFAULT (newsequentialid()) FOR [REPL_GUID]
GO

-- and

USE [JobEmplDB]
GO

SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

CREATE TABLE [dbo].[Employee](
    [EmployeeID] [int] IDENTITY(1,1) NOT NULL,
    [FirstName] [nvarchar](255) NOT NULL,
    [LastName] [nvarchar](255) NOT NULL,
    [Address] [nvarchar](max) NULL,
    CONSTRAINT [PK_Employee_EmployeeID] PRIMARY KEY CLUSTERED
    (
        [EmployeeID] ASC
    )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO

Comparing Database Schemas using SQL Server Data Tools
With the help of Visual Studio and SSDT, you can compare database schemas. To do this, you need to create a new project “JobEmployee” by doing the following:

Then you need to import the database.
To do this, right-click the project and in the context menu, select Import \ Database..:

Next, hit the “Select connection…” button and in the cascading menu, in the “Browse” tab set up the connection to JobEmpl database as follows:

Next, click the “Start” button to start the import of the JobEmpl database:

You will then see a window showing the progress of the database import:

When the database import process is completed, press “Finish”:

Once it is finished, JobEmployee project will contain directories, subdirectories, and database objects definitions in the following form:

In the same way, we create a similar JobEmployeeDB project and import JobEmplDB database into it:



Now, right-click the JobEmployee project and in the drop-down menu, select “Schema Compare”:

This will bring up the database schema compare window.
In the window, you need to select the projects as source and target, and then click the “Compare” button to start the comparison process:

We can see here that despite the differences between the definitions of the Employee tables in two databases, the table columns that we need for comparison are identical in data type. This means that the difference in the schemas of the Employee tables is acceptable. That is, we can compare the data in these two tables.
We can also use other tools to compare database schemas such as dbForge Schema Compare for SQL Server.

Comparing database schemas with the help of dbForge Schema Compare
Now, to compare database table schemas, we use a tool dbForge Schema Compare for SQL Server, which is also included in SQL Tools.
For this, in SSMS, right-click the first database and in the drop-down menu, select Schema Compare\ Set as Source:

We simply transfer JobEmplDB, the second database, to Target area and click the green arrow between source and target:

You simply need to press the “Next” button in the opened database schema comparison project:

Leave the following settings at their defaults and click the “Next” button:


In the “Schema Mapping” tab, we also leave everything by default and press the “Next” button:

On the “Table Mapping” tab, select the required Employee table and on the right of the table name, click the ellipsis:


The table mapping window opens up:

In our case, only 4 fields are mapped, because the last two fields exist only in the JobEmpl database and are absent from the JobEmplDB database.
This setting is useful when column names in the source table and target table do not match.
The “Column details” table displays the column definition details in two tables: on the left – from the source database and on the right – from the target database.
Now hit the “OK” button.


Now, to start the database schema comparison process, click the “Compare” button:

A progress bar will appear

We then select the desired Employee table.

At the bottom left, you can see the code for defining the source database table and on the right – the target database table.
We can see here, as before, that the definitions of the Employee table in the two databases JobEmpl and JobEmplDB differ acceptably, which is why we can compare the data in these two tables.

Let us now move on to the comparison of the data in two tables itself.

Comparing database data using SSIS

Let’s first make a comparison using SSIS. For this, you need to have SSDT installed.
We create a project called Integration Service Project in Visual Studio and name it IntegrationServicesProject


We then create three connections:

    To the source JobEmpl database
    To the target JobEmplDB database
    To the JobEmplDiff database, where the table of differences will be displayed the following way below:

That way, new connections will be displayed in the project.

Then, in the project, in the “Control Flow” tab, we create a data flow task and name it “data flow task”

Let us now switch to the data flow and create an element “Source OLE DB” by doing the following

On the “Columns” tab, we then select the fields required for comparison
And now, right-click the created data source and in the drop-down menu, select “Show Advanced Editor…”

Next, for each of the “Output Columns” groups, set the SortKeyPosition property of the EmployeeID column to 1. That is, we sort by the EmployeeID field value in ascending order.

Similarly, we create and configure the data source for the JobEmplDB database.
As a result, we have two sources in the data flow task.

Now, we create a merge join element in the following way:


Please note that we merge tables using a full outer join.
We then connect both sources to the created “Merge Join” element.

We connect JobEmpl as the left input and JobEmplDB as the right input.
In fact, the order is not that important; it could be done the other way around.
In the JobEmplDiff database, we create a table called EmployeeDiff to hold the data differences:

USE [JobEmplDiff]
GO

SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

CREATE TABLE [dbo].[EmployeeDiff](
    [ID] [int] IDENTITY(1,1) NOT NULL,
    [EmployeeID] [int] NULL,
    [EmployeeID_2] [int] NULL,
    [FirstName] [nvarchar](255) NULL,
    [FirstName_2] [nvarchar](255) NULL,
    [LastName] [nvarchar](255) NULL,
    [LastName_2] [nvarchar](255) NULL,
    [Address] [nvarchar](max) NULL,
    [Address_2] [nvarchar](max) NULL,
    CONSTRAINT [PK_EmployeeDiff_1] PRIMARY KEY CLUSTERED
    (
        [ID] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 80) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO


Now, let us get back to our project and, in the data flow task, create a conditional split element.

In the Condition field for the NotMatch output, type the following expression:

(
  ISNULL(EmployeeID)
  || ISNULL(EmployeeID_2)
)
|| (
  REPLACENULL(FirstName, "") != REPLACENULL(FirstName_2, "")
)
|| (
  REPLACENULL(LastName, "") != REPLACENULL(LastName_2, "")
)
|| (
  (
    Address != Address_2
    && (!ISNULL(Address))
    && (!ISNULL(Address_2))
  )
  || ISNULL(Address) != ISNULL(Address_2)
)

This expression evaluates to true when, for the same EmployeeID value, the field values differ (taking NULL values into account). It is also true when an EmployeeID value from one table has no match in the other table, that is, when a row exists in only one of the two tables.
You can obtain a similar result in the form of selection using the following T-SQL query:

SELECT
    e1.[EmployeeID] AS [EmployeeID],
    e2.[EmployeeID] AS [EmployeeID_2],
    e1.[FirstName] AS [FirstName],
    e2.[FirstName] AS [FirstName_2],
    e1.[LastName] AS [LastName],
    e2.[LastName] AS [LastName_2],
    e1.[Address] AS [Address],
    e2.[Address] AS [Address_2]
FROM
    [JobEmpl].[dbo].[Employee] AS e1
    FULL OUTER JOIN [JobEmplDB].[dbo].[Employee] AS e2 ON e1.[EmployeeID] = e2.[EmployeeID]
WHERE
    (e1.[EmployeeID] IS NULL)
    OR (e2.[EmployeeID] IS NULL)
    OR (COALESCE(e1.[FirstName], N'') <> COALESCE(e2.[FirstName], N''))
    OR (COALESCE(e1.[LastName], N'') <> COALESCE(e2.[LastName], N''))
    OR (COALESCE(e1.[Address], N'') <> COALESCE(e2.[Address], N''));
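Outside of SSIS, the same difference logic can be sketched in plain Python with SQLite. This is a minimal, self-contained illustration on made-up sample data: the src and dst tables and their rows are hypothetical stand-ins for the two Employee tables. Since older SQLite builds do not support FULL OUTER JOIN, it is emulated here with a LEFT JOIN plus a UNION ALL for rows that exist only in the target:

```python
import sqlite3

# Hypothetical in-memory tables standing in for the Employee tables
# of JobEmpl (src) and JobEmplDB (dst); the rows are sample data.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE src (EmployeeID INTEGER, FirstName TEXT, LastName TEXT, Address TEXT);
CREATE TABLE dst (EmployeeID INTEGER, FirstName TEXT, LastName TEXT, Address TEXT);
INSERT INTO src VALUES (1, 'John', 'Smith', 'London'),
                       (2, 'Anna', 'Lee',   NULL),
                       (3, 'Max',  'Gray',  'Berlin');
INSERT INTO dst VALUES (1, 'John', 'Smith', 'London'),
                       (2, 'Lee',  'Anna',  NULL);
""")

# FULL OUTER JOIN emulated as LEFT JOIN + UNION ALL of target-only rows.
diff = con.execute("""
SELECT s.EmployeeID, d.EmployeeID, s.FirstName, d.FirstName,
       s.LastName,  d.LastName,   s.Address,   d.Address
FROM src AS s
LEFT JOIN dst AS d ON s.EmployeeID = d.EmployeeID
WHERE d.EmployeeID IS NULL
   OR COALESCE(s.FirstName, '') <> COALESCE(d.FirstName, '')
   OR COALESCE(s.LastName, '')  <> COALESCE(d.LastName, '')
   OR COALESCE(s.Address, '')   <> COALESCE(d.Address, '')
UNION ALL
SELECT NULL, d.EmployeeID, NULL, d.FirstName, NULL, d.LastName, NULL, d.Address
FROM dst AS d
LEFT JOIN src AS s ON s.EmployeeID = d.EmployeeID
WHERE s.EmployeeID IS NULL
""").fetchall()

for row in diff:
    print(row)
```

On this sample data, the result contains the row with the swapped first and last names (EmployeeID 2) and the row that exists only in the source (EmployeeID 3).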


Now, let us connect the “Merge Join” and “Conditional Split” elements.

Next, we create an OLE DB destination element.

Now, we map the columns.

We leave the “Error Output” tab at its default settings.

We can now connect the “Conditional Split” and “OLE DB JobEmplDiff” elements. As a result, we get a complete data flow.

Let us run the package that we have obtained.

When the package completes successfully, all its elements are marked with green circles.

If an error occurs, the failing element is marked with a red circle instead of a green one; read the log files to diagnose and resolve the issue.
To analyze the data difference, we need to derive the necessary data from the EmployeeDiff table of the JobEmplDiff database:

SELECT
    [ID],
    [EmployeeID],
    [EmployeeID_2],
    [FirstName],
    [FirstName_2],
    [LastName],
    [LastName_2],
    [Address],
    [Address_2]
FROM
    [JobEmplDiff].[dbo].[EmployeeDiff]

Here, you can see rows from the Employee table of the JobEmpl database where Address is not set and where FirstName and LastName are swapped in some rows. In addition, a number of rows that exist in JobEmpl are missing from JobEmplDB.
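When reviewing the results, it helps to group each difference row by its kind: a key missing on one side versus a plain value mismatch. A minimal Python sketch of such a classification follows; the sample rows are hypothetical and merely follow the EmployeeDiff column order shown above:

```python
def classify(row):
    """Classify a difference row in (EmployeeID, EmployeeID_2, ...) column order."""
    employee_id, employee_id_2 = row[0], row[1]
    if employee_id_2 is None:
        return "missing in target"   # row exists only in JobEmpl
    if employee_id is None:
        return "missing in source"   # row exists only in JobEmplDB
    return "value mismatch"          # key matched, but some field differs

# Hypothetical sample rows matching the EmployeeDiff column order.
rows = [
    (2, 2, "Anna", "Lee", "Lee", "Anna", None, None),
    (3, None, "Max", None, "Gray", None, "Berlin", None),
]
print([classify(r) for r in rows])  # → ['value mismatch', 'missing in target']
```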

HostForLIFE SQL Server 2019 Hosting
HostForLIFE is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.


