European Windows 2012 Hosting BLOG

BLOG about Windows 2012 Hosting and SQL 2012 Hosting - Dedicated to European Windows Hosting Customer

SQL Server Hosting - HostForLIFE :: Usage, Features, and Limitations of SQL Server CTE

clock August 15, 2023 07:54 by author Peter

A common table expression (CTE) is a temporary named result set that you can reference within a SQL Server SELECT, INSERT, UPDATE, or DELETE statement. First introduced in SQL Server 2005, CTEs are comparable to derived tables and views. A CTE is defined with a WITH clause followed by a CTE name and a query expression. Once the CTE query runs and fills the CTE with data, the main query can refer to the CTE as if it were a table or view.

Some important benefits of CTEs

  • CTE queries can be modularized into individual CTE blocks that can then be readily reused in different parts of the query. This increases readability and consistency.
  • CTEs allow you to design recursive queries in which a CTE refers to itself. This is beneficial for hierarchical or tree-like data.
  • Simpler semantics: CTEs are often easier to understand and write than deeply nested derived tables.
  • CTEs can help the optimizer relative to nested views and subqueries in some cases; a CTE is not materialized, and its definition is expanded into the query plan wherever it is referenced.

When Should CTEs Be Used?

Here are some frequent applications for CTEs:
  • Simplifying complex logic - As previously stated, CTEs let you break complex logic down into simpler modular parts, boosting readability.
  • Reusable query logic - Once a CTE is defined, it can be referenced several times in the query. This reduces the need for repetitious logic.
  • CTEs can recursively reference themselves to query hierarchical data such as org charts, folders, and so on.
  • Replace views - In some circumstances, a CTE can accomplish the same thing as a view, sometimes more efficiently, since the optimizer tunes the CTE query in the context of the statement that uses it.
  • Replace derived tables - CTEs can be used to replace derived tables in order to simplify query syntax.
  • Data exploration/investigation - Because CTE definitions are localized to a single statement, they might be beneficial for ad hoc data exploration prior to permanent table storage.

Now let's look at some examples to demonstrate how to write and use CTEs.

Syntax
WITH CTE_Name (Column1, Column2)

AS
(
    -- CTE Definition using SELECT
)

SELECT *
FROM CTE_Name

A WITH clause is used before the primary SELECT statement to define the CTE. Following the WITH keyword, you supply the CTE name and optional column list. The AS keyword denotes the beginning of the CTE definition query.

After you've defined the CTE, you may use it in the primary SELECT query just like any other table or view.

Consider the following example.

WITH Sales_CTE (SalesPerson, SalesAmount)

AS
(
   SELECT SalesPerson, SUM(SalesAmount)
   FROM Sales
   GROUP BY SalesPerson
)

SELECT *
FROM Sales_CTE

This CTE summarizes sales per person into a temporary result set named Sales_CTE. The main query simply selects from the CTE to display the sales summary.

Multiple CTEs

You can define multiple CTEs in a single query by listing them sequentially after the WITH clause:
WITH Sales_CTE (SalesPerson, SalesAmount)

AS
(
SELECT SalesPerson, SUM(SalesAmount)
FROM Sales
GROUP BY SalesPerson
),

TopSales_CTE (TopSalesPerson, TopSalesAmount)

AS
(
SELECT TOP 1 SalesPerson, SalesAmount
FROM Sales_CTE
ORDER BY SalesAmount DESC
)

SELECT *
FROM TopSales_CTE


Here we define two CTEs - Sales_CTE and TopSales_CTE. The second CTE references the first CTE. The main query selects the top salesperson from the second CTE.

Recursive CTE Example

One of the key benefits of CTEs is the ability to write recursive queries. Here is an example to find all managers and employees in a hierarchy.
WITH Managers_CTE (EmployeeID, ManagerID, EmployeeName, ManagerName, Level)

AS
(
-- Anchor member: top-level employees with no manager
SELECT e.EmployeeID, e.ManagerID, e.EmployeeName,
       CAST(NULL AS NVARCHAR(255)) AS ManagerName,
       0 AS Level
FROM Employees e
WHERE e.ManagerID IS NULL

UNION ALL

-- Recursive member that references CTE name
SELECT e.EmployeeID, e.ManagerID, e.EmployeeName, m.EmployeeName,
       m.Level + 1
FROM Employees e
INNER JOIN Managers_CTE m
ON e.ManagerID = m.EmployeeID
)

-- Outer query

SELECT *
FROM Managers_CTE


The anchor member defines the root level of the hierarchy (employees with no manager).
The recursive member joins back to the CTE name to reach the next level.
UNION ALL combines each round of recursion.
The outer query returns the final result set.

This builds the org hierarchy iteratively until all levels are retrieved.
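By default, SQL Server stops a recursive CTE with an error after 100 levels of recursion. If your hierarchy is deeper, or you want a tighter safety net, you can adjust the ceiling with the MAXRECURSION query hint on the outer query. A minimal sketch (the limit of 10 is only an example):

SELECT *
FROM Managers_CTE
OPTION (MAXRECURSION 10);  -- fail the query if recursion exceeds 10 levels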
CTE with INSERT Example

In addition to SELECT, CTEs can be used with data modification statements like INSERT.
WITH Sales_CTE (SalesID, SalesPersonID, SalesAmount)

AS
(

SELECT SalesID, SalesPersonID, SalesAmount
FROM Sales
WHERE SalesDate = '20180901'
)

INSERT INTO SalesByDay (SalesDate, SalesPersonID, SalesAmount)
SELECT '20180901', SalesPersonID, SalesAmount

FROM Sales_CTE


This inserts sales for a specific date into a separate SalesByDay table using a CTE as the data source.

CTE with UPDATE Example
You can also leverage CTEs with UPDATE statements.

WITH Sales_CTE (SalesID, SalesAmount)

AS
(

SELECT SalesID, SalesAmount
FROM Sales
WHERE SalesDate = '20180901'

)

UPDATE SalesByDay
SET SalesAmount = Sales_CTE.SalesAmount

FROM SalesByDay
INNER JOIN Sales_CTE
ON SalesByDay.SalesID = Sales_CTE.SalesID
WHERE SalesByDay.SalesDate = '20180901'


Here we populate matching rows in another table using values from the CTE.
CTE with DELETE Example

Similarly, CTEs can be utilized with DELETE statements.

WITH InactiveSales_CTE (SalesID, SalesDate)
AS
(
SELECT SalesID, SalesDate
FROM Sales
WHERE SalesDate < '20180101'
)

DELETE SalesByDay
FROM SalesByDay
WHERE SalesID IN (SELECT SalesID
                  FROM InactiveSales_CTE)


This deletes related rows in another table based on inactive sales data from the CTE.

Temporary CTE Benefits

A key benefit of CTEs is that they are temporary named result sets that only exist during query execution. This provides several advantages.

  • No need to persist CTEs in the database, unlike views or permanent tables. This reduces storage overhead.
  • Can reference CTEs multiple times in a statement without repetitive subqueries or joins. Improves maintainability.
  • The optimizer compiles the CTE as part of each statement's plan, so it is optimized in the context of the query that uses it.
  • Can replace inline derived tables and views to simplify and improve query semantics.
  • Great for ad hoc data investigation before determining permanent tables.


In summary, CTEs are very useful in SQL Server for simplifying complex logic, improving query readability, handling recursive queries, and temporarily staging data transformations for business reporting and analysis. As you gain more experience with SQL Server, be sure to add CTEs to your development toolbox.

Features of CTEs

  • Improved Readability and Maintainability: CTEs enhance the readability of complex queries by breaking them into smaller logical sections. This is especially useful when dealing with queries involving multiple joins, subqueries, or complex calculations. The segmented structure makes it easier to understand and troubleshoot the query.
  • Modularity and Reusability: CTEs enable the creation of modular SQL code. You can define CTEs for specific tasks or calculations and then reuse them across different parts of the main query. This promotes code reusability, reduces redundancy, and simplifies the modification of specific parts of the query.
  • Recursive Queries: CTEs are ideal for building recursive queries, where a query references itself to traverse hierarchical or recursive data structures. This is commonly used for tasks like navigating organizational charts, product categories, or tree-like data.
  • Self-Joins and Window Functions Simplification: When dealing with self-joins or complex calculations involving window functions, CTEs provide a clearer and more organized way to express the logic. They break down intricate operations into manageable steps, leading to more concise and readable code.
  • Code Organization and Reusability: Complex subqueries can be defined within CTEs, allowing for cleaner code organization. This organization makes it easier to understand the purpose of each part of the query, leading to improved maintainability.
  • Optimization Opportunities: In some cases, SQL Server's query optimizer can optimize CTEs more effectively than equivalent subqueries. This optimization can lead to better execution plans and potentially improved performance.


Limitations of CTEs

  • Single-Statement Scope: CTEs are scoped to a single SQL statement. They cannot be referenced across different statements in the same batch. This limitation can restrict their use in complex scenarios that involve multiple related statements.
  • Performance Considerations: While CTEs enhance query organization, they may not always result in the most optimal execution plans. In certain cases, complex CTEs can lead to performance issues, especially when dealing with large datasets or intricate queries.
  • Memory Usage: Recursive CTEs, which are used for hierarchical or recursive queries, can consume significant memory, particularly when dealing with deep hierarchies. This can lead to performance degradation if memory usage is not managed effectively.
  • Lack of Indexing Support: CTEs do not support indexing. Unlike temporary tables, CTEs do not allow you to create indexes to improve query performance. This can be a limitation when working with large datasets that require efficient access patterns.
  • Nested CTEs and Complexity: Nesting multiple CTEs within each other can lead to complex and challenging-to-maintain code. Overuse of nesting can make the query difficult to understand, debug, and optimize.
  • Limited Use in Stored Procedures: CTEs are more commonly used in ad-hoc queries. While they can be used within stored procedures, their single-statement scope can sometimes be restrictive when working with multiple statements in a procedure.
  • Complexity Management: While CTEs enhance query readability, they can also introduce complexity, especially when dealing with deeply nested or highly recursive queries. Overusing CTEs might lead to code that is harder to understand and maintain.


Conclusion
Common Table Expressions (CTEs) in SQL Server offer valuable features that enhance the readability, modularity, and organization of complex queries. Their ability to handle recursive operations and simplify self-joins and window functions makes them a versatile tool for developers. However, it's crucial to be aware of the limitations, such as single-statement scope, performance considerations, and lack of indexing support. By understanding both the features and limitations of CTEs, developers can leverage them effectively to create optimized and maintainable SQL code. Properly using CTEs requires a balance between leveraging their advantages and mitigating potential drawbacks.

HostForLIFE.eu SQL Server 2019 Hosting
HostForLIFE.eu is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.



SQL Server Hosting - HostForLIFE :: Working with JSON in SQL Server

clock August 9, 2023 07:43 by author Peter

JSON (JavaScript Object Notation) has grown in popularity as a simple and versatile format for delivering and storing information. SQL Server's built-in JSON functions let you store JSON text in ordinary NVARCHAR columns and query, index, and modify it. This tutorial will show you how to query, update, and work with JSON documents in SQL Server.

Create one table and work with a table named "members" to store member information, including locations in JSON format
CREATE TABLE Members (
    userID INT PRIMARY KEY,
    FirstName NVARCHAR(50),
    LastName NVARCHAR(50),
    Location NVARCHAR(max),
    Salary NVARCHAR(50)
);

What is the best way to insert JSON data into SQL Server?
SQL Server does not have a dedicated JSON data type in these versions; instead, JSON documents are stored as text in NVARCHAR(max) columns, and the built-in JSON functions introduced in SQL Server 2016 and later editions validate, query, and modify them in place.
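Because the column is plain NVARCHAR(max), it can be worth enforcing that it always contains valid JSON. A minimal sketch using the built-in ISJSON function (the constraint name is my own):

ALTER TABLE Members
ADD CONSTRAINT CK_Members_Location_IsJson CHECK (ISJSON(Location) = 1);

With the table in place, we insert a member with two addresses: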

INSERT INTO Members (userID, FirstName, LastName, Location, Salary)
VALUES (2, 'Peter', 'Scott', '[{"Type": "Home", "Street": "London", "City": "lucknow", "Zip": "111111"}, {"Type": "Work", "Street": "s1", "City": "Manchester", "Zip": "111111"}]', 70000);

How to query JSON data in SQL Server?
SQL Server provides several functions for working with JSON data. One of the most frequently used is JSON_VALUE, which extracts a scalar value from a JSON string; combined with OPENJSON, it can shred a JSON array into rows:
SELECT userID, FirstName, LastName,
    JSON_VALUE(Loc.Value, '$.Street') AS Street,
    JSON_VALUE(Loc.Value, '$.City') AS City,
    JSON_VALUE(Loc.Value, '$.Zip') AS Zip, Salary
FROM Members
CROSS APPLY OPENJSON(Location) AS Loc
WHERE JSON_VALUE(Loc.Value, '$.Type') = 'Home';
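When the JSON shape is known up front, OPENJSON with an explicit WITH schema projects typed columns directly and avoids the repeated JSON_VALUE calls. A sketch against the same Members table (the column sizes are assumptions):

SELECT m.userID, m.FirstName, m.LastName, a.Street, a.City, a.Zip, m.Salary
FROM Members m
CROSS APPLY OPENJSON(m.Location)
WITH (
    Type   NVARCHAR(10)  '$.Type',
    Street NVARCHAR(100) '$.Street',
    City   NVARCHAR(100) '$.City',
    Zip    NVARCHAR(20)  '$.Zip'
) AS a
WHERE a.Type = 'Home';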

How to extract a JSON object or array?
The JSON_QUERY function extracts a JSON object or array from a JSON string (unlike JSON_VALUE, which returns a scalar value).


SELECT userID, JSON_QUERY(Location) AS Locations
FROM Members;

How to update JSON data in SQL Server?
SQL Server's JSON_MODIFY function updates the value at a given JSON path and returns the modified JSON string. Note that JSON path property names are case-sensitive and array indexes are zero-based, so the second (Work) address is addressed as $[1]:


UPDATE Members
SET Location = JSON_MODIFY(Location, '$[1].Street', 'New area')
WHERE userID = 2;

How to return query results as JSON in SQL Server?
The FOR JSON clause formats a result set as JSON text. FOR JSON AUTO structures the output automatically based on the SELECT statement:
SELECT FirstName, LastName
FROM Members
FOR JSON AUTO;
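For full control over the output shape, FOR JSON PATH lets you define nesting through column aliases, and the ROOT option wraps the result in a named object. A short sketch (the alias and root names are arbitrary):

SELECT FirstName AS 'name.first', LastName AS 'name.last'
FROM Members
FOR JSON PATH, ROOT('members');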

Thanks for reading, and I hope you like it.

HostForLIFE.eu SQL Server 2019 Hosting
HostForLIFE.eu is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.



SQL Server Hosting - HostForLIFE :: SSAS Tabular Models Documentation Guide

clock August 1, 2023 09:48 by author Peter

In the labyrinthine world of data modeling, using tools like SQL Server Analysis Services (SSAS) often feels like you're assembling a thousand-piece puzzle. The beauty and intricacy of a tabular model lie in its vast networks of tables, relationships, and measures, all working harmoniously to reveal valuable insights. However, without effective documentation, this puzzle might become a maze for those trying to understand it in the future.

I recently navigated the exciting process of working with a complex SSAS tabular model - an impressive structure of around 20 tables, a web of relationships, and an array of measures. Along the way, I realized the necessity for clear, thorough, and accessible documentation. Not only does it ease the navigation for future explorers of this model, but it also ensures the maintainability and scalability of the work.

Today, I'd like to share the comprehensive approach I used to document my tabular model, the best practices I discovered, and the strategies that made the process smooth and efficient.

Step 1. Harnessing the Power of Descriptions
Every object in SSAS, be it tables, columns, measures, or relationships, has a property known as "Description." I used this feature to provide a meaningful description for every object, allowing anyone reviewing the model to grasp its components' purpose and role quickly. Best practice tip. Always provide concise yet comprehensive descriptions, and maintain a consistent style.

Step 2. The Indispensable Data Dictionary
I created a data dictionary to detail table names, column names, their respective data types, descriptions, and any relevant notes. This serves as a reliable reference point, especially for those new to the model. Best practice tip. Keep the dictionary updated and synchronize it with the model to ensure they're always in alignment.

Step 3. Visualize with Diagrams

I used software like Visio to create comprehensive diagrams to represent the relationships between tables. These diagrams provide an overview of the model's interconnectedness, making it more comprehensible. Best practice tip. Make your diagrams clear and easy to follow, and ensure they represent the model's structure correctly.

Step 4. Detailing DAX Formulas
For each measure in the model, I meticulously documented the DAX formulas, explaining their purpose and logic. This transparency helps future developers understand the model's inner workings. Best practice tip. Be precise and detailed in your formula descriptions, explaining the "what" and the "why."

Step 5. Process and Refresh Strategy
I documented the model's processing and refresh strategy to ensure reviewers understand how its data stays up to date. This offers insights into when and how the data updates. Best practice tip. Include potential dependencies or bottlenecks in your documentation to provide a complete picture of the data refresh process.

Step 6. Centralizing the Information

With all the information at hand, it was crucial to present everything in a unified, accessible format. Tools like Microsoft Word and SharePoint allowed me to create a central hub of information that could be easily accessed and understood. Best practice tip. Keep your documentation easily accessible and organize it in an intuitive way.

Automation can be a game-changer. Third-party tools like Power BI Helper, DAX Studio, or SQL Power Doc for PowerShell can generate documentation automatically, saving you significant time and streamlining the process. Best practice tip. Regularly update and review auto-generated documentation to ensure it accurately reflects the current state of your model.

A crucial lesson from my experience. Effective documentation is a living entity; it grows, adapts, and evolves with your model. As a best practice, always keep it updated whenever you make changes to the model.

As we navigate the exciting world of data modeling, let's aim to make our data puzzles a little less puzzling and a lot more engaging!

Until next time, happy modeling!

HostForLIFE.eu SQL Server 2019 Hosting
HostForLIFE.eu is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.



SQL Server Hosting - HostForLIFE :: How Do I Write CRUD Stored Procedures?

clock July 24, 2023 08:09 by author Peter

To interact with your database tables quickly and securely, you can create a set of SQL Server stored procedures known as CRUD (Create, Read, Update, Delete) stored procedures. CRUD operations are crucial in database systems because they cover every way data is created and modified.

How can we build a SQL Server insert stored procedure?
You can use the Create operation to add new records to the database. The following stored procedure inserts a new project:
CREATE PROCEDURE InsertProject
    @ProjectName NVARCHAR(500),
    @ClientName NVARCHAR(500),
    @ProjectManagerId NVARCHAR(500),
    @ProjectDescription NVARCHAR(500) = NULL,
    @StartDate DATETIME
AS
BEGIN
    SET NOCOUNT ON;

    INSERT INTO Projects
    (
        [ProjectName],
        [ClientName],
        [ProjectManagerId],
        [CreatedDate],
        [ProjectDescription],
        [StartDate]
    )
    VALUES
    (
        @ProjectName,
        @ClientName,
        @ProjectManagerId,
        GETUTCDATE(),
        @ProjectDescription,
        @StartDate
    )
END

EXEC InsertProject
    @ProjectName = 'Management Software',
    @ClientName = 'Peter',
    @ProjectManagerId = 'A51DC085-073F-4D3A-AFC8-ACE61B89E8C8',
    @ProjectDescription = 'This is a sample Management Software.',
    @StartDate = '2023-01-20 12:45:00.000';
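If the caller needs the key of the newly inserted row, a common variation (a sketch, not part of the procedure above; it assumes Projects has an IDENTITY column named Id) is to return SCOPE_IDENTITY() at the end of the procedure body:

    -- Added at the end of the InsertProject body, after the INSERT:
    SELECT CAST(SCOPE_IDENTITY() AS INT) AS NewProjectId;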

How can a Read stored procedure be created in SQL Server?
The Read operation is used to retrieve data from the database. You can create a select stored procedure by using a straightforward, constraint-free SELECT statement to retrieve all the data from a table.

CREATE PROCEDURE GetAllProject

AS
BEGIN
    SET NOCOUNT ON;
   SELECT Id, ProjectName, ClientName,ProjectDescription,StartDate,EndDate,CreatedDate from Projects
END

EXEC GetAllProject

Output

Use a WHERE clause with the proper condition to retrieve a specific row from the table using a unique identifier (such as Id) within a stored procedure.
CREATE PROCEDURE GetProjectByProjectId
     @Id INT
AS
BEGIN
    SET NOCOUNT ON;
   SELECT ProjectName,ClientName,ProjectDescription,CreatedDate from Projects
      WHERE Id = @Id;
END


EXEC GetProjectByProjectId @Id=2

Output

How can we create an Update stored procedure in SQL Server?

Existing records in the database can be modified using the Update operation.

CREATE PROCEDURE [UpdateProject]
     @id INT
    ,@ProjectName NVARCHAR(500)
    ,@ClientName NVARCHAR(500)
    ,@ProjectManagerId NVARCHAR(500)
    ,@ProjectDescription NVARCHAR(500)
    ,@StartDate DATETIME = NULL
    ,@EndDate DATETIME = NULL
    ,@UpdatedDate DATETIME = NULL
AS
BEGIN
    UPDATE Projects
    SET ProjectName = @ProjectName
        ,ClientName = @ClientName
        ,ProjectManagerId = @ProjectManagerId
        ,ProjectDescription = @ProjectDescription
        ,StartDate = @StartDate
        ,EndDate = @EndDate
        ,UpdatedDate = getutcdate()
    WHERE Id = @Id
END


EXEC UpdateProject
    @Id=1,
    @ProjectName = 'TimeSystem Software',
    @ClientName = 'Peter',
    @ProjectManagerId = 'A51DC085-073F-4D3A-AFC8-ACE61B89E8C8',
    @ProjectDescription = 'This is a sample TimeSystem Software.',
    @StartDate = '2023-01-20 12:45:00.000',
    @EndDate = '2023-04-20 12:45:00.000';

Note that the @UpdatedDate parameter is omitted from the call: EXEC does not accept function calls such as GETUTCDATE() as argument values, and the procedure sets UpdatedDate itself anyway.


Output

How can we create a Delete stored procedure in SQL Server?
To delete records from the database, use the Delete operation. The following stored procedure deletes a project by its ID:
CREATE PROCEDURE DeleteProjectById
   @id int
AS
BEGIN
    SET NOCOUNT ON;

    DELETE FROM Projects
    WHERE id = @id;
END


EXEC DeleteProjectById @Id=2

Output

To preserve historical data, soft deletion rather than hard deletion is frequently used in database designs. Instead of physically removing records from the database, a soft delete marks them as inactive or deleted. This strategy keeps the history available for tracking and later retrieval.
CREATE PROCEDURE [DeleteProject]
    @id int
AS
BEGIN
    UPDATE Projects SET IsDelete = 1, IsActive = 0
    WHERE Id = @id
END

EXEC DeleteProject @Id=2

Output

HostForLIFE.eu SQL Server 2019 Hosting
HostForLIFE.eu is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.

 



SQL Server Hosting - HostForLIFE :: Recursive Queries in SQL

clock July 18, 2023 10:55 by author Peter

What are Recursive queries in SQL?
Recursive queries in SQL are queries that involve self-referential relationships within a table. They allow you to perform operations that require iterative processing, enabling you to traverse and manipulate hierarchical data structures efficiently.

Syntax of Recursive Queries
WITH RECURSIVE cte_name (column1, column2, ...) AS (
    -- Anchor member
    SELECT column1, column2, ...
    FROM table_name
    WHERE condition

    UNION ALL

    -- Recursive member
    SELECT column1, column2, ...
    FROM table_name
    JOIN cte_name ON join_condition
    WHERE condition
)
SELECT column1, column2, ...
FROM cte_name;


Note that the RECURSIVE keyword is required by PostgreSQL and MySQL but is not used in SQL Server, where a plain WITH clause defines a recursive CTE. Recursive queries consist of two main components:

1. Anchor Member
The anchor member establishes the base case or initial condition for the recursive query. It selects the initial set of rows or records that serve as the starting point for the recursion. The anchor member is a regular SELECT statement that defines the base case condition.

2. Recursive Member
The recursive member defines the relationship and iteration process in the recursive query. It specifies how to generate new rows or records by joining the result of the previous iteration with the underlying table. The recursive member includes a join condition that establishes the relationship between the previous iteration and the current iteration. It also includes termination criteria to stop the recursion when certain conditions are met.

Example 1. Hierarchical Data - Employee Hierarchy
Consider the "Employees" table, which has the columns "EmployeeID" and "ManagerID." The employees reporting to a certain manager will be retrieved via a recursive query that traverses the employee hierarchy.
-- Create the Employees table
CREATE TABLE Employees (
    EmployeeID INT,
    ManagerID INT
);

-- Insert sample data
INSERT INTO Employees (EmployeeID, ManagerID)
VALUES (1, NULL),
       (2, 1),
       (3, 1),
       (4, 2),
       (5, 2),
       (6, 3),
       (7, 6);

-- Perform the recursive query
WITH RECURSIVE EmployeeHierarchy (EmployeeID, ManagerID, Level) AS (
    -- Anchor member: Retrieve the root manager
    SELECT EmployeeID, ManagerID, 0
    FROM Employees
    WHERE ManagerID IS NULL

    UNION ALL

    -- Recursive member: Retrieve employees reporting to each manager
    SELECT e.EmployeeID, e.ManagerID, eh.Level + 1
    FROM Employees e
    JOIN EmployeeHierarchy eh ON e.ManagerID = eh.EmployeeID
)
SELECT EmployeeID, ManagerID, Level
FROM EmployeeHierarchy

Output

Example 2. Hierarchical Data - File System Structure
Consider a table called "Files" that has the columns "FileID" and "ParentID," which describe the hierarchy of a file system. To retrieve all files and their hierarchical structures, we'll utilize a recursive query.
-- Create the Files table
CREATE TABLE Files (
    FileID INT,
    ParentID INT,
    FileName VARCHAR(100)
);


-- Insert sample data
INSERT INTO Files (FileID, ParentID, FileName)
VALUES (1, NULL, 'Root1'),
       (2, NULL, 'Root2'),
       (3, 1, 'Folder1'),
       (4, 2, 'Folder1'),
       (5, 3, 'Subfolder1'),
       (6, 4, 'Subfolder1'),
       (7, 4, 'Subfolder2'),
       (8,5, 'Subfolder1_1'),
       (9,6, 'File1'),
       (10,6, 'File2');


-- Perform the recursive query
WITH RECURSIVE FileStructure (FileID, ParentID, FileName, Level) AS (
    -- Anchor member: Retrieve root level files
    SELECT FileID, ParentID, FileName, 0
    FROM Files
    WHERE ParentID IS NULL

    UNION ALL

    -- Recursive member: Retrieve nested files
    SELECT f.FileID, f.ParentID, f.FileName, fs.Level + 1
    FROM Files f
    JOIN FileStructure fs ON f.ParentID = fs.FileID
)
SELECT FileID, ParentID, FileName, Level
FROM FileStructure;

Output
File Structure Hierarchy Output

The recursive query continues to iterate until the termination criteria are satisfied, generating new rows or records in each iteration based on the previous iteration's results. The result set of a recursive query includes all the rows or records generated during the recursion.

Recursive queries are typically used to work with hierarchical data structures, such as organizational charts, file systems, or product categories. They allow you to navigate and analyze the nested relationships within these structures without the need for complex procedural code or multiple iterations.

Recursive queries are supported by several database systems, including PostgreSQL and MySQL 8.0 and later (via WITH RECURSIVE, defined as a Common Table Expression or CTE) and Microsoft SQL Server (where a plain WITH clause defines a recursive CTE).

Advantages of Recursive Queries

  • Handling Hierarchical Data: Recursive queries provide a straightforward and efficient way to work with hierarchical data structures, such as organizational charts, file systems, or product categories. They allow you to retrieve and navigate the nested relationships in a concise manner.
  • Flexibility and Adaptability: Recursive queries are adaptable to various levels of depth within a hierarchical structure. They can handle any level of nesting, making them suitable for scenarios where the depth of the hierarchy may vary.
  • Code Reusability: Once you have defined a recursive query, it can be easily reused for different hierarchical structures within the same table, saving development time and effort.
  • Simplified Query Logic: Recursive queries eliminate the need for complex procedural code or multiple iterations to traverse hierarchical relationships. With a single query, you can retrieve the entire hierarchy or specific levels of interest.
  • Improved Performance: Recursive queries are optimized by the database engine, allowing for efficient traversal of self-referential relationships. The engine handles the iterative process internally, leading to better performance compared to manual traversal techniques.

Disadvantages of Recursive Queries

  • Performance Impact on Large Hierarchies: While recursive queries offer performance benefits, they can become slower when dealing with large hierarchies or deeply nested structures. The performance impact increases as the level of recursion, and the number of records involved in the recursion grow.
  • Limited Portability: Recursive queries may not be supported or may have varying syntax across different database systems. This can limit the portability of your SQL code when migrating to a different database platform.
  • Complexity in Maintenance: Recursive queries can be complex to understand and maintain, especially for developers who are not familiar with recursive programming concepts. Code readability and documentation become crucial to ensure clarity and ease of maintenance.
  • Recursive Depth Limitations: Some database systems impose limitations on the maximum recursion depth allowed for recursive queries. This can restrict the usage of recursive queries in scenarios with extremely deep hierarchies.
  • Potential for Infinite Loops: Incorrectly constructed recursive queries can lead to infinite loops, causing the query execution to hang or consume excessive system resources. It is essential to carefully design and test recursive queries to avoid this issue.
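To guard against the infinite loops mentioned in the last point when the data itself contains a cycle, a common pattern is to carry the visited path along and stop when a node repeats. A sketch against the Employees table above, written in SQL Server syntax (plain WITH, + for string concatenation):

WITH EmployeeHierarchy (EmployeeID, ManagerID, PathSoFar) AS (
    -- Anchor member: root employees, seeding the path
    SELECT EmployeeID, ManagerID,
           CAST('/' + CAST(EmployeeID AS VARCHAR(10)) + '/' AS VARCHAR(4000))
    FROM Employees
    WHERE ManagerID IS NULL

    UNION ALL

    -- Recursive member: extend the path, skipping nodes already visited
    SELECT e.EmployeeID, e.ManagerID,
           CAST(eh.PathSoFar + CAST(e.EmployeeID AS VARCHAR(10)) + '/' AS VARCHAR(4000))
    FROM Employees e
    JOIN EmployeeHierarchy eh ON e.ManagerID = eh.EmployeeID
    WHERE eh.PathSoFar NOT LIKE '%/' + CAST(e.EmployeeID AS VARCHAR(10)) + '/%'
)
SELECT EmployeeID, ManagerID, PathSoFar
FROM EmployeeHierarchy;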

HostForLIFE.eu SQL Server 2019 Hosting
HostForLIFE.eu is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.

 



SQL Server Hosting - HostForLIFE.eu :: SQL Server Useful Queries

clock July 11, 2023 08:34 by author Peter

SQL Server is a widespread database administration system utilized by organizations of all sizes. In order to effectively manage and maintain a database as a database administrator or developer, it is necessary to have a thorough understanding of SQL. This post will cover some useful SQL Server queries that will assist you in performing a variety of duties.
1. List all databases on a SQL Server instance

To view an inventory of all databases on a server, use the query below.

SELECT name FROM sys.databases

This will return a list of all databases on the server, including system databases such as 'master', 'model', and 'tempdb'.
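To exclude the system databases, you can filter on database_id, since the four system databases always occupy IDs 1 through 4:

SELECT name FROM sys.databases WHERE database_id > 4;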

2. Viewing the schema of a SQL table
To see the structure of a table, including column names and data types, you can use the following query:
EXEC sp_help 'table_name'

This will return a list of all columns in the table, along with information such as the data type, length, and whether or not the column is nullable.
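If you prefer a single result set that you can filter or join, the INFORMATION_SCHEMA.COLUMNS view exposes the same column metadata:

SELECT COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH, IS_NULLABLE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'table_name'
ORDER BY ORDINAL_POSITION;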

3. Checking the size of a SQL Server database
To see the size of a database, including the amount of used and unused space, you can use the following query:

EXEC sp_spaceused

This will return the database size and unallocated space, along with the reserved space and the data, index, and unused space totals.

4. Retrieving the current user
To see the current user that is connected to the database, you can use the following query:

SELECT SUSER_NAME()

 

This can be useful for auditing purposes, or for determining which user is making changes to the database.
5. Viewing the current date and time

To see the current date and time on the server, you can use the following query:
SELECT GETDATE()

This can be useful for storing timestamps in your database, or for checking the current time on the server.

6. Finding the Total Space of the tables in a database
To see the total space of all the tables in a database, you can use the following query:
SELECT t.NAME
       AS
       TableName,
       s.NAME
       AS SchemaName,
       p.rows,
       Sum(a.total_pages) * 8
       AS TotalSpaceKB,
       Cast(Round(( ( Sum(a.total_pages) * 8 ) / 1024.00 ), 2) AS NUMERIC(36, 2)
       ) AS
       TotalSpaceMB,
       Sum(a.used_pages) * 8
       AS UsedSpaceKB,
       Cast(Round(( ( Sum(a.used_pages) * 8 ) / 1024.00 ), 2) AS NUMERIC(36, 2))
       AS
       UsedSpaceMB,
       ( Sum(a.total_pages) - Sum(a.used_pages) ) * 8
       AS UnusedSpaceKB,
       Cast(Round(( ( Sum(a.total_pages) - Sum(a.used_pages) ) * 8 ) / 1024.00,
            2) AS
            NUMERIC(36, 2))
       AS UnusedSpaceMB
FROM   sys.tables t
       INNER JOIN sys.indexes i
               ON t.object_id = i.object_id
       INNER JOIN sys.partitions p
               ON i.object_id = p.object_id
                  AND i.index_id = p.index_id
       INNER JOIN sys.allocation_units a
               ON p.partition_id = a.container_id
       LEFT OUTER JOIN sys.schemas s
                    ON t.schema_id = s.schema_id
WHERE  t.NAME NOT LIKE 'dt%'
       AND t.is_ms_shipped = 0
       AND i.object_id > 255
GROUP  BY t.NAME,
          s.NAME,
          p.rows
ORDER  BY totalspacemb DESC,
          t.NAME

This can be useful for identifying tables that may be consuming a large amount of space, and determining if any optimization is necessary.

7. Connect two Database with Different Servers in SQL Server
To connect two databases on different servers in a SQL Server query, you can use a linked server. A linked server allows you to connect to another instance of SQL Server and execute queries against it. Register the linked server first, then map a login for it:
EXEC sp_addlinkedserver @server = 'Servername';
EXEC sp_addlinkedsrvlogin 'Servername', 'false', NULL, 'userid', 'password';

With the linked server and login in place, the two databases can be queried together.

8. Execute the query with the connected server database
To query a table in a database on the linked server, use four-part naming:
SELECT * FROM [Servername].[Databasename].[dbo].[tablename]

This lets you read from another server's database as if it were local.
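Four-part names run the query locally; to push the work to the remote server instead, OPENQUERY sends the query text there and returns only the result:

SELECT *
FROM OPENQUERY([Servername], 'SELECT * FROM [Databasename].[dbo].[tablename]');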

9. Disconnect two databases on different servers in SQL Server
To remove a linked server in SQL Server, you can use the sp_dropserver system stored procedure. Here's the syntax:
EXEC sp_dropserver @server = 'Servername', @droplogins = 'droplogins';

This removes the linked server and, with the @droplogins option, its login mappings.


10. Top 20 Costliest Stored Procedures - High CPU
To find the stored procedures consuming the most CPU, you can use the following query:
SELECT TOP (20)
    p.name AS [SP Name],
    qs.total_worker_time AS [TotalWorkerTime],
    qs.total_worker_time/qs.execution_count AS [AvgWorkerTime],
    qs.execution_count,
    ISNULL(qs.execution_count/DATEDIFF(Second, qs.cached_time, GETDATE()), 0) AS [Calls/Second],
    qs.total_elapsed_time,
    qs.total_elapsed_time/qs.execution_count AS [avg_elapsed_time],
    qs.cached_time
FROM    sys.procedures AS p WITH (NOLOCK)
INNER JOIN sys.dm_exec_procedure_stats AS qs WITH (NOLOCK) ON p.[object_id] = qs.[object_id]
WHERE qs.database_id = DB_ID()
ORDER BY qs.total_worker_time DESC OPTION (RECOMPILE);


Output
SP Name: Stored Procedure Name

TotalWorkerTime: Total Worker Time since the last compile time

AvgWorkerTime: Average Worker Time since last compile time

execution_count: Total number of execution since last compile time

Calls/Second: Number of calls/executions per second

total_elapsed_time: total elapsed time

avg_elapsed_time: Average elapsed time

cached_time: Procedure Cached time

11. How to identify DUPLICATE indexes in SQL Server
To find duplicate (overlapping) indexes, you can use the following query:
;WITH myduplicate
     AS (SELECT Sch.[name]                                                 AS
                SchemaName
                ,
                Obj.[name]
                AS TableName,
                Idx.[name]                                                 AS
                IndexName,
                Index_col(Sch.[name] + '.' + Obj.[name], Idx.index_id, 1)  AS
                Col1,
                Index_col(Sch.[name] + '.' + Obj.[name], Idx.index_id, 2)  AS
                Col2,
                Index_col(Sch.[name] + '.' + Obj.[name], Idx.index_id, 3)  AS
                Col3,
                Index_col(Sch.[name] + '.' + Obj.[name], Idx.index_id, 4)  AS
                Col4,
                Index_col(Sch.[name] + '.' + Obj.[name], Idx.index_id, 5)  AS
                Col5,
                Index_col(Sch.[name] + '.' + Obj.[name], Idx.index_id, 6)  AS
                Col6,
                Index_col(Sch.[name] + '.' + Obj.[name], Idx.index_id, 7)  AS
                Col7,
                Index_col(Sch.[name] + '.' + Obj.[name], Idx.index_id, 8)  AS
                Col8,
                Index_col(Sch.[name] + '.' + Obj.[name], Idx.index_id, 9)  AS
                Col9,
                Index_col(Sch.[name] + '.' + Obj.[name], Idx.index_id, 10) AS
                Col10,
                Index_col(Sch.[name] + '.' + Obj.[name], Idx.index_id, 11) AS
                Col11,
                Index_col(Sch.[name] + '.' + Obj.[name], Idx.index_id, 12) AS
                Col12,
                Index_col(Sch.[name] + '.' + Obj.[name], Idx.index_id, 13) AS
                Col13,
                Index_col(Sch.[name] + '.' + Obj.[name], Idx.index_id, 14) AS
                Col14,
                Index_col(Sch.[name] + '.' + Obj.[name], Idx.index_id, 15) AS
                Col15,
                Index_col(Sch.[name] + '.' + Obj.[name], Idx.index_id, 16) AS
                Col16
         FROM   sys.indexes Idx
                INNER JOIN sys.objects Obj
                        ON Idx.[object_id] = Obj.[object_id]
                INNER JOIN sys.schemas Sch
                        ON Sch.[schema_id] = Obj.[schema_id]
         WHERE  index_id > 0)
SELECT MD1.schemaname,
       MD1.tablename,
       MD1.indexname,
       MD2.indexname AS OverLappingIndex,
       MD1.col1,
       MD1.col2,
       MD1.col3,
       MD1.col4,
       MD1.col5,
       MD1.col6,
       MD1.col7,
       MD1.col8,
       MD1.col9,
       MD1.col10,
       MD1.col11,
       MD1.col12,
       MD1.col13,
       MD1.col14,
       MD1.col15,
       MD1.col16
FROM   myduplicate MD1
       INNER JOIN myduplicate MD2
               ON MD1.tablename = MD2.tablename
                  AND MD1.indexname <> MD2.indexname
                  AND MD1.col1 = MD2.col1
                  AND ( MD1.col2 IS NULL
                         OR MD2.col2 IS NULL
                         OR MD1.col2 = MD2.col2 )
                  AND ( MD1.col3 IS NULL
                         OR MD2.col3 IS NULL
                         OR MD1.col3 = MD2.col3 )
                  AND ( MD1.col4 IS NULL
                         OR MD2.col4 IS NULL
                         OR MD1.col4 = MD2.col4 )
                  AND ( MD1.col5 IS NULL
                         OR MD2.col5 IS NULL
                         OR MD1.col5 = MD2.col5 )
                  AND ( MD1.col6 IS NULL
                         OR MD2.col6 IS NULL
                         OR MD1.col6 = MD2.col6 )
                  AND ( MD1.col7 IS NULL
                         OR MD2.col7 IS NULL
                         OR MD1.col7 = MD2.col7 )
                  AND ( MD1.col8 IS NULL
                         OR MD2.col8 IS NULL
                         OR MD1.col8 = MD2.col8 )
                  AND ( MD1.col9 IS NULL
                         OR MD2.col9 IS NULL
                         OR MD1.col9 = MD2.col9 )
                  AND ( MD1.col10 IS NULL
                         OR MD2.col10 IS NULL
                         OR MD1.col10 = MD2.col10 )
                  AND ( MD1.col11 IS NULL
                         OR MD2.col11 IS NULL
                         OR MD1.col11 = MD2.col11 )
                  AND ( MD1.col12 IS NULL
                         OR MD2.col12 IS NULL
                         OR MD1.col12 = MD2.col12 )
                  AND ( MD1.col13 IS NULL
                         OR MD2.col13 IS NULL
                         OR MD1.col13 = MD2.col13 )
                  AND ( MD1.col14 IS NULL
                         OR MD2.col14 IS NULL
                         OR MD1.col14 = MD2.col14 )
                  AND ( MD1.col15 IS NULL
                         OR MD2.col15 IS NULL
                         OR MD1.col15 = MD2.col15 )
                  AND ( MD1.col16 IS NULL
                         OR MD2.col16 IS NULL
                         OR MD1.col16 = MD2.col16 )
ORDER  BY MD1.schemaname,
          MD1.tablename,
          MD1.indexname


This finds duplicate (overlapping) indexes, so you can review and remove the redundant ones.

In this post, we covered some useful queries for working with Microsoft SQL Server. These queries can help you perform tasks such as listing all databases on a server, viewing the schema of a table, checking the size of a database, seeing the current user and the date and time, linking to another server's database, finding the costliest stored procedures, and identifying duplicate indexes.

I hope these queries are useful for you!

HostForLIFE.eu SQL Server 2019 Hosting
HostForLIFE.eu is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.



SQL Server Hosting - HostForLIFE.eu :: Compare And Identify Data Differences Between Two SQL Server Tables

clock July 7, 2023 07:58 by author Peter

Frequently, systems utilize distributed databases containing distributed tables. Multiple mechanisms facilitate distribution, including replication. In this situation, it is essential to continuously maintain the synchronization of a specific data segment. Additionally, it is necessary to verify the synchronization itself. This is when it becomes necessary to compare data in two tables.

Before comparing data in two tables, you must ensure that their schemas are either identical or acceptably distinct. Acceptably distinct refers to a difference in the definition of two tables that still enables correct data comparison; for example, the types of corresponding columns must map without data loss.

Compare the SQL Server schemas of the two Employee tables from the JobEmpl and JobEmplDB databases.
For further work, it is necessary to review the Employee table definitions in the JobEmpl and JobEmplDB databases:
USE [JobEmpl]
GO

SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

CREATE TABLE [dbo].[Employee](
    [EmployeeID] [int] IDENTITY(1,1) NOT NULL,
    [FirstName] [nvarchar](255) NOT NULL,
    [LastName] [nvarchar](255) NOT NULL,
    [Address] [nvarchar](max) NULL,
    [CheckSumVal]  AS (checksum((coalesce(CONVERT([nvarchar](max),[FirstName]),N'')+coalesce(CONVERT([nvarchar](max),[LastName]),N''))+coalesce(CONVERT([nvarchar](max),[Address]),N''))),
    [REPL_GUID] [uniqueidentifier] ROWGUIDCOL  NOT NULL,
    CONSTRAINT [PK_Employee_EmployeeID] PRIMARY KEY CLUSTERED
    (
        [EmployeeID] ASC
    )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 80) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO

ALTER TABLE [dbo].[Employee] ADD CONSTRAINT [Employee_DEF_REPL_GUID]  DEFAULT (newsequentialid()) FOR [REPL_GUID]
GO

-- and

USE [JobEmplDB]
GO

SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

CREATE TABLE [dbo].[Employee](
    [EmployeeID] [int] IDENTITY(1,1) NOT NULL,
    [FirstName] [nvarchar](255) NOT NULL,
    [LastName] [nvarchar](255) NOT NULL,
    [Address] [nvarchar](max) NULL,
    CONSTRAINT [PK_Employee_EmployeeID] PRIMARY KEY CLUSTERED
    (
        [EmployeeID] ASC
    )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO

Comparing Database Schemas using SQL Server Data Tools
With the help of Visual Studio and SSDT, you can compare database schemas. To do this, you need to create a new project “JobEmployee” by doing the following:

Then you need to import the database.
To do this, right-click the project and in the context menu, select Import \ Database..:

Next, hit the “Select connection…” button and in the cascading menu, in the “Browse” tab set up the connection to JobEmpl database as follows:

Next, click the “Start” button to start the import of the JobEmpl database:

You will then see a window showing the progress of the database import:

When the database import process is completed, press “Finish”:

 

 

Once it is finished, the JobEmployee project will contain directories, subdirectories, and database object definitions in the following form:

In the same way, we create a similar JobEmployeeDB project and import JobEmplDB database into it:


 

Now, right-click the JobEmployee project and in the drop-down menu, select “Schema Compare”:

This will bring up the database schema compare window.
In the window, you need to select the projects as source and target, and then click the “Compare” button to start the comparison process:

We can see here that despite the differences between the definitions of the Employee tables in two databases, the table columns that we need for comparison are identical in data type. This means that the difference in the schemas of the Employee tables is acceptable. That is, we can compare the data in these two tables.
We can also use other tools to compare database schemas such as dbForge Schema Compare for SQL Server.

Comparing database schemas with the help of dbForge Schema Compare
Now, to compare database table schemas, we use a tool dbForge Schema Compare for SQL Server, which is also included in SQL Tools.
For this, in SSMS, right-click the first database and in the drop-down menu, select Schema Compare\ Set as Source:

We simply transfer JobEmplDB, the second database, to Target area and click the green arrow between source and target:

You simply need to press the “Next” button in the opened database schema comparison project:

Leave the following settings at their defaults and click the “Next” button:


In the “Schema Mapping” tab, we also leave everything by default and press the “Next” button:

On the “Table Mapping” tab, select the required Employee table and on the right of the table name, click the ellipsis:


The table mapping window opens up:

In our case, only 4 fields are mapped, because the last two fields exist only in the JobEmpl database and are absent from the JobEmplDB database.
This setting is useful when column names in the source table and target table do not match.
The “Column details” table displays the column definition details in two tables: on the left – from the source database and on the right – from the target database.
Now hit the “OK” button


Now, to start the database schema comparison process, click the “Compare” button:

A progress bar will appear

We then select the desired Employee table.

At the bottom left, you can see the code for defining the source database table and on the right – the target database table.
We can see here, as before, that the definitions of the Employee table in the two databases JobEmpl and JobEmplDB are acceptably distinct, which is why we can compare the data in these two tables.

Let us now move on to the comparison of the data in two tables itself.

Comparing database data using SSIS

Let’s first make a comparison using SSIS. For this, you need to have SSDT installed.
We create an Integration Services project in Visual Studio and name it IntegrationServicesProject


We then create three connections:

    To the source JobEmpl database
    To the target JobEmplDB database
    To the JobEmplDiff database, where the table of differences will be stored, as shown below:

That way, new connections will be displayed in the project.

Then, in the project, in the “Control Flow” tab, we create a data flow task and name it “data flow task”

Let us now switch to the data flow and create an element “Source OLE DB” by doing the following

On the “Columns” tab, we then select the fields required for comparison
And now, right-click the created data source and in the drop-down menu, select “Show Advanced Editor…”

Next, for each of the “Output Columns” groups, set the SortKeyPosition property of the EmployeeID column to 1. That is, we sort by the EmployeeID field value in ascending order.

Similarly, let us create and set the data source to the JobEmplDB database.
That way, we obtain two created sources in the data flow task

Now, we create a merge join element in the following way:


Please note that we merge tables using a full outer join.
We then connect our sources to the created join element by merging “Merge Join”

We make the connection from JobEmpl on the left and the connection from JobEmplDB on the right.
In fact, the order is not that important; it could be done the other way around.
In the JobEmplDiff database, we create a differences table called EmployeeDiff, where we are going to put the data differences:

USE [JobEmplDiff]
GO

SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

CREATE TABLE [dbo].[EmployeeDiff](
    [ID] [int] IDENTITY(1,1) NOT NULL,
    [EmployeeID] [int] NULL,
    [EmployeeID_2] [int] NULL,
    [FirstName] [nvarchar](255) NULL,
    [FirstName_2] [nvarchar](255) NULL,
    [LastName] [nvarchar](255) NULL,
    [LastName_2] [nvarchar](255) NULL,
    [Address] [nvarchar](max) NULL,
    [Address_2] [nvarchar](max) NULL,
    CONSTRAINT [PK_EmployeeDiff_1] PRIMARY KEY CLUSTERED
    (
        [ID] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 80) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO


Now, let us get back to our project and in the data flow task, we create a conditional split element

In the Conditional field for NotMatch, you need to type the following expression:

(
  ISNULL(EmployeeID)
  || ISNULL(EmployeeID_2)
)
|| (
  REPLACENULL(FirstName, "") != REPLACENULL(FirstName_2, "")
)
|| (
  REPLACENULL(LastName, "") != REPLACENULL(LastName_2, "")
)
|| (
  (
    Address != Address_2
    && (!ISNULL(Address))
    && (!ISNULL(Address_2))
  )
  || ISNULL(Address) != ISNULL(Address_2)
)

This expression is true when the fields do not match (taking NULL values into account) for the same EmployeeID value, and also when an EmployeeID from one table has no matching row in the other table.
You can obtain a similar result in the form of selection using the following T-SQL query:

SELECT
    e1.[EmployeeID] AS [EmployeeID],
    e2.[EmployeeID] AS [EmployeeID_2],
    e1.[FirstName] AS [FirstName],
    e2.[FirstName] AS [FirstName_2],
    e1.[LastName] AS [LastName],
    e2.[LastName] AS [LastName_2],
    e1.[Address] AS [Address],
    e2.[Address] AS [Address_2]
FROM
    [JobEmpl].[dbo].[Employee] AS e1
    FULL OUTER JOIN [JobEmplDB].[dbo].[Employee] AS e2 ON e1.[EmployeeID] = e2.[EmployeeID]
WHERE
    (e1.[EmployeeID] IS NULL)
    OR (e2.[EmployeeID] IS NULL)
    OR (COALESCE(e1.[FirstName], N'') <> COALESCE(e2.[FirstName], N''))
    OR (COALESCE(e1.[LastName], N'') <> COALESCE(e2.[LastName], N''))
    OR (COALESCE(e1.[Address], N'') <> COALESCE(e2.[Address], N''));
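If SSIS is not required, a lighter-weight variant of the same comparison can be written with EXCEPT, which treats NULLs as equal when matching rows. A sketch run in both directions:

-- Rows that exist (or differ) in JobEmpl but not in JobEmplDB
SELECT [EmployeeID], [FirstName], [LastName], [Address]
FROM [JobEmpl].[dbo].[Employee]
EXCEPT
SELECT [EmployeeID], [FirstName], [LastName], [Address]
FROM [JobEmplDB].[dbo].[Employee]

UNION ALL

-- Rows that exist (or differ) in JobEmplDB but not in JobEmpl
(
SELECT [EmployeeID], [FirstName], [LastName], [Address]
FROM [JobEmplDB].[dbo].[Employee]
EXCEPT
SELECT [EmployeeID], [FirstName], [LastName], [Address]
FROM [JobEmpl].[dbo].[Employee]
);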


Now, let us connect the elements “Merge Join” and “Conditional Split”

Next, we create an OLE DB destination element.

Now, we map the columns.

We leave the “Error Output” tab at its default settings.

We can now join “Conditional Split” and “OLE DB JobEmplDiff” elements. As a result, we get a complete data flow.

Let us run the package that we have obtained.

Upon successful completion of the package work, all its elements turn into green circles.

If an error occurs, it is displayed in the form of a red circle instead of a green one. To resolve any issues, you need to read the log files.
To analyze the data difference, we need to derive the necessary data from the EmployeeDiff table of the JobEmplDiff database:

SELECT
    [ID],
    [EmployeeID],
    [EmployeeID_2],
    [FirstName],
    [FirstName_2],
    [LastName],
    [LastName_2],
    [Address],
    [Address_2]
FROM
    [JobEmplDiff].[dbo].[EmployeeDiff]

Here, you can see rows from the JobEmpl Employee table where Address is not set and where FirstName and LastName are swapped in some rows, as well as a number of rows that exist in JobEmpl but are missing from JobEmplDB.

HostForLIFE SQL Server 2019 Hosting
HostForLIFE is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.



SQL Server Hosting - HostForLIFE.eu :: Magic Tables in SQL Server

clock June 23, 2023 09:47 by author Peter

Magic tables are logical temporary tables created internally by SQL Server to hold recently inserted, deleted, and updated data. They are created during DML trigger execution. If you want to know more about DML triggers, you may refer to my previous article on DML Triggers.

Two magic tables are created at the time of insert/update/delete in SQL Server.
    The INSERTED magic table
    The DELETED magic table
There is no separate UPDATED magic table: during an update, the DELETED table holds the old values and the INSERTED table holds the new ones.

Magic tables are stored in tempdb as temporary internal tables, and we can see them with the help of triggers. We can retrieve the impacted records using these magic tables.

Let’s see how this works with the use of a trigger.
    When we perform the insert operation, the inserted magic table will have a recently inserted record showing on top of the table.
    When we perform the delete operation, the deleted magic table will have a recently deleted record showing on top of the table.
    When we perform the update operation, the inserted magic table will show the new values for the updated record, and the deleted magic table the old ones.

Let’s consider the below table to see how this work.
SELECT * FROM StudentsReport;

Inserted Magic Table
Let’s create a trigger on the StudentsReport table so that, when values are inserted, we can see the virtual or temp table (magic table) holding the recently inserted records.

CREATE TRIGGER  TR_StudentsReport_InsertedMagic ON StudentsReport
FOR INSERT
AS
BEGIN
    SELECT * FROM INSERTED
END

Now when we insert a record into the StudentsReport table, the inserted magic table will be populated with the recently inserted record.

Now execute the below queries together.
INSERT INTO StudentsReport VALUES (6, 'Peter', 'English', 90);
SELECT * FROM StudentsReport;

We can see that while inserting a record into the StudentsReport table, the recently inserted record appears in a temp table: the INSERTED magic table.
Deleted Magic Table

Now let’s create a trigger on the StudentsReport table so that, when values are deleted, we can see the magic table holding the recently deleted records.

CREATE TRIGGER  TR_StudentsReport_DeletedMagic ON StudentsReport
FOR DELETE
AS
BEGIN
    SELECT * FROM Deleted
END

We can see that while deleting a record from the StudentsReport table, the recently deleted record appears in a temp table: the DELETED magic table.

Updated Magic Table


Now, let’s create a trigger on the StudentsReport table so that, when values are updated, we can see the magic table holding the recently updated records.
CREATE TRIGGER  TR_StudentsReport_UpdatedMagic ON StudentsReport
FOR UPDATE
AS
BEGIN
    SELECT * FROM INSERTED
END

Now when we update records in the StudentsReport table, the inserted magic table will contain the new values for the updated rows (and the deleted magic table the old ones).
Now execute the below query together.
UPDATE StudentsReport SET Marks = 90 WHERE StudentId = 3;
SELECT * FROM StudentsReport;

We can see that while updating a record in the StudentsReport table, the recently updated record is shown in a temp table; for an update, that temp table is the INSERTED magic table holding the new values.




SQL Server Hosting - HostForLIFE :: SPARSE Column in SQL Server

clock January 9, 2023 06:58 by author Peter

In this article, we will learn about the SPARSE column in SQL Server. The SPARSE column is a useful feature of SQL Server that helps us reduce the space required for NULL values. Using SPARSE columns, we may save roughly 20 to 40 percent of space.

SPARSE Column in SQL Server
A SPARSE column is an ordinary column with optimized storage for NULL values. It reduces the space requirements for NULL values at the cost of more overhead when retrieving non-NULL values. In other words, a SPARSE column is better at managing NULL values in SQL Server: a NULL in a SPARSE column does not occupy any space in the database. Using SPARSE columns, we may save roughly 20 to 40 percent of space. We can define a column as a SPARSE column using the CREATE TABLE or ALTER TABLE statements.
CREATE TABLE TableName
(
      .....
      Col1 INT SPARSE,
      Col2 VARCHAR(100) SPARSE,
      Col3 DateTime SPARSE
      .....
)
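If the table already exists, a nullable column can also be marked as SPARSE (or reverted) with ALTER TABLE; a minimal sketch using the template's names:

ALTER TABLE TableName
ALTER COLUMN Col1 ADD SPARSE;   -- use DROP SPARSE to revert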


We may also add/change a column from the graphical view.

Example
In this example, I have created two tables with the same number of columns and the same data types, but in one table the columns are created as SPARSE columns. Each table contains 500+ rows.
CREATE TABLE TableName
(
      Col1 INT SPARSE,
      Col2 VARCHAR(100) SPARSE,
      Col3 DateTime SPARSE
)

CREATE TABLE TableName1
(
      Col1 INT ,
      Col2 VARCHAR(100) ,
      Col3 DateTime
)
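To reproduce the comparison, both tables need to be populated; here is a minimal sketch that fills each with 500 rows (the all-NULL values are an assumption, chosen to highlight the savings):

DECLARE @i INT = 0;
WHILE @i < 500
BEGIN
    INSERT INTO TableName (Col1, Col2, Col3) VALUES (NULL, NULL, NULL);
    INSERT INTO TableName1 (Col1, Col2, Col3) VALUES (NULL, NULL, NULL);
    SET @i += 1;
END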


Using the sp_spaceused stored procedure, we can determine the space occupied by the table data.
sp_spaceused 'TableName'
GO
sp_spaceused 'TableName1'

Advantages of a SPARSE column
    A SPARSE column saves database space when there are NULL values.
    INSERT, UPDATE, and DELETE statements can reference the SPARSE columns by name.
    Filtered indexes on SPARSE columns are especially effective, since they can cover only the non-NULL rows (see the example after this list).
    We can use SPARSE columns with change tracking and change data capture.
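For instance, a filtered index that covers only the rows where the sparse column actually has a value might look like this (the index name is an assumption):

CREATE INDEX IX_TableName_Col1
ON TableName (Col1)
WHERE Col1 IS NOT NULL;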

Limitations of a SPARSE column
    A SPARSE column must be nullable and cannot have the ROWGUIDCOL or IDENTITY properties.
    A SPARSE column cannot use data types such as text, ntext, image, timestamp, user-defined data types, geometry, or geography.
    It cannot have a default value and cannot be bound to a rule.
    A SPARSE column cannot be part of a clustered index or a unique primary key index, nor can it be the partition key of a clustered index or heap.
    Merge replication does not support SPARSE columns.
    The SPARSE property of a column is not preserved when the table is copied.




SQL Server Hosting - HostForLIFE :: Resizing Tempdb In SQL Server

clock January 8, 2021 08:57 by author Peter

Occasionally, we must resize or realign our tempdb log file (.ldf) or data files (.mdf or .ndf) after a growth event forces the file sizes out of whack. To resize, we have three options: restart the SQL Server service, add additional files, or shrink the current file. Most of us have faced runaway log files, and in an emergency restarting the SQL Server service may not be an option, yet we still need to shrink the log file before we run out of disk space. The process of shrinking that file can get tricky, so I created this flow chart to help you out if you ever get into this situation.
 
Now it's very important to note that many of these commands will clear your cache and will greatly impact your server performance while the cache warms back up. In addition, you should not shrink your database data or log files unless absolutely necessary; shrinking tempdb while there is concurrent activity can even result in a corrupt tempdb.
 
Let's walk through it and explain some things as we go along.
The first thing you must do is issue a CHECKPOINT. A checkpoint marks the log as a "good up to here" point of reference. It lets the SQL Server Database Engine know that, during recovery after an unexpected shutdown or crash, it only needs to start applying changes contained in the log from this point forward. Anything prior to the checkpoint is what I like to call "hardened": all the dirty pages in memory have been written to disk, specifically to the .mdf and .ndf files. So, it is important to make that mark in the log before you proceed. Now, we know tempdb is not recovered during a restart (it is recreated); however, this is still a requirement.
    USE TEMPDB;    
    GO    
    CHECKPOINT;  

Next, we try to shrink the log by issuing a DBCC SHRINKFILE command. This is the step that releases unallocated space from the database file, if any is available. You will note the Shrink? decision block in the diagram after this step: it is possible that there is no unallocated space, and you will need to move further along the path to free some up and try again.
    USE TEMPDB;    
    GO   
    DBCC SHRINKFILE (templog, 1000);   --Shrinks it to 1GB  
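Before and after each attempt, it helps to check how much free space each tempdb file actually has; a minimal sketch (the column aliases are my own):

    USE tempdb;
    GO
    SELECT name,
           size / 128.0 AS size_mb,
           size / 128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS INT) / 128.0 AS free_mb
    FROM sys.database_files;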

If the database shrinks, great, congratulations; however, some of us still might have work to do. Next up is to try to free up some of that allocated space by running DBCC DROPCLEANBUFFERS and DBCC FREEPROCCACHE.
DBCC DROPCLEANBUFFERS
 
Clears the clean buffers from the buffer pool and columnstore object pool. This will flush cached indexes and data pages.
    DBCC DROPCLEANBUFFERS WITH NO_INFOMSGS;  

DBCC FREEPROCCACHE
Clears the procedure cache, which you are probably familiar with as a performance tuning tool in development. It will clean out all your execution plans from cache, which may free up some space in tempdb. As we know, though, this will create a performance hit, since your execution plans now have to make it back into cache on their next execution and will not get the benefit of plan reuse. Now it's not really clear why this works, so I asked tempdb expert Pam Lahoud (B|T) for clarification as to why this has anything to do with tempdb. Both of us are diving into this to understand exactly why it works. I believe it to be related to tempdb using cached objects and memory objects associated with stored procedures, which can have latches and locks on them that need to be released by running this. Check back for further clarification; I'll be updating this as I find out more.
    DBCC FREEPROCCACHE WITH NO_INFOMSGS;  

Once these two commands have been run and you have attempted to free up some space, you can try the DBCC SHRINKFILE command again. For most, this should make the shrink possible, and you will be good to go. Unfortunately, a few more of us may have to take a couple more steps to get to that point.
 
The last two things I do, when I have no other choice to get my log file smaller, are to run the final two commands in the process. These should do the trick and get the log to shrink.
 
DBCC FREESESSIONCACHE
This command will flush the distributed query connection cache, that is, cached connections for queries that run between two or more linked servers.
    DBCC FREESESSIONCACHE WITH NO_INFOMSGS;  

DBCC FREESYSTEMCACHE
This command will release all unused cache entries from all cache stores, including the temp table cache. This covers any temp tables or table variables remaining in cache that need to be released.
    DBCC FREESYSTEMCACHE ('ALL');  
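After flushing these caches, attempt the shrink one final time (the 1000 MB target is the same assumption used earlier):

    USE tempdb;
    GO
    DBCC SHRINKFILE (templog, 1000);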

In my early days as a database administrator, I would have loved to have had this diagram. Having some quick steps during stressful situations, such as tempdb's log file filling up on me, would have been a huge help. So hopefully someone will find this handy and will be able to use it to take away a little of their stress.


 


