European Windows 2012 Hosting BLOG

BLOG about Windows 2012 Hosting and SQL 2012 Hosting - Dedicated to European Windows Hosting Customers

SQL Server Hosting - HostForLIFE :: Recursive Queries in SQL

July 18, 2023 10:55 by author Peter

What are Recursive queries in SQL?
Recursive queries in SQL are queries that involve self-referential relationships within a table. They allow you to perform operations that require iterative processing, enabling you to traverse and manipulate hierarchical data structures efficiently.

Syntax of Recursive Queries
WITH RECURSIVE cte_name (column1, column2, ...) AS (
    -- Anchor member
    SELECT column1, column2, ...
    FROM table_name
    WHERE condition

    UNION ALL

    -- Recursive member
    SELECT column1, column2, ...
    FROM table_name
    JOIN cte_name ON join_condition
    WHERE condition
)
SELECT column1, column2, ...
FROM cte_name;
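Note that in Microsoft SQL Server the RECURSIVE keyword is omitted: a recursive CTE there is written simply as WITH cte_name (...) AS (...). The examples below use the generic WITH RECURSIVE form, which runs as-is on PostgreSQL and MySQL 8.0+.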


Recursive queries consist of two main components,

1. Anchor Member
The anchor member establishes the base case or initial condition for the recursive query. It selects the initial set of rows or records that serve as the starting point for the recursion. The anchor member is a regular SELECT statement that defines the base case condition.

2. Recursive Member
The recursive member defines the relationship and iteration process in the recursive query. It specifies how to generate new rows or records by joining the result of the previous iteration with the underlying table. The recursive member includes a join condition that establishes the relationship between the previous iteration and the current iteration. It also includes termination criteria to stop the recursion when certain conditions are met.

Example 1. Hierarchical Data - Employee Hierarchy
Consider the "Employees" table, which has the columns "EmployeeID" and "ManagerID." The employees reporting to a certain manager will be retrieved via a recursive query that traverses the employee hierarchy.
-- Create the Employees table
CREATE TABLE Employees (
    EmployeeID INT,
    ManagerID INT
);

-- Insert sample data
INSERT INTO Employees (EmployeeID, ManagerID)
VALUES (1, NULL),
       (2, 1),
       (3, 1),
       (4, 2),
       (5, 2),
       (6, 3),
       (7, 6);

-- Perform the recursive query
WITH RECURSIVE EmployeeHierarchy (EmployeeID, ManagerID, Level) AS (
    -- Anchor member: Retrieve the root manager
    SELECT EmployeeID, ManagerID, 0
    FROM Employees
    WHERE ManagerID IS NULL

    UNION ALL

    -- Recursive member: Retrieve employees reporting to each manager
    SELECT e.EmployeeID, e.ManagerID, eh.Level + 1
    FROM Employees e
    JOIN EmployeeHierarchy eh ON e.ManagerID = eh.EmployeeID
)
SELECT EmployeeID, ManagerID, Level
FROM EmployeeHierarchy;

Output
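For the sample data above, the result set should look like this (row order may vary, since no ORDER BY is specified):

EmployeeID  ManagerID  Level
1           NULL       0
2           1          1
3           1          1
4           2          2
5           2          2
6           3          2
7           6          3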

Example 2. Hierarchical Data - File System Structure
Consider a table called "Files" that has the columns "FileID" and "ParentID," which describe the hierarchy of a file system. To retrieve all files and their hierarchical structures, we'll utilize a recursive query.
-- Create the Files table
CREATE TABLE Files (
    FileID INT,
    ParentID INT,
    FileName VARCHAR(100)
);


-- Insert sample data
INSERT INTO Files (FileID, ParentID, FileName)
VALUES (1, NULL, 'Root1'),
       (2, NULL, 'Root2'),
       (3, 1, 'Folder1'),
       (4, 2, 'Folder1'),
       (5, 3, 'Subfolder1'),
       (6, 4, 'Subfolder1'),
       (7, 4, 'Subfolder2'),
       (8,5, 'Subfolder1_1'),
       (9,6, 'File1'),
       (10,6, 'File2');


-- Perform the recursive query
WITH RECURSIVE FileStructure (FileID, ParentID, FileName, Level) AS (
    -- Anchor member: Retrieve root level files
    SELECT FileID, ParentID, FileName, 0
    FROM Files
    WHERE ParentID IS NULL

    UNION ALL

    -- Recursive member: Retrieve nested files
    SELECT f.FileID, f.ParentID, f.FileName, fs.Level + 1
    FROM Files f
    JOIN FileStructure fs ON f.ParentID = fs.FileID
)
SELECT FileID, ParentID, FileName, Level
FROM FileStructure;

Output
File Structure Hierarchy Output
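For the sample data above, the result set should contain the following rows (again, row order may vary without an ORDER BY):

FileID  ParentID  FileName      Level
1       NULL      Root1         0
2       NULL      Root2         0
3       1         Folder1       1
4       2         Folder1       1
5       3         Subfolder1    2
6       4         Subfolder1    2
7       4         Subfolder2    2
8       5         Subfolder1_1  3
9       6         File1         3
10      6         File2         3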

The recursive query continues to iterate until the termination criteria are satisfied, generating new rows or records in each iteration based on the previous iteration's results. The result set of a recursive query includes all the rows or records generated during the recursion.

Recursive queries are typically used to work with hierarchical data structures, such as organizational charts, file systems, or product categories. They allow you to navigate and analyze the nested relationships within these structures without the need for complex procedural code or multiple iterations.

Recursive queries are supported by several database systems, including PostgreSQL and MySQL 8.0+ (via the WITH RECURSIVE form of a Common Table Expression) and Microsoft SQL Server (where a recursive CTE is written with plain WITH; the RECURSIVE keyword is not used).

Advantages of Recursive Queries

  • Handling Hierarchical Data: Recursive queries provide a straightforward and efficient way to work with hierarchical data structures, such as organizational charts, file systems, or product categories. They allow you to retrieve and navigate the nested relationships in a concise manner.
  • Flexibility and Adaptability: Recursive queries are adaptable to various levels of depth within a hierarchical structure. They can handle any level of nesting, making them suitable for scenarios where the depth of the hierarchy may vary.
  • Code Reusability: Once you have defined a recursive query, it can be easily reused for different hierarchical structures within the same table, saving development time and effort.
  • Simplified Query Logic: Recursive queries eliminate the need for complex procedural code or multiple iterations to traverse hierarchical relationships. With a single query, you can retrieve the entire hierarchy or specific levels of interest.
  • Improved Performance: Recursive queries are optimized by the database engine, allowing for efficient traversal of self-referential relationships. The engine handles the iterative process internally, leading to better performance compared to manual traversal techniques.

Disadvantages of Recursive Queries

  • Performance Impact on Large Hierarchies: While recursive queries offer performance benefits, they can become slower when dealing with large hierarchies or deeply nested structures. The performance impact increases as the level of recursion, and the number of records involved in the recursion grow.
  • Limited Portability: Recursive queries may not be supported or may have varying syntax across different database systems. This can limit the portability of your SQL code when migrating to a different database platform.
  • Complexity in Maintenance: Recursive queries can be complex to understand and maintain, especially for developers who are not familiar with recursive programming concepts. Code readability and documentation become crucial to ensure clarity and ease of maintenance.
  • Recursive Depth Limitations: Some database systems impose limitations on the maximum recursion depth allowed for recursive queries. This can restrict the usage of recursive queries in scenarios with extremely deep hierarchies.
  • Potential for Infinite Loops: Incorrectly constructed recursive queries can lead to infinite loops, causing the query execution to hang or consume excessive system resources. It is essential to carefully design and test recursive queries to avoid this issue; one guard available in SQL Server is shown in the sketch after this list.
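As a minimal sketch of such a guard, SQL Server lets you cap the recursion depth with the MAXRECURSION query hint (the default cap is 100; 0 means unlimited). The RECURSIVE keyword is omitted in SQL Server syntax; this reuses the Employees table from Example 1:

WITH EmployeeHierarchy (EmployeeID, ManagerID, Level) AS (
    SELECT EmployeeID, ManagerID, 0
    FROM Employees
    WHERE ManagerID IS NULL

    UNION ALL

    SELECT e.EmployeeID, e.ManagerID, eh.Level + 1
    FROM Employees e
    JOIN EmployeeHierarchy eh ON e.ManagerID = eh.EmployeeID
)
SELECT EmployeeID, ManagerID, Level
FROM EmployeeHierarchy
OPTION (MAXRECURSION 100);  -- the query is aborted with an error if 100 recursions are exceeded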

HostForLIFE.eu SQL Server 2019 Hosting
HostForLIFE.eu is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.

 



SQL Server Hosting - HostForLIFE.eu :: SQL Server Useful Queries

July 11, 2023 08:34 by author Peter

SQL Server is a widely used database management system found in organizations of all sizes. To manage and maintain a database effectively as a database administrator or developer, you need a thorough command of SQL. This post covers some useful SQL Server queries that will help you perform a variety of tasks.
1. List all databases on a SQL Server instance

To view an inventory of all databases on a server, use the query below.

SELECT name FROM sys.databases

This will return a list of all databases on the server, including system databases such as 'master', 'model', and 'tempdb'.
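If you want to exclude the system databases, one common variant (relying on the fact that the system databases occupy database IDs 1 through 4) is:

SELECT name FROM sys.databases WHERE database_id > 4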

2. Viewing the schema of a SQL table
To see the structure of a table, including column names and data types, you can use the following query:
EXEC sp_help 'table_name'

This will return a list of all columns in the table, along with information such as the data type, length, and whether or not the column is nullable.
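As an alternative sketch, if you prefer a plain query over the stored procedure, much of the same information is available from the INFORMATION_SCHEMA.COLUMNS view:

SELECT COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH, IS_NULLABLE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'table_name'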

3. Checking the size of a SQL Server database
To see the size of a database, including the amount of used and unused space, you can use the following query:

EXEC sp_spaceused

This will return the database size and the amount of unallocated space, along with totals for reserved, data, index, and unused space.

4. Retrieving the current user
To see the current user that is connected to the database, you can use the following query:

SELECT SUSER_NAME()

 

This can be useful for auditing purposes, or for determining which user is making changes to the database.
5. Viewing the current date and time

To see the current date and time on the server, you can use the following query:
SELECT GETDATE()

This can be useful for storing timestamps in your database, or for checking the current time on the server.
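A couple of related functions, in case you need them:

SELECT GETUTCDATE()   -- current UTC date and time
SELECT SYSDATETIME()  -- higher-precision datetime2 value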

6. Finding the Total Space of the tables in a database
To see the total space of all the tables in a database, you can use the following query:
SELECT t.name AS TableName,
       s.name AS SchemaName,
       p.rows,
       SUM(a.total_pages) * 8 AS TotalSpaceKB,
       CAST(ROUND((SUM(a.total_pages) * 8) / 1024.00, 2) AS NUMERIC(36, 2)) AS TotalSpaceMB,
       SUM(a.used_pages) * 8 AS UsedSpaceKB,
       CAST(ROUND((SUM(a.used_pages) * 8) / 1024.00, 2) AS NUMERIC(36, 2)) AS UsedSpaceMB,
       (SUM(a.total_pages) - SUM(a.used_pages)) * 8 AS UnusedSpaceKB,
       CAST(ROUND(((SUM(a.total_pages) - SUM(a.used_pages)) * 8) / 1024.00, 2) AS NUMERIC(36, 2)) AS UnusedSpaceMB
FROM   sys.tables t
       INNER JOIN sys.indexes i
               ON t.object_id = i.object_id
       INNER JOIN sys.partitions p
               ON i.object_id = p.object_id
                  AND i.index_id = p.index_id
       INNER JOIN sys.allocation_units a
               ON p.partition_id = a.container_id
       LEFT OUTER JOIN sys.schemas s
                    ON t.schema_id = s.schema_id
WHERE  t.name NOT LIKE 'dt%'
       AND t.is_ms_shipped = 0
       AND i.object_id > 255
GROUP  BY t.name,
          s.name,
          p.rows
ORDER  BY TotalSpaceMB DESC,
          t.name

This can be useful for identifying tables that may be consuming a large amount of space, and determining if any optimization is necessary.

7. Connect two Databases on Different Servers in SQL Server
To query across two databases that live on different servers, you can use a linked server. A linked server allows you to connect to another instance of SQL Server and execute queries against it. Creating one is a two-step process: first register the remote server, then map the login credentials:
EXEC sp_addlinkedserver @server = 'Servername', @srvproduct = N'SQL Server';
EXEC sp_addlinkedsrvlogin 'Servername', 'false', NULL, 'userid', 'password';

After these two calls, queries on the local server can reference the remote server by name.

8. Execute a query against the connected server's database
To run a query that uses a database on another server from your current server, reference the remote table by its four-part name:
select  *  from [Servername].[Databasename].[dbo].[tablename]

This lets a query on one server read data from a database on another server.
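An alternative sketch is OPENQUERY, which sends the whole statement to the linked server to be executed there (often faster when the remote side can do the filtering):

SELECT * FROM OPENQUERY([Servername], 'SELECT * FROM Databasename.dbo.tablename')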

9. Disconnect two Databases on Different Servers in SQL Server
To drop a linked server in SQL Server, you can use the sp_dropserver system stored procedure. Here's the syntax:
EXEC sp_dropserver @server = 'Servername', @droplogins = 'droplogins';

This removes the linked server (and, with @droplogins, its login mappings) from the local instance.


10. Top 20 Costliest Stored Procedures - High CPU
To find the stored procedures that consume the most CPU, you can use the following query:
SELECT TOP (20)
    p.name AS [SP Name],
    qs.total_worker_time AS [TotalWorkerTime],
    qs.total_worker_time/qs.execution_count AS [AvgWorkerTime],
    qs.execution_count,
    ISNULL(qs.execution_count/DATEDIFF(Second, qs.cached_time, GETDATE()), 0) AS [Calls/Second],
    qs.total_elapsed_time,
    qs.total_elapsed_time/qs.execution_count AS [avg_elapsed_time],
    qs.cached_time
FROM    sys.procedures AS p WITH (NOLOCK)
INNER JOIN sys.dm_exec_procedure_stats AS qs WITH (NOLOCK) ON p.[object_id] = qs.[object_id]
WHERE qs.database_id = DB_ID()
ORDER BY qs.total_worker_time DESC OPTION (RECOMPILE);


Output
SP Name: stored procedure name
TotalWorkerTime: total worker (CPU) time since the procedure was cached
AvgWorkerTime: average worker time per execution
execution_count: total number of executions since the procedure was cached
Calls/Second: number of calls/executions per second
total_elapsed_time: total elapsed time
avg_elapsed_time: average elapsed time per execution
cached_time: the time the procedure was cached

11. How to identify DUPLICATE indexes in SQL Server
To find duplicate (overlapping) indexes, you can use the following query:
;WITH myduplicate
     AS (SELECT Sch.[name] AS SchemaName,
                Obj.[name] AS TableName,
                Idx.[name] AS IndexName,
                Index_col(Sch.[name] + '.' + Obj.[name], Idx.index_id, 1)  AS Col1,
                Index_col(Sch.[name] + '.' + Obj.[name], Idx.index_id, 2)  AS Col2,
                Index_col(Sch.[name] + '.' + Obj.[name], Idx.index_id, 3)  AS Col3,
                Index_col(Sch.[name] + '.' + Obj.[name], Idx.index_id, 4)  AS Col4,
                Index_col(Sch.[name] + '.' + Obj.[name], Idx.index_id, 5)  AS Col5,
                Index_col(Sch.[name] + '.' + Obj.[name], Idx.index_id, 6)  AS Col6,
                Index_col(Sch.[name] + '.' + Obj.[name], Idx.index_id, 7)  AS Col7,
                Index_col(Sch.[name] + '.' + Obj.[name], Idx.index_id, 8)  AS Col8,
                Index_col(Sch.[name] + '.' + Obj.[name], Idx.index_id, 9)  AS Col9,
                Index_col(Sch.[name] + '.' + Obj.[name], Idx.index_id, 10) AS Col10,
                Index_col(Sch.[name] + '.' + Obj.[name], Idx.index_id, 11) AS Col11,
                Index_col(Sch.[name] + '.' + Obj.[name], Idx.index_id, 12) AS Col12,
                Index_col(Sch.[name] + '.' + Obj.[name], Idx.index_id, 13) AS Col13,
                Index_col(Sch.[name] + '.' + Obj.[name], Idx.index_id, 14) AS Col14,
                Index_col(Sch.[name] + '.' + Obj.[name], Idx.index_id, 15) AS Col15,
                Index_col(Sch.[name] + '.' + Obj.[name], Idx.index_id, 16) AS Col16
         FROM   sys.indexes Idx
                INNER JOIN sys.objects Obj
                        ON Idx.[object_id] = Obj.[object_id]
                INNER JOIN sys.schemas Sch
                        ON Sch.[schema_id] = Obj.[schema_id]
         WHERE  index_id > 0)
SELECT MD1.schemaname,
       MD1.tablename,
       MD1.indexname,
       MD2.indexname AS OverLappingIndex,
       MD1.col1,
       MD1.col2,
       MD1.col3,
       MD1.col4,
       MD1.col5,
       MD1.col6,
       MD1.col7,
       MD1.col8,
       MD1.col9,
       MD1.col10,
       MD1.col11,
       MD1.col12,
       MD1.col13,
       MD1.col14,
       MD1.col15,
       MD1.col16
FROM   myduplicate MD1
       INNER JOIN myduplicate MD2
               ON MD1.tablename = MD2.tablename
                  AND MD1.indexname <> MD2.indexname
                  AND MD1.col1 = MD2.col1
                  AND ( MD1.col2 IS NULL
                         OR MD2.col2 IS NULL
                         OR MD1.col2 = MD2.col2 )
                  AND ( MD1.col3 IS NULL
                         OR MD2.col3 IS NULL
                         OR MD1.col3 = MD2.col3 )
                  AND ( MD1.col4 IS NULL
                         OR MD2.col4 IS NULL
                         OR MD1.col4 = MD2.col4 )
                  AND ( MD1.col5 IS NULL
                         OR MD2.col5 IS NULL
                         OR MD1.col5 = MD2.col5 )
                  AND ( MD1.col6 IS NULL
                         OR MD2.col6 IS NULL
                         OR MD1.col6 = MD2.col6 )
                  AND ( MD1.col7 IS NULL
                         OR MD2.col7 IS NULL
                         OR MD1.col7 = MD2.col7 )
                  AND ( MD1.col8 IS NULL
                         OR MD2.col8 IS NULL
                         OR MD1.col8 = MD2.col8 )
                  AND ( MD1.col9 IS NULL
                         OR MD2.col9 IS NULL
                         OR MD1.col9 = MD2.col9 )
                  AND ( MD1.col10 IS NULL
                         OR MD2.col10 IS NULL
                         OR MD1.col10 = MD2.col10 )
                  AND ( MD1.col11 IS NULL
                         OR MD2.col11 IS NULL
                         OR MD1.col11 = MD2.col11 )
                  AND ( MD1.col12 IS NULL
                         OR MD2.col12 IS NULL
                         OR MD1.col12 = MD2.col12 )
                  AND ( MD1.col13 IS NULL
                         OR MD2.col13 IS NULL
                         OR MD1.col13 = MD2.col13 )
                  AND ( MD1.col14 IS NULL
                         OR MD2.col14 IS NULL
                         OR MD1.col14 = MD2.col14 )
                  AND ( MD1.col15 IS NULL
                         OR MD2.col15 IS NULL
                         OR MD1.col15 = MD2.col15 )
                  AND ( MD1.col16 IS NULL
                         OR MD2.col16 IS NULL
                         OR MD1.col16 = MD2.col16 )
ORDER  BY MD1.schemaname,
          MD1.tablename,
          MD1.indexname


This query finds duplicate (overlapping) indexes, so you can review and remove the redundant ones.

In this post, we covered some useful queries for working with Microsoft SQL Server. These queries can help you perform tasks such as listing all databases on a server, viewing the schema of a table, checking the size of a database, retrieving the current user and the current date and time, linking to another server's database, and identifying duplicate indexes.

I hope these queries are useful for you!

HostForLIFE.eu SQL Server 2019 Hosting
HostForLIFE.eu is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.



SQL Server Hosting - HostForLIFE.eu :: Compare And Identify Data Differences Between Two SQL Server Tables

July 7, 2023 07:58 by author Peter

Frequently, systems utilize distributed databases containing distributed tables. Multiple mechanisms facilitate distribution, including replication. In this situation, it is essential to continuously maintain the synchronization of a specific data segment. Additionally, it is necessary to verify the synchronization itself. This is when it becomes necessary to compare data in two tables.

Before comparing data in two tables, you must ensure that their schemas are either identical or differ only in acceptable ways. An acceptable difference is one in the definitions of the two tables that still allows correct data comparison; for example, the types of corresponding columns in the compared tables must map to each other without data loss.

Compare the SQL Server schemas of the two Employee tables from the JobEmpl and JobEmplDB databases.
For further work, it is necessary to review the Employee table definitions in the JobEmpl and JobEmplDB databases:
USE [JobEmpl]
GO

SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

CREATE TABLE [dbo].[Employee](
    [EmployeeID] [int] IDENTITY(1,1) NOT NULL,
    [FirstName] [nvarchar](255) NOT NULL,
    [LastName] [nvarchar](255) NOT NULL,
    [Address] [nvarchar](max) NULL,
    [CheckSumVal]  AS (checksum((coalesce(CONVERT([nvarchar](max),[FirstName]),N'')+coalesce(CONVERT([nvarchar](max),[LastName]),N''))+coalesce(CONVERT([nvarchar](max),[Address]),N''))),
    [REPL_GUID] [uniqueidentifier] ROWGUIDCOL  NOT NULL,
    CONSTRAINT [PK_Employee_EmployeeID] PRIMARY KEY CLUSTERED
    (
        [EmployeeID] ASC
    )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 80) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO

ALTER TABLE [dbo].[Employee] ADD CONSTRAINT [Employee_DEF_REPL_GUID]  DEFAULT (newsequentialid()) FOR [REPL_GUID]
GO

-- and

USE [JobEmplDB]
GO

SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

CREATE TABLE [dbo].[Employee](
    [EmployeeID] [int] IDENTITY(1,1) NOT NULL,
    [FirstName] [nvarchar](255) NOT NULL,
    [LastName] [nvarchar](255) NOT NULL,
    [Address] [nvarchar](max) NULL,
    CONSTRAINT [PK_Employee_EmployeeID] PRIMARY KEY CLUSTERED
    (
        [EmployeeID] ASC
    )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO

Comparing Database Schemas using SQL Server Data Tools
With the help of Visual Studio and SSDT, you can compare database schemas. To do this, you need to create a new project “JobEmployee” by doing the following:

Then you need to import the database.
To do this, right-click the project and in the context menu, select Import \ Database..:

Next, hit the “Select connection…” button and in the cascading menu, in the “Browse” tab set up the connection to JobEmpl database as follows:

Next, click the “Start” button to start the import of the JobEmpl database:

You will then see a window showing the progress of the database import:

When the database import process is completed, press “Finish”:

 

 

Once it is finished, JobEmployee project will contain directories, subdirectories, and database objects definitions in the following form:


In the same way, we create a similar JobEmployeeDB project and import JobEmplDB database into it:


 

Now, right-click the JobEmployee project and in the drop-down menu, select “Schema Compare”:

This will bring up the database schema compare window.
In the window, you need to select the projects as source and target, and then click the “Compare” button to start the comparison process:

We can see here that despite the differences between the definitions of the Employee tables in two databases, the table columns that we need for comparison are identical in data type. This means that the difference in the schemas of the Employee tables is acceptable. That is, we can compare the data in these two tables.
We can also use other tools to compare database schemas such as dbForge Schema Compare for SQL Server.

Comparing database schemas with the help of dbForge Schema Compare
Now, to compare database table schemas, we use a tool dbForge Schema Compare for SQL Server, which is also included in SQL Tools.
For this, in SSMS, right-click the first database and in the drop-down menu, select Schema Compare\ Set as Source:

We simply transfer JobEmplDB, the second database, to Target area and click the green arrow between source and target:

You simply need to press the “Next” button in the opened database schema comparison project:

Leave the following settings at their defaults and click the “Next” button:


In the “Schema Mapping” tab, we also leave everything by default and press the “Next” button:

On the “Table Mapping” tab, select the required Employee table and on the right of the table name, click the ellipsis:


The table mapping window opens up:

In our case, only 4 fields are mapped, because the last two fields exist only in the JobEmpl database and are absent from the JobEmplDB database.
This setting is useful when column names in the source table and target table do not match.
The “Column details” table displays the column definition details in two tables: on the left – from the source database and on the right – from the target database.
Now hit the “OK” button


Now, to start the database schema comparison process, click the “Compare” button:

A progress bar will appear

We then select the desired Employee table.

At the bottom left, you can see the code for defining the source database table and on the right – the target database table.
We can see here, as before, that the definitions of the Employee table in two databases JobEmpl and JobEmplDB show admissible distinction, that is why we can compare data in these two tables.

Let us now move on to the comparison of the data in two tables itself.

Comparing database data using SSIS

Let’s first make a comparison using SSIS. For this, you need to have SSDT installed.
We create a project called Integration Service Project in Visual Studio and name it IntegrationServicesProject


We then create three connections:

    To the source JobEmpl database
    To the target JobEmplDB database
    To the JobEmplDiff database, where the table of differences will be stored, as shown below:

That way, new connections will be displayed in the project.

Then, in the project, in the “Control Flow” tab, we create a data flow task and name it “data flow task”

Let us now switch to the data flow and create an element “Source OLE DB” by doing the following

On the “Columns” tab, we then select the fields required for comparison
And now, right-click the created data source and in the drop-down menu, select “Show Advanced Editor…”

Next, for each of the "Output Columns" groups, for the EmployeeID column, set the SortKeyPosition property to 1. That is, we sort by the EmployeeID field value in ascending order.

Similarly, let us create and set the data source to the JobEmplDB database.
That way, we obtain two created sources in the data flow task

Now, we create a merge join element in the following way:


Please note that we merge tables using a full outer join.
We then connect our sources to the created "Merge Join" element.

We connect JobEmpl as the left input and JobEmplDB as the right one.
In fact, the order is not that important; it is possible to do this the other way around.
In the JobEmplDiff database, we create a differences table called EmployeeDiff, where we are going to put the data differences, in the following manner:

USE [JobEmplDiff]
GO

SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

CREATE TABLE [dbo].[EmployeeDiff](
    [ID] [int] IDENTITY(1,1) NOT NULL,
    [EmployeeID] [int] NULL,
    [EmployeeID_2] [int] NULL,
    [FirstName] [nvarchar](255) NULL,
    [FirstName_2] [nvarchar](255) NULL,
    [LastName] [nvarchar](255) NULL,
    [LastName_2] [nvarchar](255) NULL,
    [Address] [nvarchar](max) NULL,
    [Address_2] [nvarchar](max) NULL,
    CONSTRAINT [PK_EmployeeDiff_1] PRIMARY KEY CLUSTERED
    (
        [ID] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 80) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO


Now, let us get back to our project and in the data flow task, we create a conditional split element

In the Conditional field for NotMatch, you need to type the following expression:

(
  ISNULL(EmployeeID)
  || ISNULL(EmployeeID_2)
)
|| (
  REPLACENULL(FirstName, "") != REPLACENULL(FirstName_2, "")
)
|| (
  REPLACENULL(LastName, "") != REPLACENULL(LastName_2, "")
)
|| (
  (
    Address != Address_2
    && (!ISNULL(Address))
    && (!ISNULL(Address_2))
  )
  || ISNULL(Address) != ISNULL(Address_2)
)

This expression is true when, for the same EmployeeID, the fields do not match (taking NULL values into account). It is also true when an EmployeeID from one table has no matching EmployeeID in the other table, that is, when the row exists in only one of the two tables.
You can obtain a similar result in the form of selection using the following T-SQL query:

SELECT
    e1.[EmployeeID] AS [EmployeeID],
    e2.[EmployeeID] AS [EmployeeID_2],
    e1.[FirstName] AS [FirstName],
    e2.[FirstName] AS [FirstName_2],
    e1.[LastName] AS [LastName],
    e2.[LastName] AS [LastName_2],
    e1.[Address] AS [Address],
    e2.[Address] AS [Address_2]
FROM
    [JobEmpl].[dbo].[Employee] AS e1
    FULL OUTER JOIN [JobEmplDB].[dbo].[Employee] AS e2 ON e1.[EmployeeID] = e2.[EmployeeID]
WHERE
    (e1.[EmployeeID] IS NULL)
    OR (e2.[EmployeeID] IS NULL)
    OR (COALESCE(e1.[FirstName], N'') <> COALESCE(e2.[FirstName], N''))
    OR (COALESCE(e1.[LastName], N'') <> COALESCE(e2.[LastName], N''))
    OR (COALESCE(e1.[Address], N'') <> COALESCE(e2.[Address], N''));
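As a side note, when both databases are reachable from a single instance and you only need the differing rows rather than the side-by-side columns, a set-based sketch using EXCEPT gives the symmetric difference:

-- Rows that exist (or differ) in JobEmpl but not in JobEmplDB
(SELECT EmployeeID, FirstName, LastName, Address
 FROM [JobEmpl].[dbo].[Employee]
 EXCEPT
 SELECT EmployeeID, FirstName, LastName, Address
 FROM [JobEmplDB].[dbo].[Employee])
UNION ALL
-- Rows that exist (or differ) in JobEmplDB but not in JobEmpl
(SELECT EmployeeID, FirstName, LastName, Address
 FROM [JobEmplDB].[dbo].[Employee]
 EXCEPT
 SELECT EmployeeID, FirstName, LastName, Address
 FROM [JobEmpl].[dbo].[Employee]);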


Now, let us connect the elements “Merge Join” and “Conditional Split”

Next, we create an OLE DB destination element.

Now, we map the columns.

We set “Error Output” tab by default.

We can now join “Conditional Split” and “OLE DB JobEmplDiff” elements. As a result, we get a complete data flow.

Let us run the package that we have obtained.

Upon successful completion of the package run, all its elements turn into green circles.

If an error occurs, it is displayed in the form of a red circle instead of a green one. To resolve any issues, you need to read the log files.
To analyze the data difference, we need to derive the necessary data from the EmployeeDiff table of the JobEmplDiff database:

SELECT
    [ID],
    [EmployeeID],
    [EmployeeID_2],
    [FirstName],
    [FirstName_2],
    [LastName],
    [LastName_2],
    [Address],
    [Address_2]
FROM
    [JobEmplDiff].[dbo].[EmployeeDiff]

Here, you can see rows from the Employee table in the JobEmpl database where Address isn't set and where FirstName and LastName are mixed up. There are also a number of rows missing from JobEmplDB that exist in JobEmpl.

HostForLIFE SQL Server 2019 Hosting
HostForLIFE is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.



SQL Server Hosting - HostForLIFE.eu :: Magic Tables in SQL Server

June 23, 2023 09:47 by author Peter

Magic tables are logical temporary tables created internally by SQL Server to hold recently inserted, deleted, and updated data. They are created during DML trigger execution. If you want to know more about DML triggers, you may refer to my previous article on DML Triggers.

Two magic tables are created at the time of an insert/update/delete in SQL Server:
    The INSERTED magic table
    The DELETED magic table
An UPDATE populates both: DELETED holds the rows as they were before the update, and INSERTED holds the new versions.

Magic tables are stored in tempdb just as temporary internal tables, and we can see them with the help of triggers. We can retrieve the information about the impacted records using these magic tables.

Let’s see how this works with the use of a trigger.
    When we perform the insert operation, the inserted magic table will have a recently inserted record showing on top of the table.
    When we perform the delete operation, the deleted magic table will have a recently deleted record showing on top of the table.
    When we perform the update operation, the inserted magic table will have the recently updated record showing on top of the table (and the deleted magic table holds the pre-update values).

Let's consider the below table to see how this works.
SELECT * FROM StudentsReport;
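The original post shows StudentsReport only as a screenshot; an assumed definition that matches the INSERT and UPDATE examples below would be:

-- Assumed schema; column names are inferred from the later examples
CREATE TABLE StudentsReport (
    StudentId INT,
    Name      VARCHAR(50),
    Subject   VARCHAR(50),
    Marks     INT
);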

Inserted Magic Table
Let’s create a trigger on the StudentsReport table to see if the values are inserted on the StudentsReport table and see if a virtual table or temp table (Magic table) is created with recently inserted records.

CREATE TRIGGER  TR_StudentsReport_InsertedMagic ON StudentsReport
FOR INSERT
AS
BEGIN
    SELECT * FROM INSERTED
END

Now, when we insert records into the StudentsReport table, the inserted magic table will be created at the same time, containing the recently inserted records.

Now execute the below queries together.
INSERT INTO StudentsReport VALUES (6, 'Peter', 'English', 90);
SELECT * FROM StudentsReport;

We can see that while inserting a record into the StudentsReport table, the recently inserted record shows up in a temp table, and that temp table is the inserted magic table.
Deleted Magic Table

Now let’s create a trigger on the StudentsReport table to see if the values are deleted from the StudentsReport table and if the Magic table is created for recently deleted records.

CREATE TRIGGER  TR_StudentsReport_DeletedMagic ON StudentsReport
FOR DELETE
AS
BEGIN
    SELECT * FROM Deleted
END
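Now execute, for example, the below queries together (mirroring the earlier insert example; StudentId 6 is the row inserted above):

DELETE FROM StudentsReport WHERE StudentId = 6;
SELECT * FROM StudentsReport;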

We can see that while deleting a record from the StudentsReport table, the recently deleted record also shows up in a temp table, and that temp table is the deleted magic table.

Updated Magic Table


Now, Let’s create a trigger on the StudentsReport table to see if the values are updated on the StudentsReport table and if the Magic table is created for recently updated records.
CREATE TRIGGER  TR_StudentsReport_UpdatedMagic ON StudentsReport
FOR UPDATE
AS
BEGIN
    SELECT * FROM INSERTED
END

Now, when we update records in the StudentsReport table, the updated magic table will be created at the same time, containing the recently updated records.
Now execute the below query together.
UPDATE StudentsReport SET Marks = 90 WHERE StudentId = 3;
SELECT * FROM StudentsReport;

We can see that while updating the record in the StudentsReport table, the recently updated record also shows up in a temp table, and that temp table is the updated magic table (INSERTED holds the new values; DELETED holds the old ones).

HostForLIFE SQL Server 2019 Hosting
HostForLIFE is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.



SQL Server Hosting - HostForLIFE :: SPARSE Column in SQL Server

January 9, 2023 06:58 by author Peter

In this article, we will learn about the SPARSE column in SQL Server. The SPARSE column is a useful feature of SQL Server that helps us reduce the space requirements for NULL values. Using SPARSE columns, we may save 20 to 40 percent of space.

SPARSE Column in SQL Server
A SPARSE column is an ordinary column with storage optimized for NULL values. It reduces the space requirements for NULL values at the cost of more overhead to retrieve non-NULL values. In other words, a SPARSE column is better at managing NULL and ZERO values in SQL Server: a NULL in a SPARSE column occupies no space in the database. We can define a column as a SPARSE column using the CREATE TABLE or ALTER TABLE statements.
CREATE TABLE TableName
(
      .....
      Col1 INT SPARSE,
      Col2 VARCHAR(100) SPARSE,
      Col3 DateTime SPARSE
      .....
)


We may also add/change a column from the graphical view.

Example
In this example, I have created two tables with the same number of columns and the same data types, but one table's columns are created as SPARSE columns. Each table contains 500+ rows.
CREATE TABLE TableName
(
      Col1 INT SPARSE,
      Col2 VARCHAR(100) SPARSE,
      Col3 DateTime SPARSE
)

CREATE TABLE TableName1
(
      Col1 INT ,
      Col2 VARCHAR(100) ,
      Col3 DateTime
)
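The post does not show the data that was inserted; one minimal way to populate both tables with identical, mostly-NULL rows is the GO n batch-repeat shorthand (an SSMS/sqlcmd feature):

INSERT INTO TableName  (Col1, Col2, Col3) VALUES (NULL, NULL, NULL)
INSERT INTO TableName1 (Col1, Col2, Col3) VALUES (NULL, NULL, NULL)
GO 500  -- repeats the batch 500 times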


Using the sp_spaceused stored procedure, we can determine the space occupied by the table data.
sp_spaceused 'TableName'
GO
sp_spaceused 'TableName1'

Advantages of a SPARSE column
    A SPARSE column saves database space when there are zero or null values.
    INSERT, UPDATE, and DELETE statements can reference the SPARSE columns by name.
    We can get more benefits from Filtered indexes on a SPARSE column.
    We can use SPARSE columns with change tracking and change data capture.

Limitations of a SPARSE column
    A SPARSE column must be nullable and cannot have the ROWGUIDCOL or IDENTITY properties.
    A SPARSE column cannot be data types like text, ntext, image, timestamp, user-defined data type, geometry, or geography.
    It cannot have a default value or be bound to a rule.
    A SPARSE column cannot be part of a clustered index or a unique primary key index and partition key of a clustered index or heap.
    Merge replication does not support SPARSE columns.
    The SPARSE property of a column is not preserved when the table is copied.

HostForLIFE SQL Server 2019 Hosting
HostForLIFE is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.



SQL Server Hosting - HostForLIFE :: Resizing Tempdb In SQL Server

January 8, 2021 08:57 by author Peter

Occasionally, we must resize or realign our tempdb log file (.ldf) or data files (.mdf or .ndf) due to a growth event that forces the file size out of whack. To resize, we have three options: restart the SQL Server service, add additional files, or shrink the current file. Most of us have been faced with runaway log files, and in an emergency situation restarting the SQL Server service may not be an option, yet we still need to get the log file size down before we run out of disk space, for example. The process of shrinking down that file can get tricky, so I created this flow chart to help you out if you ever get into this situation.
 
Now it's very important to note that many of these commands will clear your cache and will greatly impact your server performance as it warms the cache back up. In addition, you should not shrink your database data or log files unless absolutely necessary; shrinking tempdb at the wrong time can even result in a corrupt tempdb.
 
Let's walk through it and explain some things as we go along.
The first thing you must do is issue a CHECKPOINT. A checkpoint marks the log as a "good up to here" point of reference. It lets the SQL Server Database Engine know it can start applying changes contained in the log during recovery after this point if an unexpected shutdown or crash occurs. Anything prior to the checkpoint is what I like to call "hardened". This means all the dirty pages in memory have been written to disk, specifically to the .mdf and .ndf files. So, it is important to make that mark in the log before you proceed. Now, we know tempdb is not recovered during a restart, it is recreated; however, this is still a requirement.
    USE TEMPDB;    
    GO    
    CHECKPOINT;  

Next, we try to shrink the log by issuing a DBCC SHRINKFILE command. This is the step that frees the unallocated space from the database file if there is any unallocated space available. You will note the Shrink? decision block in the diagram after this step. It is possible that there is no unallocated space and you will need to move further along the path to free some up and try again.
    USE TEMPDB;    
    GO   
    DBCC SHRINKFILE (templog, 1000);   --Shrinks it to 1GB  

If the database shrinks, great, congratulations! However, some of us still might have work to do. Next up is to try and free up some of that allocated space by running DBCC DROPCLEANBUFFERS and DBCC FREEPROCCACHE.
DBCC DROPCLEANBUFFERS
 
Clears the clean buffers from the buffer pool and columnstore object pool. This will flush cached indexes and data pages.
    DBCC DROPCLEANBUFFERS WITH NO_INFOMSGS;  

DBCC FREEPROCCACHE
Clears the procedure cache, which you are probably familiar with as a performance-tuning tool in development. It will clean out all your execution plans from cache, which may free up some space in tempdb. As we know, though, this comes at a performance cost: your execution plans have to make it back into cache on their next execution and will not get the benefit of plan reuse. Now, it's not really clear why this works, so I asked tempdb expert Pam Lahoud (B|T) for clarification as to why this has anything to do with tempdb. Both of us are diving into this to understand exactly why this works. I believe it to be related to tempdb using cached objects and memory objects associated with stored procedures, which can have latches and locks on them that need to be released by running this. Check back for further clarification; I'll be updating this as I find out more.
    DBCC FREEPROCCACHE WITH NO_INFOMSGS;  

Once these two commands have been run and you have attempted to free up some space you can now try the DBCC SHRINKFILE command again. For most this should make the shrink possible and you will be good to go. Unfortunately, a few more of us may have to take a couple more steps through to get to that point.
 
The last two things I do, when I have no other choice to get my log file smaller, are to run the final two commands in the process. These should do the trick and get the log to shrink.
 
DBCC FREESESSIONCACHE
This command will flush any distributed query connection cache, meaning queries that are between two or more servers.
    DBCC FREESESSIONCACHE WITH NO_INFOMSGS;  

DBCC FREESYSTEMCACHE
This command will release all unused remaining cache entries from all cache stores including temp table cache. This covers any temp table or table variables remaining in cache that need to be released.
    DBCC FREESYSTEMCACHE ('ALL');  
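Putting it all together, here is a consolidated sketch of the whole escalation path; run it step by step rather than blindly, stopping as soon as the shrink succeeds, since each cache-clearing command has a performance cost:

    USE tempdb;
    GO
    CHECKPOINT;
    GO
    DBCC SHRINKFILE (templog, 1000);           -- try the shrink first
    GO
    DBCC DROPCLEANBUFFERS WITH NO_INFOMSGS;    -- then free clean buffer pages
    DBCC FREEPROCCACHE WITH NO_INFOMSGS;       -- and cached plans, then retry the shrink
    GO
    DBCC FREESESSIONCACHE WITH NO_INFOMSGS;    -- last resort: distributed query cache
    DBCC FREESYSTEMCACHE ('ALL');              -- and all unused system cache entries
    GO
    DBCC SHRINKFILE (templog, 1000);           -- final retry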

In my early days as a database administrator, I would have loved to have had this diagram. Having some quick steps during stressful situations, such as tempdb's log file filling up on me, would have been a huge help. So hopefully someone will find this handy and will be able to use it to take away a little of their stress.

HostForLIFE SQL Server 2019 Hosting
HostForLIFE is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.

 



SQL Server Hosting - HostForLIFE.eu :: How To Get The Entire Database Size Using C# In SQL Server?

December 18, 2020 06:53 by author Peter

This article shows how we can determine the size of an entire database using C# and the size of each and every table in the database using a single SQL command. To explain this, it is divided into three sections,

  • Getting Size of one Table in the Database using single SQL Command
  • Getting Size of each Table in the Database using single SQL Command
  • Getting Size of the entire Database using C#

Getting Size of one Table in the Database using single SQL Command
To get a table's size in SQL Server, we need to use the system stored procedure sp_spaceused. If we pass a table name as an argument, it gives the disk space used by the table and some other information: the number of rows in the table, the total amount of reserved space, the total amount of space reserved for the table but not yet used, and the total amount of space used by indexes.

Example
For the ADDRESS table in our database, if we run,
sp_spaceused 'TADDRS'

It will give the following result,

Getting Size of each Table in the Database using single SQL Command

We have seen how we can determine the size of one table. Now, suppose we want to determine the size of each table in the entire database. We could find the size of any table using the command above just by changing the table name in the parameter. But would it not be much better if we had a one-line SQL command that gives the size of each table?
 
Fortunately, SQL Server provides a way to do this. A stored procedure, "sp_MSforeachtable", can do this easily for us!
 
The sp_MSforeachtable stored procedure is one of many undocumented stored procedures tucked away in the depths of SQL Server. You can find more details of "sp_MSforeachtable" in this link.
 
sp_MSforeachtable is an undocumented stored procedure that is not listed in MSDN Books Online and can be used to run a query against each table in the database. In short, you can use this as,
EXEC sp_MSforeachtable @command1="command to run"
 
In the "command to run" put a "?" , where you want the table name to be inserted. For example, to run the sp_spaceused stored procedure for each table in the database, we'd use,

It will give the size of each table (including other details) like,

Getting the Size of the entire Database
Now we want to get the total size used by a database, i.e. the sum of the space used by each table in the database. We have seen in the previous section how we can get the size of each table. Here is sample code in C# that can be used to calculate the size of the entire database.
 
In this, we execute the same command discussed in the section above and query the database using plain ADO.NET. We get the result in a DataSet and then iterate over each table to read its size. The table size is stored in the "data" column of each result (see Picture-1 above). We just add up the "data" column value of each table to get the size of the entire database. Sample code is also attached with this article.
using System;
using System.Data;
using System.Data.SqlClient;

class MemorySizeCalculator
{
    public void GetDbSize()
    {
        int sum = 0;

        // Database connection string
        string sConnectionString = "Server = .; Integrated Security = true; database = HKS";

        // SQL command [same command discussed in section B of this article]
        string sSqlquery = "EXEC sp_MSforeachtable @command1=\"EXEC sp_spaceused '?'\" ";

        DataSet oDataSet = new DataSet();

        // Executing the SQL command using ADO.NET; Fill executes the command itself,
        // producing one result table per database table
        using (SqlConnection oConn = new SqlConnection(sConnectionString))
        {
            oConn.Open();
            using (SqlCommand oCmdGetData = new SqlCommand(sSqlquery, oConn))
            {
                SqlDataAdapter executeAdapter = new SqlDataAdapter(oCmdGetData);
                executeAdapter.Fill(oDataSet);
            }
            oConn.Close();
        }

        // Iterating over each table's result set
        for (int i = 0; i < oDataSet.Tables.Count; i++)
        {
            // We want to add only the "data" column value of each table, e.g. "136 KB"
            sum = sum + Convert.ToInt32(oDataSet.Tables[i].Rows[0]["data"].ToString().Replace("KB", "").Trim());
        }
        Console.WriteLine("Total size of the database is : " + sum + " KB");
    }
}
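Calling it is straightforward; a minimal usage sketch (assuming the connection string above points at your database):

var calculator = new MemorySizeCalculator();
calculator.GetDbSize();  // prints e.g. "Total size of the database is : 1160 KB"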

HostForLIFE.eu SQL Server 2019 Hosting
HostForLIFE.eu is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.



European SignalR Hosting :: How To Get List Of Connected Clients In SignalR?

November 27, 2020 07:22 by author Peter

There is no direct method to call for this, so we need to write our own logic inside the methods provided by the SignalR library.
 

There is a Hub class provided by the SignalR library.
 
In this class, we have 2 methods.
    OnConnectedAsync()
    OnDisconnectedAsync(Exception exception)


So, the OnConnectedAsync() method will add a user, and OnDisconnectedAsync will remove the user, because when any client gets connected, the OnConnectedAsync method gets called.

In the same way, when any client gets disconnected, then the OnDisconnectedAsync method is called.
 
So, let us see it by example.
 
Step 1

Here, I am going to define a class SignalRHub, inherit from the Hub class that provides the virtual methods, and use the Context.ConnectionId: a unique id generated by the SignalR HubCallerContext class.
public class SignalRHub : Hub
{
    public override Task OnConnectedAsync()
    {
        ConnectedUser.Ids.Add(Context.ConnectionId);
        return base.OnConnectedAsync();
    }

    public override Task OnDisconnectedAsync(Exception exception)
    {
        ConnectedUser.Ids.Remove(Context.ConnectionId);
        return base.OnDisconnectedAsync(exception);
    }
}


Step 2
In this step, we need to define our class ConnectedUser with the property Ids, which is used to add/remove entries when any client gets connected or disconnected.
 
Let us see this with an example.
public static class ConnectedUser
{
    public static List<string> Ids = new List<string>();
}


Now, you can get the number of currently connected clients using ConnectedUser.Ids.Count.
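If you also want to report that number back to callers, a minimal sketch of a hub method (added inside the SignalRHub class) could look like this; the client-side event name "connectedCount" is an assumption here:

public Task GetConnectedCount()
{
    // Sends the current count back to the calling client only
    return Clients.Caller.SendAsync("connectedCount", ConnectedUser.Ids.Count);
}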
 
As you can see, here I am using a static class. That will work fine when you have only one server, but when you scale out to multiple servers, it will not work as expected. In that case, you could use a shared cache server such as Redis cache or SQL cache. (Note also that List<string> is not thread-safe; a concurrent collection is a safer choice under load.)



SQL Server Hosting - HostForLIFE.eu :: How to Sort Numbers in SQL Server Without A Sorting Function

November 25, 2020 08:18 by author Peter

Today, I'm gonna show you how to sort numbers in SQL Server. It's not a difficult task, but there is no easy built-in way. Front-end languages have many functions for sorting values, but SQL Server has no predefined function for sorting the numbers held in a delimited string.

For example, I will sort the numbers 12,5,8,64,548,987,6542,4,285,11,26. SQL Server has no array or array list, so how can we hold the values while sorting the numbers? SQL Server has temporary tables. A temporary table is created automatically and dropped after the execution.

First of all, create a temporary table. Suppose a problem occurs in SQL Server or during program execution and the temporary table isn't dropped properly. When we then want to create the table a second time, an error occurs, as in the following:

There is already an object named '#temp' in the database.

So this type of problem is avoided by first checking whether the table already exists. Temporary tables live in tempdb, so that is where we must check:

    IF OBJECT_ID('tempdb..#temp') IS NOT NULL -- check whether #temp already exists in tempdb
    BEGIN
        DROP TABLE #temp
    END

If the table already exists in tempdb, then drop the table #temp.

My values are 12,5,8,64,548,987,6542,4,285,11,26, and they need to be split up before the sort. How can we split the numbers? Of course, at the comma (,). If I split at the comma, then I get the numbers like this: 12 5 8 64 548, and so on. One question then arises: how do we extract each value? Don't worry, I have done that.

select left('12,45,18,95', CHARINDEX(',','12,45,18,95')-1)

If I run this query, it returns the value 12.

After that, everything is fine; we get the values from the #temp table.

    select ROW_NUMBER() over (order by value) 'srNo', value from #temp order by value

The following is a complete Stored Procedure to sort the numbers. 

   ALTER PROC [dbo].[Porc_sortnumber]
    AS
    BEGIN
        DECLARE @value VARCHAR(MAX) = '1,2,5,6,12,88,47,95,56,20'
        DECLARE @length INT = 1

        IF OBJECT_ID('tempdb..#temp') IS NOT NULL -- check whether #temp already exists in tempdb
        BEGIN
            DROP TABLE #temp
        END

        CREATE TABLE #temp (id INT IDENTITY(1,1), value INT)

        WHILE (@length != 0)
        BEGIN
            -- take the value before the first comma, then cut it (and the comma) off @value
            INSERT INTO #temp (value) VALUES (LEFT(@value, CHARINDEX(',', @value) - 1))
            SET @value = RIGHT(@value, LEN(@value) - CHARINDEX(',', @value))
            SET @length = CHARINDEX(',', @value)
        END

        -- the last value has no trailing comma, so insert it separately
        INSERT INTO #temp (value) VALUES (@value)

        SELECT ROW_NUMBER() OVER (ORDER BY value) 'srNo', value FROM #temp ORDER BY value
    END

Output
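For the hard-coded input '1,2,5,6,12,88,47,95,56,20', the procedure should return:

srNo  value
1     1
2     2
3     5
4     6
5     12
6     20
7     47
8     56
9     88
10    95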

I hope this article is helpful!

HostForLIFE.eu SQL Server 2014 Hosting



ASP.NET 5 Hosting Available NOW!

November 24, 2020 08:00 by author Scott

HostForLIFE.eu is a popular online Windows and ASP.NET-based hosting service provider. The company has managed to build a strong client base in a very short period of time. It is known for offering ultra-fast, fully managed, and secure services in a competitive market.

.NET 5 is the next version of .NET Core and the future of the .NET platform. With .NET 5 you have everything you need to build rich, interactive front-end web UIs and powerful backend services. .NET 5 contains great performance improvements in the runtime and libraries and in the gRPC components. These improvements, when applied to ASP.NET Core, result in some significant wins in throughput (RPS) and latency.

HostForLIFE.eu hosts its servers in top-rated data centers located in Amsterdam (NL), London (UK), Washington, D.C. (US), Paris (France), Frankfurt (Germany), Chennai (India), Milan (Italy), Toronto (Canada) and São Paulo (Brazil) to ensure 99.9% network uptime. All data centers feature redundancies in network connectivity, power, HVAC, security, and fire suppression. HostForLIFE.eu proudly announces the availability of ASP.NET 5 for new and existing customers.

Their powerful servers are especially optimized to ensure top ASP.NET 5 performance. They have the best data centers on three continents, unique account isolation for security, and 24/7 proactive uptime monitoring.

Further information and the full range of features ASP.NET 5 Hosting can be viewed here
https://hostforlife.eu/European-ASPNET-5-Hosting.

 


