European Windows 2012 Hosting BLOG

BLOG about Windows 2012 Hosting and SQL 2012 Hosting - Dedicated to European Windows Hosting Customers

SQL Server Hosting - HostForLIFE :: Using Tablock to Unlock Faster INSERT Operations in SQL Server

clock February 13, 2025 07:29 by author Peter

Performance is frequently the main issue in any SQL Server system when working with big datasets. The INSERT operation is one frequent operation that occasionally turns into a bottleneck. The time required to input data increases with its size, which can have a major effect on system performance and user experience in general. The usage of the TABLOCK hint is one of the many methods and improvements that SQL Server offers to help speed up data insertions. When working with huge datasets or when parallelism is crucial, this straightforward yet effective method can significantly increase the pace of your INSERT operations.

What is the TABLOCK Hint?
The TABLOCK hint is a table-level lock hint that directs SQL Server to take a lock on the entire target table when performing an INSERT, UPDATE, or DELETE operation. For an INSERT this is an exclusive (X) table lock, or a bulk update (BU) lock during bulk-import operations. The table stays locked for the duration of the statement, which can speed up data loading by minimizing logging and reducing contention.

A key benefit of the TABLOCK hint is that it reduces the amount of log space used during the operation, because minimal logging can be activated (this requires the database to be in the simple or bulk-logged recovery model). SQL Server then does not have to log each individual row insertion, but only the page allocations for the bulk operation. As a result, overhead drops significantly and data loading speeds up.

Additionally, because the whole table is locked, SQL Server can parallelize the insert itself (INSERT ... SELECT with TABLOCK supports parallel insert on SQL Server 2016 and later), leading to faster execution times. This is particularly useful for large-scale data-loading tasks.

When to Use TABLOCK Hint

The TABLOCK hint is ideal for scenarios where:

  • You are inserting a large number of rows into a table.
  • You can afford to lock the table for the duration of the operation (i.e., no other transactions need access to the table while the insert is in progress).
  • You want to reduce the logging overhead and speed up bulk insertions.
  • You want to use parallel insertions to take advantage of SQL Server's ability to use multiple threads for data loading.

It’s also important to note that the TABLOCK hint works well with temporary tables, so you can take advantage of these performance benefits when working with temp tables, often used in ETL processes or batch operations.
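As a quick illustration (the temp table name here is a placeholder, and the source table is the one used in the example later in this post), the hint is written the same way for a temp-table load:

```sql
-- Create an empty temp table with the same shape as the source.
SELECT TOP (0) *
INTO #SalesStaging
FROM Sales.SalesOrderDetail;

-- TABLOCK enables minimal logging and, on SQL Server 2016+,
-- a parallel INSERT ... SELECT plan.
INSERT INTO #SalesStaging WITH (TABLOCK)
SELECT *
FROM Sales.SalesOrderDetail;
```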

Benefits of Using TABLOCK

  • Improved Performance: The primary benefit of using the TABLOCK hint is the performance improvement during large INSERT operations. By reducing the amount of logging, SQL Server can insert rows much faster.
  • Parallel Insertion: With TABLOCK, SQL Server can use parallelism to load the data, speeding up the operation of systems with sufficient resources.
  • Reduced Logging Overhead: Since SQL Server logs less information, the system consumes less log space, which can be crucial when working with large datasets.
  • Works with Temp Tables: You can apply TABLOCK to temporary tables as well, so bulk loads into temp tables in ETL or batch processes get the same performance benefits.

Example
Let’s consider a scenario where we need to insert a large number of rows from the Sales.SalesOrderDetail table into the Sales.SalesOrderDetailTemp table in the HostForLIFE database.
Create table script for Sales.SalesOrderDetailTemp:
USE [HostForLIFE]
GO

DROP TABLE IF EXISTS [Sales].[SalesOrderDetailTemp]
GO

SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

CREATE TABLE [Sales].[SalesOrderDetailTemp](
    [SalesOrderID] [int] NOT NULL,
    [SalesOrderDetailID] [int] NOT NULL,
    [CarrierTrackingNumber] [nvarchar](25) NULL,
    [OrderQty] [smallint] NOT NULL,
    [ProductID] [int] NOT NULL,
    [SpecialOfferID] [int] NOT NULL,
    [UnitPrice] [money] NOT NULL,
    [UnitPriceDiscount] [money] NOT NULL,
    [LineTotal]  [money] NULL,
    [rowguid] [uniqueidentifier] ROWGUIDCOL  NOT NULL,
    [ModifiedDate] [datetime] NOT NULL,
 CONSTRAINT [PK_SalesOrderDetailTemp_SalesOrderID_SalesOrderDetailTempID] PRIMARY KEY CLUSTERED
(
    [SalesOrderID] ASC,
    [SalesOrderDetailID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, OPTIMIZE_FOR_SEQUENTIAL_KEY = OFF) ON [PRIMARY]
) ON [PRIMARY]
GO


Without the TABLOCK hint, this operation may take a considerable amount of time, especially when the table is large and the database is under load.

Here’s a basic example of how you can speed up the INSERT operation by using the TABLOCK hint.
USE HostForLIFE
GO

SET STATISTICS TIME, IO ON

SET NOCOUNT ON

INSERT INTO Sales.SalesOrderDetailTemp
SELECT *
FROM Sales.SalesOrderDetail;


Truncate the table.
USE HostForLIFE
GO

TRUNCATE TABLE Sales.SalesOrderDetailTemp


Now, let’s modify the query to use the TABLOCK hint.
USE HostForLIFE
GO

SET STATISTICS TIME, IO ON

SET NOCOUNT ON

INSERT INTO Sales.SalesOrderDetailTemp
WITH (TABLOCK)
SELECT *
FROM Sales.SalesOrderDetail;


Comparison
Execution 1 (without TABLOCK) took longer, with higher CPU and elapsed time (204 ms and 284 ms), indicating a slower operation. Execution 2 (with TABLOCK) performed better, completing in 125 ms CPU time and 157 ms elapsed time, making the TABLOCK version more efficient in this case.

Considerations When Using TABLOCK

While the TABLOCK hint can greatly improve performance, it’s important to be aware of some considerations:

  • Table Locking: The TABLOCK hint locks the entire table for the duration of the operation. This means that other transactions cannot access the table until the INSERT operation is complete, so be sure that this behavior aligns with your application’s requirements.
  • Transaction Log Growth: Although TABLOCK reduces the amount of logging, it still logs certain details of the operation. If you’re inserting a massive amount of data, you may need to monitor transaction log growth and ensure that you have enough log space available.
  • Not Suitable for OLTP Workloads: The TABLOCK hint is more suited to batch operations or bulk-loading scenarios. It may not be appropriate for transactional systems that require frequent concurrent access to the table.
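To keep an eye on log growth during a large load, the standard monitoring commands can be used; a minimal sketch:

```sql
-- Report log size and percent used for every database on the instance.
DBCC SQLPERF(LOGSPACE);

-- Or, for just the current database:
SELECT total_log_size_in_bytes / 1048576.0 AS log_size_mb,
       used_log_space_in_bytes / 1048576.0 AS used_log_mb,
       used_log_space_in_percent
FROM sys.dm_db_log_space_usage;
```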

Conclusion
If you are working with large datasets and want to speed up your INSERT operations in SQL Server, the TABLOCK hint can be a game-changer. By reducing logging overhead and enabling parallel insertions, it helps improve performance and can significantly reduce the time it takes to load data.

HostForLIFE.eu SQL Server 2022 Hosting
HostForLIFE.eu is a European Windows Hosting Provider which focuses on the Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.



SQL Server Hosting - HostForLIFE :: Utilize Sparse Columns to Reduce NULL Storage in SQL Server

clock February 6, 2025 06:09 by author Peter

SQL Server offers sparse columns to store NULL values efficiently with minimal storage overhead. Sparse columns are the best option when NULL values appear in a sizable portion of a column's rows.

1. What Are Sparse Columns?
Sparse columns are ordinary columns optimized for NULL storage. When a column is declared as SPARSE, it does not consume storage for NULL values, making them beneficial when a large number of rows have NULLs.

Benefits of sparse columns:
  • Saves storage by not allocating space for NULL values.
  • Reduces I/O operations and improves performance for sparse datasets.
  • Supports filtered indexes for better query performance.

Drawbacks of sparse columns:
  • Non-NULL values take up more space than in regular columns.
  • Cannot be used with:
    • text, ntext, image, and timestamp data types
    • User-defined data types
    • Computed columns
    • Default values (unless explicitly specified in an insert)
    • CHECK constraints (except NULL constraints)

2. Declaring Sparse Columns
To use sparse columns, declare them with the SPARSE attribute.
Example. Creating a Table with Sparse Columns.
CREATE TABLE Employees (
EmployeeID INT PRIMARY KEY,
Name VARCHAR(100) NOT NULL,
PhoneNumber VARCHAR(20) SPARSE NULL,
Address NVARCHAR(255) SPARSE NULL
);

PhoneNumber and Address will not consume storage when NULL.

When storing non-NULL values, they use more storage than regular columns.

3. Storage Considerations
The impact on storage depends on the data type.

  • For NULL values: Storage savings are significant.
  • For Non-NULL values: Sparse columns require an additional 4 bytes.

When to Use Sparse Columns?

  • When at least 20-40% of values are NULL, sparse columns save space.
  • If NULLs are less frequent, regular columns are more efficient.

Example of storage cost for the INT data type: a regular INT column uses 4 bytes whether or not the value is NULL, while a sparse INT column uses 0 bytes for NULL and 8 bytes for a non-NULL value.

4. Using Sparse Columns with Column Sets
SQL Server provides Column Sets to handle sparse columns dynamically.

Example. Using Column Set for Dynamic Queries.
CREATE TABLE EmployeeData (
    EmployeeID INT PRIMARY KEY,
    Name VARCHAR(100) NOT NULL,
    PhoneNumber VARCHAR(20) SPARSE NULL,
    Address NVARCHAR(255) SPARSE NULL,
    AdditionalData XML COLUMN_SET FOR ALL_SPARSE_COLUMNS
);

AdditionalData (XML) aggregates all sparse column values into a single XML column dynamically.

Retrieving Data Using Column Set

SELECT EmployeeID, AdditionalData FROM EmployeeData;

The Column Set simplifies handling dynamic attributes.

5. Querying Sparse Columns Efficiently
Use Filtered Indexes to optimize queries on sparse columns.

Example. Creating a Filtered Index.
CREATE INDEX IX_Employees_PhoneNumber
ON Employees(PhoneNumber)
WHERE PhoneNumber IS NOT NULL;

This improves query performance for non-NULL sparse column searches.

Example. Query with Index Utilization.
SELECT Name, PhoneNumber
FROM Employees
WHERE PhoneNumber IS NOT NULL;


The filtered index ensures efficient lookups.

6. Checking Sparse Column Storage Space
You can check which columns are declared SPARSE by querying sys.columns; actual space usage can be measured with sp_spaceused or sys.dm_db_partition_stats.

List the sparse columns.
SELECT name, is_sparse, max_length
FROM sys.columns
WHERE object_id = OBJECT_ID('Employees');


This shows which columns are SPARSE.

7. When NOT to Use Sparse Columns

Avoid sparse columns when:

  • NULL values are less than 20-40% of total rows.
  • The column is part of frequent aggregations.
  • Additional 4-byte overhead is unacceptable.

8. Test Tables with and without Sparse Columns
Create two tables as below:

Add random data in both tables.

Check Table space.
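The screenshots for this test are not reproduced here; as a rough sketch of the same experiment (table names, row count, and NULL ratio are illustrative):

```sql
-- Two tables that differ only in whether Val is SPARSE.
CREATE TABLE dbo.WithSparse    (Id INT IDENTITY PRIMARY KEY, Val INT SPARSE NULL);
CREATE TABLE dbo.WithoutSparse (Id INT IDENTITY PRIMARY KEY, Val INT NULL);

-- Insert 100,000 rows, roughly 90% NULL, into each table.
INSERT INTO dbo.WithSparse (Val)
SELECT TOP (100000)
       CASE WHEN ABS(CHECKSUM(NEWID())) % 10 = 0 THEN 1 ELSE NULL END
FROM sys.all_columns AS a CROSS JOIN sys.all_columns AS b;

INSERT INTO dbo.WithoutSparse (Val)
SELECT TOP (100000)
       CASE WHEN ABS(CHECKSUM(NEWID())) % 10 = 0 THEN 1 ELSE NULL END
FROM sys.all_columns AS a CROSS JOIN sys.all_columns AS b;

-- Compare reserved and data space between the two tables.
EXEC sp_spaceused 'dbo.WithSparse';
EXEC sp_spaceused 'dbo.WithoutSparse';
```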


In SQL Server, sparse columns are an effective technique to maximize NULL storage, minimize space consumption, and enhance performance. They function best when a large portion of the values are NULL and can be effectively queried with column sets and filtered indexes.




SQL Server Hosting - HostForLIFE :: Using Derived Column Task in SQL Server Integration Services

clock January 21, 2025 08:44 by author Peter

In this article, we will learn how to create derived columns. Derived column task in SSIS is used to,

  • Create new column
  • Update existing columns
  • Merge different columns into one column. For example, businesses want to concatenate first name, middle name, and last name to make it a full name.

Now let’s understand this with an example.
We will use a database that has already been loaded into SQL Server to demonstrate the derived column transformation.

Now let’s go to SSMS and see how the table looks right now.

So here let’s say the business wants to merge the first name, middle name, and last name of the customers into one column; to perform this operation we need to use the Derived Column task. Now let’s go to the Data Flow Task in SSIS and perform the steps below.

Step 1. Create OLE DB Source connection – we will add OLE DB Source and Derived Column Task in the Data Flow Task and establish a source connection. We can see this in the screenshot below.

Step 2. Derived Column Transformation – To create new column values, we perform a derived column transformation by applying expressions. We will go to the Derived Column Editor to add derived column names and specify expressions to create new column values.
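For this scenario, the expression for the new CustomerName column might look like the following (FirstName, MiddleName, and LastName are assumed source column names; this is SSIS expression syntax, where the ?: conditional handles a NULL middle name):

```
FirstName + " " + (ISNULL(MiddleName) ? "" : MiddleName + " ") + LastName
```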

Step 3. Now click OK and create an OLE DB Destination connection – We will establish a new OLE DB Destination connection in the editor to push the new data. Here we will give the connection manager name and the new table name as CustomerNameDerivedCol.

Step 4. Now click OK and go to Mappings to see the available input columns and available destination columns; here you can notice that the same column names are mapped.

Step 5. Now the destination connection is established and the SSIS package is executed successfully.

Step 6. Now let’s verify this in SQL Server Management Studio (SSMS). In the below screenshot at the end of the result, we can see the CustomerName column. That’s how we derive a new column.

Summary
In this article, you have learned how to create the derived column in SSIS, hope you liked it. Looking forward to your comments and suggestions in the section below.




SQL Server Hosting - HostForLIFE :: Transaction in SQL Server

clock January 15, 2025 07:40 by author Peter

In SQL Server, a transaction is a sequential set of actions (such as statements or queries) carried out as a single unit of work. Data in a database may be read, written, updated, or deleted throughout these actions. Transactions adhere to the ACID properties, which guarantee data integrity.

  • Atomicity: Ensures that a transaction is treated as a single unit. Either all operations in the transaction are completed successfully, or none are applied, maintaining the "all-or-nothing" principle.
  • Consistency: Guarantees that a transaction transforms the database from one valid state to another, adhering to all defined rules, constraints, and relationships.
  • Isolation: Ensures that concurrent transactions do not interfere with each other, maintaining data integrity as if transactions were executed sequentially.
  • Durability: Ensures that once a transaction is committed, its changes are permanently recorded in the database, even in the event of a system failure.

Transaction Control
The following are the commands used to control transactions.

  • BEGIN TRANSACTION: Marks the start of a transaction. All subsequent operations will be part of this transaction until it is committed or rolled back.
  • COMMIT: Saves all the changes made during the transaction permanently to the database. Once committed, the changes cannot be undone.
  • ROLLBACK: Reverts all changes made during the transaction to their state at the start of the transaction, effectively canceling the transaction.
  • SAVEPOINT: Creates a checkpoint within a transaction. This allows rolling back a transaction to a specific point without undoing the entire transaction.
  • RELEASE SAVEPOINT: Deletes a previously defined SAVEPOINT. Once released, the savepoint can no longer be used as a rollback target.
  • SET TRANSACTION: Configures a transaction with specific properties, such as setting it to read-only or read/write, or associating it with a specific rollback segment.

Types of Transactions

  • Implicit Transactions
    • Automatically initiated by the database system when specific commands (e.g., INSERT, DELETE, UPDATE) are executed.
    • The transaction remains active until explicitly committed or rolled back by the user.
  • Explicit Transactions
    • Manually initiated and controlled by the user.
    • Typically defined using BEGIN TRANSACTION, followed by COMMIT or ROLLBACK to either save or undo changes.
  • Autocommit Transactions
    • The default transaction mode in most SQL systems.
    • Each individual SQL statement is automatically committed if it executes successfully. No explicit commands are needed to commit or rollback.
  • Savepoints
    • Checkpoints within a transaction that allow partial rollbacks.
    • Useful for rolling back a specific part of a transaction without undoing the entire transaction.
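As a brief sketch of implicit transaction mode (reusing the EmpSalary_int table from the examples in this article):

```sql
-- Each DML statement now opens a transaction automatically;
-- it stays open until explicitly committed or rolled back.
SET IMPLICIT_TRANSACTIONS ON;

UPDATE EmpSalary_int
SET Salary = Salary + 50
WHERE EmpID = 1;      -- an implicit transaction is now active

COMMIT;               -- required to make the change permanent

SET IMPLICIT_TRANSACTIONS OFF;  -- back to autocommit mode
```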

Basic Transaction Syntax
Explicit Transaction Example

BEGIN TRANSACTION;

-- Deduct from one account
UPDATE EmpSalary_int
SET Salary = Salary - 100
WHERE EmpID = 1;

-- Add to another account
UPDATE EmpSalary_int
SET Salary = Salary + 100
WHERE EmpID = 2;

-- Commit the transaction
COMMIT;

Using ROLLBACK
BEGIN TRANSACTION;

UPDATE products
SET stock_quantity = stock_quantity - 10
WHERE product_id = 5;

-- Simulating an error
IF @@ERROR <> 0
BEGIN
    ROLLBACK;
    PRINT 'Transaction failed and was rolled back.';
END
ELSE
BEGIN
    COMMIT;
    PRINT 'Transaction completed successfully.';
END

Failed transaction

Savepoints for Partial Rollbacks
BEGIN TRANSACTION;

-- Step 1
INSERT INTO orders (order_id, customer_id, order_date)
VALUES (101, 1, GETDATE());

SAVE TRANSACTION SavePoint1;

-- Step 2
INSERT INTO order_details (order_id, product_id, quantity)
VALUES (101, 2, 5);

-- Rollback to SavePoint1 if needed
ROLLBACK TRANSACTION SavePoint1;

-- Commit remaining operations
COMMIT;


Here, we can see in the second table that the order_details row was not saved, because we rolled the transaction back to SavePoint1 before committing.

TRY...CATCH Example
BEGIN TRY
    BEGIN TRANSACTION;

    UPDATE EmpSalary_int
    SET Salary = Salary - 100
    WHERE EmpID = 1;

    UPDATE EmpSalary_int
    SET Salary = Salary + 'null'
    WHERE EmpID= 2;

    COMMIT;
    PRINT 'Transaction completed successfully.';
END TRY
BEGIN CATCH
    ROLLBACK;
    PRINT 'An error occurred. Transaction rolled back.';
END CATCH;

Failed transaction

Conclusion
In SQL, transactions are sequences of operations performed as a single logical unit of work, ensuring data consistency and integrity. A transaction follows the ACID properties: Atomicity (all-or-nothing execution), Consistency (ensures data validity), Isolation (independence of concurrent transactions), and Durability (changes persist after completion). Transactions are crucial for managing database operations reliably and are typically controlled with commands like BEGIN, COMMIT, and ROLLBACK.




SQL Server Hosting - HostForLIFE :: Utilizing SQL Server Integration Services' Derived Column Task

clock January 9, 2025 07:16 by author Peter

In this article, we will learn how to create derived columns. Derived column task in SSIS is used to,

  • Create new column
  • Update existing columns
  • Merge different columns into one column. For example, businesses want to concatenate first name, middle name, and last name to make it a full name.

Now let’s understand this with an example. We will use a database that has already been loaded into SQL Server to demonstrate the derived column transformation. Now let’s go to SSMS and see how the table looks right now.

So here let’s say the business wants to merge the first name, middle name, and last name of the customers into one column; to perform this operation we need to use the Derived Column task. Now let’s go to the Data Flow Task in SSIS and perform the steps below.

Step 1. Create OLE DB Source connection – we will add OLE DB Source and Derived Column Task in the Data Flow Task and establish a source connection. We can see this in the screenshot below.

Step 2. Derived Column Transformation – To create new column values, we perform a derived column transformation by applying expressions. We will go to the Derived Column Editor to add derived column names and specify expressions to create new column values.

Step 3. Now click OK and create an OLE DB Destination connection – We will establish a new OLE DB Destination connection in the editor to push the new data. Here we will give the connection manager name and the new table name as CustomerNameDerivedCol.

Step 4. Now click OK and go to Mappings to see the available input columns and available destination columns; here you can notice that the same column names are mapped.

Step 5. Now the destination connection is established and the SSIS package is executed successfully.

Step 6. Now let’s verify this in SQL Server Management Studio (SSMS). In the below screenshot at the end of the result, we can see the CustomerName column. That’s how we derive a new column.

Summary
In this article, you have learned how to create the derived column in SSIS, hope you liked it.




SQL Server Hosting - HostForLIFE :: Using SSIS to Automate Table Creation

clock December 19, 2024 07:32 by author Peter

In this article, we will learn how to create a database table in SQL Server using the SSIS package.

As we know, data is growing massively, and we need to have the capability to extract data from numerous sources such as Oracle, Excel files, and SQL Server databases. SSIS provides integration and workflow solutions through which we can connect with multiple sources. We can build inflow and outflow of data and build pipelines through SSIS packages.

In order to work with SSIS, you need to install SQL Server Data Tools (SSDT) and the Integration Services component. I have used the SSDT 2022 version.

Now, let’s explore more and learn how we can create a table in SQL Server using the SSIS package.

We need to follow below steps:
Step 1. Firstly, create a new SSIS package by right-clicking in Solution Explorer. A new package, Package1.dtsx, is created, which we can see in the snippet below.

Step 2. Now go to the SSIS toolbox, select Execute SQL Task, and drag and drop it to the new SSIS package. Below is the snippet.

Step 3. Now, establish a new connection and make sure to test the connection.

Step 4. Now consider the script below and put this in the SQLStatement section in the Task Editor.
CREATE TABLE employee_details (
    emp_id INT,
    emp_name VARCHAR(50),
    salary INT
);

INSERT INTO employee_details VALUES (101, 'Christina', 90000);
INSERT INTO employee_details VALUES (102, 'Aman', 40000);
INSERT INTO employee_details VALUES (103, 'James', 45000);
INSERT INTO employee_details VALUES (104, 'Jack', 80000);
INSERT INTO employee_details VALUES (105, 'Jamal', 87000);
INSERT INTO employee_details VALUES (106, 'Lisa', 82000);
INSERT INTO employee_details VALUES (107, 'Karan', 45000);

After establishing the connection and entering the script, we can see below the red mark is gone.

Step 5. Now, execute the package by hitting the start button.

We can see in the above screenshot that the SQL task was executed successfully.

Step 6. Now, we will see in the SQL Server using SSMS that the employee_details table is created. Below is the screenshot.

Summary
You have learned today how to create a new table using an SSIS task. The benefit of using SSIS is automation, since we can run packages either on demand or using the scheduler, and we can extend this further with project parameters. I hope you liked the article. Please let me know your feedback/suggestions in the comments section below.




SQL Server Hosting - HostForLIFE :: Example of a SQLite Database with.NET

clock December 11, 2024 07:35 by author Peter

SQLite is a file-based relational database management system that is lightweight and ideal for small web applications, development and testing, and mobile applications such as iOS and Android apps. It is a serverless, self-contained database that works nicely with .NET applications.

To use SQLite in your .NET application, install the "System.Data.SQLite" package from NuGet or run the command below in the Package Manager Console.
Install-Package System.Data.SQLite

The SQLiteConnection class is used to build connection strings, create the SQLite database file, run queries, and manage connections to the SQLite database engine. Here is a create, read, update, delete (CRUD) example in C#.
using System;
using System.Data.SQLite;

class Program
{
    static void Main()
    {
        // Create a new database file
        SQLiteConnection.CreateFile("ExampleDatabase.sqlite");

        // Connect to the database
        using (var connection = new SQLiteConnection("Data Source=ExampleDatabase.sqlite;Version=3;"))
        {
            connection.Open();

            // Create a table
            string createTableQuery = "CREATE TABLE IF NOT EXISTS Users (Id INTEGER PRIMARY KEY, Name TEXT, Age INTEGER)";
            using (var command = new SQLiteCommand(createTableQuery, connection))
            {
                command.ExecuteNonQuery();
            }

            // Insert data
            string insertQuery = "INSERT INTO Users (Name, Age) VALUES ('Alice', 30)";
            using (var command = new SQLiteCommand(insertQuery, connection))
            {
                command.ExecuteNonQuery();
            }

            // Read data
            string selectQuery = "SELECT * FROM Users";
            using (var command = new SQLiteCommand(selectQuery, connection))
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine($"Id: {reader["Id"]}, Name: {reader["Name"]}, Age: {reader["Age"]}");
                }
            }

            // Update data
            string updateQuery = "UPDATE Users SET Age = 31 WHERE Name = 'Alice'";
            using (var command = new SQLiteCommand(updateQuery, connection))
            {
                command.ExecuteNonQuery();
            }

            // Delete data
            string deleteQuery = "DELETE FROM Users WHERE Name = 'Alice'";
            using (var command = new SQLiteCommand(deleteQuery, connection))
            {
                command.ExecuteNonQuery();
            }
        }
    }
}


After executing the code, the SQLite database file is created in the build output folder. The database and its tables can then be inspected with an online SQLite viewer.





SQL Server Hosting - HostForLIFE :: JSON Data Handling in SQL Server

clock December 4, 2024 08:28 by author Peter

Managing a table with a very large number of columns is difficult in real-world projects; in such cases, we can store the data as JSON in a SQL table column and manage it there.

Example
Suppose some application settings need to be configured from the database, and the settings are complex and keep changing. Instead of creating multiple columns for those settings, we can combine all settings or configurations into one JSON document and save it in a single SQL table column.

SQL Server provides support for handling JSON data, allowing you to store JSON in columns and perform operations on it. This can be particularly useful when dealing with semi-structured or unstructured data. Here are some examples to illustrate how to read JSON from SQL Server columns.
To handle JSON data in SQL Server

  • Store JSON in NVARCHAR(MAX) or VARCHAR(MAX) columns.
  • Insert JSON using regular INSERT statements.
  • Read and parse JSON using JSON_VALUE, JSON_QUERY, and OPENJSON:
    • JSON_VALUE: extracts a scalar value from JSON.
    • JSON_QUERY: extracts an object or array.
    • OPENJSON: parses JSON and returns a table of key-value pairs.
  • Modify JSON using JSON_MODIFY to update or add properties.
  • Aggregate data into JSON using the FOR JSON clause.
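The list above mentions JSON_MODIFY; a short sketch against the ProductsData table created below:

```sql
-- Change an existing scalar property inside the stored JSON.
UPDATE ProductsData
SET ProductDetails = JSON_MODIFY(ProductDetails, '$.Specifications.RAM', '32GB')
WHERE ProductID = 1;

-- Add a new property; JSON_MODIFY creates it if the path does not exist.
UPDATE ProductsData
SET ProductDetails = JSON_MODIFY(ProductDetails, '$.Warranty', '2 years')
WHERE ProductID = 1;
```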

Storing JSON Data
JSON data is typically stored in columns of type NVARCHAR(MAX) or VARCHAR(MAX).
Create table
CREATE TABLE ProductsData (
    ProductID INT PRIMARY KEY,
    ProductName NVARCHAR(50),
    ProductDetails NVARCHAR(MAX) -- JSON data stored in this column
);


Insert JSON on ProductDetails column
INSERT INTO ProductsData (ProductID, ProductName, ProductDetails)
VALUES
    (1, 'Laptop',
        '{
            "Brand": "Dell",
            "Specifications": {
                "Processor": "Intel i7",
                "RAM": "16GB",
                "Storage": "512GB SSD"
            }
        }'),
    (2, 'Smartphone',
        '{
            "Brand": "Apple",
            "Specifications": {
                "Model": "iPhone 12",
                "Storage": "128GB"
            }
        }');


Simple Select
SELECT
    ProductID,
    ProductName,
    ProductDetails
FROM
    ProductsData;


Reading JSON Data
You can read JSON data from the table and parse it using built-in functions such as JSON_VALUE, JSON_QUERY, and OPENJSON.

Read SQL JSON Column
SELECT
    ProductID,
    ProductName,
    JSON_VALUE(ProductDetails, '$.Brand') AS Brand,
    JSON_VALUE(ProductDetails, '$.Specifications.Processor') AS Processor
FROM
    ProductsData;

Extract Nested Objects or Arrays with JSON_QUERY
SELECT
    ProductID,
    ProductName,
    JSON_QUERY(ProductDetails, '$.Specifications') AS Specifications
FROM
    ProductsData;

Expand JSON Arrays with OPENJSON

-- Insert data with JSON array
INSERT INTO ProductsData (ProductID, ProductName, ProductDetails)
VALUES
(3, 'Tablet',
'{
    "Brand": "Samsung",
    "Models": [
        {"Model": "Galaxy Tab S7", "Storage": "256GB"},
        {"Model": "Galaxy Tab S6", "Storage": "128GB"}
    ]
}');

-- Select data and expand JSON array
SELECT
    ProductID,
    ProductName,
    Models.Model,
    Models.Storage
FROM
    ProductsData
CROSS APPLY OPENJSON(ProductDetails, '$.Models')
WITH (
    Model NVARCHAR(50) '$.Model',
    Storage NVARCHAR(50) '$.Storage'
) AS Models;

Aggregating Data into JSON Format
SQL Server allows you to convert query results into JSON format using the FOR JSON clause.

SELECT
    ProductID,
    ProductName,
    ProductDetails
FROM
    ProductsData
FOR JSON PATH;

Conclusion
We have learned ways to read JSON from SQL Server columns. Thanks!




SQL Server Hosting - HostForLIFE :: Understanding SQL Server Sequence

clock November 29, 2024 06:53 by author Peter

The SEQUENCE object in SQL Server is an effective tool for generating unique numeric values in a defined order. It was first introduced in SQL Server 2012 to offer functionality similar to IDENTITY columns with more flexibility. Because a SEQUENCE is independent of any table, developers can use it across multiple tables and scenarios. This article covers the fundamentals of SEQUENCE, compares it to alternatives like IDENTITY, and shows a practical example of its use.

What is SEQUENCE in SQL Server?
A SEQUENCE is a user-defined schema-bound object that generates a sequence of numeric values. Unlike IDENTITY, which is tied to a specific table column, SEQUENCE exists independently and can be used across multiple tables or queries.

Key Features of SEQUENCE

  • Independent Object: Not tied to a single table or column.
  • Customizable: Allows control over the starting value, increment step, minimum, maximum, and cycle behavior.
  • Reusable: Can be used in multiple tables or even in calculations.
  • Flexible Usage: Values can be retrieved using the NEXT VALUE FOR function.

Syntax
CREATE SEQUENCE schema_name.sequence_name
    AS data_type
    START WITH <initial_value>
    INCREMENT BY <step>
    [MINVALUE <min_value>]
    [MAXVALUE <max_value>]
    [CYCLE | NO CYCLE]
    [CACHE <cache_size> | NO CACHE]

-- START WITH: Specifies the starting value.
-- INCREMENT BY: Specifies the increment between each value (positive or negative).
-- MINVALUE/MAXVALUE: Specifies the minimum and maximum allowed values.
-- CYCLE: Restarts the sequence when it reaches the maximum or minimum value.
-- CACHE: Improves performance by caching sequence values in memory.
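As an illustration of the optional clauses above, a sequence that cycles through a fixed range might look like this (a sketch only; the name demo_cycle_seq is arbitrary):

CREATE SEQUENCE dbo.demo_cycle_seq
    AS INT
    START WITH 1
    INCREMENT BY 1
    MINVALUE 1
    MAXVALUE 100
    CYCLE          -- after 100, the next value wraps back to MINVALUE (1)
    CACHE 10;      -- keep a block of 10 values in memory to reduce overhead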


Example
We will create a use case for managing order numbers in an e-commerce system. The goal is to assign unique order IDs to transactions using a SEQUENCE object.

Step 1. We will create a SEQUENCE object named order_sequence to generate unique order IDs starting from 1000 and incrementing by 10.
USE Hostforlife
GO

CREATE SEQUENCE dbo.order_sequence
    AS INT
    START WITH 1000
    INCREMENT BY 10

Step 2. Next, we will create a table customer_orders to store customer order details. The order_id column will use the SEQUENCE object to generate unique IDs automatically.
USE Hostforlife
GO

CREATE TABLE dbo.customer_orders (
    order_id INT NOT NULL DEFAULT (NEXT VALUE FOR dbo.order_sequence),
    customer_name VARCHAR(100) NOT NULL,
    product_name VARCHAR(100) NOT NULL,
    order_date DATE DEFAULT GETDATE()
)


Step 3. Insert a few sample records into the customer_orders table. The order_id column will automatically get its value from the SEQUENCE object.
USE Hostforlife
GO

INSERT INTO dbo.customer_orders (customer_name, product_name)
VALUES
    ('Peter', 'Smartphone'),
    ('Leon', 'Laptop'),
    ('Michael', 'Tablet');


Step 4. Retrieve the data to see the order_id values generated by the SEQUENCE.
USE Hostforlife
GO

SELECT * FROM dbo.customer_orders


Step 5. Use the sys.sequences catalog view to check the properties of the SEQUENCE object. This query will provide details such as the current value, increment, and start value of the SEQUENCE.

SELECT * FROM sys.sequences WHERE name = 'order_sequence';

Output
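To read just the most relevant properties, you can also select specific columns from the sys.sequences catalog view:

SELECT
    name,
    start_value,
    increment,
    current_value,
    is_cycling,
    cache_size
FROM sys.sequences
WHERE name = 'order_sequence';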

Step 6. When more records are inserted, the SEQUENCE continues generating unique values.
USE Hostforlife
GO

INSERT INTO dbo.customer_orders (customer_name, product_name)
VALUES ('John', 'Headphones');


Advantages

  • Greater Control: SEQUENCE provides more control compared to IDENTITY, such as restarting, cycling, and specifying custom increments.
  • Reusability: It can be used across multiple tables or in ad hoc queries.
  • Predictability: Developers can predict the next value without inserting a record (unlike IDENTITY).
  • Performance: Caching values improves performance for high-volume applications.
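The predictability point can be demonstrated directly: NEXT VALUE FOR can be called in a plain SELECT, generating the next number without any INSERT (note that the value is consumed, so it will not be handed out again):

-- Fetch the next value in the sequence outside of any INSERT
SELECT NEXT VALUE FOR dbo.order_sequence AS NextOrderId;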

SEQUENCE vs IDENTITY

Feature        | SEQUENCE                     | IDENTITY
Scope          | Independent object           | Table-specific
Reusability    | Can be reused across tables  | Limited to one table
Customization  | Highly customizable          | Limited options
Predictability | Values can be previewed      | Values are generated only on insert
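The reusability difference is easy to see in practice: the same sequence can feed several tables, which an IDENTITY column cannot do. A sketch using a hypothetical customer_invoices table alongside customer_orders:

CREATE TABLE dbo.customer_invoices (
    invoice_id    INT NOT NULL DEFAULT (NEXT VALUE FOR dbo.order_sequence),
    invoice_total DECIMAL(10, 2) NOT NULL
);
-- order_id and invoice_id now draw from one shared number range,
-- so the generated values never collide across the two tables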

Conclusion
The SEQUENCE object in SQL Server is a versatile tool for generating sequential numbers, offering greater flexibility and control than IDENTITY. Whether you're building a multi-table system or need precise control over numbering, SEQUENCE is a valuable addition to your SQL Server toolkit. By leveraging SEQUENCE, you can design robust, scalable, and reusable numbering systems tailored to your application’s needs.




SQL Server Hosting - HostForLIFE :: User-Defined Functions in SQL Server

clock November 22, 2024 07:42 by author Peter

SQL Server has two types of functions: system (built-in) functions and user-defined functions. In today’s article, we shall cover User-Defined Functions (UDFs). UDFs are custom functions that developers create to encapsulate their own logic. SQL Server supports two types of User-Defined Functions.

  • Table-Valued Functions
  • Scalar-Valued Functions

Now, let’s look at how to create them. For the examples below, we will use a Products table with product_name, unit_price, and product_quantity columns.

Table-Valued Function
A function that returns a table as its result is called a table-valued function.
CREATE FUNCTION [dbo].[Function_Name]()
RETURNS TABLE
AS
RETURN (
    Statement
)


Now, let's create a Table-Valued function.
CREATE FUNCTION [dbo].[udf_ProductDetailsQuantitywise] (@product_quantity INT)
RETURNS TABLE
AS
RETURN (
    SELECT *
    FROM Products
    WHERE product_quantity >= @product_quantity
);

Below is the output.

The table-valued function has been created. It takes an input parameter, @product_quantity, and returns a table as its result. Now, let’s call the function [dbo].[udf_ProductDetailsQuantitywise] and pass 12 as the input parameter.
SELECT * FROM [dbo].[udf_ProductDetailsQuantitywise] (12)

We can see that the [dbo].[udf_ProductDetailsQuantitywise] function returns the products filtered by quantity. Hence, a table-valued function is useful for returning a table based on business logic.
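Because a table-valued function returns a rowset, it can be used anywhere a table can, including joins and ORDER BY clauses. A sketch (assuming the Products columns from above):

-- The TVF result can be queried like any table or view
SELECT p.product_name, p.product_quantity
FROM [dbo].[udf_ProductDetailsQuantitywise](12) AS p
ORDER BY p.product_quantity DESC;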

Scalar-Valued Function

Unlike a table-valued function, this function returns a single scalar value based on the input parameters passed.
CREATE FUNCTION [dbo].[Function_Name] (ParameterList)
RETURNS DataType
AS
BEGIN
    RETURN Return_DataType
END

Now, let's create a Scalar-Valued function.
CREATE FUNCTION [dbo].[udf_ProductPriceTotal]
    (@unit_price INT, @product_quantity INT)
RETURNS INT
AS
BEGIN
    RETURN @unit_price * @product_quantity
END


Below is the output.

The scalar-valued function has been created. It takes two input parameters, @unit_price and @product_quantity, and returns a single scalar value as a result.

Now, let’s call the function using [dbo].[udf_ProductPriceTotal].
SELECT
    product_name,
    unit_price,
    product_quantity,
    [dbo].[udf_ProductPriceTotal](unit_price, product_quantity) AS TotalPrice
FROM Products;


We can see that the udf_ProductPriceTotal function computes the total price of each product in the table. Hence, a scalar-valued function is useful for returning a single scalar value.

Summary
UDFs are a great way to encapsulate custom logic for specific needs. In this article, we covered how to create both types with simple examples. I hope you found it useful, and I look forward to your comments and suggestions.



