European Windows 2012 Hosting BLOG

BLOG about Windows 2012 Hosting and SQL 2012 Hosting - Dedicated to European Windows Hosting Customer

SQL Server Hosting - HostForLIFE :: Locate SQL Server's Most Used Queries and Stored Procedures

clock September 8, 2025 07:28 by author Peter

When a SQL Server feels “slow,” most of the damage typically comes from just a few statements. Dynamic Management Views (DMVs) are the fastest way to find them. The copy-paste scripts below uncover the worst offenders by CPU, IO, duration, and “what’s running right now,” along with instructions on how to read the results and what to do next.

    Requirements: VIEW SERVER STATE permission. Numbers like total_worker_time and total_elapsed_time are in microseconds unless noted.

What “high usage” means (pick the lens)

  • CPU: how much processor time a query uses.
  • IO: logical/physical reads and writes (memory and disk pressure).
  • Duration: how long a query takes end-to-end.
  • Currently running: live workload that may be blocking others.

You’ll use a different script depending on which lens you want.

Top queries by CPU

-- Top 20 queries by total CPU since the plan was cached
SELECT TOP 20
    DB_NAME(st.dbid)                                        AS database_name,
    qs.execution_count,
    qs.total_worker_time/1000.0                             AS total_cpu_ms,
    (qs.total_worker_time/1000.0)/NULLIF(qs.execution_count,0) AS avg_cpu_ms,
    (qs.total_elapsed_time/1000.0)/NULLIF(qs.execution_count,0) AS avg_duration_ms,
    qs.total_logical_reads + qs.total_physical_reads        AS total_reads,
    SUBSTRING(st.text, (qs.statement_start_offset/2)+1,
        ((CASE qs.statement_end_offset
            WHEN -1 THEN DATALENGTH(st.text)
            ELSE qs.statement_end_offset END
          - qs.statement_start_offset)/2) + 1)              AS query_text,
    qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
ORDER BY qs.total_worker_time DESC;

Tip: Add WHERE DB_NAME(st.dbid) = 'YourDbName' if you only care about one database.

Top queries by IO (reads/writes)

-- Top 20 by total reads; add writes if you care about heavy DML
SELECT TOP 20
    DB_NAME(st.dbid)                                        AS database_name,
    qs.execution_count,
    (qs.total_logical_reads + qs.total_physical_reads)      AS total_reads,
    (qs.total_logical_writes + qs.total_physical_writes)    AS total_writes,
    (qs.total_logical_reads + qs.total_physical_reads) / NULLIF(qs.execution_count,0) AS avg_reads,
    (qs.total_logical_writes + qs.total_physical_writes) / NULLIF(qs.execution_count,0) AS avg_writes,
    (qs.total_elapsed_time/1000.0)/NULLIF(qs.execution_count,0) AS avg_duration_ms,
    SUBSTRING(st.text, (qs.statement_start_offset/2)+1,
        ((CASE qs.statement_end_offset
            WHEN -1 THEN DATALENGTH(st.text)
            ELSE qs.statement_end_offset END
          - qs.statement_start_offset)/2) + 1)              AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY total_reads DESC;


Top queries by average duration
-- Queries that are slow per execution (not just popular)
SELECT TOP 20
    DB_NAME(st.dbid)                                        AS database_name,
    qs.execution_count,
    (qs.total_elapsed_time/1000.0)                          AS total_duration_ms,
    (qs.total_elapsed_time/1000.0)/NULLIF(qs.execution_count,0) AS avg_duration_ms,
    (qs.total_worker_time/1000.0)/NULLIF(qs.execution_count,0)  AS avg_cpu_ms,
    (qs.total_logical_reads + qs.total_physical_reads) / NULLIF(qs.execution_count,0) AS avg_reads,
    SUBSTRING(st.text, (qs.statement_start_offset/2)+1,
        ((CASE qs.statement_end_offset
            WHEN -1 THEN DATALENGTH(st.text)
            ELSE qs.statement_end_offset END
          - qs.statement_start_offset)/2) + 1)              AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE qs.execution_count > 0
ORDER BY avg_duration_ms DESC;

Stored procedures that hit the server hardest
Use sys.dm_exec_procedure_stats to get procedure-level rollups (cleaner than trying to stitch statements together).
-- Procedures by total CPU
SELECT TOP 20
    DB_NAME(ps.database_id)                                 AS database_name,
    OBJECT_SCHEMA_NAME(ps.object_id, ps.database_id)        AS schema_name,
    OBJECT_NAME(ps.object_id, ps.database_id)               AS procedure_name,
    ps.execution_count,
    ps.total_worker_time/1000.0                             AS total_cpu_ms,
    (ps.total_worker_time/1000.0)/NULLIF(ps.execution_count,0) AS avg_cpu_ms,
    ps.last_execution_time
FROM sys.dm_exec_procedure_stats AS ps
WHERE ps.database_id > 4  -- skip system DBs; remove if you want them
ORDER BY ps.total_worker_time DESC;

-- Procedures by total reads (IO)
SELECT TOP 20
    DB_NAME(ps.database_id)                                 AS database_name,
    OBJECT_SCHEMA_NAME(ps.object_id, ps.database_id)        AS schema_name,
    OBJECT_NAME(ps.object_id, ps.database_id)               AS procedure_name,
    ps.execution_count,
    (ps.total_logical_reads + ps.total_physical_reads)      AS total_reads,
    ((ps.total_logical_reads + ps.total_physical_reads)/NULLIF(ps.execution_count,0)) AS avg_reads,
    ps.last_execution_time
FROM sys.dm_exec_procedure_stats AS ps
WHERE ps.database_id > 4
ORDER BY total_reads DESC;

-- Procedures by average duration
SELECT TOP 20
    DB_NAME(ps.database_id)                                 AS database_name,
    OBJECT_SCHEMA_NAME(ps.object_id, ps.database_id)        AS schema_name,
    OBJECT_NAME(ps.object_id, ps.database_id)               AS procedure_name,
    ps.execution_count,
    (ps.total_elapsed_time/1000.0)/NULLIF(ps.execution_count,0) AS avg_duration_ms,
    ps.last_execution_time
FROM sys.dm_exec_procedure_stats AS ps
WHERE ps.database_id > 4 AND ps.execution_count > 0
ORDER BY avg_duration_ms DESC;

What’s heavy right now (live view)?
-- Currently executing requests ordered by CPU time
SELECT
    r.session_id,
    r.status,
    DB_NAME(r.database_id)            AS database_name,
    r.cpu_time                        AS cpu_ms,         -- already in ms
    r.total_elapsed_time              AS elapsed_ms,     -- already in ms
    r.wait_type,
    r.wait_time,
    r.blocking_session_id,
    SUBSTRING(t.text, r.statement_start_offset/2 + 1,
        (CASE WHEN r.statement_end_offset = -1
              THEN DATALENGTH(t.text)
              ELSE r.statement_end_offset END - r.statement_start_offset)/2 + 1) AS running_statement
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id <> @@SPID
ORDER BY r.cpu_time DESC;


If you see blocking_session_id populated, chase that session and fix the blocker first.
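If you do see one, a quick way to pull the blocker's most recent statement is sys.dm_exec_input_buffer (SQL Server 2014 SP2 and later); a small sketch, with 123 standing in for the blocking_session_id you observed:

-- Inspect the blocking session's most recent statement
SELECT
    s.session_id,
    s.login_name,
    s.host_name,
    s.program_name,
    ib.event_info AS last_sql_text
FROM sys.dm_exec_sessions AS s
CROSS APPLY sys.dm_exec_input_buffer(s.session_id, NULL) AS ib
WHERE s.session_id = 123;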

Group similar statements together (query_hash)
Same query text with different literals can appear as separate rows. Aggregate by query_hash to see the true top offenders.
-- Roll up by query_hash to combine similar statements
WITH q AS (
    SELECT
        qs.query_hash,
        qs.execution_count,
        qs.total_worker_time,
        qs.total_elapsed_time,
        qs.total_logical_reads + qs.total_physical_reads AS total_reads
    FROM sys.dm_exec_query_stats qs
)
SELECT TOP 20
    query_hash,
    SUM(execution_count)                                  AS executions,
    SUM(total_worker_time)/1000.0                         AS total_cpu_ms,
    (SUM(total_worker_time)/1000.0)/NULLIF(SUM(execution_count),0) AS avg_cpu_ms,
    (SUM(total_elapsed_time)/1000.0)/NULLIF(SUM(execution_count),0) AS avg_duration_ms,
    SUM(total_reads)                                      AS total_reads
FROM q
GROUP BY query_hash
ORDER BY total_cpu_ms DESC;

Filters you’ll actually use
Add these lines to any query above as needed:
-- Only one DB
WHERE DB_NAME(st.dbid) = 'YourDbName'

-- Only statements executed in the last day (approx; uses last_execution_time)
WHERE qs.last_execution_time >= DATEADD(DAY, -1, SYSUTCDATETIME())

-- Exclude trivial one-off executions
AND qs.execution_count >= 5

Read the numbers the right way

  • High total + low average: popular query. Optimize for throughput (indexing, cached plan quality).
  • Low total + very high average: rare but slow. Optimize for latency (rewrite, avoid RBAR/scalar UDFs, better joins).
  • High duration but modest CPU/IO: usually blocking or waits. Check wait_type, blocking_session_id, and missing indexes that cause scans.
  • Metrics reset when plans get evicted or the instance restarts. Treat them as a rolling window, not forever history.

Quick wins to try after you find a culprit

  • Add the right index (covering where needed). Look at the actual plan’s missing index hints, then design a lean index yourself (don’t blindly accept 12-column monsters).
  • Kill implicit conversions (mismatched data types, e.g., NVARCHAR vs INT).
  • Replace SELECT * with exact columns (cuts reads).
  • Update statistics if they’re stale; consider WITH RECOMPILE for bad parameter sniffing cases (sparingly).
  • Avoid scalar UDFs in hot paths; inline logic or use APPLY.
  • Batch big writes; keep transactions short.

Bonus: store a snapshot for trending
If you want a daily/15-minute snapshot to trend over time:
-- One-time setup
CREATE TABLE dbo.TopQuerySnapshot
(
    captured_at           DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME(),
    database_name         SYSNAME,
    executions            BIGINT,
    total_cpu_ms          BIGINT,
    avg_cpu_ms            DECIMAL(18,2),
    avg_duration_ms       DECIMAL(18,2),
    total_reads           BIGINT,
    query_text            NVARCHAR(MAX)
);

-- Collector (schedule as an Agent Job)
INSERT INTO dbo.TopQuerySnapshot (database_name, executions, total_cpu_ms, avg_cpu_ms, avg_duration_ms, total_reads, query_text)
SELECT TOP 50
    DB_NAME(st.dbid),
    qs.execution_count,
    qs.total_worker_time/1000,
    (qs.total_worker_time/1000.0)/NULLIF(qs.execution_count,0),
    (qs.total_elapsed_time/1000.0)/NULLIF(qs.execution_count,0),
    (qs.total_logical_reads + qs.total_physical_reads),
    SUBSTRING(st.text, (qs.statement_start_offset/2)+1,
        ((CASE qs.statement_end_offset WHEN -1 THEN DATALENGTH(st.text) ELSE qs.statement_end_offset END - qs.statement_start_offset)/2) + 1)
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
ORDER BY qs.total_worker_time DESC;

Now you can chart trends and prove improvements.

Common questions
Does this include plans not in cache?

No. DMVs reflect what’s cached. For long-term history, enable Query Store (SQL Server 2016+) and analyze sys.query_store_runtime_stats.
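If Query Store is on, a rough equivalent of the CPU ranking above can be built from its catalog views (avg_cpu_time is in microseconds, so the arithmetic mirrors the DMV scripts):

-- Query Store: top statements by total CPU across collected intervals
SELECT TOP 20
    qt.query_sql_text,
    SUM(rs.count_executions)                            AS executions,
    SUM(rs.avg_cpu_time * rs.count_executions) / 1000.0 AS total_cpu_ms
FROM sys.query_store_runtime_stats AS rs
JOIN sys.query_store_plan       AS p  ON p.plan_id        = rs.plan_id
JOIN sys.query_store_query      AS q  ON q.query_id       = p.query_id
JOIN sys.query_store_query_text AS qt ON qt.query_text_id = q.query_text_id
GROUP BY qt.query_sql_text
ORDER BY total_cpu_ms DESC;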

What about currently blocked sessions?
Use the “live view” script; chase the blocking_session_id, then inspect that session’s SQL text and plan.

Conclusion

Start with CPU, then look at IO and duration. If users are complaining right now, use the live view. Once you've identified a heavy hitter, check its plan, add the appropriate index, correct data types, and retest. Small, targeted changes usually deliver big results.

HostForLIFE.eu SQL Server 2022 Hosting
HostForLIFE.eu is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.



SQL Server Hosting - HostForLIFE :: Working with Big Data in SQL Server

clock September 2, 2025 08:06 by author Peter

The amount of data generated and stored by organizations has been growing exponentially in recent years. Big data refers to large and complex data sets that traditional data processing applications cannot handle. Managing and analyzing big data is becoming increasingly important for businesses to gain insights and stay ahead of the competition. Microsoft SQL Server is a powerful database management system capable of handling big data.

In this article, we will use appropriate examples to explore the techniques for working with big data in SQL Server.

Partitioning
Partitioning is a technique used to break large tables into smaller, more manageable pieces called partitions. Each partition contains a subset of the data, which can be processed and queried independently. SQL Server supports horizontal partitioning, which involves splitting data based on a column or set of columns.

For example, consider a table containing sales data for a large retail store. Partitioning the table based on the year column would create a separate partition for each year of data. This would allow queries to be run on individual partitions rather than the entire table, resulting in faster query performance. First we need to create a table with partitioned indexes. We can use the below code to create a table called "Sales" with a partitioned index on the "SaleDate" column:
CREATE TABLE [dbo].[Sales]
(
    [SaleID] [int] NOT NULL PRIMARY KEY NONCLUSTERED,
    [Product] [nvarchar](50) NOT NULL,
    [SaleDate] [date] NOT NULL,
    [SaleAmount] [money] NOT NULL
);
GO

CREATE PARTITION FUNCTION [PF_Sales_SaleDate]
    (date)
AS RANGE RIGHT FOR VALUES ('2023-01-01', '2023-02-01', '2023-03-01');

-- The filegroups referenced below must already exist in the database
CREATE PARTITION SCHEME [PS_Sales_SaleDate]
    AS PARTITION [PF_Sales_SaleDate]
    TO ([FG_Sales_202301], [FG_Sales_202302], [FG_Sales_202303], [FG_Sales_202304]);

-- Building the clustered index on the partition scheme partitions the table by SaleDate
CREATE CLUSTERED INDEX [CI_Sales_SaleDate]
ON [dbo].[Sales] ([SaleDate])
ON [PS_Sales_SaleDate] ([SaleDate]);


The code creates a table with columns for SaleID, Product, SaleDate, and SaleAmount, and defines a nonclustered primary key on the SaleID column so that the clustered index can later be built on SaleDate. The partition function and scheme partition the data based on the SaleDate column, with boundary points for January, February, and March of 2023, and finally a clustered index is created on the SaleDate column using the partition scheme, which distributes the rows across the partitions. Once we have created the table, we can insert some data into it using the query below.

INSERT INTO [dbo].[Sales] ([SaleID], [Product], [SaleDate], [SaleAmount])
VALUES (1, 'Product A', '2023-01-01', 100.00),(2, 'Product B', '2023-01-02', 200.00),
       (3, 'Product C', '2023-01-03', 300.00),(4, 'Product A', '2023-02-01', 400.00),
       (5, 'Product B', '2023-02-02', 500.00),(6, 'Product C', '2023-02-03', 600.00);

Now whenever we query the Sales table, the partitioned index is used automatically and SQL Server can scan only the partitions that contain the relevant data. This improves query performance and reduces the amount of disk I/O required, which is what makes partitioning such a powerful feature for queries on large tables.
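To confirm rows are landing in the partitions you expect, you can group by the partition function; a quick check along these lines:

-- Show how many rows landed in each partition (uses the partition function created above)
SELECT
    $PARTITION.PF_Sales_SaleDate(SaleDate) AS partition_number,
    COUNT(*)                               AS row_count
FROM dbo.Sales
GROUP BY $PARTITION.PF_Sales_SaleDate(SaleDate);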

Columnstore Indexes
Columnstore indexes are a specialized type of index that is optimized for large data warehouses. They store data in columns rather than rows, which makes them much more efficient for querying large datasets. Columnstore indexes are particularly useful for frequently queried but rarely updated data.

For example, consider a table containing customer sales data for a large online retailer. A columnstore index would make it very fast to query the total sales for each product. First we need to create a table called "Sales" and then add a clustered columnstore index to it (a clustered columnstore index covers every column, so it is not created on a single column). To create the columnstore index:
CREATE TABLE [dbo].[Sales]
(
    [SaleID] [int] NOT NULL PRIMARY KEY NONCLUSTERED,
    [Product] [nvarchar](50) NOT NULL,
    [SaleDate] [date] NOT NULL,
    [SaleAmount] [money] NOT NULL
);
GO

-- A clustered columnstore index always covers the whole table, so no column list is given
CREATE CLUSTERED COLUMNSTORE INDEX [CSI_Sales]
ON [dbo].[Sales];


The query above creates a clustered columnstore index, which means the entire table is stored in a columnar format. Queries against the Sales table, such as totals grouped by Product, will now read only the columns they need, which is typically much faster than scanning a traditional row-based index.
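For instance, an aggregate like the following benefits directly from the columnar layout, since only the Product and SaleAmount columns are read:

-- Total sales per product; a typical analytic query for a columnstore table
SELECT Product, SUM(SaleAmount) AS TotalSales
FROM dbo.Sales
GROUP BY Product;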

In-Memory OLTP
In-Memory OLTP is a feature, introduced in SQL Server 2014, that allows you to create memory-optimized tables. These tables are stored entirely in memory, which makes them much faster than traditional disk-based tables. In-Memory OLTP is beneficial for applications requiring high performance and low latency. For example, consider a table containing stock market data. In-Memory OLTP could be used to create a memory-optimized table that stores the latest market data, allowing very fast querying of real-time stock prices. To create a memory-optimized filegroup, which will contain the memory-optimized tables, we can use the query below:
ALTER DATABASE [MyDatabase] ADD FILEGROUP [InMemoryFilegroup] CONTAINS MEMORY_OPTIMIZED_DATA;

It will add a new filegroup called "InMemoryFilegroup" to the "MyDatabase" database, which contains memory-optimized data. Now we will create a memory-optimized table that will be stored entirely in memory:
CREATE TABLE [dbo].[MarketData_MemoryOptimized]
(
    [Name] [nvarchar](50) NOT NULL PRIMARY KEY NONCLUSTERED,
    [Price] [decimal](18, 2) NOT NULL,
    [Timestamp] [datetime2](0) NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);


The "MarketData_MemoryOptimized" table is defined as MEMORY_OPTIMIZED, which means it will be stored entirely in memory, and DURABILITY is set to SCHEMA_ONLY, which means it won't be written to disk. In-Memory OLTP can be a powerful tool for storing and querying real-time data in memory-optimized tables. By storing data in memory, In-Memory OLTP can improve query performance and reduce latency for certain types of database workloads, such as real-time market data.
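One prerequisite the filegroup snippet above does not show: the memory-optimized filegroup also needs a container file before the CREATE TABLE will succeed. A minimal sketch, with an example path you would adjust to your server:

-- A memory-optimized filegroup needs at least one container before tables can be created in it.
-- The path below is only an example; point it at a folder that exists on your server.
ALTER DATABASE [MyDatabase]
ADD FILE (NAME = N'InMemoryContainer', FILENAME = N'C:\Data\InMemoryContainer')
TO FILEGROUP [InMemoryFilegroup];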

PolyBase

PolyBase is a feature in SQL Server that allows for integrating external data sources, such as Hadoop or Azure Blob Storage. PolyBase allows for querying structured and unstructured data, making it a powerful tool for working with big data.

For example, consider a large financial services company that stores customer data in Hadoop. The company may want to analyze customer behavior and trends to improve their services, but querying the large amount of data stored in Hadoop can be difficult and time-consuming. This is where PolyBase comes in - by connecting SQL Server directly to the Hadoop data source, the company can query the data easily and quickly, allowing for in-depth analysis of customer behavior and trends.
EXEC sp_configure 'polybase enabled', 1;
RECONFIGURE;

-- Assumes a database-scoped credential named HadoopCredential already exists
CREATE EXTERNAL DATA SOURCE HadoopDataSource
WITH (
    TYPE = HADOOP,
    LOCATION = 'hdfs://<HadoopNameNode>:<PortNumber>',
    CREDENTIAL = HadoopCredential
);

-- FILE_FORMAT must reference an external file format object, so create one first
CREATE EXTERNAL FILE FORMAT TextFileFormat
WITH (
    FORMAT_TYPE = DELIMITEDTEXT,
    FORMAT_OPTIONS (FIELD_TERMINATOR = ',')
);

CREATE EXTERNAL TABLE CustomerData_Hadoop
(
    CustomerID INT,
    FirstName VARCHAR(50),
    LastName VARCHAR(50),
    EmailAddress VARCHAR(50),
    Address VARCHAR(100),
    City VARCHAR(50),
    State VARCHAR(2),
    ZipCode VARCHAR(10)
)
WITH (
    LOCATION = '/customerdata',
    DATA_SOURCE = HadoopDataSource,
    FILE_FORMAT = TextFileFormat
);

The above code enables PolyBase in SQL Server, creates an external data source called HadoopDataSource that points to the Hadoop cluster at hdfs://<HadoopNameNode>:<PortNumber>, creates an external file format describing the delimited text files, and finally creates an external table called CustomerData_Hadoop that maps to the data stored in Hadoop. The LOCATION option specifies the location of the data in Hadoop, the DATA_SOURCE option specifies the external data source to use to query the data, and the FILE_FORMAT option references the external file format object (TextFileFormat) that describes how the files are laid out.

PolyBase allows us to query external data sources directly from SQL Server. This can be a valuable tool for organizations that have large amounts of data stored in external sources and need to query that data quickly and easily.
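For instance, once the external table exists it can be queried like any local table (illustrative):

-- Query data stored in Hadoop as if it were a local table
SELECT TOP 10 CustomerID, FirstName, LastName, City, State
FROM CustomerData_Hadoop
WHERE State = 'NY';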

Conclusion

SQL Server offers a number of effective tools and methods for handling large amounts of data. There are numerous methods for maximizing the speed of big data sets, ranging from partitioning and columnstore indexes to In-Memory OLTP and PolyBase. In today's data-driven world, companies may remain ahead of the competition and obtain insightful knowledge by employing these strategies.

HostForLIFE.eu SQL Server 2022 Hosting
HostForLIFE.eu is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.



SQL Server Hosting - HostForLIFE :: How Generative AI Can Help Database Engineers and DBAs Increase Productivity?

clock August 21, 2025 08:02 by author Peter

In contemporary software development, database engineers and database administrators, or DBAs, are essential. They frequently have a lot of work to do, from managing performance, security, and compliance to guaranteeing data veracity. Let's talk about generative AI. These AI models are capable of creating documentation, optimizing queries, writing SQL, designing schemas, and even assisting with compliance. By taking on repetitive tasks, they enhance the abilities of DBAs rather than replace them, allowing engineers to concentrate on high-value problem-solving.

In this post, we'll examine how database engineers and DBAs can use generative AI throughout the data lifecycle, with specific examples and tools you can use right now.

1. Schema Design & Documentation
Problem: Designing normalized schemas and documenting them manually is time-consuming.
AI Solution: Generative AI can propose ERDs, normalize schemas, and generate Markdown or HTML documentation instantly.
Tools: ChatGPT, dbdiagram.io AI, Claude.

Example Prompt:
“Design a normalized Postgres schema for a healthcare awareness website with users, doctors, and articles. Include indexes for frequent queries.”

Benefit: Schema prototyping is 50–70% faster.

2. SQL Query Generation & Optimization

Problem: Writing and tuning complex queries can block analysts and slow delivery.
AI Solution: AI can convert plain English into SQL, write joins, and suggest performance improvements.

Tools: ChatGPT, Text2SQL tools, MindsDB.

Example Prompt:
“Write a SQL query to return the top 10 most read articles in the past 30 days by unique users.”
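The exact SQL returned will vary, but for a hypothetical article_reads(article_id, user_id, read_at) table the output might look like this:

-- Top 10 most-read articles in the last 30 days by unique readers (hypothetical table and columns)
SELECT TOP 10
    ar.article_id,
    COUNT(DISTINCT ar.user_id) AS unique_readers
FROM article_reads AS ar
WHERE ar.read_at >= DATEADD(DAY, -30, SYSUTCDATETIME())
GROUP BY ar.article_id
ORDER BY unique_readers DESC;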

Benefit: Analysts and engineers save 60% time on query generation.

3. Performance Tuning

Problem: Query optimization often involves deep expertise and trial-and-error with execution plans.
AI Solution: Paste EXPLAIN ANALYZE outputs into ChatGPT to get recommendations for indexes, partitioning, or caching.

Example Prompt:
“Here’s a Postgres EXPLAIN ANALYZE output. Recommend optimizations and appropriate indexes.”

Benefit: Speeds up triage 30–40%.

4. Data Migration & ETL Automation

Problem: Writing migration scripts between databases or cleaning data pipelines can take weeks.
AI Solution: Generative AI can generate migration code, field mappings, and transformation logic.
Tools: ChatGPT, LangChain connectors, Claude.

Example Prompt:
“Generate SQL to migrate users from MySQL schema A to Postgres schema B. Map user_id → id, full_name → first_name + last_name.”

Benefit: Migration tasks are 2–3x faster.

5. Data Quality & Validation

Problem: Data validation scripts and anomaly detection rules are often neglected.

AI Solution: AI can generate test cases and quality checks.

Tools: ChatGPT + dbt.

Example Prompt:
“Write SQL checks to validate that zipcode is 5 digits and articles.title is never null. Output in dbt schema test format.”

Benefit: Expands coverage with little extra effort.

6. Security & Compliance
Problem: DBAs must enforce GDPR, HIPAA, and internal security rules — often manually.

AI Solution: AI can create row-level security policies, anonymization scripts, and audit queries.

Example Prompt:
“Generate Postgres row-level security so users can only see their own records.”
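The generated policy might look roughly like this (PostgreSQL syntax; the user_records table and user_name column are hypothetical):

-- Enable row-level security and restrict reads to the connected user's own rows
ALTER TABLE user_records ENABLE ROW LEVEL SECURITY;

CREATE POLICY user_isolation ON user_records
    FOR SELECT
    USING (user_name = current_user);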

Benefit: Faster compliance, reduced risk.

7. Documentation & Knowledge Sharing
Problem: Documentation is often outdated, incomplete, or skipped.
AI Solution: AI can auto-generate data dictionaries, ER diagrams, and tutorials from schemas.

Example Prompt:
“Generate a Markdown data dictionary from this schema. Include data types, constraints, and sample values.”

Benefit: 80% of documentation time eliminated.

Productivity Gains for DBAs Using Generative AI

  • Schema prototyping: 50–70% faster
  • SQL generation: 60% faster
  • Query optimization: 30–40% faster
  • Documentation: 80% automated
  • Compliance & quality checks: 2–3× coverage

Generative AI transforms DBAs from “firefighters” into strategic enablers of innovation.

Learn More: Upskill with Generative AI

Want to master how to use Generative AI in your daily engineering work?

Check out LearnAI at C# Corner — our hands-on training designed for developers, data engineers, and DBAs who want to stay ahead of the AI curve.
You’ll learn prompt engineering, AI coding, AI database workflows, and production-ready integrations.

HostForLIFE.eu SQL Server 2022 Hosting
HostForLIFE.eu is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.



SQL Server Hosting - HostForLIFE :: Developing Transaction Management and Error Handling Skills with Savepoints for Enterprise Excellence

clock August 13, 2025 09:20 by author Peter

I have been designing mission-critical solutions for intricate ERP systems since 2009, supporting corporate operations including sales, purchasing, production, supply chain management (SCM), accounts, and human resources. In this extensive 15,000+ word tutorial, I'll cover the art and science of creating robust database systems, with a focus on transaction savepoints, a sometimes overlooked but effective tool for fine-grained control in intricate workflows.

This blog article is your road map to becoming proficient with SQL Server's error handling and transaction management, regardless of your level of experience—from a newbie developer creating your first stored procedure to an experienced architect creating a worldwide ERP platform. Anticipate a combination of real-world business scenarios, useful coding examples, theoretical depth, and concrete best practices. I'll walk you through each layer, from fundamental error handling to sophisticated savepoint tactics in high-concurrency systems, making sure your databases are reliable, effective, and prepared for enterprise-scale problems. Hold on tight as we explore the world of SQL Server, where accuracy and dependability are evident in every transaction and data integrity is paramount!

1. Introduction to Error Handling and Transaction Management
Why These Mechanisms Are the Backbone of ERP Systems?

Imagine a global enterprise where a single misstep in a sales order, inventory update, or financial posting could ripple across departments, costing millions or eroding customer trust. In the high-pressure world of enterprise resource planning (ERP) systems, error handling and transaction management are the unsung heroes that ensure data integrity, reliability, and consistency. These mechanisms are not just technical tools; they’re the foundation of trust in systems that manage Sales, Purchase, Production, SCM, Accounts, and HR. As a SQL Server developer since 2009, I’ve witnessed the chaos of poorly handled errors and the elegance of well-orchestrated transactions. From preventing stock discrepancies in a retail giant to ensuring accurate payroll for thousands of employees, these techniques are critical for success. In complex ERP systems, where multiple modules interact with shared data, a single failure can cascade into disaster. Robust error handling catches issues before they escalate, while transaction management guarantees that operations are executed as an all-or-nothing unit.

The Power of Savepoints in Complex Workflows
Among SQL Server’s arsenal, transaction savepoints stand out as a game-changer for complex workflows. Savepoints allow you to mark points within a transaction, enabling partial rollbacks without discarding the entire operation. In ERP systems, where a single process might involve dozens of steps such as updating inventory, posting financial entries, and logging audit trails, savepoints provide surgical precision, ensuring flexibility and resilience. In this 15,000+ word guide, I’ll take you from the basics of error handling to the advanced intricacies of savepoints, with real-world examples drawn from my experience in large-scale ERP systems. You’ll find detailed code, business scenarios, performance tips, and a case study to tie it all together. Let’s embark on this journey to build systems that stand the test of time!

2. Theoretical Foundations
Understanding Errors in SQL Server
Errors in SQL Server can disrupt even the most carefully designed systems. They fall into several categories.

  • Syntax Errors: Incorrect SQL code (e.g., missing semicolon).
  • Constraint Violations: Breaching primary key, foreign key, or check constraints.
  • Runtime Errors: Division by zero, data type mismatches, or deadlocks.
  • System Errors: Resource exhaustion, server crashes, or network failures.

SQL Server classifies errors by severity levels (0–25).

  • 0–10: Informational messages or warnings.
  • 11–16: User-correctable errors (e.g., constraint violations).
  • 17–19: Resource or software issues (e.g., out of memory).
  • 20–25: Fatal errors that terminate the connection or process.

Understanding these categories and their implications is essential for designing effective error-handling strategies.

Transaction Concepts and ACID Properties
A transaction is a logical unit of work that must be executed as a whole. SQL Server transactions adhere to the ACID properties.

  1. Atomicity: Ensures all operations complete successfully, or none are applied.
  2. Consistency: Maintains the database in a valid state before and after the transaction.
  3. Isolation: Ensures transactions are independent of one another.
  4. Durability: Guarantees that committed changes are permanently saved, even in a system failure.

In ERP systems, ACID compliance is non-negotiable. For example, a sales order must update inventory, generate an invoice, and log an audit trail as a single atomic unit.

Concurrency and Isolation Levels
In multi-user ERP systems, concurrent transactions can lead to issues like:

  • Dirty Reads: Reading uncommitted data.
  • Non-Repeatable Reads: Data changes between reads within a transaction.
  • Phantom Reads: New rows appear during a transaction.

SQL Server offers isolation levels to manage concurrency.

  • Read Uncommitted: Allows dirty reads (lowest isolation).
  • Read Committed: Prevents dirty reads (SQL Server default).
  • Repeatable Read: Prevents dirty and non-repeatable reads.
  • Serializable: Prevents all concurrency issues (highest isolation).
  • Snapshot: Uses row versioning for consistent reads without locking.


Choosing the correct isolation level is a balancing act between data consistency and performance, especially in high-concurrency ERP environments.
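For example, the Snapshot level needs a one-time database setting before sessions can use it; a minimal sketch, with MyERPDatabase as a placeholder name:

-- Snapshot isolation must be enabled once at the database level
ALTER DATABASE MyERPDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;

-- A session can then opt in per transaction
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;

BEGIN TRANSACTION;
    SELECT OrderStatus
    FROM Sales.Orders
    WHERE OrderID = 100;
COMMIT TRANSACTION;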

3. SQL Server Error Handling Mechanisms

Legacy Error Handling with @@ERROR
Before SQL Server 2005, error handling relied on the @@ERROR system function, which returns the error number of the last executed statement (0 if no error).

Example: Basic Error Handling with @@ERROR.

BEGIN TRANSACTION;

INSERT INTO Sales.Orders (OrderID, CustomerID, OrderDate)
VALUES (1, 100, GETDATE());

IF @@ERROR <> 0
BEGIN
    ROLLBACK TRANSACTION;
    PRINT 'Error occurred during order insertion. Transaction rolled back.';
END
ELSE
BEGIN
    COMMIT TRANSACTION;
    PRINT 'Order inserted successfully.';
END


Limitations

  • @@ERROR resets after each statement, requiring immediate checks.
  • Limited error details (only error number).
  • Clunky for complex procedures.

Modern Error Handling with TRY…CATCH
Introduced in SQL Server 2005, TRY…CATCH provides structured error handling, similar to C# or Java.

BEGIN TRY
    -- Code that might cause an error
END TRY
BEGIN CATCH
    -- Handle the error
END CATCH


Example: TRY…CATCH with Transaction.

BEGIN TRY
    BEGIN TRANSACTION;

    INSERT INTO Sales.Orders (OrderID, CustomerID, OrderDate)
    VALUES (1, 100, GETDATE());

    -- Simulate an error (foreign key violation)
    INSERT INTO Sales.OrderDetails (OrderID, ProductID, Quantity)
    VALUES (1, 999, 10); -- ProductID 999 does not exist

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;

    SELECT
        ERROR_NUMBER()    AS ErrorNumber,
        ERROR_MESSAGE()   AS ErrorMessage,
        ERROR_LINE()      AS ErrorLine,
        ERROR_PROCEDURE() AS ErrorProcedure;
END CATCH;


Benefits

  1. Captures all errors in the TRY block.
  2. Provides detailed error information via functions.
  3. Simplifies transaction rollback logic.


Error Information Functions
Within a CATCH block, use these functions to retrieve error details.

  • ERROR_NUMBER(): Error number.
  • ERROR_MESSAGE(): Error description.
  • ERROR_LINE(): Line number where the error occurred.
  • ERROR_PROCEDURE(): Name of the stored procedure or trigger.
  • ERROR_SEVERITY(): Severity level.
  • ERROR_STATE(): Error state number.


Custom Error Handling with RAISERROR and THROW
For business-specific errors, use RAISERROR (pre-SQL Server 2012) or THROW (SQL Server 2012+).

Example: Custom Error with THROW.

BEGIN TRY
    DECLARE @CustomerID INT = 999;

    IF NOT EXISTS (
        SELECT 1
        FROM Sales.Customers
        WHERE CustomerID = @CustomerID
    )
        THROW 50001, 'Invalid CustomerID provided.', 1;
END TRY
BEGIN CATCH
    SELECT
        ERROR_NUMBER() AS ErrorNumber,
        ERROR_MESSAGE() AS ErrorMessage;
END CATCH


THROW vs. RAISERROR

  • THROW is simpler, integrates with TRY…CATCH, and re-throws the original error.
  • RAISERROR supports custom severity but is less flexible.
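For comparison, here is the same business error raised with RAISERROR, plus THROW used with no arguments to re-raise an error inside a CATCH block:

BEGIN TRY
    -- RAISERROR takes a message, severity, and state (severity 16 = user-correctable)
    RAISERROR ('Invalid CustomerID provided.', 16, 1);
END TRY
BEGIN CATCH
    -- THROW with no arguments re-raises the original error to the caller
    THROW;
END CATCH;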


4. Transaction Management in SQL Server
Transaction Types
SQL Server supports three transaction types.

  • Auto-Commit: Each statement is a transaction (default).
  • Implicit: Transactions start automatically for certain statements but require explicit COMMIT or ROLLBACK.
  • Explicit: Defined with BEGIN TRANSACTION, COMMIT, and ROLLBACK.


Transaction Control Statements

  • BEGIN TRANSACTION: Starts a transaction.
  • COMMIT TRANSACTION: Saves changes.
  • ROLLBACK TRANSACTION: Undoes changes.


Example: Explicit Transaction.

BEGIN TRANSACTION;

UPDATE Sales.Orders
SET OrderStatus = 'Processed'
WHERE OrderID = 100;

INSERT INTO Sales.OrderHistory (OrderID, Status, ChangeDate)
VALUES (100, 'Processed', GETDATE());

COMMIT TRANSACTION;

Nested Transactions
SQL Server supports nested transactions, but only the outermost COMMIT actually commits the work; a ROLLBACK at any level undoes the entire transaction. The @@TRANCOUNT function tracks nesting levels.

Example: Nested Transactions.
BEGIN TRANSACTION;

INSERT INTO Sales.Orders (OrderID, CustomerID, OrderDate)
VALUES (2, 101, GETDATE());

BEGIN TRANSACTION; -- Nested
    INSERT INTO Sales.OrderDetails (OrderID, ProductID, Quantity)
    VALUES (2, 100, 5);
COMMIT TRANSACTION; -- Inner commit

COMMIT TRANSACTION; -- Outer commit

Distributed Transactions
For operations across multiple databases or servers, use the Microsoft Distributed Transaction Coordinator (MS DTC).

Example: Distributed Transaction.
BEGIN DISTRIBUTED TRANSACTION;

UPDATE Server1.ERP.dbo.Orders
SET Status = 'Shipped'
WHERE OrderID = 100;

UPDATE Server2.ERP.dbo.Inventory
SET Quantity = Quantity - 10
WHERE ProductID = 100;

COMMIT TRANSACTION;

5. Best Practices for Error Handling and Transaction Management

  • Use TRY…CATCH: More robust than @@ERROR.
  • Check @@TRANCOUNT: Ensure transactions are committed or rolled back.
  • Log Errors: Store errors in a dedicated table for auditing.
  • Minimize Transaction Scope: Reduce locking and improve concurrency.
  • Choose Appropriate Isolation Levels: Balance consistency and performance.
  • Handle Deadlocks: Retry on error 1205 (see the retry sketch after this list).
  • Test Edge Cases: Simulate failures like constraint violations or timeouts.
  • Document Error Codes: Maintain a reference for system and custom errors.
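A minimal sketch of the deadlock retry mentioned above; the UPDATE is just a stand-in for your real unit of work:

-- Retry up to 3 times when chosen as the deadlock victim (error 1205)
DECLARE @Retry INT = 0;

WHILE @Retry < 3
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;

        -- Placeholder for the real unit of work
        UPDATE Sales.Orders
        SET OrderStatus = 'Processed'
        WHERE OrderID = 100;

        COMMIT TRANSACTION;
        BREAK; -- success, leave the retry loop
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0
            ROLLBACK TRANSACTION;

        IF ERROR_NUMBER() = 1205 -- deadlock victim: back off briefly and try again
        BEGIN
            SET @Retry += 1;
            WAITFOR DELAY '00:00:01';
        END
        ELSE
        BEGIN
            THROW; -- any other error is re-raised to the caller
        END
    END CATCH
END;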

Example: Error Logging Table.
CREATE TABLE dbo.ErrorLog
(
    ErrorID         INT IDENTITY (1, 1) PRIMARY KEY,
    ErrorNumber     INT,
    ErrorMessage    NVARCHAR(4000),
    ErrorLine       INT,
    ErrorProcedure  NVARCHAR(128),
    ErrorDateTime   DATETIME DEFAULT GETDATE(),
    UserName        NVARCHAR(128) DEFAULT SUSER_SNAME()
);


Example: Logging Errors.
BEGIN TRY
    BEGIN TRANSACTION;

    -- Business logic

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;

    INSERT INTO dbo.ErrorLog (
        ErrorNumber,
        ErrorMessage,
        ErrorLine,
        ErrorProcedure
    )
    VALUES (
        ERROR_NUMBER(),
        ERROR_MESSAGE(),
        ERROR_LINE(),
        ERROR_PROCEDURE()
    );

    THROW;
END CATCH;


6. In-Depth Exploration of Transaction Savepoints
What Are Savepoints?

Transaction savepoints allow you to mark specific points within a transaction, enabling partial rollbacks without undoing the entire transaction. This is particularly useful in complex ERP workflows where some operations can succeed even if others fail.

SAVE TRANSACTION SavepointName;
ROLLBACK TRANSACTION SavepointName;

  • SAVE TRANSACTION: Creates a savepoint.
  • ROLLBACK TRANSACTION SavepointName: Rolls back to the specified savepoint, preserving earlier operations.
  • Savepoints are scoped to the current transaction and are cleared when the transaction is committed or rolled back.


Use Cases in ERP Systems
Savepoints are ideal for:

  • Multi-Step Workflows: E.g., processing a sales order with optional steps like discount application.
  • Error Recovery: Allowing partial success in batch processing (e.g., production or payroll).
  • Audit Trails: Logging intermediate steps without committing the entire transaction.
  • Complex Validation: Rolling back specific steps if validations fail.


Detailed Examples with Data
Let’s explore savepoints with a comprehensive example in an ERP system, complete with sample data and business context.

Scenario: A manufacturing company processes a production batch, consuming raw materials, producing finished goods, and updating quality control records. If the quality control step fails, only that step should be rolled back, preserving the raw material consumption and production updates.

Sample Data Setup
-- Create tables
CREATE TABLE Production.RawMaterials (
    MaterialID INT PRIMARY KEY,
    MaterialName NVARCHAR(100),
    Quantity INT
);

CREATE TABLE Production.FinishedGoods (
    ProductID INT PRIMARY KEY,
    ProductName NVARCHAR(100),
    Quantity INT
);

CREATE TABLE Production.QualityControl (
    QCID INT IDENTITY(1,1) PRIMARY KEY,
    BatchID INT,
    ProductID INT,
    Status NVARCHAR(20),
    QCDate DATETIME
);

-- Insert sample data
INSERT INTO Production.RawMaterials (MaterialID, MaterialName, Quantity)
VALUES
    (1, 'Steel', 1000),
    (2, 'Plastic', 500);

INSERT INTO Production.FinishedGoods (ProductID, ProductName, Quantity)
VALUES
    (101, 'Widget A', 200),
    (102, 'Widget B', 150);


Stored Procedure with Savepoints.
CREATE PROCEDURE Production.usp_ProcessBatchWithSavepoints
    @BatchID INT,
    @ProductID INT,
    @MaterialID INT,
    @QuantityProduced INT,
    @RawMaterialConsumed INT,
    @QCStatus NVARCHAR(20)
AS
BEGIN
    SET NOCOUNT ON;

    BEGIN TRY
        BEGIN TRANSACTION;

        -- Validate inputs
        IF NOT EXISTS (
            SELECT 1
            FROM Production.RawMaterials
            WHERE MaterialID = @MaterialID
              AND Quantity >= @RawMaterialConsumed
        )
            THROW 50010, 'Insufficient raw material quantity.', 1;

        IF NOT EXISTS (
            SELECT 1
            FROM Production.FinishedGoods
            WHERE ProductID = @ProductID
        )
            THROW 50011, 'Invalid ProductID.', 1;

        -- Step 1: Consume raw material
        SAVE TRANSACTION RawMaterialUpdate;
        UPDATE Production.RawMaterials
        SET Quantity = Quantity - @RawMaterialConsumed
        WHERE MaterialID = @MaterialID;

        -- Step 2: Produce finished goods
        SAVE TRANSACTION ProductionUpdate;
        UPDATE Production.FinishedGoods
        SET Quantity = Quantity + @QuantityProduced
        WHERE ProductID = @ProductID;

        -- Step 3: Perform quality control (simulate potential failure)
        SAVE TRANSACTION QualityControl;
        IF @QCStatus = 'Failed'
            THROW 50012, 'Quality control check failed.', 1;

        INSERT INTO Production.QualityControl (BatchID, ProductID, Status, QCDate)
        VALUES (@BatchID, @ProductID, @QCStatus, GETDATE());

        COMMIT TRANSACTION;
        SELECT 'Batch processed successfully.' AS Result;
    END TRY

    BEGIN CATCH
        IF @@TRANCOUNT > 0
        BEGIN
            IF ERROR_NUMBER() = 50012 -- QC failure
            BEGIN
                ROLLBACK TRANSACTION QualityControl; -- Roll back only QC step
                COMMIT TRANSACTION; -- Commit raw material and production updates
                SELECT 'Quality control failed, but raw material and production updates committed.' AS Result;
            END
            ELSE
            BEGIN
                ROLLBACK TRANSACTION; -- Full rollback for other errors

                INSERT INTO dbo.ErrorLog (ErrorNumber, ErrorMessage, ErrorLine, ErrorProcedure)
                VALUES (
                    ERROR_NUMBER(),
                    ERROR_MESSAGE(),
                    ERROR_LINE(),
                    ERROR_PROCEDURE()
                );

                THROW;
            END
        END
    END CATCH
END;


Execution Example 1: Successful Batch Processing.
-- Initial data state
SELECT *
FROM Production.RawMaterials;

SELECT *
FROM Production.FinishedGoods;

SELECT *
FROM Production.QualityControl;

-- Execute procedure (successful QC)
EXEC Production.usp_ProcessBatchWithSavepoints
    @BatchID = 1,
    @ProductID = 101,
    @MaterialID = 1,
    @QuantityProduced = 50,
    @RawMaterialConsumed = 200,
    @QCStatus = 'Passed';

-- Verify results
SELECT *
FROM Production.RawMaterials;       -- Steel: 800

SELECT *
FROM Production.FinishedGoods;      -- Widget A: 250

SELECT *
FROM Production.QualityControl;     -- QC record added


Execution Example 2: QC Failure with Partial Rollback.
-- Execute procedure (QC fails)
EXEC Production.usp_ProcessBatchWithSavepoints
    @BatchID = 2,
    @ProductID = 102,
    @MaterialID = 2,
    @QuantityProduced = 30,
    @RawMaterialConsumed = 100,
    @QCStatus = 'Failed';

-- Verify results
SELECT *
FROM Production.RawMaterials; -- Plastic: 400

SELECT *
FROM Production.FinishedGoods; -- Widget B: 180

SELECT *
FROM Production.QualityControl; -- No QC record added


Explanation

  • The procedure uses three savepoints: RawMaterialUpdate, ProductionUpdate, and QualityControl.
  • If the quality control step fails (error 50012), only the QC step is rolled back, preserving the raw material consumption and production updates.
  • Other errors trigger a complete rollback, ensuring data consistency.
  • The ErrorLog table captures any non-QC errors for auditing.

Best Practices for Savepoints
  • Use Descriptive Savepoint Names: E.g., RawMaterialUpdate instead of SP1.
  • Limit Savepoint Scope: Use savepoints only for critical steps to avoid complexity.
  • Check @@TRANCOUNT: Ensure a transaction is active before creating savepoints.
  • Handle Errors Carefully: Differentiate between errors requiring partial vs. full rollback.
  • Test Savepoint Logic: Simulate failures to verify rollback behavior.
  • Avoid Overuse: Too many savepoints can complicate code and reduce readability.

Limitations and Considerations

  • No Nested Savepoints: Savepoints are linear within a transaction; rolling back to an earlier savepoint clears later ones.
  • Performance Overhead: Savepoints increase transaction log writes, impacting performance in high-volume systems.
  • Scope Limitation: Savepoints are only valid within the current transaction.
  • Error Handling Complexity: Requires careful logic to manage partial rollbacks.


Example: Savepoint Overuse Pitfall.
BEGIN TRANSACTION;

SAVE TRANSACTION SP1;
INSERT INTO Sales.Orders (OrderID, CustomerID)
VALUES (3, 103);

SAVE TRANSACTION SP2;
UPDATE Sales.Orders
SET OrderStatus = 'Processed'
WHERE OrderID = 3;

SAVE TRANSACTION SP3;
-- ... (many more savepoints)

ROLLBACK TRANSACTION SP1;  -- Clears SP2 and SP3
COMMIT TRANSACTION;


Solution: Limit savepoints to critical steps and use clear error handling to avoid confusion.

Advanced Savepoint Example: Multi-Module ERP Workflow
In an extensive ERP system, a sales order might involve inventory updates, financial postings, and audit logging, with optional steps like applying discounts. Savepoints allow flexibility in handling optional failures.

Setup:

CREATE TABLE Sales.Orders (
    OrderID INT PRIMARY KEY,
    CustomerID INT,
    OrderDate DATE,
    TotalAmount DECIMAL(18, 2)
);

CREATE TABLE Sales.OrderDetails (
    OrderDetailID INT IDENTITY(1, 1) PRIMARY KEY,
    OrderID INT,
    ProductID INT,
    Quantity INT,
    UnitPrice DECIMAL(18, 2)
);

CREATE TABLE Inventory (
    ProductID INT PRIMARY KEY,
    Quantity INT
);

CREATE TABLE Accounts.GeneralLedger (
    LedgerID INT IDENTITY(1, 1) PRIMARY KEY,
    OrderID INT,
    Amount DECIMAL(18, 2),
    TransactionType NVARCHAR(20)
);

CREATE TABLE Sales.Discounts (
    DiscountID INT IDENTITY(1, 1) PRIMARY KEY,
    OrderID INT,
    DiscountAmount DECIMAL(18, 2)
);

-- Sample data
INSERT INTO Sales.Orders (OrderID, CustomerID, OrderDate, TotalAmount)
VALUES (1001, 201, '2025-07-21', 1000.00);

INSERT INTO Inventory (ProductID, Quantity)
VALUES (501, 500);

Procedure with Savepoints
CREATE PROCEDURE Sales.usp_ProcessOrderWithDiscount
    @OrderID INT,
    @CustomerID INT,
    @ProductID INT,
    @Quantity INT,
    @DiscountAmount DECIMAL(18,2)
AS
BEGIN
    SET NOCOUNT ON;

    BEGIN TRY
        BEGIN TRANSACTION;

        -- Step 1: Validate inputs
        IF NOT EXISTS (
            SELECT 1
            FROM Sales.Customers
            WHERE CustomerID = @CustomerID
        )
            THROW 50013, 'Invalid CustomerID.', 1;

        IF NOT EXISTS (
            SELECT 1
            FROM Inventory
            WHERE ProductID = @ProductID
              AND Quantity >= @Quantity
        )
            THROW 50014, 'Insufficient inventory.', 1;

        -- Step 2: Update order
        SAVE TRANSACTION OrderUpdate;

        UPDATE Sales.Orders
        SET TotalAmount = TotalAmount + (
            @Quantity * (SELECT UnitPrice FROM Products WHERE ProductID = @ProductID)
        )
        WHERE OrderID = @OrderID;

        -- Step 3: Update inventory
        SAVE TRANSACTION InventoryUpdate;

        UPDATE Inventory
        SET Quantity = Quantity - @Quantity
        WHERE ProductID = @ProductID;

        -- Step 4: Post to general ledger
        SAVE TRANSACTION LedgerUpdate;

        INSERT INTO Accounts.GeneralLedger (OrderID, Amount, TransactionType)
        VALUES (
            @OrderID,
            @Quantity * (SELECT UnitPrice FROM Products WHERE ProductID = @ProductID),
            'Debit'
        );

        -- Step 5: Apply discount (optional, can fail)
        SAVE TRANSACTION DiscountUpdate;

        IF @DiscountAmount > (
            SELECT TotalAmount
            FROM Sales.Orders
            WHERE OrderID = @OrderID
        )
            THROW 50015, 'Discount amount exceeds order total.', 1;

        INSERT INTO Sales.Discounts (OrderID, DiscountAmount)
        VALUES (@OrderID, @DiscountAmount);

        COMMIT TRANSACTION;

        SELECT 'Order processed successfully with discount.' AS Result;

    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0
        BEGIN
            IF ERROR_NUMBER() = 50015 -- Discount failure
            BEGIN
                ROLLBACK TRANSACTION DiscountUpdate; -- Roll back only discount
                COMMIT TRANSACTION; -- Commit other steps
                SELECT 'Discount application failed, but order processed.' AS Result;
            END
            ELSE
            BEGIN
                ROLLBACK TRANSACTION; -- Full rollback

                INSERT INTO dbo.ErrorLog (
                    ErrorNumber,
                    ErrorMessage,
                    ErrorLine,
                    ErrorProcedure
                )
                VALUES (
                    ERROR_NUMBER(),
                    ERROR_MESSAGE(),
                    ERROR_LINE(),
                    ERROR_PROCEDURE()
                );

                THROW;
            END
        END
    END CATCH
END;


Execution Example
-- Execute with valid discount
EXEC Sales.usp_ProcessOrderWithDiscount
    @OrderID        = 1001,
    @CustomerID     = 201,
    @ProductID      = 501,
    @Quantity       = 10,
    @DiscountAmount = 50.00;

-- Execute with invalid discount
EXEC Sales.usp_ProcessOrderWithDiscount
    @OrderID        = 1001,
    @CustomerID     = 201,
    @ProductID      = 501,
    @Quantity       = 10,
    @DiscountAmount = 2000.00; -- Exceeds order total

Results

  • Valid discount: All steps (order, inventory, ledger, discount) are committed.
  • Invalid discount: Discount step is rolled back, but order, inventory, and ledger updates are committed.


7. Real-World Business Scenarios in ERP Systems
Scenario 1: Sales Order Processing
Business Context: A customer places an order, updating the Orders, OrderDetails, and Inventory tables, and generating an invoice. Savepoints ensure optional steps (e.g., discounts) can fail without rolling back the entire order.

Code Example
CREATE PROCEDURE Sales.usp_ProcessOrder
    @OrderID        INT,
    @CustomerID     INT,
    @ProductID      INT,
    @Quantity       INT,
    @DiscountAmount DECIMAL(18, 2)
AS
BEGIN
    SET NOCOUNT ON;

    BEGIN TRY
        BEGIN TRANSACTION;

        -- Validate inputs
        IF NOT EXISTS (
            SELECT 1
            FROM Sales.Customers
            WHERE CustomerID = @CustomerID
        )
            THROW 50016, 'Invalid CustomerID.', 1;

        IF NOT EXISTS (
            SELECT 1
            FROM Inventory
            WHERE ProductID = @ProductID
              AND Quantity >= @Quantity
        )
            THROW 50017, 'Insufficient inventory.', 1;

        -- Insert order
        SAVE TRANSACTION OrderInsert;
        INSERT INTO Sales.Orders (OrderID, CustomerID, OrderDate, TotalAmount)
        VALUES (
            @OrderID,
            @CustomerID,
            GETDATE(),
            @Quantity * (
                SELECT UnitPrice
                FROM Products
                WHERE ProductID = @ProductID
            )
        );

        -- Insert order details
        SAVE TRANSACTION OrderDetailsInsert;
        INSERT INTO Sales.OrderDetails (OrderID, ProductID, Quantity, UnitPrice)
        VALUES (
            @OrderID,
            @ProductID,
            @Quantity,
            (SELECT UnitPrice FROM Products WHERE ProductID = @ProductID)
        );

        -- Update inventory
        SAVE TRANSACTION InventoryUpdate;
        UPDATE Inventory
        SET Quantity = Quantity - @Quantity
        WHERE ProductID = @ProductID;

        -- Apply discount (optional)
        SAVE TRANSACTION DiscountApply;
        IF @DiscountAmount > (
            SELECT TotalAmount
            FROM Sales.Orders
            WHERE OrderID = @OrderID
        )
            THROW 50018, 'Discount amount exceeds order total.', 1;

        INSERT INTO Sales.Discounts (OrderID, DiscountAmount)
        VALUES (@OrderID, @DiscountAmount);

        COMMIT TRANSACTION;
        SELECT 'Order processed successfully.' AS Result;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0
        BEGIN
            IF ERROR_NUMBER() = 50018
            BEGIN
                ROLLBACK TRANSACTION DiscountApply;
                COMMIT TRANSACTION;
                SELECT 'Discount failed, but order processed.' AS Result;
            END
            ELSE
            BEGIN
                ROLLBACK TRANSACTION;

                INSERT INTO dbo.ErrorLog (ErrorNumber, ErrorMessage, ErrorLine, ErrorProcedure)
                VALUES (
                    ERROR_NUMBER(),
                    ERROR_MESSAGE(),
                    ERROR_LINE(),
                    ERROR_PROCEDURE()
                );

                THROW;
            END
        END
    END CATCH
END;

Scenario 2: Purchase Order and Inventory Update
Business Context: Receiving goods from a vendor updates the Purchase Orders and Inventory tables. Savepoints allow partial rollback if quality checks fail.

Code Example
CREATE PROCEDURE Purchase.usp_ReceiveGoods
    @PurchaseOrderID INT,
    @VendorID INT,
    @ProductID INT,
    @Quantity INT,
    @QualityStatus NVARCHAR(20)
AS
BEGIN
    SET NOCOUNT ON;

    BEGIN TRY
        BEGIN TRANSACTION;

        -- Validate inputs
        IF NOT EXISTS (
            SELECT 1
            FROM Vendors
            WHERE VendorID = @VendorID
        )
            THROW 50019, 'Invalid VendorID.', 1;

        -- Insert purchase order
        SAVE TRANSACTION PurchaseOrderInsert;
        INSERT INTO Purchase.PurchaseOrders (
            PurchaseOrderID,
            VendorID,
            OrderDate
        )
        VALUES (
            @PurchaseOrderID,
            @VendorID,
            GETDATE()
        );

        -- Update inventory
        SAVE TRANSACTION InventoryUpdate;
        UPDATE Inventory
        SET Quantity = Quantity + @Quantity
        WHERE ProductID = @ProductID;

        -- Perform quality check
        SAVE TRANSACTION QualityCheck;
        IF @QualityStatus = 'Rejected'
            THROW 50020, 'Quality check failed.', 1;

        INSERT INTO Purchase.QualityControl (
            PurchaseOrderID,
            ProductID,
            Status
        )
        VALUES (
            @PurchaseOrderID,
            @ProductID,
            @QualityStatus
        );

        COMMIT TRANSACTION;
        SELECT 'Goods received successfully.' AS Result;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0
        BEGIN
            IF ERROR_NUMBER() = 50020
            BEGIN
                ROLLBACK TRANSACTION QualityCheck;
                COMMIT TRANSACTION;
                SELECT 'Quality check failed, but purchase order and inventory updated.' AS Result;
            END
            ELSE
            BEGIN
                ROLLBACK TRANSACTION;

                INSERT INTO dbo.ErrorLog (
                    ErrorNumber,
                    ErrorMessage,
                    ErrorLine,
                    ErrorProcedure
                )
                VALUES (
                    ERROR_NUMBER(),
                    ERROR_MESSAGE(),
                    ERROR_LINE(),
                    ERROR_PROCEDURE()
                );

                THROW;
            END
        END
    END CATCH
END;

Scenario 3: Production Batch Processing
Business Context: A production batch consumes raw materials and produces finished goods, with savepoints for partial rollback if quality control fails (already covered in Section 6).

Scenario 4: Accounts Ledger Posting
Business Context: Posting a financial transaction involves debit and credit entries in the GeneralLedger table. Savepoints ensure partial success if one entry fails.

Code Example
CREATE PROCEDURE Accounts.usp_PostLedgerEntry
    @TransactionID    INT,
    @AccountIDDebit   INT,
    @AccountIDCredit  INT,
    @Amount           DECIMAL(18,2)
AS
BEGIN
    SET NOCOUNT ON;

    BEGIN TRY
        BEGIN TRANSACTION;

        -- Validate accounts
        IF NOT EXISTS (
            SELECT 1
            FROM Accounts.ChartOfAccounts
            WHERE AccountID IN (@AccountIDDebit, @AccountIDCredit)
        )
            THROW 50021, 'Invalid account ID.', 1;

        -- Post debit entry
        SAVE TRANSACTION DebitEntry;
        INSERT INTO Accounts.GeneralLedger (
            TransactionID,
            AccountID,
            DebitAmount,
            CreditAmount,
            TransactionDate
        )
        VALUES (
            @TransactionID,
            @AccountIDDebit,
            @Amount,
            0,
            GETDATE()
        );

        -- Post credit entry
        SAVE TRANSACTION CreditEntry;
        INSERT INTO Accounts.GeneralLedger (
            TransactionID,
            AccountID,
            DebitAmount,
            CreditAmount,
            TransactionDate
        )
        VALUES (
            @TransactionID,
            @AccountIDCredit,
            0,
            @Amount,
            GETDATE()
        );

        COMMIT TRANSACTION;
        SELECT 'Ledger entry posted successfully.' AS Result;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0
        BEGIN
            IF ERROR_NUMBER() = 50021
               AND ERROR_MESSAGE() LIKE '%@AccountIDCredit%'
            BEGIN
                -- Roll back only credit entry
                ROLLBACK TRANSACTION CreditEntry;
                COMMIT TRANSACTION;
                SELECT 'Credit entry failed, but debit entry committed.' AS Result;
            END
            ELSE
            BEGIN
                ROLLBACK TRANSACTION;

                INSERT INTO dbo.ErrorLog (
                    ErrorNumber,
                    ErrorMessage,
                    ErrorLine,
                    ErrorProcedure
                )
                VALUES (
                    ERROR_NUMBER(),
                    ERROR_MESSAGE(),
                    ERROR_LINE(),
                    ERROR_PROCEDURE()
                );

                THROW;
            END
        END
    END CATCH
END;

Scenario 5: HR Payroll Processing
Business Context: Payroll processing calculates salaries and deductions, updating Payroll and Employee Accounts. Savepoints allow partial rollback if deductions fail.

Code Example
CREATE PROCEDURE HR.usp_ProcessPayroll
    @EmployeeID INT,
    @PayPeriod DATE,
    @GrossSalary DECIMAL(18, 2),
    @TaxRate DECIMAL(5, 2)
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @Deductions DECIMAL(18, 2) = 0;
    DECLARE @NetPay DECIMAL(18, 2);

    BEGIN TRY
        BEGIN TRANSACTION;

        -- Calculate deductions
        SAVE TRANSACTION DeductionCalculation;
        IF @TaxRate > 0.5
            THROW 50022, 'Tax rate exceeds maximum allowable.', 1;

        SET @Deductions = @GrossSalary * @TaxRate;
        SET @NetPay = @GrossSalary - @Deductions;

        -- Insert payroll record
        SAVE TRANSACTION PayrollInsert;
        INSERT INTO HR.Payroll (EmployeeID, PayPeriod, GrossSalary, Deductions, NetPay)
        VALUES (@EmployeeID, @PayPeriod, @GrossSalary, @Deductions, @NetPay);

        -- Update employee account
        SAVE TRANSACTION AccountUpdate;
        UPDATE HR.EmployeeAccounts
        SET Balance = Balance + @NetPay
        WHERE EmployeeID = @EmployeeID;

        COMMIT TRANSACTION;
        SELECT 'Payroll processed successfully.' AS Result;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0
        BEGIN
            IF ERROR_NUMBER() = 50022
            BEGIN
                ROLLBACK TRANSACTION DeductionCalculation;
                COMMIT TRANSACTION;

                SELECT 'Deduction calculation failed, but no payroll or account updates applied.' AS Result;
            END
            ELSE
            BEGIN
                ROLLBACK TRANSACTION;
                INSERT INTO dbo.ErrorLog (ErrorNumber, ErrorMessage, ErrorLine, ErrorProcedure)
                VALUES (
                    ERROR_NUMBER(),
                    ERROR_MESSAGE(),
                    ERROR_LINE(),
                    ERROR_PROCEDURE()
                );
                THROW;
            END
        END
    END CATCH
END;


Scenario 6: SCM Shipment Tracking
Business Context: Tracking a shipment updates the Shipments table and adjusts inventory. Savepoints allow rollback of inventory updates if shipment status fails.

Code Example
CREATE PROCEDURE SCM.usp_TrackShipment
    @ShipmentID INT,
    @WarehouseID INT,
    @ProductID INT,
    @Quantity INT,
    @Status NVARCHAR(20)
AS
BEGIN
    SET NOCOUNT ON;

    BEGIN TRY
        BEGIN TRANSACTION;

        -- Validate inputs
        IF NOT EXISTS (
            SELECT 1
            FROM SCM.Warehouses
            WHERE WarehouseID = @WarehouseID
        )
            THROW 50023, 'Invalid WarehouseID.', 1;

        IF NOT EXISTS (
            SELECT 1
            FROM Inventory
            WHERE ProductID = @ProductID
        )
            THROW 50024, 'Invalid ProductID.', 1;

        -- Update shipment status
        SAVE TRANSACTION ShipmentUpdate;

        UPDATE SCM.Shipments
        SET Status = @Status,
            DeliveryDate = GETDATE()
        WHERE ShipmentID = @ShipmentID;

        -- Update inventory
        SAVE TRANSACTION InventoryUpdate;

        IF @Status = 'Rejected'
            THROW 50025, 'Shipment rejected.', 1;

        UPDATE Inventory
        SET Quantity = Quantity + @Quantity
        WHERE ProductID = @ProductID
          AND WarehouseID = @WarehouseID;

        COMMIT TRANSACTION;

        SELECT 'Shipment tracked successfully.' AS Result;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0
        BEGIN
            IF ERROR_NUMBER() = 50025
            BEGIN
                ROLLBACK TRANSACTION InventoryUpdate;
                COMMIT TRANSACTION;

                SELECT 'Shipment rejected, but status updated.' AS Result;
            END
            ELSE
            BEGIN
                ROLLBACK TRANSACTION;

                INSERT INTO dbo.ErrorLog (
                    ErrorNumber,
                    ErrorMessage,
                    ErrorLine,
                    ErrorProcedure
                )
                VALUES (
                    ERROR_NUMBER(),
                    ERROR_MESSAGE(),
                    ERROR_LINE(),
                    ERROR_PROCEDURE()
                );

                THROW;
            END
        END
    END CATCH
END;

8. Advanced Techniques
Error Handling in Stored Procedures and Triggers
Triggers often enforce business rules in ERP systems, and error handling ensures robustness.

Example: Trigger with Savepoint.
CREATE TRIGGER Sales.trg_OrderInsert
ON Sales.Orders
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;

    BEGIN TRY
        BEGIN TRANSACTION;

        -- Note: this assumes single-row inserts; for multi-row INSERTs,
        -- use a set-based INSERT ... SELECT from the inserted pseudo-table instead
        DECLARE @OrderID INT,
                @CustomerID INT;

        SELECT
            @OrderID = OrderID,
            @CustomerID = CustomerID
        FROM inserted;

        SAVE TRANSACTION AuditInsert;

        IF NOT EXISTS (
            SELECT 1
            FROM Sales.Customers
            WHERE CustomerID = @CustomerID
        )
            THROW 50026, 'Invalid CustomerID in order.', 1;

        INSERT INTO Sales.OrderHistory (OrderID, Status, ChangeDate)
        VALUES (@OrderID, 'Created', GETDATE());

        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0
        BEGIN
            IF ERROR_NUMBER() = 50026
            BEGIN
                ROLLBACK TRANSACTION AuditInsert;
                COMMIT TRANSACTION;
            END
            ELSE
            BEGIN
                ROLLBACK TRANSACTION;

                INSERT INTO dbo.ErrorLog (
                    ErrorNumber,
                    ErrorMessage,
                    ErrorLine,
                    ErrorProcedure
                )
                VALUES (
                    ERROR_NUMBER(),
                    ERROR_MESSAGE(),
                    ERROR_LINE(),
                    ERROR_PROCEDURE()
                );

                THROW;
            END
        END
    END CATCH
END;


Transaction Management in High-Concurrency Systems
Use SNAPSHOT isolation to reduce locking in high-concurrency ERP systems.

Example: Snapshot Isolation.
ALTER DATABASE ERP
    SET ALLOW_SNAPSHOT_ISOLATION ON;

ALTER DATABASE ERP
    SET READ_COMMITTED_SNAPSHOT ON;

-- Set the isolation level before the transaction starts
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;

BEGIN TRANSACTION;

UPDATE Sales.Orders
SET Status = 'Processed'
WHERE OrderID = 100;

COMMIT TRANSACTION;


Using XACT_STATE and XACT_ABORT

    XACT_STATE(): Returns 0 (no transaction), 1 (committable), or -1 (uncommittable).
    SET XACT_ABORT ON: Rolls back the transaction on any error.

Example: XACT_STATE with Savepoint.

BEGIN TRY
    SET XACT_ABORT ON;
    BEGIN TRANSACTION;

    INSERT INTO Sales.Orders (OrderID, CustomerID, OrderDate)
    VALUES (4, 103, GETDATE());

    SAVE TRANSACTION OrderInsert;

    INSERT INTO Sales.OrderDetails (OrderID, ProductID, Quantity)
    VALUES (4, 999, 10); -- Invalid ProductID

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    -- With XACT_ABORT ON, most errors mark the transaction uncommittable,
    -- so XACT_STATE() is usually -1 here and a full rollback is required
    IF XACT_STATE() = -1
        ROLLBACK TRANSACTION;
    ELSE IF XACT_STATE() = 1
    BEGIN
        ROLLBACK TRANSACTION OrderInsert;
        COMMIT TRANSACTION;
    END

    INSERT INTO dbo.ErrorLog (
        ErrorNumber,
        ErrorMessage,
        ErrorLine,
        ErrorProcedure
    )
    VALUES (
        ERROR_NUMBER(),
        ERROR_MESSAGE(),
        ERROR_LINE(),
        ERROR_PROCEDURE()
    );

    THROW;
END CATCH

Distributed Transactions with MS DTC
Example
BEGIN DISTRIBUTED TRANSACTION;

UPDATE Server1.ERP.dbo.Orders
SET Status = 'Shipped'
WHERE OrderID = 100;

UPDATE Server2.ERP.dbo.Inventory
SET Quantity = Quantity - 10
WHERE ProductID = 100;

COMMIT TRANSACTION;

9. Pros and Cons of SQL Server Error Handling and Transaction Management

Pros

  • Robustness: TRY…CATCH and savepoints ensure reliable error recovery.
  • Flexibility: Savepoints and nested transactions support complex workflows.
  • Detailed Error Information: Functions like ERROR_MESSAGE() aid debugging.
  • ACID Compliance: Guarantees data integrity.
  • Concurrency Control: Isolation levels manage multi-user environments.


Cons

  • Performance Overhead: Transactions and savepoints increase log writes.
  • Complexity: Nested transactions and savepoints require careful management.
  • Deadlock Risk: Long transactions increase contention.
  • Learning Curve: Advanced features like savepoints and XACT_STATE demand expertise.

10. Alternatives to Native SQL Server Mechanisms
Application-Level Error Handling
Example: C# with ADO.NET
using (var connection = new SqlConnection("connection_string"))
{
    connection.Open();
    var transaction = connection.BeginTransaction();

    try
    {
        var command = new SqlCommand("INSERT INTO Sales.Orders ...", connection, transaction);
        command.ExecuteNonQuery();

        transaction.Commit();
    }
    catch (SqlException ex)
    {
        transaction.Rollback();
        // Log error
        throw;
    }
}


Pros: Centralized error handling, integration with application logging.
Cons: Inconsistent transaction management, additional application logic.

ORM Frameworks (Entity Framework)

Example
using (var context = new ERPContext())
{
    using (var transaction = context.Database.BeginTransaction())
    {
        try
        {
            context.Orders.Add(new Order { OrderID = 5, CustomerID = 104 });
            context.SaveChanges();
            transaction.Commit();
        }
        catch
        {
            transaction.Rollback();
            throw;
        }
    }
}

Pros: Simplifies database access. Cons: Less control over transactions, potential performance overhead.

NoSQL Databases
NoSQL databases like MongoDB offer eventual consistency for high-throughput scenarios. Pros: Flexible schemas, scalability. Cons: Limited ACID support, not suited for ERP’s relational data.

11. Performance Considerations and Optimization
Impact of Transactions on Performance.

  • Locking: Transactions hold locks, potentially blocking users.
  • Log Overhead: Savepoints and transactions increase transaction log writes.
  • Long-Running Transactions: Increase deadlock risk.


Indexing and Locking Strategies

  • Use covering indexes to reduce scan times (a small sketch follows this list).
  • Implement row-level locking to minimize contention.
  • Use NOLOCK for non-critical reads, but avoid dirty reads in critical operations.
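
For example, here is a minimal sketch of a covering index on the Sales.Orders table used in earlier examples (the column choices are illustrative, not a prescription):
-- The key column supports the WHERE clause; the INCLUDEd columns let
-- queries that read CustomerID, OrderDate, and Status be answered from the index alone
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID_Covering
    ON Sales.Orders (CustomerID)
    INCLUDE (OrderDate, Status);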

Monitoring and Tuning

  • Use SQL Server Profiler or Extended Events to monitor transaction performance.
  • Analyze wait statistics (e.g., LCK_M_IX).
  • Leverage Query Store to identify and tune slow queries (a sample query follows this list).
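
A minimal sketch, assuming Query Store is enabled on the database, that lists the slowest queries by average duration:
SELECT TOP 10
    qt.query_sql_text,
    rs.avg_duration / 1000.0 AS avg_duration_ms,
    rs.count_executions
FROM sys.query_store_query_text AS qt
JOIN sys.query_store_query AS q
    ON q.query_text_id = qt.query_text_id
JOIN sys.query_store_plan AS p
    ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats AS rs
    ON rs.plan_id = p.plan_id
ORDER BY rs.avg_duration DESC;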

12. Case Study
Building a Robust ERP Transaction Framework: Designing a Scalable Framework
This framework centralizes error handling, transaction management, and savepoint logic for a multi-module ERP system. Components:

ErrorLog Table (defined earlier).

TransactionLog Table.
CREATE TABLE Framework.TransactionLog (
    LogID INT IDENTITY(1,1) PRIMARY KEY,
    Module VARCHAR(50),
    Operation VARCHAR(100),
    Status VARCHAR(20),
    ExecutionDate DATETIME DEFAULT GETDATE()
);


Transaction Template with Savepoints:
CREATE PROCEDURE Framework.usp_ExecuteTransaction
    @Module VARCHAR(50),
    @Operation VARCHAR(100),
    @Parameters NVARCHAR(MAX)
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @RetryCount INT = 0;
    DECLARE @MaxRetries INT = 3;

    WHILE @RetryCount <= @MaxRetries
    BEGIN
        BEGIN TRY
            BEGIN TRANSACTION;

            IF @Module = 'Sales'
            BEGIN
                SAVE TRANSACTION SalesOperation;
                EXEC Sales.usp_ProcessOrder @Parameters;
            END
            ELSE IF @Module = 'Purchase'
            BEGIN
                SAVE TRANSACTION PurchaseOperation;
                EXEC Purchase.usp_ReceiveGoods @Parameters;
            END

            COMMIT TRANSACTION;

            INSERT INTO Framework.TransactionLog (Module, Operation, Status, ExecutionDate)
            VALUES (@Module, @Operation, 'Success', GETDATE());

            BREAK;
        END TRY
        BEGIN CATCH
            IF @@TRANCOUNT > 0
            BEGIN
                IF ERROR_NUMBER() = 1205 -- Deadlock
                BEGIN
                    ROLLBACK TRANSACTION; -- release locks and clear the doomed transaction before retrying
                    SET @RetryCount += 1;
                    WAITFOR DELAY '00:00:01';
                    CONTINUE;
                END
                ELSE IF ERROR_NUMBER() IN (50015, 50018, 50020, 50025) -- Savepoint-related errors
                BEGIN
                    -- ROLLBACK TRANSACTION cannot take an expression; build the savepoint name in a variable
                    DECLARE @SavepointName NVARCHAR(32) = @Module + N'Operation';
                    ROLLBACK TRANSACTION @SavepointName;
                    COMMIT TRANSACTION;

                    INSERT INTO Framework.TransactionLog (Module, Operation, Status, ExecutionDate)
                    VALUES (@Module, @Operation, 'Partial Success', GETDATE());

                    SELECT 'Operation partially succeeded due to savepoint rollback.' AS Result;
                    BREAK;
                END
                ELSE
                BEGIN
                    ROLLBACK TRANSACTION;

                    INSERT INTO dbo.ErrorLog (ErrorNumber, ErrorMessage, ErrorLine, ErrorProcedure)
                    VALUES (ERROR_NUMBER(), ERROR_MESSAGE(), ERROR_LINE(), ERROR_PROCEDURE());

                    INSERT INTO Framework.TransactionLog (Module, Operation, Status, ExecutionDate)
                    VALUES (@Module, @Operation, 'Failed', GETDATE());

                    THROW;
                END
            END
        END CATCH
    END
END;


Usage
EXEC Framework.usp_ExecuteTransaction
    @Module = 'Sales',
    @Operation = 'ProcessOrder',
    @Parameters = 'OrderID=1002,CustomerID=202,ProductID=501,Quantity=10,DiscountAmount=50.00';

Benefits

  • Consistent error handling and savepoint logic.
  • Retry mechanism for deadlocks.
  • Audit trail for all transactions.


13. Common Pitfalls and How to Avoid Them

  • Uncommitted Transactions: Check @@TRANCOUNT before exiting.
  • Overusing Savepoints: Limit to critical steps.
  • Ignoring Deadlocks: Implement retry logic for error 1205.
  • Poor Error Messages: Use descriptive messages with THROW.
  • Incorrect Isolation Levels: Test for optimal balance.


14. Conclusion

Crafting Resilient Systems for the Future
In the dynamic world of enterprise systems, where every transaction powers a business decision, SQL Server error handling and transaction management are your keys to building resilient, scalable, and trustworthy databases. From the precision of TRY…CATCH to the flexibility of transaction savepoints, these tools empower you to navigate the complexities of ERP systems with confidence.

My 15+ years as a SQL Server developer have taught me that true mastery lies in anticipating failures, embracing best practices, and designing systems that thrive under pressure. Whether you’re orchestrating sales orders, managing supply chains, or ensuring financial accuracy, the strategies in this guide, bolstered by real-world examples, savepoint techniques, and performance optimizations, will elevate your craft. As you implement these principles, experiment with savepoints in your workflows, test edge cases, and refine your approach to build systems that not only meet today’s demands but also shape the future of enterprise excellence.

Let’s continue this journey of innovation and precision. Share your experiences, connect with me, and let’s build databases that power the world!

15. References and Further Reading

  • Microsoft Docs: TRY…CATCH
  • Microsoft Docs: Transactions
  • Book: SQL Server Internals by Kalen Delaney
  • Blog: Erland Sommarskog’s Error and Transaction Handling in SQL Server

HostForLIFE.eu SQL Server 2022 Hosting
HostForLIFE.eu is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.




SQL Server Hosting - HostForLIFE :: Essential SQL Commands Every Beginner Should Know

clock August 6, 2025 08:50 by author Peter

Don't worry if you're just getting started. I'll go over the fundamental SQL commands that every novice should know in this blog. These commands will provide you with a solid foundation regardless of whether you are developing a basic application or are just learning about SQL Server.

What is SQL?

SQL stands for Structured Query Language. It is a programming language for storing and processing information in a relational database.

With SQL, you can,

  • Create and manage databases
  • Add and retrieve data
  • Update or delete records
  • Control access to data

SQL Commands
1. DDL – Data Definition Language

What it does: DDL commands define and modify the structure of database objects like tables, schemas, or databases.

Common DDL Commands

  • CREATE
  • ALTER
  • DROP
  • TRUNCATE

Example
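A minimal sketch using a hypothetical Students table (the table and column names are illustrative):
-- CREATE: define a new table
CREATE TABLE Students (
    StudentID INT PRIMARY KEY,
    FullName  NVARCHAR(100)
);

-- ALTER: change the table's structure
ALTER TABLE Students ADD EnrolledOn DATE;

-- TRUNCATE: remove all rows but keep the table
TRUNCATE TABLE Students;

-- DROP: remove the table entirely
DROP TABLE Students;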

Note. DDL commands are auto-committed — once executed, you cannot roll them back.

2. DML – Data Manipulation Language

What it does: DML commands let you insert, update, or delete actual data inside your tables.

Common DML Commands

  • INSERT
  • UPDATE
  • DELETE

Example
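A minimal sketch, again using the hypothetical Students table:
-- INSERT: add a new row
INSERT INTO Students (StudentID, FullName) VALUES (1, 'Peter');

-- UPDATE: change existing rows (always double-check the WHERE clause)
UPDATE Students SET FullName = 'Peter Smith' WHERE StudentID = 1;

-- DELETE: remove rows that match the condition
DELETE FROM Students WHERE StudentID = 1;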

Note. Use WHERE carefully: forgetting it means the statement can update or delete every row in the table.

3. DQL – Data Query Language

What it does: DQL is all about retrieving data from the database using queries.

Main DQL Command
SELECT

Example
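A minimal sketch using the hypothetical Students table from above:
SELECT StudentID, FullName, EnrolledOn
FROM Students
WHERE FullName LIKE 'P%'
ORDER BY EnrolledOn DESC;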

This is the most-used category for anyone working with reports, dashboards, or APIs.

4. TCL – Transaction Control Language

What it does: TCL commands help manage transactions in SQL. These are useful when you want to ensure multiple operations succeed or fail together.

Common TCL Commands
  • BEGIN TRANSACTION
  • COMMIT
  • ROLLBACK
  • SAVE TRANSACTION (savepoints, optional/advanced)


Example
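A minimal sketch of a money transfer between two rows of a hypothetical Accounts table, where both updates must succeed or fail together:
BEGIN TRY
    BEGIN TRANSACTION;

    UPDATE Accounts SET Balance = Balance - 100 WHERE AccountID = 1;
    UPDATE Accounts SET Balance = Balance + 100 WHERE AccountID = 2;

    COMMIT TRANSACTION;   -- both updates become permanent together
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;   -- undo both updates on any error
END CATCH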

Best used when making multiple changes that must all succeed or fail together.
5. DCL – Data Control Language

What it does: DCL commands are about access control and permissions in the database.

Common DCL Commands
  • GRANT
  • REVOKE


Example
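A minimal sketch granting and then revoking SELECT permission for a hypothetical database user named ReportUser on the Students table from the earlier examples:
-- GRANT: allow ReportUser to read from the Students table
GRANT SELECT ON Students TO ReportUser;

-- REVOKE: take that permission back
REVOKE SELECT ON Students FROM ReportUser;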

Helpful for controlling user access in environments where security is important, such as production systems.

Conclusion
Understanding SQL command categories like DDL, DML, DQL, TCL, and DCL makes it much easier to work with databases. Whether you're creating tables, inserting data, running queries, or managing transactions, knowing which command to use helps you write better and safer SQL.

HostForLIFE.eu SQL Server 2022 Hosting
HostForLIFE.eu is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.



SQL Server Hosting - HostForLIFE :: The Client Statistics for SQL Server

clock August 5, 2025 07:48 by author Peter

Performance analysis and query optimization are crucial duties while working with SQL Server. To assist developers and DBAs in troubleshooting and monitoring SQL query performance, SQL Server Management Studio (SSMS) comes with a number of built-in tools. Client statistics are one such underappreciated yet effective instrument. 

In SSMS, what are client statistics?
In SSMS, client statistics give specific details about how a query operates from the standpoint of the client (SSMS), as opposed to the SQL Server engine. Client statistics provide information on the duration of each client-side step and aid in comparing different executions of the same query, whereas execution plans describe how the query is carried out.

How to Enable and Use Client Statistics in SSMS?

  • Open SSMS and connect to your SQL Server.
  • Open a New Query window.
  • Click on the Query menu.
  • Select Include Client Statistics (or press Shift + Alt + S).
  • Run your SQL query.

You’ll notice a new tab named "Client Statistics" appears next to the "Results" and "Messages" tabs.
It shows results from multiple runs so you can compare performance between executions. Note: client statistics display up to 10 trials; once the limit is reached, the oldest run is dropped and the newest appears as the latest trial.

Key Metrics in Client Statistics

1. Query Profile Statistics

  • Number of SELECT statements: count of SELECT statements executed.
  • Rows returned by SELECT statements: number of rows returned.
  • Network packets sent/received: useful for analyzing network impact.
  • TDS (Tabular Data Stream) packet count: helps in understanding low-level client-server communication.

2. Time Statistics


  • Client processing time: time taken by SSMS to process and display the result.
  • Total execution time: overall time, including server processing and client overhead.
  • Wait time on server replies: time spent waiting for the server to respond.

3. Aggregate Totals
When you run the same query multiple times (e.g., for performance comparison), SSMS shows:

  • Average: mean of all runs for that metric.
  • Last: metrics for the most recent run.
  • High: highest value among all runs.
  • Low: lowest value among all runs.

Why Use Client Statistics?
Benefits

  • Helps in query optimization.
  • Assists in identifying network delays.
  • Aids in performance tuning during development or testing.
  • Enables comparative analysis of different query versions.

Real-World Scenario Example
Suppose you're optimizing a stored procedure. You can,

  • Enable client statistics.
  • Run the stored procedure once with your original query.
  • Modify your query (e.g., by adding an index or changing a join).
  • Re-run and compare the Total execution time, Network packets, and Rows returned.

This helps you quantify improvements or spot regressions easily.

Limitations to Keep in Mind

  • Client Statistics reflect client-side performance, not full server-side analysis.
  • They don't replace Execution Plans or Dynamic Management Views (DMVs).
  • Useful mainly during development or testing, not in production environments.

Conclusion
Client Statistics in SSMS are a simple yet powerful feature for understanding query performance from the client perspective. Though often overlooked, they are valuable for developers and DBAs working on performance tuning and query optimization. Next time you run a SQL query in SSMS, give Client Statistics a try; it might give you insights you weren't expecting!

HostForLIFE.eu SQL Server 2022 Hosting

HostForLIFE.eu is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.



SQL Server Hosting - HostForLIFE :: How SQL Server Jobs Are Created and Managed?

clock July 30, 2025 08:36 by author Peter

Are you sick of using your SQL Server database to complete the same activities over and over again? Are you trying to find a productive solution to automate repetitive tasks? The solution lies in SQL Server Jobs! With examples to demonstrate their usefulness and efficiency, we will walk through the steps of establishing and managing SQL Server jobs in this post.

SQL Server Jobs: What Are They?
SQL Server Jobs are automated procedures or operations that execute on a predetermined schedule or in response to specific triggers. The SQL Server Agent service manages these jobs and makes sure they run on time. By creating SQL Server Jobs, you can minimize human error, save time, and keep your database system running smoothly.

Step 1: Turn on the SQL Server Agent
Make sure your instance has the SQL Server Agent enabled before you begin creating jobs. To confirm this, launch SQL Server Management Studio (SSMS) and go to the Object Explorer's "SQL Server Agent" node. To launch the agent, right-click on it and choose "Start."

Step 2: Establish a Fresh SQL Server Task

Take these actions to create a new job:

The "SQL Server Agent" node in SSMS can be expanded. Then, right-click on "Jobs," and choose "New Job."

Give your job a descriptive name and provide an optional description.

Under the "Steps" section, click "New" to add a step to the job. Each step represents a specific action or task that the job will execute.

In the "Step Properties" window, specify the step name, choose the type of action (e.g., T-SQL script, PowerShell script, etc.), and enter the required details. For example, if your job needs to execute a T-SQL script, enter the SQL code in the "Command" box.

Configure additional step options such as the database context, output file options, and error handling settings.
Under the "Schedules" section, click "New" to add a job schedule. In the Schedule Properties window, specify the schedule name, and choose schedule type and frequency.

Backup Database Job
Let's create a simple SQL Server Job to perform a daily backup of a database. The GUI steps are listed below, followed by a T-SQL sketch that creates the same job.

  • Create a new SQL Server Job and name it "DailyBackupJob."
  • Add a step to the job and name it "BackupDatabase."
  • Set the type of action to "Transact-SQL script" and enter the following T-SQL script in the "Command" box:

USE YourDatabaseName;
BACKUP DATABASE YourDatabaseName TO DISK = 'C:\Backup\YourDatabaseName.bak' WITH INIT;

  • Configure any additional options based on your requirements.
  • Under the "Schedules" section, click "New" to schedule the job's frequency. You can set it to run daily, weekly, or at specific intervals.
  • Specify the start date and time for the job to commence.
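
For completeness, here is a minimal T-SQL sketch that creates the same job through the msdb system procedures (the database name, backup path, and schedule values are illustrative):
USE msdb;
GO

-- Create the job container
EXEC dbo.sp_add_job @job_name = N'DailyBackupJob';

-- Add the backup step
EXEC dbo.sp_add_jobstep
    @job_name      = N'DailyBackupJob',
    @step_name     = N'BackupDatabase',
    @subsystem     = N'TSQL',
    @database_name = N'YourDatabaseName',
    @command       = N'BACKUP DATABASE YourDatabaseName TO DISK = ''C:\Backup\YourDatabaseName.bak'' WITH INIT;';

-- Run the job every day at 01:00
EXEC dbo.sp_add_jobschedule
    @job_name          = N'DailyBackupJob',
    @name              = N'DailyAt1AM',
    @freq_type         = 4,        -- daily
    @freq_interval     = 1,        -- every 1 day
    @active_start_time = 010000;

-- Target the local server so SQL Server Agent will run it
EXEC dbo.sp_add_jobserver @job_name = N'DailyBackupJob', @server_name = N'(local)';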

Step 3. Managing SQL Server Jobs
Once you've created a SQL Server Job, it's essential to manage it effectively. Here are some management tips:

  • Monitor Execution: Regularly check the job's execution history and review any errors or warnings that may occur. This will help you identify and resolve issues promptly.
  • Modify Job Settings: If your requirements change, you can modify the job's steps, schedule, or other settings. Right-click on the job in SSMS and select "Properties" to make the necessary adjustments.
  • Enable/Disable Jobs: Temporarily enable or disable jobs when needed. Right-click on the job and select "Enable" or "Disable."
  • Backup Job Definitions: Export and save your job definitions, especially if you need to move them to another server or restore them in case of accidental deletion.

Conclusion
SQL Server Jobs provides an excellent way to automate repetitive tasks, improve database maintenance, and boost productivity. By following the steps outlined in this blog, you can create and manage SQL Server Jobs efficiently. So, go ahead and automate your routine tasks, and enjoy the benefits of a well-organized and smoothly-running database environment!

HostForLIFE.eu SQL Server 2022 Hosting
HostForLIFE.eu is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.



SQL Server Hosting - HostForLIFE :: Temp Tables vs CTEs in SQL Server: What You Should Really Know

clock July 24, 2025 08:11 by author Peter

It's likely that you have considered this if you have ever built a sophisticated SQL query in SQL Server:

“Should I use a CTE here or just dump it into a temp table?”

Great question, and you’re not alone. CTEs (Common Table Expressions) and Temporary Tables both let you break complex logic down into steps, but they serve different purposes under the hood. In this post, we'll break it all down in plain English, with examples and use cases, so you can confidently select the best option for your SQL journey.

What Is a CTE?
A CTE (Common Table Expression) is like a temporary named result set. You define it once at the top of your query, and then treat it like a virtual table.
Types of CTEs

There are three types of CTEs:

  • Simple CTE: Used to give a name to a query result.
  • Multiple CTE: You can define more than one in a single query.
  • Recursive CTE: A CTE that refers to itself, helpful in working with hierarchical data.


Syntax & Example
WITH TopSales AS (
    SELECT TOP 5 EmployeeId, SUM(SalesAmount) AS TotalSales
    FROM Sales
    GROUP BY EmployeeId
    ORDER BY TotalSales DESC
)
SELECT * FROM TopSales;


Think of it like giving a nickname to a subquery. It’s elegant, readable, and vanishes as soon as the query finishes.

Behind the Scenes

  • CTEs are not stored in memory.
  • SQL Server optimizes them as part of the main query, just like inlining code.
  • You can’t index a CTE, and it can only be referenced within the single statement that defines it.


A Quick Peek into What More CTEs Can Do

So far, we’ve looked at simple CTEs — but they can do a bit more. Two cool things to know:

1. You can use many CTEs together
You can define many CTEs in a query by separating them with commas. It is useful when you want to create step-by-step logic in a clean, readable format.

Example: 
WITH SalesSummary AS (
    SELECT EmployeeId, SUM(SalesAmount) AS TotalSales
    FROM Sales
    GROUP BY EmployeeId
),
TopPerformers AS (
    SELECT * FROM SalesSummary WHERE TotalSales > 50000
)
SELECT * FROM TopPerformers;


2. Recursive CTEs Exist Too!
CTEs can also reference themselves — this is called a recursive CTE. They're especially useful for hierarchical data like org charts, categories, or any "parent-child" structures.

Example:
WITH OrgChart AS (
    SELECT EmployeeId, ManagerId FROM Employees WHERE ManagerId IS NULL
    UNION ALL
    SELECT e.EmployeeId, e.ManagerId
    FROM Employees e
    INNER JOIN OrgChart c ON e.ManagerId = c.EmployeeId
)
SELECT * FROM OrgChart;


Don't worry if it appears advanced; the idea is that this CTE keeps repeating until the entire organizational hierarchy has been built. We'll go over this in more detail in a future article!

What is a Temporary Table?

A temporary table is exactly what it sounds like: a table that exists just temporarily within your SQL Server session.

You use it when you wish to store and manipulate intermediate data without permanently storing it in your database.
Types of Temporary Tables

There are two types of temporary tables:

  • Local Temporary Table (#Temp): Only visible within your session or procedure. It disappears automatically when your session ends.
  • Global Temporary Table (##Temp): Visible to all sessions. These are less commonly used and should be handled with care, especially in multi-user environments.

Syntax & Example
Let’s say we want to find the top 5 employees with the highest total sales — here’s how you can do it using a temporary table:
CREATE TABLE #TopSales (
    EmployeeId INT,
    TotalSales MONEY
);

INSERT INTO #TopSales
SELECT TOP 5 EmployeeId, SUM(SalesAmount)
FROM Sales
GROUP BY EmployeeId
ORDER BY SUM(SalesAmount) DESC;

SELECT * FROM #TopSales;


This table behaves just like a regular table, you can:

  • Insert and select data from it.
  • Add indexes for faster performance.
  • Join it with other tables.
  • Even update or delete rows from it.


Scope and Lifetime
The scope and lifetime of temp tables work like this:

  • Local temp tables (#Temp) vanish automatically when your session or procedure ends.
  • Global temp tables (##Temp) disappear only after all active sessions using them are closed.
  • Both are stored in the special tempdb system database behind the scenes.

Do They Impact Performance?
Temporary tables are quite useful, especially when working with huge data sets or breaking complex queries into steps. However, as with everything else, overusing them or failing to clean up can lead to performance concerns.

Tip: You can manually drop a temp table when you're done using it — though SQL Server will often do this for you.
DROP TABLE #TopSales;

Side Note: You might hear about "table variables" too, they’re similar to temp tables but have some key differences. We’ll save that comparison for another article.

CTE vs Temp Table: The Key Differences

  • Lifetime: a CTE lasts only for the single query; a temp table lasts for the entire session or procedure.
  • Reusability: a CTE can only be referenced within the statement that defines it (if you need the result in multiple queries, use a temp table); a temp table can be reused across multiple queries.
  • Indexing: not possible on a CTE; a temp table can have indexes added.
  • Performance: CTEs are better for simple, readable queries; temp tables are better for large or reused datasets.
  • Debugging: harder with a CTE because it vanishes after the run; easier with a temp table since you can SELECT from it at any point.
  • Memory usage: a CTE doesn’t physically store data; a temp table stores data in tempdb.


When to Use CTEs?
Use a CTE when:

  • You want cleaner, more readable queries.
  • You’re chaining multiple subqueries (especially recursive ones).
  • The data is small or used once.
  • You don’t need to index or update the result.

CTEs are like inline helpers, great for readability but not heavy lifting.

When to Use Temp Tables
Use a Temporary Table when:

  • You need to reuse data across multiple steps.
  • You need to index, sort, or update the intermediate result.
  • The dataset is large and complex.
  • You’re working inside stored procedures or batch jobs.

Performance Tips

CTEs are great for cleaner queries, but remember, If you use the same CTE multiple times in one query, SQL Server might recalculate it each time. That means extra work — and slower results.
Temporary Tables store data physically in the tempdb database. So if you're just doing something small, avoid using them — they take extra time and resources.
For tiny bits of temporary data, like a few rows or quick calculations, consider using a table variable instead:

DECLARE @MyTable TABLE (Id INT, Name NVARCHAR(50));

It’s lightweight and faster for simple, short-lived tasks.

Quick Recap: CTE vs Temp Table Cheat Sheet

  • Use CTE for readability, short-lived logic, and recursion.
  • Use Temp Table for reuse, indexing, and larger datasets.

Still unsure? Ask yourself:
“Do I need to reuse or manipulate the data later?”

If yes, go with a temp table.

Conclusion
Temporary tables and Common Table Expressions are both great tools; they just work better in different situations. Selecting the appropriate one can improve the readability, speed, and manageability of your SQL code in the future. Now it's your turn: Have you ever had to choose between creating a temporary table to store results and creating a brief, repeated query? How did you handle it? Or forward this to a colleague who is now immersed in crafting those enormous, intricate queries; they would appreciate it!



SQL Server Hosting - HostForLIFE :: MySQL vs MSSQL

clock July 9, 2025 08:11 by author Peter

Microsoft SQL Server, also referred to as MSSQL, is a relational database management system (RDBMS) developed by Microsoft and written in C++. It is mainly used for large-scale enterprise solutions that demand a sophisticated database structure, and it can handle complex database queries with sophisticated security features.

MySQL is an open-source RDBMS now developed and maintained by Oracle Corporation. Written in C and C++, it is frequently used in web applications, content management systems, and e-commerce platforms, and it is prized for its dependability, speed, and user-friendliness, particularly in settings where cost-effectiveness and scalability are crucial considerations.

In terms of Usage and Features.

  • Use Case: MySQL is used for web applications (e.g., WordPress, LAMP stack); SQL Server for enterprise applications, BI, and data warehousing.
  • Performance: MySQL is lightweight and fast for read-heavy workloads; SQL Server is optimized for complex queries and large-scale enterprise systems.
  • GUI Tools: MySQL Workbench vs. SQL Server Management Studio (SSMS).
  • Stored Procedures: supported in both; SQL Server offers more advanced features.
  • ACID Compliance: yes in both (MySQL with the InnoDB engine).
  • JSON Support: basic in MySQL; advanced querying and indexing in SQL Server.
  • Replication: MySQL offers master-slave and Group Replication; SQL Server offers Always On Availability Groups and replication.
  • Licensing: MySQL is open-source (GPL) with commercial licenses available; SQL Server is commercial (with a free Express edition).

In terms of Security Features.

  • SSL/TLS Encryption: supported in MySQL via ssl-mode settings (enabling ssl-mode encrypts data over the network); supported in SQL Server with Encrypt=True in the connection string.
  • Authentication: MySQL supports native, PAM, and LDAP authentication (with plugins); SQL Server supports Windows Authentication and SQL Authentication.
  • Role-Based Access Control: basic in MySQL (GRANT/REVOKE); advanced in SQL Server (roles, schemas, granular permissions).

Summary

  • Lightweight web apps: MySQL
  • Enterprise systems: SQL Server
  • Cost-effective open-source stack: MySQL
  • Deep Microsoft integration: SQL Server
  • Advanced security & compliance: SQL Server

 

HostForLIFE.eu SQL Server 2022 Hosting
HostForLIFE.eu is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.

 



SQL Server Hosting - HostForLIFE :: MSSQL Product Transaction Report

clock June 30, 2025 09:09 by author Peter

Product Transaction Pivot Report

Create temporary tables for User, Product, and Product Transaction
--# Check if table is available or not --
IF (OBJECT_ID('tempdb..##Tbl_User') IS NOT NULL)
BEGIN
DROP TABLE ##Tbl_User
END
-- we have created finalresult table to calculate final result report
IF (OBJECT_ID('tempdb..##Tbl_FinalResult') IS NOT NULL)
BEGIN
DROP TABLE ##Tbl_FinalResult
END
CREATE TABLE  ##Tbl_User
(
UID INT PRIMARY KEY NOT NULL IDENTITY(1,1),
UNAME VARCHAR(100)
)
-- Insert Data into user table
INSERT ##Tbl_User values ('Saee P'),('Swaraj P'),('Sharayu P'),('Atul P'),('Pravin P'),('Ninad P')
--# Check if table is available or not --
IF (OBJECT_ID('tempdb..##Tbl_Product') IS NOT NULL)
BEGIN
DROP TABLE ##Tbl_Product
END
CREATE TABLE ##Tbl_Product
(
PID INT PRIMARY KEY NOT NULL IDENTITY(1,1),
PNAME VARCHAR(100),
PCOST DECIMAL(18,2),
PQUNTITY INT
)
-- Insert Data into Product table
INSERT ##Tbl_Product values ('Shoes',1200,12),('Laptop',87000,17),('Hard disk',5000,10),('Cricket Bat',2000,14),
('Mobiles',30000,100),('Android TV LG',70000,20),('Solar',34251,20),('MS SQL 2017 Books',7500,100)
--# Product Transaction
--# Check if table is available or not --
IF (OBJECT_ID('tempdb..##Tbl_ProductTransaction') IS NOT NULL)
BEGIN
DROP TABLE ##Tbl_ProductTransaction
END
CREATE TABLE ##Tbl_ProductTransaction
(
PTID INT PRIMARY KEY NOT NULL IDENTITY(1,1),
UID INT ,
PID INT
)
-- Insert Data into Product Transaction table
INSERT ##Tbl_ProductTransaction VALUES
(1,2),(2,2),(1,1),(2,1),(3,1),(3,3),(2,4),(1,2),(2,3),(1,5),(3,5),(2,5),
(1,5),(3,5),(1,6),(2,6),(3,6),(4,6),(1,6),(5,5),(5,5),(5,7),(1,7),(5,7),
(1,7),(5,7),(1,7),(5,7),(1,7),(5,7),(1,7),(1,6),(2,6),(3,6),(4,6),(1,6),(5,5),
(1,2),(2,2),(1,1),(2,1),(3,1),(3,3),(2,4),(1,2),(2,3),(1,5),(3,5),(2,5),
(1,5),(3,5),(1,6),(2,6),(3,6),(4,6),(1,6),(5,5),(5,5),(5,7),(1,7)--,(5,7),
,(1,7),(5,7),(1,7),(5,7),(1,7),(5,7),(1,7),(1,6),(2,6),(3,6),(4,6),(1,6),(5,5),
(4,4),(4,4),(4,2),(4,8)
-- declare variables to calculate sum amount
-- declare variable for comma separated products
Declare @AllProducts nvarchar(2000)
SET @AllProducts=(
SELECT STRING_AGG(QUOTENAME(PNAME),',') FROM ##Tbl_Product
)
declare @col nvarchar(2000)
SET @col =
(
     SELECT STRING_AGG(CONCAT('SUM(','ISNULL(',QUOTENAME(PNAME),',0)',')'),'+') FROM ##Tbl_Product
)
--print @col
Declare @AllProductsListFormat nvarchar(2000)
-- Adding INR rupees format for product cost and runtime amount columns
SET @AllProductsListFormat=(
SELECT STRING_AGG(CONCAT('FORMAT(',QUOTENAME(PNAME),',''C'',''en-IN'')',QUOTENAME(PNAME)),',') FROM ##Tbl_Product
)
Declare @AllProductsListFooterFormat nvarchar(2000)
--SET @AllProductsListFooterFormat= CONCAT('FORMAT(',@col,',''#,0.00'')')
SET @AllProductsListFooterFormat= CONCAT('FORMAT(',@col,',''C'',''en-IN'')')
  --SELECT STRING_AGG(CONCAT('FORMAT(SUM(ISNULL(',QUOTENAME(PNAME),',0))',',''#,0.00'')',QUOTENAME(PNAME)),',') FROM ##Tbl_Product
Declare @AllProductsListFormatTop nvarchar(2000)
SET @AllProductsListFormatTop=(
SELECT STRING_AGG(CONCAT('FORMAT(SUM(ISNULL(',QUOTENAME(PNAME),',0))',',''C'',''en-IN'')',QUOTENAME(PNAME)),',') FROM ##Tbl_Product
)
PRINT @AllProductsListFooterFormat
print @AllProductsListFormatTop
declare @colwithzeroifnull nvarchar(2000)
SET @colwithzeroifnull =
(
     SELECT STRING_AGG(CONCAT('ISNULL(',QUOTENAME(PNAME),',0)',QUOTENAME(PNAME)),',') FROM ##Tbl_Product
)
print @colwithzeroifnull
declare @SqlAll nvarchar(max)
-- We use dynamic SQL so that any newly added product or transaction is automatically included in the report
SET @SqlAll='
SELECT * INTO ##Tbl_FinalResult FROM (
Select UID,UNAME,'+@colwithzeroifnull+','+@col+' AS ''<<<<=Vertical Total=>>>>'' From (
SELECT  U.UID,U.UNAME,P.PNAME,iSNULL(P.PCOST,0) PCOST FROM ##Tbl_User U LEFT JOIN  ##Tbl_ProductTransaction PT
ON PT.UID=U.UID
LEFT  JOIN ##Tbl_Product P ON P.PID=PT.PID
     )
     T
     pivot
    (

     SUM(T.PCOST)  for T.PName IN
     (

    '+@AllProducts+'
     )


     ) as Pivot_TAble
      group by UID,UNAME, '+REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(@col,'sum(',''),')+',','),')',''),'ISNULL(',''),',0','')+'
     )q

     SELECT * FROM
     (
     Select UID,UNAME,'+@AllProductsListFormat+','+@AllProductsListFooterFormat+' As ''<<==Vertical Total==>>'' from ##Tbl_FinalResult
     group by UID,UNAME, '+REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(@col,'sum(',''),')+',','),')',''),'ISNULL(',''),',0','')+'
     )QX
     union
     Select (IDENT_CURRENT(''##Tbl_User'')+1) UID,
     ''=>Horizontal Total=>'','+@AllProductsListFormatTop+','+@AllProductsListFooterFormat+'
    FROM ##Tbl_FinalResult
     '
PRINT @SqlAll


EXEC SP_EXECUTESQL @SqlAll

HostForLIFE.eu SQL Server 2022 Hosting
HostForLIFE.eu is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.


