SQL Server Hosting - HostForLIFE :: Effectively Using XML and JSON Data in SQL Server for APIs

November 7, 2025 05:57 by author Peter

In today's API-driven applications, backend databases are more than just tables and columns. APIs must send and receive complex hierarchical data, typically represented as JSON or XML.

Because SQL Server natively supports both JSON and XML formats, developers can:

  • Easily parse data from API requests into tables.
  • Return structured responses without extensive application-layer changes.

We'll look at how to work with JSON and XML efficiently inside SQL Server to improve performance, reduce code complexity, and streamline your API integration, particularly when using ASP.NET Core or Web API.

2. Why JSON and XML Matter for APIs
Most modern APIs use JSON (JavaScript Object Notation) for data exchange because it’s lightweight and human-readable.
However, many legacy systems and enterprise integrations still rely on XML for structured documents, invoices, and configurations.

SQL Server supports both formats, which means you can:

  • Store JSON/XML data directly in database columns.
  • Parse and query nested data.
  • Return dynamic API responses without looping in C#.

Here’s what it enables in real-world projects:
ASP.NET Core API receives JSON → stored procedure handles JSON directly.
Stored procedure returns JSON → API sends it back without re-serialization.

3. Working with JSON in SQL Server

3.1 Storing JSON Data
You can store JSON as plain text in an NVARCHAR(MAX) column:
CREATE TABLE CustomerOrders (
    OrderId INT PRIMARY KEY,
    CustomerName NVARCHAR(100),
    OrderData NVARCHAR(MAX) -- stores JSON
);
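
If you want the database to reject malformed documents at write time, a common pattern is a CHECK constraint built on ISJSON() (a minimal sketch against the table above; the constraint name is illustrative):

-- Enforce valid JSON at write time (SQL Server 2016+)
ALTER TABLE CustomerOrders
ADD CONSTRAINT CK_CustomerOrders_OrderData_IsJson
CHECK (OrderData IS NULL OR ISJSON(OrderData) = 1);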


Example JSON data
{"OrderNumber": "ORD123","Items": [
    { "Product": "Keyboard", "Qty": 2, "Price": 450 },
    { "Product": "Mouse", "Qty": 1, "Price": 250 }],"Total": 1150}


Insert JSON into the table
INSERT INTO CustomerOrders (OrderId, CustomerName, OrderData)
VALUES (1, 'Peter',
N'{
  "OrderNumber": "ORD123",
  "Items": [
    {"Product": "Keyboard", "Qty": 2, "Price": 450},
    {"Product": "Mouse", "Qty": 1, "Price": 250}
  ],
  "Total": 1150
}');


3.2 Reading JSON Values
Use JSON_VALUE() to extract a scalar value.
SELECT
  JSON_VALUE(OrderData, '$.OrderNumber') AS OrderNumber,
  JSON_VALUE(OrderData, '$.Total') AS TotalAmount
FROM CustomerOrders;


Output
OrderNumber | TotalAmount
------------|------------
ORD123      | 1150
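
JSON_VALUE() only returns scalar values. To pull out an object or array fragment, use JSON_QUERY(); a minimal sketch against the same table:

SELECT
  JSON_VALUE(OrderData, '$.OrderNumber') AS OrderNumber,
  JSON_QUERY(OrderData, '$.Items') AS ItemsJson -- returns the raw JSON array
FROM CustomerOrders;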


3.3 Parsing Arrays with OPENJSON
Use OPENJSON() to split array elements into rows.
SELECT
  JSON_VALUE(OrderData, '$.OrderNumber') AS OrderNumber,
  Item.Product AS ProductName,
  Item.Qty AS Quantity,
  Item.Price AS UnitPrice
FROM CustomerOrders
CROSS APPLY OPENJSON(OrderData, '$.Items')
WITH (
    Product NVARCHAR(50) '$.Product',
    Qty INT '$.Qty',
    Price DECIMAL(10,2) '$.Price'
) AS Item;


This query expands nested arrays into tabular data — ideal for APIs that send product line items.

3.4 Returning JSON from SQL Server

Instead of letting your ASP.NET Core app serialize the data, you can return it as JSON directly:
SELECT
  OrderId,
  CustomerName,
  JSON_QUERY(OrderData) AS OrderData -- JSON_QUERY keeps the stored JSON from being escaped as a string
FROM CustomerOrders
FOR JSON PATH, ROOT('Orders');


Output
{"Orders": [
    {
      "OrderId": 1,
      "CustomerName": "Peter",
      "OrderData": {
        "OrderNumber": "ORD123",
        "Items": [
          { "Product": "Keyboard", "Qty": 2, "Price": 450 },
          { "Product": "Mouse", "Qty": 1, "Price": 250 }
        ],
        "Total": 1150
      }
    }]}

This JSON can be sent directly to your API response — saving time and CPU cycles in .NET.

4. Working with XML in SQL Server
XML support in SQL Server has been around since 2005.
It’s still widely used in enterprise-level systems or integrations (e.g., financial, logistics, or government APIs).

4.1 Storing XML Data
CREATE TABLE VendorInvoices (
    InvoiceId INT PRIMARY KEY,
    VendorName NVARCHAR(100),
    InvoiceData XML
);


Example XML
<Invoice>
  <Number>INV-999</Number>
  <Date>2025-11-05</Date>
  <Items>
    <Item>
      <Name>SSD Drive</Name>
      <Qty>2</Qty>
      <Price>3200</Price>
    </Item>
    <Item>
      <Name>RAM 16GB</Name>
      <Qty>1</Qty>
      <Price>5500</Price>
    </Item>
  </Items>
</Invoice>

Insert XML
INSERT INTO VendorInvoices (InvoiceId, VendorName, InvoiceData)
VALUES (1, 'TechVendor Pvt Ltd',
N'<Invoice>
    <Number>INV-999</Number>
    <Date>2025-11-05</Date>
    <Items>
      <Item><Name>SSD Drive</Name><Qty>2</Qty><Price>3200</Price></Item>
      <Item><Name>RAM 16GB</Name><Qty>1</Qty><Price>5500</Price></Item>
    </Items>
</Invoice>');


4.2 Querying XML with XQuery
You can extract elements using .value() and .nodes().
SELECT
  InvoiceData.value('(/Invoice/Number)[1]', 'nvarchar(50)') AS InvoiceNumber,
  InvoiceData.value('(/Invoice/Date)[1]', 'date') AS InvoiceDate
FROM VendorInvoices;

Parsing nested XML arrays:
SELECT
  I.InvoiceId,
  Item.value('(Name)[1]', 'nvarchar(50)') AS ProductName,
  Item.value('(Qty)[1]', 'int') AS Quantity,
  Item.value('(Price)[1]', 'decimal(10,2)') AS Price
FROM VendorInvoices AS I
CROSS APPLY I.InvoiceData.nodes('/Invoice/Items/Item') AS T(Item);
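
Alongside .value() and .nodes(), the .exist() method filters rows on an XQuery predicate without extracting values first; a sketch against the same table:

-- Return only invoices that contain at least one item priced above 5000
SELECT InvoiceId, VendorName
FROM VendorInvoices
WHERE InvoiceData.exist('/Invoice/Items/Item[Price > 5000]') = 1;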


4.3 Returning XML Responses
SELECT
  InvoiceId,
  VendorName,
  InvoiceData
FROM VendorInvoices
FOR XML PATH('Invoice'), ROOT('Invoices');


Output
<Invoices>
  <Invoice>
    <InvoiceId>1</InvoiceId>
    <VendorName>TechVendor Pvt Ltd</VendorName>
    <InvoiceData>
      <Invoice>
        <Number>INV-999</Number>
        ...
      </Invoice>
    </InvoiceData>
  </Invoice>
</Invoices>

5. JSON vs XML — Which to Choose?

Feature            | JSON                              | XML
-------------------|-----------------------------------|----------------------------------
Format             | Lightweight, human-readable       | Verbose but structured
Best For           | Web APIs, REST, mobile apps       | Legacy systems, SOAP, enterprise
SQL Server Support | SQL Server 2016+                  | Since SQL Server 2005
Parsing Functions  | OPENJSON, JSON_VALUE, JSON_QUERY  | .nodes(), .value(), .exist()
Schema Validation  | No                                | Yes (XSD support)
Performance        | Faster for APIs                   | Slightly slower due to verbosity

Recommendation

  • For modern APIs → Prefer JSON.

  • For legacy or B2B integrations → Use XML.

  • SQL Server can handle both — even in the same stored procedure.

6. Combining JSON + ASP.NET Core API
Here’s an example of a simple Web API endpoint calling a SQL stored procedure that accepts JSON.

Stored Procedure
CREATE PROCEDURE usp_SaveOrder
    @OrderJson NVARCHAR(MAX)
AS
BEGIN
    -- Assumes OrderId is an IDENTITY column; otherwise supply it explicitly
    INSERT INTO CustomerOrders (CustomerName, OrderData)
    SELECT JSON_VALUE(@OrderJson, '$.CustomerName'),
           @OrderJson;
END


ASP.NET Core Controller
[HttpPost("api/orders")]
public async Task<IActionResult> SaveOrder([FromBody] JsonElement orderJson)
{
    string jsonData = orderJson.ToString();

    using var conn = new SqlConnection(_config.GetConnectionString("Default"));
    using var cmd = new SqlCommand("usp_SaveOrder", conn);
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.AddWithValue("@OrderJson", jsonData);

    await conn.OpenAsync();
    await cmd.ExecuteNonQueryAsync();
    return Ok("Order saved successfully");
}

Result
Frontend sends JSON → stored procedure parses it → no mapping issues or downtime.

7. Performance Tips for JSON/XML in SQL Server
Use NVARCHAR(MAX) for JSON columns — it performs better than TEXT or NTEXT.

Validate JSON using ISJSON() before processing
WHERE ISJSON(OrderData) = 1

Create computed columns from JSON for indexing
ALTER TABLE CustomerOrders
ADD OrderNumber AS JSON_VALUE(OrderData, '$.OrderNumber') PERSISTED;
CREATE INDEX IX_OrderNumber ON CustomerOrders(OrderNumber);

For XML, use typed XML with XSD schema for faster querying.

Avoid over-storing massive documents — keep only necessary data portions.

Return compact results using FOR JSON PATH, WITHOUT_ARRAY_WRAPPER when the response is a single object.
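
For example, a single-object response looks like this (a sketch against the CustomerOrders table):

SELECT OrderId, CustomerName
FROM CustomerOrders
WHERE OrderId = 1
FOR JSON PATH, WITHOUT_ARRAY_WRAPPER;
-- Returns {"OrderId":1,"CustomerName":"Peter"} instead of [{"OrderId":1,"CustomerName":"Peter"}]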

8. Real-World Use Case
A logistics company integrated multiple systems (fleet tracking, billing, and customer portal).
They used ASP.NET Core + SQL Server to unify data exchange.

Earlier, the application layer transformed large JSON payloads before inserting them into SQL tables, wasting time. After moving the parsing logic into SQL stored procedures using OPENJSON:

  • API response time dropped by 30%.
  • CPU usage on the app servers dropped.

9. Best Practices Summary

Area            | Best Practice
----------------|------------------------------------------------------
Storage         | Use NVARCHAR(MAX) for JSON, the XML type for XML
Validation      | Always validate JSON/XML before processing
Performance     | Use computed columns for indexing JSON fields
Security        | Sanitize input data before dynamic SQL usage
API Integration | Let SQL handle JSON serialization using FOR JSON
Maintainability | Keep JSON structures consistent with DTO models

10. Summary and Conclusion

SQL Server's built-in JSON and XML features let you build cleaner, faster APIs with less code and lower integration complexity.

By using:

  • OPENJSON() and FOR JSON for modern REST APIs
  • .nodes() and FOR XML for structured enterprise data

you can work directly with hierarchical data inside SQL Server without extra transformations in C# or middleware.

This approach ensures:

  • Faster API responses
  • Reduced serialization overhead
  • Cleaner, maintainable backend logic

In short
If your APIs and database speak the same language — JSON or XML — your system becomes leaner, faster, and easier to evolve.

HostForLIFE.eu SQL Server 2022 Hosting
HostForLIFE.eu is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.



SQL Server Hosting - HostForLIFE :: Creating Stored Procedures with High Performance in SQL Server

November 3, 2025 07:33 by author Peter

When we talk about backend performance, stored procedures play a major role in SQL Server. They are precompiled, reusable, and secure, but if not written properly, they can become slow and inefficient. In this article, we'll learn how to write high-performance stored procedures, with simple explanations, tips, and examples.

What is a Stored Procedure?
A stored procedure is a group of SQL statements stored in the database. Instead of sending multiple queries from your application, you can just call one stored procedure — making the process faster and easier to maintain.

Example
CREATE PROCEDURE usp_GetCustomerOrders
    @CustomerID INT
AS
BEGIN
    SELECT OrderID, OrderDate, TotalAmount
    FROM Orders
    WHERE CustomerID = @CustomerID;
END


You can execute it like:
EXEC usp_GetCustomerOrders @CustomerID = 1001;

Why Performance Matters
When you work with large data — millions of rows — even small inefficiencies can slow everything down.
Optimizing your stored procedures can:

  • Reduce CPU and memory load
  • Improve response time
  • Make the system more scalable
  • Lower the chance of deadlocks or timeouts

Best Practices for Writing High-Performance Stored Procedures

Let’s go step by step.

1. Use Proper SET Options

Start every stored procedure with:
SET NOCOUNT ON;

This stops the “(X rows affected)” message from returning, improving speed slightly and reducing unnecessary output.

2. Avoid SELECT *
Don’t select everything — it wastes memory and network bandwidth.

Bad
SELECT * FROM Orders;

Good
SELECT OrderID, CustomerID, OrderDate FROM Orders;

3. Use Proper Indexing
Indexes are like shortcuts in a phone contact list. Without them, SQL Server must scan every row.

Example
CREATE INDEX IX_Orders_CustomerID ON Orders(CustomerID);

Always check query performance using Execution Plans in SSMS.
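
When a query filters on one column but selects a few others, a covering index can avoid key lookups entirely; a sketch (the INCLUDE columns are illustrative):

CREATE INDEX IX_Orders_CustomerID_Covering
ON Orders(CustomerID)
INCLUDE (OrderDate, TotalAmount);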

4. Avoid Cursors (If Possible)
Cursors process rows one by one — very slow for large data.

Bad
DECLARE cur CURSOR FOR SELECT OrderID FROM Orders;

Good
Use set-based operations instead:
UPDATE Orders SET Status = 'Closed' WHERE OrderDate < '2024-01-01';

5. Use Temporary Tables or Table Variables Wisely
Temporary tables (#TempTable) can help when you have complex joins.

But don’t overuse them — they increase I/O on tempdb.

Use them only when needed:
SELECT * INTO #TempOrders FROM Orders WHERE Status = 'Pending';

6. Parameter Sniffing Control
Sometimes, SQL Server caches an execution plan based on the first parameter it sees, which may not work well for others.

To fix it
DECLARE @LocalCustomerID INT = @CustomerID;
SELECT * FROM Orders WHERE CustomerID = @LocalCustomerID;


This helps avoid performance issues due to parameter sniffing.
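
Another common option, when the data distribution is heavily skewed, is OPTION (RECOMPILE), which builds a fresh plan for the current parameter values at the cost of compiling on every execution:

SELECT OrderID, OrderDate, TotalAmount
FROM Orders
WHERE CustomerID = @CustomerID
OPTION (RECOMPILE);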

7. Use Transactions Carefully
Transactions ensure data consistency, but if they are too long, they can lock resources.

Use
BEGIN TRANSACTION;

-- Your statements
COMMIT TRANSACTION;


Don’t keep a transaction open for long-running logic or unnecessary steps.

8. Use TRY…CATCH for Error Handling
It’s important for clean rollback and error logging.

Example
BEGIN TRY
    BEGIN TRANSACTION;

    UPDATE Accounts SET Balance = Balance - 100 WHERE ID = 1;
    UPDATE Accounts SET Balance = Balance + 100 WHERE ID = 2;

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    ROLLBACK TRANSACTION;

    -- ERROR_MESSAGE()/ERROR_LINE() cannot be passed directly to EXEC; capture them first
    DECLARE @ErrMsg NVARCHAR(4000) = ERROR_MESSAGE();
    DECLARE @ErrLine INT = ERROR_LINE();
    EXEC spLogException @ErrMsg, @ErrLine;
END CATCH;


9. Avoid Unnecessary Joins
Too many joins, especially on non-indexed columns, can slow performance drastically.

Keep joins only where needed, and always join on indexed keys.

10. Use Execution Plans and Statistics
Before finalizing your procedure:
SET STATISTICS IO ON;
SET STATISTICS TIME ON;


Then execute your procedure and check the output to understand how much time and I/O it consumes.
Also, view the execution plan (Ctrl + M in SSMS) to find table scans, missing indexes, or inefficient joins.

SQL Server optimizes execution by reusing cached plans whenever possible: a procedure's plan is compiled on first execution, cached, and reused on subsequent calls until it is invalidated.

Example: Optimized Stored Procedure
Here’s a real-world optimized procedure example:
CREATE PROCEDURE usp_GetActiveOrders
    @StartDate DATE,
    @EndDate DATE,
    @CustomerID INT = NULL
AS
BEGIN
    SET NOCOUNT ON;

    BEGIN TRY
        SELECT O.OrderID,
               O.OrderDate,
               O.TotalAmount,
               C.CustomerName
        FROM Orders O
        INNER JOIN Customers C ON O.CustomerID = C.ID
        WHERE O.OrderDate BETWEEN @StartDate AND @EndDate
          AND (@CustomerID IS NULL OR O.CustomerID = @CustomerID)
          AND O.Status = 'Active'
        ORDER BY O.OrderDate DESC;
    END TRY
    BEGIN CATCH
        -- Capture error details into variables; function calls cannot be EXEC arguments
        DECLARE @ErrMsg NVARCHAR(4000) = ERROR_MESSAGE();
        DECLARE @ErrLine INT = ERROR_LINE();
        EXEC spLogException @ErrMsg, @ErrLine;
    END CATCH;
END


Key Points

  • Used SET NOCOUNT ON
  • Controlled optional parameter
  • Filtered with indexes
  • Handled errors safely




SQL Server Hosting - HostForLIFE :: SQL Commenting Best Practices: Advantages, Guidelines, and Illustrations

October 31, 2025 07:06 by author Peter

Writing SQL code involves more than simply making it work; it also involves making it comprehensible, maintainable, and audit-friendly. Careful comments are essential to professional SQL code. This article covers the advantages of SQL comments, common commenting standards, and examples for various SQL objects such as tables, procedures, functions, triggers, sequences, and indexes.

Why Comment in SQL Code?
Commenting SQL code brings several benefits:

  • Improves Readability: Helps developers understand your code without digging into every line.
  • Facilitates Maintenance: Makes it easier to fix bugs or enhance features later.
  • Audit & Documentation: Useful in enterprise environments for tracking who created or modified objects.
  • Reduces Human Error: Helps teams follow standards and avoid mistakes.
  • Supports Collaboration: Makes it easier for multiple developers to work on the same database.

Commenting Standards

A good SQL commenting standard should include:

  • Header Block Comments: Metadata about the object: author, date, description, and notes.
  • Inline Comments: Explain non-obvious logic or why a piece of code exists.
  • Consistency: Same style across all objects (tables, triggers, procedures, etc.).
  • Avoid Obvious Comments: Explain why instead of what.
  • Audit Information: Optional: who created or modified the object, when, and why.

Effective Commenting for Tables, Functions, Triggers, Indexes, and Sequences

1. Tables

/******************************************************************************************
* Table Name   : dbo.Roles
* Description  : Stores system roles with audit columns.
* Author       : Peter
* Created On   : 2025-10-09
* Last Modified: 2025-10-09
* Notes:
*   - Includes CreatedBy, CreatedDate, UpdatedBy, UpdatedDate columns for auditing.
******************************************************************************************/

CREATE TABLE dbo.Roles (
    RoleId INT IDENTITY(1,1) PRIMARY KEY,           -- Unique identifier for the role
    RoleName VARCHAR(50) NOT NULL,                  -- Name of the role
    Description VARCHAR(255) NULL,                  -- Optional description of the role
    CreatedBy VARCHAR(100) NULL,                    -- User who created the record
    CreatedDate DATETIME DEFAULT GETDATE(),         -- Timestamp when record was created
    UpdatedBy VARCHAR(100) NULL,                    -- User who last updated the record
    UpdatedDate DATETIME NULL                        -- Timestamp when record was last updated
);

Important points

  • Use block comments for headers.
  • Use inline comments for columns.
  • Include author, creation date, and purpose.

2. Functions
/******************************************************************************************
* Function Name : fn_GetRoleName
* Description   : Returns the RoleName for a given RoleId.
* Author        : Peter
* Created On    : 2025-10-09
******************************************************************************************/
CREATE FUNCTION dbo.fn_GetRoleName (@RoleId INT)
RETURNS VARCHAR(50)
AS
BEGIN
    DECLARE @RoleName VARCHAR(50);

    -- Fetch RoleName from Roles table
    SELECT @RoleName = RoleName
    FROM dbo.Roles
    WHERE RoleId = @RoleId;

    RETURN @RoleName;
END;
GO

Always think about where and how the function will be used. If it's called millions of times in a query, performance optimization is critical.
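
For instance, a scalar function like fn_GetRoleName typically runs once per row of the calling query, so keep it lightweight (the Users table below is hypothetical):

-- Hypothetical usage: dbo.Users is illustrative, not part of this article's schema
SELECT u.UserId, dbo.fn_GetRoleName(u.RoleId) AS RoleName
FROM dbo.Users AS u;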

3. Triggers

/******************************************************************************************
* Trigger Name : trg_Update_Roles
* Table Name   : dbo.Roles
* Description  : Automatically updates UpdatedDate when a record in Roles is modified.
* Author       : Peter
* Created On   : 2025-10-09
* Last Modified: 2025-10-09
* Notes:
*   - UpdatedBy must be set manually.
*   - Ensures audit consistency across updates.
******************************************************************************************/

CREATE TRIGGER trg_Update_Roles
ON dbo.Roles
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;  -- Prevent "rows affected" messages for better performance

    BEGIN TRY
        -- Update the UpdatedDate to current timestamp for all modified rows
        UPDATE r
        SET r.UpdatedDate = GETDATE()
        FROM dbo.Roles AS r
        INNER JOIN inserted AS i ON r.RoleId = i.RoleId;
    END TRY
    BEGIN CATCH
        -- Error handling: raise meaningful error message
        DECLARE @ErrorMessage NVARCHAR(4000) = ERROR_MESSAGE();
        DECLARE @ErrorSeverity INT = ERROR_SEVERITY();
        DECLARE @ErrorState INT = ERROR_STATE();

        RAISERROR('Error in trigger trg_Update_Roles: %s', @ErrorSeverity, @ErrorState, @ErrorMessage);
    END CATCH;
END;
GO

4. Sequences
/******************************************************************************************
* Sequence Name : seq_OrderId
* Description   : Generates unique OrderId for Orders table.
* Author        : Peter
* Created On    : 2025-10-09
******************************************************************************************/
CREATE SEQUENCE dbo.seq_OrderId
    START WITH 1
    INCREMENT BY 1;
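
A sequence value is consumed with NEXT VALUE FOR; a minimal sketch (the Orders insert is illustrative):

-- Hypothetical usage of the sequence above
INSERT INTO dbo.Orders (OrderId, CustomerName)
VALUES (NEXT VALUE FOR dbo.seq_OrderId, 'Peter');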


5. Indexes
/******************************************************************************************
* Index Name   : IX_Roles_RoleName
* Table Name   : dbo.Roles
* Description  : Non-clustered index on RoleName for faster search.
* Author       : Peter
* Created On   : 2025-10-09
******************************************************************************************/
CREATE NONCLUSTERED INDEX IX_Roles_RoleName
ON dbo.Roles (RoleName);

6. Stored Procedures
/******************************************************************************************
* Procedure Name : sp_GetRoleById
* Description    : Retrieves role details by RoleId.
* Author         : Peter
* Created On     : 2025-10-09
* Last Modified  : 2025-10-09
* Parameters:
*   @RoleId INT - Role identifier
* Returns:
*   Role details from dbo.Roles
******************************************************************************************/
CREATE PROCEDURE dbo.sp_GetRoleById
    @RoleId INT
AS
BEGIN
    SET NOCOUNT ON;  -- Prevent "rows affected" messages

    -- Select role information
    SELECT RoleId, RoleName, Description, CreatedBy, CreatedDate, UpdatedBy, UpdatedDate
    FROM dbo.Roles
    WHERE RoleId = @RoleId;
END;
GO


SQL Code Smells & Rules for Tables, Functions, Triggers, Indexes and Sequences
1. Tables
Code Smells

  • Poor naming conventions: Table names like tbl1, data, or temp are vague.
  • Too many columns: More than 50–100 columns might indicate the table isn’t normalized.
  • Unused columns: Columns never queried or used in transactions.
  • Repeated data: Indicates denormalization or missing relationships.
  • No primary key: Leads to duplicate rows and poor indexing.
  • Excessive nullable columns: Hard to enforce data integrity.

Best Practices

  • Use clear, singular, meaningful names: Employee, ProductOrder.
  • Normalize data to at least 3NF unless justified.
  • Always define primary keys and appropriate foreign keys.
  • Use data types appropriately: don’t use VARCHAR(255) for small text.
  • Limit NULLs; use default values where applicable.
  • Add comments to describe the table's purpose.

2. Functions
Code Smells

  • Functions that do too much (multiple responsibilities).
  • Functions that access tables unnecessarily inside loops.
  • Functions with side effects (modifying data).
  • Poor naming (func1, doSomething).

Best Practices

  • Functions should be pure (no side effects).
  • One function → one purpose.
  • Use a schema prefix to avoid ambiguity.
  • Add comments explaining input/output.

3. Triggers
Code Smells

  • Triggers that perform complex logic or call other triggers → hard to maintain.
  • Silent failures: errors not logged.
  • Multiple triggers for the same action → order dependency issues.
  • Triggers updating the same table → risk of recursion.

Best Practices

  • Keep triggers small and specific.
  • Prefer using constraints or stored procedures instead of triggers if possible.
  • Log errors and operations.
  • Use AFTER vs INSTEAD OF carefully.

4. Indexes
Code Smells

  • Too many indexes → slows down writes.
  • Unused indexes → waste of storage.
  • Non-selective indexes → low performance gain.
  • Indexing columns that are frequently updated → high maintenance cost.

Best Practices

  • Index frequently queried columns used in WHERE, JOIN, and ORDER BY.
  • Avoid indexing columns with low cardinality (like gender).
  • Monitor index usage and remove unused indexes.
  • Name indexes consistently: IX_Table_Column.

5. Sequences
Code Smells

  • Using sequences unnecessarily for small tables.
  • Sequences with large gaps → indicate poor management.
  • Sequences without ownership or naming standards.

Best Practices

  • Use sequences for unique, incremental values across tables.
  • Define start, increment, min, and max values.
  • Always name sequences clearly: SEQ_EmployeeID.

6. Stored Procedures
Common SQL Smells

  • Too long / complex procedures – doing multiple unrelated tasks in one SP.
  • Hard-coded values – making the procedure inflexible and non-reusable.
  • No parameters or overuse of global variables – reduces modularity.
  • Missing error handling – errors are swallowed or unlogged.
  • Excessive dynamic SQL – may lead to SQL injection or maintenance issues.
  • Returning result sets instead of using output parameters when needed – inconsistent usage.
  • Tightly coupled logic – SP depends heavily on other SPs or tables, making it hard to maintain.
  • Unused or deprecated SPs – bloats the database.

Best Practices

  • One procedure, one purpose – keep SPs focused and small.
  • Use parameters – avoid hard-coded values; makes SP reusable.
  • Error handling – use TRY…CATCH to log or handle errors gracefully.
  • Use schema prefix & meaningful names – e.g., usp_InsertEmployee.
  • Avoid excessive dynamic SQL – use static queries where possible; if dynamic SQL is needed, validate input.
  • Document logic – add comments for inputs, outputs, and special cases.
  • Return status codes or output parameters instead of always returning full result sets.
  • Use sequences or identity columns appropriately – only when unique incremental IDs are required.
  • Avoid unnecessary loops – leverage set-based operations for performance.
  • Maintainability – regularly review SPs and remove unused or deprecated ones.

Developers, DBAs, and reviewers can use this document as a reference to keep clean, manageable SQL code and steer clear of performance issues. It is a professional requirement to use proper SQL comments. It enhances auditing, maintainability, and readability. By adhering to these guidelines, you can make sure that your database is reliable, collaborative, and future-proof. While writing code, commenting might take a few extra minutes, but it saves hours on maintenance and debugging.

I appreciate you reading my content. I hope you now have a thorough understanding of SQL Commenting Best Practices: Benefits, Standards & Examples. Although adhering to coding standards and using proper SQL comments may seem insignificant, they have a significant impact on producing database code that is professional, readable, and maintainable. By putting these best practices into practice, you can make sure that your projects are long-lasting, collaborative, and manageable.


Happy coding and keep your SQL Code clean!




SQL Server Hosting - HostForLIFE :: Automating the Creation of SQL Queries with LlamaIndex and Snowflake

October 24, 2025 09:08 by author Peter

This post walks through the open-source project Text-to-SQL-Snowflake-LlamaIndex, which links Snowflake, a powerful cloud data platform, with LlamaIndex, a leading framework for LLM data orchestration. The repository shows how to convert natural language inputs into executable SQL queries on a Snowflake database using a large language model (LLM). The guide covers every aspect of the project, from architecture and environment setup to workflow samples, so data engineers, analysts, and AI developers can quickly add natural language interfaces to their data systems.

What Is Text-to-SQL?
Text-to-SQL systems use LLMs to convert plain English questions (e.g., "Show me total sales in Q1 by region") into executable SQL queries. These systems remove the need for users to understand SQL syntax or schema structures, making data access more inclusive.

Why LlamaIndex?

LlamaIndex acts as a bridge between unstructured data and LLMs. It indexes schema metadata, manages context retrieval, and structures prompts for query generation.

Why Snowflake?

Snowflake offers scalable, cloud-based data warehousing with support for Python integrations via Snowflake Connector for Python and Snowpark. It is ideal for real-time querying from LLM-based agents.

Step-by-Step Walkthrough


1. Repository Overview
GitHub: tuanacelik/text-to-sql-snowflake-llamaindex

This project integrates:
  • LlamaIndex for text-to-SQL translation.
  • Snowflake Connector for query execution.
  • OpenAI / GPT-based models for language understanding.

2. Architecture

Natural language input → LlamaIndex schema index → LLM-generated SQL → execution via the Snowflake connector → structured results.

3. Environment Setup

Dependencies:
pip install llama-index
pip install snowflake-connector-python
pip install python-dotenv
pip install openai

Environment Variables (.env):
OPENAI_API_KEY=YOUR_API_KEY
SNOWFLAKE_ACCOUNT=YOUR_ACCOUNT
SNOWFLAKE_USER=YOUR_USERNAME
SNOWFLAKE_PASSWORD=YOUR_PASSWORD
SNOWFLAKE_WAREHOUSE=YOUR_WAREHOUSE
SNOWFLAKE_DATABASE=YOUR_DATABASE
SNOWFLAKE_SCHEMA=YOUR_SCHEMA

4. Connecting to Snowflake
Python Connection Example:
import snowflake.connector
import os
from dotenv import load_dotenv

load_dotenv()

conn = snowflake.connector.connect(
    user=os.getenv("SNOWFLAKE_USER"),
    password=os.getenv("SNOWFLAKE_PASSWORD"),
    account=os.getenv("SNOWFLAKE_ACCOUNT"),
    warehouse=os.getenv("SNOWFLAKE_WAREHOUSE"),
    database=os.getenv("SNOWFLAKE_DATABASE"),
    schema=os.getenv("SNOWFLAKE_SCHEMA")
)

This establishes a live connection that allows the system to send LLM-generated queries for execution.

5. LlamaIndex Integration
LlamaIndex is responsible for schema parsing and query generation.

Schema Loading Example:
from llama_index import SQLDatabase, LLMPredictor, ServiceContext
from llama_index.indices.struct_store import SQLTableIndex
from llama_index.llms import OpenAI

sql_database = SQLDatabase(conn)
llm_predictor = LLMPredictor(llm=OpenAI(temperature=0))
service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor)

index = SQLTableIndex(
    tables=["SALES", "CUSTOMERS"],
    sql_database=sql_database,
    service_context=service_context
)


6. Generating SQL Queries from Text

Example:
query_engine = index.as_query_engine()
response = query_engine.query("Show me total sales by region for Q1 2024")
print(response)

Output (generated SQL):
SELECT region, SUM(amount) AS total_sales
FROM SALES
WHERE QUARTER(sale_date) = 1 AND YEAR(sale_date) = 2024
GROUP BY region;


7. Workflow JSON

Below is a minimal example JSON describing the full workflow:
{
  "workflow": {
    "input": "Show me the top 5 customers by revenue last month",
    "llm_engine": "OpenAI GPT-4",
    "schema_source": "Snowflake",
    "steps": [
      "Extract schema metadata using LlamaIndex",
      "Parse natural language input",
      "Generate SQL query",
      "Execute query via Snowflake connector",
      "Return structured results"
    ],
    "output_format": "JSON"
  }
}


Use Cases / Scenarios

  • Business Intelligence Automation: Non-technical users can access dashboards via natural language.
  • Embedded Analytics: Integrate LLM-driven querying inside SaaS platforms.
  • DataOps Automation: Generate schema-aware SQL queries programmatically.
  • AI Agents: Connect conversational assistants directly to Snowflake datasets.

Limitations / Considerations

  • Schema Ambiguity: Without clear metadata, LLMs may misinterpret table relationships.
  • Security: Ensure user inputs are sanitized before query execution to prevent injection.
  • Latency: Query generation involves LLM API calls, which can introduce delays.
  • Model Drift: Schema changes require reindexing via LlamaIndex.

Fixes and Troubleshooting

Issue                | Cause                                           | Fix
---------------------|-------------------------------------------------|--------------------------------------
Authentication Error | Incorrect Snowflake credentials                 | Verify .env variables
Empty Responses      | Model temperature too low or schema incomplete  | Adjust temperature or reindex tables
Invalid SQL          | Ambiguous query phrasing                        | Add schema hints or context
Slow Queries         | Snowflake warehouse suspended                   | Resume warehouse before execution

Conclusion
The Text-to-SQL-Snowflake-LlamaIndex project exemplifies the fusion of AI reasoning and data engineering. By connecting LlamaIndex with Snowflake, developers can enable natural language interfaces that democratize data access. As Generative Engine Optimization (GEO) principles evolve, such projects demonstrate how structured data can become machine-readable, citable, and actionable by AI.

Key takeaway: Text-to-SQL automation with LlamaIndex on Snowflake transforms how users interact with enterprise data—bridging the gap between human language and database logic.




SQL Server Hosting - HostForLIFE :: C# CSV/MCX File Conversion to DataTable and Bulk SQL Server Insert

October 21, 2025 08:22 by author Peter

Processing big CSV/MCX files, filtering records, and efficiently inserting them into a database are common tasks in financial applications, particularly when dealing with MCX or stock market data. This article demonstrates a methodical approach in C#.

1. Creating a DataTable from a CSV or MCX File

The convertdatatable method reads a CSV file and turns it into a DataTable:

public DataTable convertdatatable(string Filepath)
{
    DataTable FinalTable = new DataTable();

    // Define columns
    for (int i = 1; i <= 37; i++)
        FinalTable.Columns.Add("column" + i, typeof(string));

    FinalTable.Columns.Add("Count", typeof(int));

    StreamReader sr = new StreamReader(Filepath);
    try
    {
        string Fulltext = sr.ReadToEnd();
        string[] rows = Fulltext.Split('\n');

        if (rows.Length > 1)
        {
            for (int i = 1; i < rows.Length; i++)
            {
                string[] rowValues = rows[i].Split(',');
                if (rowValues.Length > 1)
                {
                    DataRow dr = FinalTable.NewRow();
                    for (int j = 0; j < 37; j++)
                        dr[j] = rowValues.Length > j ? rowValues[j].Trim() : "";

                    dr[37] = i; // Track row number
                    FinalTable.Rows.Add(dr);
                }
                else
                {
                    obj.WriteFailureLogFile("No Data in File");
                }
            }
        }
        else
        {
            obj.WriteFailureLogFile("No Data in File");
        }
    }
    catch (Exception ex)
    {
        obj.WriteErrorLogFile(ex.ToString(),"Converting");
    }
    finally
    {
        sr.Close();
        sr.Dispose();
    }
    return FinalTable;
}

(or)

public DataTable convertdatatable(string Filepath)
{
    #region table
    DataTable FinalTable = new DataTable();
    FinalTable.Columns.Add("column1", typeof(String));
    FinalTable.Columns.Add("column2", typeof(String));
    FinalTable.Columns.Add("column3", typeof(String));
    FinalTable.Columns.Add("column4", typeof(String));
    FinalTable.Columns.Add("column5", typeof(String));
    FinalTable.Columns.Add("column6", typeof(String));
    FinalTable.Columns.Add("column7", typeof(String));
    FinalTable.Columns.Add("column8", typeof(String));
    FinalTable.Columns.Add("column9", typeof(String));
    FinalTable.Columns.Add("column10", typeof(String));
    FinalTable.Columns.Add("column11", typeof(String));
    FinalTable.Columns.Add("column12", typeof(String));
    FinalTable.Columns.Add("column13", typeof(String));
    FinalTable.Columns.Add("column14", typeof(String));
    FinalTable.Columns.Add("column15", typeof(String));
    FinalTable.Columns.Add("column16", typeof(String));
    FinalTable.Columns.Add("column17", typeof(String));
    FinalTable.Columns.Add("column18", typeof(String));
    FinalTable.Columns.Add("column19", typeof(String));
    FinalTable.Columns.Add("column20", typeof(String));
    FinalTable.Columns.Add("column21", typeof(String));
    FinalTable.Columns.Add("column22", typeof(String));
    FinalTable.Columns.Add("column23", typeof(String));
    FinalTable.Columns.Add("column24", typeof(String));
    FinalTable.Columns.Add("column25", typeof(String));
    FinalTable.Columns.Add("column26", typeof(String));
    FinalTable.Columns.Add("column27", typeof(String));
    FinalTable.Columns.Add("column28", typeof(String));
    FinalTable.Columns.Add("column29", typeof(String));
    FinalTable.Columns.Add("column30", typeof(String));
    FinalTable.Columns.Add("column31", typeof(String));
    FinalTable.Columns.Add("column32", typeof(String));
    FinalTable.Columns.Add("column33", typeof(String));
    FinalTable.Columns.Add("column34", typeof(String));
    FinalTable.Columns.Add("column35", typeof(String));
    FinalTable.Columns.Add("column36", typeof(String));
    FinalTable.Columns.Add("column37", typeof(String));
    FinalTable.Columns.Add("Count", typeof(int));
    #endregion
    StreamReader sr = new StreamReader(Filepath);
    try
    {
        string filepath = Filepath;
        string Fulltext;
        Fulltext = sr.ReadToEnd().ToString();
        string[] rows = Fulltext.Split('\n');
        if (rows.Count() > 1)
        {

            for (int i = 1; i < rows.Count(); i++)
            {

                string[] rowValues = rows[i].Split(',');

                string column1 = "", column2 = "", column3 = "", column4 = "", column5 = "", column6 = "", column7 = "", column8 = "", column9 = "", column10 = "", column11 = "", column12 = "",
                       column13 = "", column14 = "", column15 = "", column16 = "", column17 = "", column18 = "", column19 = "", column20 = "", column21 = "", column22 = "", column23 = "", column24 = "",
                       column25 = "", column26 = "", column27 = "", column28 = "", column29 = "", column30 = "", column31 = "", column32 = "", column33 = "", column34 = "", column35 = "", column36 = "",
                    column37 = "";

                if (rowValues.Length > 1)
                {
                    #region assin
                    column1 = rowValues[0].ToString().Trim();
                    column2 = rowValues[1].ToString().Trim();
                    column3 = rowValues[2].ToString().Trim();
                    column4 = rowValues[3].ToString().Trim();
                    column5 = rowValues[4].ToString().Trim();
                    column6 = rowValues[5].ToString().Trim();
                    column7 = rowValues[6].ToString().Trim();
                    column8 = rowValues[7].ToString().Trim();
                    column9 = rowValues[8].ToString().Trim();
                    column10 = rowValues[9].ToString().Trim();
                    column11 = rowValues[10].ToString().Trim();
                    column12 = rowValues[11].ToString().Trim();
                    column13 = rowValues[12].ToString().Trim();
                    column14 = rowValues[13].ToString().Trim();
                    column15 = rowValues[14].ToString().Trim();
                    column16 = rowValues[15].ToString().Trim();
                    column17 = rowValues[16].ToString().Trim();
                    column18 = rowValues[17].ToString().Trim();
                    column19 = rowValues[18].ToString().Trim();
                    column20 = rowValues[19].ToString().Trim();
                    column21 = rowValues[20].ToString().Trim();
                    column22 = rowValues[21].ToString().Trim();
                    column23 = rowValues[22].ToString().Trim();
                    column24 = rowValues[23].ToString().Trim();
                    column25 = rowValues[24].ToString().Trim();
                    column26 = rowValues[25].ToString().Trim();
                    column27 = rowValues[26].ToString().Trim();
                    column28 = rowValues[27].ToString().Trim();
                    column29 = rowValues[28].ToString().Trim();
                    column30 = rowValues[29].ToString().Trim();
                    column31 = rowValues[30].ToString().Trim();
                    column32 = rowValues[31].ToString().Trim();
                    column33 = rowValues[32].ToString().Trim();
                    column34 = rowValues[33].ToString().Trim();
                    column35 = rowValues[34].ToString().Trim();
                    column36 = rowValues[35].ToString().Trim();
                    column37 = rowValues[36].ToString().Trim();
                    #endregion
                    //Add
                    DataRow dr = FinalTable.NewRow();
                    #region adddata
                    dr[0] = column1;
                    dr[1] = column2;
                    dr[2] = column3;
                    dr[3] = column4;
                    dr[4] = column5;
                    dr[5] = column6;
                    dr[6] = column7;
                    dr[7] = column8;
                    dr[8] = column9;
                    dr[9] = column10;
                    dr[10] = column11;
                    dr[11] = column12;
                    dr[12] = column13;
                    dr[13] = column14;
                    dr[14] = column15;
                    dr[15] = column16;
                    dr[16] = column17;
                    dr[17] = column18;
                    dr[18] = column19;
                    dr[19] = column20;
                    dr[20] = column21;
                    dr[21] = column22;
                    dr[22] = column23;
                    dr[23] = column24;
                    dr[24] = column25;
                    dr[25] = column26;
                    dr[26] = column27;
                    dr[27] = column28;
                    dr[28] = column29;
                    dr[29] = column30;
                    dr[30] = column31;
                    dr[31] = column32;
                    dr[32] = column33;
                    dr[33] = column34;
                    dr[34] = column35;
                    dr[35] = column36;
                    dr[36] = column37;
                    dr[37] = i;
                    #endregion
                    FinalTable.Rows.Add(dr);
                }
                else
                {
                    obj.WriteFailureLogFile("No Data in File");
                }

            }


        }
        else
        {
            obj.WriteFailureLogFile("No Data in File");
        }
    }
    catch (Exception ex)
    {
        obj.WriteErrorLogFile(ex.ToString(),"Converting");
    }
    finally
    {
        sr.Close();
        sr.Dispose();

    }
    return FinalTable;
}

Key Points

  • StreamReader reads the entire file.
  • Rows are split by newline \n and columns by comma ,.
  • Each row is added to the DataTable dynamically.
  • A Count column tracks the row index.


2. Filtering Data
After loading the MCX data, you can filter rows using a DataView:
DataTable FilteredData = MCXdata; // MCXdata: the DataTable returned by convertdatatable

DataView dvView = FilteredData.DefaultView;
dvView.RowFilter = "Count > " + TotalRowCount; // Example: filter by Count column

DataTable dtFiltered = dvView.ToTable();
FinalTable = dtFiltered;


Notes:

  • RowFilter supports expressions similar to SQL WHERE.
  • dvView.ToTable() returns a filtered copy of the DataTable.

3. Bulk Insert into SQL Server
Using SqlBulkCopy, large datasets can be inserted efficiently:
if (FinalTable.Rows.Count > 0)
{
    using (SqlConnection con = new SqlConnection("Data Source=173.47.478.2;Initial Catalog=AS78955;User ID=sa;Password=N@yyui#DB&12$%"))
    {
        con.Open();
        using (SqlBulkCopy bulkCopy = new SqlBulkCopy(con))
        {
            bulkCopy.BulkCopyTimeout = 1000000; // Large timeout for big files
            bulkCopy.DestinationTableName = "DTUMCX";

            // Map columns
            for (int i = 1; i <= 37; i++)
                bulkCopy.ColumnMappings.Add("column" + i, "column" + i);

            bulkCopy.ColumnMappings.Add("Count", "Count");

            bulkCopy.WriteToServer(FinalTable);
        }
        con.Close();
        obj.WriteProcessLogFile("Bulk inserted Successfully. Total Rows - " + FinalTable.Rows.Count);
    }
}
(or)

if (FinalTable.Rows.Count > 0)
        {
            SqlConnection con = new SqlConnection("Data Source=173.47.478.2;Initial Catalog=AS78955;User ID=sa;Password=N@yyui#DB&12$%");
            con.Open();
            SqlBulkCopy bulkCopy = new SqlBulkCopy(con);
            bulkCopy.BulkCopyTimeout = 1000000;
            bulkCopy.DestinationTableName = "DTUMCX";
            string TotalCount = FinalTable.Rows.Count.ToString();
            try
            {
                bulkCopy.ColumnMappings.Add("column1", "column1");
                bulkCopy.ColumnMappings.Add("column2", "column2");
                bulkCopy.ColumnMappings.Add("column3", "column3");
                bulkCopy.ColumnMappings.Add("column4", "column4");
                bulkCopy.ColumnMappings.Add("column5", "column5");
                bulkCopy.ColumnMappings.Add("column6", "column6");
                bulkCopy.ColumnMappings.Add("column7", "column7");
                bulkCopy.ColumnMappings.Add("column8", "column8");
                bulkCopy.ColumnMappings.Add("column9", "column9");
                bulkCopy.ColumnMappings.Add("column10", "column10");
                bulkCopy.ColumnMappings.Add("column1", "column11");
                bulkCopy.ColumnMappings.Add("column12", "column12");
                bulkCopy.ColumnMappings.Add("column13", "column13");
                bulkCopy.ColumnMappings.Add("column14", "column14");
                bulkCopy.ColumnMappings.Add("column15", "column15");
                bulkCopy.ColumnMappings.Add("column16", "column16");
                bulkCopy.ColumnMappings.Add("column17", "column17");
                bulkCopy.ColumnMappings.Add("column18", "column18");
                bulkCopy.ColumnMappings.Add("column19", "column19");
                bulkCopy.ColumnMappings.Add("column20", "column20");
                bulkCopy.ColumnMappings.Add("column21", "column21");
                bulkCopy.ColumnMappings.Add("column22", "column22");
                bulkCopy.ColumnMappings.Add("column23", "column23");
                bulkCopy.ColumnMappings.Add("column24", "column24");
                bulkCopy.ColumnMappings.Add("column25", "column25");
                bulkCopy.ColumnMappings.Add("column26", "column26");
                bulkCopy.ColumnMappings.Add("column27", "column27");
                bulkCopy.ColumnMappings.Add("column28", "column28");
                bulkCopy.ColumnMappings.Add("column29", "column29");
                bulkCopy.ColumnMappings.Add("column30", "column30");
                bulkCopy.ColumnMappings.Add("column31", "column31");
                bulkCopy.ColumnMappings.Add("column32", "column32");
                bulkCopy.ColumnMappings.Add("column33", "column33");
                bulkCopy.ColumnMappings.Add("column34", "column34");
                bulkCopy.ColumnMappings.Add("column35", "column35");
                bulkCopy.ColumnMappings.Add("column36", "column36");
                bulkCopy.ColumnMappings.Add("column37", "column37");
                bulkCopy.ColumnMappings.Add("Count", "Count");
                bulkCopy.WriteToServer(FinalTable);
                con.Close();
                obj.WriteProcessLogFile("Bulk inserted Successfully. Total Rows - " + TotalCount);
            }
            catch (Exception ex)
            {
                obj.WriteErrorLogFile(ex.ToString(), "BulkInsert");
            }
        }

Key Points

  • SqlBulkCopy is optimized for inserting large volumes of data.
  • Column mappings ensure DataTable columns match SQL table columns (see the table sketch after this list).
  • BulkCopyTimeout can be increased for very large files.
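
For the bulk copy to succeed, the destination table's columns must match the mappings above. A hypothetical definition of DTUMCX (the real column types depend on the MCX feed layout):

-- Hypothetical destination table; adjust data types to the actual MCX feed
CREATE TABLE dbo.DTUMCX (
    column1 NVARCHAR(200) NULL,  column2 NVARCHAR(200) NULL,  column3 NVARCHAR(200) NULL,
    column4 NVARCHAR(200) NULL,  column5 NVARCHAR(200) NULL,  column6 NVARCHAR(200) NULL,
    column7 NVARCHAR(200) NULL,  column8 NVARCHAR(200) NULL,  column9 NVARCHAR(200) NULL,
    column10 NVARCHAR(200) NULL, column11 NVARCHAR(200) NULL, column12 NVARCHAR(200) NULL,
    column13 NVARCHAR(200) NULL, column14 NVARCHAR(200) NULL, column15 NVARCHAR(200) NULL,
    column16 NVARCHAR(200) NULL, column17 NVARCHAR(200) NULL, column18 NVARCHAR(200) NULL,
    column19 NVARCHAR(200) NULL, column20 NVARCHAR(200) NULL, column21 NVARCHAR(200) NULL,
    column22 NVARCHAR(200) NULL, column23 NVARCHAR(200) NULL, column24 NVARCHAR(200) NULL,
    column25 NVARCHAR(200) NULL, column26 NVARCHAR(200) NULL, column27 NVARCHAR(200) NULL,
    column28 NVARCHAR(200) NULL, column29 NVARCHAR(200) NULL, column30 NVARCHAR(200) NULL,
    column31 NVARCHAR(200) NULL, column32 NVARCHAR(200) NULL, column33 NVARCHAR(200) NULL,
    column34 NVARCHAR(200) NULL, column35 NVARCHAR(200) NULL, column36 NVARCHAR(200) NULL,
    column37 NVARCHAR(200) NULL,
    [Count] INT NULL
);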

4. Error Handling and Logging

  • try-catch-finally ensures errors are logged and resources are released.
  • obj.WriteFailureLogFile logs missing or malformed rows.
  • obj.WriteErrorLogFile logs exceptions during conversion.

5. Advantages of this Approach

  • Efficiency: Handles large MCX files efficiently.
  • Maintainability: Adding or removing columns is straightforward.
  • Filtering: Easy to filter rows dynamically before insert.
  • Logging: Helps track processing errors and missing data.
  • Automation: Can be scheduled to process daily market data files automatically.

6. Example Workflow

  • Load MCX CSV file using convertdatatableMCX.
  • Filter rows based on a condition (e.g., Count > TotalRowCount).
  • Bulk insert the filtered data into DTUMCX SQL Server table.
  • Log success or failure messages for auditing.

Conclusion
This approach is ideal for financial applications dealing with large MCX or stock market datasets. By combining DataTable conversion, DataView filtering, and SqlBulkCopy, you can achieve efficient, reliable, and maintainable data processing pipelines in C#. 




SQL Server Hosting - HostForLIFE :: The Intelligent, Safe, Automated Method for Managing Transaction Logs in Master SQL Server

October 17, 2025 08:47 by author Peter

STOP! Your SQL Server Log File is Out of Control - The RIGHT Way to Manage It
Is your SQL log file consuming all your disk space? 😱 Learn the safe, automated way to manage transaction log growth without breaking your backup chain. Free script included! #SQLServer #LogManagement #DBA

Introduction: The DBA's Nightmare - The 500GB Log File That Ate Your Server
It's 3 AM. Your monitoring system alerts: "Disk C: at 99% capacity." You investigate and find a single  .ldf  file has ballooned to hundreds of gigabytes. Panic sets in. Do you shrink it? Will it break something? How did this happen?

If you've faced this scenario, you're not alone.  Transaction log management is one of the most misunderstood aspects of SQL Server administration. In this guide, we'll give you not just an automated solution, but more importantly, the  knowledge to use it safely and effectively.

Critical Disclaimer: Read This Before Proceeding
Shrinking log files is generally NOT a best practice.  It should be treated as an emergency procedure, not regular maintenance. Frequent shrinking leads to:

  • Virtual Log File (VLF) fragmentation  - causing performance degradation
  • Immediate regrowth  - the very problem you're trying to solve
  • Potential data loss  if not handled correctly

The proper long-term solution is:

  • Proper backup strategy  (Transaction log backups for FULL recovery)
  • Right-sizing  your log file from the beginning
  • Monitoring growth patterns

Use this script for  emergency situations only  or in  development environments.

The Emergency Log Management Script 


-- =====================================================
-- Procedure: spLogClear - EMERGENCY Transaction Log Management
-- Description: Use ONLY for emergency log file reduction.
--              Not recommended for regular maintenance.
-- Author: FreeLearning365.com
-- Usage: EXEC [dbo].[spLogClear]
-- =====================================================

ALTER PROCEDURE [dbo].[spLogClear]
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE
        @DBName SYSNAME,
        @LogFileName SYSNAME,
        @StartTime DATETIME = GETDATE(),
        @Msg NVARCHAR(MAX),
        @ErrorMsg NVARCHAR(MAX),
        @SQL NVARCHAR(MAX),
        @OriginalRecoveryModel NVARCHAR(60),
        @CurrentRecoveryModel NVARCHAR(60);

    BEGIN TRY
        -- SECTION 1: INITIALIZATION & VALIDATION
        -- ===========================================

        -- Dynamically detect current database context
        SET @DBName = DB_NAME();

        -- Safety Check: Prevent execution on system databases
        IF @DBName IN ('master', 'model', 'msdb', 'tempdb')
        BEGIN
            SET @Msg = 'CRITICAL: This procedure cannot be run on system databases. Attempted on: ' + @DBName;
            RAISERROR(@Msg, 16, 1);
            RETURN;
        END

        -- Detect the logical name of the transaction log file
        SELECT TOP 1 @LogFileName = name
        FROM sys.database_files
        WHERE type_desc = 'LOG';

        -- Safety Check: Ensure log file was found
        IF @LogFileName IS NULL
        BEGIN
            RAISERROR('Could not identify transaction log file for database: %s', 16, 1, @DBName);
            RETURN;
        END

        -- SECTION 2: AUDITING & RECOVERY MODEL MANAGEMENT
        -- ==================================================

        -- Capture original recovery model for restoration
        SELECT @OriginalRecoveryModel = recovery_model_desc
        FROM sys.databases
        WHERE name = @DBName;

        -- Log process initiation
        INSERT INTO log.LogShrinkAudit
        (DBName, LogFileName, StartTime, Status, Message)
        VALUES
        (@DBName, @LogFileName, @StartTime, 'Started',
         'Emergency log shrink initiated. Original Recovery: ' + @OriginalRecoveryModel);

        PRINT 'Starting emergency log management for database: ' + @DBName;
        PRINT 'Log file name: ' + @LogFileName;
        PRINT 'Original recovery model: ' + @OriginalRecoveryModel;

        -- SECTION 3: THE CORE LOG MANAGEMENT PROCESS
        -- =============================================

        -- STEP 3.1: TEMPORARY RECOVERY MODEL SWITCH
        -- WARNING: This breaks the log backup chain in FULL recovery!
        SET @SQL = N'ALTER DATABASE [' + @DBName + N'] SET RECOVERY SIMPLE;';
        EXEC(@SQL);
        PRINT 'Temporarily switched to SIMPLE recovery model.';

        -- STEP 3.2: CHECKPOINT - Flushes dirty pages to data file
        CHECKPOINT;
        PRINT 'Checkpoint completed.';

        -- STEP 3.3: SHRINK LOG FILE (THE MAIN EVENT)
        -- Parameter 0 = shrink to smallest possible size
        SET @SQL = N'DBCC SHRINKFILE (N''' + @LogFileName + N''', 0);';
        EXEC(@SQL);
        PRINT 'Log file shrink completed.';

        -- STEP 3.4: RIGHT-SIZE THE LOG FILE (CRITICAL STEP!)
        -- Prevents immediate autogrowth. Adjust 1000MB based on your needs.
        SET @SQL = N'ALTER DATABASE [' + @DBName + N']
                    MODIFY FILE (NAME = N''' + @LogFileName + N''',
                                SIZE = 1000MB,
                                MAXSIZE = UNLIMITED,
                                FILEGROWTH = 500MB);';
        EXEC(@SQL);
        PRINT 'Log file resized to prevent immediate regrowth.';

        -- STEP 3.5: RESTORE ORIGINAL RECOVERY MODEL
        IF @OriginalRecoveryModel = 'FULL'
        BEGIN
            SET @SQL = N'ALTER DATABASE [' + @DBName + N'] SET RECOVERY FULL;';
            EXEC(@SQL);
            PRINT 'Recovery model restored to FULL.';

            -- CRITICAL: Take a FULL backup after restoring FULL recovery
            -- This establishes a new backup chain
            SET @Msg = 'IMPORTANT: Take a FULL database backup immediately to re-establish backup chain.';
            PRINT @Msg;
        END

        -- SECTION 4: SUCCESS REPORTING
        -- ===============================

        SET @Msg = N'Emergency log management completed successfully for database [' + @DBName +
                   N']. Process completed at ' + CONVERT(NVARCHAR(30), GETDATE(), 120);

        INSERT INTO log.LogShrinkAudit
        (DBName, LogFileName, StartTime, EndTime, Status, Message)
        VALUES
        (@DBName, @LogFileName, @StartTime, GETDATE(), 'Success', @Msg);

        PRINT @Msg;
        PRINT '=== PROCESS COMPLETED SUCCESSFULLY ===';

    END TRY

    BEGIN CATCH
        -- SECTION 5: COMPREHENSIVE ERROR HANDLING
        -- ==========================================

        SET @ErrorMsg = 'ERROR [' + CAST(ERROR_NUMBER() AS NVARCHAR(10)) + ']: ' +
                        ERROR_MESSAGE() + ' (Line: ' + CAST(ERROR_LINE() AS NVARCHAR(10)) + ')';

        PRINT 'PROCESS FAILED: ' + @ErrorMsg;

        -- Attempt to restore original recovery model on failure
        BEGIN TRY
            IF @OriginalRecoveryModel IS NOT NULL AND @OriginalRecoveryModel != 'SIMPLE'
            BEGIN
                SET @SQL = N'ALTER DATABASE [' + @DBName + N'] SET RECOVERY ' + @OriginalRecoveryModel + N';';
                EXEC(@SQL);
                PRINT 'Original recovery model restored after failure.';
            END
        END TRY
        BEGIN CATCH
            PRINT 'WARNING: Could not restore original recovery model after failure.';
        END CATCH

        -- Log the failure
        INSERT INTO log.LogShrinkAudit
        (DBName, LogFileName, StartTime, EndTime, Status, Message)
        VALUES
        (@DBName, ISNULL(@LogFileName, 'Unknown'), @StartTime, GETDATE(), 'Failed', @ErrorMsg);

        -- Re-throw error for external handling (e.g., SQL Agent)
        THROW;

    END CATCH
END
GO


Step-by-Step Implementation Guide
Step 1. Prerequisites & Safety Checks
Create the Audit Table:

CREATE SCHEMA [log];
GO

CREATE TABLE [log].[LogShrinkAudit](
    [AuditID] [int] IDENTITY(1,1) NOT NULL,
    [DBName] [sysname] NOT NULL,
    [LogFileName] [sysname] NOT NULL,
    [StartTime] [datetime] NULL,
    [EndTime] [datetime] NULL,
    [Status] [nvarchar](50) NULL,
    [Message] [nvarchar](max) NULL)

Permissions Required:

  • ALTER DATABASE  permission on the target database
  • INSERT  permission on the audit table
  • Membership in  db_owner  role is typically required
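As an illustration, the grants below give a hypothetical operator login the rights it needs; the login name is an assumption, and the statements are run inside the target database:

GRANT ALTER ON DATABASE::[YourDatabase] TO [LogMaintenanceOperator];
GRANT INSERT ON [log].[LogShrinkAudit] TO [LogMaintenanceOperator];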


Step 2. Initial Testing (SAFE MODE)
NEVER run this in production without testing first!

-- TEST 1: Check what would happen (read-only check)
-- Examine current log size and growth status
SELECT
    name AS [LogFileName],
    size * 8.0 / 1024 AS [SizeMB],
    CASE WHEN size = max_size THEN 'FULL' ELSE 'GROWTH AVAILABLE' END AS [Status]
FROM sys.database_files
WHERE type_desc = 'LOG';

-- TEST 2: Check VLF count (a high count indicates fragmentation)
DBCC LOGINFO;

Step 3. Emergency Execution

Only proceed if you have a genuine emergency and understand the risks:
-- Execute the emergency procedure
EXEC [dbo].[spLogClear];

-- Monitor the audit log
SELECT * FROM [log].[LogShrinkAudit] ORDER BY StartTime DESC;

-- CRITICAL POST-PROCESS STEP: Take a FULL backup
BACKUP DATABASE [YourDatabase]
TO DISK = 'D:\Backups\PostShrink_FullBackup.bak'
WITH COMPRESSION, INIT;

Pros, Cons, and Best Practices 

Aspect              | Emergency Use Case                          | Regular Use Risks                                  | Best Practice Alternative
--------------------|---------------------------------------------|----------------------------------------------------|--------------------------------------------------
Disk Space Recovery | Immediate space recovery from a runaway log | VLF fragmentation causes poor performance          | Proper log backups in FULL recovery model
Automation          | Quick resolution during emergencies         | Masks underlying problems                          | Monitor log growth and address root causes
Recovery Model      | Allows space reclamation in FULL model      | Breaks the log backup chain - potential data loss  | Size the log appropriately from the start
Right-Sizing        | Prevents immediate regrowth after a shrink  | Manual sizing may not match the workload           | Set appropriate autogrowth (fixed size, not percentage)

The RIGHT Way: Long-Term Log Management Strategy

Instead of shrinking regularly, implement the following:

For FULL Recovery Model Databases

-- Schedule frequent transaction log backups
BACKUP LOG [YourDatabase]
TO DISK = 'D:\LogBackups\YourDatabase_Log.trn'
WITH COMPRESSION;

    Right-Size from the Beginning

-- Set appropriate initial size and growth
ALTER DATABASE [YourDatabase]
MODIFY FILE (NAME = YourDatabase_Log,
         SIZE = 4096MB,
         FILEGROWTH = 1024MB); -- Fixed growth, not percentage

    Monitoring & Alerting

-- Regular monitoring query (run in the database you want to check)
SELECT
    DB_NAME(database_id) AS DatabaseName,
    total_log_size_in_bytes / 1048576.0 AS LogSizeMB,
    used_log_space_in_bytes / 1048576.0 AS UsedSpaceMB,
    (total_log_size_in_bytes - used_log_space_in_bytes) / 1048576.0 AS FreeSpaceMB
FROM sys.dm_db_log_space_usage;
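
To keep an eye on VLF fragmentation as well, a sketch using sys.dm_db_log_info (available from SQL Server 2016 SP2) can be scheduled alongside the query above:

-- VLF count per database; very high counts indicate log fragmentation
SELECT d.name AS DatabaseName, COUNT(*) AS VLFCount
FROM sys.databases AS d
CROSS APPLY sys.dm_db_log_info(d.database_id) AS li
GROUP BY d.name
ORDER BY VLFCount DESC;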

Business Case & Limitations

Business Case:  Prevents production outages due to disk space exhaustion. Maintains system availability during unexpected log growth scenarios.

Limitations

  • Temporary solution  - doesn't address root cause
  • Performance impact  - VLF fragmentation affects write performance
  • Recovery implications  - breaks the point-in-time recovery capability until a new full backup is taken
  • Not a substitute  for a proper backup strategy

"The truth your DBA won't tell you about log files"
"From panic to solution in 5 minutes"
"The emergency fix that saved our production server"
"Why your log file keeps growing (and how to stop it forever)"
"The shrink operation that won't get you fired"


Remember: This script is your  emergency fire extinguisher  - essential to have, but you hope you never need to use it! 

Alternative Approach
Executive summary/business case

  • Why this matters
  • Large or uncontrolled log files consume disk, slow backups, complicate restores, and amplify ransomware damage.
  • Frequent, ad-hoc shrinking causes fragmentation and performance problems.

The right approach: prevent log growth with a correct recovery model + frequent log backups, monitor, and only shrink when necessary with a documented, auditable process.

Business benefits

  • Predictable disk usage and lower storage costs.
  • Faster restores and improved availability.
  • Audit trail for operations (compliance & change control).

Key principles & best practices (short)

  • Prefer prevention over cure:  take frequent transaction-log backups (FULL model) or use SIMPLE model only for dev/test.
  • Avoid routine shrinking:  shrink only when required (e.g., after one-off huge operation, log growth due to long-running transaction, or DB restore/maintenance).
  • Plan growth settings sensibly:  set a sensible  FILEGROWTH  and initial size; avoid tiny percent-based growth for large logs.
  • Automate monitoring & alerts:  watch  log_reuse_wait_desc , free space, and autogrowth events.
  • Audit & document any shrink operation.  Record who ran it, why, and the before/after sizes.

Common causes of log growth
Long-running or uncommitted transactions
Replication, CHANGE_TRACKING, CDC, DB mirroring, AlwaysOn, long-running backups
Missing or infrequent log backups in the FULL recovery model
Bulk operations (index rebuilds, large loads)
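
To pin down which of these is the culprit, a quick diagnostic sketch (run in the affected database) is:

-- Why can't the log be truncated right now?
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = DB_NAME();

-- Is an old open transaction holding everything up?
DBCC OPENTRAN;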

Pros & Cons of shrinking logs
Pros

Instantly reclaims disk after one-time surge.
Useful after large one-off operations or restoring from a backup with a larger log.

Cons
Causes file fragmentation and potential performance degradation.
Shrinking regrows logs if operations continue — regrowth is expensive.
Not a long-term solution to recurring growth.

Best practices (step-by-step checklist before shrinking)

  • Investigate root cause: Check  DBCC SQLPERF(LOGSPACE)  and  sys.databases   log_reuse_wait_desc .
  • Fix underlying issue: e.g., schedule frequent log backups, commit/kill long transactions, disable/reconfigure features.
  • Take a transaction log backup (FULL model)  to free up virtual log files (VLFs) if possible.
  • Delay shrinking until log is reusable  (log_reuse_wait_desc = NOTHING).
  • Document & audit : always insert an audit record before/after shrink.
  • Shrink in a maintenance window and monitor performance, autogrowth.
  • Adjust file growth strategy: e.g.,  FILEGROWTH = 512MB  for busy OLTP DBs, avoid 1% growth for large files.
  • Perform index maintenance afterwards if required (heavy shrink may impact fragmentation).

Security, permissions & masking sensitive fields

  • Do not run shrink scripts as  sa  unless required. Use a least-privilege account with  ALTER DATABASE  permissions or run as an approved operator via SQL Agent.
  • Mask sensitive values in audit/log tables (if you store server or path info that could reveal structure). Example: store  ServerName  hashed or store only the first/last 8 chars.
  • Record operator identity using  SUSER_SNAME()  or  ORIGINAL_LOGIN()  — but don’t store personal data that is not necessary.

Audit table design: small improvements
Add a default  StartTime  and index for queries:
ALTER TABLE log.LogShrinkAudit
ADD CONSTRAINT DF_LogShrinkAudit_StartTime DEFAULT (GETDATE()) FOR StartTime;

CREATE NONCLUSTERED INDEX IX_LogShrinkAudit_DBName_StartTime
ON log.LogShrinkAudit(DBName, StartTime DESC);


Masking note: if you log paths or server names and want to mask, store a hash:  HASHBYTES('SHA2_256', @FullServerPath)  (store varbinary) and keep mapping in a secure admin-only table.

Production-ready stored procedure (improved, audited, safe)

Key improvements over the original:

Detects  log_reuse_wait_desc  and refuses to shrink unless reason is  NOTHING  or explicit override given.

Optional  @Force  for documented emergency shrink with required  @Reason  param.

Records before/after sizes, VLF count, user who ran it.

Graceful error handling and throttled shrink (shrinks to target percent/size).

Avoids setting recovery model from FULL→SIMPLE→FULL automatically (dangerous) unless explicitly allowed.

Warning:  Changing recovery model from FULL→SIMPLE causes break in log backup chain. Only do if you know consequences.

-- ================================================
-- dbo.spLogClear_Managed
-- Safer, auditable transaction log cleanup procedure
-- Usage: EXEC dbo.spLogClear_Managed @TargetSizeMB=1024, @Force=0, @AllowRecoveryChange=0, @Reason='...';
-- ================================================
CREATE OR ALTER PROCEDURE dbo.spLogClear_Managed
(
@TargetSizeMB INT = 1024,            -- desired final size in MB (approx)
@MinFreePercent INT = 10,            -- shrink only if free space percent > this
@Force BIT = 0,                      -- 1 = allow shrink even if log_reuse_wait != NOTHING (use with caution)
@AllowRecoveryChange BIT = 0,        -- 1 = allow temporary switch to SIMPLE (dangerous, breaks log chain)
@Reason NVARCHAR(4000) = NULL        -- required if @Force = 1 or @AllowRecoveryChange = 1
)
AS
BEGIN
SET NOCOUNT ON;
DECLARE @DBName SYSNAME = DB_NAME();
DECLARE @LogFileName SYSNAME;
DECLARE @StartTime DATETIME = GETDATE();
DECLARE @BeforeSizeMB INT;
DECLARE @BeforeUsedPercent DECIMAL(5,2);
DECLARE @AfterSizeMB INT;
DECLARE @SQL NVARCHAR(MAX);
DECLARE @WaitDesc SYSNAME;
DECLARE @Operator SYSNAME = SUSER_SNAME();
DECLARE @AuditMsg NVARCHAR(MAX);

BEGIN TRY
    -- 1. Log initial state: find log file
    SELECT TOP(1)
        @LogFileName = name
    FROM sys.database_files
    WHERE type_desc = 'LOG';

    -- 2. Get current log size & used percent
    SELECT
        @BeforeSizeMB = CAST(size/128.0 AS INT),
        @BeforeUsedPercent = CAST( CAST(FILEPROPERTY(name,'SpaceUsed') AS INT) * 100.0 / NULLIF(size,0) AS DECIMAL(5,2) )  -- percent of the log actually in use
    FROM sys.database_files
    WHERE type_desc = 'LOG' AND name = @LogFileName;

    -- 3. Determine log reuse wait reason
    SELECT @WaitDesc = log_reuse_wait_desc
    FROM sys.databases
    WHERE name = @DBName;

    -- 4. Audit start
    INSERT INTO log.LogShrinkAudit (DBName, LogFileName, StartTime, Status, Message)
    VALUES (@DBName, @LogFileName, @StartTime, 'Started',
            'Initiated by ' + ISNULL(@Operator,'Unknown') + '; log_reuse_wait_desc=' + ISNULL(@WaitDesc,'Unknown'));

    -- 5. Safety checks
    IF @Force = 0 AND @WaitDesc <> 'NOTHING'
    BEGIN
        SET @AuditMsg = 'Abort: log_reuse_wait_desc = ' + @WaitDesc + '. Use proper log backup or resolve wait reason before shrinking.';
        INSERT INTO log.LogShrinkAudit (DBName, LogFileName, StartTime, EndTime, Status, Message)
        VALUES(@DBName, @LogFileName, @StartTime, GETDATE(), 'Aborted', @AuditMsg);
        RETURN;
    END

    IF @Force = 1 AND (@Reason IS NULL OR LEN(@Reason) < 5)
    BEGIN
        THROW 51010, 'Force requested but @Reason is required (short justification).', 1;
    END

    -- 6. Optionally switch recovery model (DANGEROUS)
    IF @AllowRecoveryChange = 1
    BEGIN
        IF @Reason IS NULL OR LEN(@Reason) < 5
            THROW 51011, 'AllowRecoveryChange requires a justified @Reason.', 1;

        SET @SQL = N'ALTER DATABASE [' + @DBName + '] SET RECOVERY SIMPLE WITH NO_WAIT;';
        EXEC(@SQL);

        INSERT INTO log.LogShrinkAudit (DBName, LogFileName, StartTime, Status, Message)
        VALUES(@DBName, @LogFileName, @StartTime, 'Info', 'Recovery switched to SIMPLE temporarily. Reason: ' + @Reason);
    END

    -- 7. Do a checkpoint if in SIMPLE or after log backup
    CHECKPOINT;

    -- 8. Recompute size and used percent (safe check)
    SELECT
        @BeforeSizeMB = CAST(size/128.0 AS INT),
        @BeforeUsedPercent = CAST( CAST(FILEPROPERTY(name,'SpaceUsed') AS INT) * 100.0 / NULLIF(size,0) AS DECIMAL(5,2) )  -- percent of the log actually in use
    FROM sys.database_files
    WHERE type_desc = 'LOG' AND name = @LogFileName;

    -- 9. Check min free percent condition
    IF @BeforeUsedPercent <= (100 - @MinFreePercent)
    BEGIN
        -- compute target shrink size in pages (approx) using DBCC SHRINKFILE target in MB
        SET @SQL = N'DBCC SHRINKFILE ([' + @LogFileName + '], ' + CAST(@TargetSizeMB AS NVARCHAR(20)) + ');';
        EXEC(@SQL);

        -- Capture after size
        SELECT @AfterSizeMB = CAST(size/128.0 AS INT)
        FROM sys.database_files
        WHERE type_desc = 'LOG' AND name = @LogFileName;

        SET @AuditMsg = 'Shrink attempted. BeforeSizeMB=' + CAST(@BeforeSizeMB AS NVARCHAR(20)) +
                        '; AfterSizeMB=' + CAST(ISNULL(@AfterSizeMB, -1) AS NVARCHAR(20));
        INSERT INTO log.LogShrinkAudit (DBName, LogFileName, StartTime, EndTime, Status, Message)
        VALUES(@DBName, @LogFileName, @StartTime, GETDATE(), 'Success', @AuditMsg);
    END
    ELSE
    BEGIN
        SET @AuditMsg = 'Abort: insufficient free space in log to shrink safely. UsedPercent=' + CAST(@BeforeUsedPercent AS NVARCHAR(10));
        INSERT INTO log.LogShrinkAudit (DBName, LogFileName, StartTime, EndTime, Status, Message)
        VALUES(@DBName, @LogFileName, @StartTime, GETDATE(), 'Skipped', @AuditMsg);
    END

    -- 10. Restore recovery model if changed (only if @AllowRecoveryChange = 1)
    IF @AllowRecoveryChange = 1
    BEGIN
        SET @SQL = N'ALTER DATABASE [' + @DBName + '] SET RECOVERY FULL WITH NO_WAIT;';
        EXEC(@SQL);

        INSERT INTO log.LogShrinkAudit (DBName, LogFileName, StartTime, Status, Message)
        VALUES(@DBName, @LogFileName, GETDATE(), 'Info', 'Recovery switched back to FULL. Ensure you take a full + log backup to re-establish chain.');
    END
END TRY
BEGIN CATCH
    DECLARE @ErrMsg NVARCHAR(MAX) = ERROR_MESSAGE();
    INSERT INTO log.LogShrinkAudit (DBName, LogFileName, StartTime, EndTime, Status, Message)
    VALUES(@DBName, ISNULL(@LogFileName,'Unknown'), @StartTime, GETDATE(), 'Failed', @ErrMsg);
    THROW; -- re-throw for agent visibility
END CATCH
END
GO

Usage examples
Normal safe run (abort if log is in use):
EXEC dbo.spLogClear_Managed @TargetSizeMB=1024, @MinFreePercent=20;

Emergency (must supply reason):
EXEC dbo.spLogClear_Managed @TargetSizeMB=512, @Force=1, @Reason='Emergency disk pressure after large archive load';

Temporary recovery change (dangerous; document and follow-up!):
EXEC dbo.spLogClear_Managed @TargetSizeMB=512, @AllowRecoveryChange=1, @Reason='One-off maintenance, will take full backup after';

Alternatives & automation options
Preferred:  fix the root cause (frequent log backups); implement a scheduled  BACKUP LOG  job to keep the log reusable and VLF usage low.

  • Ola Hallengren  — maintenance solution includes integrity checks, backup, and cleanup.
  • PowerShell approach:  use  Invoke-Sqlcmd  to check state and issue shrink if needed; easier to integrate with vaults and monitoring.
  • Third-party:  enterprise tools that manage log shipping, archiving, and unintrusive cleanup.

Monitoring & telemetry (what to alert on)
log_reuse_wait_desc  ≠ NOTHING — alert when it grows continuously.

Rapid autogrowth events — alert on frequent autogrowth (a default trace query follows this list).

Free disk space thresholds.

Number of VLFs (very large counts cause performance issues) — use  DBCC LOGINFO  or DMV-based scripts.
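
For the autogrowth alert mentioned above, one option is to read recent growth events from the default trace; a sketch (assumes the default trace is still enabled, which it is by default):

DECLARE @TracePath NVARCHAR(260);
SELECT @TracePath = path FROM sys.traces WHERE is_default = 1;

SELECT te.name AS EventName, t.DatabaseName, t.FileName, t.StartTime, t.Duration
FROM sys.fn_trace_gettable(@TracePath, DEFAULT) AS t
JOIN sys.trace_events AS te ON te.trace_event_id = t.EventClass
WHERE te.name IN ('Data File Auto Grow', 'Log File Auto Grow')
ORDER BY t.StartTime DESC;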

Risk mitigation & rollback plan
Before: always take a current full + log backup.

If the shrink fails or the DB becomes suspect: have an emergency restore plan; always test restores in staging.
After: if you changed the recovery model, take a full backup, then restart the log backup schedule to reestablish the chain.

Masking sensitive fields example
When inserting path/server info into logs, store only hashed values or partials:
INSERT INTO log.LogShrinkAudit (DBName, LogFileName, StartTime, Status, Message)
VALUES(@DBName, CONVERT(NVARCHAR(128), HASHBYTES('SHA2_256', @LogFileName), 2), GETDATE(), 'Started', 'Initiated by ' + SUSER_SNAME());


Store mapping in a separate secure table accessible only to auditors.


HostForLIFE.eu SQL Server 2022 Hosting
HostForLIFE.eu is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.




SQL Server Hosting - HostForLIFE :: Understanding SET SERVEROUTPUT ON in PL/SQL

clock September 26, 2025 07:52 by author Peter

SET SERVEROUTPUT ON in PL/SQL: Why It Matters and How to Use It
When working with Oracle PL/SQL, one of the first and most essential commands you'll encounter is:

SET SERVEROUTPUT ON;

Though it looks simple, this command plays a crucial role in how PL/SQL programs behave, especially when you want to display output from procedures, anonymous blocks, or scripts using DBMS_OUTPUT.PUT_LINE.

Why Do We Use SET SERVEROUTPUT ON?
By default, PL/SQL code runs silently inside the Oracle database engine. This means that even if your program generates output, you won’t see any result on your screen — unless you explicitly enable server output.

The command SET SERVEROUTPUT ON instructs SQL*Plus or SQL Developer to display output from the DBMS_OUTPUT buffer, allowing you to see the results of your PL/SQL program.

Syntax
SET SERVEROUTPUT ON;

You can also control the buffer size and format (optional):
SET SERVEROUTPUT ON SIZE 1000000 FORMAT WRAPPED;

When Should You Use It?
Use SET SERVEROUTPUT ON in situations like:

When using DBMS_OUTPUT.PUT_LINE to display output.

During testing and debugging of PL/SQL procedures or anonymous blocks.

To trace variable values or track the flow of control in your code.

Example: Using SET SERVEROUTPUT ON in a PL/SQL Block

Here’s a simple example that declares variables and uses DBMS_OUTPUT.PUT_LINE to display them:
SET SERVEROUTPUT ON;

DECLARE
    eno NUMBER(5) NOT NULL := 2;
    ename VARCHAR2(15) := 'Branson Devs';
    edept CONSTANT VARCHAR2(15) := 'Web Developer';
BEGIN
    dbms_output.put_line('Declared Values:');
    dbms_output.put_line(' Employee Number: ' || eno || ' Employee Name: ' || ename);
    dbms_output.put_line('Constant Declared:');
    dbms_output.put_line(' Employee Department: ' || edept);
END;
/


Output (Only Visible If SERVEROUTPUT Is ON):
Declared Values:
Employee Number: 2 Employee Name: Branson Devs
Constant Declared:
Employee Department: Web Developer

Important: If you omit SET SERVEROUTPUT ON, the DBMS_OUTPUT.PUT_LINE results will not be displayed, even though the block executes successfully.

Tips for Using SET SERVEROUTPUT ON
In SQL Developer, go to View > DBMS Output, then click the green + icon to enable output for your session.
In SQL*Plus, always run SET SERVEROUTPUT ON before any PL/SQL block that uses output.
Use SET SERVEROUTPUT OFF when you no longer need the output to be displayed.

Conclusion
The SET SERVEROUTPUT ON command is small but vital for writing and debugging PL/SQL code. It provides visibility into your PL/SQL logic by allowing output messages to be displayed on screen, making your development workflow smoother and more transparent.

HostForLIFE.eu SQL Server 2022 Hosting
HostForLIFE.eu is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.



SQL Server Hosting - HostForLIFE :: SQL: The Database Language

clock September 22, 2025 08:11 by author Peter

The common language used to manage and communicate with relational databases is called Structured Query Language, or SQL. SQL provides the capabilities to effectively store, manage, and retrieve data, whether you're developing an enterprise application, corporate dashboard, or website.

What is SQL?
SQL (pronounced “ess-cue-el” or sometimes “sequel”) is a domain-specific language used to communicate with relational database management systems (RDBMS) such as:

  • MySQL
  • PostgreSQL
  • Microsoft SQL Server
  • Oracle Database

  • SQLite
SQL lets you describe what data you want, while the database engine figures out how to get it.

Core Features of SQL

  • Data Definition: Create and modify the structure of databases (tables, views, indexes).
  • Data Manipulation: Insert, update, delete, and retrieve records.
  • Data Control: Manage permissions and security (GRANT, REVOKE).
  • Transaction Control: Commit or roll back changes safely.

Basic SQL Commands
Here are some of the most commonly used SQL statements:

Create a Table
CREATE TABLE Employees (
    EmployeeID INT PRIMARY KEY,
    Name VARCHAR(50),
    Position VARCHAR(50),
    Salary DECIMAL(10,2)
);


Insert Data
INSERT INTO Employees (EmployeeID, Name, Position, Salary)
VALUES (1, 'Alice', 'Developer', 65000);


Retrieve Data

SELECT Name, Position
FROM Employees
WHERE Salary > 60000;

Update Data

UPDATE Employees
SET Salary = 70000
WHERE EmployeeID = 1;

Delete Data
DELETE FROM Employees
WHERE EmployeeID = 1;
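
Use a Transaction
Transaction control (mentioned above) lets related statements succeed or fail together. A minimal illustration using the same Employees table:

-- Group related changes so they succeed or fail together
BEGIN TRANSACTION;

UPDATE Employees SET Salary = Salary * 1.10 WHERE Position = 'Developer';

-- Undo everything if something looks wrong...
-- ROLLBACK TRANSACTION;

-- ...or make the changes permanent
COMMIT TRANSACTION;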


Why SQL is Important?

  • Universality: Nearly all relational databases use SQL or a close dialect.
  • Powerful Queries: Combine, group, and filter data with ease (see the example after this list).
  • Data Integrity: Enforce constraints (primary keys, foreign keys) to keep data consistent.
  • Scalability: Handle anything from a small app’s data to enterprise-level systems.
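
As a quick illustration of the "powerful queries" point, the following aggregate works against the Employees table created earlier:

-- Headcount and total salary cost per position, highest cost first
SELECT Position, COUNT(*) AS Headcount, SUM(Salary) AS TotalSalary
FROM Employees
GROUP BY Position
ORDER BY TotalSalary DESC;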

Common Uses of SQL

  • Business intelligence and reporting
  • Backend for web and mobile apps
  • Data analytics and dashboards
  • Financial and inventory systems
  • Data migration between platforms

Advantages of SQL

  • Human-readable, declarative syntax
  • Optimized by database engines for performance
  • Portable across platforms with minimal changes
  • Supports complex operations with relatively simple commands

Limitations

  • Not ideal for unstructured or semi-structured data (that’s where NoSQL databases shine).
  • Large, complex queries can become hard to maintain without proper design.
  • Performance tuning may require knowledge of indexes, execution plans, and normalization.

Conclusion
The foundation of relational systems' data handling is SQL. You acquire a useful ability that forms the basis of practically every contemporary software program by becoming proficient with its commands and comprehending how databases arrange data. SQL is an essential skill for anyone dealing with data, whether they are an analyst, developer, or data scientist.

HostForLIFE.eu SQL Server 2022 Hosting
HostForLIFE.eu is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.

 



SQL Server Hosting - HostForLIFE :: Locate SQL Server's Most Used Queries and Stored Procedures

clock September 8, 2025 07:28 by author Peter

When a SQL Server feels "slow," a handful of statements typically cause most of the harm. Dynamic Management Views (DMVs) are the quickest way to locate them. The copy-paste scripts below uncover the worst offenders by CPU, IO, duration, and "what's running right now," along with guidance on how to read the results and what to do next.

    Requirements: VIEW SERVER STATE permission. Numbers like total_worker_time and total_elapsed_time are in microseconds unless noted.

What “high usage” means (pick the lens)?

  • CPU: how much processor time a query uses.
  • IO: logical/physical reads and writes (memory and disk pressure).
  • Duration: how long a query takes end-to-end.
  • Currently running: live workload that may be blocking others.

You’ll use a different script depending on which lens you want.

Top queries by CPU

-- Top 20 queries by total CPU since the plan was cached
SELECT TOP 20
    DB_NAME(st.dbid)                                        AS database_name,
    qs.execution_count,
    qs.total_worker_time/1000.0                             AS total_cpu_ms,
    (qs.total_worker_time/1000.0)/NULLIF(qs.execution_count,0) AS avg_cpu_ms,
    (qs.total_elapsed_time/1000.0)/NULLIF(qs.execution_count,0) AS avg_duration_ms,
    qs.total_logical_reads + qs.total_physical_reads        AS total_reads,
    SUBSTRING(st.text, (qs.statement_start_offset/2)+1,
        ((CASE qs.statement_end_offset
            WHEN -1 THEN DATALENGTH(st.text)
            ELSE qs.statement_end_offset END
          - qs.statement_start_offset)/2) + 1)              AS query_text,
    qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
ORDER BY qs.total_worker_time DESC;

Tip: Add WHERE DB_NAME(st.dbid) = 'YourDbName' if you only care about one database.

Top queries by IO (reads/writes)

-- Top 20 by total reads; add writes if you care about heavy DML
SELECT TOP 20
    DB_NAME(st.dbid)                                        AS database_name,
    qs.execution_count,
    (qs.total_logical_reads + qs.total_physical_reads)      AS total_reads,
    (qs.total_logical_writes + qs.total_physical_writes)    AS total_writes,
    (qs.total_logical_reads + qs.total_physical_reads) / NULLIF(qs.execution_count,0) AS avg_reads,
    (qs.total_logical_writes + qs.total_physical_writes) / NULLIF(qs.execution_count,0) AS avg_writes,
    (qs.total_elapsed_time/1000.0)/NULLIF(qs.execution_count,0) AS avg_duration_ms,
    SUBSTRING(st.text, (qs.statement_start_offset/2)+1,
        ((CASE qs.statement_end_offset
            WHEN -1 THEN DATALENGTH(st.text)
            ELSE qs.statement_end_offset END
          - qs.statement_start_offset)/2) + 1)              AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY total_reads DESC;


Top queries by average duration
-- Queries that are slow per execution (not just popular)
SELECT TOP 20
    DB_NAME(st.dbid)                                        AS database_name,
    qs.execution_count,
    (qs.total_elapsed_time/1000.0)                          AS total_duration_ms,
    (qs.total_elapsed_time/1000.0)/NULLIF(qs.execution_count,0) AS avg_duration_ms,
    (qs.total_worker_time/1000.0)/NULLIF(qs.execution_count,0)  AS avg_cpu_ms,
    (qs.total_logical_reads + qs.total_physical_reads) / NULLIF(qs.execution_count,0) AS avg_reads,
    SUBSTRING(st.text, (qs.statement_start_offset/2)+1,
        ((CASE qs.statement_end_offset
            WHEN -1 THEN DATALENGTH(st.text)
            ELSE qs.statement_end_offset END
          - qs.statement_start_offset)/2) + 1)              AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE qs.execution_count > 0
ORDER BY avg_duration_ms DESC;

Stored procedures that hit the server hardest
Use sys.dm_exec_procedure_stats to get procedure-level rollups (cleaner than trying to stitch statements together).
-- Procedures by total CPU
SELECT TOP 20
    DB_NAME(ps.database_id)                                 AS database_name,
    OBJECT_SCHEMA_NAME(ps.object_id, ps.database_id)        AS schema_name,
    OBJECT_NAME(ps.object_id, ps.database_id)               AS procedure_name,
    ps.execution_count,
    ps.total_worker_time/1000.0                             AS total_cpu_ms,
    (ps.total_worker_time/1000.0)/NULLIF(ps.execution_count,0) AS avg_cpu_ms,
    ps.last_execution_time
FROM sys.dm_exec_procedure_stats AS ps
WHERE ps.database_id > 4  -- skip system DBs; remove if you want them
ORDER BY ps.total_worker_time DESC;

-- Procedures by total reads (IO)
SELECT TOP 20
    DB_NAME(ps.database_id)                                 AS database_name,
    OBJECT_SCHEMA_NAME(ps.object_id, ps.database_id)        AS schema_name,
    OBJECT_NAME(ps.object_id, ps.database_id)               AS procedure_name,
    ps.execution_count,
    (ps.total_logical_reads + ps.total_physical_reads)      AS total_reads,
    ((ps.total_logical_reads + ps.total_physical_reads)/NULLIF(ps.execution_count,0)) AS avg_reads,
    ps.last_execution_time
FROM sys.dm_exec_procedure_stats AS ps
WHERE ps.database_id > 4
ORDER BY total_reads DESC;

-- Procedures by average duration
SELECT TOP 20
    DB_NAME(ps.database_id)                                 AS database_name,
    OBJECT_SCHEMA_NAME(ps.object_id, ps.database_id)        AS schema_name,
    OBJECT_NAME(ps.object_id, ps.database_id)               AS procedure_name,
    ps.execution_count,
    (ps.total_elapsed_time/1000.0)/NULLIF(ps.execution_count,0) AS avg_duration_ms,
    ps.last_execution_time
FROM sys.dm_exec_procedure_stats AS ps
WHERE ps.database_id > 4 AND ps.execution_count > 0
ORDER BY avg_duration_ms DESC;

What’s heavy right now (live view)?
-- Currently executing requests ordered by CPU time
SELECT
    r.session_id,
    r.status,
    DB_NAME(r.database_id)            AS database_name,
    r.cpu_time                        AS cpu_ms,         -- already in ms
    r.total_elapsed_time              AS elapsed_ms,     -- already in ms
    r.wait_type,
    r.wait_time,
    r.blocking_session_id,
    SUBSTRING(t.text, r.statement_start_offset/2 + 1,
        (CASE WHEN r.statement_end_offset = -1
              THEN DATALENGTH(t.text)
              ELSE r.statement_end_offset END - r.statement_start_offset)/2 + 1) AS running_statement
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id <> @@SPID
ORDER BY r.cpu_time DESC;


If you see blocking_session_id populated, chase that session and fix the blocker first.
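
Once you have the blocking_session_id, a small sketch like this shows what that session last executed (replace 57 with the id you found):

SELECT s.session_id, s.login_name, s.host_name, t.text AS last_sql
FROM sys.dm_exec_sessions AS s
JOIN sys.dm_exec_connections AS c ON c.session_id = s.session_id
CROSS APPLY sys.dm_exec_sql_text(c.most_recent_sql_handle) AS t
WHERE s.session_id = 57;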

Group similar statements together (query_hash)
Same query text with different literals can appear as separate rows. Aggregate by query_hash to see the true top offenders.
-- Roll up by query_hash to combine similar statements
WITH q AS (
    SELECT
        qs.query_hash,
        qs.execution_count,
        qs.total_worker_time,
        qs.total_elapsed_time,
        qs.total_logical_reads + qs.total_physical_reads AS total_reads
    FROM sys.dm_exec_query_stats qs
)
SELECT TOP 20
    query_hash,
    SUM(execution_count)                                  AS executions,
    SUM(total_worker_time)/1000.0                         AS total_cpu_ms,
    (SUM(total_worker_time)/1000.0)/NULLIF(SUM(execution_count),0) AS avg_cpu_ms,
    (SUM(total_elapsed_time)/1000.0)/NULLIF(SUM(execution_count),0) AS avg_duration_ms,
    SUM(total_reads)                                      AS total_reads
FROM q
GROUP BY query_hash
ORDER BY total_cpu_ms DESC;

Filters you’ll actually use
Add these lines to any query above as needed:
-- Only one DB
WHERE DB_NAME(st.dbid) = 'YourDbName'

-- Only statements executed in the last day (approx; uses last_execution_time)
WHERE qs.last_execution_time >= DATEADD(DAY, -1, SYSUTCDATETIME())

-- Exclude trivial one-off executions
AND qs.execution_count >= 5

Read the numbers the right way

  • High total + low average: popular query. Optimize for throughput (indexing, cached plan quality).
  • Low total + very high average: rare but slow. Optimize for latency (rewrite, avoid RBAR/scalar UDFs, better joins).
  • High duration but modest CPU/IO: usually blocking or waits. Check wait_type, blocking_session_id, and missing indexes that cause scans.
  • Metrics reset when plans get evicted or the instance restarts. Treat them as a rolling window, not forever history.

Quick wins to try after you find a culprit

  • Add the right index (covering where needed). Look at the actual plan’s missing index hints, then design a lean index yourself (don’t blindly accept 12-column monsters). A DMV query that surfaces these suggestions follows this list.
  • Kill implicit conversions (mismatched data types, e.g., NVARCHAR vs INT).
  • Replace SELECT * with exact columns (cuts reads).
  • Update statistics if they’re stale; consider WITH RECOMPILE for bad parameter sniffing cases (sparingly).
  • Avoid scalar UDFs in hot paths; inline logic or use APPLY.
  • Batch big writes; keep transactions short.
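
Referenced from the first item above, this sketch pulls missing-index suggestions straight from the DMVs; treat them as starting points, not finished designs:

SELECT TOP 20
    DB_NAME(mid.database_id)  AS database_name,
    mid.statement             AS table_name,
    migs.user_seeks,
    migs.avg_user_impact,
    mid.equality_columns,
    mid.inequality_columns,
    mid.included_columns
FROM sys.dm_db_missing_index_details AS mid
JOIN sys.dm_db_missing_index_groups AS mig
    ON mig.index_handle = mid.index_handle
JOIN sys.dm_db_missing_index_group_stats AS migs
    ON migs.group_handle = mig.index_group_handle
ORDER BY migs.avg_user_impact * migs.user_seeks DESC;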

Bonus: store a snapshot for trending
If you want a daily/15-minute snapshot to trend over time:
-- One-time setup
CREATE TABLE dbo.TopQuerySnapshot
(
    captured_at           DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME(),
    database_name         SYSNAME,
    executions            BIGINT,
    total_cpu_ms          BIGINT,
    avg_cpu_ms            DECIMAL(18,2),
    avg_duration_ms       DECIMAL(18,2),
    total_reads           BIGINT,
    query_text            NVARCHAR(MAX)
);

-- Collector (schedule as an Agent Job)
INSERT INTO dbo.TopQuerySnapshot (database_name, executions, total_cpu_ms, avg_cpu_ms, avg_duration_ms, total_reads, query_text)
SELECT TOP 50
    DB_NAME(st.dbid),
    qs.execution_count,
    qs.total_worker_time/1000,
    (qs.total_worker_time/1000.0)/NULLIF(qs.execution_count,0),
    (qs.total_elapsed_time/1000.0)/NULLIF(qs.execution_count,0),
    (qs.total_logical_reads + qs.total_physical_reads),
    SUBSTRING(st.text, (qs.statement_start_offset/2)+1,
        ((CASE qs.statement_end_offset WHEN -1 THEN DATALENGTH(st.text) ELSE qs.statement_end_offset END - qs.statement_start_offset)/2) + 1)
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
ORDER BY qs.total_worker_time DESC;

Now you can chart trends and prove improvements.

Common questions
Does this include plans not in cache?

No. DMVs reflect what’s cached. For long-term history, enable Query Store (SQL Server 2016+) and analyze sys.query_store_runtime_stats.
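
If Query Store is enabled, a rough equivalent of the CPU ranking above against its catalog views looks like this (avg_cpu_time is in microseconds):

SELECT TOP 20
    qt.query_sql_text,
    SUM(rs.count_executions)                             AS executions,
    SUM(rs.avg_cpu_time * rs.count_executions) / 1000.0  AS total_cpu_ms
FROM sys.query_store_query_text AS qt
JOIN sys.query_store_query AS q  ON q.query_text_id = qt.query_text_id
JOIN sys.query_store_plan AS p   ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats AS rs ON rs.plan_id = p.plan_id
GROUP BY qt.query_sql_text
ORDER BY total_cpu_ms DESC;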

What about currently blocked sessions?
Use the “live view” script; chase the blocking_session_id, then inspect that session’s SQL text and plan.

Conclusion

CPU comes first, followed by IO and duration. If users are currently complaining, use the live view. Check its strategy, add the appropriate index, correct data types, and retest when you've identified a heavy hitter. Big results are typically achieved with small, targeted modifications.

HostForLIFE.eu SQL Server 2022 Hosting
HostForLIFE.eu is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.



SQL Server Hosting - HostForLIFE :: Working with Big Data in SQL Server

clock September 2, 2025 08:06 by author Peter

The amount of data generated and stored by organizations has been growing exponentially in recent years. Big data refers to these large and complex data sets, which traditional data processing applications cannot handle. Managing and analyzing big data is becoming increasingly important for businesses to gain insights and stay ahead of the competition. Microsoft SQL Server is a powerful database management system capable of handling big data.

In this article, we will use appropriate examples to explore the techniques for working with big data in SQL Server.

Partitioning
Partitioning is a technique used to break large tables into smaller, more manageable pieces called partitions. Each partition contains a subset of the data, which can be processed and queried independently. SQL Server supports horizontal partitioning, which involves splitting data based on a column or set of columns.

For example, consider a table containing sales data for a large retail store. Partitioning the table based on the year column would create a separate partition for each year of data. This would allow queries to be run on individual partitions rather than the entire table, resulting in faster query performance. First we need to create a table with partitioned indexes. We can use the below code to create a table called "Sales" with a partitioned index on the "SaleDate" column:
CREATE TABLE [dbo].[Sales]
(
    [SaleID] [int] NOT NULL PRIMARY KEY NONCLUSTERED,
    [Product] [nvarchar](50) NOT NULL,
    [SaleDate] [date] NOT NULL,
    [SaleAmount] [money] NOT NULL
);
GO

CREATE PARTITION FUNCTION [PF_Sales_SaleDate]
    (date)
AS RANGE RIGHT FOR VALUES ('2023-01-01', '2023-02-01', '2023-03-01');

-- The filegroups referenced below must already exist in the database
CREATE PARTITION SCHEME [PS_Sales_SaleDate]
    AS PARTITION [PF_Sales_SaleDate]
    TO ([FG_Sales_202301], [FG_Sales_202302], [FG_Sales_202303], [FG_Sales_202304]);

CREATE CLUSTERED INDEX [CI_Sales_SaleDate]
ON [dbo].[Sales] ([SaleDate])
ON [PS_Sales_SaleDate] ([SaleDate]);


The code creates a table with columns for SaleID, Product, SaleDate, and SaleAmount, and defines a nonclustered primary key on the SaleID column so that the clustered index can instead be built on the partitioning column. The partition function and scheme partition the table on the SaleDate column, giving one partition for dates before January 2023 and one each for January, February, and March 2023 onward. Finally, a clustered index is created on the SaleDate column, using the partition scheme to distribute the data across the partitions. Once we have created the table, we can insert some data into it using the query below.

INSERT INTO [dbo].[Sales] ([SaleID], [Product], [SaleDate], [SaleAmount])
VALUES (1, 'Product A', '2023-01-01', 100.00), (2, 'Product B', '2023-01-02', 200.00),
       (3, 'Product C', '2023-01-03', 300.00), (4, 'Product A', '2023-02-01', 400.00),
       (5, 'Product B', '2023-02-02', 500.00), (6, 'Product C', '2023-02-03', 600.00);

Now whenever we query the Sales table, the partitioned index will automatically be used, and SQL Server can scan only the partitions that contain the relevant data. This improves query performance and reduces the amount of disk I/O required. Partitioning is a powerful feature in SQL Server that can significantly improve the performance of queries on large tables: by partitioning a table based on a specific column, SQL Server scans only the relevant partitions.
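
To verify that the rows really landed in the expected partitions, the $PARTITION function maps each SaleDate to its partition number:

SELECT $PARTITION.PF_Sales_SaleDate(SaleDate) AS PartitionNumber,
       COUNT(*) AS RowsInPartition
FROM dbo.Sales
GROUP BY $PARTITION.PF_Sales_SaleDate(SaleDate);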

Columnstore Indexes
Columnstore indexes are a specialized type of index that is optimized for large data warehouses. They store data in columns rather than rows, which makes them much more efficient for querying large datasets. Columnstore indexes are particularly useful for frequently queried but rarely updated data.

For example, consider a table containing customer sales data for a large online retailer. A clustered columnstore index could be created on the table, which would allow very fast querying of the total sales for each product. First we need to create a table called "Sales" and then build the columnstore index on it:
CREATE TABLE [dbo].[Sales]
(
    [SaleID] [int] NOT NULL PRIMARY KEY NONCLUSTERED,
    [Product] [nvarchar](50) NOT NULL,
    [SaleDate] [date] NOT NULL,
    [SaleAmount] [money] NOT NULL
);
GO

-- A clustered columnstore index always covers every column, so no column list is specified
CREATE CLUSTERED COLUMNSTORE INDEX [CCI_Sales]
ON [dbo].[Sales];


The query above creates a clustered columnstore index, which means the entire table is stored in a columnar format. Queries against the Sales table, such as summing SaleAmount per Product, now read only the columns they need and are much faster than the same query against a traditional row-based index.
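
For example, an aggregation like the one below benefits directly from the columnstore, because only the Product and SaleAmount columns are read:

SELECT Product, SUM(SaleAmount) AS TotalSales
FROM dbo.Sales
GROUP BY Product
ORDER BY TotalSales DESC;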

In-Memory OLTP
In-Memory OLTP, introduced in SQL Server 2014, allows for creating memory-optimized tables. These tables are stored entirely in memory, which makes them much faster than traditional disk-based tables. In-Memory OLTP is beneficial for applications requiring high performance and low latency. For example, consider a table containing stock market data: a memory-optimized table could store the latest market data, allowing very fast querying of real-time stock prices. To create a memory-optimized filegroup, which will contain the memory-optimized tables, we can use the query below:
ALTER DATABASE [MyDatabase] ADD FILEGROUP [InMemoryFilegroup] CONTAINS MEMORY_OPTIMIZED_DATA;

It will add a new filegroup called "InMemoryFilegroup" to the "MyDatabase" database, which contains memory-optimized data.
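The filegroup also needs at least one container on disk before memory-optimized tables can be created; a minimal sketch (the logical name and path are placeholders):

ALTER DATABASE [MyDatabase]
ADD FILE (NAME = 'InMemoryContainer', FILENAME = 'D:\Data\InMemoryContainer')
TO FILEGROUP [InMemoryFilegroup];

Now we will create a memory-optimized table that will be stored entirely in memory: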
CREATE TABLE [dbo].[MarketData_MemoryOptimized]
(
    [Name] [nvarchar](50) NOT NULL PRIMARY KEY NONCLUSTERED,
    [Price] [decimal](18, 2) NOT NULL,
    [Timestamp] [datetime2](0) NOT NULL,
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);


The "MarketData_MemoryOptimized" table is defined as MEMORY_OPTIMIZED, which means it will be stored entirely in memory, and DURABILITY is set to SCHEMA_ONLY, which means it won't be written to disk. In-Memory OLTP can be a powerful tool for storing and querying real-time data in memory-optimized tables. By storing data in memory, In-Memory OLTP can improve query performance and reduce latency for certain types of database workloads, such as real-time market data.

PolyBase

PolyBase is a feature in SQL Server that allows for integrating external data sources, such as Hadoop or Azure Blob Storage. PolyBase allows for querying structured and unstructured data, making it a powerful tool for working with big data.

For example, consider a large financial services company that stores customer data in Hadoop. The company may want to analyze customer behavior and trends to improve their services, but querying the large amount of data stored in Hadoop can be difficult and time-consuming. This is where PolyBase comes in - by connecting SQL Server directly to the Hadoop data source, the company can query the data easily and quickly, allowing for in-depth analysis of customer behavior and trends.
EXEC sp_configure 'polybase enabled', 1;
RECONFIGURE;

-- Assumes a database-scoped credential named HadoopCredential has already been created
CREATE EXTERNAL DATA SOURCE HadoopDataSource
WITH (
    TYPE = HADOOP,
    LOCATION = 'hdfs://<HadoopNameNode>:<PortNumber>',
    CREDENTIAL = HadoopCredential
);

-- An external file format object is required before the external table can reference it
CREATE EXTERNAL FILE FORMAT TextFileFormat
WITH (
    FORMAT_TYPE = DELIMITEDTEXT,
    FORMAT_OPTIONS (FIELD_TERMINATOR = ',')
);

CREATE EXTERNAL TABLE CustomerData_Hadoop
(
    CustomerID INT,
    FirstName VARCHAR(50),
    LastName VARCHAR(50),
    EmailAddress VARCHAR(50),
    Address VARCHAR(100),
    City VARCHAR(50),
    State VARCHAR(2),
    ZipCode VARCHAR(10)
)
WITH (
    LOCATION = '/customerdata',
    DATA_SOURCE = HadoopDataSource,
    FILE_FORMAT = TextFileFormat
);

The above code enables PolyBase in SQL Server, creates an external data source called HadoopDataSource that points to the Hadoop cluster at hdfs://<HadoopNameNode>:<PortNumber>, defines an external file format called TextFileFormat describing the delimited text files, and creates an external table called CustomerData_Hadoop that maps to the data stored in Hadoop. The LOCATION option specifies the location of the data in Hadoop, the DATA_SOURCE option specifies the external data source to use, and the FILE_FORMAT option points to the external file format object.

PolyBase allows us to query external data sources directly from SQL Server. This can be a valuable tool for organizations that have large amounts of data stored in external sources and need to query that data quickly and easily.
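
Once the external table exists, it can be queried like any local table; for example, a simple filter against the Hadoop-backed data:

SELECT TOP 10 CustomerID, FirstName, LastName, City, State
FROM CustomerData_Hadoop
WHERE State = 'NY';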

Conclusion

SQL Server offers a number of effective tools and methods for handling large amounts of data. There are numerous methods for maximizing the speed of big data sets, ranging from partitioning and columnstore indexes to In-Memory OLTP and PolyBase. In today's data-driven world, companies may remain ahead of the competition and obtain insightful knowledge by employing these strategies.

HostForLIFE.eu SQL Server 2022 Hosting
HostForLIFE.eu is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.


