European Windows 2012 Hosting BLOG

BLOG about Windows 2012 Hosting and SQL 2012 Hosting - Dedicated to European Windows Hosting Customers

SQL Server Hosting - HostForLIFE :: SQL Commenting Best Practices: Advantages, Guidelines, and Illustrations

clock October 31, 2025 07:06 by author Peter

Writing SQL code involves more than making it work; the code also needs to be comprehensible, maintainable, and audit-friendly. Careful commenting is part of writing professional SQL. This article covers the advantages of SQL comments, common commenting standards, and examples for SQL objects such as tables, procedures, functions, triggers, sequences, and indexes.

Why Comment in SQL Code?
Commenting SQL code brings several benefits:

  • Improves Readability: Helps developers understand your code without digging into every line.
  • Facilitates Maintenance: Makes it easier to fix bugs or enhance features later.
  • Audit & Documentation: Useful in enterprise environments for tracking who created or modified objects.
  • Reduces Human Error: Helps teams follow standards and avoid mistakes.
  • Supports Collaboration: Makes it easier for multiple developers to work on the same database.

Commenting Standards

A good SQL commenting standard should include:

  • Header Block Comments: Metadata about the object: author, date, description, and notes.
  • Inline Comments: Explain non-obvious logic or why a piece of code exists.
  • Consistency: Same style across all objects (tables, triggers, procedures, etc.).
  • Avoid Obvious Comments: Explain why instead of what.
  • Audit Information: Optional: who created or modified the object, when, and why.

Effective Commenting for Tables, Functions, Triggers, Indexes, and Sequences

1. Tables

/******************************************************************************************
* Table Name   : dbo.Roles
* Description  : Stores system roles with audit columns.
* Author       : Peter
* Created On   : 2025-10-09
* Last Modified: 2025-10-09
* Notes:
*   - Includes CreatedBy, CreatedDate, UpdatedBy, UpdatedDate columns for auditing.
******************************************************************************************/

CREATE TABLE dbo.Roles (
    RoleId INT IDENTITY(1,1) PRIMARY KEY,           -- Unique identifier for the role
    RoleName VARCHAR(50) NOT NULL,                  -- Name of the role
    Description VARCHAR(255) NULL,                  -- Optional description of the role
    CreatedBy VARCHAR(100) NULL,                    -- User who created the record
    CreatedDate DATETIME DEFAULT GETDATE(),         -- Timestamp when record was created
    UpdatedBy VARCHAR(100) NULL,                    -- User who last updated the record
    UpdatedDate DATETIME NULL                        -- Timestamp when record was last updated
);

Important points

  • Use block comments for headers.
  • Use inline comments for columns.
  • Include author, creation date, and purpose.

2. Functions
/******************************************************************************************
* Function Name : fn_GetRoleName
* Description   : Returns the RoleName for a given RoleId.
* Author        : Peter
* Created On    : 2025-10-09
******************************************************************************************/
CREATE FUNCTION dbo.fn_GetRoleName (@RoleId INT)
RETURNS VARCHAR(50)
AS
BEGIN
    DECLARE @RoleName VARCHAR(50);

    -- Fetch RoleName from Roles table
    SELECT @RoleName = RoleName
    FROM dbo.Roles
    WHERE RoleId = @RoleId;

    RETURN @RoleName;
END;
GO

Always think about where and how the function will be used. If it’s called millions of times in a query, performance optimization is critical.

3. Triggers

/******************************************************************************************
* Trigger Name : trg_Update_Roles
* Table Name   : dbo.Roles
* Description  : Automatically updates UpdatedDate when a record in Roles is modified.
* Author       : Peter
* Created On   : 2025-10-09
* Last Modified: 2025-10-09
* Notes:
*   - UpdatedBy must be set manually.
*   - Ensures audit consistency across updates.
******************************************************************************************/

CREATE TRIGGER trg_Update_Roles
ON dbo.Roles
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;  -- Prevent "rows affected" messages for better performance

    BEGIN TRY
        -- Update the UpdatedDate to current timestamp for all modified rows
        UPDATE r
        SET r.UpdatedDate = GETDATE()
        FROM dbo.Roles AS r
        INNER JOIN inserted AS i ON r.RoleId = i.RoleId;
    END TRY
    BEGIN CATCH
        -- Error handling: raise meaningful error message
        DECLARE @ErrorMessage NVARCHAR(4000) = ERROR_MESSAGE();
        DECLARE @ErrorSeverity INT = ERROR_SEVERITY();
        DECLARE @ErrorState INT = ERROR_STATE();

        RAISERROR('Error in trigger trg_Update_Roles: %s', @ErrorSeverity, @ErrorState, @ErrorMessage);
    END CATCH;
END;
GO

4. Sequences
/******************************************************************************************
* Sequence Name : seq_OrderId
* Description   : Generates unique OrderId for Orders table.
* Author        : Peter
* Created On    : 2025-10-09
******************************************************************************************/
CREATE SEQUENCE dbo.seq_OrderId
    START WITH 1
    INCREMENT BY 1;


5. Indexes
/******************************************************************************************
* Index Name   : IX_Roles_RoleName
* Table Name   : dbo.Roles
* Description  : Non-clustered index on RoleName for faster search.
* Author       : Peter
* Created On   : 2025-10-09
******************************************************************************************/
CREATE NONCLUSTERED INDEX IX_Roles_RoleName
ON dbo.Roles (RoleName);

6. Stored Procedures
/******************************************************************************************
* Procedure Name : sp_GetRoleById
* Description    : Retrieves role details by RoleId.
* Author         : Peter
* Created On     : 2025-10-09
* Last Modified  : 2025-10-09
* Parameters:
*   @RoleId INT - Role identifier
* Returns:
*   Role details from dbo.Roles
******************************************************************************************/
CREATE PROCEDURE dbo.sp_GetRoleById
    @RoleId INT
AS
BEGIN
    SET NOCOUNT ON;  -- Prevent "rows affected" messages

    -- Select role information
    SELECT RoleId, RoleName, Description, CreatedBy, CreatedDate, UpdatedBy, UpdatedDate
    FROM dbo.Roles
    WHERE RoleId = @RoleId;
END;
GO


SQL Code Smells & Rules for Tables, Functions, Triggers, Indexes and Sequences
1. Tables
Code Smells

  • Poor naming conventions: Table names like tbl1, data, or temp are vague.
  • Too many columns: More than 50–100 columns might indicate the table isn’t normalized.
  • Unused columns: Columns never queried or used in transactions.
  • Repeated data: Indicates denormalization or missing relationships.
  • No primary key: Leads to duplicate rows and poor indexing.
  • Excessive nullable columns: Hard to enforce data integrity.

Best Practices

  • Use clear, singular, meaningful names: Employee, ProductOrder.
  • Normalize data to at least 3NF unless justified.
  • Always define primary keys and appropriate foreign keys.
  • Use data types appropriately: don’t use VARCHAR(255) for small text.
  • Limit NULLs; use default values where applicable.
  • Add comments to describe table's purpose:
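One way to attach that description so it lives with the table itself is an extended property. A minimal sketch, reusing the dbo.Roles table from the example above (the description text is illustrative):

EXEC sys.sp_addextendedproperty
    @name = N'MS_Description',
    @value = N'Stores system roles with audit columns.',
    @level0type = N'SCHEMA', @level0name = N'dbo',
    @level1type = N'TABLE',  @level1name = N'Roles';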

2. Functions
Code Smells

  • Functions that do too much (multiple responsibilities).
  • Functions that access tables unnecessarily inside loops.
  • Functions with side effects (modifying data).
  • Poor naming (func1, doSomething).

Best Practices

  • Functions should be pure (no side effects).
  • One function → one purpose.
  • Use a schema prefix to avoid ambiguity.
  • Add comments explaining input/output:
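For instance, the header block can spell out the inputs and output explicitly. A brief sketch, extending the fn_GetRoleName header shown earlier:

/******************************************************************************************
* Function Name : fn_GetRoleName
* Parameters    : @RoleId INT - identifier of the role to look up
* Returns       : VARCHAR(50) - the matching RoleName, or NULL if no role exists
******************************************************************************************/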

3. Triggers
Code Smells

  • Triggers that perform complex logic or call other triggers → hard to maintain.
  • Silent failures: errors not logged.
  • Multiple triggers for the same action → order dependency issues.
  • Triggers updating the same table → risk of recursion.

Best Practices

  • Keep triggers small and specific.
  • Prefer using constraints or stored procedures instead of triggers if possible.
  • Log errors and operations.
  • Use AFTER vs INSTEAD OF carefully.

4. Indexes
Code Smells

  • Too many indexes → slows down writes.
  • Unused indexes → waste of storage.
  • Non-selective indexes → low performance gain.
  • Indexing columns that are frequently updated → high maintenance cost.

Best Practices

  • Index frequently queried columns used in WHERE, JOIN, and ORDER BY.
  • Avoid indexing columns with low cardinality (like gender).
  • Monitor index usage and remove unused indexes.
  • Name indexes consistently: IX_Table_Column.
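Index usage can be checked with the index usage stats DMV. A minimal sketch for spotting nonclustered indexes with no reads since the last SQL Server restart (treat the result as candidates to review, not to drop blindly):

SELECT OBJECT_NAME(i.object_id) AS TableName,
       i.name AS IndexName,
       s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
FROM sys.indexes AS i
LEFT JOIN sys.dm_db_index_usage_stats AS s
       ON s.object_id = i.object_id
      AND s.index_id = i.index_id
      AND s.database_id = DB_ID()
WHERE i.type_desc = 'NONCLUSTERED'
  AND ISNULL(s.user_seeks, 0) + ISNULL(s.user_scans, 0) + ISNULL(s.user_lookups, 0) = 0;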

5. Sequences
Code Smells

  • Using sequences unnecessarily for small tables.
  • Sequences with large gaps → indicate poor management.
  • Sequences without ownership or naming standards.

Best Practices

  • Use sequences for unique, incremental values across tables.
  • Define start, increment, min, and max values.
  • Always name sequences clearly: SEQ_EmployeeID.
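A fuller definition makes those choices explicit. A minimal sketch (the data type, bounds, and cache size are illustrative):

CREATE SEQUENCE dbo.SEQ_EmployeeID
    AS INT
    START WITH 1
    INCREMENT BY 1
    MINVALUE 1
    MAXVALUE 2147483647
    NO CYCLE
    CACHE 50;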

6. Stored Procedures
Common SQL Smells

  • Too long / complex procedures – doing multiple unrelated tasks in one SP.
  • Hard-coded values – making the procedure inflexible and non-reusable.
  • No parameters or overuse of global variables – reduces modularity.
  • Missing error handling – errors are swallowed or unlogged.
  • Excessive dynamic SQL – may lead to SQL injection or maintenance issues.
  • Returning result sets instead of using output parameters when needed – inconsistent usage.
  • Tightly coupled logic – SP depends heavily on other SPs or tables, making it hard to maintain.
  • Unused or deprecated SPs – bloats the database.

Best Practices

  • One procedure, one purpose – keep SPs focused and small.
  • Use parameters – avoid hard-coded values; makes SP reusable.
  • Error handling – use TRY…CATCH to log or handle errors gracefully.
  • Use schema prefix & meaningful names – e.g., usp_InsertEmployee.
  • Avoid excessive dynamic SQL – use static queries where possible; if dynamic SQL is needed, validate input.
  • Document logic – add comments for inputs, outputs, and special cases.
  • Return status codes or output parameters instead of always returning full result sets.
  • Use sequences or identity columns appropriately – only when unique incremental IDs are required.
  • Avoid unnecessary loops – leverage set-based operations for performance.
  • Maintainability – regularly review SPs and remove unused or deprecated ones.
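Several of these points fit in one small example: a focused procedure that returns its result through an output parameter and a status code, with TRY...CATCH around the work. A minimal sketch (the procedure name and logic are illustrative, reusing the dbo.Roles table from earlier):

CREATE PROCEDURE dbo.usp_GetRoleCount
    @RoleCount INT OUTPUT
AS
BEGIN
    SET NOCOUNT ON;

    BEGIN TRY
        -- Return the value through the output parameter instead of a result set
        SELECT @RoleCount = COUNT(*) FROM dbo.Roles;
        RETURN 0;  -- status code: success
    END TRY
    BEGIN CATCH
        SET @RoleCount = NULL;
        RETURN ERROR_NUMBER();  -- status code: error number for the caller
    END CATCH
END;
GO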

Developers, DBAs, and reviewers can use this document as a reference to keep clean, manageable SQL code and steer clear of performance issues. It is a professional requirement to use proper SQL comments. It enhances auditing, maintainability, and readability. By adhering to these guidelines, you can make sure that your database is reliable, collaborative, and future-proof. While writing code, commenting might take a few extra minutes, but it saves hours on maintenance and debugging.

I appreciate you reading my content. I hope you now have a thorough understanding of SQL Commenting Best Practices: Benefits, Standards & Examples. Although adhering to coding standards and using proper SQL comments may seem insignificant, they have a significant impact on producing database code that is professional, readable, and maintainable. By putting these best practices into practice, you can make sure that your projects are long-lasting, collaborative, and manageable.


Happy coding and keep your SQL Code clean!

HostForLIFE.eu SQL Server 2022 Hosting
HostForLIFE.eu is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.



SQL Server Hosting - HostForLIFE :: Automating the Creation of SQL Queries with LlamaIndex and Snowflake

clock October 24, 2025 09:08 by author Peter

This post is a thorough walkthrough of the open-source project Text-to-SQL-Snowflake-LlamaIndex, which links Snowflake, a powerful cloud data platform, with LlamaIndex, a leading framework for LLM data orchestration. The repository shows how to use a large language model (LLM) to convert natural language inputs into executable SQL queries against a Snowflake database. The guide covers every aspect of the project, from architecture and environment setup to workflow samples, so data engineers, analysts, and AI developers can quickly add natural language interfaces to their data systems.

What Is Text-to-SQL?
Text-to-SQL systems use LLMs to convert plain English questions (e.g., "Show me total sales in Q1 by region") into executable SQL queries. These systems remove the need for users to understand SQL syntax or schema structures, making data access more inclusive.

Why LlamaIndex?

LlamaIndex acts as a bridge between unstructured data and LLMs. It indexes schema metadata, manages context retrieval, and structures prompts for query generation.

Why Snowflake?

Snowflake offers scalable, cloud-based data warehousing with support for Python integrations via Snowflake Connector for Python and Snowpark. It is ideal for real-time querying from LLM-based agents.

Step-by-Step Walkthrough


1. Repository Overview
GitHub: tuanacelik/text-to-sql-snowflake-llamaindex

This project integrates:
  • LlamaIndex for text-to-SQL translation.
  • Snowflake Connector for query execution.
  • OpenAI / GPT-based models for language understanding.

2. Architecture



3. Environment Setup

Dependencies:
pip install llama-index
pip install snowflake-connector-python
pip install python-dotenv
pip install openai

Environment Variables (.env):
OPENAI_API_KEY=YOUR_API_KEY
SNOWFLAKE_ACCOUNT=YOUR_ACCOUNT
SNOWFLAKE_USER=YOUR_USERNAME
SNOWFLAKE_PASSWORD=YOUR_PASSWORD
SNOWFLAKE_WAREHOUSE=YOUR_WAREHOUSE
SNOWFLAKE_DATABASE=YOUR_DATABASE
SNOWFLAKE_SCHEMA=YOUR_SCHEMA

4. Connecting to Snowflake
Python Connection Example:
import snowflake.connector
import os
from dotenv import load_dotenv

load_dotenv()

conn = snowflake.connector.connect(
    user=os.getenv("SNOWFLAKE_USER"),
    password=os.getenv("SNOWFLAKE_PASSWORD"),
    account=os.getenv("SNOWFLAKE_ACCOUNT"),
    warehouse=os.getenv("SNOWFLAKE_WAREHOUSE"),
    database=os.getenv("SNOWFLAKE_DATABASE"),
    schema=os.getenv("SNOWFLAKE_SCHEMA")
)

This establishes a live connection that allows the system to send LLM-generated queries for execution.

5. LlamaIndex Integration
LlamaIndex is responsible for schema parsing and query generation.

Schema Loading Example:
from llama_index import SQLDatabase, LLMPredictor, ServiceContext
from llama_index.indices.struct_store import SQLTableIndex
from llama_index.llms import OpenAI

sql_database = SQLDatabase(conn)
llm_predictor = LLMPredictor(llm=OpenAI(temperature=0))
service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor)

index = SQLTableIndex(
    tables=["SALES", "CUSTOMERS"],
    sql_database=sql_database,
    service_context=service_context
)


6. Generating SQL Queries from Text

Example:
query_engine = index.as_query_engine()
response = query_engine.query("Show me total sales by region for Q1 2024")
print(response)

Output (generated SQL):
SELECT region, SUM(amount) AS total_sales
FROM SALES
WHERE QUARTER(sale_date) = 1 AND YEAR(sale_date) = 2024
GROUP BY region;


7. Workflow JSON

Below is a minimal example JSON describing the full workflow:
{
  "workflow": {
    "input": "Show me the top 5 customers by revenue last month",
    "llm_engine": "OpenAI GPT-4",
    "schema_source": "Snowflake",
    "steps": [
      "Extract schema metadata using LlamaIndex",
      "Parse natural language input",
      "Generate SQL query",
      "Execute query via Snowflake connector",
      "Return structured results"
    ],
    "output_format": "JSON"
  }
}


Use Cases / Scenarios

  • Business Intelligence Automation: Non-technical users can access dashboards via natural language.
  • Embedded Analytics: Integrate LLM-driven querying inside SaaS platforms.
  • DataOps Automation: Generate schema-aware SQL queries programmatically.
  • AI Agents: Connect conversational assistants directly to Snowflake datasets.

Limitations / Considerations

  • Schema Ambiguity: Without clear metadata, LLMs may misinterpret table relationships.
  • Security: Ensure user inputs are sanitized before query execution to prevent injection.
  • Latency: Query generation involves LLM API calls, which can introduce delays.
  • Model Drift: Schema changes require reindexing via LlamaIndex.

Fixes and Troubleshooting

  • Authentication Error: incorrect Snowflake credentials. Fix: verify the .env variables.
  • Empty Responses: model temperature too low or schema incomplete. Fix: adjust the temperature or reindex the tables.
  • Invalid SQL: ambiguous query phrasing. Fix: add schema hints or context.
  • Slow Queries: Snowflake warehouse asleep. Fix: resume the warehouse before execution.
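For the last case, resuming the warehouse is a single Snowflake statement. A minimal sketch (the warehouse name is illustrative):

ALTER WAREHOUSE my_warehouse RESUME IF SUSPENDED;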

Conclusion
The Text-to-SQL-Snowflake-LlamaIndex project exemplifies the fusion of AI reasoning and data engineering. By connecting LlamaIndex with Snowflake, developers can enable natural language interfaces that democratize data access. As Generative Engine Optimization (GEO) principles evolve, such projects demonstrate how structured data can become machine-readable, citable, and actionable by AI.

Key takeaway: Text-to-SQL automation with LlamaIndex on Snowflake transforms how users interact with enterprise data—bridging the gap between human language and database logic.




SQL Server Hosting - HostForLIFE :: C# CSV/MCX File Conversion to DataTable and Bulk SQL Server Insert

clock October 21, 2025 08:22 by author Peter

Processing large CSV/MCX files, filtering records, and inserting them into a database efficiently are common tasks in financial applications, particularly when dealing with MCX or stock market data. This article demonstrates a methodical approach using C#.

1. Creating a DataTable from a CSV or MCX File
The convertdatatable method reads a CSV file and turns it into a DataTable:

public DataTable convertdatatable(string Filepath)
{
    DataTable FinalTable = new DataTable();

    // Define columns
    for (int i = 1; i <= 37; i++)
        FinalTable.Columns.Add("column" + i, typeof(string));

    FinalTable.Columns.Add("Count", typeof(int));

    StreamReader sr = new StreamReader(Filepath);
    try
    {
        string Fulltext = sr.ReadToEnd();
        string[] rows = Fulltext.Split('\n');

        if (rows.Length > 1)
        {
            for (int i = 1; i < rows.Length; i++)
            {
                string[] rowValues = rows[i].Split(',');
                if (rowValues.Length > 1)
                {
                    DataRow dr = FinalTable.NewRow();
                    for (int j = 0; j < 37; j++)
                        dr[j] = rowValues.Length > j ? rowValues[j].Trim() : "";

                    dr[37] = i; // Track row number
                    FinalTable.Rows.Add(dr);
                }
                else
                {
                    obj.WriteFailureLogFile("No Data in File");
                }
            }
        }
        else
        {
            obj.WriteFailureLogFile("No Data in File");
        }
    }
    catch (Exception ex)
    {
        obj.WriteErrorLogFile(ex.ToString(),"Converting");
    }
    finally
    {
        sr.Close();
        sr.Dispose();
    }
    return FinalTable;
}

(or)

public DataTable convertdatatable(string Filepath)
{
    #region table
    DataTable FinalTable = new DataTable();
    FinalTable.Columns.Add("column1", typeof(String));
    FinalTable.Columns.Add("column2", typeof(String));
    FinalTable.Columns.Add("column3", typeof(String));
    FinalTable.Columns.Add("column4", typeof(String));
    FinalTable.Columns.Add("column5", typeof(String));
    FinalTable.Columns.Add("column6", typeof(String));
    FinalTable.Columns.Add("column7", typeof(String));
    FinalTable.Columns.Add("column8", typeof(String));
    FinalTable.Columns.Add("column9", typeof(String));
    FinalTable.Columns.Add("column10", typeof(String));
    FinalTable.Columns.Add("column11", typeof(String));
    FinalTable.Columns.Add("column12", typeof(String));
    FinalTable.Columns.Add("column13", typeof(String));
    FinalTable.Columns.Add("column14", typeof(String));
    FinalTable.Columns.Add("column15", typeof(String));
    FinalTable.Columns.Add("column16", typeof(String));
    FinalTable.Columns.Add("column17", typeof(String));
    FinalTable.Columns.Add("column18", typeof(String));
    FinalTable.Columns.Add("column19", typeof(String));
    FinalTable.Columns.Add("column20", typeof(String));
    FinalTable.Columns.Add("column21", typeof(String));
    FinalTable.Columns.Add("column22", typeof(String));
    FinalTable.Columns.Add("column23", typeof(String));
    FinalTable.Columns.Add("column24", typeof(String));
    FinalTable.Columns.Add("column25", typeof(String));
    FinalTable.Columns.Add("column26", typeof(String));
    FinalTable.Columns.Add("column27", typeof(String));
    FinalTable.Columns.Add("column28", typeof(String));
    FinalTable.Columns.Add("column29", typeof(String));
    FinalTable.Columns.Add("column30", typeof(String));
    FinalTable.Columns.Add("column31", typeof(String));
    FinalTable.Columns.Add("column32", typeof(String));
    FinalTable.Columns.Add("column33", typeof(String));
    FinalTable.Columns.Add("column34", typeof(String));
    FinalTable.Columns.Add("column35", typeof(String));
    FinalTable.Columns.Add("column36", typeof(String));
    FinalTable.Columns.Add("column37", typeof(String));
    FinalTable.Columns.Add("Count", typeof(int));
    #endregion
    StreamReader sr = new StreamReader(Filepath);
    try
    {
        string filepath = Filepath;
        string Fulltext;
        Fulltext = sr.ReadToEnd().ToString();
        string[] rows = Fulltext.Split('\n');
        if (rows.Count() > 1)
        {

            for (int i = 1; i < rows.Count(); i++)
            {

                string[] rowValues = rows[i].Split(',');

                string column1 = "", column2 = "", column3 = "", column4 = "", column5 = "", column6 = "", column7 = "", column8 = "", column9 = "", column10 = "", column11 = "", column12 = "",
                       column13 = "", column14 = "", column15 = "", column16 = "", column17 = "", column18 = "", column19 = "", column20 = "", column21 = "", column22 = "", column23 = "", column24 = "",
                       column25 = "", column26 = "", column27 = "", column28 = "", column29 = "", column30 = "", column31 = "", column32 = "", column33 = "", column34 = "", column35 = "", column36 = "",
                    column37 = "";

                if (rowValues.Length > 1)
                {
                    #region assin
                    column1 = rowValues[0].ToString().Trim();
                    column2 = rowValues[1].ToString().Trim();
                    column3 = rowValues[2].ToString().Trim();
                    column4 = rowValues[3].ToString().Trim();
                    column5 = rowValues[4].ToString().Trim();
                    column6 = rowValues[5].ToString().Trim();
                    column7 = rowValues[6].ToString().Trim();
                    column8 = rowValues[7].ToString().Trim();
                    column9 = rowValues[8].ToString().Trim();
                    column10 = rowValues[9].ToString().Trim();
                    column11 = rowValues[10].ToString().Trim();
                    column12 = rowValues[11].ToString().Trim();
                    column13 = rowValues[12].ToString().Trim();
                    column14 = rowValues[13].ToString().Trim();
                    column15 = rowValues[14].ToString().Trim();
                    column16 = rowValues[15].ToString().Trim();
                    column17 = rowValues[16].ToString().Trim();
                    column18 = rowValues[17].ToString().Trim();
                    column19 = rowValues[18].ToString().Trim();
                    column20 = rowValues[19].ToString().Trim();
                    column21 = rowValues[20].ToString().Trim();
                    column22 = rowValues[21].ToString().Trim();
                    column23 = rowValues[22].ToString().Trim();
                    column24 = rowValues[23].ToString().Trim();
                    column25 = rowValues[24].ToString().Trim();
                    column26 = rowValues[25].ToString().Trim();
                    column27 = rowValues[26].ToString().Trim();
                    column28 = rowValues[27].ToString().Trim();
                    column29 = rowValues[28].ToString().Trim();
                    column30 = rowValues[29].ToString().Trim();
                    column31 = rowValues[30].ToString().Trim();
                    column32 = rowValues[31].ToString().Trim();
                    column33 = rowValues[32].ToString().Trim();
                    column34 = rowValues[33].ToString().Trim();
                    column35 = rowValues[34].ToString().Trim();
                    column36 = rowValues[35].ToString().Trim();
                    column37 = rowValues[36].ToString().Trim();
                    #endregion
                    //Add
                    DataRow dr = FinalTable.NewRow();
                    #region adddata
                    dr[0] = column1;
                    dr[1] = column2;
                    dr[2] = column3;
                    dr[3] = column4;
                    dr[4] = column5;
                    dr[5] = column6;
                    dr[6] = column7;
                    dr[7] = column8;
                    dr[8] = column9;
                    dr[9] = column10;
                    dr[10] = column11;
                    dr[11] = column12;
                    dr[12] = column13;
                    dr[13] = column14;
                    dr[14] = column15;
                    dr[15] = column16;
                    dr[16] = column17;
                    dr[17] = column18;
                    dr[18] = column19;
                    dr[19] = column20;
                    dr[20] = column21;
                    dr[21] = column22;
                    dr[22] = column23;
                    dr[23] = column24;
                    dr[24] = column25;
                    dr[25] = column26;
                    dr[26] = column27;
                    dr[27] = column28;
                    dr[28] = column29;
                    dr[29] = column30;
                    dr[30] = column31;
                    dr[31] = column32;
                    dr[32] = column33;
                    dr[33] = column34;
                    dr[34] = column35;
                    dr[35] = column36;
                    dr[36] = column37;
                    dr[37] = i;
                    #endregion
                    FinalTable.Rows.Add(dr);
                }
                else
                {
                    obj.WriteFailureLogFile("No Data in File");
                }

            }


        }
        else
        {
            obj.WriteFailureLogFile("No Data in File");
        }
    }
    catch (Exception ex)
    {
        obj.WriteErrorLogFile(ex.ToString(),"Converting");
    }
    finally
    {
        sr.Close();
        sr.Dispose();

    }
    return FinalTable;
}

Key Points

  • StreamReader reads the entire file.
  • Rows are split by newline \n and columns by comma ,.
  • Each row is added to the DataTable dynamically.
  • A Count column tracks the row index.


2. Filtering Data
After loading the MCX data, you can filter rows using a DataView:
DataTable FilteredData = MCXdata;

DataView dvView = FilteredData.DefaultView;
dvView.RowFilter = "Count > " + TotalRowCount; // Example: filter by Count column

DataTable dtFiltered = dvView.ToTable();
FinalTable = dtFiltered;


Notes:
RowFilter supports expressions similar to SQL WHERE.
dvView.ToTable() returns a filtered copy of the DataTable.

3. Bulk Insert into SQL Server
Using SqlBulkCopy, large datasets can be inserted efficiently:
if (FinalTable.Rows.Count > 0)
{
    using (SqlConnection con = new SqlConnection("Data Source=173.47.478.2;Initial Catalog=AS78955;User ID=sa;Password=N@yyui#DB&12$%"))
    {
        con.Open();
        using (SqlBulkCopy bulkCopy = new SqlBulkCopy(con))
        {
            bulkCopy.BulkCopyTimeout = 1000000; // Large timeout for big files
            bulkCopy.DestinationTableName = "DTUMCX";

            // Map columns
            for (int i = 1; i <= 37; i++)
                bulkCopy.ColumnMappings.Add("column" + i, "column" + i);

            bulkCopy.ColumnMappings.Add("Count", "Count");

            bulkCopy.WriteToServer(FinalTable);
        }
        con.Close();
        obj.WriteProcessLogFile("Bulk inserted Successfully. Total Rows - " + FinalTable.Rows.Count);
    }
}
(or)

if (FinalTable.Rows.Count > 0)
        {
            SqlConnection con = new SqlConnection("Data Source=173.47.478.2;Initial Catalog=AS78955;User ID=sa;Password=N@yyui#DB&12$%");
            con.Open();
            SqlBulkCopy bulkCopy = new SqlBulkCopy(con);
            bulkCopy.BulkCopyTimeout = 1000000;
            bulkCopy.DestinationTableName = "DTUMCX";
            string TotalCount = FinalTable.Rows.Count.ToString();
            try
            {
                bulkCopy.ColumnMappings.Add("column1", "column1");
                bulkCopy.ColumnMappings.Add("column2", "column2");
                bulkCopy.ColumnMappings.Add("column3", "column3");
                bulkCopy.ColumnMappings.Add("column4", "column4");
                bulkCopy.ColumnMappings.Add("column5", "column5");
                bulkCopy.ColumnMappings.Add("column6", "column6");
                bulkCopy.ColumnMappings.Add("column7", "column7");
                bulkCopy.ColumnMappings.Add("column8", "column8");
                bulkCopy.ColumnMappings.Add("column9", "column9");
                bulkCopy.ColumnMappings.Add("column10", "column10");
                bulkCopy.ColumnMappings.Add("column1", "column11");
                bulkCopy.ColumnMappings.Add("column12", "column12");
                bulkCopy.ColumnMappings.Add("column13", "column13");
                bulkCopy.ColumnMappings.Add("column14", "column14");
                bulkCopy.ColumnMappings.Add("column15", "column15");
                bulkCopy.ColumnMappings.Add("column16", "column16");
                bulkCopy.ColumnMappings.Add("column17", "column17");
                bulkCopy.ColumnMappings.Add("column18", "column18");
                bulkCopy.ColumnMappings.Add("column19", "column19");
                bulkCopy.ColumnMappings.Add("column20", "column20");
                bulkCopy.ColumnMappings.Add("column21", "column21");
                bulkCopy.ColumnMappings.Add("column22", "column22");
                bulkCopy.ColumnMappings.Add("column23", "column23");
                bulkCopy.ColumnMappings.Add("column24", "column24");
                bulkCopy.ColumnMappings.Add("column25", "column25");
                bulkCopy.ColumnMappings.Add("column26", "column26");
                bulkCopy.ColumnMappings.Add("column27", "column27");
                bulkCopy.ColumnMappings.Add("column28", "column28");
                bulkCopy.ColumnMappings.Add("column29", "column29");
                bulkCopy.ColumnMappings.Add("column30", "column30");
                bulkCopy.ColumnMappings.Add("column31", "column31");
                bulkCopy.ColumnMappings.Add("column32", "column32");
                bulkCopy.ColumnMappings.Add("column33", "column33");
                bulkCopy.ColumnMappings.Add("column34", "column34");
                bulkCopy.ColumnMappings.Add("column35", "column35");
                bulkCopy.ColumnMappings.Add("column36", "column36");
                bulkCopy.ColumnMappings.Add("column37", "column37");
                bulkCopy.WriteToServer(FinalTable);
                con.Close();
                obj.WriteProcessLogFile("Bulk inserted SuccessFully.Total Rows - " + TotalCount);
            }
            catch (Exception ex)
            {
                obj.WriteErrorLogFile(ex.ToString(), "BulkInsert");
            }
        }

Key Points

  • SqlBulkCopy is optimized for inserting large volumes of data.
  • Column mappings ensure DataTable columns match SQL table columns.
  • BulkCopyTimeout can be increased for very large files.
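SqlBulkCopy requires the destination table to exist with matching column names. A possible shape for DTUMCX, assuming every market field is stored as plain text exactly as received (only the column names come from the code above; the types and lengths are assumptions):

-- Build the 37 text columns in a loop so they don't have to be typed by hand
DECLARE @cols NVARCHAR(MAX) = N'';
DECLARE @i INT = 1;
WHILE @i <= 37
BEGIN
    SET @cols += N'column' + CAST(@i AS NVARCHAR(2)) + N' VARCHAR(100) NULL, ';
    SET @i += 1;
END;

DECLARE @sql NVARCHAR(MAX) =
    N'CREATE TABLE dbo.DTUMCX (' + @cols + N'[Count] INT NULL);';
EXEC sys.sp_executesql @sql;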

4. Error Handling and Logging

  • try-catch-finally ensures errors are logged and resources are released.
  • obj.WriteFailureLogFile logs missing or malformed rows.
  • obj.WriteErrorLogFile logs exceptions during conversion.

5. Advantages of this Approach

  • Efficiency: Handles large MCX files efficiently.
  • Maintainability: Adding or removing columns is straightforward.
  • Filtering: Easy to filter rows dynamically before insert.
  • Logging: Helps track processing errors and missing data.
  • Automation: Can be scheduled to process daily market data files automatically.

6. Example Workflow

  • Load the MCX CSV file using convertdatatable.
  • Filter rows based on a condition (e.g., Count > TotalRowCount).
  • Bulk insert the filtered data into DTUMCX SQL Server table.
  • Log success or failure messages for auditing.

Conclusion
This approach is ideal for financial applications dealing with large MCX or stock market datasets. By combining DataTable conversion, DataView filtering, and SqlBulkCopy, you can achieve efficient, reliable, and maintainable data processing pipelines in C#. 




SQL Server Hosting - HostForLIFE :: The Intelligent, Safe, Automated Method for Managing Transaction Logs in Master SQL Server

clock October 17, 2025 08:47 by author Peter

STOP! Your SQL Server Log File is Out of Control - The RIGHT Way to Manage It
Is your SQL log file consuming all your disk space? 😱 Learn the safe, automated way to manage transaction log growth without breaking your backup chain. Free script included! #SQLServer #LogManagement #DBA

Introduction: The DBA's Nightmare - The 500GB Log File That Ate Your Server
It's 3 AM. Your monitoring system alerts: "Disk C: at 99% capacity." You investigate and find a single  .ldf  file has ballooned to hundreds of gigabytes. Panic sets in. Do you shrink it? Will it break something? How did this happen?

If you've faced this scenario, you're not alone.  Transaction log management is one of the most misunderstood aspects of SQL Server administration. In this guide, we'll give you not just an automated solution, but more importantly, the  knowledge to use it safely and effectively.

Critical Disclaimer: Read This Before Proceeding
Shrinking log files is generally NOT a best practice.  It should be treated as an emergency procedure, not regular maintenance. Frequent shrinking leads to:

  • Virtual Log File (VLF) fragmentation  - causing performance degradation
  • Immediate regrowth  - the very problem you're trying to solve
  • Potential data loss  if not handled correctly

The proper long-term solution is:

  • Proper backup strategy  (Transaction log backups for FULL recovery)
  • Right-sizing  your log file from the beginning
  • Monitoring growth patterns

Use this script for  emergency situations only  or in  development environments.

The Emergency Log Management Script 


-- =====================================================
-- Procedure: spLogClear - EMERGENCY Transaction Log Management
-- Description: Use ONLY for emergency log file reduction.
--              Not recommended for regular maintenance.
-- Author: FreeLearning365.com
-- Usage: EXEC [dbo].[spLogClear]
-- =====================================================

ALTER PROCEDURE [dbo].[spLogClear]
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE
        @DBName SYSNAME,
        @LogFileName SYSNAME,
        @StartTime DATETIME = GETDATE(),
        @Msg NVARCHAR(MAX),
        @ErrorMsg NVARCHAR(MAX),
        @SQL NVARCHAR(MAX),
        @OriginalRecoveryModel NVARCHAR(60),
        @CurrentRecoveryModel NVARCHAR(60);

    BEGIN TRY
        -- SECTION 1: INITIALIZATION & VALIDATION
        -- ===========================================

        -- Dynamically detect current database context
        SET @DBName = DB_NAME();

        -- Safety Check: Prevent execution on system databases
        IF @DBName IN ('master', 'model', 'msdb', 'tempdb')
        BEGIN
            SET @Msg = 'CRITICAL: This procedure cannot be run on system databases. Attempted on: ' + @DBName;
            RAISERROR(@Msg, 16, 1);
            RETURN;
        END

        -- Detect the logical name of the transaction log file
        SELECT TOP 1 @LogFileName = name
        FROM sys.database_files
        WHERE type_desc = 'LOG';

        -- Safety Check: Ensure log file was found
        IF @LogFileName IS NULL
        BEGIN
            RAISERROR('Could not identify transaction log file for database: %s', 16, 1, @DBName);
            RETURN;
        END

        -- SECTION 2: AUDITING & RECOVERY MODEL MANAGEMENT
        -- ==================================================

        -- Capture original recovery model for restoration
        SELECT @OriginalRecoveryModel = recovery_model_desc
        FROM sys.databases
        WHERE name = @DBName;

        -- Log process initiation
        INSERT INTO log.LogShrinkAudit
        (DBName, LogFileName, StartTime, Status, Message)
        VALUES
        (@DBName, @LogFileName, @StartTime, 'Started',
         'Emergency log shrink initiated. Original Recovery: ' + @OriginalRecoveryModel);

        PRINT 'Starting emergency log management for database: ' + @DBName;
        PRINT 'Log file name: ' + @LogFileName;
        PRINT 'Original recovery model: ' + @OriginalRecoveryModel;

        -- SECTION 3: THE CORE LOG MANAGEMENT PROCESS
        -- =============================================

        -- STEP 3.1: TEMPORARY RECOVERY MODEL SWITCH
        -- WARNING: This breaks the log backup chain in FULL recovery!
        SET @SQL = N'ALTER DATABASE [' + @DBName + N'] SET RECOVERY SIMPLE;';
        EXEC(@SQL);
        PRINT 'Temporarily switched to SIMPLE recovery model.';

        -- STEP 3.2: CHECKPOINT - Flushes dirty pages to data file
        CHECKPOINT;
        PRINT 'Checkpoint completed.';

        -- STEP 3.3: SHRINK LOG FILE (THE MAIN EVENT)
        -- Parameter 0 = shrink to smallest possible size
        SET @SQL = N'DBCC SHRINKFILE (N''' + @LogFileName + N''', 0);';
        EXEC(@SQL);
        PRINT 'Log file shrink completed.';

        -- STEP 3.4: RIGHT-SIZE THE LOG FILE (CRITICAL STEP!)
        -- Prevents immediate autogrowth. Adjust 1000MB based on your needs.
        SET @SQL = N'ALTER DATABASE [' + @DBName + N']
                    MODIFY FILE (NAME = N''' + @LogFileName + N''',
                                SIZE = 1000MB,
                                MAXSIZE = UNLIMITED,
                                FILEGROWTH = 500MB);';
        EXEC(@SQL);
        PRINT 'Log file resized to prevent immediate regrowth.';

        -- STEP 3.5: RESTORE ORIGINAL RECOVERY MODEL
        IF @OriginalRecoveryModel = 'FULL'
        BEGIN
            SET @SQL = N'ALTER DATABASE [' + @DBName + N'] SET RECOVERY FULL;';
            EXEC(@SQL);
            PRINT 'Recovery model restored to FULL.';

            -- CRITICAL: Take a FULL backup after restoring FULL recovery
            -- This establishes a new backup chain
            SET @Msg = 'IMPORTANT: Take a FULL database backup immediately to re-establish backup chain.';
            PRINT @Msg;
        END

        -- SECTION 4: SUCCESS REPORTING
        -- ===============================

        SET @Msg = N'Emergency log management completed successfully for database [' + @DBName +
                   N']. Process completed at ' + CONVERT(NVARCHAR(30), GETDATE(), 120);

        INSERT INTO log.LogShrinkAudit
        (DBName, LogFileName, StartTime, EndTime, Status, Message)
        VALUES
        (@DBName, @LogFileName, @StartTime, GETDATE(), 'Success', @Msg);

        PRINT @Msg;
        PRINT '=== PROCESS COMPLETED SUCCESSFULLY ===';

    END TRY

    BEGIN CATCH
        -- SECTION 5: COMPREHENSIVE ERROR HANDLING
        -- ==========================================

        SET @ErrorMsg = 'ERROR [' + CAST(ERROR_NUMBER() AS NVARCHAR(10)) + ']: ' +
                        ERROR_MESSAGE() + ' (Line: ' + CAST(ERROR_LINE() AS NVARCHAR(10)) + ')';

        PRINT 'PROCESS FAILED: ' + @ErrorMsg;

        -- Attempt to restore original recovery model on failure
        BEGIN TRY
            IF @OriginalRecoveryModel IS NOT NULL AND @OriginalRecoveryModel != 'SIMPLE'
            BEGIN
                SET @SQL = N'ALTER DATABASE [' + @DBName + N'] SET RECOVERY ' + @OriginalRecoveryModel + N';';
                EXEC(@SQL);
                PRINT 'Original recovery model restored after failure.';
            END
        END TRY
        BEGIN CATCH
            PRINT 'WARNING: Could not restore original recovery model after failure.';
        END CATCH

        -- Log the failure
        INSERT INTO log.LogShrinkAudit
        (DBName, LogFileName, StartTime, EndTime, Status, Message)
        VALUES
        (@DBName, ISNULL(@LogFileName, 'Unknown'), @StartTime, GETDATE(), 'Failed', @ErrorMsg);

        -- Re-throw error for external handling (e.g., SQL Agent)
        THROW;

    END CATCH
END
GO


Step-by-Step Implementation Guide
Step 1. Prerequisites & Safety Checks
Create the Audit Table:

CREATE SCHEMA [log];
GO

CREATE TABLE [log].[LogShrinkAudit](
    [AuditID] [int] IDENTITY(1,1) NOT NULL,
    [DBName] [sysname] NOT NULL,
    [LogFileName] [sysname] NOT NULL,
    [StartTime] [datetime] NULL,
    [EndTime] [datetime] NULL,
    [Status] [nvarchar](50) NULL,
    [Message] [nvarchar](max) NULL)

Permissions Required:

  • ALTER DATABASE  permission on the target database
  • INSERT  permission on the audit table
  • Membership in  db_owner  role is typically required


Step 2. Initial Testing (SAFE MODE)
NEVER run this in production without testing first!

-- TEST 1: Check what would happen (read-only check)
-- Examine current log size and VLFs
SELECT
    name AS [LogFileName],
    size * 8.0 / 1024 AS [SizeMB],
    CASE WHEN size = max_size THEN 'FULL' ELSE 'GROWTH AVAILABLE' END AS [Status]
FROM sys.database_files
WHERE type_desc = 'LOG';

-- Check VLF count (high count = fragmentation)
DBCC LOGINFO;

Step 3. Emergency Execution

Only proceed if you have a genuine emergency and understand the risks:
-- Execute the emergency procedure
EXEC [dbo].[spLogClear];

-- Monitor the audit log
SELECT * FROM [log].[LogShrinkAudit] ORDER BY StartTime DESC;

-- CRITICAL POST-PROCESS STEP: Take a FULL backup
BACKUP DATABASE [YourDatabase]
TO DISK = 'D:\Backups\PostShrink_FullBackup.bak'
WITH COMPRESSION, INIT;

Pros, Cons, and Best Practices 

Each aspect below lists the emergency use case, the risk of regular use, and the best-practice alternative.

  • Disk Space Recovery: Emergency use case: immediate space recovery from a runaway log. Regular use risk: VLF fragmentation causes poor performance. Alternative: proper log backups in the FULL recovery model.
  • Automation: Emergency use case: quick resolution during emergencies. Regular use risk: masks underlying problems. Alternative: monitor log growth and address root causes.
  • Recovery Model: Emergency use case: allows space reclamation in the FULL model. Regular use risk: breaks the log backup chain, with potential data loss. Alternative: size the log appropriately from the start.
  • Right-Sizing: Emergency use case: prevents immediate regrowth after a shrink. Regular use risk: manual sizing may not match the workload. Alternative: set appropriate autogrowth (a fixed size, not a percentage).

The RIGHT Way: Long-Term Log Management Strategy

Instead of regular shrinking, implement this

For FULL Recovery Model Databases

-- Schedule frequent transaction log backups
BACKUP LOG [YourDatabase]
TO DISK = 'D:\LogBackups\YourDatabase_Log.trn'
WITH COMPRESSION;

    Right-Size from Beginning

-- Set appropriate initial size and growth
ALTER DATABASE [YourDatabase]
MODIFY FILE (NAME = YourDatabase_Log,
         SIZE = 4096MB,
         FILEGROWTH = 1024MB); -- Fixed growth, not percentage

    Monitoring & Alerting

-- Regular monitoring query (log space usage for the current database)
SELECT
    DB_NAME(database_id) AS DatabaseName,
    total_log_size_in_bytes / 1048576.0 AS LogSizeMB,
    used_log_space_in_bytes / 1048576.0 AS UsedSpaceMB,
    (total_log_size_in_bytes - used_log_space_in_bytes) / 1048576.0 AS FreeSpaceMB
FROM sys.dm_db_log_space_usage;

Business Case & Limitations

Business Case:  Prevents production outages due to disk space exhaustion. Maintains system availability during unexpected log growth scenarios.

Limitations

  • Temporary solution  - doesn't address root cause
  • Performance impact  - VLF fragmentation affects write performance
  • Recovery implications  - breaks the point-in-time recovery capability until a new full backup is taken
  • Not a substitute  for a proper backup strategy

"The truth your DBA won't tell you about log files"
"From panic to solution in 5 minutes"
"The emergency fix that saved our production server"
"Why your log file keeps growing (and how to stop it forever)"
"The shrink operation that won't get you fired"


Remember: This script is your  emergency fire extinguisher  - essential to have, but you hope you never need to use it! 

Alternative Approach
Executive summary/business case

Why this matters:

  • Large or uncontrolled log files consume disk, slow backups, complicate restores, and amplify ransomware damage.
  • Frequent, ad-hoc shrinking causes fragmentation and performance problems.

The right approach: prevent log growth with a correct recovery model + frequent log backups, monitor, and only shrink when necessary with a documented, auditable process.

Business benefits

  • Predictable disk usage and lower storage costs.
  • Faster restores and improved availability.
  • Audit trail for operations (compliance & change control).

Key principles & best standard (short)

  • Prefer prevention over cure:  take frequent transaction-log backups (FULL model) or use SIMPLE model only for dev/test.
  • Avoid routine shrinking:  shrink only when required (e.g., after one-off huge operation, log growth due to long-running transaction, or DB restore/maintenance).
  • Logically plan growth settings: set sensible FILEGROWTH and INITIAL SIZE values; avoid tiny percent-based growth for large logs.
  • Automate monitoring & alerts:  watch  log_reuse_wait_desc , free space, and autogrowth events.
  • Audit & document any shrink operation.  Record who ran it, why, and the before/after sizes.

Common causes of log growth

  • Long-running or uncommitted transactions
  • Replication, CHANGE_TRACKING, CDC, DB mirroring, AlwaysOn, long-running backups
  • Missing or infrequent log backups in the FULL recovery model
  • Bulk operations (index rebuilds, large loads)

Pros & Cons of shrinking logs
Pros

Instantly reclaims disk after one-time surge.
Useful after large one-off operations or restoring from a backup with a larger log.

Cons
Causes file fragmentation and potential performance degradation.
Shrinking regrows logs if operations continue — regrowth is expensive.
Not a long-term solution to recurring growth.

Best practices (step-by-step checklist before shrinking)

  • Investigate root cause: Check  DBCC SQLPERF(LOGSPACE)  and  sys.databases   log_reuse_wait_desc .
  • Fix underlying issue: e.g., schedule frequent log backups, commit/kill long transactions, disable/reconfigure features.
  • Take a transaction log backup (FULL model)  to free up virtual log files (VLFs) if possible.
  • Delay shrinking until log is reusable  (log_reuse_wait_desc = NOTHING).
  • Document & audit : always insert an audit record before/after shrink.
  • Shrink in a maintenance window and monitor performance, autogrowth.
  • Adjust file growth strategy: e.g.,  FILEGROWTH = 512MB  for busy OLTP DBs, avoid 1% growth for large files.
  • Perform index maintenance afterwards if required (heavy shrink may impact fragmentation).
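The diagnostic commands referenced in step 1 of the checklist above can be run together before deciding anything. A minimal sketch:

-- How full is each log, and what is blocking log reuse?
DBCC SQLPERF(LOGSPACE);

SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = DB_NAME();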

Security, permissions & masking sensitive fields

  • Do not run shrink scripts as  sa  unless required. Use a least-privilege account with  ALTER DATABASE  permissions or run as an approved operator via SQL Agent.
  • Mask sensitive values in audit/log tables (if you store server or path info that could reveal structure). Example: store  ServerName  hashed or store only the first/last 8 chars.
  • Record operator identity using  SUSER_SNAME()  or  ORIGINAL_LOGIN()  — but don’t store personal data that is not necessary.

Table design: small improvements
Add a default StartTime and an index for common queries:
ALTER TABLE log.LogShrinkAudit
ADD CONSTRAINT DF_LogShrinkAudit_StartTime DEFAULT (GETDATE()) FOR StartTime;

CREATE NONCLUSTERED INDEX IX_LogShrinkAudit_DBName_StartTime
ON log.LogShrinkAudit(DBName, StartTime DESC);


Masking note: if you log paths or server names and want to mask, store a hash:  HASHBYTES('SHA2_256', @FullServerPath)  (store varbinary) and keep mapping in a secure admin-only table.

Production-ready stored procedure (improved, audited, safe)

Key improvements over the original:

  • Detects log_reuse_wait_desc and refuses to shrink unless the reason is NOTHING or an explicit override is given.
  • Optional @Force for a documented emergency shrink with a required @Reason parameter.
  • Records before/after sizes, VLF count, and the user who ran it.
  • Graceful error handling and a throttled shrink (shrinks to a target percent/size).
  • Avoids switching the recovery model FULL→SIMPLE→FULL automatically (dangerous) unless explicitly allowed.

Warning:  Changing recovery model from FULL→SIMPLE causes break in log backup chain. Only do if you know consequences.

-- ================================================
-- dbo.spLogClear_Managed
-- Safer, auditable transactional log cleanup procedure
-- Usage: EXEC dbo.spLogClear_Managed @TargetSizeMB=1024, @Force=0, @AllowRecoveryChange=0, @Reason='...';
-- ================================================
CREATE OR ALTER PROCEDURE dbo.spLogClear_Managed
(
@TargetSizeMB INT = 1024,            -- desired final size in MB (approx)
@MinFreePercent INT = 10,            -- shrink only if free space percent > this
@Force BIT = 0,                      -- 1 = allow shrink even if log_reuse_wait != NOTHING (use with caution)
@AllowRecoveryChange BIT = 0,        -- 1 = allow temporary switch to SIMPLE (dangerous, breaks log chain)
@Reason NVARCHAR(4000) = NULL        -- required if @Force = 1 or @AllowRecoveryChange = 1
)
AS
BEGIN
SET NOCOUNT ON;
DECLARE @DBName SYSNAME = DB_NAME();
DECLARE @LogFileName SYSNAME;
DECLARE @StartTime DATETIME = GETDATE();
DECLARE @BeforeSizeMB INT;
DECLARE @BeforeUsedPercent DECIMAL(5,2);
DECLARE @AfterSizeMB INT;
DECLARE @SQL NVARCHAR(MAX);
DECLARE @WaitDesc SYSNAME;
DECLARE @Operator SYSNAME = SUSER_SNAME();
DECLARE @AuditMsg NVARCHAR(MAX);

BEGIN TRY
    -- 1. Log initial state: find log file
    SELECT TOP(1)
        @LogFileName = name
    FROM sys.database_files
    WHERE type_desc = 'LOG';

    -- 2. Get current log size & used percent
    SELECT
        @BeforeSizeMB = CAST(size/128.0 AS INT),
        @BeforeUsedPercent = CAST( CAST(FILEPROPERTY(name,'SpaceUsed') AS INT) * 100.0 / NULLIF(size,0) AS DECIMAL(5,2) )
    FROM sys.database_files
    WHERE type_desc = 'LOG' AND name = @LogFileName;

    -- 3. Determine log reuse wait reason
    SELECT @WaitDesc = log_reuse_wait_desc
    FROM sys.databases
    WHERE name = @DBName;

    -- 4. Audit start
    INSERT INTO log.LogShrinkAudit (DBName, LogFileName, StartTime, Status, Message)
    VALUES (@DBName, @LogFileName, @StartTime, 'Started',
            'Initiated by ' + ISNULL(@Operator,'Unknown') + '; log_reuse_wait_desc=' + ISNULL(@WaitDesc,'Unknown'));

    -- 5. Safety checks
    IF @Force = 0 AND @WaitDesc <> 'NOTHING'
    BEGIN
        SET @AuditMsg = 'Abort: log_reuse_wait_desc = ' + @WaitDesc + '. Use proper log backup or resolve wait reason before shrinking.';
        INSERT INTO log.LogShrinkAudit (DBName, LogFileName, StartTime, EndTime, Status, Message)
        VALUES(@DBName, @LogFileName, @StartTime, GETDATE(), 'Aborted', @AuditMsg);
        RETURN;
    END

    IF @Force = 1 AND (@Reason IS NULL OR LEN(@Reason) < 5)
    BEGIN
        THROW 51010, 'Force requested but @Reason is required (short justification).', 1;
    END

    -- 6. Optionally switch recovery model (DANGEROUS)
    IF @AllowRecoveryChange = 1
    BEGIN
        IF @Reason IS NULL OR LEN(@Reason) < 5
            THROW 51011, 'AllowRecoveryChange requires a justified @Reason.', 1;

        SET @SQL = N'ALTER DATABASE [' + @DBName + '] SET RECOVERY SIMPLE WITH NO_WAIT;';
        EXEC(@SQL);

        INSERT INTO log.LogShrinkAudit (DBName, LogFileName, StartTime, Status, Message)
        VALUES(@DBName, @LogFileName, @StartTime, 'Info', 'Recovery switched to SIMPLE temporarily. Reason: ' + @Reason);
    END

    -- 7. Do a checkpoint if in SIMPLE or after log backup
    CHECKPOINT;

    -- 8. Recompute size and used percent (safe check)
    SELECT
        @BeforeSizeMB = CAST(size/128.0 AS INT),
        @BeforeUsedPercent = CAST( CAST(FILEPROPERTY(name,'SpaceUsed') AS INT) * 100.0 / NULLIF(size,0) AS DECIMAL(5,2) )
    FROM sys.database_files
    WHERE type_desc = 'LOG' AND name = @LogFileName;

    -- 9. Check min free percent condition
    IF @BeforeUsedPercent <= (100 - @MinFreePercent)
    BEGIN
        -- compute target shrink size in pages (approx) using DBCC SHRINKFILE target in MB
        SET @SQL = N'DBCC SHRINKFILE ([' + @LogFileName + '], ' + CAST(@TargetSizeMB AS NVARCHAR(20)) + ');';
        EXEC(@SQL);

        -- Capture after size
        SELECT @AfterSizeMB = CAST(size/128.0 AS INT)
        FROM sys.database_files
        WHERE type_desc = 'LOG' AND name = @LogFileName;

        SET @AuditMsg = 'Shrink attempted. BeforeSizeMB=' + CAST(@BeforeSizeMB AS NVARCHAR(20)) +
                        '; AfterSizeMB=' + CAST(ISNULL(@AfterSizeMB, -1) AS NVARCHAR(20));
        INSERT INTO log.LogShrinkAudit (DBName, LogFileName, StartTime, EndTime, Status, Message)
        VALUES(@DBName, @LogFileName, @StartTime, GETDATE(), 'Success', @AuditMsg);
    END
    ELSE
    BEGIN
        SET @AuditMsg = 'Abort: insufficient free space in log to shrink safely. UsedPercent=' + CAST(@BeforeUsedPercent AS NVARCHAR(10));
        INSERT INTO log.LogShrinkAudit (DBName, LogFileName, StartTime, EndTime, Status, Message)
        VALUES(@DBName, @LogFileName, @StartTime, GETDATE(), 'Skipped', @AuditMsg);
    END

    -- 10. Restore recovery model if changed (only if @AllowRecoveryChange = 1)
    IF @AllowRecoveryChange = 1
    BEGIN
        SET @SQL = N'ALTER DATABASE [' + @DBName + '] SET RECOVERY FULL WITH NO_WAIT;';
        EXEC(@SQL);

        INSERT INTO log.LogShrinkAudit (DBName, LogFileName, StartTime, Status, Message)
        VALUES(@DBName, @LogFileName, GETDATE(), 'Info', 'Recovery switched back to FULL. Ensure you take a full + log backup to re-establish chain.');
    END
END TRY
BEGIN CATCH
    DECLARE @ErrMsg NVARCHAR(MAX) = ERROR_MESSAGE();
    INSERT INTO log.LogShrinkAudit (DBName, LogFileName, StartTime, EndTime, Status, Message)
    VALUES(@DBName, ISNULL(@LogFileName,'Unknown'), @StartTime, GETDATE(), 'Failed', @ErrMsg);
    THROW; -- re-throw for agent visibility
END CATCH
END
GO

Usage examples
Normal safe run (abort if log is in use):
EXEC dbo.spLogClear_Managed @TargetSizeMB=1024, @MinFreePercent=20;

Emergency (must supply reason):
EXEC dbo.spLogClear_Managed @TargetSizeMB=512, @Force=1, @Reason='Emergency disk pressure after large archive load';

Temporary recovery change (dangerous; document and follow-up!):
EXEC dbo.spLogClear_Managed @TargetSizeMB=512, @AllowRecoveryChange=1, @Reason='One-off maintenance, will take full backup after';

Alternatives & automation options
  • Preferred: fix the root cause (frequent log backups) — schedule BACKUP LOG jobs to keep VLF usage under control.
  • Ola Hallengren  — maintenance solution includes integrity checks, backup, and cleanup.
  • PowerShell approach:  use  Invoke-Sqlcmd  to check state and issue shrink if needed; easier to integrate with vaults and monitoring.
  • Third-party:  enterprise tools that manage log shipping, archiving, and unintrusive cleanup.

Monitoring & telemetry (what to alert on)

  • log_reuse_wait_desc: alert when a database sits in any state other than NOTHING (e.g., LOG_BACKUP, ACTIVE_TRANSACTION) for an extended period (see the checks after this list).
  • Rapid autogrowth events: alert on frequent log autogrowth.
  • Free disk space thresholds.
  • Number of VLFs (very large counts cause performance issues): use DBCC LOGINFO or DMV-based scripts.
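Two quick checks you can wire into monitoring (a minimal sketch; sys.dm_db_log_info requires SQL Server 2016 SP2 or later):
-- Databases whose log space cannot currently be reused, and why
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE log_reuse_wait_desc <> 'NOTHING';

-- VLF count per database
SELECT d.name, COUNT(*) AS vlf_count
FROM sys.databases AS d
CROSS APPLY sys.dm_db_log_info(d.database_id) AS li
GROUP BY d.name
ORDER BY vlf_count DESC;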

Risk mitigation & rollback plan

  • Before: always take a current full + log backup.
  • If the shrink fails or the database becomes suspect: have an emergency restore plan, and always test restores in staging.
  • After: if you changed the recovery model, take a full backup, then restart the log backup schedule to re-establish the chain.

Masking sensitive fields example
When inserting path/server information into logs, store only hashed values or partial values. For example, hash the file name and store it as a hex string (assuming LogFileName is a character column):
INSERT INTO log.LogShrinkAudit (DBName, LogFileName, StartTime, Status, Message)
VALUES(@DBName, CONVERT(VARCHAR(64), HASHBYTES('SHA2_256', @LogFileName), 2), GETDATE(), 'Started', 'Initiated by ' + SUSER_SNAME());


Store the mapping between hashed values and real names in a separate, secured table accessible only to auditors.

  • “Ransomware-resistant log strategy”
  • “Zero-trust backup & log hygiene”
  • “Immutable audit trail”
  • “Predictable storage, instant restoration”
  • “Proactive log management — not reactive surgery”

HostForLIFE.eu SQL Server 2022 Hosting
HostForLIFE.eu is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.




SQL Server Hosting - HostForLIFE :: The Step-by-Step Guide to SQL Server Database Recovery in Emergency Mode?

clock October 16, 2025 07:45 by author Peter

When SQL Server suspects that a database is damaged, it marks the database SUSPECT; an administrator can then switch it into EMERGENCY mode, where limited access is available for troubleshooting and repairs. This post will explain how to properly restore a database from EMERGENCY mode using Microsoft-approved recovery procedures and detailed SQL instructions.

What Is Emergency Mode?
Emergency mode is a special state in SQL Server that allows administrators to access a database marked as corrupt or inaccessible. The database is read-only, access is restricted to members of the sysadmin role, and the transaction log can be bypassed for emergency repair or data extraction.

Typical reasons your database enters this state include:

  • Missing or corrupt transaction logs
  • Hardware failure or disk corruption
  • Unexpected shutdowns or SQL Server crashes
  • File system or drive letter changes

Step-by-Step Guide to Recover a Database from Emergency Mode

Step 1. Check Database State
Use this command to verify the current state of your database:
SELECT name, state_desc
FROM sys.databases
WHERE name = 'YourDatabaseName';

If the state shows as EMERGENCY or SUSPECT, proceed to recovery.

Step 2. Set Database to Emergency Mode (Manually)
If the database is marked SUSPECT (or you need restricted access to inspect it), put it into EMERGENCY mode manually:
ALTER DATABASE YourDatabaseName SET EMERGENCY;
GO


This allows members of the sysadmin role to access the database so they can inspect and fix it.

Step 3. Run Consistency Check (DBCC CHECKDB)
Now, check the physical and logical integrity of your database:
DBCC CHECKDB (YourDatabaseName);
GO


If you see errors such as Msg 823, 824, or 825, corruption exists.

Step 4. Set Database to Single User Mode
Before performing repairs, you must ensure no other connections interfere:
ALTER DATABASE YourDatabaseName SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
GO

Step 5. Attempt to Repair the Database
There are multiple repair levels; the most commonly used for severe corruption is:
DBCC CHECKDB (YourDatabaseName, REPAIR_ALLOW_DATA_LOSS);
GO


Warning: As the name suggests, this may cause some data loss.
Always attempt to restore from backup first if available.

If you have a valid recent backup, do not run this — instead, restore using RESTORE DATABASE.

Step 6. Set Database Back to Multi User Mode
Once the repair completes successfully, return the database to normal operation:
ALTER DATABASE YourDatabaseName SET MULTI_USER;
GO


Then, verify the database state again:
SELECT name, state_desc FROM sys.databases WHERE name = 'YourDatabaseName';
GO

It should now show as ONLINE.

Optional: Restore from Backup Instead of Repair

If you maintain regular backups, restoring is the safest route:
RESTORE DATABASE YourDatabaseName
FROM DISK = 'D:\Backups\YourDatabaseName.bak'
WITH REPLACE;
GO


This avoids potential data loss caused by emergency repairs.

Example Scenario
Let’s assume your database StudentDB is marked as SUSPECT.
Here’s a real-world recovery script:
USE master;
GO

ALTER DATABASE StudentDB SET EMERGENCY;
GO

ALTER DATABASE StudentDB SET SINGLE_USER;
GO

DBCC CHECKDB (StudentDB, REPAIR_ALLOW_DATA_LOSS);
GO

ALTER DATABASE StudentDB SET MULTI_USER;
GO


After running this, StudentDB should return to a normal operational state.

Verify and Backup Immediately

Once your database is online:
Validate key tables and data integrity.

Take a full backup immediately:
BACKUP DATABASE StudentDB TO DISK = 'D:\Backup\StudentDB_Recovered.bak';
GO

Monitor your SQL Server logs for any recurring I/O or consistency errors.

Best Practices to Prevent Emergency Mode Issues

  • Always maintain daily full backups and transaction log backups.
  • Enable SQL Server alerts for error codes 823, 824, 825 (see the sketch after this list).
  • Store backups on different physical drives or cloud storage (e.g., Azure Blob, AWS S3).
  • Use ECC RAM and RAID storage for database servers.
  • Schedule regular DBCC CHECKDB checks in SQL Agent jobs.
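A minimal sketch of creating one such alert with msdb's sp_add_alert; the alert name is an assumption, and pairing it with sp_add_notification to e-mail an operator is left out:
-- Alert whenever error 824 (logical consistency-based I/O error) is logged
EXEC msdb.dbo.sp_add_alert
    @name = N'Error 824 - logical consistency I/O error',
    @message_id = 824,
    @severity = 0,                        -- must be 0 when @message_id is supplied
    @include_event_description_in = 1;    -- include the error text in e-mail notifications
GO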

HostForLIFE.eu SQL Server 2022 Hosting
HostForLIFE.eu is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.



SQL Server Hosting - HostForLIFE :: Functional and Performance Disparities Between IN and EXISTS in SQL Queries

clock October 10, 2025 08:32 by author Peter

When writing SQL queries, you may need to filter data using the results of another query. The IN and EXISTS operators are two popular ways of doing this. They may seem similar and often return the same results, but they work differently behind the scenes. Choosing the right one can help your query run faster.

In this blog, we’ll look at:

  • What IN and EXISTS mean
  • Easy examples to understand how they work
  • How their performance compares
  • Tips on when to use each one
Getting to Know the Concept

IN Operator: The IN operator helps you check whether a value appears in a list or in the results of another query. If the value is found in that list or subquery, the condition returns true.
SELECT *
FROM Employees
WHERE DepartmentID IN (
    SELECT DepartmentID FROM Departments WHERE IsActive = 1
);

How it works?
The subquery is evaluated first, generating a list of DepartmentID values. The outer query then checks whether each row's DepartmentID is present in that list.

EXISTS Operator: The EXISTS operator is used to check whether a subquery returns any rows. If the subquery finds even one matching record, the condition returns true. It doesn't compare specific values; it simply checks whether results exist.
SELECT *
FROM Employees E
WHERE EXISTS (
    SELECT 1
    FROM Departments D
    WHERE D.DepartmentID = E.DepartmentID
      AND D.IsActive = 1
);


How it works?
For each row in Employees, the subquery checks for at least one matching DepartmentID. If it finds one, it stops searching and returns TRUE.

Key Differences

Aspect           | IN                                                     | EXISTS
Evaluation       | Compares a value to a static list or subquery results  | Checks if any row satisfies the subquery condition
When to Use      | When the subquery returns a small dataset              | When the subquery returns a large dataset
NULL Handling    | Returns no match if the subquery returns NULL          | Not affected by NULL values
Short-circuiting | Evaluates the entire list before matching              | Stops after the first match (faster in many cases)
Optimizer Hint   | Converts to a SEMI JOIN internally                     | Converts to a SEMI JOIN internally (more efficient in modern engines)

Performance Comparison
Let us compare performance with a practical scenario.

For instance, we have the two tables below:
CREATE TABLE Employees (
    EmployeeID INT PRIMARY KEY,
    DepartmentID INT
);

CREATE TABLE Departments (
    DepartmentID INT PRIMARY KEY,
    IsActive BIT
);


Scenario 1. When the Subquery Returns a Small Result
Let’s say only a few departments are active:
SELECT *
FROM Employees
WHERE DepartmentID IN (
    SELECT DepartmentID
    FROM Departments
    WHERE IsActive = 1
);

In this case, using IN works well — and may even be slightly faster. That’s because the database can easily check a small list of values using a quick lookup, similar to searching for an item in a short list.

Scenario 2. When the Subquery Returns a Large Result:
Now, imagine most departments are active:
SELECT *
FROM Employees E
WHERE EXISTS (
    SELECT 1
    FROM Departments D
    WHERE D.DepartmentID = E.DepartmentID
      AND D.IsActive = 1
);


In this case, using EXISTS is usually faster. That’s because EXISTS stops searching as soon as it finds a matching record — it doesn’t need to go through the entire list. On the other hand, IN has to compare each value against all the items in the list, which takes more time when the list is large.

How the SQL Optimizer Handles IN and EXISTS?

Modern SQL engines — like SQL Server, Oracle, PostgreSQL, and MySQL 8+ — are smart. They often turn both IN and EXISTS into similar operations behind the scenes, called semi-joins, to make queries faster.

However, there are some special cases where their performance can differ:

  • IN can slow down if the subquery returns a lot of rows that include NULL values.
  • EXISTS doesn’t have this problem and usually works more efficiently for large datasets.

Think of it like this: IN is like checking a list item by item, while EXISTS just asks, “Is there at least one match?” — and stops as soon as it finds one.

Best Practices for Using IN and EXISTS
Use EXISTS for correlated subqueries

When your subquery depends on the outer query, EXISTS is usually the better choice:
SELECT *
FROM Orders O
WHERE EXISTS (
    SELECT 1
    FROM OrderDetails D
    WHERE D.OrderID = O.OrderID
);


Use IN for small or fixed lists
If you have a short list of known values, IN is simple and easy to read:
WHERE Country IN ('US', 'CA', 'UK')

Be careful with NOT IN and NULLs
If your subquery might return NULL, using NOT IN can lead to unexpected results:
WHERE DepartmentID NOT IN (
    SELECT DepartmentID
    FROM Departments
)

If the subquery contains a NULL, this query will return no results. To avoid this problem, use NOT EXISTS instead — it handles NULLs safely.
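A NULL-safe rewrite of the query above, using NOT EXISTS against the same Employees and Departments tables:
SELECT *
FROM Employees E
WHERE NOT EXISTS (
    SELECT 1
    FROM Departments D
    WHERE D.DepartmentID = E.DepartmentID
);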

Inspect the execution plan

Use EXPLAIN (or SET SHOWPLAN_XML / the graphical execution plan in SQL Server) to see how the database executes your query. This shows whether it turns your query into a semi-join (for IN/EXISTS) or an anti-join (for NOT EXISTS).

Index your foreign keys
Make sure the columns used in subqueries — especially join keys like DepartmentID — have indexes. This helps the database find matches faster and improves performance.
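For the examples above, that can be as simple as indexing the join key (a minimal sketch; the index name is an assumption):
CREATE NONCLUSTERED INDEX IX_Employees_DepartmentID
ON Employees (DepartmentID);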

Final Thoughts
Both IN and EXISTS are powerful tools in SQL. The key is not just knowing what they do, but understanding how they work behind the scenes. By paying attention to the size of your data, indexing, and how NULLs are handled, you can choose the most efficient option for your queries.

Remember

  • Use IN for small, fixed lists of values.
  • Use EXISTS for correlated subqueries or large result sets.
  • Always check the execution plan - query optimizers can surprise you!

With these tips, you’ll be able to write SQL queries that are both correct and fast.
Happy Coding!

HostForLIFE.eu SQL Server 2022 Hosting
HostForLIFE.eu is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.



SQL Server Hosting - HostForLIFE :: Understanding SET SERVEROUTPUT ON in PL/SQL

clock September 26, 2025 07:52 by author Peter

SET SERVEROUTPUT ON in PL/SQL: Why It Matters and How to Use It
When working with Oracle PL/SQL, one of the first and most essential commands you'll encounter is:

SET SERVEROUTPUT ON;

Though it looks simple, this command plays a crucial role in how PL/SQL programs behave, especially when you want to display output from procedures, anonymous blocks, or scripts using DBMS_OUTPUT.PUT_LINE.

Why Do We Use SET SERVEROUTPUT ON?
By default, PL/SQL code runs silently inside the Oracle database engine. This means that even if your program generates output, you won’t see any result on your screen — unless you explicitly enable server output.

The command SET SERVEROUTPUT ON instructs SQL*Plus or SQL Developer to display output from the DBMS_OUTPUT buffer, allowing you to see the results of your PL/SQL program.

Syntax
SET SERVEROUTPUT ON;

You can also control the buffer size and format (optional):
SET SERVEROUTPUT ON SIZE 1000000 FORMAT WRAPPED;

When Should You Use It?
Use SET SERVEROUTPUT ON in situations like:

  • When using DBMS_OUTPUT.PUT_LINE to display output.
  • During testing and debugging of PL/SQL procedures or anonymous blocks.
  • To trace variable values or track the flow of control in your code.

Example: Using SET SERVEROUTPUT ON in a PL/SQL Block

Here’s a simple example that declares variables and uses DBMS_OUTPUT.PUT_LINE to display them:
SET SERVEROUTPUT ON;

DECLARE
    eno NUMBER(5) NOT NULL := 2;
    ename VARCHAR2(15) := 'Branson Devs';
    edept CONSTANT VARCHAR2(15) := 'Web Developer';
BEGIN
    dbms_output.put_line('Declared Values:');
    dbms_output.put_line(' Employee Number: ' || eno || ' Employee Name: ' || ename);
    dbms_output.put_line('Constant Declared:');
    dbms_output.put_line(' Employee Department: ' || edept);
END;
/


Output (Only Visible If SERVEROUTPUT Is ON):
Declared Values:
Employee Number: 2 Employee Name: Branson Devs
Constant Declared:
Employee Department: Web Developer

Important: If you omit SET SERVEROUTPUT ON, the DBMS_OUTPUT.PUT_LINE results will not be displayed, even though the block executes successfully.

Tips for Using SET SERVEROUTPUT ON

  • In SQL Developer, go to View > DBMS Output, then click the green + icon to enable output for your session.
  • In SQL*Plus, always run SET SERVEROUTPUT ON before any PL/SQL block that uses output.
  • Use SET SERVEROUTPUT OFF when you no longer need the output to be displayed.

Conclusion
The SET SERVEROUTPUT ON command is small but vital for writing and debugging PL/SQL code. It provides visibility into your PL/SQL logic by allowing output messages to be displayed on screen, making your development workflow smoother and more transparent.

HostForLIFE.eu SQL Server 2022 Hosting
HostForLIFE.eu is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.



SQL Server Hosting - HostForLIFE :: SQL: The Database Language

clock September 22, 2025 08:11 by author Peter

The common language used to manage and communicate with relational databases is called Structured Query Language, or SQL. SQL provides the capabilities to effectively store, manage, and retrieve data, whether you're developing an enterprise application, corporate dashboard, or website.

What is SQL?
SQL (pronounced “ess-que-el” or sometimes “sequel”) is a domain-specific language used to communicate with relational database management systems (RDBMS) such as:

  • MySQL
  • PostgreSQL
  • Microsoft SQL Server
  • Oracle Database
  • SQLite

SQL lets you describe what data you want, while the database engine figures out how to get it.

Core Features of SQL

  • Data Definition: Create and modify the structure of databases (tables, views, indexes).
  • Data Manipulation: Insert, update, delete, and retrieve records.
  • Data Control: Manage permissions and security (GRANT, REVOKE).
  • Transaction Control: Commit or roll back changes safely.

Basic SQL Commands
Here are some of the most commonly used SQL statements:

Create a Table
CREATE TABLE Employees (
    EmployeeID INT PRIMARY KEY,
    Name VARCHAR(50),
    Position VARCHAR(50),
    Salary DECIMAL(10,2)
);


Insert Data
INSERT INTO Employees (EmployeeID, Name, Position, Salary)
VALUES (1, 'Alice', 'Developer', 65000);


Retrieve Data

SELECT Name, Position
FROM Employees
WHERE Salary > 60000;

Update Data

UPDATE Employees
SET Salary = 70000
WHERE EmployeeID = 1;

Delete Data
DELETE FROM Employees
WHERE EmployeeID = 1;
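
Transaction Control
The core features above also mention transaction control. Here is a minimal sketch, using SQL Server-flavored syntax against the same Employees table, of wrapping a change in a transaction so it can be committed or rolled back as a unit:
BEGIN TRANSACTION;

UPDATE Employees
SET Salary = Salary * 1.10
WHERE Position = 'Developer';

-- Keep the change only if it touched a sensible number of rows
IF @@ROWCOUNT > 100
    ROLLBACK TRANSACTION;   -- something unexpected happened; undo the update
ELSE
    COMMIT TRANSACTION;     -- make the change permanent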


Why SQL is Important?

  • Universality: Nearly all relational databases use SQL or a close dialect.
  • Powerful Queries: Combine, group, and filter data with ease.
  • Data Integrity: Enforce constraints (primary keys, foreign keys) to keep data consistent.
  • Scalability: Handle anything from a small app’s data to enterprise-level systems.

Common Uses of SQL

  • Business intelligence and reporting
  • Backend for web and mobile apps
  • Data analytics and dashboards
  • Financial and inventory systems
  • Data migration between platforms

Advantages of SQL

  • Human-readable, declarative syntax
  • Optimized by database engines for performance
  • Portable across platforms with minimal changes
  • Supports complex operations with relatively simple commands

Limitations

  • Not ideal for unstructured or semi-structured data (that’s where NoSQL databases shine).
  • Large, complex queries can become hard to maintain without proper design.
  • Performance tuning may require knowledge of indexes, execution plans, and normalization.

Conclusion
The foundation of relational systems' data handling is SQL. You acquire a useful ability that forms the basis of practically every contemporary software program by becoming proficient with its commands and comprehending how databases arrange data. SQL is an essential skill for anyone dealing with data, whether they are an analyst, developer, or data scientist.

HostForLIFE.eu SQL Server 2022 Hosting
HostForLIFE.eu is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.

 



SQL Server Hosting - HostForLIFE :: Find High-Usage Queries and Stored Procedures in SQL Server

clock September 18, 2025 08:38 by author Peter

When a SQL Server feels "slow," most of the harm typically comes from a handful of statements, and Dynamic Management Views (DMVs) are the quickest way to find them. The copy-paste scripts below uncover the worst offenders by CPU, IO, duration, and "what's running right now," along with guidance on how to read the results and what to do next.

Requirements: VIEW SERVER STATE permission. Numbers like total_worker_time and total_elapsed_time are in microseconds unless noted.

What “high usage” means (pick the lens)

  • CPU: how much processor time a query uses.
  • IO: logical/physical reads and writes (memory and disk pressure).
  • Duration: how long a query takes end-to-end.
  • Currently running: live workload that may be blocking others.

You’ll use a different script depending on which lens you want.

Top queries by CPU

-- Top 20 queries by total CPU since the plan was cached
SELECT TOP 20
    DB_NAME(st.dbid)                                        AS database_name,
    qs.execution_count,
    qs.total_worker_time/1000.0                             AS total_cpu_ms,
    (qs.total_worker_time/1000.0)/NULLIF(qs.execution_count,0) AS avg_cpu_ms,
    (qs.total_elapsed_time/1000.0)/NULLIF(qs.execution_count,0) AS avg_duration_ms,
    qs.total_logical_reads + qs.total_physical_reads        AS total_reads,
    SUBSTRING(st.text, (qs.statement_start_offset/2)+1,
        ((CASE qs.statement_end_offset
            WHEN -1 THEN DATALENGTH(st.text)
            ELSE qs.statement_end_offset END
          - qs.statement_start_offset)/2) + 1)              AS query_text,
    qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
ORDER BY qs.total_worker_time DESC;

Tip: Add WHERE DB_NAME(st.dbid) = 'YourDbName' if you only care about one database.

Top queries by IO (reads/writes)

-- Top 20 by total reads; add writes if you care about heavy DML
SELECT TOP 20
    DB_NAME(st.dbid)                                        AS database_name,
    qs.execution_count,
    (qs.total_logical_reads + qs.total_physical_reads)      AS total_reads,
    (qs.total_logical_writes + qs.total_physical_writes)    AS total_writes,
    (qs.total_logical_reads + qs.total_physical_reads) / NULLIF(qs.execution_count,0) AS avg_reads,
    (qs.total_logical_writes + qs.total_physical_writes) / NULLIF(qs.execution_count,0) AS avg_writes,
    (qs.total_elapsed_time/1000.0)/NULLIF(qs.execution_count,0) AS avg_duration_ms,
    SUBSTRING(st.text, (qs.statement_start_offset/2)+1,
        ((CASE qs.statement_end_offset
            WHEN -1 THEN DATALENGTH(st.text)
            ELSE qs.statement_end_offset END
          - qs.statement_start_offset)/2) + 1)              AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY total_reads DESC;


Top queries by average duration
-- Queries that are slow per execution (not just popular)
SELECT TOP 20
    DB_NAME(st.dbid)                                        AS database_name,
    qs.execution_count,
    (qs.total_elapsed_time/1000.0)                          AS total_duration_ms,
    (qs.total_elapsed_time/1000.0)/NULLIF(qs.execution_count,0) AS avg_duration_ms,
    (qs.total_worker_time/1000.0)/NULLIF(qs.execution_count,0)  AS avg_cpu_ms,
    (qs.total_logical_reads + qs.total_physical_reads) / NULLIF(qs.execution_count,0) AS avg_reads,
    SUBSTRING(st.text, (qs.statement_start_offset/2)+1,
        ((CASE qs.statement_end_offset
            WHEN -1 THEN DATALENGTH(st.text)
            ELSE qs.statement_end_offset END
          - qs.statement_start_offset)/2) + 1)              AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE qs.execution_count > 0
ORDER BY avg_duration_ms DESC;


Stored procedures that hit the server hardest
Use sys.dm_exec_procedure_stats to get procedure-level rollups (cleaner than trying to stitch statements together).

-- Procedures by total CPU
SELECT TOP 20
    DB_NAME(ps.database_id)                                 AS database_name,
    OBJECT_SCHEMA_NAME(ps.object_id, ps.database_id)        AS schema_name,
    OBJECT_NAME(ps.object_id, ps.database_id)               AS procedure_name,
    ps.execution_count,
    ps.total_worker_time/1000.0                             AS total_cpu_ms,
    (ps.total_worker_time/1000.0)/NULLIF(ps.execution_count,0) AS avg_cpu_ms,
    ps.last_execution_time
FROM sys.dm_exec_procedure_stats AS ps
WHERE ps.database_id > 4  -- skip system DBs; remove if you want them
ORDER BY ps.total_worker_time DESC;

-- Procedures by total reads (IO)
SELECT TOP 20
    DB_NAME(ps.database_id)                                 AS database_name,
    OBJECT_SCHEMA_NAME(ps.object_id, ps.database_id)        AS schema_name,
    OBJECT_NAME(ps.object_id, ps.database_id)               AS procedure_name,
    ps.execution_count,
    (ps.total_logical_reads + ps.total_physical_reads)      AS total_reads,
    ((ps.total_logical_reads + ps.total_physical_reads)/NULLIF(ps.execution_count,0)) AS avg_reads,
    ps.last_execution_time
FROM sys.dm_exec_procedure_stats AS ps
WHERE ps.database_id > 4
ORDER BY total_reads DESC;

-- Procedures by average duration
SELECT TOP 20
    DB_NAME(ps.database_id)                                 AS database_name,
    OBJECT_SCHEMA_NAME(ps.object_id, ps.database_id)        AS schema_name,
    OBJECT_NAME(ps.object_id, ps.database_id)               AS procedure_name,
    ps.execution_count,
    (ps.total_elapsed_time/1000.0)/NULLIF(ps.execution_count,0) AS avg_duration_ms,
    ps.last_execution_time
FROM sys.dm_exec_procedure_stats AS ps
WHERE ps.database_id > 4 AND ps.execution_count > 0
ORDER BY avg_duration_ms DESC;


What’s heavy right now (live view)?
-- Currently executing requests ordered by CPU time
SELECT
    r.session_id,
    r.status,
    DB_NAME(r.database_id)            AS database_name,
    r.cpu_time                        AS cpu_ms,         -- already in ms
    r.total_elapsed_time              AS elapsed_ms,     -- already in ms
    r.wait_type,
    r.wait_time,
    r.blocking_session_id,
    SUBSTRING(t.text, r.statement_start_offset/2 + 1,
        (CASE WHEN r.statement_end_offset = -1
              THEN DATALENGTH(t.text)
              ELSE r.statement_end_offset END - r.statement_start_offset)/2 + 1) AS running_statement
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id <> @@SPID
ORDER BY r.cpu_time DESC;


If you see blocking_session_id populated, chase that session and fix the blocker first.
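A minimal sketch for chasing a blocker (the session id 87 is a placeholder): sys.dm_exec_sessions also covers idle sessions that still hold locks, and most_recent_sql_handle comes from sys.dm_exec_connections.
-- What is the blocking session running (or what did it run last)?
DECLARE @blocker INT = 87;  -- replace with the blocking_session_id you observed

SELECT
    s.session_id,
    s.status,
    s.login_name,
    s.host_name,
    s.program_name,
    t.text AS last_or_current_sql
FROM sys.dm_exec_sessions AS s
JOIN sys.dm_exec_connections AS c
    ON c.session_id = s.session_id
CROSS APPLY sys.dm_exec_sql_text(c.most_recent_sql_handle) AS t
WHERE s.session_id = @blocker;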

Group similar statements together (query_hash)

Same query text with different literals can appear as separate rows. Aggregate by query_hash to see the true top offenders.
-- Roll up by query_hash to combine similar statements
WITH q AS (
    SELECT
        qs.query_hash,
        qs.execution_count,
        qs.total_worker_time,
        qs.total_elapsed_time,
        qs.total_logical_reads + qs.total_physical_reads AS total_reads
    FROM sys.dm_exec_query_stats qs
)
SELECT TOP 20
    query_hash,
    SUM(execution_count)                                  AS executions,
    SUM(total_worker_time)/1000.0                         AS total_cpu_ms,
    (SUM(total_worker_time)/1000.0)/NULLIF(SUM(execution_count),0) AS avg_cpu_ms,
    (SUM(total_elapsed_time)/1000.0)/NULLIF(SUM(execution_count),0) AS avg_duration_ms,
    SUM(total_reads)                                      AS total_reads
FROM q
GROUP BY query_hash
ORDER BY total_cpu_ms DESC;


Filters you’ll actually use

Add these lines to any query above as needed:
-- Only one DB
WHERE DB_NAME(st.dbid) = 'YourDbName'

-- Only statements executed in the last day (approx; uses last_execution_time)
WHERE qs.last_execution_time >= DATEADD(DAY, -1, SYSUTCDATETIME())

-- Exclude trivial one-off executions
AND qs.execution_count >= 5

Read the numbers the right way

  • High total + low average: popular query. Optimize for throughput (indexing, cached plan quality).
  • Low total + very high average: rare but slow. Optimize for latency (rewrite, avoid RBAR/scalar UDFs, better joins).
  • High duration but modest CPU/IO: usually blocking or waits. Check wait_type, blocking_session_id, and missing indexes that cause scans.
  • Metrics reset when plans get evicted or the instance restarts. Treat them as a rolling window, not forever history.


Quick wins to try after you find a culprit

  • Add the right index (covering where needed). Look at the actual plan's missing index hints, then design a lean index yourself rather than blindly accepting 12-column monsters; see the sketch after this list.
  • Kill implicit conversions (mismatched data types, e.g., NVARCHAR vs INT).
  • Replace SELECT * with exact columns (cuts reads).
  • Update statistics if they’re stale; consider WITH RECOMPILE for bad parameter sniffing cases (sparingly).
  • Avoid scalar UDFs in hot paths; inline logic or use APPLY.
  • Batch big writes; keep transactions short.
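A hypothetical example of the first bullet: a query that filters on Status, sorts by CreatedDate, and returns only a few columns can often be covered by one narrow index (table and column names here are assumptions):
CREATE NONCLUSTERED INDEX IX_Orders_Status_CreatedDate
ON dbo.Orders (Status, CreatedDate)
INCLUDE (CustomerID, TotalAmount);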


Bonus: store a snapshot for trending
If you want a daily/15-minute snapshot to trend over time:
-- One-time setup
CREATE TABLE dbo.TopQuerySnapshot
(
    captured_at           DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME(),
    database_name         SYSNAME,
    executions            BIGINT,
    total_cpu_ms          BIGINT,
    avg_cpu_ms            DECIMAL(18,2),
    avg_duration_ms       DECIMAL(18,2),
    total_reads           BIGINT,
    query_text            NVARCHAR(MAX)
);

-- Collector (schedule as an Agent Job)
INSERT INTO dbo.TopQuerySnapshot (database_name, executions, total_cpu_ms, avg_cpu_ms, avg_duration_ms, total_reads, query_text)
SELECT TOP 50
    DB_NAME(st.dbid),
    qs.execution_count,
    qs.total_worker_time/1000,
    (qs.total_worker_time/1000.0)/NULLIF(qs.execution_count,0),
    (qs.total_elapsed_time/1000.0)/NULLIF(qs.execution_count,0),
    (qs.total_logical_reads + qs.total_physical_reads),
    SUBSTRING(st.text, (qs.statement_start_offset/2)+1,
        ((CASE qs.statement_end_offset WHEN -1 THEN DATALENGTH(st.text) ELSE qs.statement_end_offset END - qs.statement_start_offset)/2) + 1)
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
ORDER BY qs.total_worker_time DESC;


Now you can chart trends and prove improvements.

Conclusion

Start with CPU, then IO, then duration. Use the live view if users are complaining right now. Once you spot a heavy hitter, check its plan, add the right index, fix data types, and re-test. Small, focused changes usually deliver big wins.

HostForLIFE.eu SQL Server 2022 Hosting
HostForLIFE.eu is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.



SQL Server Hosting - HostForLIFE :: Locate SQL Server's Most Used Queries and Stored Procedures

clock September 8, 2025 07:28 by author Peter

When a SQL Server feels "slow," most of the harm typically comes from a handful of statements, and Dynamic Management Views (DMVs) are the quickest way to find them. The copy-paste scripts below uncover the worst offenders by CPU, IO, duration, and "what's running right now," along with guidance on how to read the results and what to do next.

    Requirements: VIEW SERVER STATE permission. Numbers like total_worker_time and total_elapsed_time are in microseconds unless noted.

What “high usage” means (pick the lens)?

  • CPU: how much processor time a query uses.
  • IO: logical/physical reads and writes (memory and disk pressure).
  • Duration: how long a query takes end-to-end.
  • Currently running: live workload that may be blocking others.

You’ll use a different script depending on which lens you want.

Top queries by CPU

-- Top 20 queries by total CPU since the plan was cached
SELECT TOP 20
    DB_NAME(st.dbid)                                        AS database_name,
    qs.execution_count,
    qs.total_worker_time/1000.0                             AS total_cpu_ms,
    (qs.total_worker_time/1000.0)/NULLIF(qs.execution_count,0) AS avg_cpu_ms,
    (qs.total_elapsed_time/1000.0)/NULLIF(qs.execution_count,0) AS avg_duration_ms,
    qs.total_logical_reads + qs.total_physical_reads        AS total_reads,
    SUBSTRING(st.text, (qs.statement_start_offset/2)+1,
        ((CASE qs.statement_end_offset
            WHEN -1 THEN DATALENGTH(st.text)
            ELSE qs.statement_end_offset END
          - qs.statement_start_offset)/2) + 1)              AS query_text,
    qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
ORDER BY qs.total_worker_time DESC;

Tip: Add WHERE DB_NAME(st.dbid) = 'YourDbName' if you only care about one database.

Top queries by IO (reads/writes)

-- Top 20 by total reads; add writes if you care about heavy DML
SELECT TOP 20
    DB_NAME(st.dbid)                                        AS database_name,
    qs.execution_count,
    (qs.total_logical_reads + qs.total_physical_reads)      AS total_reads,
    (qs.total_logical_writes + qs.total_physical_writes)    AS total_writes,
    (qs.total_logical_reads + qs.total_physical_reads) / NULLIF(qs.execution_count,0) AS avg_reads,
    (qs.total_logical_writes + qs.total_physical_writes) / NULLIF(qs.execution_count,0) AS avg_writes,
    (qs.total_elapsed_time/1000.0)/NULLIF(qs.execution_count,0) AS avg_duration_ms,
    SUBSTRING(st.text, (qs.statement_start_offset/2)+1,
        ((CASE qs.statement_end_offset
            WHEN -1 THEN DATALENGTH(st.text)
            ELSE qs.statement_end_offset END
          - qs.statement_start_offset)/2) + 1)              AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY total_reads DESC;


Top queries by average duration
-- Queries that are slow per execution (not just popular)
SELECT TOP 20
    DB_NAME(st.dbid)                                        AS database_name,
    qs.execution_count,
    (qs.total_elapsed_time/1000.0)                          AS total_duration_ms,
    (qs.total_elapsed_time/1000.0)/NULLIF(qs.execution_count,0) AS avg_duration_ms,
    (qs.total_worker_time/1000.0)/NULLIF(qs.execution_count,0)  AS avg_cpu_ms,
    (qs.total_logical_reads + qs.total_physical_reads) / NULLIF(qs.execution_count,0) AS avg_reads,
    SUBSTRING(st.text, (qs.statement_start_offset/2)+1,
        ((CASE qs.statement_end_offset
            WHEN -1 THEN DATALENGTH(st.text)
            ELSE qs.statement_end_offset END
          - qs.statement_start_offset)/2) + 1)              AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE qs.execution_count > 0
ORDER BY avg_duration_ms DESC;

Stored procedures that hit the server hardest
Use sys.dm_exec_procedure_stats to get procedure-level rollups (cleaner than trying to stitch statements together).
-- Procedures by total CPU
SELECT TOP 20
    DB_NAME(ps.database_id)                                 AS database_name,
    OBJECT_SCHEMA_NAME(ps.object_id, ps.database_id)        AS schema_name,
    OBJECT_NAME(ps.object_id, ps.database_id)               AS procedure_name,
    ps.execution_count,
    ps.total_worker_time/1000.0                             AS total_cpu_ms,
    (ps.total_worker_time/1000.0)/NULLIF(ps.execution_count,0) AS avg_cpu_ms,
    ps.last_execution_time
FROM sys.dm_exec_procedure_stats AS ps
WHERE ps.database_id > 4  -- skip system DBs; remove if you want them
ORDER BY ps.total_worker_time DESC;

-- Procedures by total reads (IO)
SELECT TOP 20
    DB_NAME(ps.database_id)                                 AS database_name,
    OBJECT_SCHEMA_NAME(ps.object_id, ps.database_id)        AS schema_name,
    OBJECT_NAME(ps.object_id, ps.database_id)               AS procedure_name,
    ps.execution_count,
    (ps.total_logical_reads + ps.total_physical_reads)      AS total_reads,
    ((ps.total_logical_reads + ps.total_physical_reads)/NULLIF(ps.execution_count,0)) AS avg_reads,
    ps.last_execution_time
FROM sys.dm_exec_procedure_stats AS ps
WHERE ps.database_id > 4
ORDER BY total_reads DESC;

-- Procedures by average duration
SELECT TOP 20
    DB_NAME(ps.database_id)                                 AS database_name,
    OBJECT_SCHEMA_NAME(ps.object_id, ps.database_id)        AS schema_name,
    OBJECT_NAME(ps.object_id, ps.database_id)               AS procedure_name,
    ps.execution_count,
    (ps.total_elapsed_time/1000.0)/NULLIF(ps.execution_count,0) AS avg_duration_ms,
    ps.last_execution_time
FROM sys.dm_exec_procedure_stats AS ps
WHERE ps.database_id > 4 AND ps.execution_count > 0
ORDER BY avg_duration_ms DESC;

What’s heavy right now (live view)?
-- Currently executing requests ordered by CPU time
SELECT
    r.session_id,
    r.status,
    DB_NAME(r.database_id)            AS database_name,
    r.cpu_time                        AS cpu_ms,         -- already in ms
    r.total_elapsed_time              AS elapsed_ms,     -- already in ms
    r.wait_type,
    r.wait_time,
    r.blocking_session_id,
    SUBSTRING(t.text, r.statement_start_offset/2 + 1,
        (CASE WHEN r.statement_end_offset = -1
              THEN DATALENGTH(t.text)
              ELSE r.statement_end_offset END - r.statement_start_offset)/2 + 1) AS running_statement
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id <> @@SPID
ORDER BY r.cpu_time DESC;


If you see blocking_session_id populated, chase that session and fix the blocker first.

Group similar statements together (query_hash)
Same query text with different literals can appear as separate rows. Aggregate by query_hash to see the true top offenders.
-- Roll up by query_hash to combine similar statements
WITH q AS (
    SELECT
        qs.query_hash,
        qs.execution_count,
        qs.total_worker_time,
        qs.total_elapsed_time,
        qs.total_logical_reads + qs.total_physical_reads AS total_reads
    FROM sys.dm_exec_query_stats qs
)
SELECT TOP 20
    query_hash,
    SUM(execution_count)                                  AS executions,
    SUM(total_worker_time)/1000.0                         AS total_cpu_ms,
    (SUM(total_worker_time)/1000.0)/NULLIF(SUM(execution_count),0) AS avg_cpu_ms,
    (SUM(total_elapsed_time)/1000.0)/NULLIF(SUM(execution_count),0) AS avg_duration_ms,
    SUM(total_reads)                                      AS total_reads
FROM q
GROUP BY query_hash
ORDER BY total_cpu_ms DESC;

Filters you’ll actually use
Add these lines to any query above as needed:
-- Only one DB
WHERE DB_NAME(st.dbid) = 'YourDbName'

-- Only statements executed in the last day (approx; uses last_execution_time)
WHERE qs.last_execution_time >= DATEADD(DAY, -1, SYSUTCDATETIME())

-- Exclude trivial one-off executions
AND qs.execution_count >= 5

Read the numbers the right way

  • High total + low average: popular query. Optimize for throughput (indexing, cached plan quality).
  • Low total + very high average: rare but slow. Optimize for latency (rewrite, avoid RBAR/scalar UDFs, better joins).
  • High duration but modest CPU/IO: usually blocking or waits. Check wait_type, blocking_session_id, and missing indexes that cause scans.
  • Metrics reset when plans get evicted or the instance restarts. Treat them as a rolling window, not forever history.

Quick wins to try after you find a culprit

  • Add the right index (covering where needed). Look at the actual plan’s missing index hints, then design a lean index yourself (don’t blindly accept 12-column monsters).
  • Kill implicit conversions (mismatched data types, e.g., NVARCHAR vs INT).
  • Replace SELECT * with exact columns (cuts reads).
  • Update statistics if they’re stale; consider WITH RECOMPILE for bad parameter sniffing cases (sparingly).
  • Avoid scalar UDFs in hot paths; inline logic or use APPLY.
  • Batch big writes; keep transactions short.

Bonus: store a snapshot for trending
If you want a daily/15-minute snapshot to trend over time:
-- One-time setup
CREATE TABLE dbo.TopQuerySnapshot
(
    captured_at           DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME(),
    database_name         SYSNAME,
    executions            BIGINT,
    total_cpu_ms          BIGINT,
    avg_cpu_ms            DECIMAL(18,2),
    avg_duration_ms       DECIMAL(18,2),
    total_reads           BIGINT,
    query_text            NVARCHAR(MAX)
);

-- Collector (schedule as an Agent Job)
INSERT INTO dbo.TopQuerySnapshot (database_name, executions, total_cpu_ms, avg_cpu_ms, avg_duration_ms, total_reads, query_text)
SELECT TOP 50
    DB_NAME(st.dbid),
    qs.execution_count,
    qs.total_worker_time/1000,
    (qs.total_worker_time/1000.0)/NULLIF(qs.execution_count,0),
    (qs.total_elapsed_time/1000.0)/NULLIF(qs.execution_count,0),
    (qs.total_logical_reads + qs.total_physical_reads),
    SUBSTRING(st.text, (qs.statement_start_offset/2)+1,
        ((CASE qs.statement_end_offset WHEN -1 THEN DATALENGTH(st.text) ELSE qs.statement_end_offset END - qs.statement_start_offset)/2) + 1)
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
ORDER BY qs.total_worker_time DESC;

Now you can chart trends and prove improvements.

Common questions
Does this include plans not in cache?

No. DMVs reflect what’s cached. For long-term history, enable Query Store (SQL Server 2016+) and analyze sys.query_store_runtime_stats.
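
A minimal sketch of that approach, assuming a database named YourDbName (the Query Store catalog views report CPU time in microseconds):
-- Enable Query Store on the database
ALTER DATABASE YourDbName SET QUERY_STORE = ON;
ALTER DATABASE YourDbName SET QUERY_STORE (OPERATION_MODE = READ_WRITE);

-- Top queries by total CPU across the captured history
SELECT TOP 20
    q.query_id,
    qt.query_sql_text,
    SUM(rs.count_executions)                              AS executions,
    SUM(rs.avg_cpu_time * rs.count_executions) / 1000.0   AS total_cpu_ms
FROM sys.query_store_query AS q
JOIN sys.query_store_query_text AS qt
    ON qt.query_text_id = q.query_text_id
JOIN sys.query_store_plan AS p
    ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats AS rs
    ON rs.plan_id = p.plan_id
GROUP BY q.query_id, qt.query_sql_text
ORDER BY total_cpu_ms DESC;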

What about currently blocked sessions?
Use the “live view” script; chase the blocking_session_id, then inspect that session’s SQL text and plan.

Conclusion

CPU comes first, followed by IO and duration. If users are currently complaining, use the live view. Check its strategy, add the appropriate index, correct data types, and retest when you've identified a heavy hitter. Big results are typically achieved with small, targeted modifications.

HostForLIFE.eu SQL Server 2022 Hosting
HostForLIFE.eu is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.


