
SQL Server Hosting - HostForLIFE :: Find High-Usage Queries and Stored Procedures in SQL Server

September 18, 2025 08:38 by author Peter

When a SQL Server feels “slow,” a handful of statements typically cause most of the harm, and Dynamic Management Views (DMVs) are the fastest way to find them. The copy-paste scripts below surface the worst offenders by CPU, IO, duration, and “what’s running right now,” along with guidance on how to read the results and what to do next.

Requirements: VIEW SERVER STATE permission. Numbers like total_worker_time and total_elapsed_time are in microseconds unless noted.
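
If your login lacks that permission, a sysadmin can grant it. A minimal sketch (the login name is a placeholder):

-- Run as a sysadmin; replace [YourLoginName] with the actual login
GRANT VIEW SERVER STATE TO [YourLoginName];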

What “high usage” means (pick the lens)

  • CPU: how much processor time a query uses.
  • IO: logical/physical reads and writes (memory and disk pressure).
  • Duration: how long a query takes end-to-end.
  • Currently running: live workload that may be blocking others.

You’ll use a different script depending on which lens you want.

Top queries by CPU

-- Top 20 queries by total CPU since the plan was cached
SELECT TOP 20
    DB_NAME(st.dbid)                                        AS database_name,
    qs.execution_count,
    qs.total_worker_time/1000.0                             AS total_cpu_ms,
    (qs.total_worker_time/1000.0)/NULLIF(qs.execution_count,0) AS avg_cpu_ms,
    (qs.total_elapsed_time/1000.0)/NULLIF(qs.execution_count,0) AS avg_duration_ms,
    qs.total_logical_reads + qs.total_physical_reads        AS total_reads,
    SUBSTRING(st.text, (qs.statement_start_offset/2)+1,
        ((CASE qs.statement_end_offset
            WHEN -1 THEN DATALENGTH(st.text)
            ELSE qs.statement_end_offset END
          - qs.statement_start_offset)/2) + 1)              AS query_text,
    qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
ORDER BY qs.total_worker_time DESC;

Tip: Add WHERE DB_NAME(st.dbid) = 'YourDbName' if you only care about one database.

Top queries by IO (reads/writes)

-- Top 20 by total reads; add writes if you care about heavy DML
SELECT TOP 20
    DB_NAME(st.dbid)                                        AS database_name,
    qs.execution_count,
    (qs.total_logical_reads + qs.total_physical_reads)      AS total_reads,
    (qs.total_logical_writes + qs.total_physical_writes)    AS total_writes,
    (qs.total_logical_reads + qs.total_physical_reads) / NULLIF(qs.execution_count,0) AS avg_reads,
    (qs.total_logical_writes + qs.total_physical_writes) / NULLIF(qs.execution_count,0) AS avg_writes,
    (qs.total_elapsed_time/1000.0)/NULLIF(qs.execution_count,0) AS avg_duration_ms,
    SUBSTRING(st.text, (qs.statement_start_offset/2)+1,
        ((CASE qs.statement_end_offset
            WHEN -1 THEN DATALENGTH(st.text)
            ELSE qs.statement_end_offset END
          - qs.statement_start_offset)/2) + 1)              AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY total_reads DESC;


Top queries by average duration
-- Queries that are slow per execution (not just popular)
SELECT TOP 20
    DB_NAME(st.dbid)                                        AS database_name,
    qs.execution_count,
    (qs.total_elapsed_time/1000.0)                          AS total_duration_ms,
    (qs.total_elapsed_time/1000.0)/NULLIF(qs.execution_count,0) AS avg_duration_ms,
    (qs.total_worker_time/1000.0)/NULLIF(qs.execution_count,0)  AS avg_cpu_ms,
    (qs.total_logical_reads + qs.total_physical_reads) / NULLIF(qs.execution_count,0) AS avg_reads,
    SUBSTRING(st.text, (qs.statement_start_offset/2)+1,
        ((CASE qs.statement_end_offset
            WHEN -1 THEN DATALENGTH(st.text)
            ELSE qs.statement_end_offset END
          - qs.statement_start_offset)/2) + 1)              AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE qs.execution_count > 0
ORDER BY avg_duration_ms DESC;


Stored procedures that hit the server hardest
Use sys.dm_exec_procedure_stats to get procedure-level rollups (cleaner than trying to stitch statements together).

-- Procedures by total CPU
SELECT TOP 20
    DB_NAME(ps.database_id)                                 AS database_name,
    OBJECT_SCHEMA_NAME(ps.object_id, ps.database_id)        AS schema_name,
    OBJECT_NAME(ps.object_id, ps.database_id)               AS procedure_name,
    ps.execution_count,
    ps.total_worker_time/1000.0                             AS total_cpu_ms,
    (ps.total_worker_time/1000.0)/NULLIF(ps.execution_count,0) AS avg_cpu_ms,
    ps.last_execution_time
FROM sys.dm_exec_procedure_stats AS ps
WHERE ps.database_id > 4  -- skip system DBs; remove if you want them
ORDER BY ps.total_worker_time DESC;

-- Procedures by total reads (IO)
SELECT TOP 20
    DB_NAME(ps.database_id)                                 AS database_name,
    OBJECT_SCHEMA_NAME(ps.object_id, ps.database_id)        AS schema_name,
    OBJECT_NAME(ps.object_id, ps.database_id)               AS procedure_name,
    ps.execution_count,
    (ps.total_logical_reads + ps.total_physical_reads)      AS total_reads,
    ((ps.total_logical_reads + ps.total_physical_reads)/NULLIF(ps.execution_count,0)) AS avg_reads,
    ps.last_execution_time
FROM sys.dm_exec_procedure_stats AS ps
WHERE ps.database_id > 4
ORDER BY total_reads DESC;

-- Procedures by average duration
SELECT TOP 20
    DB_NAME(ps.database_id)                                 AS database_name,
    OBJECT_SCHEMA_NAME(ps.object_id, ps.database_id)        AS schema_name,
    OBJECT_NAME(ps.object_id, ps.database_id)               AS procedure_name,
    ps.execution_count,
    (ps.total_elapsed_time/1000.0)/NULLIF(ps.execution_count,0) AS avg_duration_ms,
    ps.last_execution_time
FROM sys.dm_exec_procedure_stats AS ps
WHERE ps.database_id > 4 AND ps.execution_count > 0
ORDER BY avg_duration_ms DESC;


What’s heavy right now (live view)?
-- Currently executing requests ordered by CPU time
SELECT
    r.session_id,
    r.status,
    DB_NAME(r.database_id)            AS database_name,
    r.cpu_time                        AS cpu_ms,         -- already in ms
    r.total_elapsed_time              AS elapsed_ms,     -- already in ms
    r.wait_type,
    r.wait_time,
    r.blocking_session_id,
    SUBSTRING(t.text, r.statement_start_offset/2 + 1,
        (CASE WHEN r.statement_end_offset = -1
              THEN DATALENGTH(t.text)
              ELSE r.statement_end_offset END - r.statement_start_offset)/2 + 1) AS running_statement
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id <> @@SPID
ORDER BY r.cpu_time DESC;


If you see blocking_session_id populated, chase that session and fix the blocker first.
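
To see what the blocker is doing, a quick sketch (the session id is a placeholder for the blocking_session_id you spotted above):

-- Inspect a blocking session's state and its most recent statement
DECLARE @blocker INT = 73;  -- placeholder session id

SELECT
    s.session_id,
    s.status,
    s.login_name,
    s.host_name,
    s.program_name,
    t.text AS most_recent_sql
FROM sys.dm_exec_sessions AS s
LEFT JOIN sys.dm_exec_connections AS c
    ON c.session_id = s.session_id
OUTER APPLY sys.dm_exec_sql_text(c.most_recent_sql_handle) AS t
WHERE s.session_id = @blocker;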

Group similar statements together (query_hash)

Same query text with different literals can appear as separate rows. Aggregate by query_hash to see the true top offenders.
-- Roll up by query_hash to combine similar statements
WITH q AS (
    SELECT
        qs.query_hash,
        qs.execution_count,
        qs.total_worker_time,
        qs.total_elapsed_time,
        qs.total_logical_reads + qs.total_physical_reads AS total_reads
    FROM sys.dm_exec_query_stats qs
)
SELECT TOP 20
    query_hash,
    SUM(execution_count)                                  AS executions,
    SUM(total_worker_time)/1000.0                         AS total_cpu_ms,
    (SUM(total_worker_time)/1000.0)/NULLIF(SUM(execution_count),0) AS avg_cpu_ms,
    (SUM(total_elapsed_time)/1000.0)/NULLIF(SUM(execution_count),0) AS avg_duration_ms,
    SUM(total_reads)                                      AS total_reads
FROM q
GROUP BY query_hash
ORDER BY total_cpu_ms DESC;


Filters you’ll actually use

Add these lines to any query above as needed:
-- Only one DB
WHERE DB_NAME(st.dbid) = 'YourDbName'

-- Only statements executed in the last day (approx; last_execution_time is server local time)
WHERE qs.last_execution_time >= DATEADD(DAY, -1, GETDATE())

-- Exclude trivial one-off executions
AND qs.execution_count >= 5

Read the numbers the right way

  • High total + low average: popular query. Optimize for throughput (indexing, cached plan quality).
  • Low total + very high average: rare but slow. Optimize for latency (rewrite, avoid RBAR/scalar UDFs, better joins).
  • High duration but modest CPU/IO: usually blocking or waits. Check wait_type, blocking_session_id, and missing indexes that cause scans (a quick wait-stats check follows this list).
  • Metrics reset when plans get evicted or the instance restarts. Treat them as a rolling window, not forever history.
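
For the waits angle, instance-level wait statistics give a fast first read. A minimal sketch (the excluded wait types are a small, non-exhaustive sample of benign system waits):

-- Top waits since the last restart (or since the stats were cleared)
SELECT TOP 10
    wait_type,
    wait_time_ms / 1000.0          AS wait_time_s,
    signal_wait_time_ms / 1000.0   AS signal_wait_s,
    waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP', N'SQLTRACE_BUFFER_FLUSH',
                        N'WAITFOR', N'CHECKPOINT_QUEUE', N'BROKER_TO_FLUSH')
ORDER BY wait_time_ms DESC;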


Quick wins to try after you find a culprit

  • Add the right index (covering where needed). Look at the actual plan’s missing index hints, then design a lean index yourself; don’t blindly accept 12-column monsters (see the sketch after this list).
  • Kill implicit conversions (mismatched data types, e.g., NVARCHAR vs INT).
  • Replace SELECT * with exact columns (cuts reads).
  • Update statistics if they’re stale; consider WITH RECOMPILE for bad parameter sniffing cases (sparingly).
  • Avoid scalar UDFs in hot paths; inline logic or use APPLY.
  • Batch big writes; keep transactions short.
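
A hypothetical example of a lean covering index (table and column names are illustrative):

-- Keys support a seek on CustomerId ordered by OrderDate;
-- INCLUDE covers the selected column without widening the key
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId_OrderDate
ON dbo.Orders (CustomerId, OrderDate)
INCLUDE (TotalDue);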


Bonus: store a snapshot for trending
If you want a daily/15-minute snapshot to trend over time:
-- One-time setup
CREATE TABLE dbo.TopQuerySnapshot
(
    captured_at           DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME(),
    database_name         SYSNAME,
    executions            BIGINT,
    total_cpu_ms          BIGINT,
    avg_cpu_ms            DECIMAL(18,2),
    avg_duration_ms       DECIMAL(18,2),
    total_reads           BIGINT,
    query_text            NVARCHAR(MAX)
);

-- Collector (schedule as an Agent Job)
INSERT INTO dbo.TopQuerySnapshot (database_name, executions, total_cpu_ms, avg_cpu_ms, avg_duration_ms, total_reads, query_text)
SELECT TOP 50
    DB_NAME(st.dbid),
    qs.execution_count,
    qs.total_worker_time/1000,
    (qs.total_worker_time/1000.0)/NULLIF(qs.execution_count,0),
    (qs.total_elapsed_time/1000.0)/NULLIF(qs.execution_count,0),
    (qs.total_logical_reads + qs.total_physical_reads),
    SUBSTRING(st.text, (qs.statement_start_offset/2)+1,
        ((CASE qs.statement_end_offset WHEN -1 THEN DATALENGTH(st.text) ELSE qs.statement_end_offset END - qs.statement_start_offset)/2) + 1)
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
ORDER BY qs.total_worker_time DESC;


Now you can chart trends and prove improvements.
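
For example, a simple trend query against the snapshot table (the LIKE filter is a placeholder):

-- CPU trend over time for statements touching a particular table
SELECT captured_at, database_name, executions, total_cpu_ms, avg_cpu_ms
FROM dbo.TopQuerySnapshot
WHERE query_text LIKE '%YourTableName%'
ORDER BY captured_at;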

Conclusion

Start with CPU, then IO, then duration. Use the live view if users are complaining right now. Once you spot a heavy hitter, check its plan, add the right index, fix data types, and re-test. Small, focused changes usually deliver big wins.

HostForLIFE.eu SQL Server 2022 Hosting
HostForLIFE.eu is a European Windows hosting provider that focuses on the Windows platform only. We deliver on-demand hosting solutions including Shared Hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.



SQL Server Hosting - HostForLIFE :: How Generative AI Can Help Database Engineers and DBAs Increase Productivity

August 21, 2025 08:02 by author Peter

Database engineers and database administrators (DBAs) play an essential role in contemporary software development, and they frequently have a lot on their plates, from managing performance, security, and compliance to guaranteeing data integrity. Enter generative AI. These models can write SQL, design schemas, optimize queries, create documentation, and even assist with compliance. By taking on repetitive tasks, they augment DBAs rather than replace them, freeing engineers to concentrate on high-value problem solving.

In this post, we'll examine how database engineers and DBAs can use generative AI throughout the data lifecycle, with specific examples and tools you can use right now.

1. Schema Design & Documentation
Problem: Designing normalized schemas and documenting them manually is time-consuming.
AI Solution: Generative AI can propose ERDs, normalize schemas, and generate Markdown or HTML documentation instantly.
Tools: ChatGPT, dbdiagram.io AI, Claude.

Example Prompt:
“Design a normalized Postgres schema for a healthcare awareness website with users, doctors, and articles. Include indexes for frequent queries.”

Benefit: Schema prototyping is 50–70% faster.
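
For a sense of what comes back, a fragment of the kind of schema such a prompt might produce (illustrative, not actual tool output):

-- Postgres sketch: two of the requested tables plus one index
CREATE TABLE users (
    id         BIGSERIAL PRIMARY KEY,
    email      TEXT NOT NULL UNIQUE,
    full_name  TEXT NOT NULL
);

CREATE TABLE articles (
    id           BIGSERIAL PRIMARY KEY,
    author_id    BIGINT NOT NULL REFERENCES users(id),
    title        TEXT NOT NULL,
    published_at TIMESTAMPTZ
);

-- Supports the frequent "recent articles" query
CREATE INDEX idx_articles_published_at ON articles (published_at);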

2. SQL Query Generation & Optimization

Problem: Writing and tuning complex queries can block analysts and slow delivery.
AI Solution: AI can convert plain English into SQL, write joins, and suggest performance improvements.

Tools: ChatGPT, Text2SQL tools, MindsDB.

Example Prompt:
“Write a SQL query to return the top 10 most read articles in the past 30 days by unique users.”

Benefit: Analysts and engineers save 60% time on query generation.
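
Illustrative output for that prompt (table and column names are assumptions):

-- Postgres sketch: top 10 articles by unique readers over 30 days
SELECT a.id, a.title, COUNT(DISTINCT r.user_id) AS unique_readers
FROM articles AS a
JOIN article_reads AS r ON r.article_id = a.id
WHERE r.read_at >= CURRENT_DATE - INTERVAL '30 days'
GROUP BY a.id, a.title
ORDER BY unique_readers DESC
LIMIT 10;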

3. Performance Tuning

Problem: Query optimization often involves deep expertise and trial-and-error with execution plans.
AI Solution: Paste EXPLAIN ANALYZE outputs into ChatGPT to get recommendations for indexes, partitioning, or caching.

Example Prompt:
“Here’s a Postgres EXPLAIN ANALYZE output. Recommend optimizations and appropriate indexes.”

Benefit: Speeds up triage 30–40%.
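
Producing the plan output to paste is a one-liner in Postgres; a minimal example (the query itself is a placeholder):

-- Executes the statement and reports actual timings and buffer usage
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM articles WHERE published_at >= CURRENT_DATE - INTERVAL '30 days';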

4. Data Migration & ETL Automation

Problem: Writing migration scripts between databases or cleaning data pipelines can take weeks.
AI Solution: Generative AI can generate migration code, field mappings, and transformation logic.
Tools: ChatGPT, LangChain connectors, Claude.

Example Prompt:
“Generate SQL to migrate users from MySQL schema A to Postgres schema B. Map user_id → id, full_name → first_name + last_name.”

Benefit: Migration tasks are 2–3x faster.
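
A sketch of the transformation logic for that mapping, assuming the MySQL rows have already been staged into a Postgres schema named schema_a:

-- Naive split on the first space; split_part is a Postgres function
INSERT INTO schema_b.users (id, first_name, last_name)
SELECT
    user_id,
    split_part(full_name, ' ', 1),
    split_part(full_name, ' ', 2)
FROM schema_a.users;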

5. Data Quality & Validation

Problem: Data validation scripts and anomaly detection rules are often neglected.

AI Solution: AI can generate test cases and quality checks.

Tools: ChatGPT + dbt.

Example Prompt:
“Write SQL checks to validate that zipcode is 5 digits and articles.title is never null. Output in dbt schema test format.”

Benefit: Expands coverage with little extra effort.
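
The plain-SQL versions of those checks might look like this (dbt would wrap them as schema tests; table names are assumptions):

-- Postgres regex match: zipcode must be exactly 5 digits
SELECT COUNT(*) AS bad_zipcodes
FROM users
WHERE zipcode !~ '^[0-9]{5}$';

-- Titles must never be null
SELECT COUNT(*) AS null_titles
FROM articles
WHERE title IS NULL;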

6. Security & Compliance
Problem: DBAs must enforce GDPR, HIPAA, and internal security rules — often manually.

AI Solution: AI can create row-level security policies, anonymization scripts, and audit queries.

Example Prompt:
“Generate Postgres row-level security so users can only see their own records.”

Benefit: Faster compliance, reduced risk.
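
A minimal Postgres sketch of such a policy; the table, column, and the app.current_user_id session setting are assumptions:

-- Enable RLS, then restrict reads to the caller's own rows
-- (assumes the app runs SET app.current_user_id = '<id>' per connection)
ALTER TABLE user_records ENABLE ROW LEVEL SECURITY;

CREATE POLICY user_isolation ON user_records
    FOR SELECT
    USING (user_id = current_setting('app.current_user_id')::bigint);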

7. Documentation & Knowledge Sharing
Problem: Documentation is often outdated, incomplete, or skipped.
AI Solution: AI can auto-generate data dictionaries, ER diagrams, and tutorials from schemas.

Example Prompt:
“Generate a Markdown data dictionary from this schema. Include data types, constraints, and sample values.”

Benefit: 80% of documentation time eliminated.
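
Much of the raw material for such a dictionary can be pulled straight from the catalog; a standard information_schema query (the schema name is an assumption):

-- Columns, types, and nullability for every table in a schema
SELECT table_name, column_name, data_type, is_nullable
FROM information_schema.columns
WHERE table_schema = 'public'
ORDER BY table_name, ordinal_position;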

Productivity Gains for DBAs Using Generative AI

  • Schema prototyping: 50–70% faster
  • SQL generation: 60% faster
  • Query optimization: 30–40% faster
  • Documentation: 80% automated
  • Compliance & quality checks: 2–3× coverage

Generative AI transforms DBAs from “firefighters” into strategic enablers of innovation.

Learn More: Upskill with Generative AI

Want to master how to use Generative AI in your daily engineering work?

Check out LearnAI at C# Corner — our hands-on training designed for developers, data engineers, and DBAs who want to stay ahead of the AI curve.
You’ll learn prompt engineering, AI coding, AI database workflows, and production-ready integrations.



