Concurrency problems are common in large enterprise systems. As transaction volume rises, you start to notice:
- Deadlocks
- Lock waits
- Update conflicts
- Lost updates
- Phantom reads
- Row-level contention
Strong ORM conventions alone are not enough to solve these problems. A SQL Server Concurrency Framework lets your application handle large volumes with predictable consistency and reliability.
Why Concurrency Fails in Real Systems
Common scenarios:
- Two users editing the same sales order.
- Inventory decrement executed simultaneously by multiple workers.
- Long-running read queries blocking updates.
- Parallel background jobs updating the same rows.
- ORM-generated transactions holding unnecessary locks.
When uncontrolled, this leads to:
- Lost updates
- Dirty reads
- Deadlocks
- Constraint violations
- Incorrect financial totals
- Inconsistent stock quantities
A Concurrency Framework allows the database to enforce rules systematically rather than relying on ad-hoc fixes.
Hybrid Concurrency Model
The framework uses three pillars:
Optimistic Concurrency
No lock initially. Use a version field.
If version mismatches → reject update.
Pessimistic Concurrency
Acquire XLOCK or UPDLOCK to ensure only one writer.
Deadlock Retry
Retry the block 3–5 times if SQL Server error 1205 (deadlock victim) occurs.
This gives high performance on normal operations and safety during high contention.
SQL Version-Token Design
Add a RowVersion or TimestampToken field:
ALTER TABLE SalesOrder
ADD RowVersion BIGINT NOT NULL DEFAULT 1;
On every update:
UPDATE SalesOrder
SET Quantity = @Qty,
    RowVersion = RowVersion + 1
WHERE SalesOrderId = @Id
  AND RowVersion = @OldVersion;
If no row is updated → version was outdated → concurrency conflict.
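A minimal sketch of the check that follows the versioned UPDATE (the error number 50001 is an assumption, matching the stored-procedure example later in this article):
-- Runs immediately after the versioned UPDATE above
IF @@ROWCOUNT = 0
    THROW 50001, 'Concurrency conflict: the row version is outdated.', 1;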
Pessimistic Lock Pattern
Use:
- UPDLOCK: signals the intent to update before the actual UPDATE, which reduces conversion deadlocks
- ROWLOCK: restricts the lock to the specific row
- HOLDLOCK: holds the lock until the end of the transaction (serializable behavior)
Example:
SELECT *
FROM SalesOrder WITH (UPDLOCK, ROWLOCK)
WHERE SalesOrderId = @Id;
Inside a transaction, this guarantees that only one writer holds the row at a time.
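A minimal sketch of the full select-then-update pattern inside a transaction, reusing the SalesOrder example above:
BEGIN TRAN;
-- Lock the row for update so no other writer can pass this point
SELECT Quantity
FROM SalesOrder WITH (UPDLOCK, ROWLOCK)
WHERE SalesOrderId = @Id;
-- Perform the change while the update lock is still held
UPDATE SalesOrder
SET Quantity = Quantity - @Qty
WHERE SalesOrderId = @Id;
COMMIT;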
Designing the Deadlock Retry Framework
Deadlocks cannot be eliminated entirely, but retrying the failed block resolves the vast majority of them.
Deadlock Retry Wrapper
CREATE PROCEDURE DeadlockRetryWrapper
(
    @Attempts  INT,
    @ProcName  SYSNAME,
    @JsonInput NVARCHAR(MAX)
)
AS
BEGIN
    DECLARE @Try INT = 1;
    WHILE @Try <= @Attempts
    BEGIN
        BEGIN TRY
            EXEC @ProcName @JsonInput;
            RETURN;
        END TRY
        BEGIN CATCH
            IF ERROR_NUMBER() = 1205 -- Deadlock
            BEGIN
                SET @Try += 1;
                WAITFOR DELAY '00:00:00.150'; -- Backoff
                CONTINUE;
            END
            ELSE
            BEGIN
                THROW; -- rethrow other errors
            END
        END CATCH
    END
    THROW 51000, 'Deadlock retry limit exceeded.', 1;
END
This can wrap all critical stored procedures.
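A hedged usage sketch: any procedure that accepts a single NVARCHAR(MAX) parameter fits the wrapper's signature (the procedure name dbo.ProcessSalesOrderJson and its JSON payload below are hypothetical, for illustration only):
EXEC DeadlockRetryWrapper
    @Attempts  = 5,
    @ProcName  = N'dbo.ProcessSalesOrderJson', -- hypothetical procedure taking one JSON parameter
    @JsonInput = N'{"salesOrderId": 1001, "qty": 3}';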
Business Procedure Example with Hybrid Concurrency
Below is how a real transaction uses the framework.
CREATE PROCEDURE UpdateStockQty
(
    @StockId INT,
    @Qty     INT,
    @Version BIGINT
)
AS
BEGIN
    SET NOCOUNT ON;
    SET XACT_ABORT ON; -- roll back automatically if an error aborts the batch

    BEGIN TRAN;

    -- Optimistic check: bump the version only if the caller's version is current
    UPDATE Stock
    SET RowVersion = RowVersion + 1
    WHERE StockId = @StockId
      AND RowVersion = @Version;

    IF @@ROWCOUNT = 0
    BEGIN
        ROLLBACK;
        THROW 50001, 'Version conflict', 1;
    END

    -- Pessimistic lock: hold an update lock on the row for the rest of the transaction
    SELECT Quantity
    INTO #Tmp
    FROM Stock WITH (UPDLOCK, ROWLOCK)
    WHERE StockId = @StockId;

    UPDATE Stock
    SET Quantity = Quantity - @Qty
    WHERE StockId = @StockId;

    COMMIT;
END
This ensures:
- No concurrent overwrites
- No phantom writes
- Minimal deadlock risk (the retry wrapper covers the rest)
- No lost updates
Implementing this in .NET (Recommended Pattern)
Use a retry policy such as Polly:
var policy = Policy
.Handle<SqlException>(ex => ex.Number == 1205)
.WaitAndRetry(3, retry => TimeSpan.FromMilliseconds(150));
policy.Execute(() =>
{
ExecuteSqlStoredProcedure("UpdateStockQty", parameters);
});
Granular Locking Strategy Matrix
| Operation Type | Preferred Method | Reason |
| --- | --- | --- |
| Read-only metadata | Optimistic or Snapshot | No locks |
| Small row updates | UPDLOCK + RowVersion | High performance |
| High-conflict updates | UPDLOCK + HOLDLOCK | Ensures ordering |
| Complex financial transactions | Serializable + Retry | Strong consistency |
| Long-running processes | Optimistic + version check | Avoids blocking |
Snapshot Isolation
Enable it once per DB:
ALTER DATABASE MyDB SET READ_COMMITTED_SNAPSHOT ON;
ALTER DATABASE MyDB SET ALLOW_SNAPSHOT_ISOLATION ON;
This drastically reduces shared locks on reads. Note that switching READ_COMMITTED_SNAPSHOT on requires exclusive access to the database, so plan it for a maintenance window.
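Once ALLOW_SNAPSHOT_ISOLATION is on, a session opts into snapshot reads explicitly. A minimal sketch against the Stock table used earlier (the literal StockId value is illustrative):
DECLARE @StockId INT = 42; -- sample key, illustration only
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRAN;
-- Reads a consistent row version without taking shared locks or blocking writers
SELECT StockId, Quantity
FROM Stock
WHERE StockId = @StockId;
COMMIT;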
Common Deadlock Causes & Prevention Rules
1. Access objects in the same order
If two procedures update Account and then Ledger, always update them in the same order (see the sketch after this list).
2. Avoid implicit transactions
ORMs often create unexpected transaction scopes.
3. Keep transaction scope small
No logging, loops, or external API calls inside transactions.
4. Use UPDLOCK on SELECT-before-UPDATE patterns
Ensures consistent intent to update.
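A minimal sketch of rules 1 and 4 combined, assuming hypothetical Account and Ledger tables keyed by AccountId (every procedure that touches both would follow the same Account-then-Ledger order):
BEGIN TRAN;
-- Rule 4: declare update intent up front with UPDLOCK
SELECT Balance
FROM Account WITH (UPDLOCK, ROWLOCK)
WHERE AccountId = @AccountId;
-- Rule 1: always touch Account first ...
UPDATE Account
SET Balance = Balance - @Amount
WHERE AccountId = @AccountId;
-- ... and Ledger second, in every procedure
INSERT INTO Ledger (AccountId, Amount)
VALUES (@AccountId, -@Amount);
COMMIT;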
Framework Capabilities
Your SQL Concurrency Framework should support:
- Deadlock auto-retry
- Automatic backoff logic
- Lock type configuration
- Version-token validation
- Retry policies configurable per procedure
- Logging deadlock occurrences (see the sketch after this list)
- Metrics dashboards
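As one example of the logging capability, here is a minimal sketch of a deadlock log table that the retry wrapper's CATCH block could write to before backing off. The table name DeadlockLog and its columns are assumptions, not part of the framework above:
CREATE TABLE DeadlockLog
(
    LogId      INT IDENTITY PRIMARY KEY,
    ProcName   SYSNAME   NOT NULL,
    AttemptNo  INT       NOT NULL,
    OccurredAt DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME()
);
-- Inside the wrapper's CATCH block, before the WAITFOR DELAY backoff:
INSERT INTO DeadlockLog (ProcName, AttemptNo)
VALUES (@ProcName, @Try);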
Final Summary
A SQL Server Concurrency Framework must combine:
- Optimistic concurrency for low-conflict operations
- Pessimistic locking for high-contention cases
- Deadlock retry logic for stability
- Snapshot-based reads for performance
- Version tokens for correctness
- Retry-safe stored procedure design
- Systematic locking patterns instead of ad-hoc patches
Enterprises that adopt this hybrid strategy significantly reduce:
- Deadlocks
- Blocking
- Failed updates
- Data inconsistencies
- Production defects
