European Windows 2012 Hosting BLOG

BLOG about Windows 2012 Hosting and SQL 2012 Hosting - Dedicated to European Windows Hosting Customers

SQL Server Hosting - HostForLIFE :: Database Files Cannot be Opened by SQL Server (Error 5171/5172). How to Fix MDF/NDF Files?

clock January 7, 2026 07:18 by author Peter

Fixing SQL Server Errors 5171 and 5172
SQL Server relies on the header page of the MDF file to load, attach, or open a database. When it cannot read or recognize the primary data file's header, it raises a number of errors. SQL Server errors 5171 and 5172 are usually caused by problems with the primary data file (MDF) or the transaction log file (LDF).

These errors typically occur when the format or structure of the MDF file is invalid. This article covers the potential causes of SQL Server errors 5171 and 5172 and how to fix them.

What Causes the Database File Error (5171/5172) in SQL Server?
These errors can occur when the wrong file (for example, an NDF file instead of the MDF file) is selected while attaching a database, or when the SQL Server service account has insufficient permissions on the MDF file. Corruption in the MDF or LDF file can also cause these errors.

Common causes of corruption include:

  • Lack of storage space on the hard drive where the database file is located
  • Bad sectors on the disk storing the database file
  • Abrupt system shutdown or unexpected power failure
  • Bugs or internal issues in SQL Server
  • Virus or malware infection

Methods to Resolve SQL Server Errors 5171 and 5172
Before proceeding with the methods below, make sure you have selected the correct MDF file and verify the permissions on the folder where the database files are stored. Also confirm that the SQL Server service account has Full Control permissions on that folder. Once permissions are validated, proceed with the troubleshooting methods.
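If you are unsure which Windows account the SQL Server service runs under, you can check it from a query window before adjusting folder permissions (a quick sketch; it assumes you have VIEW SERVER STATE permission):

-- Shows the SQL Server services on this instance and the accounts they run as.
SELECT servicename, service_account, status_desc
FROM sys.dm_server_services;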

Method 1. Check and Repair the MDF File
Corruption in the MDF file is one of the primary reasons behind SQL Server errors 5171 and 5172. To verify corruption, you can run the DBCC CHECKDB command, which checks the consistency of database pages, rows, index relationships, and other objects.

DBCC CHECKDB (Test11);

If corruption is detected, SQL Server displays error details along with recommended repair options. If you have a recent and valid backup of the database, restoring the .bak file is the safest approach. However, if the backup is unavailable or also corrupted, you can attempt to repair the database using DBCC CHECKDB.
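If a healthy backup is available, a straightforward restore (shown here with a hypothetical backup path and the Test11 database name from the example above) is usually the quickest fix:

-- Restore the damaged database from its most recent full backup; the path is a placeholder.
RESTORE DATABASE Test11
FROM DISK = N'D:\Backups\Test11_Full.bak'
WITH REPLACE,    -- overwrite the damaged database files
     RECOVERY;   -- bring the database online after the restore
GO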

Below is the general syntax of the DBCC CHECKDB command with repair options:

DBCC CHECKDB
[
    (
        db_name | db_id | 0
        [ , NOINDEX
        | , { REPAIR_ALLOW_DATA_LOSS | REPAIR_FAST | REPAIR_REBUILD } ]
    )
]
[
    WITH
    {
        [ ALL_ERRORMSGS ]
        [ , EXTENDED_LOGICAL_CHECKS ]
        [ , NO_INFOMSGS ]
        [ , TABLOCK ]
        [ , ESTIMATEONLY ]
        [ , { PHYSICAL_ONLY | DATA_PURITY } ]
        [ , MAXDOP = number_of_processors ]
    }
];


To repair severe corruption, you can use the REPAIR_ALLOW_DATA_LOSS option, as shown below:
DBCC CHECKDB (N'testing35', REPAIR_ALLOW_DATA_LOSS)
WITH ALL_ERRORMSGS, NO_INFOMSGS;
GO

Although REPAIR_ALLOW_DATA_LOSS can fix a badly corrupted MDF file, it may discard data and does not work in every scenario. For large databases, the command may need to be run several times, which makes the process time-consuming.
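Note that the repair options can only run while the database is in single-user mode. Below is a minimal sketch of the usual sequence, reusing the testing35 database name from the example above:

-- Put the database in single-user mode, run the repair, then return it to normal use.
ALTER DATABASE testing35 SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
GO
DBCC CHECKDB (N'testing35', REPAIR_ALLOW_DATA_LOSS)
WITH ALL_ERRORMSGS, NO_INFOMSGS;
GO
ALTER DATABASE testing35 SET MULTI_USER;
GO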

To avoid data loss and speed up the recovery process, you may consider using a professional SQL database repair tool such as Stellar Repair for MS SQL. This tool is designed to repair corrupt MDF files while preserving data integrity.

Key Features of Stellar Repair for MS SQL

  • Repairs both MDF and NDF files without file size limitations
  • Recovers all database components with high accuracy
  • Restores deleted records from the MDF file
  • Allows saving repaired data to a new or live database, as well as formats like XLS, HTML, and CSV
  • Supports SQL Server 2022, 2019, 2017, and earlier versions

Method 2. Rebuild the Transaction Log File
SQL Server errors 5171 and 5172 may also occur if the transaction log file (LDF) is corrupted or missing. In such cases, you can rebuild the transaction log by attaching the database without the LDF file. SQL Server automatically creates a new transaction log file in the same directory as the MDF file.

Use the following command to attach the database and rebuild the log file:
CREATE DATABASE testdb
ON
(FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL\DATA\testdb5.mdf')
FOR ATTACH_REBUILD_LOG;
GO
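After the attach completes, you can confirm that a new log file was created by checking the file metadata for the database (a quick sanity check, reusing the testdb name from the example above):

-- Lists the data and log files that now make up the attached database.
SELECT name, physical_name, type_desc, size
FROM sys.master_files
WHERE database_id = DB_ID('testdb');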


Conclusion

SQL Server errors 5171 and 5172 can be caused by corruption in the header page, damage to the MDF file, or problems with the LDF file. This article discussed practical techniques to diagnose and fix these errors. Although DBCC CHECKDB can help in some situations, the safest and most effective option is often a professional SQL database repair solution such as Stellar Repair for MS SQL, which can quickly repair and recover corrupt MDF and NDF files while preserving data integrity.

HostForLIFE.eu SQL Server 2022 Hosting
HostForLIFE.eu is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.



SQL Server Hosting - HostForLIFE :: Efficient SQL Query to Remove Duplicates with ROW_NUMBER

clock March 6, 2025 07:04 by author Peter

Using ROW_NUMBER() and PARTITION BY (Preferred Approach)
The ROW_NUMBER() function assigns a unique row number to each record within a partition (group). We can use this to identify and delete duplicates while keeping only the required data.

Query Syntax
WITH CTE AS (
    SELECT
        *,
        ROW_NUMBER() OVER (PARTITION BY column1, column2 ORDER BY id) AS row_num
    FROM table_name
    WHERE condition  -- Apply filtering condition here
)
DELETE FROM CTE
WHERE row_num > 1;


Example
Consider a Customers table with duplicate entries based on Email (rows 1 and 3 below represent the same customer).

| ID | Name | Email             | City |
|----|------|-------------------|------|
| 1  | John | [email protected] | NY   |
| 2  | Jane | [email protected] | LA   |
| 3  | John | [email protected] | NY   |
| 4  | Sam  | [email protected] | TX   |

Removing Duplicates While Keeping the First Entry.

;WITH CTE AS (
    SELECT ID
    FROM (
        SELECT ID,
               ROW_NUMBER() OVER (PARTITION BY NAME, Email, City ORDER BY ID) AS RN
        FROM Customers
        WHERE City = 'NY'  -- Only NY state filtering condition
    ) AS sub
    WHERE RN > 1
)
DELETE FROM Customers
WHERE ID IN (SELECT ID FROM CTE);


Explanation of the Query
Identifies Duplicates

  • ROW_NUMBER() OVER (PARTITION BY Name, Email, City ORDER BY ID)
  • Assigns a row number (RN) within each duplicate group, so the first record in each group keeps RN = 1.

Filters Out Duplicates (RN > 1): Marks only the duplicate records where City = 'NY'.
Deletes Duplicate Records: Deletes only the IDs from the CTE that have RN > 1.
This pattern is also useful when you are joining more tables and need to delete duplicates from one specific table.
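Before running the DELETE, it is worth previewing exactly which rows will be removed. Here is a small sketch that reuses the Customers example above; run the SELECT first, then execute the DELETE once the output looks right:

-- Preview the duplicate rows (RN > 1 within each Name/Email/City group) before deleting them.
;WITH CTE AS (
    SELECT ID, Name, Email, City,
           ROW_NUMBER() OVER (PARTITION BY Name, Email, City ORDER BY ID) AS RN
    FROM Customers
    WHERE City = 'NY'
)
SELECT *
FROM CTE
WHERE RN > 1;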

HostForLIFE.eu SQL Server 2022 Hosting
HostForLIFE.eu is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.

 



SQL Server Hosting - HostForLIFE :: An in-depth Analysis of SQL Server Triggers and Their Benefits

clock August 7, 2024 06:54 by author Peter

SQL Server triggers are special stored procedures that are intended to run automatically in response to specific database events. These events could include activities related to data definition, like CREATE, ALTER, or DROP, as well as actions related to data manipulation, like INSERT, UPDATE, or DELETE. There are two main categories of triggers.

  • DML Triggers (Data Manipulation Language Triggers): These triggers activate in response to DML events, which encompass operations like INSERT, UPDATE, or DELETE.
  • DDL Triggers (Data Definition Language Triggers): These triggers activate in response to DDL events, which include operations such as CREATE, ALTER, or DROP.

When Is a Trigger Useful?
Triggers are used to guarantee that specific actions are carried out automatically in response to particular database events. Here are a few scenarios where triggers are useful.

  • Audit Trails
    • When: You need to track changes to important data for compliance, security, or historical analysis.
    • Why: Triggers can automatically log changes to an audit table without requiring additional application code, ensuring consistent and reliable tracking.
  • Enforcing Business Rules
    • When: Business rules must be enforced directly at the database level to ensure data integrity and consistency.
    • Why: Triggers ensure that business rules are applied uniformly, even if data is modified directly through SQL queries rather than through an application.
  • Maintaining Referential Integrity
    • When: You need to ensure that relationships between tables remain consistent, such as cascading updates or deletes.
    • Why: Triggers can automatically handle referential integrity tasks, reducing the risk of orphaned records or inconsistent data.
  • Synchronizing Tables
    • When: You need to keep multiple tables synchronized, such as maintaining a denormalized table or a summary table.
    • Why: Triggers can automatically propagate changes from one table to another, ensuring data consistency without manual intervention.
  • Complex Validations
    • When: Data validation rules are too complex to be implemented using standard constraints.
    • Why: Triggers can perform intricate checks and validations on data before it is committed to the database.
  • Preventing Invalid Transactions
    • When: Certain operations should be blocked if they don't meet specific criteria.
    • Why: Triggers can roll back transactions that violate predefined conditions, ensuring that only valid data modifications are allowed.

Benefits of Triggers

  • Automated Execution: Triggers run automatically in response to particular events, reducing the need for manual intervention.
  • Centralized Logic: Business rules and data integrity checks can be centralized within triggers, improving maintainability and reducing code duplication.
  • Real-time Auditing: Triggers can generate real-time audit logs of data changes.
  • Complex Integrity Checks: Triggers can enforce integrity checks that go beyond the capabilities of standard SQL constraints.
  • Data Consistency: Triggers help keep data consistent by ensuring that related modifications are applied throughout the database.

Special tables and SQL Server Triggers
The "magic tables" of SQL Server are two unique internal tables: DELETED and INSERTED. These tables are used by DML triggers to hold the data that is being changed by the action that initiates the trigger.

Understanding the INSERTED and DELETED Tables

  • The INSERTED table stores the affected rows during INSERT and UPDATE operations. During an INSERT operation, it holds the new rows being added to the table. In an UPDATE operation, it contains the new values post-update.
  • The DELETED table, on the other hand, stores the affected rows during DELETE and UPDATE operations. In a DELETE operation, it contains the rows being removed from the table. For an UPDATE operation, it holds the old values before the update.

Example: Logging Changes
Suppose we have an EmployeesDetails table and we want to create a trigger that logs all changes (inserts, updates, and deletes) to an EmployeesDetails_Audit table.

SQL Script
CREATE TABLE EmployeesDetails (
    EmployeeID INT PRIMARY KEY,
    Name NVARCHAR(100),
    Position NVARCHAR(100),
    Salary DECIMAL(10, 2)
);
CREATE TABLE EmployeesDetails_Audit (
    AuditID INT IDENTITY(1,1) PRIMARY KEY,
    EmployeeID INT,
    Name NVARCHAR(100),
    Position NVARCHAR(100),
    Salary DECIMAL(10, 2),
    ChangeDate DATETIME DEFAULT GETDATE(),
    ChangeType NVARCHAR(10)
);
GO
CREATE TRIGGER trgEmployeesDetailsAudit
ON EmployeesDetails
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;

    -- An UPDATE populates both inserted and deleted; an INSERT only inserted; a DELETE only deleted.
    IF EXISTS (SELECT * FROM inserted) AND EXISTS (SELECT * FROM deleted)
    BEGIN
        INSERT INTO EmployeesDetails_Audit (EmployeeID, Name, Position, Salary, ChangeType)
        SELECT EmployeeID, Name, Position, Salary, 'UPDATE'
        FROM inserted;   -- log the new values after the update
    END
    ELSE IF EXISTS (SELECT * FROM inserted)
    BEGIN
        INSERT INTO EmployeesDetails_Audit (EmployeeID, Name, Position, Salary, ChangeType)
        SELECT EmployeeID, Name, Position, Salary, 'INSERT'
        FROM inserted;
    END
    ELSE IF EXISTS (SELECT * FROM deleted)
    BEGIN
        INSERT INTO EmployeesDetails_Audit (EmployeeID, Name, Position, Salary, ChangeType)
        SELECT EmployeeID, Name, Position, Salary, 'DELETE'
        FROM deleted;
    END
END;
GO
-- Insert sample items into the EmployeesDetails table
INSERT INTO EmployeesDetails (EmployeeID, Name, Position, Salary)
VALUES
    (1, 'Alice Johnson', 'Software Engineer', 80000.00),
    (2, 'Bob Smith', 'Project Manager', 95000.00),
    (3, 'Charlie Davis', 'Analyst', 60000.00),
    (4, 'Diana Wilson', 'UX Designer', 72000.00),
    (5, 'Edward Brown', 'Database Administrator', 85000.00);
SELECT * FROM EmployeesDetails;
SELECT * FROM EmployeesDetails_Audit;

Query Window

Execution of trigger
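To see the trigger in action, you can run a quick test against the sample rows inserted above; the values below are illustrative only:

-- Fire the trigger with an update and a delete, then inspect the audit trail.
UPDATE EmployeesDetails
SET Salary = 90000.00
WHERE EmployeeID = 1;

DELETE FROM EmployeesDetails
WHERE EmployeeID = 3;

-- Each change above should now appear as a row in the audit table.
SELECT * FROM EmployeesDetails_Audit
ORDER BY AuditID;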

HostForLIFE.eu SQL Server 2022 Hosting
HostForLIFE.eu is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.

 



SQL Server Hosting - HostForLIFE :: Inner Workings of a Query Processor

clock July 19, 2024 08:03 by author Peter

The query processor is an integral part of a database management system (DBMS): it interprets and executes user queries so that users can interact with the database efficiently. It ensures that queries are processed effectively and return the intended results. Here, we examine a query processor's essential features, including its components and how they work together.

Query Processor


 

Linker and Compiler
In the query processing pipeline, the compiler and linker are essential components. High-level queries defined in Data Definition Language (DDL) or Data Manipulation Language (DML) are translated by the compiler into machine code or lower-level code that the database engine can run. These compiled code fragments are subsequently combined by the linker into a cohesive entity that is prepared for execution. This procedure is similar to the compilation and linking process used to create executable programs from traditional programming languages.

DML Queries

Data Manipulation Language (DML) queries are used to manipulate the data within a database. Common DML operations include:

  • SELECT: Retrieve data from the database.
  • INSERT: Add new records to the database.
  • UPDATE: Modify existing records.
  • DELETE: Remove records from the database.
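For illustration, here are minimal examples of each operation against a hypothetical Orders table (the table and values are placeholders, not part of the article's schema):

-- Hypothetical Orders table used only to illustrate the four DML operations.
SELECT OrderID, Amount FROM Orders WHERE Amount > 100;        -- read rows
INSERT INTO Orders (OrderID, Amount) VALUES (1001, 250.00);   -- add a row
UPDATE Orders SET Amount = 300.00 WHERE OrderID = 1001;       -- modify a row
DELETE FROM Orders WHERE OrderID = 1001;                      -- remove a row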

The query processor interprets these queries, optimizes them, and ensures they are executed efficiently, maintaining data integrity and performance.

DDL Interpreter
The Data Definition Language (DDL) interpreter is responsible for handling DDL commands that define the database schema. DDL commands include:

  • CREATE: Define new database objects like tables, indexes, and views.
  • ALTER: Modify the structure of existing database objects.
  • DROP: Delete database objects.

The DDL interpreter ensures that these commands are correctly parsed and executed, updating the database schema as required.
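For comparison, here are representative DDL statements (again using placeholder object names):

-- Placeholder objects, shown only to illustrate schema-changing DDL.
CREATE TABLE Orders (OrderID INT PRIMARY KEY, Amount DECIMAL(10, 2));
ALTER TABLE Orders ADD OrderDate DATETIME;                 -- change the table's structure
CREATE INDEX IX_Orders_OrderDate ON Orders (OrderDate);
DROP TABLE Orders;                                         -- remove the object entirely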

Application Program Object Code
Application programs communicate with the database through embedded SQL queries. These queries are written in the application code and then compiled into object code, the executable form of the SQL queries, which allows the application to interact with the database seamlessly.

DML Compiler and Organizer

The DML compiler converts DML queries into an intermediate form and optimizes them for efficient execution. This intermediate form is typically a sequence of low-level operations that the query evaluation engine can execute directly. The organizer then arranges these operations into an optimal execution plan so that the database engine can run them quickly.

Query Evaluation Engine
The query evaluation engine is the main component responsible for executing the compiled and optimized queries. It processes the intermediate code produced by the DML compiler and performs the actions needed to retrieve or modify the data described in the query. The evaluation engine follows the optimization strategies defined by the query optimizer, ensuring efficient execution.

Conclusion
A query processor is a sophisticated component of a DBMS, integrating various functions to ensure efficient query processing. From compiling and linking queries to interpreting DDL commands and executing DML operations, each component plays a crucial role in maintaining the performance and integrity of the database. Understanding these components helps in appreciating the complexity and efficiency of modern database management systems.

HostForLIFE.eu SQL Server 2022 Hosting
HostForLIFE.eu is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.



SQL Server Hosting - HostForLIFE :: SQL's RANK, DENSE_RANK, and ROW_NUMBER ranking functions

clock July 12, 2024 09:23 by author Peter

SQL's ranking functions let you assign a rank to each row within a partition of a result set. These functions are very handy when you need to order rows according to certain criteria. The three main ranking functions are RANK(), DENSE_RANK(), and ROW_NUMBER(). Although they look similar, they behave differently, particularly when it comes to handling ties. In this post, we'll examine these functions in depth and walk through a practical example that illustrates how they differ.

RANK()
The RANK() function assigns a unique rank to each row within a partition of a result set, with gaps in the ranking sequence where there are ties. This means that if two or more rows have the same value in the ordering column(s), they will be assigned the same rank, but the next rank will be incremented by the number of tied rows.

Syntax

RANK() OVER (
    PARTITION BY column1, column2, ...
    ORDER BY column1, column2, ...
)


Example
SELECT
    Name,
    Score,
    RANK() OVER (ORDER BY Score DESC) AS Rank
FROM
    Students;

Result

| Name    | Score | Rank |
|---------|-------|------|
| Alice   | 95    | 1    |
| Bob     | 85    | 2    |
| Charlie | 85    | 2    |
| Dave    | 75    | 4    |
| Eve     | 70    | 5    |

In this example, Bob and Charlie have the same score and are both ranked 2nd. The next rank, assigned to Dave, is 4th, leaving a gap at rank 3.

DENSE_RANK()

The DENSE_RANK() function is similar to RANK() but without gaps in the ranking sequence. When rows have the same value in the ordering column(s), they receive the same rank, but the next rank is incremented by one, regardless of the number of ties.

Syntax

DENSE_RANK() OVER (
    PARTITION BY column1, column2, ...
    ORDER BY column1, column2, ...
)


Example
SELECT
    Name,
    Score,
    DENSE_RANK() OVER (ORDER BY Score DESC) AS DenseRank
FROM
    Students;

Result
| Name    | Score | DenseRank |
|---------|-------|-----------|
| Alice   | 95    | 1         |
| Bob     | 85    | 2         |
| Charlie | 85    | 2         |
| Dave    | 75    | 3         |
| Eve     | 70    | 4         |

Here, Bob and Charlie are both ranked 2nd, but the next rank is 3rd, assigned to Dave, with no gaps in the ranking sequence.

ROW_NUMBER()
The ROW_NUMBER() function assigns a unique sequential integer to rows within a partition, without considering ties. Each row gets a distinct number, even if there are ties in the ordering column(s).

Syntax

ROW_NUMBER() OVER (
    PARTITION BY column1, column2, ...
    ORDER BY column1, column2, ...
)


Example
SELECT
    Name,
    Score,
    ROW_NUMBER() OVER (ORDER BY Score DESC) AS RowNum
FROM
    Students;


Result

| Name    | Score | RowNum |
|---------|-------|--------|
| Alice   | 95    | 1      |
| Bob     | 85    | 2      |
| Charlie | 85    | 3      |
| Dave    | 75    | 4      |
| Eve     | 70    | 5      |

In this example, even though Bob and Charlie have the same score, they are assigned unique row numbers 2 and 3, respectively.
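To see all three functions side by side, you can compute them in a single query over the same Students table used throughout this post:

-- Compare the three ranking functions over the same ordering.
SELECT
    Name,
    Score,
    RANK()       OVER (ORDER BY Score DESC) AS RankValue,
    DENSE_RANK() OVER (ORDER BY Score DESC) AS DenseRankValue,
    ROW_NUMBER() OVER (ORDER BY Score DESC) AS RowNum
FROM
    Students;

For the tied scores of Bob and Charlie, this returns 2 and 2 for RANK(), 2 and 2 for DENSE_RANK(), and 2 and 3 for ROW_NUMBER(), summarizing the differences described above in a single result set.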

HostForLIFE.eu SQL Server 2022 Hosting
HostForLIFE.eu is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.




