European Windows 2012 Hosting BLOG

BLOG about Windows 2012 Hosting and SQL 2012 Hosting - Dedicated to European Windows Hosting Customers

SQL Server Hosting - HostForLIFE.eu :: Cascading Deletes in LINQ to SQL

November 11, 2020 07:34 by author Peter

This article will discuss alternative methods for performing cascading deletes using LINQ to SQL.  Cascading delete refers to the action of removing records associated by a foreign key relationship to a record that is the target of a deletion action.  LINQ to SQL does not specifically handle cascading deletes and it is up to the developer to determine whether or not that action is desired.  It is also up to the developer to determine how to go about accomplishing the cascading delete.
 
Problem
The problem of performing a cascading delete is not new to LINQ to SQL, and one has essentially the same alternatives as always.  The issue is deciding whether to delete or retain the records associated with a record targeted for deletion, where that record maintains a foreign key relationship with records contained in other tables in the database, and more specifically where the foreign key fields are not nullable.
 
As an example, consider the customer table within the Northwind database.  The customer table has a foreign key relationship established with the Orders table (which in turn maintains a foreign key relationship with the Order_Details table).  In order to delete a customer which has associated Orders, one needs to dispose of or otherwise handle the associated records in both the Orders and Order_Details tables.  In the LINQ to SQL jargon, the associated tables are referred to as entity sets.
 
LINQ to SQL will not violate the foreign key relationships and if an application attempts to delete a record with such relationships in place, the executing code will throw an exception.
 
Using the Northwind example, if one were to attempt to delete a customer with associated orders, an exception will occur.  That is not really a problem; that is how it should be, otherwise why have foreign key relationships at all?  The real issue is deciding whether you would ever want to delete records with associated entity sets, and if you do, how you want to handle it: do you want to keep the associated records or delete them right along with the targeted record?

There are several possible alternatives at your disposal.  You can handle the cascading deletes using LINQ to SQL from within your code, or you can handle the foreign key relationships from within SQL Server.
 
If you were to execute this code against the Northwind database, it would create a customer with an associated order and order details.
    try
    {
        Customer c = new Customer();
        c.CustomerID = "AAAAA";
        c.Address = "554 Westwind Avenue";
        c.City = "Wichita";
        c.CompanyName = "HostForLIFE.eu";
        c.ContactName = "Peter";
        c.ContactTitle = "Boss";
        c.Country = "USA";
        c.Fax = "888-335-5933";
        c.Phone = "888-225-4934";
        c.PostalCode = "88888";
        c.Region = "EA";

        Order_Detail od = new Order_Detail();
        od.Discount = .25f;
        od.ProductID = 1;
        od.Quantity = 25;
        od.UnitPrice = 25.00M;

        Order o = new Order();
        o.Order_Details.Add(od);
        o.Freight = 25.50M;
        o.EmployeeID = 1;
        o.CustomerID = "AAAAA";

        c.Orders.Add(o);

        using (NWindDataContext dc = new NWindDataContext())
        {
            var table = dc.GetTable<Customer>();
            table.InsertOnSubmit(c);
            dc.SubmitChanges();
        }
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.Message);
    }


But if you then tried to delete the customer without handling the entity sets, using something like this:

    using (NWindDataContext dc = new NWindDataContext())
    {
        var q =
            (from c in dc.GetTable<Customer>()
             where c.CustomerID == "AAAAA"
             select c).Single<Customer>();

        dc.GetTable<Customer>().DeleteOnSubmit(q);
        dc.SubmitChanges();
    }

the SubmitChanges call would throw an exception, because deleting the customer would violate the foreign key constraints on the related Orders and Order_Details records.


Handling the Delete with LINQ to SQL

You can handle the cascading deletes by manually deleting all of the related entities in the associated entity sets; here is a simple approach to doing that:
    try
    {
        using (NWindDataContext dc = new NWindDataContext())
        {
            var q =
                (from c in dc.GetTable<Customer>()
                 where c.CustomerID == "AAAAA"
                 select c).Single<Customer>();

            foreach (Order ord in q.Orders)
            {
                dc.GetTable<Order>().DeleteOnSubmit(ord);

                foreach (Order_Detail od in ord.Order_Details)
                {
                    dc.GetTable<Order_Detail>().DeleteOnSubmit(od);
                }
            }
            dc.GetTable<Customer>().DeleteOnSubmit(q);
            dc.SubmitChanges();
        }
        UpdateDataGrid();
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.Message);
    }


In looking at this example, to delete the customer along with the related orders and order details, the code first selects the matching customer by the CustomerID field (the primary key).  Once a match is found, the code loops through the orders related to the customer and marks them for deletion using the DeleteOnSubmit call.
 
Further, since another relationship exists between the order and order details, the code loops through all of the order details associated with each order and marks them for deletion as well.  Lastly, the customer itself is marked for deletion, and then SubmitChanges is called on the data context.  The order in which the entities are marked for deletion does not matter; LINQ to SQL sorts that out during the execution of the SubmitChanges call based upon the configuration of the foreign keys.

Handling the Cascading Delete from within SQL Server
It is possible to manage the cascading deletes entirely from within SQL Server.  To do this, one need only set the delete rule for the foreign key relationship to cascade.

If you have a database diagram built, the easiest way to set the delete rule is to open the diagram, click on the foreign key relationship within the diagram, expand the INSERT And UPDATE Specification property to expose the Delete Rule property, and then set the Delete Rule to Cascade.
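 
If you would rather not use the designer, the same delete rule can be applied in T-SQL by rebuilding the foreign key. Here is a minimal sketch against Northwind (the constraint name is assumed; check sys.foreign_keys for the real one, and the same change would also be applied to the Order Details constraint):
    ALTER TABLE dbo.Orders
        DROP CONSTRAINT FK_Orders_Customers;  -- constraint name assumed

    ALTER TABLE dbo.Orders
        ADD CONSTRAINT FK_Orders_Customers
        FOREIGN KEY (CustomerID) REFERENCES dbo.Customers (CustomerID)
        ON DELETE CASCADE;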
 
To repeat the example of deleting a customer with related orders, if we were to set all of the constraint delete rules to cascade we could delete a customer with this bit of code:
    try
    {
        using (NWindDataContext dc = new NWindDataContext())
        {
            var q =
                (from c in dc.GetTable<Customer>()
                 where c.CustomerID == "AAAAA"
                 select c).Single<Customer>();

            dc.GetTable<Customer>().DeleteOnSubmit(q);
            dc.SubmitChanges();
        }
        UpdateDataGrid();
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.Message);
    }


As you can see in this example code, there was no need to bother with marking each member of an entity set for deletion, as in this case SQL Server was instructed how to deal with the deletion of the customer or order records.  As a result, deleting the customer also deletes the related records contained in the Orders and Order Details tables.
 
Handling the Cascading Delete with Set Null or Set Default
It is also possible to set the foreign key field in the entity sets to nullable and then set the delete rule for that field to "Set Null".  One could also set a default value for the field and set the delete rule to "Set Default".
 
Either approach could be useful if there were a need to delete (in this example) a customer record but retain the order and order detail records.  Either could be handled in a manner similar to the previous solution alternative.  Setting the foreign key to nullable is probably not advisable, but it is a workable alternative.
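 
In T-SQL terms, retaining the orders would look something like this sketch, assuming Orders.CustomerID has been made nullable and using the same assumed constraint name as before:
    ALTER TABLE dbo.Orders
        DROP CONSTRAINT FK_Orders_Customers;  -- constraint name assumed

    ALTER TABLE dbo.Orders
        ADD CONSTRAINT FK_Orders_Customers
        FOREIGN KEY (CustomerID) REFERENCES dbo.Customers (CustomerID)
        ON DELETE SET NULL;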
 
Handling the Cascading Delete With a Stored Procedure
One may create or add a stored procedure that will accomplish the cascading delete and invoke that stored procedure using LINQ to SQL.  Stored procedures added to the designer may be invoked directly from the data context; for example, if we had a stored procedure called DeleteCustomer that took the customer ID as an argument and handled the cascading deletes, we could do something like this:
    using (NWindDataContext dc = new NWindDataContext())
    {
        dc.DeleteCustomer("AAAAA");
    }
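 
Northwind does not ship with such a procedure, so here is a minimal sketch of what a hypothetical DeleteCustomer might look like, deleting the child rows before the parent:
    CREATE PROCEDURE dbo.DeleteCustomer
        @CustomerID NCHAR(5)
    AS
    BEGIN
        SET NOCOUNT ON;

        -- Delete grandchildren, then children, then the customer itself.
        DELETE od
        FROM [Order Details] od
            INNER JOIN Orders o ON od.OrderID = o.OrderID
        WHERE o.CustomerID = @CustomerID;

        DELETE FROM Orders WHERE CustomerID = @CustomerID;
        DELETE FROM Customers WHERE CustomerID = @CustomerID;
    END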


Cascading deletes are not new to LINQ to SQL; it is the same issue it has always been.  In this article, I have described a few approaches to dealing with cascading deletes from within the code and from the SQL Server side but as is true with many things in .NET, there are several other ways to accomplish such action from within LINQ to SQL.

HostForLIFE.eu SQL Server 2019 Hosting
HostForLIFE.eu is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.



SQL Server Hosting - HostForLIFE.eu :: Multiple Active Result Sets (MARS)

November 5, 2020 08:10 by author Peter

MARS (Multiple Active Result Sets) is a feature introduced in SQL Server 2005. With a single connection to the database, our applications can execute multiple queries and stored procedures, treating each result of a query or stored procedure as an active result. These results are forward-only, read-only data. Prior to MARS, multiple database connections were required to retrieve data for separate queries or stored procedures; now all of the data can be retrieved over a single connection. This helps your application performance and code management greatly.

In the previous approach, each SqlConnection object had to be disposed of correctly, along with the DataReaders associated with each query. Otherwise, you would run into errors like "There is already an open DataReader associated with this Connection".

With the introduction of MARS in SQL Server 2005, you can say bye-bye to all of these problems and performance bottlenecks.

Let's start the Implementation

Settings

You must change your connection string as follows:

string connectionString = "Data Source=TestServer;" + "Initial Catalog=Pubs;Integrated Security=SSPI;" + "MultipleActiveResultSets=True";

MARS is disabled by default on the Connection object. You have to enable it with the addition of MultipleActiveResultSets=true in your connection string.

Program:

This C# code snippet demonstrates the use of MARS.

using System;
using System.Data;
using System.Data.SqlClient;

public class MARSdemo
{
    public static void Main()
    {
        string MarsConnectionStr = "Data Source=TestServer;" + "Initial Catalog=Pubs;Integrated Security=SSPI;" + "MultipleActiveResultSets=True";
        string Mars_Command_Str1 = "select * from Salary;";
        string Mars_Command_Str2 = "select * from Employees;";

        // Both commands share the same connection; this is what MARS makes possible.
        SqlConnection con = new SqlConnection(MarsConnectionStr);
        SqlCommand cmd1 = new SqlCommand(Mars_Command_Str1, con);
        SqlCommand cmd2 = new SqlCommand(Mars_Command_Str2, con);
        SqlDataReader rdr1 = null;
        SqlDataReader rdr2 = null;
        try
        {
            con.Open();
            rdr1 = cmd1.ExecuteReader();   // first active result set
            rdr2 = cmd2.ExecuteReader();   // second active result set on the same open connection
        }
        catch (SqlException ex)
        {
            Console.WriteLine(ex.Message);
        }
        finally
        {
            if (rdr1 != null) rdr1.Close();
            if (rdr2 != null) rdr2.Close();
            if (con.State == ConnectionState.Open)
            {
                con.Close();
            }
        }
    }
}

If you try to execute this code against SQL Server 2000 or lower versions, you will get an InvalidOperationException with the following error message.

"There is already an open DataReader associated with this Command which must be closed first."

MARS works on the concepts of "multiplexing" and "interleaving". SQL's SELECT statement is a multiplexing-enabled statement. A multiplexing-enabled statement can pause its execution partway through and allow a non-multiplexed statement, such as an INSERT, to execute.

For example, suppose you are retrieving 1 million records using a SELECT statement, and in between, an INSERT query arrives via a MARS-enabled application under the same connection object; the SELECT statement pauses until the INSERT operation completes and afterward resumes execution.

Note: Managed code (stored procedures, functions, triggers) can't be multiplexed.

HostForLIFE.eu SQL Server 2019 Hosting
HostForLIFE.eu is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.



SQL Server 2019 Hosting - HostForLIFE.eu :: Resizing Tempdb In SQL Server

October 21, 2020 08:05 by author Peter

Occasionally, we must resize or realign our tempdb log file (.ldf) or data files (.mdf or .ndf) due to a growth event that forces the file size out of whack. To resize, we have three options: restart the SQL Server service, add additional files, or shrink the current file. We have most likely all been faced with runaway log files, and in an emergency situation restarting the SQL services may not be an option, but we still need to get our log file size smaller before we run out of disk space, for example. The process of shrinking down that file can get tricky, so I created this flow chart to help you out if you ever get into this situation.

Now it's very important to note that many of these commands will clear your cache and will greatly impact your server performance while it warms the cache back up. In addition, you should not shrink your database data or log files unless absolutely necessary; doing so improperly can result in a corrupt tempdb.
 
The first thing you must do is issue a CHECKPOINT. A checkpoint marks the log as a "good up to here" point of reference. It lets the SQL Server Database Engine know it can start applying changes contained in the log during recovery after this point if an unexpected shutdown or crash occurs. Anything prior to the checkpoint is what I like to call "hardened". This means all the dirty pages in memory have been written to disk, specifically to the .mdf and .ndf files. So, it is important to make that mark in the log before you proceed. Now, we know tempdb is not recovered during a restart, it is recreated; however, this is still a requirement.
    USE TEMPDB;    
    GO    
    CHECKPOINT;  


Next, we try to shrink the log by issuing a DBCC SHRINKFILE command. This is the step that frees the unallocated space from the database file if there is any unallocated space available. You will note the Shrink? decision block in the diagram after this step. It is possible that there is no unallocated space and you will need to move further along the path to free some up and try again.
    USE TEMPDB;    
    GO   
    DBCC SHRINKFILE (templog, 1000);   --Shrinks it to 1GB  
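
If you are not sure whether there is unallocated space to reclaim, a quick sketch against the sys.dm_db_file_space_usage DMV will show how tempdb's space is currently allocated (the counts are 8 KB pages, so dividing by 128 gives MB):
    USE TEMPDB;
    GO
    SELECT SUM(unallocated_extent_page_count) / 128 AS free_space_mb,
           SUM(version_store_reserved_page_count) / 128 AS version_store_mb,
           SUM(internal_object_reserved_page_count) / 128 AS internal_objects_mb,
           SUM(user_object_reserved_page_count) / 128 AS user_objects_mb
    FROM sys.dm_db_file_space_usage;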


If the database shrinks, great, congratulations. However, for some of us there still might be work to do. Next up is to try to free up some of that allocated space by running DBCC DROPCLEANBUFFERS and DBCC FREEPROCCACHE.
 
DBCC DROPCLEANBUFFERS
 
Clears the clean buffers from the buffer pool and columnstore object pool. This will flush cached indexes and data pages.
    DBCC DROPCLEANBUFFERS WITH NO_INFOMSGS;  

DBCC FREEPROCCACHE
 
Clears the procedure cache, which you are probably familiar with as a performance tuning tool in development. It will clean out all your execution plans from cache, which may free up some space in tempdb. As we know, though, this will create a performance hit, as your execution plans now have to make it back into cache on their next execution and will not get the benefit of plan reuse. Now, it's not really clear why this works, so I asked tempdb expert Pam Lahoud (B|T) for clarification as to why this has anything to do with tempdb. Both of us are diving into this to understand exactly why this works. I believe it to be related to tempdb using cached objects and memory objects associated with stored procedures, which can have latches and locks on them that need to be released by running this. Check back for further clarification; I'll be updating this as I find out more.

    DBCC FREEPROCCACHE WITH NO_INFOMSGS;  

Once these two commands have been run and you have attempted to free up some space you can now try the DBCC SHRINKFILE command again. For most this should make the shrink possible and you will be good to go. Unfortunately, a few more of us may have to take a couple more steps through to get to that point.
 
The last two things I do, when I have no other choice to get my log file smaller, are to run the final two commands in the process. These should do the trick and get the log to shrink.
 
DBCC FREESESSIONCACHE
 
This command will flush any distributed query connection cache, meaning queries that are between two or more servers.
    DBCC FREESESSIONCACHE WITH NO_INFOMSGS;  

DBCC FREESYSTEMCACHE
 
This command will release all unused remaining cache entries from all cache stores including temp table cache. This covers any temp table or table variables remaining in cache that need to be released.
    DBCC FREESYSTEMCACHE ('ALL');  

In my early days as a database administrator I would have loved to have this diagram. Having some quick steps during stressful situations such as tempdb’s log file filling up on me would have been a huge help. So hopefully someone will find this handy and will be able to use it to take away a little of their stress.



European Entity Framework Core 1.0 Hosting - HostForLIFE.eu :: Entity Framework Code First Approach

October 14, 2020 08:57 by author Peter

Entity framework is an Object/Relational Mapping (O/RM) framework. It is an enhancement to ADO.NET that gives developers an automated mechanism for accessing & storing the data in the database. ORM framework automatically creates classes based on database tables and the opposite is also true; that is, it can also automatically generate necessary SQL to create database tables based on classes.

First we need to install Entity Framework, and the best way to install it is by using the NuGet Package Manager.
Code first approach allows us to create our custom classes first and based on those classes entity framework can generate database automatically for us.
 
Step 1
Create New Project - Select Web Application - Select MVC Template - Click OK.
 
Step 2
Go to Tools - NuGet Package Manager - Manage NuGet Packages For Solution - Browse, type Entity Framework, and click Install.
 
Step 3
Add the following classes in the Models folder of your project.
    using System.Collections.Generic;
    using System.ComponentModel.DataAnnotations;
    using System.ComponentModel.DataAnnotations.Schema;

    namespace CodeFirstApproach.Models
    {
        [Table("tblClass")]
        public class Class
        {
            [Key]
            public int ClassID { get; set; }
            public string ClassName { get; set; }
            public List<Student> Students { get; set; }
            public List<Course> Courses { get; set; }
        }

        [Table("tblStudent")]
        public class Student
        {
            [Key]
            public int StudentID { get; set; }
            public string StudentName { get; set; }
            public Class Class { get; set; }
        }

        [Table("tblCourse")]
        public class Course
        {
            [Key]
            public int CourseID { get; set; }
            public string CourseName { get; set; }
            public List<Class> Classes { get; set; }
        }
    }


Step 4
Add another class as the context class, and add a connection string in web.config. In the class, add a DbSet property where each DbSet will map to a table in the database. If we have a DbSet property of Student objects named Students, Entity Framework will by default look for a Student table inside the database, which in our case is tblStudent since we used [Table("tblStudent")] as an annotation.
    // DbContext maps to a specific database that has a schema the context understands.
    public class StudentContext : DbContext
    {
        public StudentContext() : base("name=conn_StudentDB") { }

        public DbSet<Class> Classes { get; set; }
        public DbSet<Student> Students { get; set; }
        public DbSet<Course> Courses { get; set; }

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            modelBuilder.Entity<Class>().HasMany(s => s.Courses).WithMany(c => c.Classes).Map(cs =>
            {
                cs.MapLeftKey("ClassId");
                cs.MapRightKey("CourseId");
                cs.ToTable("tblClassCourse");
            });
        }
    }

Note
We could also say that the DbSet class represents an entity set that is used for create, read, update, and delete operations.
 
Web.Config
Add the connection string, referenced by name from the context's constructor, to web.config:
    <connectionStrings>  
       <add name="conn_StudentDB" connectionString="data source=.;initial catalog=StudentDB;integrated security=True;MultipleActiveResultSets=True;" providerName="System.Data.SqlClient" />  
    </connectionStrings>  

Step 5
 
In the HomeController Index action, write the following code and run it (press F5).

    public ActionResult Index()  
    {  
        StudentContext context = new StudentContext();  
         var x = (from z in context.Courses select z).ToList();  
        return View();  
    }  

Observation
 
It will create the database with all the tables.
 
Point to Start
 
Another approach is code-based migration. Drop the database and follow the steps below.
 
Step 1
Entity Framework Code First Migration
 
Go To Package Manager Console and type ,
 
PM> Enable-Migrations
 
Note
It will create a Configuration file in the Migration Folder.
 
Step 2
 
PM> Add-Migration -Name CreateDb
 
Note
It will create a Script in the Migration Folder with xxx_CreateDb.cs. It will consist of two methods,  Up() and Down().
    public partial class CreateDb: DbMigration {  
        public override void Up() {  
            //code to execute when you Update the DB.  
        }  
        public override void Down() {  
            //code to execute when you Rollback.   
        }  
    }  

Step 3 - PM> Update-Database
We will see our database getting created once again.
 
Currently we have no data. Insert a few records into the tables:
    insert into [dbo].[tblClass] values('Computer Science'),('Electronics And Communication'),('Mechanical')  
    insert into [dbo].[tblStudent]values('Peter',1),('Scott',2),('Daniel',3),('Raymond',1),('Anthony',2),('Greg',3)  
    insert into [dbo].[tblCourse] values('.Net'),('SQL'),('Java'),('VLSI'),('Analog Communication'),('Optical Fiber Communication'),('Thermodynamics'),('Measurement and Instructions'),('Advanced Solid Mechanics')  
    insert into [dbo].[tblClassCourse] values(1,1),(1,2),(1,3),(2,4),(2,5),(2,6),(2,3),(3,7),(3,8),(3,9),(3,3),(1,5),(3,5);  

Updating Table
 
Step 1
 
Now we will add a PhoneNo property to the Student class.
    [Table("tblStudent")]  
    public class Student {  
        public int StudentID {  
            get;  
            set;  
        }  
        public string StudentName {  
            get;  
            set;  
        }  
        [Required]  
        public string PhoneNo {  
            get;  
            set;  
        }  
        public Class Class {  
            get;  
            set;  
        }  
    }  


Step 2
 
Go to the console and run Add-Migration again for the changes in our model, which will create xxx_AddedProperty_tblStudent.cs inside the Migration folder.
 
PM> Add-Migration -Name AddedProperty_tblStudent
    public partial class AddedProperty_tblStudent: DbMigration {  
        public override void Up() {  
            AddColumn("dbo.tblStudent", "PhoneNo", c => c.String(nullable: false));  
            //Setting default value for any property  
            //AddColumn("dbo.tblStudent", "PhoneNo", c => c.String(nullable:true,defaultValue:"NULL"));      [Do not use Required Annotation then]  
            //AddColumn("dbo.tblStudent", "JoinDate", c => c.DateTime(nullable: false, defaultValueSql:"GetDate()"));  
        }  
        public override void Down() {  
            DropColumn("dbo.tblStudent", "PhoneNo");  
        }  
    }  

Step 3
 
PM> Update-Database. Check the following changes in DB for tblStudent.
 
RollBack - Update-Database -TargetMigration:"CreateDb"



SQL Server 2019 Hosting - HostForLIFE.eu :: SQL Injection Protection Methods

September 22, 2020 09:33 by author Peter

In this article, we are going to look at SQL Injection Protection Methods. SQL Injection is one of the most dangerous issues any organization can ever go through because it threatens organizational and customer confidentiality and integrity.

For us to be able to counter or prevent SQL Injection, we need to look closely at how each type of injection is implemented to figure out how best to avoid that method. To start with, we will look at the means of injection, which in this case is the SQL statement. Every type of injection manipulates a SQL statement to pass a malicious statement. So can this be avoided? Yes. It is very true that SQL injections remain a huge threat to web development, and preventing them has not yet proven to be 100% accurate, but in the meantime there are measures and vulnerabilities that we can handle and still be able to limit our exposure to injection attacks.
 
Prevention of SQL Injections
In-band (Classic) Injection includes error-based and union-based injection. With this type of injection, the attacker uses the same channel to launch the attack and gather the results. The biggest vulnerability in this attack is dynamic SQL statements, either in a simple statement or a stored procedure. Developers often assume that stored procedures are not injection targets, but they are if they are implemented dynamically. In a secure application, it is handy to use a parameterized query, which behaves as if it is pre-running the query with placeholder data in the input field; this way, the server knows what command to expect, and the query cannot be altered outside the expected variables (i.e., a SELECT statement cannot be changed to a DROP statement).
 
Example
    EXEC SelectAllCustomers @FulName = '" + parameter1 + "', @PostalCode = '" + parameter2 + "'  

A stored procedure implemented in this way is still open to injection attack because of its dynamic nature. Since a lot has been tried and has failed, the most secure way to protect our web applications from classic injection is to use strict validation on user input. This has its disadvantages, because it might restrict users when they are using the application, but it is the safest way to avoid classic injection.
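 
For contrast, the same call can be made safely with sp_executesql and typed parameters (the parameter names and lengths here are assumptions); because the values are bound as data rather than concatenated into the command text, they can never change the shape of the statement:
    DECLARE @FullName   NVARCHAR(100) = N'Peter';   -- in practice these values come from user input
    DECLARE @PostalCode NVARCHAR(20)  = N'88888';

    EXEC sp_executesql
        N'EXEC dbo.SelectAllCustomers @FulName = @p1, @PostalCode = @p2',
        N'@p1 NVARCHAR(100), @p2 NVARCHAR(20)',
        @p1 = @FullName,
        @p2 = @PostalCode;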
 
In ASP.NET applications, one can use the following validation control to monitor user input.
<asp:RegularExpressionValidator ID="RegularExpressionValidator1" ControlToValidate="TextBox1" ValidationExpression="[a-zA-Z0-9]*[^!@%~?:#$%^&*()0']*" runat="server" ErrorMessage="Invalid Input" ForeColor="Red" Font-Size="XX-Small" Display="dynamic" Font-Italic="true"></asp:RegularExpressionValidator>  

The above control will ensure that no special characters are included in the user input and will show a client-side response to inform the user that the input cannot be accepted.

And this will only allow letters and numbers, which basically cannot be used in an injection. This may be seen as a disadvantage, given that there are some unique names such as 'Ren'ee' with special characters in them, and this might limit user flexibility when using the web application.

Other than this, we should also bear in mind that databases have their own security features, which include READ and WRITE permissions, so it is very important to ensure that our database does not allow READ and WRITE permissions to unknown logins. You can find more on MSSQL permissions via this link.
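 
As a sketch of that idea (the role and user names below are hypothetical), the application's login can be limited to reading and executing stored procedures rather than writing to tables directly:
    -- Hypothetical application role with no direct write access to tables.
    CREATE ROLE app_readonly;
    GRANT SELECT ON SCHEMA::dbo TO app_readonly;
    GRANT EXECUTE ON SCHEMA::dbo TO app_readonly;
    ALTER ROLE app_readonly ADD MEMBER WebAppUser;  -- WebAppUser is a hypothetical database user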
 
Microsoft also put out a SQL Injection Inspection Tool that sniffs out unvalidated input that is contained in a query. There are also other tools available online to search and scan your web application or website for vulnerabilities. They test your application using different types of SQL injection techniques. This will allow you to know the weaknesses and fix them beforehand.
 
The use of a Web Application Firewall for web applications that access databases can help identify SQL injection attempts and may help prevent SQL injection attempts from reaching the application.
 
Another safety precaution would be to encrypt passwords in the database. Password hashing is a safe way of ensuring that confidential passwords are not stored in the database as plain text, as this could also culminate in an internal threat if an untrusted member of the organization has access to the database. Hashing passwords on insertion is one way to ensure that a malicious attacker will not gain access to sensitive information.
 
The following snippet shows an insert statement that makes use of one-way hashing and encrypts just the input given by the user and this makes the database safer. There are many other ways of encryption which are supported by MS SQL Server.
    INSERT INTO dbo.[tblUser] (user_login, PasswordHash, FullName, Dept)  
            VALUES(@user_login, HASHBYTES('SHA2_512', @PWD), @pFullName, @Dept)  

Data in the table will then hold the hashed value in the PasswordHash column rather than the plain-text password.

Another tip is to use error messages that reveal little of what is happening internally. In Try...Catch statements, it is wiser to use a custom message once an exception has been encountered.
 
Example
    Try
        Using con As New SqlConnection(ConfigurationManager.ConnectionStrings("Constring").ConnectionString)
            Using cmd = New SqlCommand("SaveBeneficiary", con)
                cmd.CommandType = CommandType.StoredProcedure
                cmd.Parameters.AddWithValue("@Surname", txtBenSurname.Text)
                cmd.Parameters.AddWithValue("@firstName", txtBenfName.Text)
                cmd.Parameters.AddWithValue("@sex", cmbSex.SelectedValue)
                cmd.Parameters.AddWithValue("@IDNum", txtBenIDNO.Text)
                cmd.Parameters.AddWithValue("@Marital_Status", cmbBenMaritalStatus.SelectedValue)
                cmd.Parameters.AddWithValue("@DOB", txtBenDOB.Text)
                cmd.Parameters.AddWithValue("@Address", txtBenAddress.Text)
                cmd.Parameters.AddWithValue("@Phone", txtBenContact.Text)
                cmd.Parameters.AddWithValue("@Employer", txtBenEmployer.Text)
                cmd.Parameters.AddWithValue("@Relationship", cmbRelationship.SelectedValue)
                cmd.Parameters.AddWithValue("@PolicyNum", txtPolicyNo.Text)
                cmd.Parameters.AddWithValue("@isDeceased", 0)
                If con.State = ConnectionState.Open Then
                    con.Close()
                End If
                con.Open()
                cmd.ExecuteNonQuery()
                Response.Write("<script>alert('Beneficiary Details Successfully Saved') ; location.href='customer_registration.aspx'</script>")
            End Using
        End Using
    Catch ex As Exception
        MsgBox("Error")
    End Try


It is wiser to use ex.Message for internal debugging and show little information to the users, for their protection.
 
For attacks such as Out-of-band injection you would want to ensure that your application does not have the following weaknesses:
    No network security parameters to restrict DNS or HTTP outbound requests.
    Sufficient privileges to execute the necessary function to initiate the outbound request.
    Lack of input validation on a web application.

Once these vulnerabilities are taken care of, it will ensure that your data is not attacked and accessed using an unknown domain, thus your application will be safe from Out-band Injection.
 
SQL injection can also be countered through the use of modern and trusted technologies, but all of the information above can help you protect your application from unauthorized and unwanted access.



SQL Server 2019 Hosting - HostForLIFE.eu :: Using Magic Tables Without A Trigger

September 8, 2020 09:36 by author Peter

Today, I am going to show how to use magic tables in SQL Server without using a trigger.

In our daily working environments, we need to maintain history or keep track of master data changes in transaction tables for a clear understanding of how data changes day by day. We have different approaches to achieve this, like creating triggers, temporal tables, and some other things. But here, we achieve it without creating triggers or temporal tables in our database. As we all know, there are two magic tables, i.e., inserted and deleted. We use these in our triggers. In many scenarios, triggers are not preferable because they hold and block a full table when they fire. Today, I came up with a good approach for using those beautiful magic tables in our SQL Server database without creating a trigger. For example, I want to insert all values into Table2 from Table1 after inserting the values in Table1, using a single insert statement.
 
First, I am creating two tables in my database with the following code:
    CREATE TABLE Table1 (ID1 INT, Col1 VARCHAR(100))  
    GO  
    CREATE TABLE Table2 (ID2 INT, Col2 VARCHAR(100))  
    GO  


After that, I insert values into Table1 using an insert statement, and at the same time insert them into Table2 without using another insert statement, by using a magic table with the code below:
    INSERT INTO Table1 (ID1, Col1)  
    OUTPUT inserted.ID1, inserted.Col1  
    INTO Table2  
    VALUES(1,'Col'), (2, 'Col2')  
    GO  

Check the result after executing the above code, to see whether the data was inserted into Table2 or not, using the select statements.
    SELECT *  
    FROM Table1  
    GO  
    SELECT *  
    FROM Table2  
    GO  


I got the output as follows:

In the same way, we can insert records fetched from another table by using a select statement, since in general we insert data into a table that is fetched using the select command. You can observe the code below for a better understanding.
    insert into #temp1(BankAccountTransactionId)  
    output inserted.BankAccountTransactionId into #temp2  
    select BankAccountTransactionId    
    from Bankaccounttransactions (nolock) where transactiondate ='2020-08-01'  
    go  

In the above query, I inserted data into #temp1 that was fetched from a table present in my database. You can observe the output by selecting records from both the #temp1 and #temp2 tables.
 
In the same way, we can use our other magic table, deleted, to track old data. Here I am updating data in the Col1 column against ID1 = 1 in Table1, and you can observe in Table2 that a record was inserted with the old value from Table1. For a clear understanding, truncate the data in Table2 before updating the data in Table1. Execute the code below.
    update  Table1 set col1 = 'col3'   
    OUTPUT deleted.ID1, deleted.Col1  
    INTO Table2  
    where ID1 = 1  


I am updating the record in the Col1 column against ID1 = 1. For a better understanding of data changes in the table, I want to track history in Table2, and I can store the old data in Table2 by using the above code. The output is as follows:

 
We can also store records that were deleted in log tables by using the magic table deleted. A common scenario is keeping track of deleted records: I want to delete records from Table1, but I need to store those records in my audit table, i.e., Table2. The code below will help in this scenario.
    delete from Table1  
    OUTPUT deleted.ID1, deleted.Col1  
    INTO Table2  


See the output below by executing the following command:
    SELECT *  
    FROM Table1  
    GO  
    SELECT *  
    FROM Table2  
    GO  


Notice that I can store old data and insert data into multiple tables at a time using a single insert statement, without using triggers or temporal tables.



SQL Server 2019 Hosting - HostForLIFE.eu :: How To Find A String Repeated In Paragraph By Using SQL Server?

September 1, 2020 08:59 by author Peter

I went for a SQL Developer interview and the interviewer asked me to write code to find how many times a string appeared in a paragraph. For example, consider the below string.
    set @string = 'A stored procedures is a subroutine available to applications that access a relational database management system.     
    Such procedures are stored in the database data dictionary.     
    Uses for stored procedures include data-validation or access-control mechanisms'   


In the above example, I want to find how many times "procedures" is repeated in the paragraph. The word is repeated 3 times, so the output should be 3. Please have a look at the below steps to see how I achieved the desired output.
 
In general, we have heard this type of question in a different manner, like how many times a letter is repeated in a given string. For example, I want to find how many times the letter "a" is repeated in the word "database".
select LEN('database') - LEN(REPLACE('database','a',''))
 
After executing the above command, we get the output as 3. But it will not work in my scenario, because my paragraph has 262 characters, the "procedures" string is repeated 3 times, and the length of the word is 10. If I apply the same expression to the paragraph, I get the result 30: the number of characters removed, not the number of occurrences. So we need to go a bit further to achieve the correct result: if I divide that output by the length of the word I need to find, I get the correct count. I am attaching the sample code below; execute it for a better understanding.
    Declare @string varchar(max)    
        
    set @string = 'A stored procedures is a subroutine available to applications that access a relational database management system.     
    Such procedures are stored in the database data dictionary.     
    Uses for stored procedures include data-validation or access-control mechanisms'    
        
    select (LEN(@string)-len(REPLACE(@string,'procedures','')))/LEN('procedures')  


After executing the above code I got the output as follows.

For better understanding please try this with your own examples.

HostForLIFE.eu SQL Server 2016 Hosting
HostForLIFE.eu is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.



SQL Server 2019 Hosting - HostForLIFE.eu :: Database Connection String MS-SQL Server - Easy Steps

August 28, 2020 07:42 by author Peter

This is the easiest way to create a connection string to a database without spending too much time. Many times, at the beginner level, I faced problems with connection strings and the providers associated with them. Then I found a simple way to handle it by following the steps below. I think it should be shared with you all.
 
Hope you like this simple tutorial!
 
Let's begin with the steps:
 
Step 1
Open Notepad on your PC.
 
Step 2
Now Save the Notepad File with Ctrl+Shift+S as Connection.udl on the desktop.
 
Step 3
Now go to the Desktop and double click on the file Connection.udl. The opened file looks like:

Step 4
Now select or enter the server name.
 
Step 5
Now select the database name.
 
Step 6
Now press Test Connection.
 
Step 7
Click OK on the Test Connection popup, and click OK in the Connection.udl file.
 
Step 8

Then open the Connection.udl file with Notepad, and you will get the database connection string.
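 
The contents will look something like this (the server and database names are whatever you picked in the earlier steps):
    [oledb]
    ; Everything after this line is an OLE DB initstring
    Provider=SQLOLEDB.1;Integrated Security=SSPI;Persist Security Info=False;Initial Catalog=MyDatabase;Data Source=MyServer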

Note
The Connection.udl file only needs to be created (Steps 1 and 2) when it is not already on your system.
Otherwise, you can continue from Step 3 to Step 8. Those who are unable to create a Connection.udl file, please find the attachment, and continue with Step 3 to Step 8.

HostForLIFE.eu SQL Server 2019 Hosting
HostForLIFE.eu is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.



SQL Server 2019 Hosting - HostForLIFE.eu :: Using LAST_QUERY_PLAN_STATS In SQL Server

August 14, 2020 11:00 by author Peter

In my opinion, SQL Server 2019 is one of the greatest releases of the product that Microsoft has ever produced. The amount of improvements across the platform really allows data professionals to have better and faster queries while simultaneously minimizing effort. One of the many things that it fixes is the ability to get an actual plan without having to re-run the query.

Currently, if you are not on SQL Server 2019 and want to see an execution plan, you would attempt to dive into the execution plan cache to retrieve an estimated plan. Keep in mind that this is just an estimated plan; the actual plan, while the shape should be the same, will have runtime metrics. These actual runtime metrics could be different than what is shown in the estimated plan, so it is important to get the actual plan whenever possible.
 
With the introduction of lightweight statistics, SQL Server can retain the metrics of the actual execution plan if enabled. Note that this could introduce a slight increase in overhead; however, I haven't yet seen it be detrimental. These metrics are vastly important when performing query tuning.
 
Metrics
    Actual number of rows per operator
    Total CPU time
    Total execution time
    Actual maximum degree of parallelism
    Memory granted and subsequently used

Two new items are also introduced for this particular feature: a new database scoped configuration, LAST_QUERY_PLAN_STATS, and a new dynamic management function, sys.dm_exec_query_plan_stats.
 
LAST_QUERY_PLAN_STATS
Introduced in SQL Server 2016, database scoped configurations allow you to have more granular control of how the database is going to behave.
 
LAST_QUERY_PLAN_STATS can either be set globally with trace flag 2451 or on each individual database. It does require compatibility level 150, so you must be on SQL Server 2019 to take advantage of this new ability. You can enable this database scoped configuration with this command.
    ALTER DATABASE SCOPED CONFIGURATION SET LAST_QUERY_PLAN_STATS = ON 

Like with anything else, there is a minor overhead with enabling this. SQL Server does have to work just a tad bit harder to capture and store the runtime metrics for the plans. This overhead should be minimal but as always, proper testing should be completed against your given workloads.
 
SYS.DM_EXEC_QUERY_PLAN_STATS
Along with the new database scoped configuration comes a new dynamic management function, or DMF. This DMF requires the plan_handle, which is obtained by cross applying it with one of the other DMVs, such as:
    dm_exec_cached_plans
    dm_exec_query_stats
    dm_exec_requests
    dm_exec_procedure_stats
    dm_exec_trigger_stats


Here is an example query,

    SELECT * FROM SYS.DM_EXEC_CACHED_PLANS CP 
    CROSS APPLY SYS.DM_EXEC_QUERY_PLAN_STATS(CP.PLAN_HANDLE)  PS 
    WHERE PS.DBID = 10 


Let’s take a look at this feature in action.
 
Finding the Last Known Actual Plan
Using WideWorldImporters, let's first ensure that the database is at compatibility level 150 and that the new database scoped configuration is enabled.
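 
As a quick sketch, both settings can be applied and verified with T-SQL:
    ALTER DATABASE WideWorldImporters SET COMPATIBILITY_LEVEL = 150;
    GO
    USE WideWorldImporters;
    GO
    ALTER DATABASE SCOPED CONFIGURATION SET LAST_QUERY_PLAN_STATS = ON;
    GO
    -- Verify both settings.
    SELECT name, compatibility_level FROM sys.databases WHERE name = N'WideWorldImporters';
    SELECT name, value FROM sys.database_scoped_configurations WHERE name = N'LAST_QUERY_PLAN_STATS';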

 
We can see that it is now at the correct compatibility level and the database scoped configuration has also been enabled. Now we can run a query and go find the actual plan utilizing the new DMF.
 
I am going to do a simple join between the Sales.Orders and Sales.OrderLines tables. After the results are returned, I’ll interrogate the new DMF using sys.dm_exec_cached_plans as a way to get the plan handle
    select * from sales.orders o 
           inner join sales.OrderLines ol on o.OrderID=ol.OrderID; 
    select size_in_bytes,cacheobjtype,objtype,plan_handle,ps.dbid, ps.query_plan 
    from sys.dm_exec_cached_plans cp 
           cross apply sys.dm_exec_query_plan_stats(cp.plan_handle) ps 
    where ps.dbid = 10 


 
The resulting query_plan column is being derived from the new DMF. If you click on the hyperlink in SQL Server Management Studio, it will open the query plan. Once opened, hit F4 or right click to show the properties of the plan and select the Clustered Index Scan of the Sales.OrderLines table.

You can now see the actual runtime statistics that SQL Server captured when the query was executed.
 
As I mentioned at the beginning of this post, SQL Server 2019 is one of the best releases of the product that Microsoft has ever accomplished. Features such as lightweight query profiling and LAST_QUERY_PLAN_STATS just continue to make the data professional's life that much easier.
 
Have you migrated to SQL Server 2019 yet? Are you using this feature? If you are, drop me a note! I’d love to hear about your experience with it!


