
 December 19, 2018 10:13 by 
 Peter
Coding is always a fun but challenging job. Developers not only have to produce the right output for the business requirements, but also need to maintain proper coding standards by making optimal use of variable sizes and keeping the other best practices in view. In this article, I am going to provide a list of best practices and guidelines. Many of us may already be using these, but I thought I would gather them on a single page so that they can be kept handy while developing, keeping the standard intact.
Application to be used
- Developers should use the Developer Edition of SQL Server rather than SQL Server Express or the Enterprise Edition.
Database Design
- Ensure that referential integrity is maintained at all times, i.e., the relationship between primary keys and foreign keys.
- Always specify the narrowest columns you can. In addition, always choose the smallest data type you need to hold the data stored in the column. The narrower the column, the less data SQL Server must store, and the faster SQL Server is able to read and write data (see the sketch after this list).
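To illustrate, here is a minimal sketch (the table and column names are hypothetical) of a primary key/foreign key relationship that also uses deliberately narrow data types:

CREATE TABLE dbo.Customer
(
    CustomerId   INT           NOT NULL,
    CustomerName NVARCHAR(100) NOT NULL,
    Email        VARCHAR(254)  NULL,
    IsActive     BIT           NOT NULL,
    CONSTRAINT PK_Customer PRIMARY KEY CLUSTERED (CustomerId)
);

CREATE TABLE dbo.CustomerOrder
(
    OrderId     INT          NOT NULL,
    CustomerId  INT          NOT NULL,
    OrderDate   DATE         NOT NULL,    -- DATE instead of DATETIME when the time part is not needed
    OrderAmount DECIMAL(9,2) NOT NULL,
    CONSTRAINT PK_CustomerOrder PRIMARY KEY CLUSTERED (OrderId),
    CONSTRAINT FK_CustomerOrder_Customer FOREIGN KEY (CustomerId)
        REFERENCES dbo.Customer (CustomerId)    -- referential integrity enforced by the foreign key
);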
 
Database Configuration Settings
- Create proper database file sizes (this includes the initial DB file size and growth values); see the sketch after this list.
- Maintain consistency in DB filegroups across the markets.
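As a sketch (the database and logical file names are hypothetical), setting a sensible initial size and a fixed-MB growth increment instead of the default percentage growth might look like this:

ALTER DATABASE SalesDB
    MODIFY FILE (NAME = SalesDB_Data, SIZE = 10240MB, FILEGROWTH = 512MB);

ALTER DATABASE SalesDB
    MODIFY FILE (NAME = SalesDB_Log, SIZE = 2048MB, FILEGROWTH = 256MB);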
 
Data modeling
- Use appropriate data types and lengths when data models are created (limit the use of blob data types unless really needed). The narrower the column, the less data SQL Server must store, and the faster SQL Server is able to read and write data.
 
Performance/Load Tests
- While performing performance/load tests, adjust the database file sizes accordingly, and suspend the maintenance jobs during that period (see the sketch below).
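A sketch of suspending a maintenance job during the test window, assuming a SQL Server Agent job with the (hypothetical) name below:

-- Disable the maintenance job before the load test starts
EXEC msdb.dbo.sp_update_job @job_name = N'SalesDB - Index Maintenance', @enabled = 0;

-- ... run the performance/load tests ...

-- Re-enable the job once the test window is over
EXEC msdb.dbo.sp_update_job @job_name = N'SalesDB - Index Maintenance', @enabled = 1;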
 
Purging solution
- Check with the business team periodically to implement the data purging solution that best fits your environment (a batched purge sketch follows).
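A minimal purge sketch (the table, column, and three-year retention period are hypothetical and should come from the agreement with the business team); deleting in small batches keeps transactions short and limits log growth:

DECLARE @cutoff DATE = DATEADD(YEAR, -3, GETDATE());

WHILE 1 = 1
BEGIN
    -- Purge old rows in small batches
    DELETE TOP (5000)
    FROM dbo.AuditLog
    WHERE CreatedDate < @cutoff;

    IF @@ROWCOUNT = 0 BREAK;   -- stop once nothing is left to purge
END;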
 
Database Objects
- Use user-defined constraint names rather than relying on system-generated names.
- Keep database objects in their respective, defined filegroups. Avoid letting user objects and data sit in the primary (system) filegroup. See the sketch after this list.
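Here is a sketch (the filegroup, file path, and table names are hypothetical) of adding a user filegroup and creating a table on it with user-defined constraint names:

ALTER DATABASE SalesDB ADD FILEGROUP FG_UserData;

ALTER DATABASE SalesDB
    ADD FILE (NAME = SalesDB_UserData1, FILENAME = 'D:\SQLData\SalesDB_UserData1.ndf')
    TO FILEGROUP FG_UserData;

CREATE TABLE dbo.Invoice
(
    InvoiceId INT          NOT NULL,
    Amount    DECIMAL(9,2) NOT NULL,
    CONSTRAINT PK_Invoice PRIMARY KEY CLUSTERED (InvoiceId),   -- named, not system-generated
    CONSTRAINT CK_Invoice_Amount CHECK (Amount >= 0)
) ON FG_UserData;   -- user data kept out of the PRIMARY filegroup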
 
Indexing
- Follow an appropriate index naming convention.
- Use consistent casing when creating tables, indexes, and other objects.
- Use the appropriate column sequence when defining the index key.
- Try to avoid creating duplicate indexes.
- Limit the use of filtered indexes unless you are sure they will give you real benefits.
- When creating a composite index, and when all other considerations are equal, make the most selective column the first column of the index (see the sketch after this list).
- Keep the “width” of your indexes as narrow as possible. This reduces the size of the index and reduces the number of disk I/O reads required to read the index, boosting performance.
- Avoid adding a clustered index to a GUID column (uniqueidentifier data type). GUIDs take up 16 bytes of storage, more than an identity column, which makes the index larger, which increases I/O reads, which can hurt performance.
- Indexes should be considered on all columns that are frequently accessed by JOIN, WHERE, ORDER BY, GROUP BY, TOP, and DISTINCT clauses.
- Don't automatically add indexes to a table because it seems like the right thing to do. Only add indexes if you know that they will be used by the queries run against the table. Always assess your workload before deciding which indexes to create.
- When creating indexes, try to make them unique indexes if possible. SQL Server can often search through a unique index faster than a non-unique index because in a unique index each row is unique, and once the needed record is found, SQL Server doesn't have to look any further.
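A sketch using the hypothetical tables from earlier: a narrow composite index with the most selective column first, and a unique index where the data allows it:

-- Composite, narrow index; CustomerId is assumed to be the more selective column
CREATE NONCLUSTERED INDEX IX_CustomerOrder_CustomerId_OrderDate
    ON dbo.CustomerOrder (CustomerId, OrderDate);

-- Unique index, assuming Email values are unique in dbo.Customer
CREATE UNIQUE NONCLUSTERED INDEX UX_Customer_Email
    ON dbo.Customer (Email);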
 
Properties of Indexes
- Don’t automatically accept the default value of 100 for the fill factor of your indexes. It may or may not best meet your needs. A high fill factor is good for seldom-changed data, but heavily modified data needs a lower fill factor to reduce page splitting (see the sketch below).
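For example, a heavily modified index might be created with a lower fill factor (80 here is purely illustrative) to leave free space on each page and reduce page splits:

CREATE NONCLUSTERED INDEX IX_CustomerOrder_OrderDate
    ON dbo.CustomerOrder (OrderDate)
    WITH (FILLFACTOR = 80);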
 
Transact-SQL
- If you perform regular joins between two or more tables in your queries, performance will be optimized if each of the joined columns has appropriate indexes.
- Don't over-index your OLTP tables, as every index you add increases the time it takes to perform INSERTs, UPDATEs, and DELETEs. There is a fine line between having the ideal number of indexes (for SELECTs) and the ideal number to minimize the overhead that occurs with indexes during data modifications.
- If you know that your application will be performing the same query over and over on the same table, consider creating a non-clustered covering index on the table. A covering index, which is a form of composite index, includes all the columns referenced in the SELECT, JOIN, and WHERE clauses of a query. Because of this, the index contains the data you are looking for and SQL Server doesn't have to look up the actual data in the table, reducing I/O and boosting performance. See the sketch after this list.
- Remove encryption from table columns where it is not necessary at all; overuse of encryption can lead to poor performance. Check with business users from time to time to ensure that only the right table columns are encrypted.
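A sketch of such a covering index, assuming a recurring query like the commented one against the hypothetical dbo.CustomerOrder table:

-- Recurring query:
--   SELECT OrderDate, OrderAmount
--   FROM dbo.CustomerOrder
--   WHERE CustomerId = @CustomerId AND OrderDate >= @FromDate;
CREATE NONCLUSTERED INDEX IX_CustomerOrder_CustomerId_OrderDate_Covering
    ON dbo.CustomerOrder (CustomerId, OrderDate)   -- key columns support the WHERE clause
    INCLUDE (OrderAmount);                         -- included column covers the SELECT list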
 
Version control/source control
- Maintain all code in a source control system and keep it up to date for every type of change made to the code base.
 
Queries and Stored Procedures
- Keep transactions as short as possible. This reduces locking and increases application concurrency, which helps boost performance.
- When you need to run SQL queries, select only the designated columns rather than using * to fetch all columns, and use the TOP operator to limit the number of records returned.
- Avoid using query hints unless you know exactly what you are doing and you have verified that the hint boosts performance. Instead, use the right isolation levels.
- Use SET NOCOUNT ON at the beginning of each stored procedure you write (see the sketch after this list).
- Do not use unnecessary SELECT statements in a code block if you do not need to send a result set back to the client.
- Avoid code that leads to unnecessary network round trips. Review how many calls a code block makes and verify that all of them are really needed.
- When using the UNION statement, keep in mind that, by default, it performs the equivalent of a SELECT DISTINCT on the result set. In other words, UNION takes the results of two like recordsets, combines them, and then performs a SELECT DISTINCT to eliminate any duplicate rows. This process occurs even if there are no duplicate records in the final recordset. Use UNION only when you know there are duplicate rows and your application needs them eliminated; otherwise, use UNION ALL, which skips the duplicate-elimination step and performs better.
- If you find it necessary to use the UPPER() or LOWER() functions, or LTRIM() or RTRIM(), it is better to correct the data while accepting user input on the application side rather than performing those changes in the script. This will make the queries run faster.
- Carefully evaluate whether your SELECT query needs the DISTINCT clause or not. Some developers automatically add this clause to every one of their SELECT statements, even when it is not necessary.
- In your queries, don't return column data you don't need. For example, you should not use SELECT * to return all the columns from a table if you don't need all the data from every column. In addition, using SELECT * may prevent the use of covering indexes, further potentially hurting query performance.
- Always include a WHERE clause in your SELECT statement to narrow the number of rows returned. Only return those rows you need.
- If your application allows users to run queries, but you are unable in your application to easily prevent users from returning hundreds, even thousands, of unnecessary rows of data, consider using the TOP operator within the SELECT statement. This way, you can limit how many rows are returned, even if the user doesn't enter any criteria to help reduce the number of rows returned to the client.
- Try to avoid WHERE clauses that are non-sargable. If a WHERE clause is sargable, it can take advantage of an index (assuming one is available) to speed completion of the query. If a WHERE clause is non-sargable, the WHERE clause (or at least part of it) cannot take advantage of an index and instead performs a table/index scan, which may cause the query's performance to suffer. Non-sargable search arguments in the WHERE clause, such as "IS NULL", "<>", "!=", "!>", "!<", "NOT", "NOT EXISTS", "NOT IN", "NOT LIKE", and "LIKE '%500'", generally (but not always) prevent the query optimizer from using an index to perform a search. In addition, expressions that include a function on a column, expressions that have the same column on both sides of the operator, or comparisons against a column (not a constant) are not sargable. In such cases, rewrite the code so that a proper index or set of indexes can be used.
- If you use temporary tables to keep backups of existing large tables in the database, remove them as soon as the task is complete. They consume a lot of storage, and the maintenance plan will take longer to complete because those tables are considered for maintenance as well.
- Do not use special characters, or use them very carefully, when passing values to a stored procedure. Atypical parameter values can cause SQL Server to choose or reuse a poor execution plan and lead to poor performance; this situation is often known as parameter sniffing.
- Avoid making SQL Server perform implicit data type conversions; instead, use explicit, code-based conversions (CAST/CONVERT) where needed.
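Pulling several of the points above together, here is a hedged sketch of a stored procedure (the procedure, table, and column names are hypothetical) showing SET NOCOUNT ON, an explicit column list, TOP, a sargable date predicate, and an explicit conversion:

CREATE PROCEDURE dbo.usp_GetRecentOrders
    @CustomerId INT,
    @FromDate   DATE
AS
BEGIN
    SET NOCOUNT ON;   -- suppress the "rows affected" messages

    SELECT TOP (100)
           OrderId,
           OrderDate,
           CONVERT(VARCHAR(10), OrderDate, 120) AS OrderDateText,   -- explicit conversion
           OrderAmount
    FROM dbo.CustomerOrder
    WHERE CustomerId = @CustomerId
      AND OrderDate >= @FromDate    -- sargable: no function wrapped around the column
    ORDER BY OrderDate DESC;
END;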
 
SQL Server CLR
The Common Language Runtime (CLR) feature allows you to write stored procedures, triggers, and functions in .NET managed code and execute them from SQL Server.
 
- Use the CLR to complement Transact-SQL code, not to replace it.
- Standard data access, such as SELECTs, INSERTs, UPDATEs, and DELETEs, is best done via Transact-SQL code, not the CLR.
- Computationally or procedurally intensive business logic can often be encapsulated as functions running in the CLR.
- Use CLR for error handling, as it is more robust than what Transact-SQL offers.
- Use CLR for string manipulation, as it is generally faster than using Transact-SQL.
- Use CLR when you need to take advantage of the large base class library.
- Use CLR when you want to access external resources, such as the file system, Event Log, a web service, or the registry.
- Set CLR code access security to the most restrictive permission set possible (see the registration sketch below).
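For completeness, here is a sketch of the T-SQL side only (the assembly, class, and method names, and the file path, are hypothetical): enabling CLR integration, registering the compiled assembly with the most restrictive permission set, and binding a T-SQL function to the managed method:

EXEC sp_configure 'clr enabled', 1;
RECONFIGURE;

CREATE ASSEMBLY StringUtilities
FROM 'D:\Assemblies\StringUtilities.dll'   -- path is an assumption
WITH PERMISSION_SET = SAFE;                -- most restrictive permission set

CREATE FUNCTION dbo.fn_ProperCase (@input NVARCHAR(4000))
RETURNS NVARCHAR(4000)
AS EXTERNAL NAME StringUtilities.[StringUtilities.Functions].ProperCase;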
 
Consultation
- You can also consult with your in-house DBAs for advice about Microsoft best practices and standards, or about the correct indexes to use.
- You can seek training from any good external training providers.
- Another good option is to use good third-party tools such as RedGate, FogLight, Idera, SQLSentry, or SolarWinds and check the performance of the environment with their monitoring scripts.
 
Process Improvement
- Document common mistakes on the Lessons Learned page, per the PMP best practice guidelines.
- Do check out new and upcoming editions and try to align your code accordingly.
 
Hope the guidelines above prove useful. Do share if you use any other steps in your workplace that you have found helpful.