European Windows 2012 Hosting BLOG

BLOG about Windows 2012 Hosting and SQL 2012 Hosting - Dedicated to European Windows Hosting Customers

European SQL 2016 Hosting - HostForLIFE.eu :: Warehousing JSON formatted data in SQL Server 2016

clock March 8, 2017 10:34 by author Scott

In this article, I continue to review the exciting features available in SQL Server 2016. One such feature is the long-awaited T-SQL support for JSON formatted data. Here we take a look at how JSON support will impact data warehouse solutions.

Background

Since the advent of EXtensible Markup Language (XML), many modern web applications have focused on providing data that is both human-readable and machine-readable. From a relational database perspective, SQL Server kept up with these modern web applications by providing support for XML data in the form of an XML data type and several functions that can be used to parse, query and manipulate XML formatted data.

As a result of this support in SQL Server, data warehouse solutions based on SQL Server were able to source XML-based OLTP data into a data mart. To illustrate this point, let’s take a look at the XML representation of our fictitious fruit sales data shown in the figure below.

To process this data in a data warehouse, we would first have to convert it into a relational format of rows and columns using built-in T-SQL XML functions such as the nodes() function.
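In code form, the idea looks something like this (the XML below is a made-up stand-in for the fictitious fruit sales document):

DECLARE @xml XML = N'<Sales>
  <Sale FruitName="Apple" SalesAmount="100" />
  <Sale FruitName="Banana" SalesAmount="50" />
</Sales>';

-- Shred each <Sale> element into a row with typed columns
SELECT Sale.value('@FruitName',   'VARCHAR(50)')    AS FruitName,
       Sale.value('@SalesAmount', 'DECIMAL(10, 2)') AS SalesAmount
FROM @xml.nodes('/Sales/Sale') AS T(Sale);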

The results of the above script are shown in the figure below, in a format recognisable to a data warehouse.

Soon after XML became a dominant language for data interchange in many modern web applications, JavaScript Object Notation (JSON) was introduced as a lightweight data-interchange format that is more convenient for web applications to process than XML. Likewise, most relational database vendors released newer versions of their database systems that included support for JSON formatted data. Unfortunately, Microsoft SQL Server was not one of those vendors, and up until SQL Server 2014, JSON data was not supported. Obviously, this lack of support for JSON created challenges for data warehouse environments based on SQL Server.

Although there were workarounds (i.e. using Json.NET) to address the lack of JSON support in SQL Server, there was always a sense that these workarounds were inadequate, time-wasting, and forced data warehouse development teams to pick up a new skill (i.e. learn .NET). Fortunately, the release of SQL Server 2016 has ensured that development teams can throw away their JSON workarounds, as JSON is supported in SQL Server 2016.

Parsing JSON Data into Data Warehouse

Similarly to XML support in SQL Server, SQL Server’s support for JSON can be classified in two ways:

  • Converting Relational dataset into JSON format
  • Converting JSON dataset into relational format

However, for the purposes of this discussion we focus primarily on the second part – converting JSON formatted data (retrieved from OLTP sources) into a relational format of rows and columns. To illustrate our discussion points, we once again make use of the fictitious fruit sales dataset. This time around, the fictitious dataset has been converted into JSON format, as shown below.
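A hypothetical JSON version of the same fictitious dataset could look like this:

DECLARE @json NVARCHAR(MAX) = N'
[
  { "FruitName": "Apple",  "SalesAmount": 100 },
  { "FruitName": "Banana", "SalesAmount": 50  }
]';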

ISJSON function

As part of supporting JSON formatted data, other relational databases such as MySQL and PostgreSQL 9.2 introduced a separate JSON data type. Amongst other things, the JSON data type conducts validation checks to ensure that the values being stored are indeed valid JSON.

Unfortunately, SQL Server 2016 (like Oracle 12c) does not have a special data type for storing JSON data; instead, a variable character (varchar/nvarchar) data type is used. Therefore, a recommended practice when dealing with JSON data in SQL Server 2016 is to first ensure that you are indeed dealing with valid JSON data. The simplest way to do so is to use the ISJSON function. This is a built-in T-SQL function that returns 1 for a valid JSON dataset and 0 for an invalid one.

The image below shows the use of the ISJSON function to validate our fictitious sample dataset.
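In code form, that validation looks something like this, using the hypothetical dataset above:

DECLARE @json NVARCHAR(MAX) = N'[{"FruitName":"Apple","SalesAmount":100},{"FruitName":"Banana","SalesAmount":50}]';

-- Returns 1 because @json holds well-formed JSON; 0 would indicate invalid JSON
SELECT ISJSON(@json) AS IsValidJson;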

OPENJSON function

Now that we have confirmed that we are working with a valid JSON dataset, the next step is to convert the data into a table format. Again, we have a built-in T-SQL function to do this in the form of OPENJSON. OPENJSON works similarly to OPENXML in that it takes in an object and converts its data into rows and columns.

The figure below shows a complete T-SQL script for converting a JSON object into rows and columns.
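A sketch along those lines, again using the hypothetical fruit sales JSON, looks like this:

DECLARE @json NVARCHAR(MAX) = N'[{"FruitName":"Apple","SalesAmount":100},{"FruitName":"Banana","SalesAmount":50}]';

-- Project each array element into typed columns
SELECT FruitName, SalesAmount
FROM OPENJSON(@json)
WITH
(
    FruitName   VARCHAR(50)   '$.FruitName',
    SalesAmount DECIMAL(10,2) '$.SalesAmount'
);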

Once we execute the above script, we get the relational output shown below.

Now that we have our relational dataset, we can process this data into the data warehouse.

JSON_VALUE function

Before concluding our discussion of JSON in SQL Server 2016, it is worth mentioning that in addition to OPENJSON, there are other functions, such as JSON_VALUE, that can be used to query JSON data. However, this function returns a scalar value, which means that unlike the multiple rows and columns returned by OPENJSON, JSON_VALUE returns a single value, as shown below.

If the JSON object that you are querying doesn’t have multiple elements, then you don’t have to specify the row index (i.e. [0]), as shown below.
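As a rough illustration of both cases, using the same hypothetical dataset:

DECLARE @json NVARCHAR(MAX) = N'[{"FruitName":"Apple","SalesAmount":100},{"FruitName":"Banana","SalesAmount":50}]';

-- The row index [0] picks the first element of the JSON array
SELECT JSON_VALUE(@json, '$[0].FruitName') AS FirstFruit;

-- A single JSON object (not an array) needs no row index
SELECT JSON_VALUE(N'{"FruitName":"Apple"}', '$.FruitName') AS FruitName;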

Conclusion

The long wait is finally over, and with the release of SQL Server 2016, JSON is now supported. Similarly to XML, T-SQL supports the conversion of a JSON object to relational format as well as the conversion of relational tables to a JSON object. This support is implemented via built-in T-SQL functions such as OPENJSON and JSON_VALUE. In spite of all the excitement around JSON support in SQL Server 2016, we still don’t have a JSON data type; the ISJSON function can be used to validate JSON text stored in ordinary character columns.



European SQL 2016 Hosting - HostForLIFE.eu :: MSSQL Server Comes with JSON?

clock January 17, 2017 10:27 by author Scott

With over a thousand votes on the Microsoft Connect site, JSON support is the most requested feature for SQL Server 2016. This month, Microsoft announced that the upcoming release of SQL Server 2016 will fulfill that request. Sort of.

Microsoft will certainly be touting this feature as an additional reason to upgrade when the time comes. Natively supporting JSON helps bridge the gap between the desirable aspects of a NoSQL database and a relational database like SQL Server. The trouble is that with this current implementation, you're not really gaining anything over what's currently available. It's nothing like the native XML data type that became available starting with SQL Server 2005.

The announcement starts off with a big caveat: they will be providing native JSON support, but not a native JSON type. In fact, storage of JSON data will happen the same way it happens today, in an NVARCHAR typed column. They list three hollow reasons for this:

  • Backward compatibility
  • Cross feature compatibility
  • Non-Microsoft-controlled JSON parsers on the client (C#)

Backward compatibility is a weak argument; if you're already storing JSON data, you wouldn't have a hard time moving to a JSON type. Cross-feature compatibility means that they're not interested in implementing JSON in other SQL Server components, so instead everything that already works with NVARCHAR (aka everything) will still work (aka nothing's changed). The client-side JSON parser point is an odd one, and it leads into my next gripe with this feature implementation.

On the client side, such as in a C# application, it's already common to serialize data back and forth in JSON using a JSON parsing library. The most common, though maybe not the fastest, is the Newtonsoft Json.NET library, which comes packaged with the default templates in ASP.NET projects. The fact that there are other options out there doesn't really matter in my opinion. Sure, some behave differently, but in the end a JSON object has a specific syntax, and any proper parser should be able to serialize and deserialize the output of any other parser.

What's baffling are the features that are being supported in this native JSON support.

You can now use FOR JSON to export the results of a query as JSON. This means that you can run a normal T-SQL query and ask for the result back as a JSON formatted result. OK, but I could have just serialized the result on the client side in literally one line of code. They use the example of returning the results directly from an OData request through a web service.
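As a rough sketch of what that looks like (the table name here is hypothetical, not from the announcement):

-- FOR JSON PATH returns the whole result set as a single JSON array
SELECT FruitName, SalesAmount
FROM dbo.FruitSales
FOR JSON PATH, ROOT('sales');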

You can transform a JSON object into a relational table with OPENJSON. This means that within a T-SQL query, you can provide a JSON object as part of the query, and each item in the JSON will be returned as a table row, which can be used to query or insert records into a relational table. They use the example of loading a JSON document into rows of a database, but again, that can easily be done on the client side by deserializing the JSON array first.
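For illustration, a minimal sketch with a made-up JSON array; without a WITH clause, OPENJSON simply returns one row per element:

DECLARE @json NVARCHAR(MAX) = N'[{"FruitName":"Apple","SalesAmount":100},{"FruitName":"Banana","SalesAmount":50}]';

SELECT [key], [value], [type]
FROM OPENJSON(@json);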

Finally, they are providing some built-in functions for processing JSON data in the database: ISJSON checks whether an NVARCHAR column holds valid JSON data, and JSON_VALUE provides some scalar selection ability similar to the dot notation of JavaScript, e.g.

$.property1.property2.array1[5].property3.array2[15].property4

That's basically the least they could do with this implementation. And as for indexing of JSON data, you're left with some basic support on the level of full-text indexing of any other NVARCHAR column.

So in the end, this "native JSON support" is basically nothing more than a few convenience functions on top of a normal NVARCHAR column. It takes a familiar eye about two minutes to uncover the sad truth about this feature, and the response has shown that Microsoft isn't fooling anybody with this JSON implementation. That's not to say there aren't great features coming in SQL Server 2016 (Stretch Database seems pretty cool), but JSON support isn't one of them.



European Entity Framework Core 1.0 Hosting - HostForLIFE.eu :: How to Access SQL from Entity Framework Core

clock January 12, 2017 08:33 by author Scott

ORMs like Entity Framework have always been shrouded in a bit of controversy from SQL purists who aren't in control of the types of queries that these tools are building behind the scenes. While this is a perfectly valid stance to take, these same tools can often assist in getting things done much faster than tinkering with an endless chain of JOINs, sub-queries, etc.

But what if you want to know exactly what is going on behind the scenes? What SQL is being generated by Entity Framework? And is it efficient enough to suit your needs, or do you need to handle writing some yourself?

This post will cover a quick tip that will allow you to see the SQL behind the scenes and judge for yourself using Entity Framework Core.

How Did This Work Prior to Entity Framework Core?

Previously, you could use Reflection to create an ObjectQuery object and then call the ToTraceString() method to retrieve the generated SQL, as seen below:

// Build a query using Entity Framework
var query = _context.Widgets.Where(w => w.IsReal && w.Id == 42); 
// Get the generated SQL
var sql = ((System.Data.Objects.ObjectQuery)query).ToTraceString(); 

And that's really it. The ToTraceString() call returns a string containing the entire SQL query being executed.

Options for Entity Framework Core

The previous approach no longer works in the Entity Framework Core (formerly known as EF7) world, so we have to resort to one of three options, which may vary depending on your needs:

  • Using Built-in or Custom Logging. Logging the executing query using your logger of choice or the built-in Logger in .NET Core as mentioned in this tutorial.
  • Using a Profiler. Using an SQL Profiler like MiniProfiler to monitor the executing query.
  • Using Crazy Reflection Code. You can implement some custom reflection code similar to the older approach to perform the same basic concept.

Since both of the first two options are fairly well documented, we will be focusing on the crazy reflection approach.

Getting Behind the Scenes in Entity Framework Core

Using the following snippets of code, which rely on Reflection to resolve information about the compiler, parser, database, and fields being targeted, we can use those things to reconstruct what is actually going on behind the scenes.

NOTE: Since this relies on Reflection, it may be subject to breaking in the future, especially with any API changes.

// Requires using directives for System.Linq, System.Reflection, the
// Microsoft.EntityFrameworkCore internal namespaces and Remotion.Linq (EF Core 1.x).
public static class IQueryableExtensions 
{
    // TypeInfo of EF Core's internal QueryCompiler, used by the reflection lookups below
    private static readonly TypeInfo QueryCompilerTypeInfo =
    typeof(QueryCompiler).GetTypeInfo();

    private static readonly FieldInfo QueryCompilerField =
    typeof(EntityQueryProvider).GetTypeInfo().DeclaredFields.First
    (x => x.Name == "_queryCompiler");

    private static readonly PropertyInfo NodeTypeProviderField =
    QueryCompilerTypeInfo.DeclaredProperties.Single(x => x.Name == "NodeTypeProvider");

    private static readonly MethodInfo CreateQueryParserMethod =
    QueryCompilerTypeInfo.DeclaredMethods.First(x => x.Name == "CreateQueryParser");

    private static readonly FieldInfo DataBaseField =
    QueryCompilerTypeInfo.DeclaredFields.Single(x => x.Name == "_database");

    private static readonly FieldInfo QueryCompilationContextFactoryField =
    typeof(Database).GetTypeInfo().DeclaredFields.Single
    (x => x.Name == "_queryCompilationContextFactory");

    public static string ToSql<TEntity>
    (this IQueryable<TEntity> query) where TEntity : class
    {
        if (!(query is EntityQueryable<TEntity>)
        && !(query is InternalDbSet<TEntity>))
        {
            throw new ArgumentException("Invalid query");
        }

        var queryCompiler = (IQueryCompiler)QueryCompilerField.GetValue(query.Provider);
        var nodeTypeProvider =
        (INodeTypeProvider)NodeTypeProviderField.GetValue(queryCompiler);
        var parser = (IQueryParser)CreateQueryParserMethod.Invoke
        (queryCompiler, new object[] { nodeTypeProvider });
        var queryModel = parser.GetParsedQuery(query.Expression);
        var database = DataBaseField.GetValue(queryCompiler);
        var queryCompilationContextFactory =
        (IQueryCompilationContextFactory)QueryCompilationContextFactoryField.GetValue(database);
        var queryCompilationContext = queryCompilationContextFactory.Create(false);
        var modelVisitor =
        (RelationalQueryModelVisitor)queryCompilationContext.CreateQueryModelVisitor();
        modelVisitor.CreateQueryExecutor<TEntity>(queryModel);
        var sql = modelVisitor.Queries.First().ToString();

        return sql;
    }
}

And as far as actual usage goes, you would simply call the ToSql() method to return your SQL query string:

// Build a query using Entity Framework
var query = _context.Widgets.Where(w => w.IsReal && w.Id == 42); 
// Get the generated SQL
var sql = query.ToSql(); 



European Windows 2016 Hosting - HostForLIFE.eu :: Top Security Features in Windows Server 2016

clock December 9, 2016 07:19 by author Scott

Most of the press surrounding the release of Windows Server 2016 has focused primarily on new features such as containers and Nano Server. While these new capabilities are undeniably useful, the bigger story with the Windows Server 2016 release is Microsoft's focus on security.

Admittedly, it is easy to dismiss claims of enhanced operating system security as being little more than marketing hype. After all, every new Windows release boasts improved security. In Windows Server 2016, however, Microsoft has implemented a number of new security mechanisms that are designed to work together to provide better overall security.

This article discusses virtual secure mode, which serves as the foundation for several new Windows Server 2016 security features, and three of the more innovative new features.

Microsoft's Virtual Secure Mode

Microsoft's latest security-related buzz phrase is Virtual Secure Mode. The idea behind virtual secure mode is that the Windows operating system can be made more secure by offloading some of its security functions to the hardware, rather than performing those functions solely at the software level.

There are two important things to understand about virtual secure mode. First, virtual secure mode doesn't really provide any security by itself. Instead, virtual secure mode is an infrastructure-level component of the operating system, and is the basis for other security features which will be discussed later on.

The other thing that must be understood about virtual secure mode is that the word virtual is there for a reason. As you probably know, modern CPUs include on-chip virtualization extensions. Historically, these virtualization extensions have been the basis of server virtualization. The hypervisor sits on top of the CPU and acts as an intermediary between the virtual machines and the hardware.

One of the big advantages to using this approach to server virtualization is that the hypervisor is able to ensure that virtual machines are truly isolated from one another. Virtual secure mode uses a similar technique to create a virtualized space on top of the hypervisor. Sensitive operations can be securely performed within this space, without being exposed to the host operating system.

Feature No. 1: Credential Guard

As previously noted, virtual secure mode is not a security feature itself, but rather a platform that can be used by other security features. Credential Guard is one of the security features that relies on virtual secure mode. As its name implies, Credential Guard is designed to prevent user credentials from being compromised.

The authentication process used by the Windows operating system is a function of the Local Security Authority (LSA). Not only does the LSA provide interactive authentication services, but it also generates security tokens, manages the local security policy and manages the system's audit policy. Credential Guard works by moving the LSA into Isolated User Mode, the virtualized space created by virtual secure mode.

Although the operating system must be able to communicate with the LSA in order to perform authentication services, Microsoft has designed the operating system to protect the integrity of the LSA. First, the memory used by the LSA is isolated, just as a virtual machine's memory is isolated. Microsoft also limits the LSA to running only the bare minimum binaries, and strict signing of those binaries is enforced. Finally, Microsoft prevents other code, such as drivers, from running in Isolated User Mode.

Feature No. 2: Device Guard

Device Guard is another operating system feature that leverages virtual secure mode. Device Guard isn't really a feature per se, but rather a collection of three security features that fall collectively under the Device Guard label. These three features include Configurable Code Integrity, VSM Protected Code Integrity, and Platform and UEFI Secure Boot (which has been around since Windows 8). Collectively, these three features work together to prevent malware infections.

The Device Guard component that is designed to work with virtual secure mode is VSM Protected Code Integrity. This component ensures the integrity of code running at the kernel level. Although moving kernel mode code integrity into virtual secure mode goes a long way toward protecting the operating system, the Configurable Code Integrity feature is equally noteworthy. This feature is designed to ensure that only trusted code is allowed to run. Administrators can use the PowerShell New-CIPolicy cmdlet to create integrity policies that essentially act as whitelists for applications.

In case you are wondering, these policies are based on application signatures. Since not all applications are signed, Microsoft provides a tool called SignTool.exe that can create a catalog (a signature) for unsigned applications.

Feature No. 3: Host Guardian and Shielded Virtual Machines

Although server virtualization has been proven to be relatively secure, it has always had one major Achilles heel: virtual machine portability. Today, there is little to prevent a virtualization administrator, or even a storage administrator for that matter, from copying a virtual machine's virtual hard disk to removable media.

The rogue administrator would then be able to take the media home, mount the virtual hard disks on his own computer and gain full access to the virtual hard disk's contents. If necessary, the administrator could even go so far as to set up their own host server and actually boot the stolen virtual machine. Microsoft's Host Guardian Service is designed to prevent this from happening by allowing the creation of shielded virtual machines.

The Host Guardian Service is a Windows Server 2016 attestation and key protection service that allows a Hyper-V host to be configured to act as a guarded host. A guarded host must be positively identified on the network and attested at the Active Directory and/or TPM level. If TPM trusted attestation is being used, then Windows goes so far as to verify the host's health by comparing its configuration against a known good baseline configuration. It is worth noting, however, that Active Directory trusted attestation does not support host configuration verification.

The Host Guardian Service enables the use of shielded virtual machines. A shielded virtual machine is a virtual machine whose virtual hard disks are encrypted via virtual TPM. This encryption prevents a shielded virtual machine from running on any Hyper-V server other than a designated guarded host. If a virtual hard disk is removed from the organization, its contents cannot be accessed and the virtual machine cannot be run.

Shielded virtual machines are BitLocker encrypted. BitLocker makes use of a virtual TPM device, residing on the host server. The virtual TPM is encrypted using a transport key, and the transport key is in turn protected by the Host Guardian Service.



European SQL 2016 Hosting - HostForLIFE.eu :: Using SQL Server 2016 Query Store to Force Query Execution

clock December 5, 2016 10:44 by author Scott

Many a time we come across a scenario where, suddenly, in production, without any release or changes, a query that was working perfectly alright until yesterday starts taking too long to execute, consuming a lot of resources, or timing out.

Most of the time such issues are related to an execution plan change (commonly referred to as plan regression). Until yesterday the query was running fine with a good cached execution plan, and today a bad plan has been generated and cached. Because of this bad cached plan, the query that was executing perfectly alright suddenly starts misbehaving.

To identify and fix such performance problems caused by execution plan changes, the Query Store feature introduced in SQL Server 2016 comes in very handy.

Query Store basically captures and stores the history of query execution plans and their performance data, and provides the facility to force an old execution plan if the newly generated execution plan is not performing well.

What’s Interesting with SQL Query Store

The SQL Server Query Store consists of two main parts: the Plan Store, where execution plan information is stored, and the Runtime Stats Store, where the execution statistics are stored. Query execution statistics and plans are stored first in memory and flushed to disk after a specific interval of time. In this way, the Query Store information will not be lost when the SQL Server service is restarted, as the data is hardened to disk. The default value of the configurable DATA_FLUSH_INTERVAL_SECONDS database option is 15 minutes, which means that the executed query information will be written to disk from the Query Store every 15 minutes. The smaller the flush interval, the more frequent the write-to-disk operations and the worse the SQL Server performance. The Query Store data will also be flushed automatically to disk to release memory for other processes if there is memory pressure. For proper memory and space usage, the execution information is aggregated in memory first over a fixed interval of time, and then flushed to disk in aggregated form. The MAX_PLANS_PER_QUERY option can be used to control the number of plans that will be stored per query.

SQL Server Query Store provides you with an easy way to troubleshoot query performance: you can identify the top CPU, memory and IO consuming queries, with full execution history for those queries, and find out which queries' performance regressed and when, then fix them directly by forcing the best plan, preventing the SQL Server Query Optimizer from using the less efficient new plan. It can also be used to draw a general picture of the workload in your environment, with the query text, execution plans, number of executions and SQL Server resource utilization, which can help you evaluate how many resources are required by your SQL Server database.

When a query is executed for the first time, the query execution plan with full query properties and statistics is stored in the Query Store internal tables. If you run the query again and the SQL Server Query Optimizer decides to recompile the query and create a new execution plan, this plan is added to the Query Store, in addition to the old one, together with its execution statistics. Query Store allows you to track query execution plans and performance changes, and enforce the plan with which the query behaves better. When a query is compiled, the SQL Server Query Optimizer will take the latest plan in the plan cache if it is still useful, unless a plan hint is used or another plan is forced by the Query Store. In that case, the Query Optimizer recompiles the query and uses the forced plan. No change is required on the application side, as this is performed transparently to the users.

To start using the new Query Store feature, you need to enable it at the database level; it will then start capturing query execution statistics and plans automatically. The ALTER DATABASE statement below enables the Query Store feature, which is disabled by default, on the SQLShackDemo database and specifies the operation mode for the feature:

USE [master]
GO
ALTER DATABASE [SQLShackDemo] SET QUERY_STORE = ON
GO
ALTER DATABASE [SQLShackDemo] SET QUERY_STORE (OPERATION_MODE = READ_WRITE)
GO

You can also use the SQL Server Management Studio to enable the Query Store, from the Database Properties window below, where you can find the new Query Store tab introduced when you use the SQL Server 2016 version:

As you can see from the previous image, the SQL Server Query Store can run in two operation modes: Read-Only mode, where you can only use the persisted statistics to analyze queries without capturing any new data (this occurs when the Query Store reaches its maximum allocated space), and Read-Write mode, where the Query Store captures the execution statistics for the current workload and stores them to be used for analyzing query execution performance. The DATA_FLUSH_INTERVAL_SECONDS option determines how frequently the Query Store data held in memory is asynchronously transferred to disk. By default, SQL Server writes the in-memory Query Store statistics to disk every 15 minutes, or 900 seconds. You can also flush the Query Store data manually from memory to disk by executing the query below:

USE SQLShackDemo
GO
EXEC sys.sp_query_store_flush_db

The maximum size of data that can be stored in the Query Store is controlled by the MAX_STORAGE_SIZE_MB option. As mentioned previously, exceeding that limit changes the Query Store operation mode to Read-Only automatically. By default, the Query Store can keep up to 100 MB of query statistical data before switching to Read-Only mode. You can make sure that the Query Store activates the cleanup process when the execution data exceeds the MAX_STORAGE_SIZE_MB value by setting SIZE_BASED_CLEANUP_MODE to AUTO, which is the default value, or turn it OFF to stop the automatic cleanup process.

The QUERY_CAPTURE_MODE option specifies whether the Query Store captures ALL queries, ignores queries that are not run frequently or that run in a very short time (AUTO capture mode), or stops capturing any new queries (NONE capture mode). The number of days to keep the Query Store data is specified by the STALE_QUERY_THRESHOLD_DAYS parameter.

As mentioned previously in this article, query execution statistics data is aggregated in memory and later flushed to the Query Store internal tables to optimize space usage. The aggregation is performed over a fixed time interval controlled by the INTERVAL_LENGTH_MINUTES parameter, which is 60 minutes by default.
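Putting the options discussed above together, a sketch of a combined configuration for the SQLShackDemo database used earlier might look like this (the values are illustrative, not recommendations):

ALTER DATABASE [SQLShackDemo]
SET QUERY_STORE
(
    OPERATION_MODE              = READ_WRITE,
    DATA_FLUSH_INTERVAL_SECONDS = 900,
    INTERVAL_LENGTH_MINUTES     = 60,
    MAX_STORAGE_SIZE_MB         = 100,
    QUERY_CAPTURE_MODE          = AUTO,
    SIZE_BASED_CLEANUP_MODE     = AUTO,
    MAX_PLANS_PER_QUERY         = 200,
    CLEANUP_POLICY              = (STALE_QUERY_THRESHOLD_DAYS = 30)
);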

You can also find other useful information on the Query Store page of the Database Properties window, such as the Query Store disk usage in the current database and the disk space used by the Query Store internal tables. This page also allows you to delete or purge the Query Store data by clicking the Purge Query Data button at the bottom right of the window:

Or use the below ALTER statement to purge the content of the Query Store:

ALTER DATABASE [SQLShackDemo] SET QUERY_STORE CLEAR

SQL Server 2016 introduces six new system stored procedures and seven new system views for examining the Query Store information and working with it. These system objects can be listed by querying the sys.all_objects system table as below:

USE master
GO
SELECT Name as ObjectName , type_desc as ObjectType
FROM sys.all_objects
WHERE name LIKE '%query_store%'
or name= 'query_context_settings'

The result will be like:

The sys.query_store_plan, sys.query_store_query, and sys.query_store_query_text system views can be used to get the current query plans in the Query Store. To show you up-to-date statistics, the data stored on disk and the current data in memory are merged together to provide you with the full picture, as follows:

SELECT QST.query_text_id,
QST.query_sql_text,
QSP.plan_id,
QSRS.first_execution_time,
QSQ.last_execution_time,
QSQ.count_compiles,
QSQ.last_compile_duration,
QSQ.last_compile_memory_kb,
QSRS.avg_rowcount,
QSRS.avg_logical_io_reads,
QSRS.avg_logical_io_writes
FROM sys.query_store_plan AS QSP  
JOIN sys.query_store_query AS QSQ  
    ON QSP.query_id = QSQ.query_id  
JOIN sys.query_store_query_text AS QST  
    ON QSQ.query_text_id = QST.query_text_id
JOIN sys.query_store_runtime_stats  QSRS
   ON QSP.plan_id =QSRS.plan_id

The result will be like:

The SQL Server Query Store has no direct impact on SQL Server performance, but you need to take into consideration the disk space required to store the aggregated query execution data; the data is stored in the primary database filegroup and consumes the configurable MAX_STORAGE_SIZE_MB amount of disk space, as described previously. The Query Store also captures execution data asynchronously, writing the data to memory first and flushing it to disk later, which keeps the CPU and memory overhead of capturing new plans and execution statistics low.

Once the Query Store is enabled on your database, you can query the sys.database_query_store_options system object to check the Query Store settings you configured as follows:

USE SQLShackDemo
GO
SELECT actual_state_desc,
       flush_interval_seconds,
       interval_length_minutes,
       max_storage_size_mb,
       stale_query_threshold_days,
       max_plans_per_query,
       query_capture_mode_desc,
       size_based_cleanup_mode_desc
FROM sys.database_query_store_options;

The result in our case will be like:

Refresh your database node in Object Explorer to see that the new Query Store node added in SQL Server 2016 is enabled, as below:

Four nodes will be displayed, from which you can check and track the changes in query execution statistics and performance. Choose the Regressed Queries node, which will show you the query execution plans with all related statistics:

From the previous window, you can use the Metric drop-down list to choose the criterion that will be used to compare execution plan performance, and the Statistic drop-down list to select the aggregation function used in the comparison. You can also view the graphical plan for the queries, the query text and the available plans for the query, and force the best plan if required.

Assume that we recently faced a memory consumption performance issue with one of our queries, and we decide to force the Query Optimizer to use the old plan. Using the Query Store feature, select the Memory Consumption (KB) metric, then the query and the plan, as in the window below:

<image>

It is clear that the memory consumption for plan 56 is double the memory consumption of plan 31, so we will force plan 31 for that query. The size of the bubbles shown in the right-hand chart of the window depends on the total number of executions for each plan. The plan data can be displayed in grid form or as bubbles, depending on the view you choose. Choose plan 31 in the previous window and click the Force Plan button.

A message will be displayed asking you to confirm that you want to force the selected plan for that query, as follows:

This action forces the SQL Server Query Optimizer to recompile that query on its next run using the forced plan. The selected plan will be shown with a tick inside it, indicating that this plan is now forced for that query, as below:

You can also use the sp_query_store_force_plan stored procedure to force a specific plan for the query as follows:

EXEC sp_query_store_force_plan @query_id = 31, @plan_id = 31;
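If the forced plan later turns out to be a bad choice, the companion sp_query_store_unforce_plan procedure removes the pin again, using the same identifiers:

EXEC sp_query_store_unforce_plan @query_id = 31, @plan_id = 31;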

Conclusion:

The SQL Server Query Store is a nice feature introduced in SQL Server 2016 which certainly merits your time and attention, although it will take a few minutes to configure and learn. It is a simple way to track performance changes and troubleshoot any degradation in query performance by comparing the execution plans for the same query and forcing the best one, overriding the one saved in the plan cache and used by the Query Optimizer. The SQL Server Query Store has no major impact on SQL Server performance due to the way it captures and saves query execution statistics and plans to be viewed later.



European Windows Hosting - HostForLIFE.eu :: New Features in Windows Server 2016

clock November 3, 2016 08:59 by author Scott

As we’ve come to expect from new versions of Windows Server, Windows Server 2016 arrives packed with a huge array of new features. Many of the new capabilities, such as containers and Nano Server, stem from Microsoft’s focus on the cloud. Others, such as Shielded VMs, illustrate a strong emphasis on security. Still others, like the many added networking and storage capabilities, continue an emphasis on software-defined infrastructure begun in Windows Server 2012.

The GA release of Windows Server 2016 rolls up all of the features introduced in the five Technical Previews we’ve seen along the way, plus a few surprises. Now that Windows Server 2016 is fully baked, we’ll treat you to the new features we like the most.

Here are several features that you can get from Windows Server 2016:

Nano Server

Nano Server boasts a 92 percent smaller installation footprint than the Windows Server graphical user interface (GUI) installation option. Beyond just that, these compelling reasons may make you start running Nano for at least some of your Windows Server workloads:

  • Bare-metal OS means far fewer updates and reboots are necessary.
  • Because you have to administratively inject any server roles from outside Nano, the server has a much-reduced attack surface when compared to GUI Windows Server.
  • Nano is so small that it can be ported easily across servers, data centers and physical sites.
  • Nano hosts the most common Windows Server workloads, including Hyper-V host.
  • Nano is intended to be managed completely remotely. However, Nano does include a minimal local management UI called "Nano Server Recovery Console," shown in the previous screenshot, that allows you to perform initial configuration tasks.

Containers

Microsoft is working closely with the Docker development team to bring Docker-based containers to Windows Server. Until now, containers have existed almost entirely in the Linux/UNIX open-source world. They allow you to isolate applications and services in an agile, easy-to-administer way. Windows Server 2016 offers two different types of "containerized" Windows Server instances:

  • Windows Server Container. This container type is intended for low-trust workloads where you don't mind that container instances running on the same server may share some common resources.
  • Hyper-V Container. This isn't a Hyper-V host or VM. Instead, it's a "super-isolated" containerized Windows Server instance that is completely isolated from other containers and potentially from the host server. Hyper-V containers are appropriate for high-trust workloads.

Linux Secure Boot

Secure Boot is part of the Unified Extensible Firmware Interface (UEFI) specification that protects a server's startup environment against the injection of rootkits or other assorted boot-time malware.

The problem with Windows Server-based Secure Boot is that your server would blow up (figuratively speaking) if you tried to create a Linux-based Generation 2 Hyper-V VM, because the Linux kernel drivers weren't part of the trusted device store. Technically, the VM's UEFI firmware presents a "Failed Secure Boot Verification" error and stops startup.

Nowadays, the Windows Server and Azure engineering teams seemingly love Linux. Therefore, we can now deploy Linux VMs under Windows Server 2016 Hyper-V with no trouble without having to disable the otherwise stellar Secure Boot feature.

ReFS

The Resilient File System (ReFS) has been a long time coming in Windows Server. In Windows Server 2016, we finally get a stable version. ReFS is a high-performance, high-resiliency file system intended for use with Storage Spaces Direct (discussed next in this article) and Hyper-V workloads.

Storage Spaces Direct

Storage Spaces is a cool Windows Server feature that makes it more affordable for administrators to create redundant and flexible disk storage. Storage Spaces Direct in Windows Server 2016 extends Storage Spaces to allow failover cluster nodes to use their local storage inside this cluster, avoiding the previous necessity of a shared storage fabric.

ADFS v4

Active Directory Federation Services (ADFS) is a Windows Server role that supports claims (token)-based identity. Claims-based identity is crucial thanks to the need for single-sign on (SSO) between on-premises Active Directory and various cloud-based services.

ADFS v4 in Windows Server 2016 finally brings support for OpenID Connect-based authentication, multi-factor authentication (MFA), and what Microsoft calls "hybrid conditional access." This latter technology allows ADFS to respond when user or device attributes fall out of compliance with security policies on either end of the trust relationship.

Nested Virtualization

Nested virtualization refers to the capability of a virtual machine to itself host virtual machines. This has historically been a "no go" in Windows Server Hyper-V, but we finally have that ability in Windows Server 2016.

Nested virtualization makes sense when a business wants to deploy additional Hyper-V hosts and needs to minimize hardware costs.

Hot Add Virtual Hardware

Hyper-V has long allowed us to add virtual hardware or adjust the allocated RAM of a virtual machine. However, those changes historically required that we first power down the VM. In Windows Server 2016, we can now "hot add" virtual hardware while VMs are online and running. I was able to add an additional virtual network interface card (NIC) to my running Hyper-V virtual machine.

PowerShell Direct

In Windows Server 2012 R2, Hyper-V administrators ordinarily performed Windows PowerShell-based remote administration of VMs the same way they would with physical hosts. In Windows Server 2016, PowerShell remoting commands now have -VM* parameters that allow us to send PowerShell directly into the Hyper-V host's VMs!

Invoke-Command -VMName 'server2' -ScriptBlock {Stop-Service -Name Spooler} -Credential 'tomsitprotim' -Verbose

We used the new -VMName parameter of the Invoke-Command cmdlet to run the Stop-Service cmdlet on the Hyper-V VM named server2.

Shielded VMs

The new Host Guardian Service server role, which hosts the shielded VM feature, is far too complex to discuss in this limited space. For now, suffice it to say that Windows Server 2016 shielded VMs allow for much deeper, fine-grained control over Hyper-V VM access.

For example, your Hyper-V host may have VMs from more than one tenant, and you need to ensure that different Hyper-V admin groups can access only their designated VMs. By using BitLocker Drive Encryption to encrypt the VM's virtual hard disks, shielded VMs can solve that problem.

 



European SQL 2016 Hosting - HostForLIFE :: Dynamic Data Masking in SQL 2016. Is it Enough?

clock October 11, 2016 23:43 by author Scott

Dynamic vs. Static Data Masking

When masking data, organizations prevent unauthorized users from viewing sensitive data and protect information to meet regulatory requirements. Data masking technology provides data security by replacing sensitive information with non-sensitive content, doing so in such a way that the copy of the data looks and acts like the original.

In this article, we talk about the different types of data masking and discuss how organizations can use data masking to protect sensitive data.

Masking data isn’t the same as a firewall

Most organizations have a fair amount of security around their most sensitive data in the production (live) databases. Access to databases is restricted in a variety of ways from authentication to firewalls.

Masking limits the duplication of sensitive data within development and testing environments by distributing substitute data sets for analysis. In other cases, masking will dynamically provide masked content if a user's request for sensitive information is considered 'risky'. Masking data is designed to fit within existing data management frameworks and mitigate risks to information without sacrificing its usefulness. Masking platforms tend to guard data, locate it, identify risks and protect information as it moves in and out of applications.

Data masking hides the actual data. There are a variety of different algorithms for masking, depending on the requirements.

Simple masking just turns characters to blanks, so, for example, an e-mail address would appear with its characters blanked out or replaced by placeholder symbols.

More complex masking understands values, so, for example, a real name like “David Patrick” would be transformed into a fake name (with the same gender characteristics), like “John Smith”

In some algorithms, values are scrambled, so, for example, a table of health conditions might appear with real values of the health conditions, but not associated with the correct person.

Most data masking tools will offer a variety of levels of masking that can be enabled in your network. Both static and dynamic data masking use these same masking methodologies.

Static data masking

Static data masking is used by most organizations when they create testing and development environments, and, in fact, is the only possible masking method when using outsourced contractors or developers in a separate location or separate company. In these cases, it's necessary to duplicate the database. When doing so, it is crucial to use a static data masking tool. These tools ensure that all sensitive data is masked before sending it out of the organization.

Static data masking provides a basic level of data protection by creating an offline or testing database using a standard ETL procedure. This procedure replicates a production database, but substitutes data that has been masked; in other words, the data fields are changed to data that is not original or is not readable.

It’s important to be aware that static masking can provide a backdoor, especially when outsourced personnel are used for administration, development, or testing. To mask data, the data is extracted from the database, at least for inspection, in order to understand the data before masking. Theoretically, this could provide a backdoor for data breaches, though it is not one of the common methods of malicious data capture.

Also, it’s clear that the static database always lags behind the actual data. The static database can be updated periodically, for example on a daily or weekly basis. This is not a security risk, but it often has implications for a variety of tests and development issues.

Static data masking allows database administrators, quality assurance, and developers to work on a non-live system so that private data is not exposed.

In many cases, in fact, you’ll want a test database anyhow. You don’t want to be running live experiments on a production database, so for R&D and testing, it makes sense to have a test database. There’s nothing wrong with this scenario.

Is your database protected with static data masking?

The answer should be obvious from the image above. Your actual production database is, in fact, not protected in any way when it comes to concealing sensitive information. Anyone or any system that has access to the production database might also have access to sensitive information. For most organizations, the only protection under this scenario is provided by limiting authorization access to the production database.

Concerns about static data masking

With static data masking, most of the DBAs, programmers, and testers never actually get to touch the production database. All of their work is done on the dummy test database. This provides one level of protection and is necessary for many environments. However, it is not a complete solution because it does not protect authorized users from viewing and extracting unauthorized information. The following concerns should be noted when using static database solutions.

Static solutions actually require extraction of all the data before it is masked; that is, they actually guarantee the data gets out of the database in unmasked form. One of the most disturbing facts about static data masking is the standard ETL (extract, transform, and load) approach: the database information is extracted as-is from the database, and only afterwards transformed. You have to hope or trust that the masking solution successfully deleted the real data, and that the static masking solution is working on a secure platform that was not compromised.

The live database is not protected from those who do have permissions to access it. There are always some administrators, QA engineers, developers, and others with access to the actual live database. These personnel can access actual data records, which are not masked.

For organizations where a test database is not necessary for other purposes, it is wasteful to have a full test database that is a copy of the full production database, minus identifying information. The cost is in the hardware and maintenance of the second system.

Activities have to be performed twice: once on the test system and then implemented on the live system. There’s no guarantee that it will work on the production system, and then the developers or DBAs who need to debug the system will be either debugging on the testing system, or they will be granted permissions that allow them to see the actual live data.

Dynamic data masking: security for live systems

Dynamic data masking is designed to secure data in real time for live production and non-live systems. Dynamic data masking masks all sensitive data as it is accessed, in real time, so the sensitive information never leaves the database. When a DBA or other authorized person views actual data in the production database, the data is masked or garbled, so the real data is never exposed. This way, under no circumstances is anyone exposed to private data through direct database access.

Using a reverse proxy, the dynamic masking tool investigates each query before it reaches the database server. If the query involves any sensitive data, the data is masked on the database server before it reaches the application or the individual who is requesting the data. This way, the data is fully functional for development or testing purposes but is not displayed to unauthorized users.

Dynamic masking allows all authorized personnel to perform any type of action on the database without seeing real data. Of course, activities that are supposed to show data do show that data, but only to the authorized personnel using the correct access. When using advanced data masking rules, it’s possible to identify whether a particular field should be shown to a particular person, and under what circumstances. For example, someone may be able to access one hospital record at a time but only from a particular terminal or IP address, using a specific application and specific credentials. Accessing that same record using a direct database command would not work or would produce masked data.
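It is worth noting that SQL Server 2016 itself ships a built-in, column-level flavour of dynamic data masking. A minimal sketch is shown below; the table, columns and user are hypothetical examples, not part of any particular proxy-based product:

-- Masked columns return obfuscated values to principals without the UNMASK permission
CREATE TABLE dbo.Customers
(
    CustomerId   INT IDENTITY(1,1) PRIMARY KEY,
    FullName     NVARCHAR(100),
    Email        NVARCHAR(200) MASKED WITH (FUNCTION = 'email()'),
    CreditCardNo VARCHAR(20)   MASKED WITH (FUNCTION = 'partial(0,"XXXX-XXXX-XXXX-",4)')
);

CREATE USER MaskedReader WITHOUT LOGIN;
GRANT SELECT ON dbo.Customers TO MaskedReader;

EXECUTE AS USER = 'MaskedReader';
SELECT Email, CreditCardNo FROM dbo.Customers;   -- values come back masked
REVERT;

-- Trusted principals can be granted permission to see the real data
GRANT UNMASK TO MaskedReader;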

Concerns with dynamic data masking

Dynamic data masking requires a reverse proxy, which means adding a component between the data query and response. Different solutions exist, some of which require a separate on-premises server, and others that are software-only based and can be installed on the database server.

Furthermore, when a company uses only dynamic data masking and does not have a production system, there are issues associated with performing functions on the live database.

The following concerns should be noted when using dynamic database masking solutions.

  • Response time for real-time database requests. In environments where milliseconds are of crucial importance, dynamic masking needs to be carefully tested to ensure that performance meets the organization standards. Even when a particular item of data is not masked, the proxy does inspect the incoming request.
  • Security of the proxy itself. Any type of software installed on the database server needs to be secure. And once a proxy is present, you have to ensure that all connections to the database pass through this SQL proxy. Bypassing the proxy in any way will result in access to the sensitive data without masking.
  • Performing of database development and testing on live systems can cause errors in the production system. In many cases, DBAs perform changes on a limited part of the system before deploying. However, best practices would require a separate database for development and testing.

Static vs. Dynamic Data Masking

The main reason to use data masking is to protect sensitive and confidential information from being breached and to comply with regulatory requirements. At the same time, the data must keep the same structure, otherwise testing will not show accurate results. The data needs to look real and behave exactly as data normally would in the production system. Some companies take real data for non-production environments, but sometimes the data may have other uses. For example, in some organizations, when call center personnel view customer data, the credit card data may be masked on screen.

Generally speaking, most organizations will need some combination of dynamic and static database masking. Even when static data masking is in place, almost any organization with sensitive information in the database should add dynamic data masking to protect live production systems. Organizations with minimal development and testing can rely solely on dynamic data masking, though they may find themselves providing some data with static masking to outside developers or other types of contractors.

Advantages of static data masking

  • Allows development and testing without influencing live systems
  • Best practice for working with contractors and outsourced developers, DBAs, and testing teams
  • Supports more in-depth masking policies and capabilities
  • Allows organizations to share the database with external companies

Advantages of dynamic data masking

  • The sensitive information never leaves the database!
  • No changes are required at the application or the database layer
  • Customized access per IP address, per user, or per application
  • No duplicate or off-line database required
  • Activities are performed on real data, saving time and providing real feedback to developers and quality assurance
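
The per-user access point above maps, in SQL Server 2016's built-in implementation, onto the UNMASK permission: users without it receive masked values, while trusted users can be granted (and later revoked) the right to see real data. The sketch below assumes the hypothetical dbo.Customers table from the earlier example and an illustrative CallCenterAgent user.

-- Users without UNMASK see masked values; trusted users can be granted UNMASK
CREATE USER CallCenterAgent WITHOUT LOGIN;
GRANT SELECT ON dbo.Customers TO CallCenterAgent;

EXECUTE AS USER = 'CallCenterAgent';
SELECT CustomerID, Email, CreditCardNo FROM dbo.Customers;  -- returns masked data
REVERT;

GRANT UNMASK TO CallCenterAgent;              -- this user now sees the real values
-- REVOKE UNMASK FROM CallCenterAgent;        -- and the privilege can be withdrawn again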



European DotNetNuke 7 Hosting - UK :: What You Must DO and DON'T for Your DotNetNuke SEO

clock November 12, 2015 19:47 by author Scott

DotNetNuke is easily one of the most popular ASP.NET content management systems (CMS) out there. In this post I am going to cover some of the simplest, fastest things you can do to your DotNetNuke site in order to improve SEO. If your first question was “What is SEO?” then this post is for you. If you are familiar with SEO and want a quick refresher that pertains specifically to DotNetNuke, then this post is for you. If you have already carefully tuned your site and are looking for advanced optimization techniques, go hire a marketing professional with a proven track record of SEO success.

So, what is SEO? The term SEO stands for Search Engine Optimization and it essentially means tweaking your site to make it more search-friendly. A search-friendly site makes it much easier for your current and potential customers to locate your site and to find what they need on your site. The end-goal of SEO is often more specifically about getting into one of the coveted top spots in Google’s search results for a particular term or phrase. While achieving that goal usually takes a lot more than the simple tips presented in this post (for example, Google’s ranking depends a lot on the number and types of other sites that link to you), these tips will get you started on the right path and will make your site more useful and usable in the process.

DO use a DotNetNuke (DNN) skin that is web standards-based and follows current recommended practices for accessible content. A good skin will probably note that it is XHTML or HTML5 compliant and may display a small W3C icon indicating that its code validates properly. The W3C is the internationally-recognized body responsible for setting standards that govern key web technologies. A good skin will not use tables for layout. You may also see references to Section 508, which refers to standards set forth in US law for making a site accessible to all users including those using screen readers and other assistive technology. Section 508-compliant sites tend to also be extremely search-friendly as they will include additional text and meta-data to serve assistive technology that is also useful to the robots used by Google, Bing, Yahoo and others.


DON’T make any key text into an image. In fact, avoid making any text into an image at all. Search engines cannot read any text from an image, and neither can screen readers for visually-impaired users. Web font services such as Typekit, Webtype and Google Web Fonts make it easier than ever to replace images with plain text and maintain custom styling. If you must use an image for text, make sure that you properly define the alt attribute of the image. This is something that is absolutely essential to do for all images on your site, not just those that contain text. In DotNetNuke you typically set up this text in the properties area when inserting an image. You can also configure an additional long description through this dialog where appropriate.


DO take advantage of the site-wide and page-level descriptions and keywords available in DNN. Most themes will use these elements in the header meta-tags of your site. While search engines never rely on keyword tags alone for indexing, these keywords can be useful for pointing out to the search engine which words on the page are particularly important. On the other hand, the description is typically used by search engines as the snippet that will be displayed in their search results. For this reason you should keep it short, to-the-point and self-explanatory. This is your chance to grab the attention of someone who is quickly scanning a search results page for relevant links. The site-wide description will often be used when no page-level description is present but you should always override it with a more specific description per page when possible to prevent the appearance of duplicate content and to give a better idea of what is actually on each page. You can set the description and keywords for the entire site in Site Settings and for an individual page while editing that page.


DON’T post misleading content, duplicate content on multiple pages or rip content from other sites on the web. Google in particular is known to penalize this type of behavior. This should be an obvious one as it is also clearly an ethical issue. The best way to attract the customers that are right for you is to post authentic, original content that benefits them in some way. If you find an article elsewhere that you think may be beneficial to your customers the best thing to do is to write your own post adding value to the discussion, including one or two short, properly-attributed quotes and linking to the full text on the original site. Who knows, this neighborly behavior could even lead to a productive relationship with the author of the content and perhaps a link back to your site from theirs at some future date.


DO update frequently, write plenty of informative text and try to include the words and phrases that you think customers will use when trying to search for the content you have written. Pay attention to the types of questions your customers are asking and how they are asking them. Try to think like a customer when writing and use the same terms that they would be likely to use when searching. Mention those things which are most distinguishing about you and that you want others to find out about you. By doing this not only will you make your content rank higher in search results but you will also make it far more relevant to your customers themselves. It is also a very good idea to include the keywords you identify in the titles of your pages or posts whenever it makes sense to do so – search engines tend to place special emphasis on URLs and titles.


DON’T rename or move pages unless absolutely necessary (and even then seriously consider creating a redirect from the old location to the new one). I should really say don’t rename or move pages ever. Changing the structure of your site can cause dead links both internally and externally. If someone has bookmarked a particular location or linked to it from a blog, web site or tweet and you move or rename that page the link will stop working and you will end up with missed opportunities and frustrated and alienated customers. Restructuring your site can also lead to a (usually temporary) search ranking penalty until your site is re-indexed by every major search provider. If you must move a page you can create a placeholder at the old location and use the DNN link/redirect options to make sure that people with only the old URL will still end up in the right place.


DO use the semantic nature of HTML to add value to your content. An <h1> (Heading 1) tag should be the most important heading on your page. <h2> should come next and so forth, like an outline. The <p> tag should separate paragraphs. Addresses should be indicated with <address> and lists with <ul> (unordered) or <ol> (ordered). Links created with the <a> tag should include title attributes. Like many of the previous tips, producing well-structured content helps both search engines and assistive devices parse your site with greater success. It also starts you down the path towards microformats and some of the advanced and exciting things being done with them. DNN makes at least the basics of this relatively easy without editing the actual code. You can simply choose the tag that will be used from the built-in editor. When you do this, just remember that you are describing the text as well as styling it.

That’s it for today! I hope that these tips help you make your site more search- and user-friendly! 



Advantages Using Cloud Hosting - Disaster Recovery Planning

clock October 6, 2015 07:50 by author Scott

Disaster Recovery plans and infrastructures are a necessity for large enterprises that could not continue operating if their mission critical applications were to crash or become unavailable. Most DR infrastructures have adopted the method of replicating the infrastructure and hardware of the primary site at a backup location; this ensures that when an issue occurs at the primary site, applications can still be served from an unaffected location that has the necessary capacity. However, replicating an already complex infrastructure is time consuming and costly not just to build, but to maintain too. Cloud DR builds upon these DR plans by providing a virtualized cloud infrastructure that can sit idle with minimal resources, yet scale up to cope with demand when the primary site fails.

Reduction in DR costs

Moving a DR configuration into the cloud can help you realize huge cost savings by reducing the amount of physical infrastructure that you need to maintain and rely on to protect you in the event of a disaster that takes down your primary data center locations. Because a DR environment only requires a bare minimum of resources when it isn’t being actively used, the amount that you are paying will also be a bare minimum.

Because a physical DR environment comprises hardware and resources equal to those of your primary site, the amount being paid for is often the same as for the primary site; the only difference is that most of the time these resources are lying idle. So in effect, with traditional DR you could be paying for unused resources a lot of the time. In a cloud DR configuration, if it is called into action, additional resources can be automatically provisioned as the environment scales to cope with the demand being placed on it. Once demand recedes, the resources are returned to the cloud. With cloud DR you will only ever be paying for resources that are actually being used, which is where the cost efficiencies arise.

Virtualization caters for unpredictable demand

With traditional DR, the capacity of the DR environment is equal to that of the primary site. So whilst there will be enough capacity to meet demand when the primary site is down, it also means that even when the DR environment is in use there could still be a substantial amount of free resources, free resources that you are still paying for. DR in the cloud accounts for this unpredictable demand by scaling up to match the demand placed on it, so you are only ever paying for resources that are actually being used and there will never be any spare resources. This can also help at times when demand is actually more than even the primary site can handle.

Minimal recovery time

With cloud DR, the backup environment will be ready to serve your mission critical applications the moment any issues are detected at your primary sites. In the event that your primary site does become unavailable, your end-users shouldn’t notice any difference, as we have designed the failover process to take place with minimal downtime. Once your end-users have been transferred to the DR environment, you can get to work repairing the primary site as soon as possible to minimize the amount of time that is spent utilizing the recovery site. Once you have repaired the issue and are confident that the primary site is ready to be returned to live use, the transfer from the DR site back to the primary site will also be seamless and completed with minimal downtime. These processes make sure that issues don’t have the opportunity to have a large, negative impact on your business; although sometimes they may take time and money to repair, from your end-user’s perspective at least your business will continue to operate as normal because they will still be able to access their mission critical applications and data without issue.

Try our new Cloud Hosting as low as €3.49/month!!

As we have explained above the benefits using Cloud. Now, you can try our new Cloud technology start from an affordable cost. For more information, please visit our cloud official site at http://hostforlife.eu/ASPNET-Cloud-European-Hosting-Plans.

 



About HostForLIFE.eu

HostForLIFE.eu is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.

We have offered the latest Windows 2012 Hosting, ASP.NET 4.5 Hosting, ASP.NET MVC 5 Hosting and SQL 2014 Hosting.
