European Windows 2012 Hosting BLOG

BLOG about Windows 2012 Hosting and SQL 2012 Hosting - Dedicated to European Windows Hosting Customers

European SQL Hosting - Amsterdam :: Moving Database Files of a Mirrored SQL Server Database

clock April 15, 2013 08:11 by author Scott

As you may know, you cannot detach a mirrored database or bring it offline to move a data or log file from one drive to another. Moving a database file for a mirrored database is not the same as moving a normal database. Here I will show you the step-by-step process of moving the data and/or log file(s) from one drive to another with minimal downtime.

Solution

Moving database files can be done in two ways: by detaching the database, moving the database file(s) to the target location, and attaching the database from the new location; or by running an ALTER DATABASE statement to change the file location in the system catalog, bringing the database offline, copying the file(s) to the target location, and bringing the database back online. With database mirroring enabled, both options fail: we can't detach a mirrored database, nor can we bring it OFFLINE.

Here is a step-by-step solution to reduce your downtime and move a database file from one location to another for a mirrored database.

Steps

Step 1
Check the location of all the database files. Here we are going to move database "NASSP2".

sp_helpdb NASSP2

Here we can see two database files placed on the C: drive. As per best practice, we should not place database files on the system C: drive, so this is what we are going to move.

Step 2
Check the database mirroring configuration for your database. Run the below script to check the database mirroring status and its current partner name.

SELECT DB_NAME(database_id) AS DBName,
       database_id,
       mirroring_state_desc,
       mirroring_role_desc,
       mirroring_partner_name,
       mirroring_partner_instance
FROM sys.database_mirroring
WHERE database_id = DB_ID('NASSP2')

We can see the mirroring_role_desc for this server is PRINCIPAL, along with its partner/mirror instance name.

Step 3
If the database files are big, they will take some time to copy from one drive to another. To minimize the downtime, we will fail the database over from the principal server to its mirror server and route the application to the new principal (the former mirror) so business runs as usual. Run the below command to fail over this database.

ALTER DATABASE NASSP2 SET PARTNER FAILOVER

Step 4
Now we can again check the database mirroring configuration to see the current mirroring state. Run the same script from step 2. This time the output is the same except for one column: mirroring_role_desc is now MIRROR where earlier it was PRINCIPAL.

Now our principal instance has become the mirror. Ask your application team to change the ODBC configurations and route the application connections to the new principal server. Now we can make changes without downtime being a concern. Note that if you have databases in a shared environment, you may need to fail over all databases to the mirrored server to reduce downtime: the technique requires stopping the database services, which could impact other databases on this server.

Step 5
As we saw in step 1, two database files are on the C: drive. Now we have to move these two files from C: to E:\MSSQL2008\DATA. First we need to run an ALTER DATABASE statement to change the file location in the master database system catalog. Make sure to run this ALTER statement for every file that needs to be moved to the new location.

ALTER DATABASE NASSP2
MODIFY FILE (NAME='NASSP2_System_Data', FILENAME='E:\MSSQL2008\DATA\nassp2_system_data.mdf')
go
ALTER DATABASE NASSP2
MODIFY FILE (NAME='NASSP2_log', FILENAME='E:\MSSQL2008\DATA\nassp2_log.ldf')
go
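Before stopping the service, you can confirm that the system catalog now points at the new locations (a quick check against the standard sys.master_files view):

SELECT name, physical_name, state_desc
FROM sys.master_files
WHERE database_id = DB_ID('NASSP2');

The physical_name column should already show the E: paths, even though the files themselves have not moved yet.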

Step 6
Now, stop the SQL Server instance to copy the data and log file(s) to the target location. I used PowerShell to stop and start the services. You can use services.msc utility or SQL Server Configuration Manager as well.

Stop-Service MSSQLSERVER -Force

Check the status of the SQL Server service.

Get-Service MSSQLSERVER

Step 7
Now copy both database files (nassp2_system_data.mdf and nassp2_log.ldf) to the new target location (from C: to E:\MSSQL2008\DATA).
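If you prefer to script the copy, the same PowerShell session works here too; a minimal sketch, assuming the files sit in the root of C: (adjust the source paths to your environment):

Copy-Item 'C:\nassp2_system_data.mdf' 'E:\MSSQL2008\DATA\'
Copy-Item 'C:\nassp2_log.ldf' 'E:\MSSQL2008\DATA\'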

Step 8
Once both files have been copied, start the SQL Server service.

Start-Service MSSQLSERVER

Check the status of the SQL Server service.

Get-Service MSSQLSERVER

Step 9
Once SQL Server has started, fail your database back from the current principal to your primary box. Run step 1 again to check the location of the files and step 2 again to check the mirroring status.
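The failback itself is the same command we used in step 3, this time run on the current principal (the original mirror):

ALTER DATABASE NASSP2 SET PARTNER FAILOVER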

 



European SQL Hosting Tips :: SQL Query and Optimization in SQL Server

clock April 5, 2013 08:09 by author Scott

This article talks about real-world query optimization. Developers and database administrators regularly face long-running queries, so this article will help you optimize a SQL query with an index.

I've run a simple test on a SQL query involving 2 tables, tblEmail and tblPromotionEmail.

Table columns: 

tblEmail (Email varchar(255), IsDeleted int)
tblPromotionEmail (Email varchar(255), PromotionID int)

With both tables unindexed:

SELECT TOP (90) e.Email
FROM tblEmail e
LEFT OUTER JOIN tblPromotionEmail pe
  ON e.Email = pe.Email AND pe.PromotionID = 6
WHERE pe.PromotionID IS NULL
  AND e.IsDeleted = 0
  AND e.Email LIKE '%hotmail%'
ORDER BY e.Email

It takes about 2:43 (2 minutes 43 seconds) to get the result.

Change the query from a LEFT OUTER JOIN to a NOT IN subquery:

SELECT TOP (90) e.Email
FROM tblEmail e
WHERE e.Email NOT IN (SELECT pe.Email
                      FROM tblPromotionEmail pe
                      WHERE pe.PromotionID = 6)
  AND e.IsDeleted = 0
  AND e.Email LIKE '%hotmail%'
ORDER BY e.Email

Now you'll see the difference; it returns the result faster.

Now, we add a unique index on the Email column of tblEmail.
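A minimal sketch of that index DDL (the index name is an assumption):

CREATE UNIQUE NONCLUSTERED INDEX IX_tblEmail_Email
ON tblEmail (Email);

With the index in place, we rerun the first query: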

SELECT TOP (90) e.Email
FROM tblEmail e
LEFT OUTER JOIN tblPromotionEmail pe
  ON e.Email = pe.Email AND pe.PromotionID = 6
WHERE pe.PromotionID IS NULL
  AND e.IsDeleted = 0
  AND e.Email LIKE '%hotmail%'
ORDER BY e.Email

It takes about 1:22 (1 minute 22 seconds) to get the result.

Now, we run the 2nd query:

SELECT TOP (90) e.Email
FROM tblEmail e
WHERE e.Email NOT IN (SELECT pe.Email
                      FROM tblPromotionEmail pe
                      WHERE pe.PromotionID = 6)
  AND e.IsDeleted = 0
  AND e.Email LIKE '%hotmail%'
ORDER BY e.Email

It only takes 27 seconds.

To make the 2nd query even better, apply the LIKE filter inside the subquery as well:

SELECT TOP (90) e.Email
FROM tblEmail e
WHERE e.Email NOT IN (SELECT pe.Email
                      FROM tblPromotionEmail pe
                      WHERE pe.PromotionID = 6
                        AND pe.Email LIKE '%hotmail%')
  AND e.IsDeleted = 0
  AND e.Email LIKE '%hotmail%'
ORDER BY e.Email

This one runs in about 15 seconds. Now you can see how rewriting a query and adding an index retrieve the same records from the database in a fraction of the time.

Hope the tutorial above helps you. If you're looking for SQL 2012 hosting on a European server, please visit HostForLIFE.eu. HostForLIFE.eu is awarded the No. 1 SPOTLIGHT Recommended Hosting Partner by Microsoft (see http://www.microsoft.com/web/hosting/HostingProvider/Details/953). Our service is ranked in the top #1 spot in several European countries, such as Germany, Italy, Netherlands, France, Belgium, United Kingdom, Sweden, Finland, Switzerland and other European countries. Besides this award, we have also won several awards from reputable organizations in the hosting industry; the details can be found on our official website.

 



European SQL Hosting - Amsterdam :: How to Fix Cannot resolve the collation conflict between "..." and "..." in the equal to operation

clock February 14, 2013 05:12 by author Scott

Sometimes you'll get this error message when querying your database:

"Cannot resolve the collation conflict between "SQL_Latin1_General_CP1_CI_AS" and "Latin1_General_CI_AI" in the equal to operation"

So, how to fix this error?


Just use the following syntax to collate on the fly when joining tables with different collations. I integrate systems, so I have to do this a lot.

select * from [Product] p join [category] c
on 
c.[Name] collate SQL_Latin1_General_CP1_CI_AS
=
p.[Name] collate SQL_Latin1_General_CP1_CI_AS
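If you would rather not hard-code a collation name, the DATABASE_DEFAULT keyword coerces both sides to the current database's collation; a variant sketch of the same join:

select * from [Product] p join [category] c
on
c.[Name] collate DATABASE_DEFAULT
=
p.[Name] collate DATABASE_DEFAULT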

Hope this helps.



European SQL 2012 Hosting - Amsterdam :: Why a Session With sp_readrequest Takes so Long to Execute

clock February 12, 2013 05:36 by author Scott

After deploying a Long Running Sessions Detection Job on a production server, we started receiving alerts that a session had been running for more than 3 minutes. But what was this session actually doing? Here is the alert report.

SP ID | Stored Procedure Call     | DB Name | Executing Since
------+---------------------------+---------+----------------
  58  | msdb.dbo.sp_readrequest;1 | msdb    | 3 min

sp_readrequest is a system stored procedure, which basically reads a message request from the queue and returns its contents.

This process can remain active for the time configured in the DatabaseMailExeMinimumLifeTime parameter at the time of Database Mail profile configuration. 600 seconds is the default value for this external mail process. According to BOL, DatabaseMailExeMinimumLifeTime is the minimum amount of time, in seconds, that the external mail process remains active.

This can be changed at the time of mail profile configuration, or you can simply use an update query to change it.

UPDATE msdb.dbo.sysmail_configuration
SET
paramvalue = 60 --60 Seconds
WHERE paramname = 'DatabaseMailExeMinimumLifeTime'


We have changed this to 60 seconds to resolve our problem.
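To verify the change took effect, query the same configuration table (a quick check):

SELECT paramname, paramvalue
FROM msdb.dbo.sysmail_configuration
WHERE paramname = 'DatabaseMailExeMinimumLifeTime';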

 



European SQL 2012 Hosting - Amsterdam :: Tabular Models vs PowerPivot Models SQL 2012

clock January 23, 2013 06:32 by author Scott

In SQL Server 2012, there is a new data model, called tabular, that is part of the new feature called the Business Intelligence Semantic Model (BISM).  BISM also includes the multidimensional model (formerly called the UDM).

The Tabular model is based on concepts like tables and relationships that are familiar to anyone who has a relational database background, making it easier to use than the multidimensional model.  The tabular model is a server mode you choose when installing Analysis Services.

The tabular model is an enhancement of the current PowerPivot data model experience, both of which use the VertiPaq engine.  When opening a PowerPivot for SharePoint workbook, an SSAS cube is created behind the scenes, which is why PowerPivot for SharePoint requires SSAS to be installed.

So if tabular models and PowerPivot models use the same Analysis Services engine, why are tabular models necessary when we already have PowerPivot?

There are four things that tabular models offer that PowerPivot models do not:

  1. Scalability - PowerPivot has a 2 GB limit for the size of the Excel file and does not support partitions, but tabular models have no such limit and do support partitions.  Tabular models also support DirectQuery
  2. Manageability – There are a lot of tools you can use with the tabular model that you can’t use with PowerPivot: SSMS, AMO, AMOMD, XMLA, Deployment Wizard, AMO for PowerShell, and Integration Services
  3. Securability – Tabular models can use row security and dynamic security, neither of which PowerPivot supports; PowerPivot offers only Excel workbook file security
  4. Professional development toolchain - Tabular models live in the Visual Studio shell.  Thus, they enjoy all the shell services such as integrated source control, msbuild integration, and Team Build integration.  PowerPivot lives in the Excel environment, thus it is limited to the extensibility model provided in Excel (which doesn’t include source control or build configuration).  Also, because tabular models live in the VS environment, build and deployment can be naturally separated


So Analysis Services can now be installed in one of three server modes: Multidimensional and Data Mining (default), PowerPivot for SharePoint, and Tabular.

More info:

When to choose tabular models over PowerPivot models

Comparing Analysis Services and PowerPivot

Feature by Server Mode or Solution Type (SSAS)

 



European SQL 2012 Hosting - Amsterdam :: SQL Server 2012 Integration Services with HostForLIFE.eu

clock December 24, 2012 05:51 by author Scott

SQL Server 2012 Integration Services introduces an innovative approach to deploying SSIS projects, known as the Project Deployment Model. This is the default and recommended deployment technique, due to a number of benefits it delivers (such as the ability to centralize management of package properties across deployed projects, as well as to monitor and log package execution performance and progress). However, even though the new model has become the recommended and default configuration for new packages created using SQL Server Data Tools, the traditional, package-based methodology remains available and supported. More importantly, in some scenarios it might be considered a more viable option, since it allows for separation of the SQL Server Database Engine and SQL Server Integration Services roles, yielding performance benefits and facilitating distributed extraction, transformation, and loading (ETL) activities. The Project Deployment Model, on the other hand, requires SSIS to be collocated on a SQL Server 2012 instance, due to its dependency on the SSIS catalog. We will discuss the characteristics of the traditional model and walk through its implementation in this post.


The deployment method available from a SQL Server Data Tools-based project depends directly on the model employed during its creation (it is also possible to switch between the two models after a project is created by applying a conversion process, invoked from the Project menu in SQL Server Data Tools). Effectively, in order to use the traditional package deployment mechanism, you first need to ensure that your project is based on the package deployment model, which you can easily identify by checking its label in the Solution Explorer window.

In addition, you will also have to modify default project properties. To access them, right-click on the node representing the project in the Solution Explorer window and select the Properties entry from its context sensitive menu. In the resulting dialog box, switch to the Deployment section, where you will find three settings displayed in the grid on the right hand side:

AllowConfigurationChanges – a Boolean value (i.e. True, which is the default, or False) that determines whether it will be possible to choose package configurations during deployment and assign values to the properties or variables they reference.

CreateDeploymentUtility – a Boolean value (True or False, which is the default) that indicates whether initiating the project build will result in the creation of its Deployment Utility.

DeploymentOutputPath – a path that points to the file system folder where the Deployment Utility will be created. The path is relative to the location where the project files reside (and is set, by default, to bin\Deployment).

Set the value of the CreateDeploymentUtility property to True and modify AllowConfigurationChanges according to your requirements (e.g. set it to False if your project does not contain any package configurations or you want to prevent their modification during deployment). Next, start the build process (using the Build item in the top-level main menu of SQL Server Data Tools), which will populate the output folder (designated by the DeploymentOutputPath parameter) with the .dtsx package file (or files, depending on the number of packages in your project), an XML-formatted Deployment Manifest file (whose name is constructed by concatenating the project name and the .SSISDeploymentManifest suffix) and, potentially, a number of other project-related files (representing, for example, custom components or package configurations).

The subsequent step in the deployment process involves copying the entire content of the Deployment Output folder to a target server and double-clicking on the Deployment Manifest file at its destination. This action will automatically launch the Package Installation Wizard. After its first, informational page, you will be prompted to choose between File system and SQL Server deployments.

The first of these options creates an XML-formatted file (with extension .dtsx) in an arbitrarily chosen folder. If the project contains configuration files (and the AllowConfigurationChanges project property was set to True when you generated the build), then you will be given an option to modify values of properties included in their content. At the end of this procedure, the corresponding .dtsConfig files will be added to the target folder.

The second option, labeled SQL Server deployment in the Package Installation Wizard, relies on a SQL Server instance as the package store. Once you select it, you will be prompted for the name of the server hosting the SQL Server Database Engine, an appropriate authentication method, and a path where the package should be stored. If you want to organize your packages into a custom folder hierarchy, you will need to pre-create it by connecting to the Integration Services component using SQL Server Management Studio. In case your package contains additional files (such as package configurations), you will also be given an opportunity to designate their location (by default, the wizard points to the Program Files\Microsoft SQL Server\110\DTS\Packages folder).

In either case, you can decide whether you want to validate packages following their installation (which will append the Packages Validation page to the Package Installation Wizard, allowing you to identify any issues encountered during deployment). In addition, when using SQL Server deployment, you have an option to set a package protection level (resulting in assignment of ServerStorage value to the ProtectionLevel package property). When deploying to file system, this capability is not available, forcing you to resort (depending on the deployment target) to either NTFS permissions or SQL Server database roles for securing access to your packages and sensitive information they might contain.

Just as in earlier versions of SQL Server, the local SQL Server Integration Services 11.0 instance (implemented as the MSDTSServer110 service) offers the ability to manage packages stored in the MSDB database and the file system, providing you with additional benefits (such as monitoring functionality), which we will discuss in more detail in our upcoming articles. In the case of MSDB storage, this is accomplished by following the SQL Server deployment process we just described and is reflected by entries appearing under the MSDB subfolder of the Stored Packages folder of the Integration Services node when viewed in SQL Server 2012 Management Studio. In the same Stored Packages folder, you will also find the File System subfolder containing file system-based packages that have been identified by the SSIS service in the local file system. By default, the service automatically enumerates packages located in the Program Files\Microsoft SQL Server\110\DTS\Packages directory; however, it is possible to alter this location by editing the Program Files\Microsoft SQL Server\110\DTS\Binn\MsDtsSrvr.ini.xml file and modifying the content of its StoragePath XML element. Incidentally, the same file controls other characteristics of the MSDTSServer110 service, such as package execution behavior in scenarios where the service fails or stops (by default, execution is halted) and the location of target SSIS instances (defined using the ServerName XML element).

While the Package Installation Wizard is straightforward to use, it is not well suited for deploying multiple packages. This shortcoming can be remedied by taking advantage of the DTUtil.exe command line utility, which in addition to its versatility (including support for package deployment and encryption), can also be conveniently incorporated into batch files. Its syntax takes the form DTUtil /option [value] [/option [value]] …, pairing option names with the values associated with them. For example, /SQL, /FILE, or /DTS options designating the package storage type (SSIS Package Store, File System, and MSDB database, respectively) can be combined with a value that specifies package location (as a relative or absolute path). By including COPY, MOVE, DELETE, or EXISTS options (with a value identifying package location) you can effectively copy, move, delete, or verify the existence of packages across all package stores.
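For example, a minimal sketch that copies a file-based package into the MSDB database and then verifies it exists there (the path and package name are assumptions for illustration):

DTUtil /FILE C:\Packages\LoadOrders.dtsx /COPY SQL;LoadOrders
DTUtil /SQL LoadOrders /EXISTS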

In conclusion, the traditional package deployment methods available in earlier versions of SQL Server Integration Services are still supported and fully functional in the current release. However, in most scenarios, you should consider migrating your environment to the latest Project Deployment Model due to a wide range of advantages it delivers.

 



European SQL 2012 Hosting - Amsterdam :: Enabling Contained Databases in SQL Server 2012

clock September 8, 2012 05:48 by author Scott

The authentication mechanism for logging in to the SQL Server database engine is either Windows authentication or a SQL Server account. Sometimes you will face authentication issues with database portability; for example, when you migrate a database from one SQL Server instance to another, the DBA has to ensure that all logins on the source SQL Server instance also exist on the target SQL Server instance. Organisations often experience these problems during failover when using database mirroring.

SQL Server 2012 addresses these authentication and login dependency challenges by introducing Contained Database authentication to enhance the authorization and portability of user databases.

What is Contained Database Authentication?

Contained Database Authentication allows users to be authenticated directly against a user database without logins that reside in the database engine. It allows authentication without logins both for SQL users with passwords and for Windows authentication without a login. It is a great feature when you want to implement AlwaysOn Availability Groups.


Enabling Contained Databases

Contained Databases is a property which you can enable or disable via the Advanced Properties page in SQL Server Management Studio or with T-SQL.

Enable Contained Database Authentication using SQL server Management Studio

1. In Object Explorer, right-click a SQL Server instance, and then click Properties.

2. Select the Advanced page, and in the Containment section, set the property Contained Database to True and then click OK.



Enable Contained Database Authentication using T-SQL

sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
sp_configure 'contained database authentication', 1;
GO
RECONFIGURE;
GO
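You can confirm the setting with a quick query against sys.configurations:

SELECT name, value_in_use
FROM sys.configurations
WHERE name = 'contained database authentication';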


Creating Users

If the user does not have a login in the master database, the connection string must include the contained database as the initial catalog. The below T-SQL can be used to create a contained database user with a password.

CREATE USER KennyB
WITH PASSWORD = '2e4ZK933',
DEFAULT_LANGUAGE = [English],
DEFAULT_SCHEMA = [dbo];
GO
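Note that the target database itself must have containment enabled before this CREATE USER will succeed; a minimal sketch, with the database name assumed:

ALTER DATABASE MyContainedDB SET CONTAINMENT = PARTIAL;
GO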


To migrate a SQL Server authentication login to a contained database user with a password, you can use the below T-SQL:

EXEC sp_migrate_user_to_contained
@username = N'<User Name>',
@rename = N'keep_name',
@disablelogin = N'do_not_disable_login';
GO


Contained Database Authentication Security Concerns

Without the DBA's knowledge, a user with the ALTER ANY USER permission can create and grant access to database users in a contained database. If a user gains access to a database via contained database authentication, that user can potentially access other databases within the database engine if those databases have the guest account enabled.

 



European SQL 2012 Hosting - Amsterdam :: New SSIS Features in SQL Server 2012

clock July 17, 2012 06:42 by author Scott

SQL Server Integration Services (SSIS) has undergone some significant changes in SQL Server 2012, which I will outline in this article.

Connection Managers


Now you have project-based connection managers, which means those connections will be available to all the packages that you create. This avoids recreating frequently used connections for every package. Those connections are created under Connection Managers in the Solution Explorer, as you can see in the below image.




As in previous versions of SSIS, in SQL Server 2012 the connection manager is shown in the Connection Managers region of the package. However, now there is additional text for project connections so users can easily recognize them and take extra care when modifying them.




By right-clicking the project connection manager and selecting Convert to Package Connection, you can demote a project connection to a package connection. Similarly, you have the option of promoting a package connection to a project connection.


Apart from the above two connection types, there are two more, namely Offline Connections and Expression Connections.




In previous versions, if a connection was invalid, every time you opened the package it would hang until the connection timed out and showed the error. In SQL Server 2012, when a connection is invalid after the initial check, the connection is set to offline, which avoids checking the connection again. When the connection is ready, you can test the connectivity and bring the connection online by right-clicking it. In addition, you can set a connection to offline manually. Expression Connections are simply parameters in variables.


The Execute Package Task has undergone a slight change with respect to connection managers. The Execute Package Task now has a new parameter called Reference type, as shown in the below image.




Project Reference is for child packages within the project; when this is selected, you will not be shown the connections in the Connection Manager section. External Reference is for packages outside of the project.


ODBC Support

ODBC source and ODBC destination components are available in SSIS 2012. Previously, there were some difficulties in connecting to MySQL because of the unavailability of OLEDB drivers for MySQL. Users were forced to use the OLEDB provider for ODBC drivers, which was comparatively slow. With ODBC support in 2012, you can directly connect to MySQL using ODBC.


Flat File Improvements

Importing flat files is a very important and frequent task in SSIS. However, in previous versions you were unable to import text files with variable columns; a file had to have a fixed number of columns. This is what you see in the preview if you try to import a text file with a variable number of columns in previous versions of SSIS.



If you want to import these types of text files, you may have to use scripting, which is not an easy task.


However, in SQL Server 2012 this issue (or bug, if you prefer to call it that) is fixed, as you can see from the below image.




These kinds of text files are common in legacy systems such as COBOL. In such systems, there will be several different types of data in the same file. For example, order master details and transaction details will be in the same file. The only way you can distinguish them is by the record type: for master records it will be 'M' while for detail records it will be 'D'.


Since the columns differ between record types (i.e. for master records you will have customer id, date, etc., while for detail records you will have product code, quantity, unit price, unit, etc.), you need the facility to support variable columns.


Variables

You will surely have experienced difficulties when it comes to configuring variables in previous versions of SSIS. In SSIS 2012, the handling of variables has undergone significant improvements.




In SQL Server 2012, variable scope is handled differently than in previous versions. In previous versions, the default scope was the task which you were currently in, which led to many issues in the past. If you really wanted to change it, you could click the button at the end of the row and modify the scope of the variable.


As we saw with connection managers, variables with expressions now have a different icon, so users can distinguish expression variables from others. This is very handy when it comes to troubleshooting.


Parameters

Parameters are read-only variables, which means you can't change them during package execution. Parameters now appear in their own tab in the package designer.




The most important feature of a parameter is the Required option. If it is set to True, you have to pass a value to that parameter: the default value will not be evaluated if the parameter is not passed. By using this, you can avoid mistakes when moving from one environment to another.

If you set the Sensitive property to True, you won't be able to see the parameter value. As shown in the above image, for a password parameter this is a valuable option.

In addition, you have the option of setting project level parameters where the parameters are accessible for all the packages in the SSIS project.


Data Viewer

Enabling data viewers in previous versions of SSIS required quite a bit of effort. With SQL Server 2012 SSIS, simply right-click the data flow path and select Enable Data Viewer and you are done.




Similarly, if you want to disable them, follow the same path.


Tasks

Before discussing the new tasks, let us look at the tasks you won't see in SQL Server 2012. The ActiveX Script Task and the Execute DTS 2000 Package Task have been removed from SQL Server 2012. Since Microsoft has stopped supporting SQL Server 2000, it has now stopped support for DTS 2000 package execution. If you are seriously thinking about moving to SQL Server 2012, make sure you have taken steps to convert those SQL Server 2000 DTS packages to SSIS packages.


Unlike in previous versions, you can now edit task components while those components are not connected or are in an error state.



SQL 2012 Hosting :: Improvements to SQL Server Integration Services in SQL Server 2012

clock July 12, 2012 11:51 by author Scott

Because SSIS is a development tool, and the updates are mostly of a technical nature, trying to explain their business value is quite challenging. Putting it simply, the main value to business is that with the updates, development will be easier and therefore faster.

I will focus on a few of the development improvements about which I'm the most excited.

Visual Studio 2010

Business Intelligence Development Studio (BIDS) has been replaced with SQL Server Data Tools, which uses the core of Visual Studio 2010. This does not just apply to SSIS but to the whole BI development environment. It is due to Microsoft's realignment of their internal product delivery cycles, which should help reduce the mismatch between functionality in related tools. This makes deployments much simpler and integration with Team Foundation Server 2010 a lot smoother.

Ability to debug Script Tasks

In previous versions of SQL Server, you had the ability to debug Script Components but not Script Tasks. With the release of SQL Server 2012, this is no longer the case: you can forget about having to output to the console to try and figure out where exactly your code is failing.

Change Data Capture

Although Change Data Capture (CDC) is not new to SQL Server, there are now CDC Tasks and Components within SSIS that make it easier to implement.

Undo and Redo

At long last you are now able to undo or redo any actions – such as bringing back the data flow that you accidentally deleted – without having to reload the whole project. In my opinion this improvement alone makes it worth upgrading!

Flat File Source Improvements

Two great additions to SQL Server 2012 that will solve a lot of headaches when importing data from flat files are the support for varying numbers of columns and embedded text qualifiers.

Project Connection Managers

Gone are the days where you had to recreate connections to your source and destination within each SSIS package. Connections can now be set up at a project level which can then be shared within the packages.

Column Mappings

In SQL Server 2012, SSIS is a lot smarter about how it deals with column mappings and now uses the column names instead of the lineage ID. This means that if you decide to recreate your data source task, you do not have to remap all the columns as was the case in the past. SQL Server 2012 also comes with a Resolve Column Reference Editor which allows you to link unmapped output columns to unmapped input columns across the whole execution tree; in the past this had to be done from task to task.

Parameter Handling

Parameters are a new addition to SSIS and are very useful. In the past you had to use configurations which could only be assigned at a package level. Parameters can now be set at both a package and project level. You can assign three different types of values to parameters, namely Design default, Server default and Execution.

There are quite a few more additions to SSIS (including its built-in reporting capabilities, improvements to the user interface, and integration with Data Quality Services), but the features I have focused on in this post are improvements to issues that I have frequently come across on previous projects. I'm sure these improvements and additions to SSIS will be greatly appreciated by the industry.



European SQL 2012 Hosting - Amsterdam :: New FileTable Feature in SQL Server 2012

clock June 26, 2012 10:10 by author Scott

Problem

The FileTable feature of SQL Server 2012 is an enhancement to the FILESTREAM feature which was introduced in SQL Server 2008. In this tip we will take a look at how to use FileTable feature of SQL Server 2012.

Solution

A FileTable is a new user table which gets created within a FILESTREAM-enabled database. Using the FileTable feature, organizations can now store files and documents within a special table in SQL Server, and they have the ability to access those files and documents from Windows. When you use this feature, it will appear to you as if the files and documents reside on a file system rather than in SQL Server. However, in order to use the FileTable feature you need to enable the FILESTREAM feature on the instance of SQL Server 2012. Database administrators can define indexes, constraints and triggers; however, the columns and system-defined constraints cannot be altered or dropped. Also, in order to enable the FILESTREAM feature you need to be a member of the sysadmin or serveradmin fixed server roles.

Steps to Setup

1. Execute the below mentioned TSQL code to enable the xp_cmdshell feature on SQL Server 2012. Once xp_cmdshell is enabled, the script creates a folder on the C: drive to store the FILESTREAM data (note: you can use any drive, but I am using the C: drive for this example).

USE master
GO
EXEC sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
EXEC sp_configure 'xp_cmdshell', 1;
GO
RECONFIGURE;
GO
EXEC xp_cmdshell 'IF NOT EXIST C:\DemoFileTable MKDIR C:\DemoFileTable';
GO


2. Create a database named DemoFileTable which uses the FILESTREAM feature, using the below mentioned TSQL code. In the script you can see that we are specifying new options for the FILESTREAM clause, i.e. "NON_TRANSACTED_ACCESS = FULL", and we have also provided the Windows directory name "DemoFileTable" which we created in the previous step.

IF EXISTS (SELECT 1 FROM sys.databases WHERE name = 'DemoFileTable')
BEGIN
    ALTER DATABASE DemoFileTable SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
    DROP DATABASE DemoFileTable;
END;

CREATE DATABASE DemoFileTable
WITH FILESTREAM
(
    NON_TRANSACTED_ACCESS = FULL,
    DIRECTORY_NAME = N'DemoFileTable'
);
GO

/* Add a FileGroup that can be used for FILESTREAM */
ALTER DATABASE DemoFileTable
ADD FILEGROUP DemoFileTable_FG
CONTAINS FILESTREAM;
GO

/* Add the folder that needs to be used for the FILESTREAM filegroup. */
ALTER DATABASE DemoFileTable
ADD FILE
(
    NAME = 'DemoFileTable_File',
    FILENAME = 'C:\DemoFileTable\DemoFileTable_File'
)
TO FILEGROUP DemoFileTable_FG;
GO


3. Next we create a FileTable within the FILESTREAM-enabled database. This can be done by executing the below mentioned TSQL script. The name of the FileTable is DemoFileTable, and you need to specify FILETABLE_DIRECTORY as DemoFileTableFiles and FILETABLE_COLLATE_FILENAME as database_default.

USE DemoFileTable;
GO
/* Create a FileTable */
CREATE TABLE DemoFileTable AS FILETABLE
WITH
(
FILETABLE_DIRECTORY = 'DemoFileTableFiles',
FILETABLE_COLLATE_FILENAME = database_default
);
GO

4. Once the FileTable is created successfully, in Object Explorer > Expand Databases > Expand DemoFileTable database > Expand Tables > Expand FileTables > Expand dbo.DemoFileTable > Expand Columns to view the structure of FileTable as shown below.



5. In the below snippet you can see the files which were created within the C:\DemoFileTable\DemoFileTable_File folder when the FILESTREAM-enabled database was created along with the FileTable DemoFileTableFiles. The filestream.hdr file is a very important system file which contains the FILESTREAM header information. Database administrators need to make sure that this file is not removed or modified, as this will corrupt the FILESTREAM-enabled database.



6. Once the FileTable is created successfully you can access the FileTable using Windows Explorer. The path to access the FileTable will be:

\\SERVERNAME\FILESTREAM_WINDOWS_SHARE_NAME\FILESTREAM_TABLE_NAME\FILETABLE_DIRECTORY\
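Rather than constructing this path by hand, you can ask SQL Server for it: the FileTableRootPath() function returns the root UNC path for the current database or for a specific FileTable. A quick sketch:

USE DemoFileTable;
GO
SELECT FileTableRootPath() AS DatabaseRootPath;
SELECT FileTableRootPath(N'dbo.DemoFileTable') AS FileTableRootPath;
GO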

Copying Documents and Files to the FileTable

Now that we have created a FILESTREAM-enabled database and a FileTable, the next step is to copy documents and files into the newly created FileTable in Windows Explorer. You can copy the files by dragging them or by using a copy-and-paste operation to the below mentioned location.


\\SERVERNAME\FILESTREAM_WINDOWS_SHARE_NAME\FILESTREAM_TABLE_NAME\FILETABLE_DIRECTORY\

In the below snippet you can see that I have copied the MSSQLTips.gif logo to the FileTable folder. To open the image file, double-click MSSQLTips.gif and it will open in Internet Explorer.




How to View Documents and Files Stored in FileTable Using SQL Server Management Studio

To view the files and documents stored in a FileTable execute the below mentioned TSQL code.


Use DemoFileTable;
GO
SELECT * FROM DemoFileTable;
GO
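The SELECT * above returns the full fixed FileTable schema; to focus on the useful metadata, you can project just a few of its system-defined columns:

SELECT name, file_type, cached_file_size, creation_time, is_directory
FROM dbo.DemoFileTable;
GO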



Finally, disable the xp_cmdshell feature which was enabled for this demo by executing the below mentioned TSQL code.


USE master
GO
EXEC sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
EXEC sp_configure 'xp_cmdshell', 0;
GO
RECONFIGURE;
GO

 



About HostForLIFE.eu

HostForLIFE.eu is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.

We offer the latest Windows 2016 Hosting, ASP.NET Core 2.2.1 Hosting, ASP.NET MVC 6 Hosting and SQL 2017 Hosting.

