European Windows 2012 Hosting BLOG

BLOG about Windows 2012 Hosting and SQL 2012 Hosting - Dedicated to European Windows Hosting Customer

European SQL 2012 Hosting - Amsterdam :: Why a Session With sp_readrequest Takes so Long to Execute

clock February 12, 2013 05:36 by author Scott

While running a Long Running Sessions Detection job on a production server, we started receiving alerts that a session had been executing for more than 3 minutes. But what was this session actually doing? Here is the alert report.

SP ID | Stored Procedure Call     | DB Name | Executing Since
58    | msdb.dbo.sp_readrequest;1 | msdb    | 3 min

sp_readrequest is a system stored procedure which basically reads a message request from the queue and returns its contents.

This process can remain active for the time configured in the DatabaseMailExeMinimumLifeTime parameter, set at the time of Database Mail profile configuration. The default value for this external mail process is 600 seconds. According to BOL, DatabaseMailExeMinimumLifeTime is the minimum amount of time, in seconds, that the external mail process remains active.

This can be changed at the time of mail profile configuration, or you can simply run an UPDATE query to change it:

UPDATE msdb.dbo.sysmail_configuration
SET
paramvalue = 60 --60 Seconds
WHERE paramname = 'DatabaseMailExeMinimumLifeTime'


We have changed this to 60 seconds to resolve our problem.
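To confirm the change took effect, you can query the current value back from the same table (a quick sanity check):

```sql
SELECT paramname, paramvalue
FROM msdb.dbo.sysmail_configuration
WHERE paramname = 'DatabaseMailExeMinimumLifeTime';
```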

 



European SQL 2012 Hosting - Amsterdam :: Tabular Models vs PowerPivot Models SQL 2012

clock January 23, 2013 06:32 by author Scott

In SQL Server 2012, there is a new data model, called tabular, that is part of the new feature called the Business Intelligence Semantic Model (BISM).  BISM also includes the multidimensional model (formerly called the UDM).

The Tabular model is based on concepts like tables and relationships that are familiar to anyone who has a relational database background, making it easier to use than the multidimensional model.  The tabular model is a server mode you choose when installing Analysis Services.

The tabular model is an enhancement of the current PowerPivot data model experience, both of which use the Vertipaq engine.  When opening a PowerPivot for SharePoint workbook, a SSAS cube is created behind the scenes, which is why PowerPivot for SharePoint requires SSAS to be installed.

So if tabular models and PowerPivot models use the same Analysis Services engine, why are tabular models necessary when we already have PowerPivot?

There are four things that tabular models offer that PowerPivot models do not:

  1. Scalability - PowerPivot has a 2 GB limit for the size of the Excel file and does not support partitions, but tabular models have no such limit and support partitions.  Tabular models also support DirectQuery
  2. Manageability – There are a lot of tools you can use with the tabular model that you can’t use with PowerPivot: SSMS, AMO, AMOMD, XMLA, Deployment Wizard, AMO for PowerShell, and Integration Services
  3. Securability – Tabular models can use row security and dynamic security, neither of which PowerPivot supports; it offers only Excel workbook file security
  4. Professional development toolchain - Tabular models live in the Visual Studio shell.  Thus, they enjoy all the shell services such as integrated source control, msbuild integration, and Team Build integration.  PowerPivot lives in the Excel environment, thus it is limited to the extensibility model provided in Excel (which doesn’t include source control or build configuration).  Also, because tabular models live in the VS environment, build and deployment can be naturally separated


So Analysis Services can now be installed in one of three server modes: Multidimensional and Data Mining (default), PowerPivot for SharePoint, and Tabular.

More info:

When to choose tabular models over PowerPivot models

Comparing Analysis Services and PowerPivot

Feature by Server Mode or Solution Type (SSAS)

 



European SQL 2012 Hosting - Amsterdam :: SQL Server 2012 Integration Services with HostForLIFE.eu

clock December 24, 2012 05:51 by author Scott

SQL Server 2012 Integration Services introduces an innovative approach to deploying SSIS projects, known as the Project Deployment Model. This is the default and recommended deployment technique, due to a number of benefits it delivers (such as the ability to centralize management of package properties across deployed projects, as well as to monitor and log package execution performance and progress). However, even though the new model has become the recommended and default configuration for new packages created using SQL Server Data Tools, the traditional, package-based methodology remains available and supported. More importantly, in some scenarios it might be considered a more viable option, since it allows for separation of the SQL Server Database Engine and SQL Server Integration Services roles, yielding performance benefits and facilitating distributed extraction, transformation, and loading (ETL) activities. The Project Deployment Model, on the other hand, requires SSIS to be collocated on a SQL Server 2012 instance, due to its dependency on the SSIS catalog. We will discuss the characteristics of the package deployment model and walk through its implementation in this post.


The deployment method available from a SQL Server Data Tools-based project is directly dependent on the model employed during its creation (it is also possible to switch between two models after a project is created by applying a conversion process, which is invoked from the Project menu in SQL Server Data Tools). Effectively, in order to use the traditional package deployment mechanism, you will need to first ensure that your project is based on the package deployment model, which you can easily identify by checking its label in the Solution Explorer window.

In addition, you will also have to modify default project properties. To access them, right-click on the node representing the project in the Solution Explorer window and select the Properties entry from its context sensitive menu. In the resulting dialog box, switch to the Deployment section, where you will find three settings displayed in the grid on the right hand side:

AllowConfigurationChanges – a Boolean value (i.e. True, which is the default, or False) that determines whether it will be possible to choose package configurations during its deployment and assign values of properties or variables they reference.

CreateDeploymentUtility – a Boolean value (True or False, which is the default) that indicates whether initiating the project build will result in the creation of its Deployment Utility.

DeploymentOutputPath – a path that points to the file system folder where the Deployment Utility will be created. The path is relative to the location where project files reside (and set, by default, to bin\Deployment).

Set the value of the CreateDeploymentUtility property to True and modify AllowConfigurationChanges according to your requirements (e.g. set it to False if your project does not contain any package configurations or you want to prevent their modification during deployment). Next, start the build process (using the Build item in the top level main menu of SQL Server Data Tools), which will populate the output folder (designated by the DeploymentOutputPath parameter) with the .dtsx package file (or files, depending on the number of packages in your project), an XML-formatted Deployment Manifest file (whose name is constructed by concatenating the project name and the .SSISDeploymentManifest suffix) and, potentially, a number of other, project-related files (representing, for example, custom components or package configurations).

The subsequent step in the deployment process involves copying the entire content of the Deployment Output folder to a target server and double-clicking on the Deployment Manifest file at its destination. This action will automatically launch the Package Installation Wizard. After its first, informational page, you will be prompted to choose between File system and SQL Server deployments.

The first of these options creates an XML-formatted package file (with extension .dtsx) in an arbitrarily chosen folder. If the project contains configuration files (and the AllowConfigurationChanges project property was set to True when you generated the build), then you will be given an option to modify values of properties included in their content. At the end of this procedure, the corresponding .dtsConfig files will be added to the target folder.

The second option, labeled SQL Server deployment in the Package Installation Wizard, relies on a SQL Server instance as the package store. Once you select it, you will be prompted for the name of the server hosting the SQL Server Database Engine, an appropriate authentication method, and a path where the package should be stored. If you want to organize your packages into a custom folder hierarchy, you will need to pre-create it by connecting to the Integration Services component using SQL Server Management Studio. In case your package contains additional files (such as package configurations), you will also be given an opportunity to designate their location (by default, the wizard points to the Program Files\Microsoft SQL Server\110\DTS\Packages folder).

In either case, you can decide whether you want to validate packages following their installation (which will append the Packages Validation page to the Package Installation Wizard, allowing you to identify any issues encountered during deployment). In addition, when using SQL Server deployment, you have an option to set a package protection level (resulting in assignment of ServerStorage value to the ProtectionLevel package property). When deploying to file system, this capability is not available, forcing you to resort (depending on the deployment target) to either NTFS permissions or SQL Server database roles for securing access to your packages and sensitive information they might contain.

Just as in earlier versions of SQL Server, the local SQL Server Integration Services 11.0 instance (implemented as the MSDTSServer110 service) offers the ability to manage packages stored in the MSDB database and the file system, providing you with additional benefits (such as monitoring functionality), which we will discuss in more detail in our upcoming articles. In the case of MSDB storage, this is accomplished by following the SQL Server deployment process we just described and is reflected by entries appearing under the MSDB subfolder of the Stored Packages folder of the Integration Services node when viewed in SQL Server 2012 Management Studio. In the same Stored Packages folder, you will also find the File System subfolder containing file system-based packages that have been identified by the SSIS service in the local file system. By default, the service automatically enumerates packages located in the Program Files\Microsoft SQL Server\110\DTS\Packages directory; however, it is possible to alter this location by editing the Program Files\Microsoft SQL Server\110\DTS\Binn\MsDtsSrvr.ini.xml file and modifying the content of its StoragePath XML element. Incidentally, the same file controls other characteristics of the MSDTSServer110 service, such as package execution behavior in scenarios where the service fails or stops (by default, the execution is halted) and the location of target SSIS instances (defined using the ServerName XML element).
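For orientation, the layout of MsDtsSrvr.ini.xml is roughly as follows (a sketch only; the exact element values, such as the server name and storage path, will vary per installation):

```xml
<?xml version="1.0" encoding="utf-8"?>
<DtsServiceConfiguration xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <!-- Controls what happens to running packages when the service stops -->
  <StopExecutingPackagesOnShutdown>true</StopExecutingPackagesOnShutdown>
  <TopLevelFolders>
    <Folder xsi:type="SqlServerFolder">
      <Name>MSDB</Name>
      <!-- Target SSIS instance -->
      <ServerName>.</ServerName>
    </Folder>
    <Folder xsi:type="FileSystemFolder">
      <Name>File System</Name>
      <!-- Directory enumerated for file system packages -->
      <StoragePath>..\Packages</StoragePath>
    </Folder>
  </TopLevelFolders>
</DtsServiceConfiguration>
```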

While the Package Installation Wizard is straightforward to use, it is not well suited for deploying multiple packages. This shortcoming can be remedied by taking advantage of the DTUtil.exe command line utility, which in addition to its versatility (including support for package deployment and encryption) can also be conveniently incorporated into batch files. Its syntax takes the form DTUtil /option [value] [/option [value]] …, pairing option names with the values associated with them. For example, the /SQL, /FILE, or /DTS options designating the package storage type (MSDB database, file system, and SSIS Package Store, respectively) can be combined with a value that specifies package location (as a relative or absolute path). By including the COPY, MOVE, DELETE, or EXISTS options (with a value identifying package location) you can effectively copy, move, delete, or verify the existence of packages across all package stores.
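For example, a batch file could deploy a file system package into the MSDB database of the local instance and then verify it (a sketch; the package name and path here are purely illustrative):

```
REM Copy the package from the file system into MSDB on the local instance
dtutil /FILE "C:\Deploy\LoadOrders.dtsx" /COPY SQL;LoadOrders

REM Verify that the package now exists in MSDB
dtutil /SQL LoadOrders /EXISTS
```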

In conclusion, the traditional package deployment methods available in earlier versions of SQL Server Integration Services are still supported and fully functional in the current release. However, in most scenarios, you should consider migrating your environment to the latest Project Deployment Model due to a wide range of advantages it delivers.

 



European SQL 2012 Hosting - Amsterdam :: Enabling Contained Databases in SQL Server 2012

clock September 8, 2012 05:48 by author Scott

The authentication mechanism for logging in to the SQL Server database engine is either Windows authentication or a SQL Server account. Sometimes you will face authentication issues with database portability; for example, when you migrate a database from one SQL Server instance to another, the DBA has to ensure that all logins on the source SQL Server instance exist on the target SQL Server instance. Organisations often experience these problems during failover when using database mirroring.

SQL Server 2012 addresses these authentication and login dependency challenges by introducing Contained Database authentication to enhance the authorization and portability of user databases.

What is Contained Database Authentication?

Contained Database Authentication allows users to authenticate directly to a user database without logins that reside in the database engine. It allows authentication without logins for both SQL users with passwords and Windows users without logins. It is a great feature when you want to implement AlwaysOn Availability Groups.


Enabling Contained Databases

Contained Databases is a property which you can enable or disable via the Advanced Properties page in SQL Server Management Studio or with T-SQL.

Enable Contained Database Authentication using SQL server Management Studio

1. In Object Explorer, right-click a SQL Server instance, and then click Properties.

2. Select the Advanced page, and in the Containment section, set the Enable Contained Databases property to True, and then click OK.



Enable Contained Database Authentication using T-SQL

sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
sp_configure 'contained database authentication', 1;
GO
RECONFIGURE;
GO
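Note that the instance-level setting alone is not enough: each database that should accept contained users must also have its containment option set. A minimal sketch, using a hypothetical database name:

```sql
-- Switch an existing database to partial containment
ALTER DATABASE MyDatabase SET CONTAINMENT = PARTIAL;
```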


Creating Users

If a user does not have a login in the master database, the connection string must include the contained database as the initial catalog. The T-SQL below can be used to create a contained database user with a password.

CREATE USER KennyB
WITH PASSWORD = '2e4ZK933',
DEFAULT_LANGUAGE = [English],
DEFAULT_SCHEMA = [dbo];
GO
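A client then connects by naming the contained database explicitly as the initial catalog; a hypothetical ADO.NET connection string (server and database names are illustrative):

```
Server=MyServer;Initial Catalog=MyContainedDB;User ID=KennyB;Password=2e4ZK933;
```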


To migrate a SQL Server authentication login to a contained database user with a password, you can use the T-SQL below:

sp_migrate_user_to_contained
    @username = N'<User Name>',
    @rename = N'keep_name',
    @disablelogin = N'do_not_disable_login';
GO


Contained Database Authentication Security Concerns

A user with the ALTER ANY USER permission can create and grant access to database users in a contained database without the DBA's knowledge. And if a user gains access to a database via contained database authentication, that user can potentially access other databases within the database engine if those databases have the guest account enabled.
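To keep an eye on this, you can list which databases on an instance are contained with a quick check against sys.databases:

```sql
SELECT name, containment_desc
FROM sys.databases
WHERE containment_desc <> 'NONE';
```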

 



European SQL 2012 Hosting - Amsterdam :: New SSIS Features in SQL Server 2012

clock July 17, 2012 06:42 by author Scott

SQL Server Integration Services (SSIS) has undergone some significant changes in SQL Server 2012, which I will outline in this article.

Connection Managers


Now you have project-based connection managers, which means those connections will be available to all the packages you create. This avoids recreating frequently used connections for every package. Those connections are created under Connection Managers in Solution Explorer, as you can see in the below image.




As in previous versions of SSIS, in SQL Server 2012 the connection managers are shown in the Connection Managers region of the package. However, there is now additional text for project connections so users can easily identify them and take extra care when modifying them.




By right-clicking a project connection manager and selecting Convert to Package Connection, you can demote a project connection to a package connection. Similarly, you have the option of promoting a package connection to a project connection.


Apart from the above two connection types, there are two more: Offline Connections and Expression Connections.




In previous versions, if a connection was invalid, every time you opened the package it would hang until the connection timed out before showing the error. However, in SQL Server 2012, when a connection is found invalid after the initial check, it is set to offline, which avoids checking the connection again. When the connection is ready, you can test the connectivity and bring the connection back online by right-clicking it. In addition, you can set a connection to offline manually. Expression Connections are simply parameters in variables.


The Execute Package Task has undergone a slight change with respect to connection managers. The Execute Package Task now has a new parameter called Reference type as shown in the below image.




Project Reference is for child packages within the project; when this is selected, you will not be shown the connections in the Connection Manager section. External Reference is for packages outside of the project.


ODBC Support

ODBC source and ODBC destination components are available in SSIS 2012. Previously, there were some difficulties in connecting to MySQL because of the unavailability of OLEDB drivers for MySQL. Users were forced to use the OLEDB provider for ODBC drivers, which was comparatively slow. With ODBC support in 2012, you can connect directly to MySQL using ODBC.


Flat File Improvements

Importing flat files is a very important and frequent task in SSIS. However, in previous versions you were unable to import text files with a variable number of columns; a file had to have a fixed number of columns. This is what you see in the preview if you try to import a text file with a variable number of columns in previous versions of SSIS.



If you want to import these types of text files, you may have to use scripting, which is not an easy task.


However, in SQL Server 2012 this issue (or bug, if you prefer to call it that) is fixed, as you can see from the below image.




These kinds of text files are common in legacy systems such as COBOL. In such systems, there will be several different types of data in the same file. For example, order master details and transaction details will be in the same file. The only way you can distinguish them is by the record type: for master records it will be 'M', while for detail records it will be 'D'.


Since the columns of the records differ (i.e. for master records you will have customer id, date, etc., while for detail records you will have product code, quantity, unit price, unit, etc.) you need the facility to support variable columns.


Variables

You will surely have experienced difficulties when it comes to configuring variables in previous versions of SSIS. In SSIS 2012, the handling of variables has undergone significant improvements.




In SQL Server 2012, variable scope is handled differently than in previous versions. In previous versions, the default scope was the task you were currently in, which led to many issues in the past. Now, if you really want to change a variable's scope, you can click the button at the end of the row and modify it.


As we saw with connection managers, variables with expressions now have a different icon, so users can distinguish expression variables from others. This is very handy when it comes to troubleshooting.


Parameters

Parameters are read-only variables, which means you can't change them during package execution. Parameters now appear in their own tab of the package.




The most important feature of a parameter is the Required option. If it is set to True, you have to pass a value to that parameter; if the parameter is not passed, the default value will not be evaluated. By using this, you can avoid mistakes when moving from one environment to another.

If you set the Sensitive property to True, you won't be able to see the parameter value. As shown in the above image, for a password parameter this is a valuable option.

In addition, you have the option of setting project level parameters where the parameters are accessible for all the packages in the SSIS project.


Data Viewer

Enabling data viewers in previous versions of SSIS required quite a bit of effort. With SQL Server 2012 SSIS, simply right-click the data flow path and select Enable Data Viewer and you are done.




Similarly, if you want to disable them follow the same path.


Tasks

Before discussing the new tasks, let us look at the tasks you won't see in SQL Server 2012. The ActiveX Script Task and the Execute DTS 2000 Package Task are removed from SQL Server 2012. Since Microsoft has stopped supporting SQL Server 2000, it has now also stopped support for DTS 2000 package execution. If you are seriously thinking about moving to SQL Server 2012, make sure you have taken steps to convert those SQL Server 2000 DTS packages to SSIS packages.


Unlike in previous versions, you can now edit task components while those components are not connected or are in an error state.



SQL 2012 Hosting :: Improvements to SQL Server Integration Services in SQL Server 2012

clock July 12, 2012 11:51 by author Scott

Because SSIS is a development tool, and the updates are mostly of a technical nature, trying to explain their business value is quite challenging. Putting it simply, the main value to business is that with the updates, development will be easier and therefore faster.

I will focus on a few of the development improvements about which I'm the most excited.

Visual Studio 2010

Business Intelligence Development Studio (BIDS) has been replaced with SQL Server Data Tools, which uses the core of Visual Studio 2010. This does not just apply to SSIS but to the whole BI development environment. It is due to Microsoft's realignment of their internal product delivery cycles, which should help reduce the mismatch between functionality in related tools. This makes deployments much simpler and integration with Team Foundation Server 2010 a lot smoother.

Ability to debug Script Tasks

In previous versions of SQL Server, you had the ability to debug Script Components but not Script Tasks. With the release of SQL Server 2012, this is no longer the case: you can forget about having to output to the console to try and figure out where exactly your code is failing.

Change Data Capture

Although Change Data Capture (CDC) is not new to SQL Server, there are now CDC Tasks and Components within SSIS that make it easier to implement.

Undo and Redo

At long last you are now able to undo or redo any actions – such as bringing back the data flow that you accidentally deleted – without having to reload the whole project. In my opinion this improvement alone makes it worth upgrading!

Flat File Source Improvements

Two great additions to SQL Server 2012 that will solve a lot of headaches when importing data from flat files are the support for varying numbers of columns and embedded text qualifiers.

Project Connection Managers

Gone are the days where you had to recreate connections to your source and destination within each SSIS package. Connections can now be set up at a project level which can then be shared within the packages.

Column Mappings

In SQL Server 2012, SSIS is a lot smarter about how it deals with column mappings and now uses the column names instead of the lineage ID. This means that if you decide to recreate your data source task, you do not have to remap all the columns as was the case in the past. SQL Server 2012 also comes with a Resolve Column Reference Editor which allows you to link unmapped output columns to unmapped input columns across the whole execution tree; in the past this had to be done from task to task.

Parameter Handling

Parameters are a new addition to SSIS and are very useful. In the past you had to use configurations which could only be assigned at a package level. Parameters can now be set at both a package and project level. You can assign three different types of values to parameters, namely Design default, Server default and Execution.

There are quite a few more additions to SSIS (including its built-in reporting capabilities, improvements to the user interface, and integration with Data Quality Services), but the features I have focused on in this post are improvements to issues that I have frequently come across on previous projects. I'm sure these improvements and additions to SSIS will be greatly appreciated by the industry.



European SQL 2012 Hosting - Amsterdam :: New FileTable Feature in SQL Server 2012

clock June 26, 2012 10:10 by author Scott

Problem

The FileTable feature of SQL Server 2012 is an enhancement to the FILESTREAM feature which was introduced in SQL Server 2008. In this tip we will take a look at how to use the FileTable feature of SQL Server 2012.

Solution

A FileTable is a new user table which gets created within a FILESTREAM-enabled database. Using the FileTable feature, organizations can now store files and documents within a special table in SQL Server and have the ability to access those files and documents from Windows. When you use this feature, it will appear to you as if the files and documents reside on a file system rather than in SQL Server. However, in order to use the FileTable feature you need to enable the FILESTREAM feature on the instance of SQL Server 2012. Database administrators can define indexes, constraints and triggers; however, the columns and system-defined constraints cannot be altered or dropped. Also, in order to enable the FILESTREAM feature you need to be a member of the sysadmin or serveradmin fixed server roles.
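As a side note, instance-level FILESTREAM access can also be set from T-SQL once the feature has been enabled in SQL Server Configuration Manager (a sketch; level 2 enables both T-SQL and Windows streaming access, which FileTables require):

```sql
EXEC sp_configure 'filestream access level', 2;
RECONFIGURE;
```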

Steps to Setup

1. Execute the below TSQL code to enable the xp_cmdshell feature on SQL Server 2012. Once xp_cmdshell is enabled, we use it to create a folder on the C: drive to store the FILESTREAM data (note: you can use any drive, but I am using the C: drive for this example).

USE master
GO
EXEC sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
EXEC sp_configure 'xp_cmdshell', 1;
GO
RECONFIGURE;
GO
EXEC xp_cmdshell 'IF NOT EXIST C:\DemoFileTable MKDIR C:\DemoFileTable';
GO


2. Create a database named DemoFileTable which uses the FILESTREAM feature, for the purpose of this demo, using the below TSQL code. In the script you can see that we are specifying new options for the FILESTREAM clause, i.e. “NON_TRANSACTED_ACCESS = FULL”, and we have also provided the Windows directory name “DemoFileTable” which we created in the previous step.

IF EXISTS (SELECT 1 FROM sys.databases WHERE name = 'DemoFileTable')
BEGIN
    ALTER DATABASE DemoFileTable SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
    DROP DATABASE DemoFileTable;
END;

CREATE DATABASE DemoFileTable
WITH FILESTREAM
(
    NON_TRANSACTED_ACCESS = FULL,
    DIRECTORY_NAME = N'DemoFileTable'
);
GO

/* Add a FileGroup that can be used for FILESTREAM */
ALTER DATABASE DemoFileTable
ADD FILEGROUP DemoFileTable_FG
CONTAINS FILESTREAM;
GO

/* Add the folder that needs to be used for the FILESTREAM filegroup. */
ALTER DATABASE DemoFileTable
ADD FILE
(
    NAME = 'DemoFileTable_File',
    FILENAME = 'C:\DemoFileTable\DemoFileTable_File'
)
TO FILEGROUP DemoFileTable_FG;
GO


3. Next, create a FileTable within the FILESTREAM-enabled database by executing the below TSQL script. The name of the FileTable is DemoFileTable, and you need to specify FILETABLE_DIRECTORY as DemoFileTableFiles and FILETABLE_COLLATE_FILENAME as database_default.

USE DemoFileTable;
GO
/* Create a FileTable */
CREATE TABLE DemoFileTable AS FILETABLE
WITH
(
FILETABLE_DIRECTORY = 'DemoFileTableFiles',
FILETABLE_COLLATE_FILENAME = database_default
);
GO

4. Once the FileTable is created successfully, in Object Explorer > Expand Databases > Expand DemoFileTable database > Expand Tables > Expand FileTables > Expand dbo.DemoFileTable > Expand Columns to view the structure of FileTable as shown below.



5. In the below snippet you can see the files which were created within the C:\DemoFileTable\DemoFileTable_File folder when the FILESTREAM-enabled database was created along with the FileTable. The filestream.hdr file is a very important system file which contains FILESTREAM header information. Database administrators need to make sure this file is not removed or modified, as doing so will corrupt the FILESTREAM-enabled database.



6. Once the FileTable is created successfully you can access the FileTable using Windows Explorer. The path to access the FileTable will be:

\\SERVERNAME\FILESTREAM_WINDOWS_SHARE_NAME\FILESTREAM_TABLE_NAME\FILETABLE_DIRECTORY\

Copying Documents and Files to the FileTable

Now that we have created a FILESTREAM-enabled database and a FileTable, the next step is to copy documents and files to the newly created FileTable in Windows Explorer. You can copy files by dragging them or by using a copy-and-paste operation to the below location.


\\SERVERNAME\FILESTREAM_WINDOWS_SHARE_NAME\FILESTREAM_TABLE_NAME\FILETABLE_DIRECTORY\

In the below snippet you can see that I have copied the MSSQLTips.gif logo to the FileTable folder. To open the image file, double-click the MSSQLTips.gif file and it will open in Internet Explorer.




How to View Documents and Files Stored in FileTable Using SQL Server Management Studio

To view the files and documents stored in a FileTable execute the below mentioned TSQL code.


Use DemoFileTable;
GO
SELECT * FROM DemoFileTable;
GO
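You can also retrieve each file's UNC path directly in T-SQL; a sketch combining the FileTableRootPath() function with the GetFileNamespacePath() method of the file_stream column:

```sql
SELECT name,
       FileTableRootPath() + file_stream.GetFileNamespacePath() AS UncPath
FROM DemoFileTable;
```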



Finally, disable the xp_cmdshell feature which was enabled for this demo by executing the below TSQL code.


USE master
GO
EXEC sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
EXEC sp_configure 'xp_cmdshell', 0;
GO
RECONFIGURE;
GO

 



European SQL 2012 Hosting - Amsterdam :: New string function in SQL Server 2012 – FORMAT()

clock June 21, 2012 09:21 by author Scott

 

Formatting numbers in an SSRS report is a common task. For example, you may want to format a number as currency or percentage.

You can select a format from the number page of the properties window.




You can let SQL handle the formatting, so that data in the result set is pre-formatted.


DECLARE @Sales MONEY = 32182000.85;

SELECT '$' + CONVERT(VARCHAR(32), @Sales, 1);

Results:




Finally, you can use the newly introduced FORMAT() function in SQL Server 2012. FORMAT() will, according to Books Online, return a value formatted with the specified format and optional culture. So, instead of converting and concatenating as we did in the previous example, FORMAT() can be used:


DECLARE @Sales MONEY = 32182000.85;

SELECT FORMAT(@Sales, 'c', 'en-us');


Results:




FORMAT() accepts the following parameters:

- Value: the actual value that needs to be formatted.
- Format: the format to apply to the value. Currency, percentage, and date are a few examples.
- Culture (optional): specifies the language. More about cultures in BOL.

Consider the following query, where the value is formatted for three different languages based on the culture:


Formatting Currency:

DECLARE @Sales MONEY = 32182000.85;

SELECT FORMAT(@Sales, 'c', 'it-IT') [Italy]
     , FORMAT(@Sales, 'c', 'fr') [France]
     , FORMAT(@Sales, 'c', 'ru-RU') [Russian];

Results:




Formatting percentages:

DECLARE @Per DECIMAL(2,2) = 0.72;

SELECT FORMAT(@Per, 'p0', 'en-us')
     , FORMAT(@Per, 'p2', 'en-us');


Results:



Conclusion:

Similar formatting is ideally done in the presentation layer, Reporting Services for example. But I would rather let Reporting Services do minimal processing. FORMAT() simplifies string formatting and provides functionality that most developers have always wanted.
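FORMAT() also handles dates, using the same .NET format strings. A quick sketch (the `'d'` specifier is a standard .NET short-date format; the others are custom patterns):

```sql
-- Sketch: FORMAT() with date values and .NET format strings.
DECLARE @d DATETIME = '2012-06-21';

SELECT FORMAT(@d, 'd', 'en-US')  AS [US short date],     -- 6/21/2012
       FORMAT(@d, 'd', 'de-DE')  AS [German short date], -- 21.06.2012
       FORMAT(@d, 'yyyy-MM-dd')  AS [ISO style],
       FORMAT(@d, 'dddd, MMMM dd, yyyy', 'en-US') AS [Long form];
```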

 

 



European SQL 2012 Hosting - Amsterdam :: Installing SSAS (Analysis Services 2012) in Tabular Mode

clock May 25, 2012 06:53 by author Scott

In SQL Server 2012, there are two modes for installing Analysis Services: "Multidimensional and Data Mining Mode" and "Tabular Mode". During the install you can choose only one of the two, not both. If you'd like to have both, you will need to create a new instance of Analysis Services by launching SQL Server 2012 setup again and choosing Tabular Mode on the "Analysis Services Configuration" page.







With SQL Server 2012, there are two types of Analysis Services projects in Business Intelligence Development Studio (BIDS). You can create and edit the multidimensional projects that you all know and love. You can also use BIDS to create tabular projects.


Check out these additional resources on Tabular Models:


http://blogs.msdn.com/b/analysisservices/archive/2011/07/13/welcome-to-tabular-projects.aspx

 



European SQL 2012 Hosting - Amsterdam :: Top Ten Reasons to Use SQL 2012

clock May 9, 2012 06:07 by author Scott

SQL Server 2012 has just been released, and I get asked all the time why you should upgrade to the latest version. There are hundreds of reasons; here I have tried to pick the best 10. More info here.

1. AlwaysOn SQL Server Failover Cluster Instances

- Multi-subnet failover clusters: A SQL Server multi-subnet failover cluster is a configuration where each failover cluster node is connected to a different subnet or different set of subnets. These subnets can be in the same location or in geographically dispersed sites; clustering across geographically dispersed sites is sometimes referred to as stretch clustering. Because there is no shared storage that all the nodes can access, data must be replicated between the data storage on the multiple subnets. With data replication, there is more than one copy of the data available, so a multi-subnet failover cluster provides a disaster recovery solution in addition to high availability. For more information, see SQL Server Multi-Subnet Clustering.

- Flexible failover policy for cluster health detection: In a SQL Server failover cluster instance, only one node can own the cluster resource group at a given time. Client requests are served through this node for that failover cluster instance. In the case of a failure, group ownership is moved to another node in the failover cluster; this process is called failover. The improved failure detection introduced in SQL Server 2012, together with the new failure condition level property, allows you to configure a more flexible failover policy. For more information, see Failover Policy for Failover Cluster Instances.
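The failure condition level is set with ALTER SERVER CONFIGURATION. A sketch, using the documented cluster properties (level 3 is the default; the values shown here are illustrative, not recommendations):

```sql
-- Sketch: tune the failover policy of a SQL Server 2012 failover cluster instance.
-- FailureConditionLevel ranges from 0 (no automatic failover) to 5 (most aggressive).
ALTER SERVER CONFIGURATION
SET FAILOVER CLUSTER PROPERTY FailureConditionLevel = 3;

-- The health-check timeout (in milliseconds) can be tuned the same way.
ALTER SERVER CONFIGURATION
SET FAILOVER CLUSTER PROPERTY HealthCheckTimeout = 30000;
```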

- Indirect checkpoints: The indirect checkpoints feature provides a database-specific alternative to automatic checkpoints, which are configured by a server property. Indirect checkpoints implement a new checkpointing algorithm for the Database Engine. This algorithm provides a more accurate guarantee of database recovery time in the event of a crash or a failover than is provided by automatic checkpoints. To ensure that database recovery does not exceed allowable downtime for a given database, you can specify the maximum allowable downtime for that database.
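Indirect checkpoints are enabled per database by setting a target recovery time. A minimal sketch (the database name is a placeholder; setting the value back to 0 reverts to automatic checkpoints):

```sql
-- Sketch: enable indirect checkpoints by setting a target recovery time.
-- 'AdventureWorks2012' is a placeholder database name.
ALTER DATABASE AdventureWorks2012
SET TARGET_RECOVERY_TIME = 60 SECONDS;
```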

2. Windows PowerShell

Starting with SQL Server 2012, Windows PowerShell is no longer installed by SQL Server Setup. Windows PowerShell 2.0 is a prerequisite for installing SQL Server 2012. If PowerShell 2.0 is not installed or enabled on your computer, you can enable it by following the instructions on the Windows Management Framework page. For more information about SQL Server PowerShell, see SQL Server PowerShell.

SQL Server 2012 now uses the new Windows PowerShell 2.0 feature called modules for loading the SQL Server components into a PowerShell environment. Users import the sqlps module into PowerShell, and the module then loads the SQL Server snap-ins. For more information, see Run Windows PowerShell from SQL Server Management Studio.

The sqlps utility is no longer a PowerShell 1.0 mini-shell; it now starts PowerShell 2.0 and imports the sqlps module. This improves SQL Server interoperability by making it easier for PowerShell scripts to also load the snap-ins for other products. The sqlps utility was also added to the list of deprecated features starting in SQL Server 2012.

The SQL Server PowerShell provider includes two new cmdlets: backup-sqldatabase and restore-sqldatabase. For more information, use the get-help cmdlet after loading in the sqlps module.

3. Data-tier Applications

- The data-tier application (DAC) upgrade has been changed to an in-place process that alters the existing database to match the schema defined in the new version of the DAC. This replaces the side-by-side upgrade process, which created a new database with the new schema definitions. The Upgrade a Data-Tier Application wizard has been updated to perform an in-place upgrade. The Upgrade method of the DacStore type is now deprecated, and replaced with a new IncrementalUpgrade method. Upgrades are also supported for DACs deployed to SQL Azure. For more information, see Upgrade a Data-tier Application.

- In addition to just extracting a schema definition as a new DAC package file, you can now export both the schema definition and data from a database as a DAC export file. You can then import the file to create a new database with the same schema and data. For more information, see Export a Data-tier Application and Import a BACPAC File to Create a New User Database.

- Data-tier applications now support many more objects than in SQL Server 2008 R2. For more information, see DAC Support For SQL Server Objects and Versions.

4. Server Modes for Analysis Services Instances: Multidimensional, Tabular, and SharePoint

This release adds a server mode concept to an Analysis Services installation. An instance is always installed in one of three modes that determines the memory management and storage engines used to query and process data. Server modes include Multidimensional and Data Mining, SharePoint, and Tabular. For more information, see Determine the Server Mode of an Analysis Services Instance.

5. Integration Services Performance: Reduced Memory Usage by the Merge and Merge Join Transformations

Microsoft has made the Integration Services Merge and Merge Join transformations more robust and reliable. This is achieved by reducing the risk that these components will consume excessive memory when the multiple inputs produce data at uneven rates. This improvement helps packages that use the Merge or Merge Join transformations to use memory more efficiently.


Microsoft has also provided new properties and methods for developers of custom data flow components to implement a similar solution in their own components. This improvement makes it more feasible to develop a robust custom data flow component that supports multiple inputs. For more information, see Developing Data Flow Components with Multiple Inputs.

6. Answering the Need with (DQS) Data Quality Services

Data quality is not defined in absolute terms. It depends upon whether data is appropriate for the purpose for which it is intended. DQS identifies potentially incorrect data, and provides you with an assessment of the likelihood that the data is in fact incorrect. DQS provides you with a semantic understanding of the data so you can decide its appropriateness. DQS enables you to resolve issues involving incompleteness, lack of conformity, inconsistency, inaccuracy, invalidity, and data duplication.

DQS provides the following features to resolve data quality issues.

- Data Cleansing: the modification, removal, or enrichment of data that is incorrect or incomplete, using both computer-assisted and interactive processes. For more information, see Data Cleansing.

- Matching: the identification of semantic duplicates in a rules-based process that enables you to determine what constitutes a match and perform de-duplication. For more information, see Data Matching.

- Reference Data Services: verification of the quality of your data using the services of a reference data provider. You can use reference data services from Windows Azure Marketplace DataMarket to easily cleanse, validate, match, and enrich data. For more information, see Reference Data Services in DQS.

- Profiling: the analysis of a data source to provide insight into the quality of the data at every stage in the knowledge discovery, domain management, matching, and data cleansing processes. Profiling is a powerful tool in a DQS data quality solution. You can create a data quality solution in which profiling is just as important as knowledge management, matching, or data cleansing. For more information, see Data Profiling and Notifications in DQS.

- Monitoring: the tracking and determination of the state of data quality activities. Monitoring enables you to verify that your data quality solution is doing what it was designed to do. For more information, see DQS Administration.

- Knowledge Base: Data Quality Services is a knowledge-driven solution that analyzes data based upon knowledge that you build with DQS. This enables you to create data quality processes that continually enhance the knowledge about your data and, in so doing, continually improve the quality of your data.

7. Replication Support for AlwaysOn Availability Groups

Replication supports the following features on Availability groups:

- A publication database can be part of an availability group. The publisher instances must share a common distributor. Transactional, merge, and snapshot replication are supported.

  In an AlwaysOn Availability Group, a secondary cannot be a publisher. Republishing is not supported when replication is combined with AlwaysOn.

  Peer-to-peer (P2P), bi-directional, reciprocal transactional publications, and Oracle publishing are not supported.

- A database that is enabled for Change Data Capture (CDC) can be part of an availability group.

- A database enabled for Change Tracking (CT) can be part of an availability group.

Four new stored procedures provide replication support for AlwaysOn:

- sp_redirect_publisher (Transact-SQL)
- sp_get_redirected_publisher (Transact-SQL)
- sp_validate_redirected_publisher (Transact-SQL)
- sp_validate_replica_hosts_as_publishers (Transact-SQL)

For more information about replication with AlwaysOn, see Configure Replication for AlwaysOn Availability Groups (SQL Server), Maintaining an AlwaysOn Publication Database (SQL Server), and Replication, Change Tracking, Change Data Capture, and AlwaysOn Availability Groups (SQL Server).
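As a sketch of how the new procedures fit together: after the publication database joins an availability group, you point the distributor at the availability group listener. All server, database, and listener names below are placeholders:

```sql
-- Sketch: run at the distributor, in the distribution database.
-- 'SQLPUB', 'SalesDB', and 'AGListener' are placeholder names.
USE distribution;
GO
EXEC sys.sp_redirect_publisher
    @original_publisher   = N'SQLPUB',
    @publisher_db         = N'SalesDB',
    @redirected_publisher = N'AGListener';

-- Verify that the availability replicas are valid publishers.
DECLARE @redirected_publisher SYSNAME;
EXEC sys.sp_validate_replica_hosts_as_publishers
    @original_publisher   = N'SQLPUB',
    @publisher_db         = N'SalesDB',
    @redirected_publisher = @redirected_publisher OUTPUT;
```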

8. Power View


Power View, a feature of SQL Server 2012 Reporting Services Add-in for Microsoft SharePoint Server 2010 Enterprise Edition, is an interactive data exploration, visualization, and presentation experience. It provides drag-and-drop ad hoc reporting for business users such as data analysts, business decision makers, and information workers. Power View reports are in a new file format, RDLX.


Power View expands on the self-service BI capabilities delivered with PowerPivot for Excel and PowerPivot for SharePoint by enabling customers to visualize and interact with modeled data in a meaningful way, using interactive visualizations, animations, and smart querying. It is a browser-based Silverlight application launched from within SharePoint Server 2010 that enables users to present and share insights with others in their organization through interactive presentations.

Based on Tabular Models

With Power View, customers start from an SQL Server 2012 Analysis Services (SSAS) tabular model to build their reports. Tabular models use metadata to present an underlying data source to end users, with predefined relationships and behaviors, in terms they understand. For more information about tabular models, see What's New (Analysis Services) and Tabular Modeling (SSAS Tabular).

Coexists with Report Builder

Power View does not replace Report Builder, the report authoring tool for richly designed operational reports. Power View addresses the need for Web-based, ad hoc reporting. It co-exists with the latest version of Report Builder, which also ships in SQL Server 2012.

For more information, see the following:

-
Power View (SSRS)

-
Power View and PowerPivot HelloWorldPicnic Samples for SQL Server 2012 (http://go.microsoft.com/fwlink/?LinkId=219108)

-
Tutorial: Create a Sample Report in Power View (http://go.microsoft.com/fwlink/?LinkId=221204)

9. What's New in Master Data Services: Use Excel to Manage Master Data


You can now manage your master data in the Master Data Services Add-in for Excel. You can use this add-in to load a filtered set of data from your Master Data Services database, work with the data in Excel, and then publish the data back to the database. If you are an administrator, you can also use the add-in to create new entities and attributes. It is easy to share shortcut query files, which contain information about the server, model, version, entity, and any applied filters. You can send the shortcut query file to another user via Microsoft Outlook. You can refresh data in the Excel worksheet with data from the server, refreshing either the entire Excel worksheet or a contiguous selection of MDS-managed cells in the worksheet. For more information, see Master Data Services Add-in for Microsoft Excel.

Match Data before Loading

Before adding more data to MDS, you can now confirm that you are not adding duplicate records. The MDS Add-in for Excel uses SQL Server Data Quality Services (DQS) to compare two sources of data: the data from MDS and the data from another system or spreadsheet. DQS provides suggestions for updating your data, along with the percent of confidence that the changes are correct. For more information, see Data Quality Matching in the MDS Add-in for Excel.

Load Data into MDS Using Entity-Based Staging

Loading data into MDS has become easier. You can now load all members and attribute values for an entity at one time. Previously you had to load members and attributes in separate batches. See Importing Data (Master Data Services).

10. SQL Server Install

- Server Core installation: Starting with SQL Server 2012, you can install SQL Server on Windows Server 2008 R2 Server Core SP1. For more information, see Install SQL Server 2012 on Server Core.

- SQL Server Data Tools (formerly called Business Intelligence Development Studio): Starting with SQL Server 2012, you can install SQL Server Data Tools (SSDT), which provides an IDE for building solutions for the Business Intelligence components: Analysis Services, Reporting Services, and Integration Services.

  SSDT also includes "Database Projects", which provides an integrated environment for database developers to carry out all their database design work for any SQL Server platform (both on and off premise) within Visual Studio. Database developers can use the enhanced Server Object Explorer in Visual Studio to easily create or edit database objects and data, or execute queries.

- SQL Server multi-subnet clustering: You can now configure a SQL Server failover cluster using clustered nodes on different subnets. For more information, see SQL Server Multi-Subnet Clustering.

- SMB file share as a supported storage option: System databases (master, model, msdb, and tempdb) and Database Engine user databases can be installed on a file share on an SMB file server. This applies to both SQL Server stand-alone and SQL Server failover cluster installations. For more information, see Install SQL Server with SMB fileshare as a storage option.

If you need SQL 2012 hosting, please visit our site at http://www.hostforlife.eu. If you have any question about this feature, please feel free to email us at [email protected].


