European Windows 2012 Hosting BLOG

BLOG about Windows 2012 Hosting and SQL 2012 Hosting - Dedicated to European Windows Hosting Customers

HostForLIFE.eu Proudly Launches Visual Studio 2017 Hosting

clock December 2, 2016 06:51 by author Peter

European leading web hosting provider, HostForLIFE.eu announces the launch of Visual Studio 2017 Hosting

HostForLIFE.eu was established to cater to an underserved market in the hosting industry: web hosting for customers who want excellent service. HostForLIFE.eu is an affordable, reliable hosting provider offering constant uptime, excellent customer service, and quality support for advanced Windows and ASP.NET technology. HostForLIFE.eu proudly announces the availability of Visual Studio 2017 hosting across its entire server environment.

The smallest install is just a few hundred megabytes, yet it still contains basic code editing support for more than twenty languages along with source code control. Most users will want to install more, so customers can add one or more 'workloads' that represent common frameworks, languages and platforms - covering everything from .NET desktop development to data science with R, Python and F#.

System administrators can now create an offline layout of Visual Studio that contains all of the content needed to install the product without requiring Internet access. To do so, run the bootstrapper executable associated with the product you want to make available offline using the --layout [path] switch (e.g. vs_enterprise.exe --layout c:\mylayout). This will download the packages required to install offline. Optionally, you can specify a locale code for the product languages you want included (e.g. --lang en-us). If not specified, support for all localized languages will be downloaded.

HostForLIFE.eu hosts its servers in top-class data centers located in Amsterdam (NL), London (UK), Paris (FR), Frankfurt (DE) and Seattle (US) to guarantee 99.9% network uptime. All data centers feature redundancies in network connectivity, power, HVAC, security, and fire suppression. All hosting plans from HostForLIFE.eu include 24×7 support and a 30-day money-back guarantee. Customers can start hosting their Visual Studio 2017 site on this environment from as low as €3.00/month.

HostForLIFE.eu is a popular online ASP.NET-based hosting service provider catering to customers with these needs. The company has managed to build a strong client base in a very short period of time. It is known for offering ultra-fast, fully managed and secured services in a competitive market.

HostForLIFE.eu offers the latest European Visual Studio 2017 hosting installation to all of its new and existing customers. Customers can simply deploy their Visual Studio 2017 website via the world-class control panel or a conventional FTP tool. HostForLIFE.eu is happy to offer the most up-to-date Microsoft services and has always had a great appreciation for the products that Microsoft offers.

Further information and the full range of Visual Studio 2017 Hosting features can be viewed here: http://hostforlife.eu/European-Visual-Studio-2017-Hosting

About Company
HostForLIFE.eu is a European Windows Hosting Provider which focuses on the Windows Platform only. HostForLIFE.eu delivers on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.

HostForLIFE.eu has been awarded the No. 1 SPOTLIGHT Recommended Hosting Partner by Microsoft. Their service is ranked in the top spot in several European countries, such as Germany, Italy, the Netherlands, France, Belgium, the United Kingdom, Sweden, Finland, Switzerland and other European countries. Besides this award, they have also won several awards from reputable organizations in the hosting industry, and the details can be found on their official website.

 



SQL Server 2016 Hosting - HostForLIFE.eu :: LAG and LEAD Functions in SQL Server

clock November 24, 2016 07:33 by author Peter

In this post, I will tell you about the LAG and LEAD functions in SQL Server. These two functions are analytical (window) functions. In real-world scenarios we often need to analyze data against neighbouring rows - for example, comparing sales with the previous year's sales.

The LAG and LEAD functions support the window partitioning and ordering clauses in SQL Server, but they do not support the window frame clause.

LAG
The LAG function returns the column value from a previous row, based on the specified ordering.

LEAD
The LEAD function returns the column value from a following row, based on the specified ordering.
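
Both functions share the same general signature; the sketch below summarises the arguments and runs against the DBO.SALES table created in the demo that follows:

    -- LAG  (scalar_expression [, offset [, default]]) OVER ([PARTITION BY ...] ORDER BY ...)
    -- LEAD (scalar_expression [, offset [, default]]) OVER ([PARTITION BY ...] ORDER BY ...)
    --   offset  : how many rows back (LAG) or ahead (LEAD) to look; defaults to 1
    --   default : value returned when the offset runs past the available rows; defaults to NULL
    SELECT SALES_YEAR,
           SALES_AMOUNT,
           LAG(SALES_AMOUNT)  OVER (ORDER BY SALES_YEAR) AS PREVIOUS_YEAR_SALES,
           LEAD(SALES_AMOUNT) OVER (ORDER BY SALES_YEAR) AS NEXT_YEAR_SALES
    FROM DBO.SALES
    WHERE PROD_ID = 1;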

Demo
    CREATE TABLE DBO.SALES 
    ( 
    PROD_ID INT , 
    SALES_YEAR INT, 
    SALES_AMOUNT INT  
    ) 

    INSERT INTO DBO.SALES(PROD_ID,SALES_YEAR,SALES_AMOUNT) VALUES(1,2009,10000) 
    INSERT INTO DBO.SALES(PROD_ID,SALES_YEAR,SALES_AMOUNT) VALUES(1,2010,9000) 
    INSERT INTO DBO.SALES(PROD_ID,SALES_YEAR,SALES_AMOUNT) VALUES(1,2011,8000) 
    INSERT INTO DBO.SALES(PROD_ID,SALES_YEAR,SALES_AMOUNT) VALUES(1,2012,7000) 
    INSERT INTO DBO.SALES(PROD_ID,SALES_YEAR,SALES_AMOUNT) VALUES(1,2013,14000) 
    INSERT INTO DBO.SALES(PROD_ID,SALES_YEAR,SALES_AMOUNT) VALUES(1,2014,18000) 
    INSERT INTO DBO.SALES(PROD_ID,SALES_YEAR,SALES_AMOUNT) VALUES(1,2015,15000) 
    INSERT INTO DBO.SALES(PROD_ID,SALES_YEAR,SALES_AMOUNT) VALUES(2,2013,12000) 
    INSERT INTO DBO.SALES(PROD_ID,SALES_YEAR,SALES_AMOUNT) VALUES(2,2014,8000) 
    INSERT INTO DBO.SALES(PROD_ID,SALES_YEAR,SALES_AMOUNT) VALUES(2,2015,16000) 
    INSERT INTO DBO.SALES(PROD_ID,SALES_YEAR,SALES_AMOUNT) VALUES(3,2012,7000) 
    INSERT INTO DBO.SALES(PROD_ID,SALES_YEAR,SALES_AMOUNT) VALUES(3,2013,8000) 
    INSERT INTO DBO.SALES(PROD_ID,SALES_YEAR,SALES_AMOUNT) VALUES(3,2014,9700) 
    INSERT INTO DBO.SALES(PROD_ID,SALES_YEAR,SALES_AMOUNT) VALUES(3,2015,12500) 

    SELECT * FROM DBO.SALES

The following example shows the Previous Year Sales Amount.
    SELECT *, LAG(SALES_AMOUNT) OVER(ORDER BY PROD_ID ,SALES_YEAR) [Previous Year Sales] FROM DBO.SALES

The following example shows the Next Year Sales Amount.
    SELECT * , 
    LEAD(SALES_AMOUNT) OVER(ORDER BY PROD_ID ,SALES_YEAR) [Next Year Sales]  
    FROM DBO.SALES 

The following example shows the previous year and next year sales amounts using the PARTITION BY clause.
    SELECT *, 
     
       LAG(SALES_AMOUNT) OVER(PARTITION BY PROD_ID ORDER BY PROD_ID ,SALES_YEAR) [PREVIOUS YEAR SALES] , 
       LEAD(SALES_AMOUNT) OVER(PARTITION BY PROD_ID ORDER BY PROD_ID ,SALES_YEAR) [NEXT YEAR SALES]  
    FROM DBO.SALES     
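
For product 2 alone, for example, the partitioned query produces the result shown in the comments below (values derived from the rows inserted above):

    SELECT *,
       LAG(SALES_AMOUNT)  OVER(PARTITION BY PROD_ID ORDER BY PROD_ID, SALES_YEAR) [PREVIOUS YEAR SALES],
       LEAD(SALES_AMOUNT) OVER(PARTITION BY PROD_ID ORDER BY PROD_ID, SALES_YEAR) [NEXT YEAR SALES]
    FROM DBO.SALES
    WHERE PROD_ID = 2;
    -- PROD_ID  SALES_YEAR  SALES_AMOUNT  PREVIOUS YEAR SALES  NEXT YEAR SALES
    -- 2        2013        12000         NULL                 8000
    -- 2        2014        8000          12000                16000
    -- 2        2015        16000         8000                 NULL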

The following example shows an offset other than 1.

The default offset is 1. If we want an offset other than 1, then we need to provide a second argument value to the LAG and LEAD functions.
    SELECT * , 
       LAG(SALES_AMOUNT,2)  OVER(ORDER BY PROD_ID ,SALES_YEAR) [PREVIOUS YEAR SALES] , 
       LEAD(SALES_AMOUNT,2)  OVER( ORDER BY PROD_ID ,SALES_YEAR) [NEXT YEAR SALES]  
    FROM DBO.SALES 

The following example shows replacing NULL with a default value, provided as the third argument:

    SELECT * , 
       LAG(SALES_AMOUNT,2,0)  OVER(ORDER BY PROD_ID ,SALES_YEAR) [PREVIOUS YEAR SALES] , 
       LEAD(SALES_AMOUNT,2,0)  OVER( ORDER BY PROD_ID ,SALES_YEAR) [NEXT YEAR SALES]  
    FROM DBO.SALES 

 

HostForLIFE.eu SQL Server 2016 Hosting
HostForLIFE.eu is a European Windows Hosting Provider which focuses on the Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.



European SQL 2016 Hosting - HostForLIFE.eu :: Json in SQL Server 2016

clock November 17, 2016 10:17 by author Scott

In this article, we will focus on JSON support in SQL Server 2016. We will take a look at how JSON will impact data warehouse solutions.

Since the advent of Extensible Markup Language (XML), many modern web applications have focused on providing data that is both human-readable and machine-readable. From a relational database perspective, SQL Server kept up with these modern web applications by providing support for XML data in the form of an XML data type and several functions that can be used to parse, query and manipulate XML-formatted data.

As a result of being supported in SQL Server, data warehouse solutions based on SQL Server were then able to source XML-based OLTP data into a data mart. To illustrate this point, let's take a look at the XML representation of our fictitious Fruit Sales data shown in the figure below.

To process this data in the data warehouse, we would first have to convert it into a relational format of rows and columns using T-SQL XML built-in functions such as the nodes() function.
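
The original post shows this script as an image; the following is only a minimal sketch of the idea, assuming a simple attribute-based XML shape for the fictitious fruit sales data (element names and values here are illustrative):

    DECLARE @xml XML = N'
    <Sales>
      <Sale Fruit="Apple"  Quantity="10" Price="1.50" />
      <Sale Fruit="Banana" Quantity="25" Price="0.75" />
    </Sales>';

    -- nodes() shreds the XML into one row per <Sale> element,
    -- and value() pulls each attribute out as a typed column
    SELECT s.value('@Fruit',    'VARCHAR(50)')  AS Fruit,
           s.value('@Quantity', 'INT')          AS Quantity,
           s.value('@Price',    'DECIMAL(9,2)') AS Price
    FROM @xml.nodes('/Sales/Sale') AS t(s);

    -- Fruit    Quantity  Price
    -- Apple    10        1.50
    -- Banana   25        0.75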

The results of the above script are shown below in a format recognisable to the data warehouse.

Soon after XML became a dominant language for data interchange for many modern web applications, JavaScript Object Notation (JSON) was introduced as a lightweight data-interchange format that is more convenient for web applications to process than XML. Likewise, most relational database vendors released newer versions of their database systems that included support for JSON-formatted data. Unfortunately, Microsoft SQL Server was not one of those vendors, and up until SQL Server 2014, JSON data was not supported. Obviously this lack of support for JSON created challenges for data warehouse environments based on SQL Server.

Although there were workarounds (i.e. using Json.NET) to address the lack of JSON support in SQL Server, there was always a sense that these workarounds were inadequate and time-wasting, and that they forced data warehouse development teams to pick up a new skill (i.e. learn .NET). Fortunately, the release of SQL Server 2016 has ensured that development teams can throw away their JSON workarounds, as JSON is supported in SQL Server 2016.

Parsing JSON Data into Data Warehouse

Similarly to XML support in SQL Server, SQL Server's support of JSON can be classified in two ways:

1. Converting Relational dataset into JSON format
2. Converting JSON dataset into relational format

However, for the purposes of this discussion we are focusing primarily on the second part - converting JSON-formatted data (retrieved from OLTP sources) into a relational format of rows and columns. To illustrate our discussion points we once again make use of the fictitious fruit sales dataset. This time around the fictitious dataset has been converted into a JSON format, as shown below.

ISJSON function

As part of supporting JSON-formatted data, other relational databases such as MySQL and PostgreSQL 9.2 have introduced a separate JSON data type. Amongst other things, the JSON data type conducts validation checks to ensure that values being stored are indeed of valid JSON format.

Unfortunately, SQL Server 2016 (and Oracle 12c) does not have a special data type for storing JSON data; instead, a variable character (varchar/nvarchar) data type is used. Therefore, a recommended practice for dealing with JSON data in SQL Server 2016 is to first ensure that you are indeed dealing with valid JSON data. The simplest way to do so is to use the ISJSON function. This is a built-in T-SQL function that returns 1 for a valid JSON dataset and 0 for an invalid one.
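
A minimal sketch of ISJSON in action (the variable and its contents are made up for illustration):

    DECLARE @json NVARCHAR(MAX) = N'[{"Fruit":"Apple","Quantity":10}]';

    SELECT ISJSON(@json)              AS ValidJson,    -- returns 1
           ISJSON(N'not json at all') AS InvalidJson;  -- returns 0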

OPENJSON function

Now that we have confirmed that we are working with a valid JSON dataset, the next step is to convert the data into a table format. Again, we have a built-in T-SQL function to do this, in the form of OPENJSON. OPENJSON works similarly to OPENXML in that it takes in an object and converts its data into rows and columns.

Below is a complete T-SQL script for converting a JSON object into rows and columns.
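
The complete script appears as an image in the original post; the sketch below follows the same approach, again assuming an illustrative fruit sales JSON shape:

    -- OPENJSON requires database compatibility level 130 (SQL Server 2016)
    DECLARE @json NVARCHAR(MAX) = N'
    [
      { "Fruit": "Apple",  "Quantity": 10, "Price": 1.50 },
      { "Fruit": "Banana", "Quantity": 25, "Price": 0.75 }
    ]';

    -- The WITH clause maps each JSON property onto a typed column
    SELECT Fruit, Quantity, Price
    FROM OPENJSON(@json)
    WITH (
        Fruit    VARCHAR(50)  '$.Fruit',
        Quantity INT          '$.Quantity',
        Price    DECIMAL(9,2) '$.Price'
    );

    -- Fruit    Quantity  Price
    -- Apple    10        1.50
    -- Banana   25        0.75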

Once we execute the above script, we get the relational output shown below.

Now that we have our relational dataset, we can process this data into the data warehouse.

JSON_VALUE function

Prior to concluding our discussion of JSON in SQL Server 2016, it is worth mentioning that in addition to OPENJSON, you have other functions such as JSON_VALUE that can be used to query JSON data. However, this function returns a scalar value, which means that unlike the multiple rows and columns returned using OPENJSON, JSON_VALUE returns a single value, as shown in the figure below.

If the JSON object that you are querying doesn't have multiple elements, then you don't have to specify the row index (i.e. [0]).
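
A quick sketch of JSON_VALUE against the same illustrative JSON array used above:

    DECLARE @json NVARCHAR(MAX) = N'
    [
      { "Fruit": "Apple",  "Quantity": 10 },
      { "Fruit": "Banana", "Quantity": 25 }
    ]';

    -- Returns a single scalar value: the Fruit property of the first array element
    SELECT JSON_VALUE(@json, '$[0].Fruit') AS FirstFruit;   -- 'Apple'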

Conclusion

The long wait is finally over and, with the release of SQL Server 2016, JSON is now supported. Similarly to XML, T-SQL supports the conversion of a JSON object to a relational format as well as the conversion of relational tables to a JSON object. This support is implemented via built-in T-SQL functions such as OPENJSON and JSON_VALUE. In spite of all the excitement about JSON support in SQL Server 2016, we still don't have a JSON data type, so the ISJSON function should be used to validate JSON text stored in character columns.



SQL Server 2012 Hosting - HostForLIFE.eu :: How to Clear Down A Database Full Of Constraints In SQL Server?

clock November 17, 2016 08:18 by author Peter

In this post I will show you how to clear down a database full of constraints in SQL Server. Have you ever been in a scenario where you have to clear down some data within a database that is chock full of constraints, but don't want to wipe out your precious relationships, indices and all that other jazz?

I found myself in a similar situation earlier this week, and needed a clear-down script that would wipe out all of the data within an entire database, without being bothered by any existing constraints. Here it is.

    USE YourDatabaseName;  -- replace with the name of the database you want to clear down  
    EXEC sp_MSForEachTable "ALTER TABLE ? NOCHECK CONSTRAINT ALL"  
    EXEC sp_MSForEachTable "DELETE FROM ?"  
    EXEC sp_MSForEachTable "ALTER TABLE ? WITH CHECK CHECK CONSTRAINT ALL"  
    GO  

What is this doing?
The script itself takes advantage of an undocumented stored procedure within SQL Server called sp_MSForEachTable that will actually iterate through all of the tables within a given database.

Now that we know we are going to be looping through each of the tables within the specified database, let's see what is going to happen to each of the tables.

    ALTER TABLE ? NOCHECK CONSTRAINT ALL
    This will disable any constraint checking that is present on the table (so, operations like deleting a primary key or a related object won't trigger any errors).

    DELETE FROM ?
    This will delete every record within the table.

    ALTER TABLE ? WITH CHECK CHECK CONSTRAINT ALL
    This re-enables the constraint checking, bringing your table back to its original state, sans data.

Note
It is very important that you properly scope this query to the database that you are targeting, to avoid any crazy data loss.

While I don't think that you could just leave that out and execute on master, I wouldn't want to even risk testing that out (although feel free to try it out and let me know if it nukes everything).
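
If you would rather avoid the undocumented procedure entirely, a rough sketch of a documented alternative is to generate the same three passes from sys.tables and run them with sp_executesql (run it inside the target database; the usual caveats about generated dynamic SQL apply):

    DECLARE @sql NVARCHAR(MAX) = N'';

    -- Pass 1: disable all constraints
    SELECT @sql = @sql + N'ALTER TABLE ' + QUOTENAME(s.name) + N'.' + QUOTENAME(t.name) + N' NOCHECK CONSTRAINT ALL;'
    FROM sys.tables AS t JOIN sys.schemas AS s ON s.schema_id = t.schema_id;

    -- Pass 2: delete every row from every table
    SELECT @sql = @sql + N'DELETE FROM ' + QUOTENAME(s.name) + N'.' + QUOTENAME(t.name) + N';'
    FROM sys.tables AS t JOIN sys.schemas AS s ON s.schema_id = t.schema_id;

    -- Pass 3: re-enable and re-validate all constraints
    SELECT @sql = @sql + N'ALTER TABLE ' + QUOTENAME(s.name) + N'.' + QUOTENAME(t.name) + N' WITH CHECK CHECK CONSTRAINT ALL;'
    FROM sys.tables AS t JOIN sys.schemas AS s ON s.schema_id = t.schema_id;

    EXEC sys.sp_executesql @sql;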

 

HostForLIFE.eu SQL Server 2012 Hosting
HostForLIFE.eu is a European Windows Hosting Provider which focuses on the Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.



SQL Server 2016 Hosting - HostForLIFE.eu :: Stored Procedure With Real Time Scenario In SQL Server

clock November 10, 2016 08:00 by author Peter

In this article, I am going to show you how to create a procedure for a real-time scenario. A stored procedure is a set of SQL statements which has been created and stored in the database as an object. A stored procedure accepts input and output parameters, so a single procedure can be used over the network by several users with different inputs. Stored procedures reduce network traffic and increase performance.

Real time scenario
Step 1: Create a table against which the stored procedure will be created.
    create table Product 
    ( 
          ProductId int primary key, 
          ProductName varchar(20) unique, 
          ProductQty int, 
          ProductPrice float 
    ) 


Step 2: Insert some values to set up the scenario.
    insert product values(1,'Printer',10,4500) 
    insert product values(2,'Scanner',15,3500) 
    insert product values(3,'Mouse',45,500)  


Step 3: Check the table for the inserted values.
    select * from product  

Step 4: The real-time scenario is given below.
Create a stored procedure which performs the following requirements:

  • Before inserting, check whether the product name already exists in the table.
  • If the product name exists, update the existing product quantity to the existing quantity plus the inserted quantity.
  • If the product name exists and the existing product price is less than the inserted product price, replace the existing price with the inserted price.
  • If neither of the first two conditions is satisfied, insert the product information as a new record into the table.

    create procedure prcInsert  
    @id int, 
    @name varchar(40), 
    @qty int, 
    @price float 
    as 
    begin 
     declare @cnt int 
     declare @p float 
     select @cnt=COUNT(ProductId) from Product where ProductName=@name 
     if(@cnt>0) 
     begin 
      update Product set ProductQty=ProductQty+@qty where ProductName=@name 
      select @p=ProductPrice from Product where ProductName=@name 
      if(@p<@price) 
      begin 
       update Product set ProductPrice=@price where ProductName=@name 
      end 
     end 
     else 
     begin 
      insert Product values(@id,@name,@qty,@price) 
     end 
    end  
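
A quick way to exercise the procedure against the data inserted above (the new product values here are purely illustrative):

    EXEC prcInsert @id = 4, @name = 'Keyboard', @qty = 20, @price = 750   -- no match: inserted as a new row
    EXEC prcInsert @id = 1, @name = 'Printer',  @qty = 5,  @price = 5000  -- match: qty becomes 15, price raised to 5000
    SELECT * FROM Product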

HostForLIFE.eu SQL Server 2016 Hosting
HostForLIFE.eu is a European Windows Hosting Provider which focuses on the Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.



European Windows Hosting - HostForLIFE.eu :: New Features in Windows Server 2016

clock November 3, 2016 08:59 by author Scott

As we’ve come to expect from new versions of Windows Server, Windows Server 2016 arrives packed with a huge array of new features. Many of the new capabilities, such as containers and Nano Server, stem from Microsoft’s focus on the cloud. Others, such as Shielded VMs, illustrate a strong emphasis on security. Still others, like the many added networking and storage capabilities, continue an emphasis on software-defined infrastructure begun in Windows Server 2012.

The GA release of Windows Server 2016 rolls up all of the features introduced in the five Technical Previews we’ve seen along the way, plus a few surprises. Now that Windows Server 2016 is fully baked, we’ll treat you to the new features we like the most.

Here are several features that you can get from Windows Server 2016:

Nano Server

Nano Server boasts a 92 percent smaller installation footprint than the Windows Server graphical user interface (GUI) installation option. Beyond just that, these compelling reasons may make you start running Nano for at least some of your Windows Server workloads:

  • Bare-metal OS means far fewer updates and reboots are necessary.
  • Because you have to administratively inject any server roles from outside Nano, the server has a much-reduced attack surface when compared to GUI Windows Server.
  • Nano is so small that it can be ported easily across servers, data centers and physical sites.
  • Nano hosts the most common Windows Server workloads, including Hyper-V host.
  • Nano is intended to be managed completely remotely. However, Nano does include a minimal local management UI called "Nano Server Recovery Console," shown in the previous screenshot, that allows you to perform initial configuration tasks.

Containers

Microsoft is working closely with the Docker development team to bring Docker-based containers to Windows Server. Until now, containers have existed almost entirely in the Linux/UNIX open-source world. They allow you to isolate applications and services in an agile, easy-to-administer way. Windows Server 2016 offers two different types of "containerized" Windows Server instances:

  • Windows Server Container. This container type is intended for low-trust workloads where you don't mind that container instances running on the same server may share some common resources.
  • Hyper-V Container. This isn't a Hyper-V host or VM. Instead, it's a "super isolated" containerized Windows Server instance that is completely isolated from other containers and potentially from the host server. Hyper-V containers are appropriate for high-trust workloads.

Linux Secure Boot

Secure Boot is part of the Unified Extensible Firmware Interface (UEFI) specification that protects a server's startup environment against the injection of rootkits or other assorted boot-time malware.

The problem with Windows Server-based Secure Boot is that your server would blow up (figuratively speaking) if you tried to create a Linux-based Generation 2 Hyper-V VM because the Linux kernel drivers weren't part of the trusted device store. Technically, the VM's UEFI firmware presents a "Failed Secure Boot Verification" error and stops startup.

Nowadays, the Windows Server and Azure engineering teams seemingly love Linux. Therefore, we can now deploy Linux VMs under Windows Server 2016 Hyper-V with no trouble and without having to disable the otherwise stellar Secure Boot feature.

ReFS

The Resilient File System (ReFS) has been a long time coming in Windows Server. In Windows Server 2016, we finally get a stable version. ReFS is a high-performance, high-resiliency file system intended for use with Storage Spaces Direct (discussed next in this article) and Hyper-V workloads.

Storage Spaces Direct

Storage Spaces is a cool Windows Server feature that makes it more affordable for administrators to create redundant and flexible disk storage. Storage Spaces Direct in Windows Server 2016 extends Storage Spaces to allow failover cluster nodes to use their local storage inside this cluster, avoiding the previous necessity of a shared storage fabric.

ADFS v4

Active Directory Federation Services (ADFS) is a Windows Server role that supports claims (token)-based identity. Claims-based identity is crucial thanks to the need for single sign-on (SSO) between on-premises Active Directory and various cloud-based services.

ADFS v4 in Windows Server 2016 finally brings support for OpenID Connect-based authentication, multi-factor authentication (MFA), and what Microsoft calls "hybrid conditional access." This latter technology allows ADFS to respond when user or device attributes fall out of compliance with security policies on either end of the trust relationship.

Nested Virtualization

Nested virtualization refers to the capability of a virtual machine to itself host virtual machines. This has historically been a "no go" in Windows Server Hyper-V, but we finally have that ability in Windows Server 2016.

Nested virtualization makes sense when a business wants to deploy additional Hyper-V hosts and needs to minimize hardware costs.

Hot Add Virtual Hardware

Hyper-V has long allowed us to add virtual hardware or adjust the RAM allocated to a virtual machine. However, those changes historically required that we first power down the VM. In Windows Server 2016, we can now "hot add" virtual hardware while VMs are online and running. I was able to add an additional virtual network interface card (NIC) to my running Hyper-V virtual machine.

PowerShell Direct

In Windows Server 2012 R2, Hyper-V administrators ordinarily performed Windows PowerShell-based remote administration of VMs the same way they would with physical hosts. In Windows Server 2016, PowerShell remoting commands now have -VM* parameters that allow us to send PowerShell directly into the Hyper-V host's VMs!

Invoke-Command -VMName 'server2' -ScriptBlock {Stop-Service -Name Spooler} -Credential 'tomsitprotim' -Verbose

We used the new -VMName parameter of the Invoke-Command cmdlet to run the Stop-Service cmdlet on the Hyper-V VM named server2.

Shielded VMs

The new Host Guardian Service server role, which hosts the shielded VM feature, is far too complex to discuss in this limited space. For now, suffice it to say that Windows Server 2016 shielded VMs allow for much deeper, fine-grained control over Hyper-V VM access.

For example, your Hyper-V host may have VMs from more than one tenant, and you need to ensure that different Hyper-V admin groups can access only their designated VMs. By using BitLocker Drive Encryption to encrypt the VM's virtual hard disks, shielded VMs can solve that problem.

 



AngularJS Hosting - HostForLIFE.eu :: AngularJs ng-repeat

clock November 2, 2016 08:32 by author Peter

AngularJS script reference:
<script src="http://ajax.googleapis.com/ajax/libs/angularjs/1.3.14/angular.min.js"></script> 

Html tag:
<div ng-app="myApp" ng-controller="myCntrl"> 
    <table> 
        <thead> 
            <tr> 
                <th> 
                    Emp Code. 
                </th> 
                <th> 
                    Employee Name 
                </th> 
                <th> 
                    Pan No 
                </th> 
                  
            </tr> 
        </thead> 
        <tr ng-repeat="student in EmployeeList"> 
            <td ng-bind="student.StudentID"> 
            </td> 
            <td ng-bind="student.StudentName"> 
            </td> 
            <td ng-bind="student.PanNO"> 
            </td> 
              
        </tr> 
    </table> 
</div> 

AngularJS script:

<script> 
   var app = angular.module("myApp", []); 
   app.controller("myCntrl", function ($scope, $http) { 

       $scope.fillList = function () { 
           $scope.EmployeeName = ""; 
           var httpreq = { 
               method: 'POST', 
               url: 'Default2.aspx/GetList', 
               headers: { 
                   'Content-Type': 'application/json; charset=utf-8', 
                   'dataType': 'json' 
               }, 
               data: {} 
           } 
           $http(httpreq).success(function (response) { 
               $scope.EmployeeList = response.d; 
           }) 
       }; 
       $scope.fillList(); 
   }); 
</script> 


ASP.NET code-behind (C#):
using System; 
using System.Collections.Generic; 
using System.Data.SqlClient; 
using System.Data; 

public partial class Default2 : System.Web.UI.Page 
{ 
    protected void Page_Load(object sender, EventArgs e) 
    { 
    } 

    [System.Web.Services.WebMethod()] 
    public static List<Employee> GetList() 
    { 
        List<Employee> names = new List<Employee>(); 
        DataSet ds = new DataSet(); 
        using (SqlConnection con = new SqlConnection(@"Data Source=140.175.165.10;Initial Catalog=Payroll_290716;user id=sa;password=Goal@12345;")) 
        { 
            using (SqlCommand cmd = new SqlCommand()) 
            { 
                cmd.Connection = con; 
                cmd.CommandText = "select EmpId,Empcode, name,PanNo from EMPLOYEEMASTER  order by Name;"; 
                using (SqlDataAdapter da = new SqlDataAdapter(cmd)) 
                { 
                    da.Fill(ds); 
                } 
            } 
        } 
        if (ds != null && ds.Tables.Count > 0) 
        { 
            foreach (DataRow dr in ds.Tables[0].Rows) 
                names.Add(new Employee(int.Parse(dr["EmpId"].ToString()), dr["name"].ToString(), dr["PanNo"].ToString())); 
        } 
        return names; 
    } 
} 

public class Employee 
{ 
    public int StudentID; 
    public string StudentName; 
    public string PanNO; 

    public Employee(int _StudentID, string _StudentName, string _PanNO) 
    { 
        StudentID = _StudentID; 
        StudentName = _StudentName; 
        PanNO = _PanNO; 
    } 
} 


Whole HTML page:
<%@ Page Language="C#" AutoEventWireup="true" CodeFile="Default2.aspx.cs" Inherits="Default2" %> 

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> 
<html xmlns="http://www.w3.org/1999/xhtml"> 
<head runat="server"> 
<title></title> 
<script src="http://ajax.googleapis.com/ajax/libs/angularjs/1.3.14/angular.min.js"></script> 
</head> 
<body> 
<form id="form1" runat="server"> 
<div ng-app="myApp" ng-controller="myCntrl"> 
    <table> 
        <thead> 
            <tr> 
                <th> 
                    Emp Code. 
                </th> 
                <th> 
                    Employee Name 
                </th> 
                <th> 
                    Pan No 
                </th> 
                  
            </tr> 
        </thead> 
        <tr ng-repeat="student in EmployeeList"> 
            <td ng-bind="student.StudentID"> 
            </td> 
            <td ng-bind="student.StudentName"> 
            </td> 
            <td ng-bind="student.PanNO"> 
            </td> 
              
        </tr> 
    </table> 
</div> 
<script> 
    var app = angular.module("myApp", []); 
    app.controller("myCntrl", function ($scope, $http) { 

        $scope.fillList = function () { 
            $scope.EmployeeName = ""; 
            var httpreq = { 
                method: 'POST', 
                url: 'Default2.aspx/GetList', 
                headers: { 
                    'Content-Type': 'application/json; charset=utf-8', 
                    'dataType': 'json' 
                }, 
                data: {} 
            } 
            $http(httpreq).success(function (response) { 
                $scope.EmployeeList = response.d; 
            }) 
        }; 
        $scope.fillList(); 
    }); 
</script> 
</form> 
</body> 
</html>  

HostForLIFE.eu AngularJS Hosting

HostForLIFE.eu is a European Windows Hosting Provider which focuses on the Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.



European SQL 2016 Hosting - HostForLIFE :: Dynamic Data Masking in SQL 2016. Is it Enough?

clock October 11, 2016 23:43 by author Scott

Dynamic vs. Static Data Masking

When masking data, organizations prevent unauthorized users from viewing sensitive data and protect information to meet regulatory requirements. Data masking technology provides data security by replacing sensitive information with non-sensitive content, but doing so in such a way that the copy of the data looks and acts like the original.

In this article, we talk about the different types of data masking and discuss how organizations can use data masking to protect sensitive data.

Masking data isn’t the same as a firewall

Most organizations have a fair amount of security around their most sensitive data in the production (live) databases. Access to databases is restricted in a variety of ways from authentication to firewalls.

Masking limits the duplication of sensitive data within development and testing environments by distributing substitute data sets for analysis. In other cases, masking will dynamically provide masked content if a user's request for sensitive information is considered 'risky'. Masking is designed to fit within existing data management frameworks and mitigate risks to information without sacrificing its usefulness. Masking platforms tend to guard data, locate data, identify risks and protect information as it moves in and out of applications.

Data masking hides the actual data. There are a variety of different algorithms for masking, depending on the requirements.

Simple masking just turns characters to blanks, so, for example, an e-mail address would appear as xxxxx@xxxxx.xxx.

More complex masking understands values, so, for example, a real name like “David Patrick” would be transformed into a fake name (with the same gender characteristics), like “John Smith”

In some algorithms, values are scrambled, so, for example, a table of health conditions might appear with real values of the health conditions, but not associated with the correct person.

Most data masking tools will offer a variety of levels of masking that can be enabled in your network. Both static and dynamic data masking use these same masking methodologies.
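
As a concrete illustration of these masking levels, SQL Server 2016 exposes the same ideas as built-in dynamic data masking functions. The sketch below is only an example; the table, column and user names are made up:

    CREATE TABLE dbo.CustomerMasked
    (
        CustomerId  INT IDENTITY(1,1) PRIMARY KEY,
        FullName    NVARCHAR(100) MASKED WITH (FUNCTION = 'default()'),   -- fully blanked out
        Email       NVARCHAR(100) MASKED WITH (FUNCTION = 'email()'),     -- shows aXXX@XXXX.com
        CreditCard  VARCHAR(19)   MASKED WITH (FUNCTION = 'partial(0,"XXXX-XXXX-XXXX-",4)'),  -- keeps last 4 digits
        MonthlyFee  MONEY         MASKED WITH (FUNCTION = 'random(1, 100)')                   -- random number in a range
    )

    -- Users without the UNMASK permission see masked values; trusted principals can be granted it
    GRANT UNMASK TO ReportingUser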

Static data masking

Static data masking is used by most organizations when they create testing and development environments and, in fact, is the only possible masking method when using outsourced contractors or developers in a separate location or separate company. In these cases, it's necessary to duplicate the database. When doing so, it is crucial to use a static data masking tool. These tools ensure that all sensitive data is masked before sending it out of the organization.

Static data masking provides a basic level of data protection by creating an offline or testing database using a standard ETL procedure. This procedure replicates a production database, but substitutes data that has been masked; in other words, the data fields are changed to data that is not original or is not readable.

It's important to be aware that static masking can provide a backdoor, especially when outsourced personnel are used for administration, development, or testing. To mask data, the data is extracted from the database, at least for inspection, to comprehend the data before masking. Theoretically, this could provide a backdoor for data breaches, though it is not one of the common methods of malicious data capture.

Also, it’s clear that the static database always lags behind the actual data. The static database can be updated periodically, for example on a daily or weekly basis. This is not a security risk, but it often has implications for a variety of tests and development issues.

Static data masking allows database administrators, quality assurance, and developers to work on a non-live system so that private data is not exposed.

In many cases, in fact, you’ll want a test database anyhow. You don’t want to be running live experiments on a production database, so for R&D and testing, it makes sense to have a test database. There’s nothing wrong with this scenario.

Is your database protected with static data masking?

The answer should be obvious from the image above. Your actual production database is, in fact, not protected in any way when it comes to concealing sensitive information. Anyone or any system that has access to the production database might also have access to sensitive information. For most organizations, the only protection under this scenario is provided by limiting authorization access to the production database.

Concerns about static data masking

With static data masking, most of the DBAs, programmers, and testers never actually get to touch the production database. All of their work is done on the dummy test database. This provides one level of protection and is necessary for many environments. However, it is not a complete solution because it does not protect authorized users from viewing and extracting unauthorized information. The following concerns should be noted when using static database solutions.

Static solutions actually require extraction of all the data before it is masked, that is, it actually guarantees the data gets out of the database in unmasked form. One of the most disturbing facts about static data masking is the standard ETL (extract, transform, and load) approach. In other words, the database information was extracted as-is from the database, and only afterward transformed. You have to hope or trust that the masking solution successfully deleted the real data, and that the static masking solution is working on a secure platform that was not compromised.

The live database is not protected from those who do have permissions to access the database. There are always some administrators, QA, developers, and others with access to the actual live database. This personnel can access actual data records, which are not masked.

For organizations where a test database is not necessary for other purposes, it is wasteful to have a full test database that is a copy of the full production database, minus identifying information. The cost is in the hardware and maintenance of the second system.

Activities have to be performed twice: once on the test system and then implemented on the live system. There’s no guarantee that it will work on the production system, and then the developers or DBAs who need to debug the system will be either debugging on the testing system, or they will be granted permissions that allow them to see the actual live data.

Dynamic data masking: security for live systems

Dynamic data masking is designed to secure data in real time for live production and non-live systems. Dynamic data masking masks all sensitive data as it is accessed, in real time, and the sensitive information never leaves the database. When a DBA or other authorized person views actual data in the production database, the data is masked or garbled, so the real data is never exposed. This way, under no circumstances is anyone exposed to private data through direct database access.

Using a reverse proxy, the dynamic masking tool investigates each query before it reaches the database server. If the query involves any sensitive data, the data is masked on the database server before it reaches the application or the individual who is requesting the data. This way, the data is fully functional for development or testing purposes but is not displayed to unauthorized users.

Dynamic masking allows all authorized personnel to perform any type of action on the database without seeing real data. Of course, activities that are supposed to show data do show that data, but only to the authorized personnel using the correct access. When using advanced data masking rules, it’s possible to identify whether a particular field should be shown to a particular person, and under what circumstances. For example, someone may be able to access one hospital record at a time but only from a particular terminal or IP address, using a specific application and specific credentials. Accessing that same record using a direct database command would not work or would produce masked data.

Concerns with dynamic data masking

Dynamic data masking requires a reverse proxy, which means adding a component between the data query and response. Different solutions exist, some of which require a separate on-premises server, and others that are software-only based and can be installed on the database server.

Furthermore, when a company uses only dynamic data masking and does not have a production system, there are issues associated with performing functions on the live database.

The following concerns should be noted when using dynamic database masking solutions.

  • Response time for real-time database requests. In environments where milliseconds are of crucial importance, dynamic masking needs to be carefully tested to ensure that performance meets the organization standards. Even when a particular item of data is not masked, the proxy does inspect the incoming request.
  • Security of the proxy itself. Any type of software installed on the database server needs to be secure. And once a proxy is present, you have to ensure that all connections to the database pass through this SQL proxy. Bypassing this proxy in any way will result in access to the sensitive data without masking.
  • Performing database development and testing on live systems can cause errors in the production system. In many cases, DBAs perform changes on a limited part of the system before deploying. However, best practice would require a separate database for development and testing.

Static vs. Dynamic Data Masking

The main reason to use data masking is to protect sensitive and confidential information from being breached and to protect it according to regulatory compliance requirements. At the same time, the data must stay in the same structure, otherwise the testing will not show accurate results. The data needs to look real and perform exactly as data normally would in the production system. Some companies take real data for non-production environments, but sometimes the data may have other uses. For example, in some organizations, when call center personnel view customer data, the credit card data may be masked on screen.

Generally speaking, most organizations will need some combination of dynamic and static database masking. Even when static data masking is in place, almost any organization with sensitive information in the database should add dynamic data masking to protect live production systems. Organizations with minimal development and testing can rely solely on dynamic data masking, though they may find themselves providing some data with static masking to outside developers or other types of contractors.

Advantages of static data masking

  • Allows the development and testing without influencing live systems
  • Best practice for working with contractors and outsourced developers, DBAs, and testing teams
  • Provides a more in-depth policy of masking capabilities
  • Allows organizations to share the database with external companies

Advantages of dynamic data masking

  • The sensitive information never leaves the database!
  • No changes are required at the application or the database layer
  • Customized access per IP address, per user, or per application
  • No duplicate or off-line database required
  • Activities are performed on real data, saving time and providing real feedback to developers and quality assurance



SQL Server 2012 Hosting - HostForLIFE.eu :: Fix SQL Server Can't Connect to Local Host

clock October 5, 2016 21:25 by author Peter

In this post, let me show you how to fix the issue where SQL Server can't connect to localhost. Many times we find issues when connecting to SQL Server. It gives us the message "SQL Server is not able to connect to localhost", as you can see in the following picture:

To fix this issue:
Go to Start->Run->Services.msc.

Once the Services window is open, select SQL Server and start it, as shown in the screenshot below:


After you do this, SQL Server will be up and running again.

SQL Server sometimes stops because of some problem. These steps help bring it back up. Thanks for reading.

HostForLIFE.eu SQL Server 2012 Hosting
HostForLIFE.eu is a European Windows Hosting Provider which focuses on the Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.



SQL Server 2016 Hosting - HostForLIFE.eu :: How to Convert String To Color in SQL Server 2016?

clock September 28, 2016 21:00 by author Peter

In this post, I will show you how to convert a string to a color in SQL Server 2016. The following function hashes the input string and returns a hex color value built from the hash:

CREATE FUNCTION dbo.fn_conversion_string_color(  
@in_string VARCHAR(200))  
RETURNS NVARCHAR(500)  
AS  
BEGIN  
DECLARE @fsetprefix BIT, -- append '0x' to the output   
@pbinin VARBINARY(MAX), -- input binary stream   
@startoffset INT, -- starting offset    
@cbytesin INT, -- length of input to consider, 0 means total length   
@pstrout NVARCHAR(MAX),   
@i INT,   
@firstnibble INT ,  
@secondnibble INT,   
@tempint INT,  
@hexstring CHAR(16)  
  
SELECT @fsetprefix = 1,  
@pbinin = SUBSTRING(HASHBYTES('SHA1', @in_string), 1, 3),  
@startoffset = 1,  
@cbytesin = 0   
  
-- initialize and validate   

IF (@pbinin IS NOT NULL)   
BEGIN    
SELECT @i = 0,   
@cbytesin = CASE  WHEN (@cbytesin > 0 AND @cbytesin <= DATALENGTH(@pbinin))  
  THEN @cbytesin  
  ELSE DATALENGTH(@pbinin)  
  END,   
@pstrout =  CASE  WHEN (@fsetprefix = 1)  
  THEN N'0x'  
  ELSE N''  
  END,   
@hexstring = '0123456789abcdef'   
   
--the output limit for nvarchar(max) is 2147483648 (2^31) bytes, that is 1073741824 (2^30) unicode characters   
  
IF (  
((@cbytesin * 2) + 2 > 1073741824)  
OR ((@cbytesin * 2) + 2 < 1)  
OR (@cbytesin IS NULL )  
)   
RETURN NULL   
   
IF (  
( @startoffset > DATALENGTH(@pbinin) )  
OR (@startoffset < 1 )  
OR (@startoffset IS NULL )  
)   
RETURN NULL   
   
-- adjust the length to process based on start offset and total length   
  
IF ((DATALENGTH(@pbinin) - @startoffset + 1) < @cbytesin)   
SELECT @cbytesin = DATALENGTH(@pbinin) - @startoffset + 1   
  
-- do for each byte   
WHILE (@i < @cbytesin)   
BEGIN   
-- Each byte has two nibbles  which we convert to character   

SELECT @tempint = CAST(SUBSTRING(@pbinin, @i + @startoffset, 1) AS INT)   
SELECT @firstnibble = @tempint / 16   
SELECT @secondnibble = @tempint % 16   
    
-- we need to do an explicit cast with substring for proper string conversion.     

SELECT @pstrout = @pstrout +   
  CAST(SUBSTRING(@hexstring, (@firstnibble+1), 1) AS NVARCHAR) +   
  CAST(SUBSTRING(@hexstring, (@secondnibble+1), 1) AS NVARCHAR)   
SELECT @i = @i + 1   
END   
END   
RETURN  '#' + UPPER(RIGHT(@pstrout, 6))  
 END  
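
A quick usage sketch - the input strings are arbitrary, and the same input always yields the same '#RRGGBB' value:

    SELECT dbo.fn_conversion_string_color('HostForLIFE') AS HostColor,
           dbo.fn_conversion_string_color('Peter')       AS PeterColor
    -- Each result is a 7-character value such as '#A1B2C3'; the exact digits depend on the SHA1 hash of the input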

HostForLIFE.eu SQL 2016 Hosting
Europe's best, cheap and reliable ASP.NET hosting with instant activation. HostForLIFE.eu is the #1 Recommended Windows and ASP.NET hosting in the European continent, with a 99.99% uptime guarantee for reliability, stability and performance. The HostForLIFE.eu security team is constantly monitoring the entire network for unusual behaviour. We deliver hosting solutions including Shared hosting, Cloud hosting, Reseller hosting, Dedicated Servers, and IT as a Service for companies of all sizes.




About HostForLIFE.eu

HostForLIFE.eu is a European Windows Hosting Provider which focuses on the Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.

We have offered the latest Windows 2016 Hosting, ASP.NET Core 2.2.1 Hosting, ASP.NET MVC 6 Hosting and SQL 2017 Hosting.

