European Windows 2012 Hosting BLOG

BLOG about Windows 2012 Hosting and SQL 2012 Hosting - Dedicated to European Windows Hosting Customers

European Windows Hosting - HostForLIFE.eu :: New Features in Windows Server 2016

November 3, 2016 08:59 by author Scott

As we’ve come to expect from new versions of Windows Server, Windows Server 2016 arrives packed with a huge array of new features. Many of the new capabilities, such as containers and Nano Server, stem from Microsoft’s focus on the cloud. Others, such as Shielded VMs, illustrate a strong emphasis on security. Still others, like the many added networking and storage capabilities, continue an emphasis on software-defined infrastructure begun in Windows Server 2012.

The GA release of Windows Server 2016 rolls up all of the features introduced in the five Technical Previews we’ve seen along the way, plus a few surprises. Now that Windows Server 2016 is fully baked, we’ll treat you to the new features we like the most.

Here are several of the features you get with Windows Server 2016:

Nano Server

Nano Server boasts a 92 percent smaller installation footprint than the Windows Server graphical user interface (GUI) installation option. Beyond just that, these compelling reasons may make you start running Nano for at least some of your Windows Server workloads:

  • Bare-metal OS means far fewer updates and reboots are necessary.
  • Because you have to administratively inject any server roles from outside Nano, the server has a much-reduced attack surface when compared to GUI Windows Server.
  • Nano is so small that it can be ported easily across servers, data centers and physical sites.
  • Nano hosts the most common Windows Server workloads, including the Hyper-V host role.
  • Nano is intended to be managed completely remotely. However, Nano does include a minimal local management UI called the "Nano Server Recovery Console" that allows you to perform initial configuration tasks (a minimal remoting sketch follows this list).
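
If you want to try that remote management story, a minimal sketch looks like this (the server name nano01 and the credential are hypothetical; run it from another machine with PowerShell remoting enabled):

# Trust the Nano Server host and open a remote PowerShell session (names are placeholders)
Set-Item WSMan:\localhost\Client\TrustedHosts -Value 'nano01' -Force
$cred = Get-Credential 'nano01\Administrator'
Enter-PSSession -ComputerName 'nano01' -Credential $cred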

Containers

Microsoft is working closely with the Docker development team to bring Docker-based containers to Windows Server. Until now, containers have existed almost entirely in the Linux/UNIX open-source world. They allow you to isolate applications and services in an agile, easy-to-administer way. Windows Server 2016 offers two different types of "containerized" Windows Server instances:

  • Windows Server Container. This container type is intended for low-trust workloads where you don't mind that container instances running on the same server may share some common resources.
  • Hyper-V Container. This isn't a Hyper-V host or VM. Instead, it's a "super isolated" containerized Windows Server instance that is completely isolated from other containers and potentially from the host server. Hyper-V containers are appropriate for high-trust workloads (see the sketch after this list).
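
As a rough illustration of the difference, both container types are driven through the same Docker tooling and only the isolation mode changes. The lines below are a sketch that assumes the Docker engine for Windows is installed and uses the 2016-era microsoft/windowsservercore base image (run from an elevated PowerShell prompt):

# Windows Server Container (shares the host's kernel)
docker run -it microsoft/windowsservercore powershell
# Hyper-V Container (same image, isolated inside a lightweight utility VM)
docker run -it --isolation=hyperv microsoft/windowsservercore powershell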

Linux Secure Boot

Secure Boot is part of the Unified Extensible Firmware Interface (UEFI) specification that protects a server's startup environment against the injection of rootkits or other assorted boot-time malware.

The problem with Windows Server-based Secure Boot is that your server would blow up (figuratively speaking) if you tried to create a Linux-based Generation 2 Hyper-V VM because the Linux kernel drivers weren't part of the trusted device store. Technically, the VM's UEFI firmware presents a "Failed Secure Boot Verification" error and stops startup.

Nowadays, the Windows Server and Azure engineering teams seemingly love Linux. Therefore, we can now deploy Linux VMs under Windows Server 2016 Hyper-V with no trouble, and without having to disable the otherwise stellar Secure Boot feature.
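
For a Generation 2 Linux VM, that comes down to pointing the VM's firmware at the certificate-authority template that covers the major distributions. A hedged PowerShell sketch (the VM name is hypothetical):

# Enable Secure Boot for a Linux guest using the Microsoft UEFI CA template
Set-VMFirmware -VMName 'ubuntu-vm' -EnableSecureBoot On -SecureBootTemplate MicrosoftUEFICertificateAuthority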

ReFS

The Resilient File System (ReFS) has been a long time coming in Windows Server. In Windows Server 2016, we finally get a stable version. ReFS is a high-performance, high-resiliency file system intended for use with Storage Spaces Direct (discussed next in this article) and Hyper-V workloads.
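
Formatting a volume with ReFS works the same way as with any other file system; a quick sketch (the drive letter and label are hypothetical):

# Format a data volume with ReFS, e.g. for Hyper-V virtual disks or backup targets
Format-Volume -DriveLetter E -FileSystem ReFS -NewFileSystemLabel 'VMStorage'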

Storage Spaces Direct

Storage Spaces is a cool Windows Server feature that makes it more affordable for administrators to create redundant and flexible disk storage. Storage Spaces Direct in Windows Server 2016 extends Storage Spaces to allow failover cluster nodes to use their local storage inside this cluster, avoiding the previous necessity of a shared storage fabric.
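
Once the failover cluster is formed from nodes that have eligible local disks, enabling Storage Spaces Direct and carving out a volume is a short exercise. A sketch under those assumptions (the volume name is hypothetical):

# Run on any node of the existing failover cluster
Enable-ClusterStorageSpacesDirect
# Create a cluster-shared ReFS volume from the S2D pool
New-Volume -FriendlyName 'VDisk01' -FileSystem CSVFS_ReFS -StoragePoolFriendlyName 'S2D*' -Size 1TB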

ADFS v4

Active Directory Federation Services (ADFS) is a Windows Server role that supports claims (token)-based identity. Claims-based identity is crucial thanks to the need for single sign-on (SSO) between on-premises Active Directory and various cloud-based services.

ADFS v4 in Windows Server 2016 finally brings support for OpenID Connect-based authentication, multi-factor authentication (MFA), and what Microsoft calls "hybrid conditional access." This latter technology allows ADFS to respond when user or device attributes fall out of compliance with security policies on either end of the trust relationship.

Nested Virtualization

Nested virtualization refers to the capability of a virtual machine to itself host virtual machines. This has historically been a "no go" in Windows Server Hyper-V, but we finally have that ability in Windows Server 2016.

Nested virtualization makes sense when a business wants to deploy additional Hyper-V hosts and needs to minimize hardware costs.
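
Turning it on for a guest that will itself run Hyper-V is a one-liner; a sketch (the VM must be powered off first, and the VM name is hypothetical):

# Expose the processor's virtualization extensions to the guest VM
Set-VMProcessor -VMName 'HyperVGuest01' -ExposeVirtualizationExtensions $true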

Hot Add Virtual Hardware

Hyper-V has long allowed us to add virtual hardware or adjust the allocated RAM of a virtual machine. However, those changes historically required that we first power down the VM. In Windows Server 2016, we can now "hot add" virtual hardware while VMs are online and running. For example, I was able to add an additional virtual network interface card (NIC) to a running Hyper-V virtual machine.
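
Here is a hedged sketch of that hot-add (the VM and switch names are hypothetical); the cmdlet is the same one you would use against a stopped VM, it simply no longer requires a shutdown first:

# Hot-add a second NIC to a running Generation 2 VM
Add-VMNetworkAdapter -VMName 'server2' -SwitchName 'ExternalSwitch'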

PowerShell Direct

In Windows Server 2012 R2, Hyper-V administrators ordinarily performed Windows PowerShell-based remote administration of VMs the same way they would with physical hosts. In Windows Server 2016, PowerShell remoting commands now have -VM* parameters that allow us to send PowerShell directly into the Hyper-V host's VMs!

Invoke-Command -VMName 'server2' -ScriptBlock {Stop-Service -Name Spooler} -Credential 'tomsitprotim' -Verbose

We used the new -VMName parameter of the Invoke-Command cmdlet to run the Stop-Service cmdlet on the Hyper-V VM named server2.

Shielded VMs

The new Host Guardian Service server role, which hosts the shielded VM feature, is far too complex to discuss in this limited space. For now, suffice it to say that Windows Server 2016 shielded VMs allow for much deeper, fine-grained control over Hyper-V VM access.

For example, your Hyper-V host may have VMs from more than one tenant, and you need to ensure that different Hyper-V admin groups can access only their designated VMs. By using BitLocker Drive Encryption to encrypt the VM's virtual hard disks, shielded VMs can solve that problem.

 



European SQL 2016 Hosting - HostForLIFE :: Dynamic Data Masking in SQL 2016. Is it Enough?

October 11, 2016 23:43 by author Scott

Dynamic vs. Static Data Masking

When masking data, organizations prevent unauthorized users from viewing sensitive data and protect information to meet regulatory requirements. Data masking technology provides data security by replacing sensitive information with non-sensitive content, but doing so in such a way that the copy of the data looks and acts like the original.

In this article, we talk about the different types of data masking and discuss how organizations can use data masking to protect sensitive data.

Masking data isn’t the same as a firewall

Most organizations have a fair amount of security around their most sensitive data in the production (live) databases. Access to databases is restricted in a variety of ways from authentication to firewalls.

Masking limits the duplication of sensitive data within development and testing environments by distributing substitute data sets for analysis. In other cases, masking will dynamically provide masked content if a user’s request for sensitive information is considered ‘risky’. Masking data is designed to fit within existing data management frameworks and mitigate risks to information without sacrificing its usefulness. Masking platforms tend to guard data, locate data, identify risks, and protect information as it moves in and out of applications.

Data masking hides the actual data. There are a variety of different algorithms for masking, depending on the requirements.

Simple masking just turns characters to blank, so, for example, an e-mail address might appear as xxxxx@xxxxx.xxx.

More complex masking understands values, so, for example, a real name like “David Patrick” would be transformed into a fake name (with the same gender characteristics), like “John Smith”.

In some algorithms, values are scrambled, so, for example, a table of health conditions might still show real condition values, but shuffled so that they are no longer associated with the correct person or record.

Most data masking tools will offer a variety of levels of masking that can be enabled in your network. Both static and dynamic data masking use these same masking methodologies.
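
SQL Server 2016’s built-in dynamic data masking exposes these same masking styles as functions that you attach to columns. A minimal, hedged sketch (the table, column and user names are hypothetical):

-- Mask a name, an e-mail address and a salary column on a demo table
CREATE TABLE dbo.CustomerDemo
(
    Id       INT IDENTITY(1,1) PRIMARY KEY,
    FullName NVARCHAR(100) MASKED WITH (FUNCTION = 'partial(1, "xxxxx", 1)') NOT NULL,
    Email    NVARCHAR(100) MASKED WITH (FUNCTION = 'email()') NOT NULL,
    Salary   MONEY MASKED WITH (FUNCTION = 'default()') NOT NULL
);

-- A principal without the UNMASK permission sees only masked values
CREATE USER TestUser WITHOUT LOGIN;
GRANT SELECT ON dbo.CustomerDemo TO TestUser;
EXECUTE AS USER = 'TestUser';
SELECT Id, FullName, Email, Salary FROM dbo.CustomerDemo;
REVERT;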

Static data masking

Static data masking is used by most organizations when they create testing and development environments, and, in fact, it is the only possible masking method when using outsourced contractors or developers in a separate location or separate company. In these cases, it’s necessary to duplicate the database. When doing so, it is crucial to use a static data masking tool. These tools ensure that all sensitive data is masked before it is sent out of the organization.

Static data masking provides a basic level of data protection by creating an offline or testing database using a standard ETL procedure. This procedure replicates a production database, but substitutes data that has been masked; in other words, the data fields are changed to data that is not original or not readable.

It’s important to be aware that static masking can provide a backdoor, especially when outsourced personnel are used for administration, development, or testing. To mask data, the data must first be extracted from the database, at least for inspection, so that it can be understood before masking. Theoretically, this could provide a backdoor for data breaches, though it is not one of the common methods of malicious data capture.

Also, it’s clear that the static database always lags behind the actual data. The static database can be updated periodically, for example on a daily or weekly basis. This is not a security risk, but it often has implications for a variety of tests and development issues.

Static data masking allows database administrators, quality assurance, and developers to work on a non-live system so that private data is not exposed.

In many cases, in fact, you’ll want a test database anyhow. You don’t want to be running live experiments on a production database, so for R&D and testing, it makes sense to have a test database. There’s nothing wrong with this scenario.

Is your database protected with static data masking?

The answer is no. Your actual production database is, in fact, not protected in any way when it comes to concealing sensitive information. Anyone or any system that has access to the production database might also have access to sensitive information. For most organizations, the only protection under this scenario is provided by limiting who is authorized to access the production database.

Concerns about static data masking

With static data masking, most of the DBAs, programmers, and testers never actually get to touch the production database. All of their work is done on the dummy test database. This provides one level of protection and is necessary for many environments. However, it is not a complete solution because it does not prevent authorized users from viewing and extracting information they should not see. The following concerns should be noted when using static database solutions.

Static solutions actually require extraction of all the data before it is masked; that is, the process guarantees that the data gets out of the database in unmasked form. One of the most disturbing facts about static data masking is the standard ETL (extract, transform, and load) approach: the database information is extracted as-is from the database and only transformed afterward. You have to hope or trust that the masking solution successfully deleted the real data, and that the static masking solution is running on a secure platform that was not compromised.

The live database is not protected from those who do have permissions to access the database. There are always some administrators, QA staff, developers, and others with access to the actual live database. These personnel can access actual data records, which are not masked.

For organizations where a test database is not necessary for other purposes, it is wasteful to have a full test database that is a copy of the full production database, minus identifying information. The cost is in the hardware and maintenance of the second system.

Activities have to be performed twice: once on the test system and then implemented on the live system. There’s no guarantee that it will work on the production system, and then the developers or DBAs who need to debug the system will be either debugging on the testing system, or they will be granted permissions that allow them to see the actual live data.

Dynamic data masking: security for live systems

Dynamic data masking is designed to secure data in real time for live production and non-live systems. Dynamic data masking masks all sensitive data as it is accessed, in real time, and the sensitive information never leaves the database. When a DBA or other authorized personnel view actual data in the production database, the data is masked or garbled, so the real data is never exposed. This way, under no circumstances is anyone exposed to private data through direct database access.

Using a reverse proxy, the dynamic masking tool investigates each query before it reaches the database server. If the query involves any sensitive data, the data is masked on the database server before it reaches the application or the individual who is requesting the data. This way, the data is fully functional for development or testing purposes but is not displayed to unauthorized users.

Dynamic masking allows all authorized personnel to perform any type of action on the database without seeing real data. Of course, activities that are supposed to show data do show that data, but only to the authorized personnel using the correct access. When using advanced data masking rules, it’s possible to identify whether a particular field should be shown to a particular person, and under what circumstances. For example, someone may be able to access one hospital record at a time but only from a particular terminal or IP address, using a specific application and specific credentials. Accessing that same record using a direct database command would not work or would produce masked data.
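
By comparison, the dynamic data masking built into SQL Server 2016 is coarser: a column’s mask applies to every principal that has not been granted the database-wide UNMASK permission, so visibility is controlled per user rather than per field, terminal or IP address. A sketch (the principal names are hypothetical):

-- Let one group see real values, keep another principal masked
GRANT UNMASK TO PayrollAdmins;
REVOKE UNMASK FROM TestUser;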

Concerns with dynamic data masking

Dynamic data masking requires a reverse proxy, which means adding a component between the data query and response. Different solutions exist, some of which require a separate on-premises server, and others that are software-only based and can be installed on the database server.

Furthermore, when a company uses only dynamic data masking and does not have a production system, there are issues associated with performing functions on the live database.

The following concerns should be noted when using dynamic database masking solutions.

  • Response time for real-time database requests. In environments where milliseconds are of crucial importance, dynamic masking needs to be carefully tested to ensure that performance meets the organization’s standards. Even when a particular item of data is not masked, the proxy still inspects the incoming request.
  • Security of the proxy itself. Any type of software installed on the database server needs to be secure. And once a proxy is present, you have to ensure that all connections to the database pass through this SQL proxy. Bypassing the proxy in any way will result in access to the sensitive data without masking.
  • Performing database development and testing on live systems can cause errors in the production system. In many cases, DBAs perform changes on a limited part of the system before deploying. However, best practices would require a separate database for development and testing.

Static vs. Dynamic Data Masking

The main reason to use data masking is to protect sensitive and confidential information from being breached and to meet regulatory compliance requirements. At the same time, the data must keep the same structure; otherwise, testing will not show accurate results. The data needs to look real and perform exactly as data normally would in the production system. Some companies copy real data into non-production environments, but masking has uses in production as well. For example, in some organizations, when call center personnel view customer data, the credit card data may be masked on screen.

Generally speaking, most organizations will need some combination of dynamic and static database masking. Even when static data masking is in place, almost any organization with sensitive information in the database should add dynamic data masking to protect live production systems. Organizations with minimal development and testing can rely solely on dynamic data masking, though they may find themselves providing some data with static masking to outside developers or other types of contractors.

Advantages of static data masking

  • Allows the development and testing without influencing live systems
  • Best practice for working with contractors and outsourced developers, DBAs, and testing teams
  • Provides more in-depth masking policies and capabilities
  • Allows organizations to share the database with external companies

Advantages of dynamic data masking

  • The sensitive information never leaves the database!
  • No changes are required at the application or the database layer
  • Customized access per IP address, per user, or per application
  • No duplicate or off-line database required
  • Activities are performed on real data, saving time and providing real feedback to developers and quality assurance



European DotNetNuke 7 Hosting - UK :: What You Must DO and DON'T for Your DotNetNuke SEO

November 12, 2015 19:47 by author Scott

DotNetNuke is easily one of the most popular ASP .Net content management (CMS) systems out there. In this post I am going to cover some of the simplest, fastest things you can do to your DotNetNuke site in order to improve SEO. If your first question was “What is SEO?” then this post is for you. If you are familiar with SEO and want a quick refresher that pertains specifically to DotNetNuke then this post is for you. If you have already carefully tuned your site and are looking for advanced optimization techniques, go hire a marketing professional with a proven track record of SEO success.

So, what is SEO? The term SEO stands for Search Engine Optimization and it essentially means tweaking your site to make it more search-friendly. A search-friendly site makes it much easier for your current and potential customers to locate your site and to find what they need on your site. The end-goal of SEO is often more specifically about getting into one of the coveted top spots in Google’s search results for a particular term or phrase. While achieving that goal usually takes a lot more than the simple tips presented in this post (for example, Google’s ranking depends a lot on the number and types of other sites that link to you), these tips will get you started on the right path and will make your site more useful and usable in the process.

DO use a DotNetNuke (DNN) skin that is web standards-based and follows current recommended practices for accessible content. A good skin will probably note that it is XHTML or HTML5 compliant and may display a small W3C icon indicating that its code validates properly. The W3C is the internationally-recognized body responsible for setting standards that govern key web technologies. A good skin will not use tables for layout. You may also see references to Section 508, which refers to standards set forth in US law for making a site accessible to all users, including those using screen readers and other assistive technology. Section 508-compliant sites tend to also be extremely search-friendly as they will include additional text and meta-data to serve assistive technology that is also useful to the robots used by Google, Bing, Yahoo and others.


DON’T make any key text into an image. In fact, avoid making any text into an image at all. Search engines cannot read any text from an image, and neither can screen readers for visually-impaired users. Web font services such as Typekit, Webtype and Google Web Fonts make it easier than ever to replace images with plain text and maintain custom styling. If you must use an image for text, make sure that you properly define the alt tag of the image. This is something that is absolutely essential to do for all images on your site, not just those that contain text. In DotNetNuke you typically set up this text in the properties area when inserting an image. You can also configure an additional long description through this dialog where appropriate.


DO take advantage of the site-wide and page-level descriptions and keywords available in DNN. Most themes will use these elements in the header meta-tags of your site. While search engines never rely on keyword tags alone for indexing, these keywords can be useful for pointing out to the search engine which words on the page are particularly important. On the other hand, the description is typically used by search engines as the snippet that will be displayed in their search results. For this reason you should keep it short, to-the-point and self-explanatory. This is your chance to grab the attention of someone who is quickly scanning a search results page for relevant links. The site-wide description will often be used when no page-level description is present but you should always override it with a more specific description per page when possible to prevent the appearance of duplicate content and to give a better idea of what is actually on each page. You can set the description and keywords for the entire site in Site Settings and for an individual page while editing that page.


DON’T post misleading content, duplicate content on multiple pages or rip content from other sites on the web. Google in particular is known to penalize this type of behavior. This should be an obvious one as it is also clearly an ethical issue. The best way to attract the customers that are right for you is to post authentic, original content that benefits them in some way. If you find an article elsewhere that you think may be beneficial to your customers the best thing to do is to write your own post adding value to the discussion, including one or two short, properly-attributed quotes and linking to the full text on the original site. Who knows, this neighborly behavior could even lead to a productive relationship with the author of the content and perhaps a link back to your site from theirs at some future date.


DO update frequently, write plenty of informative text and try to include the words and phrases that you think customers will use when trying to search for the content you have written. Pay attention to the types of questions your customers are asking and how they are asking them. Try to think like a customer when writing and use the same terms that they would be likely to use when searching. Mention those things which are most distinguishing about you and that you want others to find out about you. By doing this not only will you make your content rank higher in search results but you will also make it far more relevant to your customers themselves. It is also a very good idea to include the keywords you identify in the titles of your pages or posts whenever it makes sense to do so – search engines tend to place special emphasis on URLs and titles.


DON’T rename or move pages unless absolutely necessary (and even then seriously consider creating a redirect from the old location to the new one). I should really say don’t rename or move pages ever. Changing the structure of your site can cause dead links both internally and externally. If someone has bookmarked a particular location or linked to it from a blog, web site or tweet and you move or rename that page the link will stop working and you will end up with missed opportunities and frustrated and alienated customers. Restructuring your site can also lead to a (usually temporary) search ranking penalty until your site is re-indexed by every major search provider. If you must move a page you can create a placeholder at the old location and use the DNN link/redirect options to make sure that people with only the old URL will still end up in the right place.


DO use the semantic nature of HTML to add value to your content. An <h1> (Heading 1) tag should be the most important heading on your page. <h2> should be next and so forth, like an outline. The <p> tag should separate paragraphs. Addresses should be indicated with <address> and lists with <ul> (unordered) or <ol> (ordered). Links created with the <a> tag should include title attributes. Like many of the previous tips, producing well structured content helps both search engines and assistive devices parse your site with greater success. It also starts you down the path towards microformats and some of the advanced and exciting things being done with them. DNN makes at least the basics of this relatively easy without editing the actual code. You can simply choose the tag that will be used from the built in editor. When you do this, just remember that you are describing the text as well as styling it.

That’s it for today! I hope that these tips help you make your site more search- and user-friendly! 



Advantages Using Cloud Hosting - Disaster Recovery Planning

October 6, 2015 07:50 by author Scott

Disaster Recovery plans and infrastructures are a necessity for large enterprises for which operating would be an issue if their mission critical applications were to crash or become unavailable. Most DR infrastructures have adopted the method of replicating the infrastructure and hardware of the primary site at a backup location; this ensures that when an issue occurs at the primary site, applications can still be served from an unaffected location that contains the necessary capacity. However, replicating an already complex infrastructure is time consuming and costly to not just build, but to maintain too. Cloud DR looks to build upon DR plans by providing a virtualized cloud infrastructure that is able to operate idly with minimal resources, but can scale up to cope with demand when the primary site fails.

Reduction in DR costs

Moving a DR configuration into the cloud can help you to realize huge cost savings by reducing the amount of physical infrastructure that you need to maintain and rely on to protect you in the event of a disaster that takes down your primary data center locations. Because a DR environment only requires resources that are of a bare minimum when it isn’t being actively used, the amount that you are paying will also be the bare minimum.

As a physical DR environment is comprised of hardware and resources that are equal to that of your primary sites, the amount being paid for is often the same as that of the primary site, the only difference being that most of the time these resources are lying idle. So in effect, with traditional DR you could be paying for unused resources a lot of the time. In a cloud DR configuration, if it is called into action then additional resources can be automatically provisioned as the environment scales to cope with the demand being placed on it. Once demand recedes, the resources are then returned to the cloud. With cloud DR you will only ever be paying for resources that are actually being used, which is where the cost efficiencies arise.

Virtualization caters for unpredictable demand

With traditional DR, the capacity of the DR environment is equal to that of the primary site. So whilst there will be enough capacity to meet demand when the primary site is down, it does mean that even when the DR environment is in use there could still be a substantial amount of free resources. These are free resources that you will still be paying for. DR in the cloud accounts for this unpredictable demand by scaling up to match the demand placed on it, so you are only ever paying for resources that are actually being used and there will never be any spare resources. This can also be of assistance at times when demand is actually more than even the primary site can handle.

Minimal recovery time

With cloud DR, the backup environment will be ready to serve your mission critical applications the moment any issues are detected at your primary sites. In the event that your primary site does become unavailable, your end-users shouldn’t notice any difference as we have designed the failover process to take place with minimal downtime. Once your end-users have been transferred to the DR environment, you can get to work repairing the primary site as soon as possible to minimize the amount of time that is spent utilizing the recovery site. Once you have repaired the issue and are confident that the primary site is ready to be returned to live use, the transfer from the DR site to primary site will also be flawless and completed with minimal downtime. These processes make sure that issues don’t have the opportunity to have a large, negative impact on your business; although sometimes they may take time and money to repair, from your end-user’s perspective at least your business will continue to operate as normal because they will still be able to access their mission critical applications and data without issue.

Try our new Cloud Hosting as low as €3.49/month!!

As we have explained above, there are many benefits to using the cloud. Now you can try our new cloud technology starting from an affordable cost. For more information, please visit our official cloud site at http://hostforlife.eu/ASPNET-Cloud-European-Hosting-Plans.

 



European AngularJS Hosting - UK :: Why Use AngularJS for Web Development

March 5, 2015 06:50 by author Scott

AngularJS Introduction

AngularJS is an open-source JavaScript framework for organizing and building web applications and single-page applications. In 2012 we witnessed the rise of JavaScript MVC frameworks and libraries, including Backbone.js, Ember.js and Angular.js.

AngularJS was created by Google to build single-page applications that are better architected and more maintainable. AngularJS is completely client-side and entirely JavaScript, so wherever JavaScript runs, AngularJS also runs. Minified and compressed, it is even less than 29 KB. Angular is a next-generation framework in which every tool is designed to work with every other tool in an interconnected way.

Interesting Point Using AngularJS

Angular has some compelling features for not just the developers but for designers as well.

Two Way Data Binding
It is the most crucial and useful feature of Angular. This feature is what all modern web apps are about, i.e. real time. Two-way binding permanently binds the view to the model and reduces refresh cycles. It also saves a considerable amount of code: previously up to 80% of code was dedicated to manipulating, traversing and listening to the DOM. With data binding this code disappears, so more concentration can go into the application itself. Normally, when the model changes, the DOM elements and attributes need to be manually manipulated to reflect the changes; this proves to be a complex process, especially as the application grows in complexity and size. With two-way data binding, the synchronization between the DOM and the model is well taken care of.

HTML Template
AngularJS doesn’t rely on any rendering engine but uses browser-parseable .html files for its partials. The HTML templates are parsed by the browser into the DOM, and the DOM is then the input to the AngularJS compiler. Angular traverses the DOM template for rendering instructions, called directives. The input here is the browser DOM and not an HTML string, and this is a noticeable difference between Angular and its fellow frameworks.

Directives
Directives are stand-alone, reusable elements separated from the app. All DOM manipulations are performed by directives. Directives are used to create custom HTML tags that serve as new custom widgets. With a directive you can create a new HTML tag or attribute and make it do anything you want. Directives are a very unique, useful, powerful and reusable feature available only in Angular. With directives you can invent new HTML syntax that is specific to your application.

Dependency Injection
Dependency injection is an Angular feature that enables developers to easily build, develop, test and manage applications. With this feature you merely ask for the dependencies instead of creating them manually; Angular will provide you an instance of any service you ask for, provided you add the service as a parameter to get access to it.

Testing
AngularJS is designed with testability in mind, so that Angular applications can be tested as easily as any JavaScript code and backed by a strong set of tests. Angular comes with an end-to-end test runner setup.

Why Use AngularJS?

AngularJS is a new JavaScript framework by Google, designed to make front-end development easier. The popularity of single-page applications and Angular is soaring. Angular provides numerous concepts to organize and manipulate web applications.

Enabling a Parallel Workflow
It enables a parallel workflow between designers and developers. For a project, both design and hands-on development can go side by side. If a project is estimated to be completed in 4 months, then following the traditional sequential approach there would be 4 dedicated months of design followed by 4 months of coding, making it 8 months altogether. XAML, by contrast, allows designers and developers to work in parallel by agreeing upon an interface for a screen: developers can work on grabbing the data and writing all the properties and tests around them, while designers animate and manipulate the layout until they reach their final desired design. For those not familiar with XAML, it is a declarative XML-based language for instantiating object graphs and setting values. The reason this matters here is that the same declarative ideas translate well to Angular.

Handling Dependencies
AngularJS easily handles dependency injection; Angular lets you divide your app into modules that are initialized separately while having dependencies on each other. This enables you to test only the modules you want at once, while also making it possible to create end-to-end tests as well.

Dynamic loading is used by single-page applications to deliver a native-app feel, but it involves a lot of dependencies on various modules and services; Angular organizes these and even manages the lifetime of an object for you.

Declarative UI
Having a declarative UI has many advantages associated with it. A structured UI is always easier to understand and manipulate. Without it, merely looking at the markup can’t tell you what the UI will actually do; it’s not apparent whether any translations or validations are taking place just by looking at some form tags.

But by declaring the UI and placing markup directly in HTML, one can understand the extended markup Angular provides. It makes it clear where and to what the data is being bound. With added tools like filters and directives, the intent of the UI is much clearer.

Development <-> Design Workflow
This works very well with Angular: markup can be added or rearranged without breaking the application, because behavior is bound declaratively rather than by code that depends on a particular structure or id to locate an element and do its task. Even rearrangement of code is much easier, as the corresponding markup that binds to it moves along with it.

Flexibility with Filters
Filters are standalone functions that filter the data before it reaches the view; this can involve formatting decimal places, reversing an array, or simply implementing pagination. Filters are separate from the app, just like directives, and are so versatile that it is possible to create a sorted HTML table with filters without writing any JavaScript.

These fundamental features and principles will let you create a performance-driven, maintainable, extensible and efficient front-end codebase. AngularJS provides a rich experience to the end user; it is a robust and well-maintained JavaScript framework suitable for any professional web development.

 



European ASP.NET Hosting - UK :: Simple ASP Script to Send Email Using ASPEmail

February 23, 2015 07:22 by author Scott

Simple ASP Script to send mail using AspEmail

The following is a very basic script to send a mail from an .asp page (we will build on this script in the next scripting example to retrieve and e-mail information from a form). Note that lines starting with ' are explanatory comments only - these lines are not parsed by the server.

<%

' First Step is to create the AspEmail message object
Set Mail = Server.CreateObject("Persits.MailSender")

' Set the from address - replace value within the quotes with your own
Mail.From = "sender@yourdomain.com"

' Add the e-mail recipient address - again replace value within the quotes with your own
Mail.AddAddress "recipient@theirdomain.com"

' Set the subject for the e-mail
Mail.Subject = "Test mail via AspEmail"

' Create the body text for the e-mail
Mail.Body = "This mail was sent via AspEmail"

' The mail server requires that we authenticate so supply username and password
Mail.Username = "sender@yourdomain.com"
Mail.Password = "your_password"

' The e-mail is now ready to go, we just need to specify the server and send
Mail.Host = "smtp.hostforlife.eu"
Mail.Send

' Mail is sent - tidy up and delete the AspEmail message object
Set Mail = Nothing

%>

That's it - you just need to substitute your own values where required and you should be able to copy this script to your account and send your first e-mail.

Note that as we use authentication the 'Mail.From' e-mail address must be a live user on our mailserver and should match the address used for the 'Mail.Username'.

 



European Umbraco 7 Hosting - UK :: How to Fix Error The virtual path ‘/install/steps/welcome.ascx’ maps to another application, which is not allowed.

December 11, 2014 08:02 by author Scott

Sometimes you’ll get this error message after installing Umbraco:

The virtual path ‘/install/steps/welcome.ascx’ maps to another application, which is not allowed.

This is because you have installed Umbraco in a virtual directory (not at the web root of the “Default Web Site” in IIS terminology). It is a path problem, easily corrected by fixing the paths in the web.config file.

In the example below, you can see that Umbraco is installed in a virtual directory called “umbraco” which is a child of the Default Web Site. This will cause the standard paths to be incorrect. This installation was done with Microsoft’s Web Platform Installer, which does not appear to correct for the problem.

Here is a copy of the appSettings portion of the web.config file (located inside the umbraco folder) with the paths corrected for the folder configuration described above. Just open it up and edit it with Notepad to fix.

<appSettings>
<add key="umbracoDbDSN" value="datalayer=SqlServer;server=server;database=db;user id=dbuser;password=password" />
<add key="umbracoConfigurationStatus" />
<add key="umbracoReservedUrls"
value="/umbraco/config/splashes/booting.aspx,/umbraco/install/default.aspx,/umbraco/config/splashes/noNodes.aspx" />
<add key="umbracoReservedPaths" value="/umbraco/umbraco/,/umbraco/install/" />
<add key="umbracoContentXML" value="/umbraco/data/umbraco.config" />
<add key="umbracoStorageDirectory" value="/umbraco/data" />
<add key="umbracoPath" value="/umbraco/umbraco" />
<add key="umbracoEnableStat" value="false" />
<add key="umbracoHideTopLevelNodeFromPath" value="true" />
<add key="umbracoEditXhtmlMode" value="true" />
<add key="umbracoUseDirectoryUrls" value="false" />
<add key="umbracoDebugMode" value="true" />
<add key="umbracoTimeOutInMinutes" value="20" />
<add key="umbracoVersionCheckPeriod" value="7" />
<add key="umbracoDisableXsltExtensions" value="true" />
<add key="umbracoDefaultUILanguage" value="en" />
<add key="umbracoProfileUrl" value="profiler" />
<add key="umbracoUseSSL" value="false" />
<add key="umbracoUseMediumTrust" value="false" />
</appSettings>



European nopCommerce Hosting - UK :: How to Recover Your Administrator Password in nopCommerce?

November 27, 2014 10:21 by author Scott

In this tutorial, we will talk about how to recover the administrator password in nopCommerce. Suppose that, after installing nopCommerce, you accidentally deleted the administrator account. So, how do you recover the administrator account?

Don’t panic. Just follow the steps below:

1. Log in to the database in SQL Server (using a tool like SSMS).
2. Open the database and run this SQL query:

UPDATE Customer
SET Deleted = 0
WHERE Id = 1

The things you need to note:

a. Id is the account id of your admin account in the "Customer" table. Usually Id = 1 for the default admin account. If you created any extra admin accounts and are trying to recover one of them, you will need to check the "Customer" table to get the correct id for that specific account (or record).

b. Account records are never permanently deleted from the database. Only the value of the column "Deleted" changes from "False" to "True" when you delete an account. In the SQL script, we are simply changing the value back from "True" to "False".

3. How to recover the account if you do not have the "id"?

Well, as long as you remember the username or email address, you can change your sql script accordingly like this:

WHERE Username = 'MyUsername'

Or

WHERE Email = 'admin@yourstore.com'

Hope this helps!

 



European SQL 2014 Hosting - UK :: SQL Server 2014’s In-Memory OLTP

October 31, 2014 08:53 by author Scott

Transactions in SQL Server’s In-Memory OLTP are rather straight forward. While there are probably optimizations that are not discussed, the basic design pattern is fairly easy to follow and could probably be reused in other projects.

Transactions in SQL Server’s In-Memory OLTP rely on a timestamp-like construct known as a Transaction ID. A transaction uses two timestamps, one for the beginning of the operation and one that is assigned when the transaction is committed. While multiple transactions can share the same start value, the id assigned at commit time is unique.

 

Likewise each version of a row in memory has a starting and ending transaction id. The basic rule is that a transaction can only read data when RowVersion.StartingId <= Transaction.StartingId < RowVersion.EndingId.

For a DELETE operation, the row version’s ending id is initially set to the transaction’s starting id. Then a flag is set to indicate that a transaction is in process.

UPDATE operations begin like DELETE operations with the setting of an ending transaction id on the previous row version. Then a new row version is created with a starting transaction id that is equal to the transaction’s starting id. The ending id is initially set to infinity and again an active transaction flag is set. The old row version also gets a pointer to the new row version.

An INSERT operation is the same as an UPDATE without the need to delete the previous row version.

Commit and Validation

The commit phase starts by assigning a unique transaction id to the current transaction. Then a validation process begins in which the affected records are checked for isolation errors. The types of errors depend on the level of transactional isolation requested. Only three levels, Snapshot, Repeatable Read, and Serializable, are supported by memory-optimized tables.
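
For context, here is a minimal sketch of a memory-optimized table and an interpreted T-SQL transaction that touches it under snapshot isolation (all names are hypothetical, and the database needs a MEMORY_OPTIMIZED_DATA filegroup):

-- A durable memory-optimized table with a hash index on the primary key
CREATE TABLE dbo.SessionState
(
    SessionId INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    Payload   VARBINARY(2000) NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO

-- Interpreted T-SQL in an explicit transaction states the isolation level per table access
BEGIN TRANSACTION;
    UPDATE dbo.SessionState WITH (SNAPSHOT)
    SET Payload = 0x00
    WHERE SessionId = 42;
COMMIT TRANSACTION;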

Snapshot

Just like with normal tables, inserts into memory-optimized tables can fail if another transaction has attempted to insert a row at the same time. But the way it fails is a bit different. Normally one transaction has to wait for the other to complete, after which the losing transaction just sees the duplicate row and never makes the insertion attempt.

Here we instead see both transactions insert their row. After that, they will read back the data to see if they won the race. If they didn't, error 41325 is raised with the message "The current transaction failed to commit due to a serializable validation failure."

Repeatable Read Transactions

MSDN has this warning about the repeatable read isolation level, “One important thing to note is that, because the repeatable read isolation level is achieved using blocking of the other transaction, the use of this isolation level greatly increases the number of locks held for the duration of the transaction.”

Since memory optimized tables don’t have locks, repeatable read works very differently for them. Rather than blocking other transactions, it rereads the rows at the end of the transaction. If any of them have changed, the transaction is aborted. This is reflected in error code 41305 with the message “The current transaction failed to commit due to a repeatable read validation failure on table [name].”

Serializable Transactions

Like Repeatable Read, Serializable transactions traditionally relied on locks to keep other transactions from interfering with the data being examined. So instead SQL Server will check to see whether it failed to read any valid rows or encountered phantom rows. If either occurs, then again the transaction will be aborted.

Post Processing

If validation is successful, the ending transaction id of each affected row version is set to the transaction’s ending id. Likewise the starting id for new row versions (e.g. from inserts and updates) is set to the transaction’s ending id. The active flags are cleared and the indexes are updated to point to the new records.

Garbage Collection

It should be noted that the indexes are not necessarily updated to remove pointers to the old row versions. Nor are the old versions deleted immediately.

Instead the Memory Optimized Tables require the use of a reference counted garbage collector. Details are not available yet, but based on the rest of the design its behavior is predictable. The GC will need to start at the indexes and check to see which of them point to out of date rows. When detected, it can decrement the reference counter and update the index to point to the most recent version of the row. If the counter reaches zero, then the row version is deleted.

The tricky part with the garbage collector is to know which rows to look at in the first place. One would speculate that simply iterating over all the rows of each index would be rather cost prohibitive.

Design Notes

When using In-Memory OLTP, developers need to be much more aware of their access patterns. If code isn’t written to avoid overlapping transactions then the resulting isolation level violations will make aborted transactions much more common than they would be using traditional tables.

 



European SQL 2014 Hosting - UK :: SQL 2014 Buffer Pool Extensions

October 21, 2014 09:40 by author Scott

SQL Server 2014 is a great release from Microsoft, and in this post I will describe the Buffer Pool Extension in SQL Server 2014. In a previous article, we analyzed other new features in SQL 2014.

As you know, the Buffer Pool is one of the main memory consumers in SQL Server. When you read data from your storage, the data is cached in the Buffer Pool. SQL Server caches Execution Plans in the Plan Cache, which is also part of the Buffer Pool. The more physical memory you have, the larger your Buffer Pool will be (configured through the Max Server Memory setting).

A lot of SQL Server customers are dealing with the problem that physical memory is restricted in database servers: all memory slots are already occupied, so how do you add additional memory to the physical server? Of course, you can migrate to a larger server, but that’s another story… The solution to this specific problem is the introduction of Buffer Pool Extensions in SQL Server 2014.

Before talking about the configuration and the concrete use of the Buffer Pool Extensions, I want to talk a little bit about the architecture and design behind them. The traditional Buffer Pool of SQL Server always differentiates between clean and dirty pages. A clean page is a page which has the same content as the page in the storage. A dirty page has been changed in memory, but hasn’t yet been written to the storage. Around every minute the so-called CHECKPOINT process writes dirty pages out to the storage, which means that our dirty page becomes a clean page.

The Buffer Pool Extension itself is only used if the Buffer Pool of SQL Server gets into memory pressure. Memory pressure means that SQL Server needs more memory than is currently available. In that case the Buffer Pool evicts the pages which were least recently used, following a Least Recently Used (LRU) policy. If you have configured an Extension File, then SQL Server will write these pages into it, instead of writing them directly out to your slow storage. If the page is a dirty one, then the page will also be concurrently written to the physical storage (through an asynchronous I/O operation). Therefore you can’t lose any data when you are dealing with the Buffer Pool Extensions. At some point in time your Extension File will also be completely full. In that case SQL Server has to evict older pages from the Extension File (again through an LRU policy), and finally writes them to the physical storage. The Extension File just acts as an additional layer between the Buffer Pool and the storage itself.

Let’s now have a look at how to configure the Buffer Pool Extensions in SQL Server 2014. SQL Server offers the command ALTER SERVER CONFIGURATION SET BUFFER POOL EXTENSION. Let’s have a more detailed look at how you use it:

ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION ON
(
   FILENAME = 'd:\ExtensionFile.BPE',
   SIZE = 10 GB
)
GO

The first restriction that you are hitting here is the fact that the Extension File must have at least the same size as the Buffer Pool itself. If you are specifying a smaller file size, you will get a lovely error message from SQL Server:

Msg 868, Level 16, State 1, Line 1
Buffer pool extension size must be larger than the current memory allocation threshold 1596 MB. Buffer pool extension is not enabled.

The next restriction that you will definitely hit is that you can’t change the size of the Extension File during the runtime of SQL Server. For example, when you want to change the Extension File to a larger size, you have to disable the Buffer Pool Extensions and re-enable them again. You will of course see performance degradation during this operation, because you are disabling an important caching layer used by SQL Server!

Be aware of this fact, when you are planning a deployment of the Buffer Pool Extensions for your production environment!!!
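
In practice a resize therefore boils down to an OFF/ON cycle with a larger file. A sketch that reuses the file path from the example above:

ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION OFF
GO

ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION ON
(
   FILENAME = 'd:\ExtensionFile.BPE',
   SIZE = 20 GB
)
GO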

And in addition you are also not able to reduce the size of the Extension File; the file must always be larger than before. Otherwise you again get an error message:

Msg 868, Level 16, State 1, Line 3
Buffer pool extension size must be larger than the current memory allocation threshold 4096 MB. Buffer pool extension is not enabled.

The whole configuration of the Buffer Pool Extensions can be also retrieved through the Dynamic Management View sys.dm_os_buffer_pool_extension_configuration.
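
A quick way to check what is currently configured (a sketch; the column list is trimmed to the interesting ones):

SELECT path, state_description, current_size_in_kb
FROM sys.dm_os_buffer_pool_extension_configuration
GO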

When should you use the Buffer Pool Extensions? Microsoft makes the recommendation that your workload should be write-heavy, e.g. an OLTP workload. You should not look at the Buffer Pool Extensions when you are dealing with a DWH/BI-related workload – an Extension File doesn’t make sense here. And when we are talking about the Extension File itself, you should dedicate a very fast SSD to it! Traditional rotational hard disks are a no-go for it!

 



About HostForLIFE.eu

HostForLIFE.eu is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.

We have offered the latest Windows 2016 Hosting, ASP.NET Core 2.2.1 Hosting, ASP.NET MVC 6 Hosting and SQL 2017 Hosting.

