Windows 2012 Hosting - MVC 6 and SQL 2014 BLOG

Tutorial and Articles about Windows Hosting, SQL Hosting, MVC Hosting, and Silverlight Hosting

ASP.NET MVC 6 Hosting - ASPHostPortal :: Creating Custom Controller Factory ASP.NET MVC

February 20, 2016 00:01 by author Jervis

I was reading about “Control application behavior by using MVC extensibility points,” one of the objectives for the 70-486 Microsoft certification, and the explanation provided was not clear to me. So I decided to write about it to make it clear for myself, and I hope it helps you as well.

An ASP.NET MVC application contains the following class:

public class HomeController : Controller
{
    public HomeController(ILogger logger) // notice the parameter in the constructor
    {

    }
}

This throws an error with the DefaultControllerFactory (see the image below).

The application won’t be able to load the HomeController because its constructor has a parameter. You need to ensure that HomeController can be instantiated by the MVC framework. To accomplish this, we are going to use dependency injection.

The solution is to create a custom controller factory.

The DefaultControllerFactory calls the parameterless constructor of the controller class. To have the MVC framework create controller instances whose constructors take parameters, you must use a custom controller factory. To accomplish this, you create a class that implements IControllerFactory and implement its methods. You then call the SetControllerFactory method of the ControllerBuilder class, passing your custom controller factory.

Create a CustomControllerFactory that implements IControllerFactory:

public class CustomControllerFactory : IControllerFactory
{
    public CustomControllerFactory()
    {
    }

    public IController CreateController(System.Web.Routing.RequestContext requestContext, string controllerName)
    {
        ILogger logger = new DefaultLogger();
        var controller = new HomeController(logger);
        return controller;
    }

    public System.Web.SessionState.SessionStateBehavior GetControllerSessionBehavior(System.Web.Routing.RequestContext requestContext, string controllerName)
    {
        return System.Web.SessionState.SessionStateBehavior.Default;
    }

    public void ReleaseController(IController controller)
    {
        var disposable = controller as IDisposable;
        if (disposable != null)
        {
            disposable.Dispose();
        }
    }
}
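The factory above assumes an ILogger abstraction and a DefaultLogger implementation that this post never shows. A minimal sketch of what those hypothetical types might look like is below; substitute your own logging abstraction as needed.

// Hypothetical logger abstraction assumed by the examples in this post.
public interface ILogger
{
    void Log(string message);
}

// A trivial implementation that writes to the trace output.
public class DefaultLogger : ILogger
{
    public void Log(string message)
    {
        System.Diagnostics.Trace.WriteLine(message);
    }
}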

You can implement the CreateController() method in a more generic way by using reflection.

public class CustomControllerFactory : IControllerFactory
{
    private readonly string _controllerNamespace;

    public CustomControllerFactory(string controllerNamespace)
    {
        _controllerNamespace = controllerNamespace;
    }

    public IController CreateController(System.Web.Routing.RequestContext requestContext, string controllerName)
    {
        ILogger logger = new DefaultLogger();
        Type controllerType = Type.GetType(string.Concat(_controllerNamespace, ".", controllerName, "Controller"));
        IController controller = Activator.CreateInstance(controllerType, new object[] { logger }) as IController;
        return controller;
    }

    // GetControllerSessionBehavior and ReleaseController stay the same as in the previous example.
}

Set your controller factory in Application_Start by using the SetControllerFactory method:

protected void Application_Start()
{
    AreaRegistration.RegisterAllAreas();
    GlobalConfiguration.Configure(WebApiConfig.Register);
    FilterConfig.RegisterGlobalFilters(GlobalFilters.Filters);
    RouteConfig.RegisterRoutes(RouteTable.Routes);
    BundleConfig.RegisterBundles(BundleTable.Bundles);

    ControllerBuilder.Current.SetControllerFactory(typeof(CustomControllerFactory));
}
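Note that the typeof overload shown above only works when the factory has a parameterless constructor. If you use the reflection-based factory from earlier, which takes a namespace string, pass an instance instead. A short sketch, assuming a hypothetical MyApp.Controllers namespace:

ControllerBuilder.Current.SetControllerFactory(
    new CustomControllerFactory("MyApp.Controllers"));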

This could be one of the objectives of the Microsoft certification exam 70-486, more specifically under “Develop the user experience,” sub-objective “Control application behavior by using MVC extensibility points.”

I hope this helped you understand how to do dependency injection in controllers with MVC.

 

 



SQL 2014 Hosting - ASPHostPortal :: Make Your SSAS Works Like a Private Jet!

December 10, 2015 19:54 by author Jervis

SQL Server Analysis Services (SSAS) Tabular is a popular choice as an analytical engine for many customers. With its state-of-the-art compression algorithms, multi-threaded query processor and in-memory capabilities, SSAS Tabular can provide super quick access to data by reporting client applications. However, as a consultant, I have been called by many clients to resolve slow query performance when accessing data from SSAS Tabular models. My experiences have taught me most, if not all, of the performance issues can be resolved by taking care of the following five subject areas. 

Estimate Current Size and Growth Carefully

Tabular models compress data really well and, on average, you can expect compression ratios of around 10x (though it can be much more or less depending on the cardinality of your data). However, when you are estimating the size of your model as well as its future growth, a rough figure like this is not going to be enough. If you already have data sitting in a data warehouse, import a subset of that data (say, a month), find the model size for that, and then extrapolate the value based on the required number of rows/years as well as the number of columns. If not, try to at least get a subset of the data from the source to find the model size. Tools like BISM Memory Report and Vertipaq Analyzer can further help in this process.

It is also important to record the number of users who will be accessing the system currently as well as the estimated growth for the number of users.
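As a rough illustration of the kind of back-of-the-envelope arithmetic involved, here is a small C# sketch. All the numbers are hypothetical, the extrapolation is assumed to be roughly linear with the amount of history kept, and the 2.5x RAM rule of thumb is the one explained under “Amount of RAM” in the next section.

// Hypothetical sizing sketch; adjust the inputs to your own measurements.
double oneMonthModelSizeGb = 2.0;   // measured size of a model built from one month of data
int monthsOfHistory = 36;           // history the production model must hold
double estimatedModelSizeGb = oneMonthModelSizeGb * monthsOfHistory;   // 72 GB

// "Amount of RAM" tip: at least 2.5x the model size if processing happens on the same server.
double minimumRamGb = estimatedModelSizeGb * 2.5;                      // 180 GB

System.Console.WriteLine("Estimated model size: {0} GB, plan for at least {1} GB RAM",
    estimatedModelSizeGb, minimumRamGb);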

Select or Upgrade Hardware Appropriately

Everyone knows SSAS Tabular is a memory-intensive application, and one major issue I have seen is that only the RAM is considered when hardware selections are made. However, just focusing on the RAM is not enough; there are a lot of other variables. Assuming all the other variables are constant and there is an unlimited budget, these are the recommendations:

CPU Speed – The faster, the better. It will help in computing results faster, especially when there is a bottleneck on the single-threaded formula engine.

CPU Cores – In theory, the more the better, as it helps in managing concurrent user loads. In reality, however, a rise in the number of cores usually corresponds to a decrease in CPU speed due to management overhead, so a balanced approach has to be taken when determining core count and speed. Also, licensing costs increase with the number of cores for many software products.

CPU Sockets – The fewer, the better, as SSAS Tabular is not NUMA aware up to and including SQL Server 2014. This is expected to change in SQL Server 2016, where some NUMA optimization has been made. For large tabular models, it might be a challenge to go single socket, as the amount of RAM a system can support depends on the number of CPU sockets.

CPU Cache – The more, the better. Retrieving data from CPU caches is 10-100x faster than retrieving data from RAM.

CPU Architecture – The newer, the better, due to hardware performance optimizations. For example, an Intel Xeon processor with the Haswell architecture is always going to be faster than one with the Sandy Bridge architecture, keeping all other variables constant.

Amount of RAM – Have at least 2.5x the model size if the model is going to be processed on the same server. The amount of RAM can be less in certain scale-out architectures where the model is processed on a separate server.

RAM Speed – The faster, the better (yes, RAM has speed too!). This is very important for a memory-bound application like Tabular, so always go for the faster speeds if budget allows.

Storage – Not important for query performance, as it has no effect on it. However, if budget allows, it might not be a bad idea to get faster storage like SSDs, as that will help in maintenance-related activities like backups, or getting the tabular model online faster when the service is restarted. Apart from this, there are other factors, like network latency and server architecture (scale-out), that have to be considered, but depending on the budget and specific customer requirements, a balanced approach will have to be taken.

Design the Data Model Properly

Tabular performs really well and, in the case of small models, is extremely forgiving of bad design. However, when the amount of data grows, performance problems begin to show up. In theory, you will get the best performance in SSAS Tabular if the entire data set is flattened into a single table. In reality, however, this would translate to an extremely bad user experience as well as a lengthy and expensive ETL process, so the general best practice is to use a star schema. It is also recommended to include only the relevant columns from the source tables, as adding columns increases the model size, which in turn results in slower query performance. An increase in the number of rows might still be OK as long as the cardinality of the columns doesn’t change much.

Depending on specific customer requirements, there can be deviations from the best practices. For example, for a very large production model for a client, we built custom aggregate tables along with the detailed fact table. The resulting measure had a conditional statement to retrieve data from the aggregate table whenever the detail-level dimension data was not used in the report. Since the aggregate table was only 1/10 the size of the detailed fact table, the query came back 10x faster whenever the details were not used, which was almost 90% of the time.

Optimize the DAX Calculations

In the case of small models, Tabular is also extremely forgiving of bad DAX code. However, just as with bad design, performance takes a hit as you increase the data, add more users, or run complex queries. DAX is the most difficult item on this list to tune, and it is important to have a strategy for maintaining and tuning performance. A good place to start is the Performance Tuning of Tabular Models in SSAS 2012 whitepaper.

Monitor User Query Patterns and Train Users

Once your model is in production, it is important to keep monitoring the user query patterns as well as the resources to spot potential bottlenecks. Through this, you can find out whether performance issues are caused by inefficient DAX, bad design, insufficient resources or, most importantly, whether a user is simply using the model inefficiently. For example, in one case we found that slow performance for all users was due to a single user dumping the entire 100 GB model into spreadsheets so he could perform custom calculations on top of it. This blocked the queries of all the other users and made things really slow for them. With appropriate requirements gathering, we ensured that all the required calculations for that user were in the model and then trained the user to use the model for his analytics.

The success of any Tabular project depends on its adoption by the end users, and needless to say, adoption will be much better if the system is fast. These five tips will give you a jumpstart on that journey.

Looking to Use SQL Server Analysis Services Hosting?

As this is an intermediate service, you can get SQL Server Analysis Services on our Windows Dedicated Server plans. You can start from our Bundled Cloud Platinum Class to get this service running on your server. This plan includes:

- Windows Server 2008/2012 license
- 2 x 2.0 GHz Core
- 4 GB RAM --> FREE upgrade to 8 GB RAM (use promo code ‘DOUBLERAM’)
- 2 x 500 GB Storage
- 20,000 GB Bandwidth
- 1,000 Mbps Connection Speed
- 4 Static IPs
- Full 24 x 7 RDP Access
- Full 24/7 Firewall Protection
- SQL Server 2008/2012 Standard edition
- MySQL database support
- PLESK Control Panel
- Antivirus
- Unlimited MSSQL databases
- Unlimited MySQL databases
- FREE SmarterMail service if you register this December

Get a HUGE discount for this Christmas!! For more information, please visit our official site at http://www.asphostportal.com.



SQL 2014 Hosting - ASPHostPortal :: How to Optimize Your SQL Query

October 6, 2015 08:59 by author Jervis

Modern software applications have millions of concurrent users. Developing an efficient, serviceable application requires a huge amount of effort as well as many tools and techniques. Software developers always try to improve application performance by improving design, coding and database development. For database development, query optimization and evaluation techniques play a vital part.

Select only the required fields

It is very important to avoid selecting unnecessary data in a query. Select only the fields you need, not all the fields of the table.

SELECT login_id, password FROM tbluser

Index

Properly created indexes help optimize search performance. You need to understand the database well before selecting an index; choosing a heavily used field as the index key is very important.

CREATE CLUSTERED INDEX ind_login_id ON tbluser(login_id)

Primary Key

The primary key is the most important index of the table. The most important thing about a primary key is to choose a short and unique field. This leads to easy access to the data records.

CREATE TABLE tbluser(
  id INT,
  name VARCHAR(150),
  email VARCHAR(100),
  login_id VARCHAR(100),
  password VARCHAR(10),
  PRIMARY KEY (id)
)

Index unique column

Indexing a unique column improves searching and increases the efficiency of the database. You must have a good understanding of the data fields and how they are used before indexing a unique column. Indexing a rarely used column does not help improve the efficiency of the database.

CREATE INDEX ind_email ON tbluser(email)  

Select limited records

No user interface can visualize thousands of records at once, and there is rarely a reason to select them all, so always limit the selection when you have a large number of records and select only the required data.

SELECT TOP 10 id, name, email, login_id, password FROM tbluser

Selection of correct data type and length

Use the most appropriate data type and the correct length for the data. A bad data type choice will produce a bulky database and poor performance, while choosing correctly improves resource utilization on the database server.

CREATE TABLE tbluser(id INT,  
   name VARCHAR(150),  
   email VARCHAR(100),  
   login_id VARCHAR(100),  
   password VARCHAR(10)  
)  

Avoid in sub query

Avoid the use of IN sub-queries in your applications where you can. An IN sub-query may evaluate all the records of table A against table B (a product of records) before selecting the required data.

SELECT login_id,name, email FROM tbluser WHERE login_id IN ( SELECT login_id FROM tbllogin_details)

A better way is to use an INNER JOIN, as in the following:

SELECT login_id, name, email FROM tbluser INNER JOIN tbllogin_details ON tbluser.login_id = tbllogin_details.login_id

Avoid NOT operator

Avoid using the NOT operator when the number of qualifying records is lower than the number of unqualified records. Always prefer positive operators such as LIKE and EXISTS over NOT LIKE and NOT EXISTS.

SELECT * FROM tbluser WHERE email NOT LIKE '%gmail%'  

The preferred way is:

SELECT * FROM tbluser WHERE email LIKE '%yahoo%'  



SQL Hosting with ASPHostPortal :: Using SQLBulkCopy and C# to Upload File

August 12, 2015 08:17 by author Jervis

In this article I am going to write about SqlBulkCopy and its major properties and methods. It will give you the code for a high-performance transfer of rows from an XML file to SQL Server using SqlBulkCopy and C#.

SqlBulkCopy was introduced as part of .NET Framework 2.0. It is a simple and easy tool for transferring complicated or simple data from one data source to another. You can read data from any data source, as long as that data can be loaded into a DataTable or read by an IDataReader, and transfer it with high performance to SQL Server using SqlBulkCopy.

In real-world applications, millions of records are transferred from one data store to another every day. There are multiple ways to transfer the data, such as SQL Server's command-line bcp utility, generated INSERT statements, SSIS packages and SqlBulkCopy. SqlBulkCopy gives you a significant performance gain over the other tools.

SqlBulkCopy Constructors

A SqlBulkCopy instance can be initialized in four different ways:

1. Accepts an already open SqlConnection for the destination.
2. Accepts a connection string for the destination SqlConnection. This constructor actually opens and initializes a new SqlConnection instance for the destination.
3. Accepts a connection string for the destination SqlConnection and a SqlBulkCopyOptions enum value. This constructor also opens and initializes a new SqlConnection instance for the destination.
4. Accepts an already open SqlConnection and a SqlBulkCopyOptions enum value.

SqlBulkCopy bulkCopy =
    new SqlBulkCopy(destinationConnection.ConnectionString,
        SqlBulkCopyOptions.TableLock);

BatchSize

BatchSize is an integer property with a default value of 0. It decides how many rows are sent to the server in one batch. If you do not set a value for this property, or set it to 0, all the records are sent in a single batch.

The following example sets the BatchSize property to 50.

bulkCopy.BatchSize = 50;

ColumnMappings

ColumnMappings is a collection of the columns that need to be mapped from the source table to the destination table's columns. You do not need to map the columns if the column names are the same; however, it is very important to map them if the column names are different. If SqlBulkCopy cannot find a matching column, it throws a System.InvalidOperationException.

You can map the columns in different ways; giving both column names is an easy and readable method.

The code below maps the OrderID column of the source table to the NewOrderID column of the destination table.

bulkCopy.ColumnMappings.Add("OrderID", "NewOrderID");  

Data Type issue while mapping the column

SqlBulkCopy is particular about matching column data types. Both columns have to be of the same data type. If you have nullable columns, you explicitly have to convert such columns to the desired data type.

The code below converts NULL to an empty varchar(2), so the column can be mapped to any varchar(2) column of the destination table.

SELECT  CAST(ISNULL(ShipRegion,'') as varchar(2))
            as ShipRegion FROM Orders

Quick note: if you have computed columns like SUM, AVG, etc., make sure they return the expected data type. If your destination table expects a decimal(15,7) column, you will have to explicitly convert the source column to decimal(15,7), because SUM will by default return decimal(38,7).

DestinationTableName

This property sets the name of the destination table. The WriteToServer method will copy the source rows into this particular table.

The code below sets the destination table to "TopOrders".

bulkCopy.DestinationTableName = "TopOrders";   

NotifyAfter and SqlRowsCopied

NotifyAfter is an integer property with a default value of 0, and SqlRowsCopied is an event. The value of NotifyAfter indicates after how many rows the SqlRowsCopied event is raised.

The code below shows that after every 100 rows are processed, the SqlRowsCopied event handler is executed.

bulkCopy.SqlRowsCopied +=
    new SqlRowsCopiedEventHandler(OnSqlRowsTransfer);
bulkCopy.NotifyAfter = 100;

private static void OnSqlRowsTransfer(object sender, SqlRowsCopiedEventArgs e)
{
    Console.WriteLine("Copied {0} so far...", e.RowsCopied);
}

WriteToServer

WriteToServer is the method that actually copies your source data to the destination table. It accepts an array of DataRows, a DataTable or an IDataReader. With a DataTable you can also specify the state of the rows that need to be processed.

The following code copies only the rows from the sourceData DataTable whose RowState is Added into the destination table.

bulkCopy.WriteToServer(sourceData, DataRowState.Added);
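Putting the pieces together, here is a minimal end-to-end sketch. The file name, connection string and table/column names below are hypothetical examples, not part of the original article; adjust them to your own schema.

using System;
using System.Data;
using System.Data.SqlClient;

class BulkCopyFromXml
{
    static void Main()
    {
        // Hypothetical source file and destination connection string.
        string xmlPath = "Orders.xml";
        string connectionString = "Server=.;Database=Northwind;Integrated Security=true;";

        // Load the XML file into a DataTable via a DataSet.
        var dataSet = new DataSet();
        dataSet.ReadXml(xmlPath);
        DataTable sourceData = dataSet.Tables[0];

        using (var destinationConnection = new SqlConnection(connectionString))
        {
            destinationConnection.Open();

            using (var bulkCopy = new SqlBulkCopy(destinationConnection))
            {
                bulkCopy.DestinationTableName = "TopOrders";
                bulkCopy.BatchSize = 50;
                bulkCopy.NotifyAfter = 100;
                bulkCopy.SqlRowsCopied += (sender, e) =>
                    Console.WriteLine("Copied {0} so far...", e.RowsCopied);

                // If source and destination column names differ, map them explicitly.
                // (Once any mapping is added, only mapped columns are copied.)
                // bulkCopy.ColumnMappings.Add("OrderID", "NewOrderID");

                bulkCopy.WriteToServer(sourceData);
            }
        }
    }
}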



ASP.NET MVC 4 Hosting - ASPHostPortal :: How to Fix Could not load file or assembly DotNetOpenAuth.Core, Version=4.0.0.0

September 8, 2014 09:33 by author Jervis

We have seen many of our clients experience this problem, and we have also found this issue on many forums. So we decided to write up the solution in this tutorial and hope it will fix your problem.

Error Message

Could not load file or assembly 'DotNetOpenAuth.Core, Version=4.0.0.0, Culture=neutral, PublicKeyToken=2780ccd10d57b246' or one of its dependencies. The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040)

The Problem

The application was actually upgraded earlier from ASP.NET MVC 3, i.e. it was not autogenerated by Visual Studio 2012. The pre-binding info in the exception was:

=== Pre-bind state information ===
LOG: User = NT AUTHORITY\NETWORK SERVICE
LOG: DisplayName = DotNetOpenAuth.Core, Version=4.0.0.0, Culture=neutral, PublicKeyToken=2780ccd10d57b246
 (Fully-specified)
LOG: Appbase = file:///
LOG: Initial PrivatePath = \bin
Calling assembly : Microsoft.Web.WebPages.OAuth, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35.
===
LOG: This bind starts in default load context.
LOG: Using application configuration file: \web.config
LOG: Using host configuration file: C:\Windows\Microsoft.NET\Framework64\v4.0.30319\aspnet.config
LOG: Using machine configuration file from C:\Windows\Microsoft.NET\Framework64\v4.0.30319\config\machine.config.
LOG: Post-policy reference: DotNetOpenAuth.Core, Version=4.0.0.0, Culture=neutral, PublicKeyToken=2780ccd10d57b246
LOG: Attempting download of new URL file:////DotNetOpenAuth.Core.DLL.
LOG: Attempting download of new URL file:////DotNetOpenAuth.Core/DotNetOpenAuth.Core.DLL.
LOG: Attempting download of new URL file:////DotNetOpenAuth.Core.DLL.
WRN: Comparing the assembly name resulted in the mismatch: Minor Version
ERR: Failed to complete setup of assembly (hr = 0x80131040). Probing terminated.

The key pieces of information here are on lines 3 and 7: Microsoft.Web.WebPages.OAuth was built against DotNetOpenAuth.Core 4.0.0.0, while the DotNetOpenAuth.Core assembly actually deployed has a different version. Check your DotNetOpenAuth.Core version; the problem comes from this version mismatch.

Solution

1. Add these lines under the <runtime>/<assemblyBinding> section of the root Web.config:

<dependentAssembly>
  <assemblyIdentity name="DotNetOpenAuth.AspNet" publicKeyToken="2780ccd10d57b246" culture="neutral" />
  <bindingRedirect oldVersion="0.0.0.0-4.3.0.0" newVersion="4.3.0.0" />
</dependentAssembly>
<dependentAssembly>
  <assemblyIdentity name="DotNetOpenAuth.Core" publicKeyToken="2780ccd10d57b246" culture="neutral" />
  <bindingRedirect oldVersion="0.0.0.0-4.3.0.0" newVersion="4.3.0.0" />
</dependentAssembly>

2. The above solution works for other packages that ASP.NET MVC 4 depends on. For example, if you upgrade WebGrease from 1.0.0.0 to 1.3.0.0, you have to add this to the <runtime>/<assemblyBinding> section:

<dependentAssembly>
  <assemblyIdentity name="WebGrease" publicKeyToken="31bf3856ad364e35" culture="neutral" />
  <bindingRedirect oldVersion="0.0.0.0-1.3.0.0" newVersion="1.3.0.0" />
</dependentAssembly>

 

 



Windows Server 2008/2012 Hosting :: Unmanaged Dedicated Hosting vs Managed Dedicated Hosting

August 6, 2014 08:34 by author Jervis

We believe you are familiar with the terms “managed hosting” and “unmanaged hosting”. What are they about? When you purchase a VPS, cloud or dedicated server, you will see two options: managed or unmanaged hosting. In this post we will discuss the differences between managed hosting and unmanaged hosting, as well as the advantages and disadvantages of both, with a focus on managed hosting.

Imagine you’re buying a house and you have two options. You can buy a home with no strings attached, so that YOU are responsible for cleaning, landscaping, security, and maintenance; or you can pay a little extra and have a self-sustaining house that completes all those tasks for you. You can purchase a home that has automatic security and fire protection built in, one that cleans itself, protects itself and magically keeps the lawn and flower beds looking great without any effort on your part. Sounds nice, right?

Well, there’s a chance you might be thinking: Why would I pay extra if I know how to lock the doors, put out a fire, mop the floors, mow the lawn and plant flowers on my own?

Now, imagine you don’t know how to do ANY of those tasks: if you have no knowledge of how to manage a home, your house will be much better off in the long run if you purchase the fully managed, self-sustaining option. This scenario is much like what small business owners face when confronted with the hosting industry. Often they don’t have the tools or know-how to manage a server on their own, and a fully managed hosting plan can help.

It is the same in the hosting business. If you choose unmanaged hosting, you simply rent a server from a hosting provider, and the responsibility of managing that server falls on whoever has the most IT experience at your small business.

Managed hosting, on the other hand, basically provides your small business with an exclusive, out-of-the-office IT department. This type of hosting plan means that a perfectly capable team of professionals monitors your dedicated server 24/7/365, keeping your site performing at its best at all times. If you don’t have the experience to manage a server, then you should choose a managed service, just like in the house analogy above.

Technical Knowledge

There is no doubt that technical knowledge is a requirement when it comes to managing dedicated servers and other web hosting services. However, the amount of technical knowledge required differs between managed and unmanaged hosting. The first option does not need you to be an expert in server management, as a team from your web hosting provider will be there to manage your cloud or dedicated server. Although the full list may be quite long, the team's main responsibilities include server configuration, maintenance, security, upgrades, firewall services, backup services and so on. In other words, once you have purchased a managed web server, you do not have to be concerned about managing the technical aspects of your server. When you want to manage your website, you can log into the control panel and do things just as you would on a shared web server.

Unmanaged web hosting is quite the opposite in this respect. The things that come with an unmanaged dedicated server are the physical server itself and the initial installation procedure; you will have to do everything else yourself. The list of tasks includes software upgrades, security, frequent maintenance, etc. Thus, an unmanaged dedicated server needs you to be an expert in server management in all its aspects. So we would say that you should go for managed hosting if you want to skip the hassles of server management.

Security System

When it comes to the security of your web server, a managed dedicated server takes first prize, because the team behind the web host will be more than enough to keep your server and websites secure from almost every possible threat. An unmanaged server cannot guarantee that much security, even if you have hired an effective team for maintenance. So if you want to be less concerned about the security of your web server, choose a managed dedicated server without a doubt.

What Are the Benefits of Managed Hosting?

Security:

Hosting providers have the tools and experience to monitor and handle your security issues properly: they filter spam, scan for viruses, run security audits, configure software firewalls, and provide OS updates.

Backups:

Lost data can be detrimental and expensive for a company. In many cases a natural disaster is to blame for a significant loss of data, but a hosting provider has procedures to deal with data loss and provide secure, backed up data after such a situation occurs.

Server Monitoring:

This is the process of constantly scanning the server for any problems or irregularities, and it’s crucial for your small business. System admins will ensure network availability for your clients by constant monitoring.

Managed Storage & Database:

System admins will save you money by managing your storage needs with the help of a managed services provider, so that the amount of space needed is managed and scaled properly. Database management is more complicated; it is conducted by a database admin who can effectively design the best database to meet your company's specific needs and requirements.

Pricing

Indeed, pricing is a significant factor when choosing between managed hosting and unmanaged hosting. As you may have guessed, managed web servers cost more than unmanaged web servers, as such servers come with assistance as well as support. However, if you do not have enough technical knowledge about web servers, purchasing an unmanaged web server also means paying for a team that is capable of maintaining the server in its optimum functional condition.

Conclusion

Whether you choose unmanaged or managed hosting, you don't need to worry about spending a lot of money, as we have affordable Windows cloud dedicated servers that we believe will suit your requirements. Please check the cloud dedicated server prices on our site. Our Windows cloud dedicated servers start from only $18.00/month, and you can choose a managed or unmanaged service.



SQL 2014 Hosting - ASPHostPortal.com :: Introduction In-Memory OLTP in SQL 2014

July 4, 2014 06:49 by author Jervis

Microsoft's new release of SQL Server 2014 comes pretty close on the heels of the last SQL Server 2012 release. For many organizations, this could be a hurdle to adoption, because upgrading core pieces of an IT infrastructure can be both costly and resource-intensive. However, SQL Server 2014 has several compelling new features that can definitely justify an upgrade. Here is an overview of the SQL Server 2014 features:

1. New In-Memory OLTP Engine
2. Enhanced Windows Server 2012 Integration
3. Improvements in Business Intelligence
4. Office 365 Integration
5. And more

In today's post, I want to examine SQL Server 2014 In-Memory OLTP from different angles: how to start using it, directions for migration planning, a close review of many of its limitations, and a discussion of where SQL Server 2014 In-Memory OLTP is applicable, where it can be an alternative to in-memory dynamic caching, and where it is complementary.

The question now is: what is In-Memory Online Transaction Processing?

SQL Server 2014’s biggest feature is definitely its in-memory transaction processing, or In-Memory OLTP, which Microsoft claims makes database operations much faster. In-memory database technology for SQL Server has long been in the works under the code name “Hekaton”. Hekaton is a database engine component optimized for accessing memory-resident tables, and it is fully integrated into the SQL Server 2014 database engine.

The Function of In-Memory Online Transaction Processing

Hekaton facilitates the creation of memory-resident tables (i.e. memory-optimized tables) and indexes. Besides that, it also provides the option to compile Transact-SQL stored procedures that access memory-optimized tables to machine code. Memory-optimized tables provide better performance because the core engine uses lock-free algorithms that do not require locks or latches when the memory-optimized tables are referenced during transaction processing.
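To give a feel for what this looks like in practice, below is a minimal sketch of creating a memory-optimized table and a natively compiled stored procedure from C#. The connection string, database and object names are hypothetical, and the target database must already contain a MEMORY_OPTIMIZED_DATA filegroup before these statements will succeed.

using System.Data.SqlClient;

class MemoryOptimizedSetup
{
    static void Main()
    {
        // Hypothetical connection string; adjust to your environment.
        string connectionString = "Server=.;Database=SalesDb;Integrated Security=true;";

        // A memory-optimized table with a hash primary key.
        string createTable = @"
            CREATE TABLE dbo.ShoppingCart
            (
                ShoppingCartId INT NOT NULL
                    PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
                UserId INT NOT NULL,
                CreatedDate DATETIME2 NOT NULL
            )
            WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);";

        // A natively compiled stored procedure that inserts into the table.
        string createProc = @"
            CREATE PROCEDURE dbo.usp_AddCart @ShoppingCartId INT, @UserId INT
            WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
            AS
            BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
                INSERT INTO dbo.ShoppingCart (ShoppingCartId, UserId, CreatedDate)
                VALUES (@ShoppingCartId, @UserId, SYSDATETIME());
            END;";

        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            new SqlCommand(createTable, connection).ExecuteNonQuery();
            new SqlCommand(createProc, connection).ExecuteNonQuery();
        }
    }
}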

Low Cost Using In Memory OLTP

The SQL Server database engine was designed in the days when main memory was very costly. In that design, data is stored on disk and loaded into main memory as required for transaction processing, and any changes to the in-memory data are written back to disk. This disk I/O is the main bottleneck for OLTP applications with huge numbers of concurrent users, as it involves waiting for locks to be released, for latches to become available and for log writes to complete.

Nowadays main memory prices are much lower, and enterprises can easily afford production database servers with main memory sizes in the terabytes. This decline in memory prices made Microsoft rethink the original database engine, which was designed when main memory was costly. The result of this rethink is the In-Memory OLTP (a.k.a. Hekaton) database engine component, which supports memory-resident tables and indexes. The In-Memory OLTP engine uses lock-free algorithms (multi-version optimistic concurrency control) that do not require locks or latches when memory-optimized tables are referenced during transaction processing. To support data durability it still writes to the transaction log, but the amount of data written to the log is reduced considerably.

Migrating to SQL Server 2014 In-Memory OLTP

Migration to In-Memory OLTP has to be performed in a development environment and carefully tested. Your high-availability design, database design, table schemas and data, stored procedures, business logic in the database and even application code may all require many syntax changes to use In-Memory OLTP.

This is not a “click and migrate” process. It requires development cycles, application and database design and code changes.

The right way to use the In-Memory OLTP engine is:

  • Plan your production database architecture. In-Memory OLTP is very different and has many limitations in terms of the high availability, mirroring and replication functionality available;
  • Plan your new database (and possibly application) design carefully;
  • Migrate several specific tables and procedures that are good candidates to benefit;
  • Develop or change your business logic to fit the new design;
  • Test and evaluate;
  • Deploy

To evaluate whether In-Memory OLTP can improve your database performance, you can use Microsoft's new AMR (Analysis, Migrate and Report) tool. To help with the actual migration, you can use the Memory Optimization Advisor for tables and the Native Compilation Advisor to help port a stored procedure to a natively compiled stored procedure.

The AMR tool helps identify the tables and stored procedures that would benefit from being moved into memory and also helps perform the actual migration of those database objects. The AMR tool is installed when you select the “Complete” option for “Management Tools”, and is later accessed through SQL Server Management Studio (SSMS) under Reports –>> Management Data Warehouse Transaction Performance Reports:

The AMR tool provides reports showing which tables and procedures can benefit the most from In-Memory OLTP and gives a hint of how complex the conversion will be. The reports show recommendations based on usage, contention and performance. Here is an example (graphics may change in the GA release):

After you identify a table that you would like to port to In-Memory OLTP, you can use the Memory Optimization Advisor to help you migrate the disk-based table to In-Memory OLTP. In SSMS Object Explorer, right-click the table you want to convert and select Memory Optimization Advisor.

Conclusion

The above article is only a brief introduction to one of the new features in SQL Server 2014. Want to try more? We support the latest SQL Server 2014 hosting in our hosting environment. Just take a look at our site for more information.



Windows 2012 Hosting :: Hyper-v 3.0 Network Virtualization on Windows 2012

August 28, 2013 10:21 by author Mike

Windows Server 2012 introduces a slew of new technologies. These technologies enable Windows Server systems and virtual environments to meet all manner of new requirements and scenarios, including private and public cloud implementations. Often, this type of scenario involves a single infrastructure that's shared by different business units or even different organizations.

In this article, I want to describe network virtualization. Other great capabilities include a new site-to-site VPN solution; huge enhancements to the Server Message Block (SMB) protocol, enabling VMs to run from a Windows Server 2012 file share; native NIC teaming; and consistent device naming. But I want to focus on the major network technologies that most affect virtualization.

Virtualization has always striven to abstract one resource layer from another, giving improved functionality and portability. But networking hasn't embraced this goal, and VMs are tied to the networking configuration on the host that runs them. Microsoft System Center Virtual Machine Manager (VMM) 2012 tries to link VMs to physical networks through its logical networks feature, which lets you create logical networks such as Development, Production, and Backup. You can then create IP subnets and virtual LANs (VLANs) for each physical location that has a connection to a logical network. This capability lets you create VMs that automatically connect to the Production network, for example; VMM works out the actual Hyper-V switch that should be used and the IP scheme and VLAN tag, based on the actual location to which the VM is deployed.

This feature is great. But it still doesn't help in scenarios in which I might be hosting multiple tenants that require their own IP schemes, or even one tenant that requires VMs to move between different locations or between private and public clouds, without changing IP addresses or policies that relate to the network. Typically, public cloud providers require clients to use the hosted IP scheme, which is an issue for flexible migration between on-premises and off-premises hosting.

Both these scenarios require the network to be virtualized, and the virtual network must believe that it wholly owns the network fabric, in the same way that a VM believes it owns the hardware on which it runs. VMs don't see other VMs, and virtual networks shouldn't see or care about other virtual networks on the same physical fabric, even when they have overlapping IP schemes. Network isolation is a crucial part of network virtualization, especially when you consider hosted scenarios. If I'm hosting Pepsi and Coca-Cola on the same physical infrastructure, I need to be very sure that they can't see each other's virtual networks. They need complete network isolation.

This virtual network capability is enabled through the use of two IP addresses for each VM and a virtual subnet identifier that indicates the virtual network to which a particular VM belongs. The first IP address is the standard address that's configured within the VM and is referred to as the customer address (using IEEE terms). The second IP address is the address that the VM communicates over the physical network and is known as the provider address.

In the example that Figure 1 shows, we have one physical fabric. Running on that fabric are two separate organizations: red and blue. Each organization has its own IP scheme, which can overlap, and the virtual networks can span multiple physical locations. Each VM that is part of the virtual red or blue network has its own customer address. A separate provider address is used to send the actual IP traffic over the physical fabric.

Figure 1: Virtual networking example

You can see that the physical fabric has the network and compute resources and that multiple VMs run across the hosts and sites. The color of the VM coordinates with its virtual network (red or blue). Even though the VMs are distributed across hosts and locations, the hosts in the virtual networks are completely isolated from the other virtual networks with their own IP schemes.

Two solutions, IP rewrite and Generic Routing Encapsulation (GRE), enable network virtualization in Windows Server 2012. Both allow completely separate virtual networks with their own IP schemes (which can overlap) to run over one shared fabric.

IP rewrite. The first option is IP rewrite, which does exactly what the name suggests. Each VM has two IP addresses: a customer address, which is configured within the VM, and a provider address, which is used for the actual packet transmission over the network. The Hyper-V switch looks at the traffic that the VM is sending out, looks at the virtual subnet ID to identify the correct virtual network, and rewrites the IP address source and target from the customer addresses to the corresponding provider addresses. This approach requires many IP addresses from the provider address pool because every VM needs its own provider address. The good news is that because the IP packet isn't being modified (apart from the address), hardware offloads such as virtual machine queue (VMQ), checksum, and receive-side scaling (RSS) continue to function. IP rewrite adds very little overhead to the network process and gives very high performance.

Figure 2 shows the IP rewrite process, along with the mapping table that the Hyper-V host maintains. The Hyper-V host maintains the mapping of customer-to-provider addresses, each of which is unique for each VM. The source and destination IP addresses of the original packet are changed as the packet is sent via the Hyper-V switch. The arrows in the figure show the flow of IP traffic.

Figure 2: IP rewrite process

GRE. The second option is GRE, an Internet Engineering Task Force (IETF) standard. GRE wraps the originating packet, which uses the customer addresses, inside a packet that can be routed on the physical network by using the provider address and that includes the actual virtual subnet ID. Because the virtual subnet ID is included in the wrapper packet, VMs don't require their own provider addresses. The receiving host can identify the targeted VM based on the target customer address within the original packet and the virtual subnet ID in the wrapper packet. All the Hyper-V host running the originating VM needs to know is which Hyper-V host is running the target VM, so it can send the packet over the network.

The use of a shared provider address means that far fewer IP addresses from the provider IP pools are needed. This is good news for IP management and the network infrastructure. However, there is a downside, at least as of this writing. Because the original packet is wrapped inside the GRE packet, any kind of NIC offloading will break. The offloads won't understand the new packet format. The good news is that many major hardware manufacturers are in the process of adding support for GRE to all their network equipment, enabling offloading even when GRE is used.

Figure 3 shows the GRE process. The Hyper-V host still maintains the mapping of customer-to-provider address, but this time the provider address is per Hyper-V host virtual switch. The original packet is unchanged. Rather, the packet is wrapped in the GRE packet as it passes through the Hyper-V switch, which includes the correct source and destination provider addresses in addition to the virtual subnet ID.


Figure 3: GRE 

In both technologies, virtualization policies are used between all the Hyper-V hosts that participate in a specific virtual network. These policies enable the routing of the customer address across the physical fabric and track the customer-to-provider address mapping. The virtualization policies can also define which virtual networks are allowed to communicate with other virtual networks. The virtualization policies can be configured by using Windows PowerShell, which is a common direction for Windows Server 2012. This makes sense: when you consider massive scale and automation, the current GUI really isn't sufficient. The challenge when using native PowerShell commands is the synchronous orchestration of the virtual-network configuration across all participating Hyper-V hosts.

Both options sound great, but which one should you use? GRE should be the network virtualization technology of choice because it's faster than IP rewrite, provided the network hardware supports GRE; that support is important because otherwise GRE would break offloading, and offloading would have to be performed in software, which would be very slow. Also, because of the reduced provider address requirements, GRE places fewer burdens on the network infrastructure. However, until your networking equipment supports GRE, you should use IP rewrite, which requires no changes to the network infrastructure equipment.

 



Windows 2012 Hosting - ASPHostPortal :: Things to Know About Deduplication in Windows Server 2012

August 15, 2013 08:14 by author Jervis

Talk to most administrators about deduplication and the usual response is: Why? Disk space is getting cheaper all the time, with I/O speeds ramping up along with it. The discussion often ends there with a shrug.

But the problem isn’t how much you’re storing or how fast you can get to it. The problem is whether the improvements in storage per gigabyte or I/O throughputs are being outpaced by the amount of data being stored in your organization. The more we can store, the more we do store. And while deduplication is not a magic bullet, it is one of many strategies that can be used to cut into data storage demands.

Microsoft added a deduplication subsystem feature in Windows Server 2012, which provides a way to perform deduplication on all volumes managed by a given instance of Windows Server. Instead of relegating deduplication duty to a piece of hardware or a software layer, it’s done in the OS on both a block and file level — meaning that many kinds of data (such as multiple instances of a virtual machine) can be successfully deduplicated with minimal overhead.

If you plan to implement Windows Server 2012 deduplication technology, be sure you understand these seven points:

1. Deduplication is not enabled by default

Don’t upgrade to Windows Server 2012 and expect to see space savings automatically appear. Deduplication is treated as a file-and-storage service feature, rather than a core OS component. To that end, you must enable it and manually configure it in Server Roles | File And Storage Services | File and iSCSI Services. Once enabled, it also needs to be configured on a volume-by-volume basis.

2. Deduplication won’t burden the system

Microsoft put a fair amount of thought into setting up deduplication so it has a small system footprint and can run even on servers that have a heavy load. Here are three reasons why:

a. Content is only deduplicated after n number of days, with n being 5 by default, but this is user-configurable. This time delay keeps the deduplicator from trying to process content that is currently and aggressively being used or from processing files as they’re being written to disk (which would constitute a major performance hit).

b. Deduplication can be constrained by directory or file type. If you want to exclude certain kinds of files or folders from deduplication, you can specify those as well.

c. The deduplication process is self-throttling and can be run at varying priority levels. You can set the actual deduplication process to run at low priority and it will pause itself if the system is under heavy load. You can also set a window of time for the deduplicator to run at full speed, during off-hours, for example.

This way, with a little admin oversight, deduplication can be put into place on even a busy server and not impact its performance.

3. Deduplicated volumes are ‘atomic units’

‘Atomic units’ mean that all of the deduplication information about a given volume is kept on that volume, so it can be moved without injury to another system that supports deduplication. If you move it to a system that doesn’t have deduplication, you’ll only be able to see the nondeduplicated files. The best rule is not to move a deduplicated volume unless it’s to another Windows Server 2012 machine.

4. Deduplication works with BranchCache

If you have a branch server also running deduplication, it shares data about deduped files with the central server and thus cuts down on the amount of data needed to be sent between the two.

5. Backing up deduplicated volumes can be tricky

A block-based backup solution — e.g., a disk-image backup method — should work as-is and will preserve all deduplication data.

File-based backups will also work, but they won’t preserve deduplication data unless they’re dedupe-aware. They’ll back up everything in its original, discrete, undeduplicated form. What’s more, this means backup media should be large enough to hold the undeduplicated data as well.

The native Windows Server Backup solution is dedupe-aware, although any third-party backup products for Windows Server 2012 should be checked to see if deduplication awareness is either present or being added in a future revision.

6. More is better when it comes to cores and memory

Microsoft recommends devoting at least one CPU core and 350 MB of free memory to process one volume at a time, with around 100 GB of storage processed in an hour (without interruptions) or around 2 TB a day. The more parallelism you have to spare, the more volumes you can simultaneously process.

7. Deduplication mileage may vary

Microsoft has crunched its own numbers and found that the nature of the deployment affected the amount of space savings. Multiple OS instances on virtual hard disks (VHDs) exhibited a great deal of savings because of the amount of redundant material between them; user folders, less so.

In its rundown of what are good and bad candidates for deduping, Microsoft notes that live Exchange Server databases are actually poor candidates. This sounds counterintuitive; you’d think an Exchange mailbox database might have a lot of redundant data in it. But the constantly changing nature of data (messages being moved, deleted, created, etc.) offsets the gains in throughput and storage savings made by deduplication. However, an Exchange Server backup volume is a better candidate since it changes less often and can be deduplicated without visibly slowing things down.

How much you actually get from deduplication in your particular setting is the real test for whether to use it. Therefore, it’s best to start provisionally, perhaps on a staging server where you can set the “crawl rate” for deduplication as high as needed, see how much space savings you get with your data and then establish a schedule for performing deduplication on your own live servers.

 



Windows 2008 Hosting Tips :: Installing IIS 7.5 on Windows Server 2008/2008 R2

July 1, 2013 09:05 by author Mike

IIS (Internet Information Services) 7.5 is a web server application and a set of feature extension modules created by Microsoft for use with Microsoft Windows. It was released with Windows Server 2008 R2 and Windows 7. The installation procedure for IIS 7.5 differs slightly depending on which operating system you are using, so make sure you follow the correct method below for your operating system.

Follow these steps to install it:

  • Navigate to Start\Control Panel\Administrative Tools and open Server Manager.
  • In the tree menu on the left, select Roles.
  • In the main window, find the sub-section Roles Summary (it should be at the top) and click Add Roles.
  • On the pop-up window, read the security warnings and then select Next.
  • Select Web Server (IIS) from the list of check box options.
  • After reading the introduction to IIS, click Next to continue
  • From the check boxes, find the Application Development section and select CGI. This will allow us to install PHP later without any modifications.

 

  • Click Next to continue.
  • Click Install to begin the installation.
  • After installation has finished, click Close.

To check the installation:
- Navigate to Start\Control Panel\Administrative Tools and open Internet Information Services (IIS) Manager.

 

If it is installed properly, you will see the IIS Manager window, shown below.

That's it, IIS 7.5 is now installed and ready to be put into service.



About ASPHostPortal.com

We’re a company that works differently to most. Value is what we output and help our customers achieve, not how much money we put in the bank. It’s not because we are altruistic. It’s based on an even simpler principle. "Do good things, and good things will come to you".

Success for us is something that is continually experienced, not something that is reached. For us it is all about the experience – more than the journey. Life is a continual experience. We see the Internet as being an incredible amplifier to the experience of life for all of us. It can help humanity come together to explode in knowledge exploration and discussion. It is continual enlightenment of new ideas, experiences, and passions.


Corporate Address (Location)

ASPHostPortal
170 W 56th Street, Suite 121
New York, NY 10019
United States
