Windows 2012 Hosting - MVC 6 and SQL 2014 BLOG

Tutorial and Articles about Windows Hosting, SQL Hosting, MVC Hosting, and Silverlight Hosting

Windows 2016 Hosting :: How to Create and Configure VMs in Windows Server 2016 Hyper-V

January 21, 2019 08:04 by author Jervis

In this post, we will explore how to create and configure VMs in Windows Server 2016 Hyper-V.

Creating a New VM

First, you need to use Hyper-V Manager to connect to the Hyper-V host. Hyper-V Manager is included in the Remote Server Administration Tools (RSAT; a separate download) for client operating systems such as Windows 10, and can be added via the “install features” section of Server Manager on Windows Server 2016.

To begin, right-click your Hyper-V host and select New > Virtual Machine.

This launches the New Virtual Machine Wizard.

Begin the configuration by selecting a name for your VM.

Generation of the VM

Next, you are asked to select the Generation of the VM. There are two choices here: Generation 1 and Generation 2. What are the differences?

To start with, Generation 2 VMs are only compatible with Hyper-V versions 2012 R2 and later. Furthermore, only 64-bit versions of Windows Server 2012/Windows 8 and above are supported with Generation 2; 32-bit versions of those operating systems do not work. In fact, if you create a Generation 2 VM and try to boot from an ISO of a 32-bit OS, you receive an error stating that no boot media can be found. Microsoft has also been working on Generation 2 support for Linux; be sure to check with your particular distribution, as currently not all are supported. There is one more consideration: for those thinking of moving a previously created Hyper-V VM to Azure, Generation 2 is not supported.

For greater compatibility, including the ability to move to Azure, choose Generation 1. If none of the limitations mentioned apply and you want to use features such as UEFI Secure Boot, then Generation 2 is the preferred choice.

Once a VM is created, you cannot change the Generation. Make sure you choose the right Generation before proceeding.

Memory Management in Hyper-V

The next configuration section is where we can Assign Memory.

The memory management in Hyper-V has an option called Dynamic Memory; you can see the checkbox that can be selected to enable the feature at this stage. If you choose to enable this option, Hyper-V cooperates with the VM guest operating system in managing guest operating system memory.

Using the “hot add” feature, Hyper-V expands the guest operating system memory as memory demands increase within the guest. Dynamic Memory helps to dynamically and automatically divide RAM between running VMs, reassigning memory based on changes in their resource demands. This helps to provide more efficient use of memory resources on a Hyper-V host as well as greater VM density.

When you select Use Dynamic Memory for this virtual machine, you can set minimum and maximum values for the RAM that is dynamically assigned to the VM.

Networking Configuration

The next step in our VM configuration is Configuring Networking. For a VM to have network connectivity, you must attach it to a virtual switch that is connected to that network. You can also leave a VM in a disconnected state; a network connection is not required to complete the VM configuration. In this example, we are connecting the VM to the ExternalSwitch, a virtual switch connected to the production LAN.

Hard Disk Configuration

The next step is configuring the hard disk that is assigned to your VM. There are three options that you can choose from:

If you choose the Create a virtual hard disk option, you are creating a brand-new VHDX disk on your Hyper-V host. You can set the size of the disk as well; the wizard defaults to 127 GB, which can easily be changed.

The Use an existing virtual hard disk option lets you attach your new VM configuration to an existing virtual disk. Perhaps you copied over a vhdx file that you want to reuse with the new VM configuration. You can simply point the wizard to the vhdx file with this option.

With the third option – Attach a virtual hard disk later – you can choose to skip the creation of a hard disk in the wizard and assign a disk later.

There is one significant caveat to the Create a virtual hard disk option: you have no choice in the type of disk that is created. By default, Hyper-V creates “dynamically expanding” disks, which are thin-provisioned: space is consumed only as needed. There are some downsides to this approach, however. While the Hyper-V storage driver generally makes efficient use of resources, for the best performance many still prefer to provision thick, fixed-size disks in Hyper-V. To do that, choose the third option and attach a fixed-size virtual hard disk after your VM is created.

Installation Options

The next step is to go through the Installation Options. This means configuring how you want to install the guest operating system (OS) in your new VM.

The most common way is to Install an operating system from a bootable image file. You need to have an ISO file of the OS saved somewhere on your server. Simply guide the Wizard to the location using the Browse button.

Your alternatives are to Install an operating system later or Install an operating system from a network-based installation server.

You’ve now reached the summary of your configuration choices. Once you click Finish, your VM is created according to the options you specified.

Now that configuration and creation are complete, you can power on your VM. Simply right-click the VM and select Start.

 

You can connect to the console by right-clicking the VM and selecting Connect.

 

After connecting to the console, we should now be able to boot our VM and install the operating system as usual, through the operating system installation prompts.



Windows Server 2019 Hosting :: Top 6 Features in Windows Server 2019

January 11, 2019 07:43 by author Jervis

Windows Server 2019 is now generally available to the public! As you know, whenever Windows gets ready to make a major operating system release, it’s time to prepare for some changes. In this piece, we’ll give you a crash course in what to be excited (or worried) about in Server 2019, provide an overview of some exciting new features, and discuss how you can get your hands on Microsoft’s latest server operating system.

What can you expect from Windows Server 2019? Let’s get started. For your information, as a Microsoft hosting partner we will also support Windows Server 2019 in our hosting environment soon.

1. Enterprise-grade hyperconverged infrastructure (HCI)

With the release of Windows Server 2019, Microsoft rolls up three years of updates for its HCI platform. That’s because the gradual upgrade schedule Microsoft now uses includes what it calls Semi-Annual Channel releases – incremental upgrades as they become available. Then every couple of years it creates a major release called the Long-Term Servicing Channel (LTSC) version that includes the upgrades from the preceding Semi-Annual Channel releases. Windows Server 2019 is that LTSC release, previewed through Microsoft’s Insider program and now generally available.

The fundamental components of HCI (compute, storage and networking) have already been improved through the Semi-Annual Channel releases, but for organizations building datacenters and high-scale software-defined platforms, Windows Server 2019 is a significant release for the software-defined datacenter.

With the latest release, HCI is provided on top of a set of components that are bundled in with the server license. This means a backbone of servers running Hyper-V to enable dynamic increases or decreases of capacity for workloads without downtime.

2. GUI for Windows Server 2019

A surprise for many enterprises that started to roll out the Semi-Annual Channel releases of Windows Server 2016 was the lack of a GUI in those releases. The Semi-Annual Channel releases only supported ServerCore (and Nano) GUI-less configurations. With the LTSC release of Windows Server 2019, IT pros will once again get the desktop GUI of Windows Server in addition to the GUI-less ServerCore and Nano releases.

3. Project Honolulu

With the release of Windows Server 2019, Microsoft formally releases its Project Honolulu server management tool (since renamed Windows Admin Center). Project Honolulu is a central console that allows IT pros to easily manage GUI and GUI-less Windows Server 2019, 2016 and 2012 R2 servers in their environments.

Early adopters have appreciated the simplicity of management that Project Honolulu provides by rolling up common tasks such as performance monitoring (PerfMon), server configuration and settings tasks, and the management of Windows Services that run on server systems. This makes these tasks easier for administrators to manage across a mix of servers in their environment.

4. Improvements in security

Microsoft has continued to include built-in security functionality to help organizations address an “expect breach” model of security management.  Rather than assuming firewalls along the perimeter of an enterprise will prevent any and all security compromises, Windows Server 2019 assumes servers and applications within the core of a datacenter have already been compromised. 

Windows Server 2019 includes Windows Defender Advanced Threat Protection (ATP), which assesses common vectors for security breaches and automatically blocks and alerts about potential malicious attacks. Users of Windows 10 have received many of the Windows Defender ATP features over the past few months. Including Windows Defender ATP on Windows Server 2019 lets them take advantage of data storage, network transport and security-integrity components to prevent compromises on Windows Server 2019 systems.

5. Smaller, more efficient containers

Organizations are rapidly minimizing the footprint and overhead of their IT operations, replacing bloated servers with thinner and more efficient containers. Windows Insiders have benefited by achieving higher compute density to improve overall application operations with no additional expenditure on hardware server systems or expansion of hardware capacity.

Windows Server 2019 has a smaller, leaner ServerCore image that cuts virtual machine overhead by 50-80 percent. When an organization can get the same (or more) functionality in a significantly smaller image, it can lower costs and improve the efficiency of its IT investments.

6. Windows Subsystem for Linux

A decade ago, one would rarely mention Microsoft and Linux in the same breath as complementary platform services, but that has changed. Windows Server 2016 added open support for Linux instances as virtual machines, and the new Windows Server 2019 release makes huge headway by including an entire subsystem optimized for running Linux systems on Windows Server.

The Windows Subsystem for Linux extends basic virtual machine operation of Linux systems on Windows Server and provides a deeper layer of integration for networking, native filesystem storage and security controls. It can enable encrypted Linux virtual instances – exactly the way Microsoft provided Shielded VMs for Windows in Windows Server 2016, but now with native Shielded VMs for Linux on Windows Server 2019.

Enterprises have found that optimizing containers, along with the ability to natively support Linux on Windows Server hosts, can decrease costs by eliminating the need for two or three separate infrastructure platforms, running everything on Windows Server 2019 instead.

Because most of the “new features” in Windows Server 2019 have been included in updates over the past couple of years, these features are not earth-shattering surprises. However, it also means that the features in Windows Server 2019 that were part of the Windows Server 2016 Semi-Annual Channel releases have already been tried, tested, updated and proven, so when Windows Server 2019 ships, organizations don’t have to wait six to 12 months for a service pack of bug fixes.

This is a significant change that is helping organizations plan their adoption of Windows Server 2019 sooner than they might have adopted a major release platform in the past. It brings significant improvements for enterprise datacenters looking to Windows Server 2019 to meet the security, scalability and optimized-datacenter requirements so badly needed in today’s fast-paced environment.

 

 



IIS Hosting :: Tips to Monitor Your IIS Performance

December 21, 2018 08:03 by author Jervis

Need help monitoring IIS? This guide covers the basics, including HTTP ping checks, IIS Application Pools, and the important Windows Performance Counters. We also take a look at how to use an application performance management system to simplify all of this and get more advanced IIS performance monitoring for ASP.NET applications.

From Basics to Advanced IIS Performance Monitoring:

  • Ensuring your IIS Application is running
  • Windows performance counters for IIS & ASP.NET
  • Advanced IIS performance monitoring for ASP.NET

How to Monitor if Your IIS Application is Running

The first thing you want to do is set up monitoring to ensure that your application is running.

Website Monitor via HTTP Testing

One of the best and easiest things you can do is set up a simple HTTP check that runs every minute. This will give you a baseline to know if your site is up or down. It can also help you track how long your site takes to respond. You could also monitor for a 200 OK status or check whether the request returns specific text that you know should be included in the response.

Monitoring IIS via a simple HTTP check is also a good way to establish a basic SLA monitor. No matter how many servers you have, you can use this to know if your web application was online and available.

Here is an example of one of the HTTP checks we use against Elasticsearch to help monitor it. We do this via Retrace; you could also use tools like Pingdom. In this example, we receive alerts if the number_of_nodes is not what we are expecting or if the check doesn’t find an HTTP status of 200 OK.
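If you want to script such a check yourself, here is a minimal sketch in C# using HttpClient. The URL and expected text are placeholders; you would run this on a schedule (for example, once a minute) from your own scheduler or monitoring agent.

using System;
using System.Diagnostics;
using System.Net.Http;
using System.Threading.Tasks;

class SiteCheck
{
    // Placeholder values - point these at your own site.
    private const string Url = "http://www.example.com/";
    private const string ExpectedText = "Welcome";

    static void Main()
    {
        RunCheckAsync().GetAwaiter().GetResult();
    }

    static async Task RunCheckAsync()
    {
        using (var client = new HttpClient { Timeout = TimeSpan.FromSeconds(10) })
        {
            try
            {
                Stopwatch watch = Stopwatch.StartNew();
                HttpResponseMessage response = await client.GetAsync(Url);
                string body = await response.Content.ReadAsStringAsync();
                watch.Stop();

                // Healthy = success status code plus the text we expect in the response.
                bool healthy = response.IsSuccessStatusCode && body.Contains(ExpectedText);
                Console.WriteLine("{0} -> {1} in {2} ms (healthy: {3})",
                    Url, (int)response.StatusCode, watch.ElapsedMilliseconds, healthy);
            }
            catch (Exception ex)
            {
                // Timeouts, DNS failures and refused connections all count as "down".
                Console.WriteLine("{0} appears DOWN: {1}", Url, ex.Message);
            }
        }
    }
}

Logging the elapsed time on each run gives you both the up/down signal and the response-time baseline described above.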

Ensure Your IIS Application Pool is Running

If you have been using IIS very long, you have probably witnessed times when your application mysteriously stops working. After some troubleshooting, you may find that your IIS Application Pool is stopped for some reason, causing your site to be offline.

Sometimes an IIS Application Pool will crash and stop due to various fatal application errors, issues with the user the app pool runs under, bad configurations, or other random problems. It is possible for a pool to get into a state where it won’t start at all because of these types of problems.

It is a best practice to always monitor that your IIS Application Pool is started. It runs as w3wp.exe. Most monitoring tools have a way to monitor IIS Application Pools; our product, Retrace, monitors them by default.

One weird thing about app pools is that they can show as “Started” but may not actually be running as w3wp.exe if there is no traffic to your application. In these scenarios, w3wp.exe may not be running, but there is no actual problem. This is why you need to monitor the pool via IIS’s status and not just look for w3wp.exe running on your server.
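For the same reason, a home-grown check should ask IIS for the pool’s state rather than look for the process. Here is a minimal sketch using the Microsoft.Web.Administration API (reference Microsoft.Web.Administration.dll and run with administrative rights on the IIS server; the pool name is a placeholder):

using System;
using Microsoft.Web.Administration;

class AppPoolCheck
{
    static void Main()
    {
        // "MyAppPool" is a placeholder for your application pool's name.
        using (var serverManager = new ServerManager())
        {
            ApplicationPool pool = serverManager.ApplicationPools["MyAppPool"];
            if (pool == null)
            {
                Console.WriteLine("ALERT: application pool not found.");
                return;
            }

            // ObjectState.Started means IIS considers the pool running,
            // even if no w3wp.exe process exists yet because no traffic has arrived.
            Console.WriteLine("Pool state: " + pool.State);
            if (pool.State == ObjectState.Stopped)
            {
                Console.WriteLine("ALERT: application pool is stopped!");
            }
        }
    }
}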

Recommended Performance Counters for IIS Monitoring

One of the advantages of using IIS as a web server is all of the metrics available via Windows Performance Counters. There is a wide array of them available between IIS, ASP.NET and .NET. For this guide on IIS performance monitoring, I am going to review some of the top Performance Counters to monitor.

System/Process Counters

  • CPU %: Monitor overall server CPU usage as well as the CPU usage of your IIS Worker Process.
  • Memory: You should consider tracking the currently used and available memory for your IIS Worker Process.

IIS Performance Counters

  • Web Service – Bytes Received/Sec: Helpful to track to identify potential spikes in traffic.
  • Web Service – Bytes Sent/Sec: Helpful to track to identify potential spikes in traffic.
  • Web Service – Current Connections: Through experience with your app you can identify what is a normal value for this.

ASP.NET Performance Counters

  • ASP.NET Applications – Requests/Sec: You should track how many requests are handled by both IIS and ASP.NET. Some requests, like those for static files, may be handled by IIS alone and never touch ASP.NET.
  • ASP.NET Applications – Requests in Application Queue: If this number is high, your server may not be able to handle requests fast enough.
  • .NET CLR Memory – % Time in GC: If your app spends more than 5% of its time in garbage collection, you may want to review how object allocations are performed.

ASP.NET Error Rate Counters

  • .NET CLR Exceptions – # of Exceps Thrown: This counter allows you to track all .NET exceptions that are thrown, even if they are handled and swallowed. A very high rate of exceptions can cause hidden performance problems.
  • ASP.NET Applications – Errors Unhandled During Execution/sec: The number of unhandled exceptions that may have impacted your users.
  • ASP.NET Applications – Errors Total/Sec: Number of errors during compilations, pre-processing and execution. This may catch some types of errors that other Exception counts don’t include.

You should be able to monitor these Windows Performance Counters with most server monitoring solutions.

Note: Some Windows Performance Counters are difficult to monitor because the process name or ID changes constantly. You may find it hard to monitor them in some server monitoring solutions because of this.
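If you do need to collect these counters yourself, System.Diagnostics can read them directly. A minimal sketch (the category, counter and instance names follow the list above and assume IIS and ASP.NET are installed on the machine; "_Total" and "__Total__" are the aggregate instances):

using System;
using System.Diagnostics;
using System.Threading;

class IisCounters
{
    static void Main()
    {
        var connections = new PerformanceCounter("Web Service", "Current Connections", "_Total", readOnly: true);
        var requestsPerSec = new PerformanceCounter("ASP.NET Applications", "Requests/Sec", "__Total__", readOnly: true);

        while (true)
        {
            // Rate counters need two samples, so the first NextValue() call returns 0.
            Console.WriteLine("Current Connections: {0}, Requests/Sec: {1:F1}",
                connections.NextValue(), requestsPerSec.NextValue());
            Thread.Sleep(1000);
        }
    }
}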

Advanced IIS Performance Monitoring for ASP.NET

Some application monitoring tools, like Retrace, are designed to provide holistic monitoring for your ASP.NET applications. All you have to do is install them, and they can auto-detect all of your ASP.NET applications and automatically start monitoring the basics, including key Performance Counters and whether your IIS site and Application Pool are running.

Retrace also does lightweight profiling of your ASP.NET code. This gives you code-level visibility to understand how your application is performing and how to improve it.



OWIN Hosting :: Introduction about OWIN

November 12, 2018 07:34 by author Jervis

Introduction

If you look at the current web stacks in open source, they are evolving fast, with a wide range of capabilities being added day by day. On the other hand, Microsoft too is constantly working to update its web application stack and has released many new framework components. Though Microsoft’s Asp.Net is a very mature framework, it lacked some basic qualities – portability, modularity and scalability – that the web stacks in open-source communities were offering. This led to the development of OWIN, a specification for how an Asp.Net application and its hosting server have to be built so they can work without any dependency on each other and with minimal runtime packages. By implementing the OWIN specification, Asp.Net can become more modular, scale better and be ported easily to different environments, making it competitive with its open-source counterparts. Besides this, it also aims to nurture .Net open-source community participation in framework and tooling support.

OWIN

OWIN stands for Open Web Interface for .Net. It is a community-owned specification (or standard), not a framework of its own. OWIN defines an interface specification to de-couple the web server and the application using a simple delegate structure. We will discuss this delegate later in this article. Now, let’s take a closer look at the classic Asp.Net framework’s design issues in detail and how OWIN tries to mitigate them.

ASP.Net - Webserver Dependencies

The Asp.Net framework is strongly dependent on IIS and its capabilities, so it can be hosted only within IIS. This has made porting an Asp.Net application an impossible task. In particular, Asp.Net applications are built upon the assembly called System.Web, which in turn heavily depends on IIS to provide many of the web infrastructure features like request/response filtering, logging, etc.

The System.Web assembly also includes many default components that are plugged into the Http pipeline regardless of whether the application uses them. This means some unwanted features are executed in the pipeline for every request, which degrades performance. This is one reason open-source counterparts like NodeJs, Ruby, etc. have been able to perform better than the Asp.Net framework.

The OWIN specification was built to remove these dependencies, make the stack more modular and build a loosely coupled system. In simple terms, OWIN removes the Asp.Net application’s dependency on the System.Web assembly in the first place. That said, OWIN is not designed to replace the entire Asp.Net framework or IIS as such; when using the OWIN model we still develop an Asp.Net web application the same way we always have, but with some changes in the infrastructure services of the Asp.Net framework.

The other major drawback of the System.Web assembly is that it is bundled as part of the .Net Framework installer package. This made delivering updates and bug fixes to Asp.Net components a difficult and time-consuming task for Microsoft’s Asp.Net team. By removing the dependency on the System.Web assembly, Microsoft can now deliver its OWIN web stack updates faster through the NuGet package manager.

Implementing OWIN

As I said before, OWIN is not an implementation by itself. It just defines a simple delegate structure, commonly called the Application Delegate or AppFunc, designed for low-dependency interaction between the web server and the application. The AppFunc delegate signature is below.

Func<IDictionary<string, object>, Task> 

This delegate takes a dictionary object (IDictionary<string, object>) as its single argument, called the Environment Dictionary, and returns a Task object. The dictionary is mutable, meaning it can be modified further down the pipeline. All applications should implement this delegate to become OWIN compliant.

In an OWIN deployment, the OWIN host populates the environment dictionary with all the necessary information about the request and invokes the application delegate. This means the delegate is the entry point to the application; in other words, it is where the application’s startup/bootstrap happens, which is why the class hosting it is called the Startup class. The application can then modify or populate the response in the dictionary during its execution. There are some mandatory key/values in the environment dictionary that the host populates before invoking the application.
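To make the delegate concrete, here is a minimal sketch of an application delegate that writes a plain-text response using two of the standard owin.* environment keys defined by the spec. The class and method names are illustrative; how the delegate is wired to a server depends on your host.

using System.Collections.Generic;
using System.IO;
using System.Text;
using System.Threading.Tasks;

public class MyOwinApp
{
    // Matches the AppFunc signature: Func<IDictionary<string, object>, Task>
    public Task Invoke(IDictionary<string, object> environment)
    {
        // The host fills these keys in before invoking the application.
        var responseHeaders = (IDictionary<string, string[]>)environment["owin.ResponseHeaders"];
        var responseBody = (Stream)environment["owin.ResponseBody"];

        responseHeaders["Content-Type"] = new[] { "text/plain" };
        byte[] payload = Encoding.UTF8.GetBytes("Hello from an OWIN-compliant app!");
        return responseBody.WriteAsync(payload, 0, payload.Length);
    }
}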

Below are the components of an OWIN-based application that make this happen. From the OWIN spec:

  • Server — The HTTP server that directly communicates with the client and then uses OWIN semantics to process requests. Servers may require an adapter layer that converts to OWIN semantics.
  • Web Framework — A self-contained component on top of OWIN exposing its own object model or API that applications may use to facilitate request processing. Web Frameworks may require an adapter layer that converts from OWIN semantics.
  • Web Application — A specific application, possibly built on top of a Web Framework, which is run using OWIN compatible Servers.
  • Middleware — Pass through components that form a pipeline between a server and application to inspect, route, or modify request and response messages for a specific purpose.
  • Host — The process an application and server execute inside of, primarily responsible for application startup. Some Servers are also Hosts.

Building an OWIN compliant application

To be OWIN compliant, our Asp.Net application should implement the application delegate AppFunc. With the current set of framework components, we also need an actual OWIN implementation for the host, the Asp.Net application and the infrastructure service components. So building an OWIN-compliant application is not just about implementing the AppFunc delegate alone; it also requires other components. Here comes the need for Project Katana, which is Microsoft’s own implementation of this specification.

Asp.Net’s infrastructure services – authentication, authorization, routing and other request/response filtering – have to be provided by OWIN middleware (pass-through components) to avoid the dependency on IIS. These middleware components resemble the Http modules in the traditional Asp.Net pipeline: they are called in the same order they are added in the Startup class, similar to HttpModule event subscription in the classic Asp.Net application object (Global.asax). To recall, the class hosting our application’s AppFunc implementation is commonly called the Startup class. We will understand this better when we build our first application.
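As a minimal sketch of what such a Startup class and middleware pipeline look like with Katana (this assumes the Microsoft.Owin and Owin NuGet packages, plus a host package such as Microsoft.Owin.Host.SystemWeb; the messages are placeholders):

using System.Threading.Tasks;
using Microsoft.Owin;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Middleware run in the order they are added, like HttpModule events.
        app.Use(async (context, next) =>
        {
            // Inspect/modify the request here (logging, auth, routing, ...).
            await next.Invoke();
            // Inspect/modify the response here.
        });

        // Terminal "application" at the end of the pipeline.
        app.Run(context => context.Response.WriteAsync("Hello from Katana!"));
    }
}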

Project Katana has evolved a lot since its initial release and is now fully incorporated into the newest version of Asp.Net, called Asp.Net Core. The next section provides a brief history of OWIN implementations, from Project Katana to the Asp.Net Core releases.

Project Katana to Asp.Net Core

Project Katana is Microsoft’s first implementation of the OWIN specification, and it is delivered as NuGet packages. Developers can include these packages from NuGet and start working.

Microsoft then planned Asp.Net vNext, the next version after Asp.Net 4.6, with full support for OWIN, and Project Katana slowly began to retire. Note – any project built with the Katana libraries will continue to work as expected.

.Net Core 1.0 is another implementation of the .Net Framework: a portable, open-source, modular framework rebuilt from scratch with a new implementation of the CLR. Asp.Net vNext was then renamed Asp.Net 5.0 and is capable of running on both the .Net Core framework and the latest .Net Framework 4.6.2.

Asp.Net 5.0 was in turn renamed Asp.Net Core, since the framework had been rewritten from scratch and Microsoft felt the name was more appropriate. Asp.Net Core is delivered as NuGet packages and runs on both .Net Core 1.0 and .Net Framework 4.5.1+. So the latest version of Asp.Net is now officially called Asp.Net Core.

Though OWIN and Project Katana were released years ago, there have been lots of updates to their implementations since then. I hope this article helped you understand the current status and start the learning process of building OWIN-based applications.



ASP.NET MVC 6 Hosting - ASPHostPortal :: Creating Custom Controller Factory ASP.NET MVC

February 20, 2016 00:01 by author Jervis

I was reading about “Control application behavior by using MVC extensibility points”, which is one of the objectives for the 70-486 Microsoft certification, and the explanation provided was not clear to me. So I decided to write about it to make it clear for myself, and I hope this helps you as well.

An ASP.NET MVC application contains the following class:

public class HomeController : Controller
{
    public HomeController(ILogger logger) // notice the parameter in the constructor
    {
    }
}

This throws an error with the DefaultControllerFactory (see image below).

The application won’t be able to load the Home controller because its constructor has a parameter. You need to ensure that the HomeController can be instantiated by the MVC framework. To accomplish this, we are going to use dependency injection.

The solution is to create a custom controller factory.

The DefaultControllerFactory calls the default (parameterless) constructor of the controller class. To have the MVC framework create controller instances through constructors that have parameters, you must use a custom controller factory. To accomplish this, you create a class that implements IControllerFactory and implement its methods. You then call the SetControllerFactory method of the ControllerBuilder class, passing an instance (or the type) of your custom controller factory class.

Create a CustomControllerFactory that implements IControllerFactory:

using System;
using System.Web.Mvc;
using System.Web.Routing;
using System.Web.SessionState;

public class CustomControllerFactory : IControllerFactory
{
    public CustomControllerFactory()
    {
    }

    public IController CreateController(RequestContext requestContext, string controllerName)
    {
        // ILogger/DefaultLogger are the example dependency used throughout this article.
        ILogger logger = new DefaultLogger();
        var controller = new HomeController(logger);
        return controller;
    }

    public SessionStateBehavior GetControllerSessionBehavior(RequestContext requestContext, string controllerName)
    {
        return SessionStateBehavior.Default;
    }

    public void ReleaseController(IController controller)
    {
        var disposable = controller as IDisposable;
        if (disposable != null)
        {
            disposable.Dispose();
        }
    }
}

You can implement the CreateController() method in a more generic way, using reflection.

public class CustomControllerFactory : IControllerFactory
{
    private readonly string _controllerNamespace;

    public CustomControllerFactory(string controllerNamespace)
    {
        _controllerNamespace = controllerNamespace;
    }

    public IController CreateController(RequestContext requestContext, string controllerName)
    {
        ILogger logger = new DefaultLogger();
        // Build the fully qualified type name, e.g. "MyApp.Controllers.HomeController".
        Type controllerType = Type.GetType(string.Concat(_controllerNamespace, ".", controllerName, "Controller"));
        IController controller = Activator.CreateInstance(controllerType, new object[] { logger }) as Controller;
        return controller;
    }

    // GetControllerSessionBehavior and ReleaseController are the same as in the previous version.
}

Set your controller factory in Application_Start by using the SetControllerFactory method:

protected void Application_Start()
{
    AreaRegistration.RegisterAllAreas();
    GlobalConfiguration.Configure(WebApiConfig.Register);
    FilterConfig.RegisterGlobalFilters(GlobalFilters.Filters);
    RouteConfig.RegisterRoutes(RouteTable.Routes);
    BundleConfig.RegisterBundles(BundleTable.Bundles);

    // Passing the type requires a parameterless constructor. For the
    // reflection-based factory above, pass an instance instead, e.g.
    // ControllerBuilder.Current.SetControllerFactory(new CustomControllerFactory("MyApp.Controllers"));
    ControllerBuilder.Current.SetControllerFactory(typeof(CustomControllerFactory));
}

This could be one of the objectives of the Microsoft certification exam 70-486, specifically under “Develop the user experience”, sub-objective “Control application behavior by using MVC extensibility points”.

I hope this helped you understand how to do dependency injection in controllers with MVC.

 

 



ASP.NET MVC Hosting - ASPHostPortal :: Session State Performance Issue in ASP.NET MVC

January 15, 2016 20:54 by author Jervis

I cannot recall any real web application that doesn’t make use of the Session State feature, the one that is capable of storing data that is available across multiple requests from the same browser. Moreover, in these very modern times, web applications tend to make extensive use of Ajax requests to the server in order to avoid page refreshes. So the question here is: have you ever noticed performance issues while making multiple Ajax requests to an ASP.NET MVC action when using Session data?

Put Session into the Context

Before demonstrating the real problem with code, let’s cover the basics of how Session works. When a new request arrives at the server for the first time, which means it contains no session cookie, the server creates a new session identifier, accessible through

System.Web.HttpContext.Current.Session.SessionID

This does not mean, though, that from now on all requests back to the server will contain a session cookie. Instead, that happens only once a request stores some data in the Session. In other words, the ASP.NET Framework adds the session cookie to the response the first time some data is stored in the session. So how is this related to performance issues? Well, normally ASP.NET can process multiple requests from the same browser concurrently, which means, for example, that multiple Ajax requests can be processed simultaneously.

The above scheme only holds while no session cookie is contained in the browser’s request – in other words, while the server hasn’t yet stored any data in the Session. If the server does store some data in the session, and hence adds a session cookie to the response, then all subsequent requests carrying that session cookie are queued and processed sequentially.

This is actually normal behavior: consider, for example, that multiple requests might otherwise modify or read the same Session key simultaneously, which would certainly result in inconsistent data.

Code time

Create an ASP.NET Empty Web Application, checking the MVC option, and add the following HomeController.

[OutputCache(NoStore = true, Duration = 0)]
public class HomeController : Controller
{
    public List<string> boxes = new List<string>() { "red", "green", "blue", "black", "gray", "yellow", "orange" };

    // GET: Home
    public ActionResult Index()
    {
        return View();
    }

    public string GetBox()
    {
        System.Threading.Thread.Sleep(10);
        Random rnd = new Random();
        int index = rnd.Next(0, boxes.Count);

        return boxes[index];
    }

    public ActionResult StartSession()
    {
        System.Web.HttpContext.Current.Session["Name"] = "Jervis";

        return RedirectToAction("Index");
    }
}

The controller has a GetBox action that returns a random color from a list of colors defined in the class. We will use its value to add a div element to the view, with the background-color property set according to the returned value. Moreover, it has a StartSession action that simply stores a session value and hence is the point where the server adds a session cookie to the response. You will need to add the Index view, so right-click in the Index method, add the view and paste the following code.

<!DOCTYPE html>
<html lang="en" ng-app="asyncApp">
<head>
    <title>Troubleshooting ASP.NET MVC Performance Issues</title>
    <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.2/css/bootstrap.min.css">
    <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.2/css/bootstrap-theme.min.css">
    <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.2/jquery.min.js"></script>
    <script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.2/js/bootstrap.min.js"></script>
    <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.3.14/angular.js"></script>
    <link href="~/Content/Site.css" rel="stylesheet" />
</head>
<body ng-controller="asyncCtrl" ng-init="getBoxes()">
    <nav role="navigation" class="navbar navbar-default navbar-fixed-top">
        <div class="container-fluid">
            <!-- Brand and toggle get grouped for better mobile display -->
            <div class="navbar-header">
                <button type="button" data-target="#navbarCollapse" data-toggle="collapse" class="navbar-toggle">
                    <span class="sr-only">Toggle navigation</span>
                    <span class="icon-bar"></span>
                    <span class="icon-bar"></span>
                    <span class="icon-bar"></span>
                </button>
            </div>
            <!-- Collection of nav links and other content for toggling -->
            <div id="navbarCollapse" class="collapse navbar-collapse">
                <ul class="nav navbar-nav">
                    <li class="active"><a href="#">Performace testing</a></li>
                    <li>
                        @Html.ActionLink("Start Session", "StartSession")
                    </li>
                    <li>
                        <a class="links" ng-click="getBoxes()">Not resolved</a>
                    </li>
                    <li>
                        <a class="links" ng-click="getBoxes(true)">Resolved</a>
                    </li>
                    <li>
                        <form class="navbar-form">
                            <label class="checkbox" style="margin-top:5px">
                                @Html.CheckBox("isSessionNewChk", Session.IsNewSession, new { @disabled = "disabled" })
                                Is Session New
                            </label>
                        </form>
                    </li>
                </ul>
                <ul class="nav navbar-nav navbar-right">
                    <li><a href="#">{{boxes.length}} Boxes</a></li>
                </ul>
            </div>
        </div>
    </nav>
    <br /><br /><br />
    <div class="container">
        <div class="row">
            <div id="boxesContainer" ng-repeat="color in boxes track by $index">
                <div class="box" ng-class="color" />
            </div>
        </div>
        <br />
        <div class="row">
            <div id="timeOccured" ng-show="showResults" class="alert" ng-class="isResolved()" ng-bind="timeElapsed"></div>
        </div>
    </div>
    <script src="~/Scripts/app.js"></script>
</body>
</html>

Add a Content folder in the root of your application and create the following CSS stylesheet.

.box {
    height: 15px;
    width: 15px;
    margin: 2px;
    float:left;
} 

#timeOccured {
    clear:both;
} 

.links {
    cursor:pointer;
} 

.red {
    background-color: red;
} 

.green {
    background-color: green;
} 

.blue {
    background-color: blue;
} 

.black {
    background-color: black;
} 

.gray {
    background-color: gray;
} 

.yellow {
    background-color: yellow;
} 

.orange {
    background-color: orange;
}

You may have noticed that I make use of simple AngularJS code, but that’s OK if you aren’t familiar with it. Add a Scripts folder and create the following app.js JavaScript file.

angular.module('asyncApp', [])
    .value('mvcuri', 'http://localhost:49588/home/getbox')
    .value('mvcurisessionresolved', 'http://localhost:49588/SessionResolved/getbox')
    .controller('asyncCtrl', function ($http, $scope, mvcuri, mvcurisessionresolved) { 

        $scope.boxes = [];
        $scope.showResults = false;
        var uri; 

        $scope.getBoxes = function (resolved) {
            var start = new Date();
            var counter = 300; 

            if (resolved)
                uri = mvcurisessionresolved;
            else
                uri = mvcuri; 

            // Init variables
            $scope.boxes = [];
            $scope.showResults = false;
            $scope.timeElapsed = ''; 

            for (var i = 0; i < 300; i++) {
                $http.get(uri)
                    .success(function (data, status, headers, config) {
                        $scope.boxes.push(data);
                        counter--; 

                        if (counter == 0) {
                            var time = new Date().getTime() - start.getTime();
                            $scope.timeElapsed = 'Time elapsed (ms): ' + time;
                            $scope.showResults = true;
                        }
                    })
                        .error(function (error) {
                            $scope.timeElapsed = error.Message;
                        }).finally(function () {
                        });
            }
        }; 

        $scope.isResolved = function () {
            return uri == mvcuri ? 'alert-danger' : 'alert-success';
        } 

    });

Make sure you replace the localhost:port with your own. Before explaining all of the application’s functionality, let’s add a new MVC controller named SessionResolvedController.

[OutputCache(NoStore = true, Duration = 0)]
public class SessionResolvedController : Controller
{
    public List<string> boxes = new List<string>() { "red", "green", "blue", "black", "gray", "yellow", "orange" };

    public string GetBox()
    {
        System.Threading.Thread.Sleep(10);
        Random rnd = new Random();
        int index = rnd.Next(0, boxes.Count);

        return boxes[index];
    }
}

At this point you should be able to fire up your application.

So what is happening here? When the application starts, 300 Ajax calls are sent to the HomeController’s GetBox action, and for each color returned a corresponding box is added to the view.

<body ng-controller="asyncCtrl" ng-init="getBoxes()"> 

<div id="boxesContainer" ng-repeat="color in boxes track by $index">
     <div class="box" ng-class="color" />
 </div>

Every time you refresh the page you will notice that the Is Session New checkbox is checked, which means no session cookie has been sent to the browser yet. All Ajax requests are processed concurrently, as expected, and the time elapsed to complete all the Ajax calls is displayed in a red box under the boxes. The Start Session button simply stores a session value as follows.

System.Web.HttpContext.Current.Session["Name"] = "Jervis";

Press the button and notice the difference in the elapsed time for displaying all the 300 boxes. Also notice that the checkbox isn’t checked anymore.

Now that there is a session cookie in each request to the server, all requests are processed sequentially, as described previously. That’s why we need twice the time we needed before storing a session variable. You can repeat the same Ajax requests without refreshing the page by clicking the Not Resolved button.

Solution

At the moment, if you press the Resolved button you will get the same results calling the SessionResolvedController’s GetBox() action.

<a class="links" ng-click="getBoxes(true)">Resolved</a>

Notice that we don’t read or modify any Session state data when calling the GetBox() actions. So the question is: is there anything we can do to prevent our requests from being processed sequentially, despite the fact that they contain a session cookie? The answer is YES, and it’s hiding in the SessionState attribute that can be applied to the controller class as follows:

[SessionState(SessionStateBehavior.Disabled)]
public class SessionResolvedController : Controller
{
    public List<string> boxes = new List<string>() { "red", "green", "blue", "black", "gray", "yellow", "orange" };

    public string GetBox()
    {
        System.Threading.Thread.Sleep(10);
        Random rnd = new Random();
        int index = rnd.Next(0, boxes.Count);

        return boxes[index];
    }
}

Applying this attribute to the controller makes all requests targeting this controller process concurrently. Build and run your application and repeat what we did before. Notice that the Resolved button now finishes in about half the time – the time needed when no session cookie existed in the request.

I ran the application without debugging in order to get better results. If you run with debugging you will get higher delays, but that should not be a problem; the differences will still be noticeable.

And the following is a complete demonstration of what we have done so far. When I fired up the application for the first time, it took 928 milliseconds to complete all 300 Ajax requests. Then I clicked to create a session cookie, and that caused an important performance issue, increasing the delay to 2201 milliseconds! When I requested the data through the SessionResolvedController, the delay was almost the same as when there wasn’t any session cookie.

I have also applied the

[OutputCache(NoStore = true, Duration = 0)]

attribute to the MVC controllers, which asks the browser (client) not to cache the content retrieved from the server. This way the test is even more reliable.



SQL 2014 Hosting - ASPHostPortal :: Make Your SSAS Work Like a Private Jet!

December 10, 2015 19:54 by author Jervis

SQL Server Analysis Services (SSAS) Tabular is a popular choice as an analytical engine for many customers. With its state-of-the-art compression algorithms, multi-threaded query processor and in-memory capabilities, SSAS Tabular can provide reporting client applications with very quick access to data. However, as a consultant, I have been called in by many clients to resolve slow query performance when accessing data from SSAS Tabular models. My experience has taught me that most, if not all, of these performance issues can be resolved by taking care of the following five subject areas.

Estimate Current Size and Growth Carefully

Tabular models compress data really well; on average, you can expect around 10x compression (though it can be much more or less depending on the cardinality of your data). However, when you are estimating the size of your model, as well as its future growth, a rough figure like this is not going to be optimal. If you already have data sitting in a data warehouse, import a subset of that data – say, a month – to find the model size for that subset, and then extrapolate the value based on the required number of rows/years as well as the number of columns. If not, try to at least get a subset of the data from the source to find the model size. Tools like BISM Memory Report and Vertipaq Analyzer can further help in this process.
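As a purely illustrative calculation: if one month of imported data compresses to a 2 GB model, a three-year model of similar data would come in around 36 × 2 GB = 72 GB, before allowing for extra columns or rising cardinality.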

It is also important to record the number of users who will be accessing the system now, as well as the estimated growth in the number of users.

Select or Upgrade Hardware Appropriately

Everyone knows SSAS Tabular is a memory-intensive application, and one major issue I have seen is that only the RAM is considered when hardware selections are made. However, just focusing on the RAM is not enough; there are a lot of other variables. Assuming all other variables are constant and the budget is unlimited, these are the recommendations:

CPU Speed – The faster, the better. This will help in computing results faster, especially when there is a bottleneck on the single-threaded formula engine.

CPU Cores – In theory, the more the better, as cores help in handling concurrent user loads. In reality, however, a rise in the number of cores usually corresponds to a decrease in CPU speed due to management overhead, so a balanced approach has to be taken when determining CPU cores and speed. Also, licensing costs increase with the number of cores for much software.

CPU Sockets – The fewer, the better, as SSAS Tabular is not NUMA aware up to SQL 2014. This is expected to change in SQL 2016, where some NUMA optimization has been made. For large tabular models, it might be a challenge to go single socket, as the amount of RAM a system can support depends on the CPU sockets.

CPU Cache – The more, the better. Retrieving data from CPU caches is 10-100x faster than retrieving it from RAM.

CPU Architecture – The newer, the better, due to hardware performance optimizations. For example, an Intel Xeon processor with the Haswell architecture is always going to be faster than one with the Sandy Bridge architecture, all other variables being constant.

Amount of RAM – Should be at least 2.5x the model size if the model is going to be processed on the same server (illustrative numbers: a 40 GB model would call for at least 2.5 × 40 GB = 100 GB of RAM). The amount of RAM can be less in certain scale-out architectures where the model is processed on a separate server.

RAM Speed – The faster, the better (yes, RAM has speed too!). This is very important for a memory-bound application like Tabular; always go for faster speeds if the budget allows.

Storage – Not important for query performance, as it has no effect there. However, if the budget allows, it might not be a bad idea to get faster storage like SSDs; that will help with maintenance-related activities like backups, or even getting the tabular model online faster when the service is restarted. Apart from this, there are other factors to consider, like network latency and server architecture (scale-out), but depending on the budget and specific customer requirements, a balanced approach will have to be taken.

Design the Data Model Properly

Tabular is really good at performance and, in the case of small models, is extremely forgiving of bad design. However, when the amount of data grows, performance problems begin to show up. In theory, you get the best performance in SSAS Tabular if the entire dataset is flattened into a single table; in reality, this would translate to an extremely bad user experience as well as a lengthy and expensive ETL process. So the general best practice is to use a star schema. Also, it is recommended to include only the relevant columns from the source tables, as adding columns increases the model size, which in turn slows query performance. An increase in the number of rows might still be fine as long as the cardinality of the columns doesn’t change much.

Depending on specific customer requirements, there can be deviations from the best practices. For example, for a very large production model for one client we built custom aggregate tables alongside the detailed fact table. The resulting measure had a conditional statement to retrieve data from the aggregate table whenever detail-level dimension data was not used in the report. Since the aggregate table was only 1/10 the size of the detailed fact table, queries came back 10x faster whenever the details were not used, which was almost 90% of the time.

Optimize the DAX Calculations

In the case of small models, Tabular is also extremely forgiving of bad DAX code. However, just as with bad design, performance takes a turn for the worse as you increase the data, add more users, or run complex queries. DAX is the hardest item on this list to tune, and it is important to have a strategy for maintaining and tuning performance. A good place to start is the Performance Tuning of Tabular Models in SSAS 2012 whitepaper.

Monitor User Query Patterns and Train Users

Once your model is in production, it is important to keep monitoring user query patterns as well as resources to spot potential bottlenecks. Through this, you can find out whether performance issues are caused by inefficient DAX, bad design, insufficient resources or, most importantly, a user simply using the model inefficiently. For example, in one case we found that slow performance for all users was caused by a single user dumping the entire 100 GB model into spreadsheets so he could perform custom calculations on top of it. This blocked the queries of all other users and made things really slow for them. With appropriate requirements gathering, we ensured all the calculations that user needed were in the model and then trained the user to use the model for his analytics.

The success of any Tabular project depends on adoption by the end users, and needless to say, adoption is much better when the system is fast. These five tips will give you a jumpstart on that journey.

Looking to Use SQL Server Analysis Services Hosting?

As this is an intermediate service, you can get SQL Server Analysis Services on our Windows Dedicated Server plan. You can start from our Bundled Cloud Platinum Class to get this service running on your server. This plan includes:

- Windows Server 2008/2012 license
- 2 x 2.0 Ghz Core
- 4 GB RAM --> FREE Upgrade to 8 GB RAM (Use Promo Code ‘DOUBLERAM’)
- 2 x 500 GB Storage
- 20000 GB Bandwidth
- 1000 Mbps Connection speed
- 4 Static IP
- Full 24 x 7 RDP Access
- Full 24/7 Firewall Protection
- Standard ver. SQL 2008/2012
- Support MySQL db
- PLESK Control Panel
- Antivirus
- Unlimited MSSQL databases
- Unlimited MySQL databases
- FREE SmarterMail service if you register this December

Get a HUGE discount for this Christmas!! For more information, please visit our official site at http://www.asphostportal.com.



ASP.NET MVC 6 Hosting - ASPHostPortal :: Introduction Tag Helpers MVC 6

December 3, 2015 21:31 by author Jervis

MVC 6 introduces a new feature called Tag Helpers. In this post, we will explore how Tag Helpers can be used to improve the readability of your Razor views that generate HTML forms.

How do Tag Helpers work?

Tag Helpers are an alternative to HTML helpers for generating HTML. The easiest way to show this is with an example.

Let’s start by looking at an extremely simple example: a login view bound to a LoginViewModel containing a UserName and a Password:

public class LoginViewModel
{
    public string UserName { get; set; }
    public string Password { get; set; }
}

Here is how we would generate an HTML input for the UserName property using an HTML Helper and a Tag Helper.

<!--Create an input for UserName using Html Helper-->
@Html.EditorFor(l => l.UserName)
<!--Create an input for UserName using Tag Helper-->
<input asp-for="UserName" />

With the HTML helper, we call C# code that returns some HTML. With Tag Helpers, we augment some HTML with special tag helper attributes. These special attributes are processed by MVC, which generates the HTML. Both of the approaches above will generate an input that looks like this:

<input name="UserName" class="text-box single-line" id="UserName" type="text" value="">

Why is this better?

At first glance, it might look like Tag Helpers are just a syntax change with no obvious benefits. The difference, however, can make your Razor forms much more readable. Let’s say we wanted to do something simple, like add a class to our UserName input:

<!--Create an input with additional class for UserName using Html Helper-->
@Html.EditorFor(l => l.UserName, new { htmlAttributes = new { @class = "form-control" } })
<!--Create an input with additional class for UserName using Tag Helper-->
<input asp-for="UserName" class="form-control" />

As you can see, the HTML helper approach becomes very hard to understand while the tag helper approach is very clear and concise.

Here is a full blown example of a login form using HTML helpers:

@using (Html.BeginForm("Login", "Account", new { ReturnUrl = ViewBag.ReturnUrl }, 
FormMethod.Post, new { role = "form" }))
{
    @Html.AntiForgeryToken()
    @Html.ValidationSummary(true, "", new { @class = "text-danger" })
 
    <div class="form-group">
        <div class="row">
            @Html.LabelFor(m => m.UserName, new { @class = "col-md-2 control-label" })
            <div class="col-md-10">
                @Html.TextBoxFor(m => m.UserName, new { @class = "form-control" })
                @Html.ValidationMessageFor(m => m.UserName, "", new { @class = "text-danger" })
            </div>
        </div>
    </div>
    <div class="form-group">
        <div class="row">
            @Html.LabelFor(m => m.Password, new { @class = "col-md-2 control-label" })
            <div class="col-md-10">
                @Html.PasswordFor(m => m.Password, new { @class = "form-control" })
                @Html.ValidationMessageFor(m => m.Password, "", new { @class = "text-danger" })
            </div>
        </div>
    </div>
 
    <div class="form-group">
        <div class="row">
            <div class="col-md-offset-2 col-md-2">
                <input type="submit" value="Log in" class="btn btn-primary" />
            </div>
        </div>
    </div>
}

And now the same form using tag helpers:

<form asp-controller="Account" asp-action="Login" asp-route-returnurl="@ViewBag.ReturnUrl" 
method="post" class="form-horizontal" role="form">
    <div asp-validation-summary="ValidationSummary.All" class="text-danger"></div>
    <div class="form-group">
        <label asp-for="UserName" class="col-md-2 control-label"></label>
        <div class="col-md-10">
            <input asp-for="UserName" class="form-control" />
            <span asp-validation-for="UserName" class="text-danger"></span>
        </div>
    </div>
    <div class="form-group">
        <label asp-for="Password" class="col-md-2 control-label"></label>
        <div class="col-md-10">
            <input asp-for="Password" class="form-control" />
            <span asp-validation-for="Password" class="text-danger"></span>
        </div>
    </div>
    <div class="form-group">
        <div class="col-md-offset-2 col-md-10">
            <input type="submit" value="Log in" class="btn btn-default" />
        </div>
    </div>
</form>

Overall, the tag helper version is much more readable. I especially like that we no longer need a using statement to generate a form element; that had always felt like a bit of a hack to me.

Another nice thing about the form tag helpers is that we don’t need to remember to explicitly add the AntiForgeryToken. The form tag helper adds it automatically unless we explicitly turn it off using asp-anti-forgery="false".

Visual Studio also does a good job of highlighting the tag helper attributes, so it is easy to distinguish them from regular HTML attributes.

We also get full IntelliSense inside the asp-for attributes.

How to enable Tag Helpers

The MVC Tag Helpers are located in the Microsoft.AspNet.Mvc.TagHelpers package, so you will need to add a reference to it in your project.json file. Once you have added the reference, you can enable tag helpers in all your views by adding the following line to _GlobalImports.cshtml:

@addTagHelper "*, Microsoft.AspNet.Mvc.TagHelpers"



ASP.NET MVC Hosting - ASPHostPortal :: The Difference Between Controller and View in ASP.NET MVC

clock November 12, 2015 20:28 by author Jervis

One of the basic rules of MVC is that views should be only – exactly – views, that is to say: objects that present to the user something that has already been worked out and calculated.

They should perform little, if any, calculation. All the significant code should live in the controllers. This allows for better testability and maintainability.

Is this, in Microsoft’s interpretation of MVC, also justified by performance?

We tested this with a very simple piece of code that does the following:

– creates 200000 “cat” objects and adds them to a List

– creates 200000 “owner” objects and adds them to a List

– creates 200000 “catowner” objects (the MTM relation among cats and owners) and adds them to a List

– navigates through each cat, finds his/her owner, and removes the owner from the list of owners (we don’t know if the cats really wanted this, but their freedom in code fits our purposes).

We ran this code in a controller and in a Razor view.

The results seem to suggest that code in views runs just as fast as code in controllers, even if we don’t pre-compile the views (the compilation time in our test is negligible).

The average result for the code with the logic in the controller is 18.261 seconds.

The average result for the code with the logic in the view is 18.621 seconds.

The performance seems therefore very similar.

Here is how we got to this result.

Case 1: Calculations are in the CONTROLLER

Models:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;

namespace WebPageTest.Models
{
    public class Owner
    {
        public string Name { get; set; }
        public DateTime DOB { get; set; }
        public virtual CatOwner CatOwner { get; set; }
    }

    public class Cat
    {
        public string Name { get; set; }
        public DateTime DOB { get; set; }
        public virtual CatOwner CatOwner { get; set; }
    }

    public class CatOwner
    {
        public virtual Cat Cat { get; set; }
        public virtual Owner Owner { get; set; }
    }
}

Controller:

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Web;
using System.Web.Mvc;
using WebPageTest.Models;

namespace WebPageTest.Controllers
{
    public class HomeController : Controller
    {
        public ActionResult Index()
        {
            Stopwatch howLongWillItTake = new Stopwatch();
            howLongWillItTake.Start();
            List<Owner> allOwners = new List<Owner>();
            List<Cat> allCats = new List<Cat>();
            List<CatOwner> allCatOwners = new List<CatOwner>();
            // create lists with 200000 cats, 200000 owners, 200000 relations
            for (int i = 0; i < 200000; i++)
            {
                // Cat
                Cat CatX = new Cat();
                CatX.Name = "Cat " + i.ToString();
                CatX.DOB = DateTime.Now.AddDays(i / 10);
                // Owner
                Owner OwnerX = new Owner();
                OwnerX.Name = "Owner " + i.ToString();
                OwnerX.DOB = DateTime.Now.AddDays(-i / 10);
                // Relationship "table"
                CatOwner CatOwnerXX = new CatOwner();
                CatOwnerXX.Cat = CatX;
                // Relations
                CatOwnerXX.Owner = OwnerX;
                CatX.CatOwner = CatOwnerXX;
                OwnerX.CatOwner = CatOwnerXX;
                // add to lists
                allCats.Add(CatX);
                allOwners.Add(OwnerX);
                allCatOwners.Add(CatOwnerXX);
            }
            // now remove every owner from the owners list
            foreach (Cat CatToDelete in allCats)
            {
                Owner OwnerToRemove = CatToDelete.CatOwner.Owner;
                allOwners.Remove(OwnerToRemove);
            }
            // now all cats are free
            int numberOfCats = allCats.Count();
            int numberOfOwners = allOwners.Count();
            howLongWillItTake.Stop();
            long elapsedTime = howLongWillItTake.ElapsedMilliseconds;
            // give info to the view
            ViewBag.numberOfCats = numberOfCats;
            ViewBag.numberOfOwners = numberOfOwners;
            ViewBag.elapsedTime = elapsedTime;
            return View();
        }
    }
}

View:

<div class="row">
    <div class="col-md-12">
        <hr />
        <b>Results</b>
        <br />
        Cats: @ViewBag.numberOfCats
        <br />
        Owners: @ViewBag.numberOfOwners
        <br />
        Elapsed time in milliseconds: @ViewBag.elapsedTime
        <hr />
    </div>
</div>

Case 2: Calculations are in the VIEW (pre-compiled)

Models: same as above

Controller:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Mvc;

namespace WebPageTest.Controllers
{
    public class HomeBisController : Controller
    {
        public ActionResult Index()
        {
            return View();
        }
    }
}

View:

@using System
@using System.Collections.Generic
@using System.Diagnostics
@using System.Linq
@using System.Web
@using WebPageTest.Models
@using System.Web.Mvc
@{
    Stopwatch howLongWillItTake = new Stopwatch();
    howLongWillItTake.Start();
    List<Owner> allOwners = new List<Owner>();
    List<Cat> allCats = new List<Cat>();
    List<CatOwner> allCatOwners = new List<CatOwner>();
    // create lists with 200000 cats, 200000 owners, 200000 relations
    for (int i = 0; i < 200000; i++)
    {
        // Cat
        Cat CatX = new Cat();
        CatX.Name = "Cat " + i.ToString();
        CatX.DOB = DateTime.Now.AddDays(i / 10);
        // Owner
        Owner OwnerX = new Owner();
        OwnerX.Name = "Owner " + i.ToString();
        OwnerX.DOB = DateTime.Now.AddDays(-i / 10);
        // Relationship "table"
        CatOwner CatOwnerXX = new CatOwner();
        CatOwnerXX.Cat = CatX;
        // Relations
        CatOwnerXX.Owner = OwnerX;
        CatX.CatOwner = CatOwnerXX;
        OwnerX.CatOwner = CatOwnerXX;
        // add to lists
        allCats.Add(CatX);
        allOwners.Add(OwnerX);
        allCatOwners.Add(CatOwnerXX);
    }
    // now remove every owner from the owners list
    foreach (Cat CatToDelete in allCats)
    {
        Owner OwnerToRemove = CatToDelete.CatOwner.Owner;
        allOwners.Remove(OwnerToRemove);
    }
    // now all cats are free
    int numberOfCats = allCats.Count();
    int numberOfOwners = allOwners.Count();
    howLongWillItTake.Stop();
    long elapsedTime = howLongWillItTake.ElapsedMilliseconds;
}
<div class="row">
    <div class="col-md-12">
        <hr />
        <b>Results</b>
        <br />
        Cats: @numberOfCats
        <br />
        Owners: @numberOfOwners
        <br />
        Elapsed time in milliseconds: @elapsedTime
        <hr />
    </div>
</div>




SQL 2014 Hosting - ASPHostPortal :: How to Optimize Your SQL Query

clock October 6, 2015 08:59 by author Jervis

Modern software applications have millions of concurrent users. Developing an application that serves them efficiently requires a huge amount of effort and many tools and techniques. Software developers always try to improve application performance by improving design, coding, and database development. For database development, query optimization and evaluation techniques play a vital part.

Select only the required fields

It is very important to avoid selecting unnecessary data in a query. Select only the fields you need, not every field in the table.

SELECT login_id, password FROM tbluser

Index

Properly created indexes help optimize search results. You need a good understanding of the database before selecting a well-performing index, and choosing a heavily used field as the indexed column is very important.

CREATE CLUSTERED INDEX ind_login_id ON tbluser(login_id)

Primary Key

The Primary Key is the most important index of the table. The most important thing about a Primary Key is choosing a short and unique field, which leads to fast access to the data records.

CREATE TABLE tbluser(
  id INT NOT NULL,
  name VARCHAR(150),
  email VARCHAR(100),
  login_id VARCHAR(100),
  password VARCHAR(10),
  PRIMARY KEY (id)
)

Index unique column

Indexing a unique column improves searching and increases the efficiency of the database. You should have a good understanding of the data fields and how they are used before indexing a unique column; indexing a rarely used column does not help the efficiency of the database.

CREATE INDEX ind_email ON tbluser(email)  

Select limited records

No user interface can visualize thousands of records at once, so there is no point in selecting all the records in one go. Always limit the selection when you have a large number of records, and select only the data you need.

SELECT TOP 10 id, name, email, login_id, password FROM tbluser
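For paging beyond the first rows, SQL Server 2012 and later also support OFFSET/FETCH. A minimal sketch, assuming we page the results by id:

-- Sketch: skip the first 20 rows and return the next 10, ordered by id
SELECT id, name, email, login_id, password
FROM tbluser
ORDER BY id
OFFSET 20 ROWS FETCH NEXT 10 ROWS ONLY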

Selection of correct data type and length

Use the most appropriate data type and the correct length for your data. A bad choice of data type produces bulky databases and poor performance, while choosing correctly improves resource utilization on the database server.

CREATE TABLE tbluser(id INT,  
   name VARCHAR(150),  
   email VARCHAR(100),  
   login_id VARCHAR(100),  
   password VARCHAR(10)  
)  

Avoid IN sub-queries

Avoid the use of IN sub-queries in your applications where possible. An IN sub-query can force the engine to evaluate all the records of table A against table B (a product of records) before selecting the required data.

SELECT login_id, name, email FROM tbluser WHERE login_id IN (SELECT login_id FROM tbllogin_details)

A better way is to use an INNER JOIN, as in the following:

SELECT tbluser.login_id, name, email FROM tbluser INNER JOIN tbllogin_details ON tbluser.login_id = tbllogin_details.login_id
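Where a join would duplicate rows, a correlated EXISTS is another common rewrite; a minimal sketch:

SELECT u.login_id, u.name, u.email
FROM tbluser u
WHERE EXISTS (SELECT 1 FROM tbllogin_details d WHERE d.login_id = u.login_id)

Because EXISTS behaves as a semi-join, the engine can stop probing tbllogin_details for a given row as soon as the first match is found.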

Avoid NOT operator

Avoid the NOT operator when the number of qualifying records is lower than the number of non-qualifying records. Always prefer a positive operator such as LIKE or EXISTS over NOT LIKE or NOT EXISTS.

SELECT * FROM tbluser WHERE email NOT LIKE '%gmail%'  

The preferred way is:

SELECT * FROM tbluser WHERE email LIKE '%yahoo%'  



About ASPHostPortal.com

We’re a company that works differently from most. Value is what we produce and help our customers achieve, not how much money we put in the bank. It’s not because we are altruistic. It’s based on an even simpler principle: "Do good things, and good things will come to you".

Success for us is something that is continually experienced, not something that is reached. For us it is all about the experience – more than the journey. Life is a continual experience. We see the Internet as an incredible amplifier of the experience of life for all of us. It can help humanity come together to explode in knowledge exploration and discussion. It is the continual enlightenment of new ideas, experiences, and passions.